Analog Neural Networks as Decoders
Ruth Erlanson*
Dept. of Electrical Engineering
California Institute of Technology
Pasadena, CA 91125
Yaser Abu-Mostafa
Dept. of Electrical Engineering
California Institute of Technology
Pasadena, CA 91125
Abstract
Analog neural networks with feedback can be used to implement K-Winner-Take-All (KWTA) networks. In turn, KWTA networks can be used as decoders of a class of nonlinear error-correcting codes. By interconnecting such KWTA networks, we can construct decoders capable of decoding more powerful codes. We consider several families of interconnected KWTA networks, analyze their performance in terms of coding theory metrics, and consider the feasibility of embedding such networks in VLSI technologies.
1 INTRODUCTION: THE K-WINNER-TAKE-ALL NETWORK
We have previously demonstrated the use of a continuous Hopfield neural network as a K-Winner-Take-All (KWTA) network [Majani et al., 1989, Erlanson and Abu-Mostafa, 1988]. Given an input of N real numbers, such a network will converge to a vector of K positive one components and (N - K) negative one components, with the positive positions indicating the K largest input components. In addition, we have shown that the (N choose K) such vectors are the only stable states of the system.
One application of the KWTA network is the analog decoding of error-correcting
codes [Majani et al., 1989, Platt and Hopfield, 1986]. Here, a known set of vectors (the codewords) is transmitted over a noisy channel. At the receiver's end of the channel, the initial vector must be reconstructed from the noisy vector.
* currently at: Hughes Network Systems, 10790 Roselle St., San Diego, CA 92121
If we select our codewords to be the (N choose K) vectors with K positive one components and (N - K) negative one components, then the KWTA neural network will perform this decoding task. Furthermore, the network decodes from the noisy analog vector to a binary codeword (so no information is lost in quantization of the noisy vector). Also, we have shown [Majani et al., 1989] that the KWTA network will perform the optimal decoding, maximum likelihood decoding (MLD), if we assume noise where the probability of a large noise spike is less than the probability of a small noise spike (such as additive white Gaussian noise). For this type of noise, an MLD outputs the codeword closest to the noisy received vector. Hence, the most straightforward implementation of MLD would involve the comparison of the noisy vector to all the codewords. For large codes, this method is computationally impractical.
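To make the decoding rule concrete, here is a minimal sketch (Python; the function names and the toy sizes are ours, not the paper's). It contrasts the ideal KWTA decision (+1 at the K largest analog inputs, -1 elsewhere) with brute-force MLD over all (N choose K) constant-weight codewords; for this code the two decisions coincide, but the brute-force search grows exponentially with N.

```python
import itertools
import numpy as np

def kwta_decode(r, k):
    """Ideal KWTA decision: +1 at the k largest inputs, -1 elsewhere."""
    out = -np.ones_like(r)
    out[np.argsort(r)[-k:]] = 1.0
    return out

def mld_decode(r, k):
    """Brute-force MLD: nearest codeword in Euclidean distance among all
    (n choose k) words with exactly k components equal to +1."""
    n, best, best_d = len(r), None, np.inf
    for pos in itertools.combinations(range(n), k):
        c = -np.ones(n)
        c[list(pos)] = 1.0
        d = np.sum((r - c) ** 2)
        if d < best_d:
            best, best_d = c, d
    return best

rng = np.random.default_rng(0)
n, k = 8, 3
c_true = -np.ones(n)
c_true[:k] = 1.0
r = c_true + 0.5 * rng.standard_normal(n)  # noisy analog channel output
assert np.array_equal(kwta_decode(r, k), mld_decode(r, k))
```

The equivalence holds because minimizing ||r - c||^2 over constant-weight codewords c amounts to maximizing the inner product with r, which is achieved by placing the +1s at the k largest inputs.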
Two important parameters of any code are its rate and minimum distance. The
rate, or amount of information transmitted per bit sent over the channel, of this
code is good (asymptotically approaches 1). The minimum distance of a code is
the Hamming distance between the two closest codewords in the code. The minimum distance determines the error-correcting capabilities of a code. The minimum
distance of the KWTA code is 2.
In our previous work, we have found that the KWTA network performs optimal decoding of a nonlinear code. However, the small minimum distance of this code limited the system's usefulness.
2 INTERCONNECTED KWTA NETWORKS
In order to look for more useful code-decoder pairs, we have considered interconnected KWTA networks. We have found two interesting families of codes:
2.1 THE HYPERCUBE FAMILY
A decoder for this family of codes has m = n^i nodes. We label the nodes (x_1, x_2, ..., x_i) with x_j in {1, 2, ..., n}. KWTA constraints are placed on sets of n nodes which differ in only one index. For example, {1, 1, 1, ..., 1}, {2, 1, 1, ..., 1}, {3, 1, 1, ..., 1}, ..., {n, 1, 1, ..., 1} are the nodes in one KWTA constraint.
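As an illustration (a minimal sketch; the function name is ours), these constraint sets can be enumerated by fixing all coordinates except one:

```python
from itertools import product

def hypercube_constraints(n, i):
    """Each KWTA constraint is the set of n nodes (coordinate tuples in
    {1,...,n}^i) that agree on all coordinates except one free index."""
    constraints = []
    for free in range(i):                      # the varying coordinate
        for fixed in product(range(1, n + 1), repeat=i - 1):
            group = []
            for v in range(1, n + 1):
                node = fixed[:free] + (v,) + fixed[free:]
                group.append(node)
            constraints.append(group)
    return constraints

cons = hypercube_constraints(n=3, i=2)
# 2-dimensional case: constraints are the rows and columns of a 3x3 array
assert len(cons) == 2 * 3 and all(len(g) == 3 for g in cons)
```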
For a two-dimensional system (i = 2) the nodes can be laid out in an array where the KWTA constraints will be along the rows and columns of the array. For the code associated with the two-dimensional system, we find that
rate ≈ 1 - 3 log n / (2n).
The minimum distance of this code is 4. Experimental results show that the decoder
is nearly optimal.
In general, for an i-dimensional code, the minimum distance is 2i. The rate of these
codes can be bounded only very roughly.
We also consider implementing these decoders on an integrated circuit. Because of the high level of interconnectivity of these decoders and the simple processing required at each node (or neuron), we assume that the interconnections will dictate the chip's size. Using a standard model for VLSI area complexity, we determine that the circuit area scales as the square of the network size. Feature sizes of current mainstream technologies suggest that we could construct systems with 22^2 = 484 (2-dimensional), 6^3 = 216 (3-dimensional), and 5^4 = 625 (4-dimensional) nodes. Thus, nontrivial systems could be constructed with current VLSI technology.
2.2 NET-GENERATED CODES
This family uses combinatorial nets to specify the nodes in the KWTA constraints. A net on n^2 points consists of parallel classes: each class partitions the n^2 points into n disjoint lines each containing n points. Two lines from different classes intersect at exactly one point.

If we impose a KWTA constraint on the points on a line, a net can be used to generate a family of code-decoder pairs. If n is an integer power of a prime number, we can use a projective plane to generate a net with (n + 1) classes. For example, in Table 1 we have the projective plane of order 2 (n = 2). A projective plane has n^2 + n + 1 points and n^2 + n + 1 lines, where each line has n + 1 points and any 2 lines intersect in exactly one point.
Table 1: Projective Plane of Order 2. Points are numbered for clarity. [Incidence table omitted: the 7 points label the columns and the 7 lines the rows; each line contains n + 1 = 3 points, marked by 1s, and any two lines share exactly one point.]
We can generate a net of 3 (i.e., n + 1) classes in the following way: pick one line of the projective plane. Without loss of generality, we select the first line. Eliminate the points in that line from all the lines in the projective plane, as shown in Table 2. Renumber the remaining n^2 + n + 1 - (n + 1) = n^2 points. These are the points of the net. The first class of the net is composed of the reduced lines which previously contained the first point (old label 1) of the projective plane. In our example, this class contains two lines: L1 consists of points 2 and 3, and L2 consists of points 1 and 4. The remaining classes of the net are formed in a corresponding manner from the other points of the first line of the projective plane.
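The construction can be carried out mechanically. The sketch below uses one standard labeling of the Fano plane; the paper's own point numbering from Table 1 was lost, so the concrete lines here are an assumption, but the steps are exactly those described above.

```python
# One standard labeling of the projective plane of order 2 (Fano plane).
fano_lines = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7},
              {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

removed = fano_lines[0]                 # pick one line, WLOG the first
survivors = sorted(set(range(1, 8)) - removed)
relabel = {old: new for new, old in enumerate(survivors, start=1)}

# Group the reduced lines into classes by which removed point they contained:
# every other line meets the removed line in exactly one point.
classes = {p: [] for p in sorted(removed)}
for line in fano_lines[1:]:
    (p,) = line & removed
    classes[p].append({relabel[q] for q in line - removed})

for p, cls in classes.items():
    print(p, cls)   # each class partitions the n^2 = 4 net points into 2 lines
```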
If we use all (n + 1) classes to specify KWTA constraints, the nodes are overconstrained and the network has no stable states. We can obtain n different codes by using 1, 2, ..., up to n classes to specify constraints. (The code constructed with two classes is identical to the two-dimensional code in Section 2.1!) Experimentally, we have found that these decoders perform near-optimal decoding on their corresponding code. A code constructed with i nets has a minimum distance of at least 2i. Thus, a code of size n^2 (i.e., the codewords contain n^2 bits) can be constructed
with minimum distance up to 2n. The rate of these codes in general can be bounded
only roughly.
We found that we could embed the decoder with a nets in an integrated circuit with width proportional to an^3, or area proportional to the cube of the number of processors. In a typical VLSI process, one could implement systems with 484 (a = 2, n = 22), 81 (a = 3, n = 9) or 64 (a = 4, n = 8) nodes.
3 SUMMARY
We have simulated and analyzed analog neural networks which perform near-optimal decoding of certain families of nonlinear codes. Furthermore, we have shown that nontrivial implementations could be constructed. This work is discussed in more detail in [Erlanson, 1991].
References
E. Majani, R. Erlanson and Y.S. Abu-Mostafa, "On the K-Winners-Take-All Feedback Network," Advances in Neural Information Processing Systems, D. Touretzky
(ed.), Vol. 1, pp. 634-642, 1989.
R. Erlanson and Y.S. Abu-Mostafa, "Using an Analog Neural Network for Decoding," Proceedings of the 1988 Connectionist Models Summer School, D. Touretzky,
G. Hinton, T. Sejnowski (eds.), pp. 186-190, 1988.
J.C. Platt and J.J. Hopfield, "Analog decoding using neural networks," AIP Conference Proceedings #151, Neural Networks for Computing, J. Denker (ed.), pp. 364-369, 1986.
R. Erlanson, "Soft-Decision Decoding of a Family of Nonlinear Codes Using a Neural Network," Ph.D. Thesis, California Institute of Technology, 1991.
Table 2: Constructing a Net from a Projective Plane. [Table omitted: it shows the projective plane's seven points with the points of the first line struck out, the remaining n^2 = 4 points renumbered as the net's points, and the reduced lines grouped into classes (e.g., L1 and L2 of the first class).]
Multitask Learning without Label Correspondences
Novi Quadrianto1, Alex Smola2, Tibério Caetano1, S.V.N. Vishwanathan3, James Petterson1
1 SML-NICTA & RSISE-ANU, Canberra, ACT, Australia
2 Yahoo! Research, Santa Clara, CA, USA
3 Purdue University, West Lafayette, IN, USA
Abstract
We propose an algorithm to perform multitask learning where each task has potentially distinct label sets and label correspondences are not readily available. This is
in contrast with existing methods which either assume that the label sets shared by
different tasks are the same or that there exists a label mapping oracle. Our method
directly maximizes the mutual information among the labels, and we show that the
resulting objective function can be efficiently optimized using existing algorithms.
Our proposed approach has a direct application for data integration with different
label spaces, such as integrating Yahoo! and DMOZ web directories.
1 Introduction
In machine learning it is widely known that if several tasks are related, then learning them simultaneously can improve performance [1-4]. For instance, a personalized spam classifier trained with data from several different users is likely to be more accurate than one that is trained with data from a single user. If one views learning as the task of inferring a function f from the input space X to the output space Y, then multitask learning is the problem of inferring several functions fi : Xi → Yi simultaneously. Traditionally, one either assumes that the sets of labels Yi for all the tasks are the same (that is, Yi = Y for all i), or that we have access to an oracle mapping function gi,j : Yi → Yj. However, as we argue below, in many natural settings these assumptions are not satisfied.
Our motivating example is the problem of learning to automatically categorize objects on the web
into an ontology or directory. It is well established that many web-related objects such as web directories and RSS directories admit a (hierarchical) categorization, and web directories aim to do this
in a semi-automated fashion. For instance, it is desirable, when building a categorizer for the Yahoo! directory1, to take into account other web directories such as DMOZ2. Although the tasks are clearly related, their label sets are not identical. For instance, some section headings and sub-headings may be named differently in the two directories. Furthermore, different editors may have made different decisions about the ontology depth and structure, leading to incompatibilities. To make matters
worse, these ontologies evolve with time and certain topic labels may die naturally due to lack of
interest or expertise while other new topic labels may be added to the directory. Given the large label
space, it is unrealistic to expect that a label mapping function is readily available. However, the two
tasks are clearly related and learning them simultaneously is likely to improve performance.
This paper presents a method to learn classifiers from a collection of related tasks or data sets, in
which each task has its own label dictionary, without constructing an explicit label mapping among
them. We formulate the problem as that of maximizing mutual information among the labels sets.
We then show that this maximization problem yields an objective function which can be written
as a difference of concave functions. By exploiting convex duality [5], we can solve the resulting
optimization problem efficiently in the dual space using existing DC programming algorithms [6].
1 http://dir.yahoo.com/
2 http://www.dmoz.org/
Related Work As described earlier, our work is closely related to the research efforts on multitask
learning, where the problem of simultaneously learning multiple related tasks is addressed. Several
papers have empirically and theoretically highlighted the benefits of multitask learning over single-task learning when the tasks are related. There are several approaches to define task relatedness.
The works of [2, 7, 8] consider the setting when the tasks to be learned jointly share a common
subset of features. This can be achieved by adding a mixed-norm regularization term that favors a
common sparsity profile in features shared by all tasks. Task relatedness can also be modeled as
learning functions that are close to each other in some sense [3, 9]. Crammer et al. [10] consider the
setting where, in addition to multiple sources of data, estimates of the dissimilarities between these
sources are also available. There is also work on data integration via multitask learning where each
data source has the same binary label space, whereas the attributes of the inputs can admit different
orderings as well as be linearly transformed [11].
The remainder of the paper is organized as follows. We briefly develop background on the maximum
entropy estimation problem and its dual in Section 2. We introduce in Section 3 the novel multitask formulation in terms of a mutual information maximization criterion. Section 4 presents the
algorithm to solve the optimization problem posed by the multitask problem. We then present the
experimental results, including applications on news articles and web directories data integration, in
Section 5. Finally, in Section 6 we conclude the paper.
2 Maximum Entropy Duality for Conditional Distributions
Here we briefly summarize the well known duality relation between approximate conditional maximum entropy estimation and maximum a posteriori estimation (MAP) [5, 12].
We will exploit this in Section 4. Recall the definition of the Shannon entropy, H(y|x) := -Σ_y p(y|x) log p(y|x), where p(y|x) is a conditional distribution on the space of labels Y. Let x ∈ X and assume the existence of φ(x, y) : X × Y → H, a feature map into a Hilbert space H. Given a data set (X, Y) := {(x1, y1), ..., (xm, ym)}, where X := {x1, ..., xm}, define

E_{y∼p(y|X)}[φ(X, y)] := (1/m) Σ_{i=1}^m E_{y∼p(y|xi)}[φ(xi, y)], and φ̄ = (1/m) Σ_{i=1}^m φ(xi, yi).    (1)

Lemma 1 ([5], Lemma 6) With the above notation we have

min_{p(y|x)} Σ_{i=1}^m -H(y|xi)  s.t.  ‖E_{y∼p(y|X)}[φ(X, y)] - φ̄‖_H ≤ ε  and  Σ_{y∈Y} p(y|xi) = 1    (2a)

= max_θ ⟨θ, φ̄⟩_H - Σ_{i=1}^m log Σ_y exp(⟨θ, φ(xi, y)⟩) - ε‖θ‖_H.    (2b)

Although we presented a version of the above theorem using Hilbert spaces, it can also be extended to Banach spaces. Choosing different Banach space norms recovers well-known algorithms such as ℓ1- or ℓ2-regularized logistic regression. Also note that by enforcing the moment matching constraint exactly, that is, setting ε = 0, we recover the well-known duality between maximum (Shannon) entropy and maximum likelihood (ML) estimation.
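As a sanity check of (2b), the following is a minimal sketch with the standard multiclass feature map, not the paper's implementation: the dual is a logistic log-loss with the empirical moment term and a norm penalty.

```python
import numpy as np
from scipy.special import logsumexp

def neg_dual_objective(theta, X, y, n_classes, eps):
    """Negative of (2b), to be minimized. We take phi(x, c) to stack x into
    class c's slot (one weight vector per class), so <theta, phi(x, c)> = W[c] . x."""
    W = theta.reshape(n_classes, X.shape[1])
    phi_bar = np.zeros_like(W)               # empirical mean feature, eq. (1)
    for xi, yi in zip(X, y):
        phi_bar[yi] += xi / len(X)
    obj = np.sum(W * phi_bar) \
        - logsumexp(X @ W.T, axis=1).sum() \
        - eps * np.linalg.norm(W)
    return -obj
```

Minimizing this with, e.g., scipy.optimize.minimize yields a regularized logistic-regression-style estimator; note that the penalty enters through the norm itself rather than its square, mirroring (2b).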
3 Multitask Learning via Mutual Information
For the purpose of explaining our basic idea, we focus on the case when we want to integrate two data sources such as the Yahoo! directory and DMOZ. Associated with each data source are labels Y = {y1, ..., yc} ⊆ 𝒴 and observations X = {x1, ..., xm} ⊆ 𝒳 (resp. Y' = {y'1, ..., y'c'} ⊆ 𝒴' and X' = {x'1, ..., x'm'} ⊆ 𝒳'). The observations are disjoint, but we assume that they are drawn from the same domain, i.e., 𝒳 = 𝒳' (in our running example they are webpages).

If we are interested in solving each of the categorization tasks independently, the maximum entropy estimator described in Section 2 can be readily employed [13]. Here we would like to learn the
two tasks simultaneously in order to improve classification accuracy. Assuming that the labels are different yet correlated, we should assume that the joint distribution p(y, y') displays high mutual information between y and y'. Recall that the mutual information between random variables y and y' is defined as I(y, y') = H(y) + H(y') - H(y, y'), and that this quantity is high when the two variables are mutually dependent. To illustrate this, consider our running example of integrating the Yahoo! and DMOZ web directories: we would expect a high mutual dependency between the section heading "Computer & Internet" at the Yahoo! directory and "Computers" at the DMOZ directory, although they are named slightly differently. Since the marginal distributions over the labels, p(y) and p(y'), are fixed, maximizing mutual information can then be viewed as minimizing the joint entropy

H(y, y') = -Σ_{y,y'} p(y, y') log p(y, y').    (3)
This reasoning leads us to adding the joint entropy as an additional term in the objective function of the multitask problem. If we define

φ̄ = (1/m) Σ_{i=1}^m φ(xi, yi)  and  φ̄' = (1/m') Σ_{i=1}^{m'} φ(x'i, y'i),    (4)

then we have the following objective function
maximize_{p(y|x)} Σ_{i=1}^m H(y|xi) + Σ_{i=1}^{m'} H(y'|x'i) - λ H(y, y')  for some λ > 0    (5a)

s.t.  ‖E_{y∼p(y|X)}[φ(X, y)] - φ̄‖ ≤ ε  and  Σ_{y∈Y} p(y|xi) = 1    (5b)

      ‖E_{y'∼p(y'|X')}[φ'(X', y')] - φ̄'‖ ≤ ε'  and  Σ_{y'∈Y'} p(y'|x'i) = 1.    (5c)
Intuitively, the above objective function tries to find a "simple" distribution p which is consistent with the observed samples via moment matching constraints while also taking into account task relatedness. We can recover the single task maximum entropy estimator by removing the joint entropy term (by setting λ = 0), since the optimization problem (the objective function as well as the constraints) in (5) then decouples in terms of p(y|x) and p(y'|x'). There are two main challenges in solving (5):

- The joint entropy term H(y, y') is concave, hence the above objective is not concave in general (it is the difference of two concave functions). We therefore propose to solve this non-concave problem using DC programming [6], in particular the concave convex procedure (CCCP) [14, 15].

- The joint distribution between labels p(y, y') is unknown. We will estimate this quantity (and therefore the joint entropy) from the observations x and x'. Further, we assume that y and y' are conditionally independent given an arbitrary input x ∈ X, that is, p(y, y'|x) = p(y|x)p(y'|x). For instance, in our example, annotations made by an editor at Yahoo! and an editor at DMOZ on a set of webpages are assumed conditionally independent given the set of webpages. This assumption essentially means that the labeling process depends entirely on the set of webpages, i.e., any other latent factors that might connect the two editors are ignored. A minimal sketch of this empirical estimate is given below; it is written out formally in (7).

In the following section we discuss in further detail how to address these two challenges, as well as the resulting optimization problem, which can be solved efficiently by existing convex solvers.
4 Optimization
The concave convex procedure (CCCP) works as follows: for a given function f(x) = g(x) - h(x), where g is concave and -h is convex, a lower bound can be found by

f(x) ≥ g(x) - h(x0) - ⟨∂h(x0), x - x0⟩.    (6)

This lower bound is concave and can be maximized effectively over a convex domain. Subsequently one finds a new location x0 and the entire procedure is repeated. This procedure is guaranteed to converge to a local optimum or saddle point [16].
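A generic sketch of this iteration (not the paper's solver; here h is concave, matching the convention above, and the inner maximization is handed to a black-box optimizer):

```python
import numpy as np
from scipy.optimize import minimize

def cccp_maximize(g, h_grad, x0, n_iter=20):
    """Maximize f(x) = g(x) - h(x) with g, h concave (-h convex): each step
    maximizes the concave surrogate g(x) - <grad h(x_t), x - x_t>, i.e.,
    the lower bound (6) with the constant h(x_t) dropped."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        gh = h_grad(x)
        neg_surrogate = lambda z, gh=gh, xt=x: -(g(z) - gh @ (z - xt))
        x = minimize(neg_surrogate, x).x
        # a convergence check on successive iterates could stop early here
    return x
```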
Therefore, one potential approach to solve the optimization problem in (5) is to use successive linear lower bounds on H(y, y') and to solve the resulting decoupled problems in p(y|x) and p(y'|x') separately. We estimate the joint entropy term H(y, y') by its empirical quantity on x and x' with the conditional independence assumption (in the sequel, we make the dependency of p(y|x) on a parameter θ explicit, and similarly for the dependency of p(y'|x') on θ'), that is

H(y, y'|X) = -Σ_{y,y'} [ (1/m) Σ_{i=1}^m p(y|xi, θ) p(y'|xi, θ') ] log [ (1/m) Σ_{j=1}^m p(y|xj, θ) p(y'|xj, θ') ],    (7)

and similarly for H(y, y'|X'). Each iteration of CCCP approximates the convex part (negative joint entropy) by its tangent, that is, ⟨∂h(x0), x⟩ in (6). Therefore, taking derivatives of the joint entropy with respect to p(y|xi) and evaluating at the parameters at iteration t - 1, denoted as θ_{t-1} and θ'_{t-1}, yields

g_y(xi) := -∂_{p(y|xi)} H(y, y'|X)    (8)
         = (1/m) Σ_{y'} [ 1 + log( (1/m) Σ_{j=1}^m p(y|xj, θ_{t-1}) p(y'|xj, θ'_{t-1}) ) ] p(y'|xi, θ'_{t-1}).    (9)

Define similarly g_y(x'i), g_{y'}(xi), and g_{y'}(x'i) for the derivatives with respect to p(y|x'i), p(y'|xi) and p(y'|x'i), respectively. This leads, by optimizing the lower bound in (6), to the following decoupled optimization problem in p(y|xi) and an analogous problem in p(y'|x'i):
"
# m0 "
#
m
X
X
X
X
0
0
0
0
?H(y|xi ) + ?
min
gy (xi )p(y|xi ) +
?H(y|xi ) + ?
gy (xi )p(y|xi ) (10a)
p(y|x)
y
i=1
y
i=1
subject to
Ey?p(y|X) [?(X, y)] ? ?
? .
(10b)
The above objective function is still in the form of maximum entropy estimation, with the linearization of the joint entropy quantities acting like additional evidence terms. Furthermore, we also impose an additional maximum entropy requirement on the "off-set" observations p(y|x'i), since after all we also want the "simplicity" requirement of the distribution p on the inputs x'i. We can of course weigh the requirement on "off-set" observations differently.

While we succeeded in reducing the non-concave objective function in (5) to a decoupled concave objective function in (10), it might be desirable to solve the problem in the dual space due to the difficulty of handling the constraint in (10b). The following lemma shows the dual of the objective function in (10). The proof is given in the supplementary material.
Lemma 2 The corresponding Fenchel dual of (10) is

min_θ Σ_{i=1}^m log Σ_y exp(⟨θ, φ(xi, y)⟩ - λ g_y(xi)) + Σ_{i=1}^{m'} log Σ_y exp(⟨θ, φ(x'i, y)⟩ - λ' g_y(x'i)) - (1/m) Σ_{i=1}^m ⟨θ, φ(xi, yi)⟩ + ε‖θ‖_{ℓ2}.    (11)
The above dual problem still has the form of logistic regression, with the additional evidence terms from task relatedness appearing in the log-partition function. Several existing convex solvers can be used to solve the optimization problem in (11) efficiently. Refer to Algorithm 1 for pseudocode of our proposed method.
Initialization For each iteration of CCCP, the linearization of the joint entropy function requires the values of θ and θ' at the previous iteration (refer to (9)). At the beginning, we can start the algorithm with a uniform prior, i.e., set p(y) = 1/|Y| and p(y') = 1/|Y'|.
Algorithm 1 Multitask Mutual Information
Input: Datasets (X, Y) and (X', Y') with Y ≠ Y', number of iterations N
Output: θ, θ'
Initialize p(y) = 1/|Y| and p(y') = 1/|Y'|
for t = 1 to N do
  Solve the dual problem in (11) w.r.t. p(y|x, θ) and obtain θ_t
  Solve the dual problem in (11) w.r.t. p(y'|x', θ') and obtain θ'_t
end for
return θ ← θ_N, θ' ← θ'_N
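For concreteness, here is a minimal NumPy/SciPy sketch of Algorithm 1 with linear models. Everything here (the function names, the plain black-box minimize call, taking λ = λ' and a single ε for both tasks) is our own simplification, not the authors' implementation; it only illustrates how the g-terms of (9) feed the dual (11) inside the CCCP loop.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

def g_terms(P, Q):
    """g-terms of eq. (9). P[i, y] = p(y | x_i) under the task's own model,
    Q[i, y'] = p(y' | x_i) under the other task's model, on the same inputs."""
    m = P.shape[0]
    J = np.clip(P.T @ Q / m, 1e-12, None)        # estimated joint p(y, y')
    return Q @ (1.0 + np.log(J)).T / m           # m x |Y| array of g_y(x_i)

def fit(X, y, Xo, C, G, Go, lam, eps, theta0):
    """One inner solve of the dual (11) for one task. Xo holds the other
    task's ('off-set') inputs; G and Go are the matching g-terms."""
    d = X.shape[1]
    phi_bar = np.zeros((C, d))
    for xi, yi in zip(X, y):
        phi_bar[yi] += xi / len(X)               # empirical mean feature

    def obj(theta):
        W = theta.reshape(C, d)
        on = logsumexp(X @ W.T - lam * G, axis=1).sum()
        off = logsumexp(Xo @ W.T - lam * Go, axis=1).sum()
        return on + off - np.sum(W * phi_bar) + eps * np.linalg.norm(W)

    return minimize(obj, theta0).x               # gradients left to the solver

def algorithm1(X, y, Xp, yp, C, Cp, lam=1.0, eps=0.1, n_iter=5):
    d = X.shape[1]
    theta, thetap = np.zeros(C * d), np.zeros(Cp * d)   # uniform p(y), p(y')
    for _ in range(n_iter):
        P = softmax(X @ theta.reshape(C, d).T, axis=1)      # p(y | x_i)
        Pp = softmax(X @ thetap.reshape(Cp, d).T, axis=1)   # p(y' | x_i)
        Q = softmax(Xp @ theta.reshape(C, d).T, axis=1)     # p(y | x'_i)
        Qp = softmax(Xp @ thetap.reshape(Cp, d).T, axis=1)  # p(y' | x'_i)
        theta = fit(X, y, Xp, C, g_terms(P, Pp), g_terms(Q, Qp),
                    lam, eps, theta)
        thetap = fit(Xp, yp, X, Cp, g_terms(Pp, P), g_terms(Qp, Q),
                     lam, eps, thetap)
    return theta, thetap
```

A practical implementation would supply analytic gradients and a kernelized feature map; the warm start from the previous iterate mirrors the CCCP linearization at θ_{t-1}, θ'_{t-1}.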
5 Experiments
To assess the performance of our proposed multitask algorithm, we perform binary n-task (n ∈ {3, 5, 7, 10}) experiments on the MNIST digit dataset and a multiclass 2-task experiment on the Reuters1-v2 dataset, plus an application integrating the Yahoo! and DMOZ web directories. We detail these experiments in turn in the following sections.
5.1 MNIST
Datasets The MNIST data set3 consists of 28 × 28-pixel images of hand-written digits from 0 through 9. We use a small sample of the available training set to simulate the situation when we only have a limited number of labeled examples, and test the performance on the entire available test set. In this experiment, we look at a binary n-task (n ∈ {3, 5, 7, 10}) problem. We consider digits {8, 9, 0}, {6, 7, 8, 9, 0}, {4, 5, 6, 7, 8, 9, 0} and {1, 2, 3, 4, 5, 6, 7, 8, 9, 0} for the 3-task, 5-task, 7-task and 10-task problems, respectively. To simulate the situation in which each task has a distinct label dictionary, we consider the following setting: in the 3-task problem, the first task has binary labels {+1, -1}, where label +1 means digit 8 and label -1 means digits 9 and 0; in the second task, label +1 means digit 9 and label -1 means digits 8 and 0; lastly, in the third task, label +1 means digit 0 and label -1 means digits 8 and 9. A similar one-against-rest grouping is also used for the 5-task, 7-task and 10-task problems. Each of the tasks has its own input x (a sketch of this task construction follows below).
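The relabeling itself is mechanical; a small sketch (the even split of the pool across tasks is our assumption, since the text only says each task has its own input x):

```python
import numpy as np

def one_vs_rest_tasks(X, digits, task_classes, rng):
    """Give each task its own inputs, labeled +1 for the task's digit and
    -1 for the remaining digits of the problem (the paper's grouping)."""
    idx = rng.permutation(len(digits))
    parts = np.array_split(idx, len(task_classes))
    return [(X[p], np.where(digits[p] == c, 1, -1))
            for p, c in zip(parts, task_classes)]

# 3-task problem on digits {8, 9, 0}: tasks "8 vs rest", "9 vs rest", "0 vs rest"
# tasks = one_vs_rest_tasks(X, digits, [8, 9, 0], np.random.default_rng(0))
```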
Algorithms We couldn't find in the multitask learning literature methods addressing the same problem as the one we study: learning multiple tasks when there is no correspondence between the output spaces. Therefore we compared the performance of our multitask method against the baseline given by the maximum entropy estimator applied to each of the tasks independently. Note that we focus on the setting in which data sources have disjoint sets of covariate observations (vide Section 3), and thus a simple strategy of multilabel prediction with the union of label sets corresponds to our baseline. For both our method and the baseline, we use a Gaussian kernel to define the implicit feature map on the inputs. The width of the kernel was set to the median distance between pairs of observations, as suggested in [17]. The regularization parameter was tuned for the single task estimator and the same value was used for the multitask estimator. The weight on the joint entropy term was set equal to 1.
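The median heuristic mentioned above is, concretely (a sketch; [17] motivates the choice):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def median_bandwidth(X):
    """Median heuristic: kernel width = median pairwise distance."""
    return np.median(pdist(X))

def gaussian_gram(X, sigma):
    """Gram matrix of k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    return np.exp(-squareform(pdist(X, "sqeuclidean")) / (2.0 * sigma**2))
```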
Pairwise Label Correlation Section 3 describes the multitask objective function for the case of the 2-task problem. For the case when the number of tasks to be learned jointly is greater than 2, we experiment in two different ways. In one approach we define the joint entropy term on the full joint distribution; that is, when we want to learn 3 different tasks jointly, having labels y, y' and y'', we can define the joint entropy as H(y, y', y'') = -Σ_{y,y',y''} p(y, y', y'') log p(y, y', y''). As a more computationally efficient alternative, we can consider the joint entropy on the pairwise distributions instead. We found that the performance of our method is quite similar in the two cases and we report results only on the pairwise case.
Results The experiments are repeated 10 times and the results are summarized in Table 1. We find that, on average, jointly learning the multiple related tasks always improves the classification accuracy.

3 http://yann.lecun.com/exdb/mnist
Table 1: Performance assessment, Accuracy ± STD. m (m') denotes the number of training data points (number of test points). STL: single task learning; MTL: multi-task learning; Upper Bound: multi-class learning. Boldface indicates a significant difference between STL and MTL (one-sided paired Welch t-test with 99.95% confidence level).

Tasks  | m (m')      | STL         | MTL         | Upper Bound
8 \ -8 | 15 (2963)   | 77.39±5.23  | 80.03±4.83  | 93.42±0.87
9 \ -9 | 15 (2963)   | 91.12±5.94  | 91.96±5.42  | 95.99±0.75
0 \ -0 | 120 (2963)  | 98.66±0.67  | 98.21±0.92  | 98.79±0.25
Average|             | 89.06       | 90.07       | 96.07

6 \ -6 | 25 (4949)   | 81.79±10.18 | 83.86±9.51  | 96.37±1.06
7 \ -7 | 25 (4949)   | 70.73±16.58 | 72.84±15.77 | 91.99±2.23
8 \ -8 | 25 (4949)   | 62.52±10.15 | 66.77±9.43  | 92.05±1.76
9 \ -9 | 25 (4949)   | 63.80±13.70 | 67.26±12.65 | 92.53±1.65
0 \ -0 | 150 (4949)  | 97.35±1.33  | 96.60±1.64  | 97.59±0.62
Average|             | 75.84       | 77.47       | 94.10

4 \ -4 | 70 (6823)   | 71.69±6.83  | 73.49±6.77  | 91.20±1.55
5 \ -5 | 70 (6823)   | 67.55±4.70  | 70.10±4.61  | 89.30±0.34
6 \ -6 | 70 (6823)   | 86.31±2.93  | 87.21±2.77  | 94.03±0.95
7 \ -7 | 70 (6823)   | 83.34±3.54  | 84.02±3.69  | 91.94±0.90
8 \ -8 | 70 (6823)   | 75.61±6.00  | 76.97±5.12  | 87.46±1.69
9 \ -9 | 70 (6823)   | 63.69±11.42 | 65.74±10.15 | 86.89±1.79
0 \ -0 | 210 (6823)  | 97.20±1.49  | 96.56±1.67  | 97.24±0.73
Average|             | 77.91       | 79.16       | 91.15

1 \ -1 | 100 (10000) | 96.59±2.11  | 96.80±1.91  | 96.89±0.59
2 \ -2 | 100 (10000) | 67.77±3.49  | 69.95±2.68  | 88.74±1.94
3 \ -3 | 100 (10000) | 72.59±5.90  | 74.18±5.54  | 87.59±2.95
4 \ -4 | 100 (10000) | 69.91±5.82  | 71.76±5.47  | 92.87±0.94
5 \ -5 | 100 (10000) | 53.78±2.78  | 57.26±2.72  | 85.71±1.38
6 \ -6 | 100 (10000) | 79.22±5.21  | 80.54±4.53  | 92.93±0.98
7 \ -7 | 100 (10000) | 76.57±10.2  | 77.18±9.43  | 89.83±1.24
8 \ -8 | 100 (10000) | 63.57±2.65  | 65.85±2.50  | 83.51±0.63
9 \ -9 | 100 (10000) | 63.28±6.69  | 65.38±6.09  | 84.94±1.45
0 \ -0 | 300 (10000) | 98.43±0.84  | 97.81±1.01  | 98.49±0.40
Average|             | 74.17       | 75.67       | 90.82
When assessing the performance on each of the tasks, we notice that the advantage of learning jointly is particularly significant for those tasks with a smaller number of observations.
5.2 Ontology
News Ontologies In this experiment, we consider multiclass learning in a 2-task problem. We use the Reuters1-v2 news article dataset [18], which has been pre-processed4. In the pre-processing stage, the label hierarchy is reorganized by mapping the data set to the second level of the topic hierarchy. Documents that only have labels of the third or fourth levels are mapped to their parent category at the second level. Documents that only have labels of the first level are not mapped onto any category. Lastly, any multi-labelled instances are removed. The second level hierarchy consists of 53 categories and we perform experiments on the top 10 categories. TF-IDF features are used, and the dictionary size (feature dimension) is 47236. For this experiment, we use 12500 news articles to form one set of data and another 12500 news articles to form the second set. In the first set, we group the news articles having labels {1, 2}, {3, 4}, {5, 6}, {7, 8} and {9, 10} and re-label them as {1, 2, 3, 4, 5}. The second set of data also has 5 labels, but this time the labels are

4 http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html
Table 2: Yahoo! Top Level Categorization Results. STL: single task learning accuracy; MTL: multi-task learning accuracy; % Imp.: relative performance improvement. The highest relative improvement at Yahoo! is for the topic "Computer & Internet", i.e., there is an increase in accuracy from 48.12% to 52.57%. Interestingly, DMOZ has a similar topic, called "Computers", which achieves an accuracy of 75.72%.

Topic               | MTL/STL (% Imp.)    | Topic             | MTL/STL (% Imp.)
Arts                | 56.27/55.11 (2.10)  | News & Media      | 15.23/14.83 (1.03)
Business & Economy  | 66.52/66.88 (-0.53) | Recreation        | 68.81/67.00 (2.70)
Computer & Internet | 52.57/48.12 (9.25)  | Reference         | 26.65/24.81 (7.42)
Education           | 62.48/63.02 (-0.85) | Regional          | 62.85/61.86 (1.60)
Entertainment       | 63.30/61.37 (3.14)  | Science           | 78.58/79.75 (-1.46)
Government          | 24.44/22.88 (6.82)  | Social Science    | 31.55/30.68 (2.84)
Health              | 85.42/85.27 (1.76)  | Society & Culture | 49.51/49.05 (0.94)
Table 3: DMOZ Top Level Categorization Results. STL: single task learning accuracy; MTL: multi-task learning accuracy; % Imp.: relative performance improvement. The improvement of multitask over single task learning on each topic is negligible for the DMOZ web directories. Arguably, this can be partly explained by DMOZ having a higher average topic categorization accuracy than Yahoo!, so there might be more knowledge to be shared from DMOZ to Yahoo! than vice versa.

Topic      | MTL/STL (% Imp.)    | Topic     | MTL/STL (% Imp.)
Arts       | 57.52/57.84 (-0.5)  | Reference | 67.42/67.42 (0)
Business   | 54.02/53.05 (1.83)  | Regional  | 28.59/28.56 (0.10)
Computers  | 75.08/75.72 (-0.8)  | Science   | 42.67/42.09 (1.38)
Games      | 78.58/78.58 (0)     | Shopping  | 75.20/74.62 (0.54)
Health     | 82.34/82.55 (-0.14) | Society   | 57.68/58.20 (-0.89)
Home       | 67.47/67.47 (0)     | Sports    | 83.49/83.53 (-0.05)
News       | 61.70/62.01 (-0.49) | World     | 87.80/87.57 (0.26)
Recreation | 58.04/58.25 (-0.36) |           |
generated by the groupings {1, 6}, {2, 7}, {3, 8}, {4, 9} and {5, 10}. We split the news articles on each set equally to form training and test sets. We run a maximum entropy estimator independently, p(y|x, θ) and p(y'|x', θ'), on the two sets, achieving an accuracy of 92.59% for the first set and 91.53% for the second set. We then learn the two sets of news articles jointly: on the first test set we achieve an accuracy of 93.81%, and on the second test set an accuracy of 93.31%. This experiment further emphasizes that it is possible to learn several related tasks simultaneously even though they have different label sets, and that it is beneficial to do so.
Web Ontologies We also perform an experiment on the integration of the Yahoo! and DMOZ web directories. We consider the top level of Yahoo!'s topic tree and sample web links listed in the directory. Similarly, we consider the top level of the DMOZ topic tree and retrieve sampled web links. We consider the content of the first page of each web link as our input data. It is possible that the first page being linked from the web directory contains mostly images (for the purpose of attracting visitors), thus we only consider those webpages that have enough text to be a valid input. This gives us 19186 webpages for Yahoo! and 35270 for DMOZ. For the sake of getting enough text associated with each link, we could actually crawl many more pages associated with the link. However, we find it quite damaging to do so, because as we crawl deeper the topic of the text changes rapidly. We use the standard bag-of-words representation with TF-IDF weighting as our features. The dictionary size (feature dimension) is 27075. We then use 2000 web pages from Yahoo! and 2000 pages from DMOZ as training sets and the remainder as test sets. Tables 2 and 3 summarize the experimental results.
From the experimental results on web directories integration, we observe the following:

- Similarly to the experiments on MNIST digits and Reuters1-v2 news articles, multitask learning always helps on average, i.e., the average relative improvements are positive for both the Yahoo! and DMOZ web directories;

- The improvement of multitask over single task learning on each topic is more prominent for the Yahoo! web directories and is negligible for the DMOZ web directories (2.62% and 0.07%, respectively). Arguably, this can be partly explained by Yahoo! having a lower average topic categorization accuracy than DMOZ (cf. 60.22% and 64.68%, respectively). It seems that there is much more knowledge to be shared from DMOZ to Yahoo! in the hope of increasing the latter's classification accuracies;

- Looking closely at the accuracy of each topic, the highest relative improvement at Yahoo! is for the topic "Computer & Internet", i.e., there is an increase in accuracy from 48.12% to 52.57%. Interestingly, DMOZ has a similar topic, called "Computers", which achieves an accuracy of 75.72%. The improvement might be partly because our proposed method is able to discover the implicit label correlations despite the two topics being named differently;

- Regarding the worst classified categories, we have "News & Media" for Yahoo! and "Regional" for DMOZ. This is intuitive since those two topics can indeed cover a wide range of subjects. The easiest categories to classify are "Health" for Yahoo! and "World" for DMOZ. Again, this is quite intuitive, as the world of health mostly uses specific jargon and the "World" topic has much language-specific webpage content.
6 Discussion and Conclusion
We presented a method to learn classifiers from a collection of related tasks or data sets, in which each task has its own label set. Our method works without the need for an explicit mapping between the label spaces of the different tasks. We formulate the problem as one of maximizing the mutual information among the label sets. Our experiments on binary n-task (n ∈ {3, 5, 7, 10}) and multiclass 2-task problems revealed that, on average, jointly learning the multiple related tasks, albeit with different label sets, always improves the classification accuracy. We also provided experiments on a prototypical application of our method: classifying in the Yahoo! and DMOZ web directories. Here we deliberately used small amounts of data, a common situation in commercial tagging and classification. The classification accuracy of Yahoo! increased significantly. Given that the DMOZ classification was already 4.5% better prior to the application of our method, this shows the method was able to transfer classification accuracy from the DMOZ task to the Yahoo! task. Furthermore, the experiments suggest that our proposed method is able to discover implicit label correlations despite the lack of label correspondences.

Although the experiments on web directories integration are encouraging, we have clearly only touched the surface of the possibilities to be explored. While we focused on categorization at the top level of the topic tree, it might be beneficial (and further highlight the usefulness of multitask learning, as observed in [2-4, 9]) to consider categorization at deeper levels (take for example the second level of the tree), where we have much fewer observations for each category. In the extreme case, we might consider the labels as corresponding to a directed acyclic graph (DAG) and encode the feature map associated with the label hierarchy accordingly. One instance, as considered in [19], is to use a feature map φ(y) ∈ R^k for the k nodes in the DAG (excluding the root node) and to associate with every label y the vector describing the path from the root node to y, ignoring the root node itself.
Furthermore, the application of data integration admitting a hierarchical categorization goes beyond web-related objects. With our method, it is also now possible to learn classifiers from a collection of related gene-ontology graphs [20] or patent hierarchies [19].
Acknowledgments NICTA is funded by the Australian Government as represented by the Department of Broadband, Communications and the Digital Economy and the Australian Research Council
through the ICT Centre of Excellence program. N. Quadrianto is partly supported by Microsoft Research Asia Fellowship.
References
[1] R. Caruana. Multitask learning. Machine Learning, 28:41-75, 1997.
[2] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Convex multi-task feature learning. Mach. Learn., 73(3):243-272, 2008.
[3] Kai Yu, Volker Tresp, and Anton Schwaighofer. Learning gaussian processes from multiple tasks. In ICML '05: Proceedings of the 22nd international conference on Machine learning, pages 1012-1019, New York, NY, USA, 2005. ACM.
[4] Rie Kubota Ando and Tong Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817-1853, 2005.
[5] Y. Altun and A.J. Smola. Unifying divergence minimization and statistical inference via convex duality. In H.U. Simon and G. Lugosi, editors, Proc. Annual Conf. Computational Learning Theory, LNCS, pages 139-153. Springer, 2006.
[6] T. Pham Dinh and L. Hoai An. A D.C. optimization algorithm for solving the trust-region subproblem. SIAM Journal on Optimization, 8(2):476-505, 1988.
[7] G. Obozinski, B. Taskar, and M. I. Jordan. Multi-task feature selection. Technical report, U.C. Berkeley, 2007.
[8] Remi Flamary, Alain Rakotomamonjy, Gilles Gasso, and Stephane Canu. Svm multi-task learning and non convex sparsity measure. In The Learning Workshop, 2009.
[9] Theodoros Evgeniou, Charles A. Micchelli, and Massimiliano Pontil. Learning multiple tasks with kernel methods. J. Mach. Learn. Res., 6:615-637, 2005.
[10] K. Crammer, M. Kearns, and J. Wortman. Learning from multiple sources. In NIPS 19, pages 321-328. MIT Press, 2007.
[11] Shai Ben-David, Johannes Gehrke, and Reba Schuller. A theoretical framework for learning from a pool of disparate data sources. In KDD '02: Proceedings of the 8th ACM international conference on Knowledge discovery and data mining, pages 443-449. ACM, 2002.
[12] M. Dudík and R. E. Schapire. Maximum entropy distribution estimation with generalized regularization. In Gábor Lugosi and Hans U. Simon, editors, Proc. Annual Conf. Computational Learning Theory. Springer Verlag, June 2006.
[13] Nadia Ghamrawi and Andrew McCallum. Collective multi-label classification. In CIKM '05: Proceedings of the 14th ACM international conference on Information and knowledge management, pages 195-200, New York, NY, USA, 2005. ACM.
[14] A.L. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15:915-936, 2003.
[15] A. J. Smola, S. V. N. Vishwanathan, and T. Hofmann. Kernel methods for missing variables. In R.G. Cowell and Z. Ghahramani, editors, Proceedings of International Workshop on Artificial Intelligence and Statistics, pages 325-332, 2005.
[16] Bharath Sriperumbudur and Gert Lanckriet. On the convergence of the concave-convex procedure. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1759-1767. MIT Press, 2009.
[17] B. Schölkopf. Support Vector Learning. R. Oldenbourg Verlag, Munich, 1997. Download: http://www.kernel-machines.org.
[18] David D. Lewis, Yiming Yang, Tony G. Rose, and Fan Li. RCV1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361-397, 2004.
[19] Lijuan Cai and T. Hofmann. Hierarchical document categorization with support vector machines. In Proceedings of the Thirteenth ACM conference on Information and knowledge management, pages 78-87, New York, NY, USA, 2004. ACM Press.
[20] M. Ashburner, C. A. Ball, J. A. Blake, D. Botstein, H. Butler, J. M. Cherry, A. P. Davis, K. Dolinski, S. S. Dwight, J. T. Eppig, M. A. Harris, D. P. Hill, L. Issel-Tarver, A. Kasarskis, S. Lewis, J. C. Matese, J. E. Richardson, M. Ringwald, G. M. Rubin, and G. Sherlock. Gene ontology: tool for the unification of biology. the gene ontology consortium. Nat Genet, 25:25-29, 2000.
[21] J. M. Borwein and Q. J. Zhu. Techniques of Variational Analysis. CMS books in Mathematics. Canadian Mathematical Society, 2005.
Multi-View Active Learning in the Non-Realizable Case
the Non-Realizable Case
Wei Wang and Zhi-Hua Zhou
National Key Laboratory for Novel Software Technology
Nanjing University, Nanjing 210093, China
{wangw,zhouzh}@lamda.nju.edu.cn
Abstract
The sample complexity of active learning under the realizability assumption has been well-studied. The realizability assumption, however, rarely holds in practice. In this paper, we theoretically characterize the sample complexity of active learning in the non-realizable case under multi-view setting. We prove that, with unbounded Tsybakov noise, the sample complexity of multi-view active learning can be Õ(log 1/ε)1, contrasting to single-view setting where the polynomial improvement is the best possible achievement. We also prove that in general multi-view setting the sample complexity of active learning with unbounded Tsybakov noise is Õ(1/ε), where the order of 1/ε is independent of the parameter in Tsybakov noise, contrasting to previous polynomial bounds where the order of 1/ε is related to the parameter in Tsybakov noise.
1 Introduction
In active learning [10, 13, 16], the learner draws unlabeled data from the unknown distribution defined on the learning task and actively queries some labels from an oracle. In this way, the active learner can achieve good performance with much fewer labels than passive learning. The number of these queried labels, which is necessary and sufficient for obtaining a good learner, is well-known as the sample complexity of active learning.

Many theoretical bounds on the sample complexity of active learning have been derived based on the realizability assumption (i.e., there exists a hypothesis perfectly separating the data in the hypothesis class) [4, 5, 11, 12, 14, 16]. The realizability assumption, however, rarely holds in practice. Recently, the sample complexity of active learning in the non-realizable case (i.e., the data cannot be perfectly separated by any hypothesis in the hypothesis class because of the noise) has been studied [2, 13, 17]. It is worth noting that these bounds obtained in the non-realizable case match the lower bound Ω(ν²/ε²) [19], in the same order as the upper bound O(1/ε²) of passive learning (ν denotes the generalization error rate of the optimal classifier in the hypothesis class and ε bounds how close to the optimal classifier in the hypothesis class the active learner has to get). This suggests that perhaps active learning in the non-realizable case is not as efficient as that in the realizable case. To improve the sample complexity of active learning in the non-realizable case remarkably, the model of the noise or some assumptions on the hypothesis class and the data distribution must be considered. The Tsybakov noise model [21] is more and more popular in theoretical analyses of the sample complexity of active learning. However, an existing result [8] shows that obtaining exponential improvement in the sample complexity of active learning with unbounded Tsybakov noise is hard.
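For reference, one common statement of the Tsybakov noise (margin) condition is the following; the constant names here are generic, and the works cited in Section 2 use several equivalent parametrizations.

```latex
% Tsybakov noise condition with exponent \lambda > 0: the posterior
% \eta(x) = \Pr(y = 1 \mid x) is rarely close to the decision boundary 1/2.
\Pr_{x \sim \mathcal{D}}\left( \left|\eta(x) - \tfrac{1}{2}\right| \le t \right)
  \le C \, t^{\lambda} \qquad \text{for all } t > 0 .
```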
Inspired by [23], which proved that multi-view setting [6] can help improve the sample complexity of active learning in the realizable case remarkably, we have an insight that multi-view setting will also help active learning in the non-realizable case. In this paper, we present the first analysis of the sample complexity of active learning in the non-realizable case under multi-view setting, where the non-realizability is caused by Tsybakov noise. Specifically:

-We define α-expansion, which extends the definition in [3] and [23] to the non-realizable case, and the β-condition for multi-view setting.
-We prove that the sample complexity of active learning with Tsybakov noise under multi-view setting can be improved to Õ(log 1/ε) when the learner satisfies the non-degradation condition. This exponential improvement holds no matter whether Tsybakov noise is bounded or not, contrasting to single-view setting where the polynomial improvement is the best possible achievement for active learning with unbounded Tsybakov noise.

-We also prove that, when the non-degradation condition does not hold, the sample complexity of active learning with unbounded Tsybakov noise under multi-view setting is Õ(1/ε), where the order of 1/ε is independent of the parameter in Tsybakov noise, i.e., the sample complexity is always Õ(1/ε) no matter how large the unbounded Tsybakov noise is. In previous polynomial bounds, by contrast, the order of 1/ε is related to the parameter in Tsybakov noise and is larger than 1 when unbounded Tsybakov noise is larger than some degree (see Section 2). This discloses that, when the non-degradation condition does not hold, multi-view setting is still able to lead to a faster convergence rate, and our polynomial improvement in the sample complexity is better than previous polynomial bounds when unbounded Tsybakov noise is large.
The rest of this paper is organized as follows. After introducing related work in Section 2 and
preliminaries in Section 3, we define ?-expansion in the non-realizable case in Section 4. We analyze
the sample complexity of active learning with Tsybakov noise under multi-view setting with and
without the non-degradation condition in Section 5 and Section 6, respectively. Finally we conclude
the paper in Section 7.
2
Related Work
Generally, the non-realizability of learning task is caused by the presence of noise. For learning the
task with arbitrary forms of noise, Balcan et al. [2] proposed the agnostic active learning algorithm
b ?22 ).2 Hoping to get tighter bound on the sample
A2 and proved that its sample complexity is O(
?
complexity of the algorithm A2 , Hanneke [17] defined the disagreement coefficient ?, which depends
on the hypothesis class and the data distribution, and proved that the sample complexity of the
b 2 ?22 ). Later, Dasgupta et al. [13] developed a general agnostic active learning
algorithm A2 is O(?
?
b ?22 ).
algorithm which extends the scheme in [10] and proved that its sample complexity is O(?
?
Recently, the popular Tsybakov noise model [21] was considered in theoretical analysis on active learning and there have been some bounds on the sample complexity. For some simple cases,
where Tsybakov noise is bounded, it has been proved that the exponential improvement in the sample complexity is possible [4, 7, 18]. As for the situation where Tsybakov noise is unbounded,
only polynomial improvement in the sample complexity has been obtained. Balcan et al. [4] assumed that the samples are drawn uniformly from the the unit ball in Rd and proved that the sample
2
complexity of active learning with unbounded Tsybakov noise is O ?? 1+? (? > 0 depends on
Tsybakov noise). This uniform distribution assumption, however, rarely holds in practice. Castro
and Nowak [8] showed that the sample complexity of active learning with unbounded Tsybakov
2??+d?2??1
b ??
??
noise is O
(? > 1 depends on another form of Tsybakov noise, ? ? 1 depends
on the H?older smoothness and d is the dimension of the data). This result is also based on the
strong uniform distribution assumption. Cavallanti et al. [9] assumed that the labels of examples
are generated according to a simple linear noise model and indicated that the sample complexity
2(3+?)
of active learning with unbounded Tsybakov noise is O ?? (1+?)(2+?) . Hanneke [18] proved that
the algorithms or variants thereof in [2] and [13] can achieve the polynomial sample complexity
2
b ?? 1+?
for active learning with unbounded Tsybakov noise. For active learning with unbounded
O
Tsybakov noise, Castro and Nowak [8] also proved that at least ?(??? ) labels are requested to learn
1
2
e notation is used to hide the factor log log( 1? ).
The O
b notation is used to hide the factor polylog( 1? ).
The O
2
an ?-approximation of the optimal classifier (? ? (0, 2) depends on Tsybakov noise). This result
shows that the polynomial improvement is the best possible achievement for active learning with unbounded Tsybakov noise in single-view setting. Wang [22] introduced smooth assumption to active
learning with approximate Tsybakov noise and proved that if the classification boundary and the
underlying distribution are smooth to ?-th order and ? > d, the sample complexity of active learning
2d
b ?? ?+d
is O
; if the boundary and
the distribution are infinitely smooth, the sample complexity of
active learning is O polylog( 1? ) . Nevertheless, this result is for approximate Tsybakov noise and
the assumption on large smoothness order (or infinite smoothness order) rarely holds for data with
high dimension d in practice.
3
Preliminaries
In multi-view setting, the instances are described with several different disjoint sets of features. For
the sake of simplicity, we only consider two-view setting in this paper. Suppose that X = X1 ? X2
is the instance space, X1 and X2 are the two views, Y = {0, 1} is the label space and D is the
distribution over X?Y . Suppose that c = (c1 , c2 ) is the optimal Bayes classifier, where c1 and c2 are
the optimal Bayes classifiers in the two views, respectively. Let H1 and H2 be the hypothesis class
in each view and suppose that c1 ? H1 and c2 ? H2 . For any instance x = (x1 , x2 ), the hypothesis
hv ? Hv (v = 1, 2) makes that hv (xv ) = 1 if xv ? Sv and hv (xv ) = 0 otherwise, where Sv is a
subset of Xv . In this way, any hypothesis hv ? Hv corresponds to a subset Sv of Xv (as for how to
combine the hypotheses in the two views, see Section 5). Considering that x1 and x2 denote the same
instance x in different views, we overload Sv to denote the instance set {x = (x1 , x2 ) : xv ? Sv }
without confusion. Let Sv? correspond to the optimal Bayes classifier cv . It is well-known [15] that
Sv? = {xv : ?v (xv ) ? 12 }, where ?v (xv ) = P (y = 1|xv ). Here, we also overload Sv? to denote
the instances set {x = (x1 , x2 ) : xv ? Sv? }. The error rateof a hypothesis Sv under the distribution
D is R(hv ) = R(Sv ) = P r(x1 ,x2 ,y)?D y 6= I(xv ? Sv ) . In general, R(Sv? ) 6= 0 and the excess
error of Sv can be denoted as follows, where Sv ?Sv? = (Sv ? Sv? ) ? (Sv? ? Sv ) and d(Sv , Sv? ) is a
pseudo-distance between the sets Sv and Sv? .
Z
R(Sv ) ? R(Sv? ) =
|2?v (xv ) ? 1|pxv dxv , d(Sv , Sv? )
(1)
Sv ?Sv?
Let ?v denote the error rate of the optimal Bayes classifier cv which is also called as the noise rate
in the non-realizable case. In general, ?v is less than 12 . In order to model the noise, we assume that
the data distribution and the Bayes decision boundary in each view satisfies the popular Tsybakov
noise condition [21] that P rxv ?Xv (|?v (xv ) ? 1/2| ? t) ? C0 t? for some finite C0 > 0, ? > 0
and all 0 < t ? 1/2, where ? = ? corresponds to the best learning situation and the noise is called
bounded [8]; while ? = 0 corresponds to the worst situation. When ? < ?, the noise is called
unbounded [8]. According to Proposition 1 in [21], it is easy to know that (2) holds.
d(Sv , Sv? ) ? C1 dk? (Sv , Sv? )
(2)
?1/?
?(? + 1)?1?1/? , d? (Sv , Sv? ) = P r(Sv ? Sv? ) + P r(Sv? ? Sv ) is also
Here k = 1+?
? , C1 = 2C0
a pseudo-distance between the sets Sv and Sv? , and d(Sv , Sv? ) ? d? (Sv , Sv? ) ? 1. We will use the
following lamma [1] which gives the standard sample complexity for non-realizable learning task.
Lemma 1 Suppose that H is a set of functions from X to Y = {0, 1} with finite VC-dimension
V ? 1 and D is the fixed but unknown distribution over X ? Y . For any ?, ? > 0, there is a
1 1
N
N
positive constant
C, such that if the size of sample {(x , y ), . . . , (x , y )} from D is N (?, ?) =
1
C
?2 V + log( ? ) , then with probability at least 1 ? ?, for all h ? H, the following holds.
1 XN
|
I h(xi ) 6= y i ? E(x,y)?D I h(x) 6= y | ? ?
i=1
N
4
?-Expansion in the Non-realizable Case
Multi-view active learning first described in [20] focuses on the contention points (i.e., unlabeled
instances on which different views predict different labels) and queries some labels of them. It is
motivated by that querying the labels of contention points may help at least one of the two views
to learn the optimal classifier. Let S1 ? S2 = (S1 ? S2 ) ? (S2 ? S1 ) denote the contention points
3
Table 1: Multi-view active learning with the non-degradation condition
Input: Unlabeled data set U = {x1 , x2 , ? ? ? , } where each example xj is given as a pair (xj1 , xj2 )
Process:
Query the labels of m0 instances drawn randomly from U to compose the labeled data set L
iterate: i = 0, 1, ? ? ? , s
Train the classifier hiv (v
P= 1, 2) by minimizing the empirical risk with L in each view:
hiv = arg minh?Hv (x1 ,x2 ,y)?L I(h(xv ) 6= y);
Apply hi1 and hi2 to the unlabeled data set U and find out the contention point set Qi ;
Query the labels of mi+1 instances drawn randomly from Qi , then add them into L and delete them
from U.
end iterate
Output: hs+ and hs?
between S1 and S2 , then P r(S1 ? S2 ) denotes the probability mass on the contentions points. ???
and ??? mean the same operation rule. In this paper, we use ??? when referring the excess error
between Sv and Sv? and use ??? when referring the difference between the two views S1 and S2 . In
order to study multi-view active learning, the properties of contention points should be considered.
One basic property is that P r(S1 ? S2 ) should not be too small, otherwise the two views could be
exactly the same and two-view setting would degenerate into single-view setting.
In multi-view learning, the two views represent the same learning task and generally are consistent
with each other, i.e., for any instance x = (x1 , x2 ) the labels of x in the two views are the same.
Hence we first assume that S1? = S2? = S ? . As for the situation where S1? 6= S2? , we will discuss on it
further in Section 5.2. The instances agreed by the two views can be denoted as (S1 ?S2 )?(S1 ?S2 ).
However, some of these agreed instances may be predicted different label by the optimal classifier
S ? , i.e., the instances in (S1 ? S2 ? S ? ) ? (S1 ? S2 ? S ? ). Intuitively, if the contention points
can convey some information about (S1 ? S2 ? S ? ) ? (S1 ? S2 ? S ? ), then querying the labels of
contention points could help to improve S1 and S2 . Based on this intuition and that P r(S1 ? S2 )
should not be too small, we give our definition on ?-expansion in the non-realizable case.
Definition 1 D is ?-expanding if for some ? > 0 and any S1 ? X1 , S2 ? X2 , (3) holds.
P r S1 ? S2 ? ? P r S1 ? S2 ? S ? + P r S1 ? S2 ? S ?
(3)
We say that D is ?-expanding with respect to hypothesis class H1 ? H2 if the above holds for all
S1 ? H1 ? X1 , S2 ? H2 ? X2 (here we denote by Hv ? Xv the set {h ? Xv : h ? Hv } for v = 1, 2).
Balcan et al. [3] also gave a definition of expansion, P r(T1 ? T2 ) ? ? min P r(T1 ? T2 ), P r(T1 ?
T2 ) , for realizable learning task under the assumptions that the learner in each view is never ?confident but wrong? and the learning algorithm is able to learn from positive data only. Here Tv denotes
the instances which are classified as positive confidently in each view. Generally, in realizable learning tasks, we aim at studying the asymptotic performance and assume that the performance of initial
classifier is better than guessing randomly, i.e., P r(Tv ) > 1/2. This ensures that P r(T1 ? T2 ) is
larger than P r(T1 ? T2 ). In addition, in [3] the instances which are agreed by the two views but are
predicted different label by the optimal classifier can be denoted as T1 ? T2 . So, it can be found that
Definition 1 and the definition of expansion in [3] are based on the same intuition that the amount of
contention points is no less than a fraction of the amount of instances which are agreed by the two
views but are predicted different label by the optimal classifiers.
5
Multi-view Active Learning with Non-degradation Condition
In this section, we first consider the multi-view learning in Table 1 and analyze whether multiview setting can help improve the sample complexity of active learning in the non-realizable case
remarkably. In multi-view setting, the classifiers are often combined to make predictions and many
strategies can be used to combine them. In this paper, we consider the following two combination
schemes, h+ and h? , for binary classification:
1 if hi1 (x1 ) = hi2 (x2 ) = 1
0 if hi1 (x1 ) = hi2 (x2 ) = 0
i
i
h+ (x) =
h? (x) =
(4)
0 otherwise
1 otherwise
4
5.1
The Situation Where S1? = S2?
With (4), the error rate of the combined classifiers hi+ and hi? satisfy (5) and (6), respectively.
R(hi+ ) ? R(S ? ) = R(S1i ? S2i ) ? R(S ? ) ? d? (S1i ? S2i , S ? )
(5)
R(hi? )
(6)
?
? R(S ) =
R(S1i
?
S2i )
?
? R(S ) ?
d? (S1i
?
S2i , S ? )
Here Svi ? Xv (v = 1, 2) corresponds to the classifier hiv ? Hv in the i-th round. In each round of
multi-view active learning, labels of some contention points are queried to augment the training data
set L and the classifier in each view is then refined. As discussed in [23], we also assume that the
learner in Table 1 satisfies the non-degradation condition as the amount of labeled training examples
increases, i.e., (7) holds, which implies that the excess error of Svi+1 is no larger than that of Svi in
the region of S1i ? S2i .
P r Svi+1 ?S ? S i ? S i ? P r(Svi ?S ? S i ? S i )
(7)
1
2
1
2
To illustrate the non-degradation condition, we give the following example: Suppose the data in
Xv (v = 1, 2) fall into n different clusters, denoted by ?1v , . . . , ?nv , and every cluster has the same
probability mass for simplicity. The positive class is the union of some clusters while the negative
class is the union of the others. Each positive (negative) cluster ??v in Xv is associated with only
3 positive (negative) clusters ??3?v (?, ? ? {1, . . . , n}) in X3?v (i.e., given an instance xv in ??v ,
x3?v will only be in one of these ??3?v ). Suppose the learning algorithm will predict all instances
in each cluster with the same label, i.e., the hypothesis class Hv consists of the hypotheses which
do not split any cluster. Thus, the cluster ??v can be classified according to the posterior probability
P (y = 1|??v ) and querying the labels of instances in cluster ??v will not influence the estimation of
the posterior probability for cluster ??v (? 6= ?). It is evident that the non-degradation condition holds
in this task. Note that the non-degradation assumption may not always hold, and we will discuss on
this in Section 6. Now we give Theorem 1.
Theorem 1 For data distribution D ?-expanding with respect to hypothesis class H1 ? H2 ac2 log 1
cording to Definition 1, when the non-degradation condition holds, if s = ? log 18? ? and mi =
C2
16(s+1)
256k C
V
+
log(
)
,
the
multi-view
active
learning
in
Table
1
will
generate
two
classifiers hs+
?
C12
s
?
and h? , at least one of which is with error rate no larger than R(S ) + ? with probability at least
1 ? ?.
Here, V = max[V C(H1 ), V C(H2 )] where V C(H) denotes the VC-dimension of the hypothesis
?1/?
class H, k = 1+?
?(? + 1)?1?1/? and C2 = 5?+8
? , C1 = 2C0
6?+8 .
Proof sketch. Let Qi = S1i ? S2i , first with Lemma 1 and (2) we have d? (S1i+1 ? S2i+1 | Qi , S ? |
P r(T1i+1 ?T2i+1 ?S ? )
? 21 . Considering (7)
P r(T1i+1 ?T2i+1 )
S ? ) + P r(S1i ? S2i ? S ? ), then we calculate
Qi ) ? 1/8. Let Tvi+1 = Svi+1 ? Qi and ?i+1 =
and
d? (S1i ? S2i |Qi , S ? |Qi )P r(Qi ) = P r(S1i ? S2i ?
that
d? (S1i+1
?
S2i+1 , S ? )
1
? P r(S1i ? S2i ? S ? ) + P r(S1i ? S2i ? S ? ) + P r(S1i ? S2i ) ? ?i+1 P r (S1i+1 ? S2i+1 ) ? Qi
8
d? (S1i+1 ? S2i+1 , S ? )
1
? P r(S1i ? S2i ? S ? ) + P r(S1i ? S2i ? S ? ) + P r(S1i ? S2i ) + ?i+1 P r (S1i+1 ? S2i+1 ) ? Qi .
8
As in each round some contention points are queried and added into the training set, the difference
between the two views is decreasing, i.e., P r(S1i+1 ? S2i+1 ) is no larger than P r(S1i ? S2i ). Let
P r(S1i ?S2i ?S ? )
? 12 , with Definition 1 and different combinations of ?i+1 and ?i , we can
P r(S1i ?S2i )
1
d (S i+1 ?S i+1 ,S ? )
2 log 8?
d (S i+1 ?S i+1 ,S ? )
5?+8
or ?d? 1(S i ?S2i ,S ? ) ? 5?+8
have either ?d? 1(S i ?S2i ,S ? ) ? 6?+8
6?+8 . When s = ? log C1 ?, where
1
2
1
2
2
s
s
?
s
s
?
C2 = 5?+8
6?+8 is a constant less than 1, we have either d? (S1 ? S2 , S ) ? ? or d? (S1 ? S2 , S ) ? ?.
s
?
s
?
Thus, with (5) and (6) we have either R(h+ ) ? R(S ) + ? or R(h? ) ? R(S ) + ?.
?i =
5
Ps
1
s
e
From Theorem 1 we know that we only need to request i=0 mi = O(log
? ) labels to learn h+
s
?
and h? , at least one of which is with error rate no larger than R(S ) + ? with probability at least
1 ? ?. If we choose hs+ and it happens to satisfy R(hs+ ) ? R(S ? ) + ?, we can get a classifier whose
error rate is no larger than R(S ? ) + ?. Fortunately, there are only two classifiers and the probability
of getting the right classifier is no less than 12 . To study how to choose between hs+ and hs? , we give
Definition 2 at first.
Definition 2 The multi-view classifiers S1 and S2 satisfy ?-condition if (8) holds for some ? > 0.
P r {x : x ? S ? S ? y(x) = 1} P r {x : x ? S ? S ? y(x) = 0}
1
2
1
2
?
(8)
??
P r(S1 ? S2 )
P r(S1 ? S2 )
(8) implies the difference between the examples belonging to positive class and that belonging to
negative class in the contention region of S1 ? S2 . Based on Definition 2, we give Lemma 2 which
provides information for deciding how to choose between h+ and h? . This helps to get Theorem 2.
2 log( 4 )
Lemma 2 If the multi-view classifiers S1s and S2s satisfy ?-condition, with the number of ? 2 ?
labels we can decide correctly whether P r {x : x ? S1s ? S2s ? y(x) = 1} or P r {x : x ?
S1s ? S2s ? y(x) = 0} ) is smaller with probability at least 1 ? ?.
Theorem 2 For data distribution D ?-expanding with respect to hypothesis class H1 ? H2 according to Definition 1, when the non-degradation condition holds, if the multi-view classifiers satisfy
1
e
?-condition, by requesting O(log
? ) labels the multi-view active learning in Table 1 will generate a
classifier whose error rate is no larger than R(S ? ) + ? with probability at least 1 ? ?.
1
e
From Theorem 2 we know that we only need to request O(log
? ) labels to learn a classifier with
error rate no larger than R(S ? ) + ? with probability at least 1 ? ?. Thus, we achieve an exponential
improvement in sample complexity of active learning in the non-realizable case under multi-view
setting. Sometimes, the difference between the examples belonging to positive class and that belonging to negative class in S1s ? S2s may be very small, i.e., (9) holds.
P r {x : x ? S s ? S s ? y(x) = 1} P r {x : x ? S s ? S s ? y(x) = 0}
1
2
1
2
?
(9)
= O(?)
P r(S1s ? S2s )
P r(S1s ? S2s )
If so, we need not to estimate whether R(hs+ ) or R(hs? ) is smaller and Theorem 3 indicates that
both hs+ and hs? are good approximations of the optimal classifier.
Theorem 3 For data distribution D ?-expanding with respect to hypothesis class H1 ? H2 according to Definition 1, when the non-degradation condition holds, if (9) is satisfied, by request1
s
e
ing O(log
? ) labels the multi-view active learning in Table 1 will generate two classifiers h+ and
s
s
?
h? which satisfy either (a) or (b) with probability at least 1 ? ?. (a) R(h+ ) ? R(S ) + ? and
R(hs? ) ? R(S ? ) + O(?); (b) R(hs+ ) ? R(S ? ) + O(?) and R(hs? ) ? R(S ? ) + ?.
The complete proof of Theorem 1, and the proofs of Lemma 2, Theorem 2 and Theorem 3 are given
in the supplementary file.
5.2
The Situation Where S1? 6= S2?
Although the two views represent the same learning task and generally are consistent with each
other, sometimes S1? may be not equal to S2? . Therefore, the ?-expansion assumption in Definition
1 should be adjusted to the situation where S1? 6= S2? . To analyze this theoretically, we replace S ?
by S1? ? S2? in Definition 1 and get (10). Similarly to Theorem 1, we get Theorem 4.
P r S1 ? S2 ? ? P r S1 ? S2 ? S1? ? S2? + P r S1 ? S2 ? S1? ? S2?
(10)
Theorem 4 For data distribution D ?-expanding with respect to hypothesis class H1 ? H2 accordk
2 log 1
C
V +
ing to (10), when the non-degradation condition holds, if s = ? log 18? ? and mi = 256
C12
C2
log( 16(s+1)
) , the multi-view active learning in Table 1 will generate two classifiers hs+ and hs? , at
?
least one of which is with error rate no larger than R(S1? ? S2? ) + ? with probability at least 1 ? ?.
(V , k, C1 and C2 are given in Theorem 1.)
6
Table 2: Multi-view active learning without the non-degradation condition
Input: Unlabeled data set U = {x1 , x2 , ? ? ? , } where each example xj is given as a pair (xj1 , xj2 )
Process:
Query the labels of m0 instances drawn randomly from U to compose the labeled data set L;
Train the classifier h0v (v
P= 1, 2) by minimizing the empirical risk with L in each view:
h0v = arg minh?Hv (x1 ,x2 ,y)?L I(h(xv ) 6= y);
iterate: i = 1, ? ? ? , s
Apply hi?1
and hi?1
to the unlabeled data set U and find out the contention point set Qi ;
1
2
Query the labels of mi instances drawn randomly from Qi , then add them into L and delete them
from U;
Query the labels of (2i ? 1)mi instances drawn randomly from U ? Qi , then add them into L and
delete them from U;
Train the classifier hiv by
Pminimizing the empirical risk with L in each view:
hiv = arg minh?Hv (x1 ,x2 ,y)?L I(h(xv ) 6= y).
end iterate
Output: hs+ and hs?
Proof. Since Sv? is the optimal Bayes classifier in the v-th view, obviously, R(S1? ? S2? ) is no less
than R(Sv? ), (v = 1, 2). So, learning a classifier with error rate no larger than R(S1? ? S2? ) + ? is not
harder than learning a classifier with error rate no larger than R(Sv? ) + ?. Now we aim at learning
a classifier with error rate no larger than R(S1? ? S2? ) + ?. Without loss of generality, we assume
R(Svi ) > R(S1? ? S2? ) for i = 0, 1, . . . , s. If R(Svi ) ? R(S1? ? S2? ), we get a classifier with error rate
no larger than R(S1? ? S2? ) + ?. Thus, we can neglect the probability mass on the hypothesis whose
error rate is less than R(S1? ? S2? ) and regard S1? ? S2? as the optimal. Replacing S ? by S1? ? S2? in
the discussion of Section 5.1, with the proof of Theorem 1 we get Theorem 4 proved.
1
e
Theorem 4 shows that for the situation where S1? 6= S2? , by requesting O(log
? ) labels we can learn
s
s
two classifiers h+ and h? , at least one of which is with error rate no larger than R(S1? ? S2? ) + ?
with probability at least 1 ? ?. With Lemma 2, we get Theorem 5 from Theorem 4.
Theorem 5 For data distribution D ?-expanding with respect to hypothesis class H1 ? H2 according to (10), when the non-degradation condition holds, if the multi-view classifiers satisfy ?1
e
condition, by requesting O(log
? ) labels the multi-view active learning in Table 1 will generate a
classifier whose error rate is no larger than R(S1? ? S2? ) + ? with probability at least 1 ? ?.
Generally, R(S1? ? S2? ) is larger than R(S1? ) and R(S2? ). When S1? is not too much different from
S2? , i.e., P r(S1? ?S2? ) ? ?/2, we have Corollary 1 which indicates that the exponential improvement
in the sample complexity of active learning with Tsybakov noise is still possible.
Corollary 1 For data distribution D ?-expanding with respect to hypothesis class H1 ? H2 according to (10), when the non-degradation condition holds, if the multi-view classifiers satisfy ?1
e
condition and P r(S1? ? S2? ) ? ?/2, by requesting O(log
? ) labels the multi-view active learning in
Table 1 will generate a classifier with error rate no larger than R(Sv? )+? (v = 1, 2) with probability
at least 1 ? ?.
The proofs of Theorem 5 and Corollary 1 are given in the supplemental file.
6
Multi-view Active Learning without Non-degradation Condition
Section 5 considers situations when the non-degradation condition holds, there are cases, however,
the non-degradation condition (7) does not hold. In this section we focus on the multi-view active
learning in Table 2 and give an analysis with the non-degradation condition waived. Firstly, we give
Theorem 6 for the sample complexity of multi-view active learning in Table 2 when S1? = S2? = S ? .
Theorem 6 For data distribution D ?-expanding with respect to hypothesis class H1 ? H2 accord
k
2 log 1
C
ing to Definition 1, if s = ? log 18? ? and mi = 256
) , the multi-view active
V + log( 16(s+1)
?
C2
1
C2
learning in Table 2 will generate two classifiers hs+ and hs? , at least one of which is with error rate
no larger than R(S ? ) + ? with probability at least 1 ? ?. (V , k, C1 and C2 are given in Theorem 1.)
7
Proof sketch. In the (i + 1)-th round, we randomly query (2i+1 ? 1)mi labels from Qi and add
them into L. So the number of training examples for Svi+1 (v = 1, 2) is larger than the number of
whole training examples for Svi . Thus we know that d(Svi+1 |Qi , S ? |Qi ) ? d(Svi |Qi , S ? |Qi ) holds
for any ?v . Setting ?v ? {0, 1}, the non-degradation condition (7) stands. Thus, with the proof of
Theorem 1 we get Theorem 6 proved.
Ps
e 1 ) labels to learn two classifiers hs+ and hs? ,
Theorem 6 shows that we can request i=0 2i mi = O(
?
at least one of which is with error rate no larger than R(S ? ) + ? with probability at least 1 ? ?. To
guarantee the non-degradation condition (7), we only need to query (2i ? 1)mi more labels in the
i-th round. With Lemma 2, we get Theorem 7.
Theorem 7 For data distribution D ?-expanding with respect to hypothesis class H1 ? H2 accorde 1 ) labels the
ing to Definition 1, if the multi-view classifiers satisfy ?-condition, by requesting O(
?
multi-view active learning in Table 2 will generate a classifier whose error rate is no larger than
?
R(S ) + ? with probability at least 1 ? ?.
e 1 ) labels to
Theorem 7 shows that, without the non-degradation condition, we need to request O(
?
?
learn a classifier with error rate no larger than R(S ) + ? with probability at least 1 ? ?. The order of
1/? is independent of the parameter in Tsybakov noise. Similarly to Theorem 3, we get Theorem 8
which indicates that both hs+ and hs? are good approximations of the optimal classifier.
Theorem 8 For data distribution D ?-expanding with respect to hypothesis class H1 ? H2 accorde 1 ) labels the multi-view active learning in Table
ing to Definition 1, if (9) holds, by requesting O(
?
s
s
2 will generate two classifiers h+ and h? which satisfy either (a) or (b) with probability at least
1 ? ?. (a) R(hs+ ) ? R(S ? ) + ? and R(hs? ) ? R(S ? ) + O(?); (b) R(hs+ ) ? R(S ? ) + O(?) and
R(hs? ) ? R(S ? ) + ?.
As for the situation where S1? 6= S2? , similarly to Theorem 5 and Corollary 1, we have Theorem 9
and Corollary 2.
Theorem 9 For data distribution D ?-expanding with respect to hypothesis class H1 ? H2 accorde 1 ) labels the multi-view
ing to (10), if the multi-view classifiers satisfy ?-condition, by requesting O(
?
active learning in Table 2 will generate a classifier whose error rate is no larger than R(S1? ?S2? )+?
with probability at least 1 ? ?.
Corollary 2 For data distribution D ?-expanding with respect to hypothesis class H1 ? H2 according to (10), if the multi-view classifiers satisfy ?-condition and P r(S1? ? S2? ) ? ?/2, by requesting
e 1 ) labels the multi-view active learning in Table 2 will generate a classifier with error rate no
O(
?
larger than R(Sv? ) + ? (v = 1, 2) with probability at least 1 ? ?.
The complete proof of Theorem 6, the proofs of Theorem 7 to 9 and Corollary 2 are given in the
supplementary file.
7
Conclusion
We present the first study on active learning in the non-realizable case under multi-view setting in
this paper. We prove that the sample complexity of multi-view active learning with unbounded Tsy1
e
bakov noise can be improved to O(log
? ), contrasting to single-view setting where only polynomial
improvement is proved possible with the same noise condition. In general multi-view setting, we
e 1 ), where
prove that the sample complexity of active learning with unbounded Tsybakov noise is O(
?
the order of 1/? is independent of the parameter in Tsybakov noise, contrasting to previous polynomial bounds where the order of 1/? is related to the parameter in Tsybakov noise. Generally, the
non-realizability of learning task can be caused by many kinds of noise, e.g., misclassification noise
and malicious noise. It would be interesting to extend our work to more general noise model.
Acknowledgments
This work was supported by the NSFC (60635030, 60721002), 973 Program (2010CB327903) and
JiangsuSF (BK2008018).
8
References
[1] M. Anthony and P. L. Bartlett, editors. Neural Network Learning: Theoretical Foundations.
Cambridge University Press, Cambridge, UK, 1999.
[2] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In ICML, pages
65?72, 2006.
[3] M.-F. Balcan, A. Blum, and K. Yang. Co-training and expansion: Towards bridging theory and
practice. In NIPS 17, pages 89?96. 2005.
[4] M.-F. Balcan, A. Z. Broder, and T. Zhang. Margin based active learning. In COLT, pages
35?50, 2007.
[5] M.-F. Balcan, S. Hanneke, and J. Wortman. The true sample complexity of active learning. In
COLT, pages 45?56, 2008.
[6] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In COLT,
pages 92?100, 1998.
[7] R. M. Castro and R. D. Nowak. Upper and lower error bounds for active learning. In Allerton
Conference, pages 225?234, 2006.
[8] R. M. Castro and R. D. Nowak. Minimax bounds for active learning. IEEE Transactions on
Information Theory, 54(5):2339?2353, 2008.
[9] G. Cavallanti, N. Cesa-Bianchi, and C. Gentile. Linear classification and selective sampling
under low noise conditions. In NIPS 21, pages 249?256. 2009.
[10] D. A. Cohn, L. E. Atlas, and R. E. Ladner. Improving generalization with active learning.
Machine Learning, 15(2):201?221, 1994.
[11] S. Dasgupta. Analysis of a greedy active learning strategy. In NIPS 17, pages 337?344. 2005.
[12] S. Dasgupta. Coarse sample complexity bounds for active learning. In NIPS 18, pages 235?
242. 2006.
[13] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In
NIPS 20, pages 353?360. 2008.
[14] S. Dasgupta, A. T. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. In
COLT, pages 249?263, 2005.
[15] L. Devroye, L. Gy?orfi, and G. Lugosi, editors. A Probabilistic Theory of Pattern Recognition.
Springer, New York, 1996.
[16] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by
committee algorithm. Machine Learning, 28(2-3):133?168, 1997.
[17] S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML, pages
353?360, 2007.
[18] S. Hanneke. Adaptive rates of convergence in active learning. In COLT, 2009.
[19] M. K?aa? ri?ainen. Active learning in the non-realizable case. In ACL, pages 63?77, 2006.
[20] I. Muslea, S. Minton, and C. A. Knoblock. Active + semi-supervised learning = robust multiview learning. In ICML, pages 435?442, 2002.
[21] A. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics,
32(1):135?166, 2004.
[22] L. Wang. Sufficient conditions for agnostic active learnable. In NIPS 22, pages 1999?2007.
2009.
[23] W. Wang and Z.-H. Zhou. On multi-view active learning and the combination with semisupervised learning. In ICML, pages 1152?1159, 2008.
9
| 3991 |@word h:28 polynomial:11 c0:4 harder:1 initial:1 existing:1 beygelzimer:1 must:1 hoping:1 atlas:1 ainen:1 greedy:1 fewer:1 coarse:1 provides:1 allerton:1 firstly:1 zhang:1 unbounded:18 c2:11 waived:1 prove:6 consists:1 combine:2 compose:2 theoretically:2 multi:49 inspired:1 zhouzh:1 decreasing:1 muslea:1 zhi:1 considering:2 bounded:3 notation:2 underlying:1 agnostic:6 mass:3 kind:1 developed:1 contrasting:5 supplemental:1 guarantee:1 pseudo:2 every:1 exactly:1 classifier:53 wrong:1 uk:1 unit:1 positive:8 nju:1 t1:6 xv:24 nsfc:1 lugosi:1 acl:1 china:1 studied:2 suggests:1 co:2 acknowledgment:1 practice:5 union:2 x3:2 svi:12 empirical:3 orfi:1 pxv:1 nanjing:2 cannot:1 unlabeled:7 close:1 get:12 risk:3 influence:1 simplicity:2 insight:1 rule:1 annals:1 shamir:1 suppose:6 hypothesis:29 recognition:1 t1i:2 labeled:4 t2i:2 wang:4 hv:14 worst:1 calculate:1 s1i:24 ensures:1 region:2 intuition:2 complexity:40 seung:1 learner:6 s2i:26 train:3 separated:1 query:10 refined:1 hiv:5 ace:1 larger:26 whose:6 supplementary:2 say:1 otherwise:4 statistic:1 obviously:1 h0v:2 combining:1 degenerate:1 achieve:3 hi2:3 getting:1 achievement:3 xj2:2 convergence:2 cluster:10 p:2 help:6 polylog:2 illustrate:1 dxv:1 strong:1 predicted:3 implies:2 vc:2 generalization:2 preliminary:2 proposition:1 tighter:1 adjusted:1 hold:26 considered:3 deciding:1 predict:2 m0:2 a2:3 estimation:1 label:39 always:2 aim:2 lamda:1 kalai:1 zhou:2 corollary:7 minton:1 derived:1 focus:2 improvement:11 indicates:3 realizable:22 selective:2 arg:3 classification:3 colt:5 denoted:4 augment:1 equal:1 never:1 sampling:2 icml:4 t2:6 others:1 ac2:1 randomly:7 national:1 nowak:4 necessary:1 theoretical:4 delete:3 instance:22 introducing:1 subset:2 uniform:2 wortman:1 too:3 tishby:1 characterize:1 sv:53 combined:2 referring:2 confident:1 broder:1 probabilistic:1 satisfied:1 cesa:1 choose:3 actively:1 gy:1 c12:2 coefficient:1 matter:2 satisfy:12 caused:3 depends:5 discloses:1 h1:16 view:80 later:1 analyze:3 bayes:6 aggregation:1 correspond:1 worth:1 hanneke:5 classified:2 monteleoni:2 definition:18 thereof:1 associated:1 mi:10 proof:10 hsu:1 proved:12 popular:3 mitchell:1 organized:1 agreed:4 supervised:1 wei:1 improved:2 generality:1 jiangsusf:1 langford:1 sketch:2 replacing:1 cohn:1 indicated:1 perhaps:1 semisupervised:1 xj1:2 true:1 hence:1 laboratory:1 round:5 multiview:2 evident:1 complete:2 confusion:1 cb327903:1 passive:2 balcan:7 novel:1 recently:2 contention:13 discussed:1 extend:1 cambridge:2 queried:3 cv:2 smoothness:3 rd:1 similarly:3 knoblock:1 add:4 posterior:2 showed:1 hide:2 binary:1 fortunately:1 gentile:1 semi:1 smooth:3 ing:6 match:1 faster:1 qi:19 prediction:1 variant:1 basic:1 represent:2 sometimes:2 accord:1 c1:9 addition:1 remarkably:3 malicious:1 rest:1 file:3 nv:1 noting:1 presence:1 yang:1 split:1 easy:1 iterate:4 xj:2 gave:1 perfectly:2 cn:1 requesting:8 whether:4 motivated:1 bartlett:1 bridging:1 york:1 generally:6 amount:3 tsybakov:38 generate:11 disjoint:1 correctly:1 dasgupta:5 key:1 nevertheless:1 blum:2 drawn:6 hi1:3 fraction:1 extends:2 decide:1 draw:1 decision:1 bound:15 hi:6 oracle:1 x2:17 software:1 ri:1 sake:1 min:1 tv:2 according:8 ball:1 combination:3 request:4 belonging:4 smaller:2 s1:60 happens:1 castro:4 intuitively:1 discus:2 committee:1 know:4 end:2 studying:1 operation:1 apply:2 disagreement:1 denotes:4 neglect:1 tvi:1 added:1 strategy:2 leaner:1 guessing:1 distance:2 separating:1 considers:1 devroye:1 minimizing:2 negative:5 unknown:2 bianchi:1 upper:2 ladner:1 finite:2 minh:3 situation:10 
arbitrary:1 tive:1 introduced:1 pair:2 s2s:6 nip:6 able:2 pattern:1 confidently:1 cavallanti:2 program:1 max:1 misclassification:1 minimax:1 scheme:2 improve:4 older:1 technology:1 realizability:7 asymptotic:1 freund:1 loss:1 interesting:1 querying:3 h2:16 foundation:1 degree:1 sufficient:2 consistent:2 editor:2 cording:1 supported:1 perceptron:1 fall:1 regard:1 boundary:3 dimension:4 xn:1 stand:1 adaptive:1 transaction:1 excess:3 approximate:2 active:75 conclude:1 assumed:2 xi:1 table:17 learn:8 robust:1 expanding:13 obtaining:2 improving:1 requested:1 expansion:8 anthony:1 s2:60 noise:53 whole:1 convey:1 x1:17 exponential:5 theorem:39 learnable:1 dk:1 exists:1 margin:1 infinitely:1 hua:1 springer:1 aa:1 corresponds:4 satisfies:3 towards:1 replace:1 hard:1 specifically:1 infinite:1 uniformly:1 degradation:24 lemma:7 called:3 rarely:4 overload:2 |
3,303 | 3,992 | Nonparametric Bayesian Policy Priors for
Reinforcement Learning
Finale Doshi-Velez, David Wingate, Nicholas Roy and Joshua Tenenbaum
Massachusetts Institute of Technology
Cambridge, MA 02139
{finale,wingated,nickroy,jbt}@csail.mit.edu
Abstract
We consider reinforcement learning in partially observable domains where the
agent can query an expert for demonstrations. Our nonparametric Bayesian approach combines model knowledge, inferred from expert information and independent exploration, with policy knowledge inferred from expert trajectories. We
introduce priors that bias the agent towards models with both simple representations and simple policies, resulting in improved policy and model learning.
1
Introduction
We address the reinforcement learning (RL) problem of finding a good policy in an unknown,
stochastic, and partially observable domain, given both data from independent exploration and expert demonstrations. The first type of data, from independent exploration, is typically used by
model-based RL algorithms [1, 2, 3, 4] to learn the world?s dynamics. These approaches build models to predict observation and reward data given an agent?s actions; the action choices themselves,
since they are made by the agent, convey no statistical information about the world. In contrast,
imitation and inverse reinforcement learning [5, 6] use expert trajectories to learn reward models.
These approaches typically assume that the world?s dynamics is known.
We consider cases where we have data from both independent exploration and expert trajectories. Data from independent observation gives direct information about the dynamics, while expert
demonstrations show outputs of good policies and thus provide indirect information about the underlying model. Similarly, rewards observed during independent exploration provide indirect information about good policies. Because dynamics and policies are linked through a complex, nonlinear
function, leveraging information about both these aspects at once is challenging. However, we show
that using both data improves model-building and control performance.
We use a Bayesian model-based RL approach to take advantage of both forms of data, applying
Bayes rule to write a posterior over models M given data D as p(M |D) ? p(D|M )p(M ). In previous work [7, 8, 9, 10], the model prior p(M ) was defined as a distribution directly on the dynamics
and rewards models, making it difficult to incorporate expert trajectories. Our main contribution is
a new approach to defining this prior: our prior uses the assumption that the expert knew something
about the world model when computing his optimal policy. Different forms of these priors lead us to
three different learning algorithms: (1) if we know the expert?s planning algorithm, we can sample
models from p(M |D), invoke the planner, and weigh models given how likely it is the planner?s
policy generated the expert?s data; (2) if, instead of a planning algorithm, we have a policy prior, we
can similarly weight world models according to how likely it is that probable policies produced the
expert?s data; and (3) we can search directly in the policy space guided by probable models.
We focus on reinforcement learning in discrete action and observation spaces. In this domain, one of
our key technical contributions is the insight that the Bayesian approach used for building models of
transition dynamics can also be used as policy priors, if we exchange the typical role of actions and
1
observations. For example, algorithms for learning partially observable Markov decision processes
(POMDPs) build models that output observations and take in actions as exogenous variables. If
we reverse their roles, the observations become the exogenous variables, and the model-learning
algorithm is exactly equivalent to learning a finite-state controller [11]. By using nonparametric
priors [12], our agent can scale the sophistication of its policies and world models based on the data.
Our framework has several appealing properties. First, our choices for the policy prior and a world
model prior can be viewed as a joint prior which introduces a bias for world models which are
both simple and easy to control. This bias is especially beneficial in the case of direct policy search,
where it is easier to search directly for good controllers than it is to first construct a complete POMDP
model and then plan with it. Our method can also be used with approximately optimal expert data; in
these cases the expert data can be used to bias which models are likely but not set hard constraints on
the model. For example, in Sec. 4 an application where we extract the essence of a good controller
from good?but not optimal?trajectories generated by a randomized planning algorithm.
2
Background
A partially observable Markov decision process (POMDP) model M is an n-tuple
{S,A,O,T ,?,R,?}. S, A, and O are sets of states, actions, and observations. The state transition function T (s? |s, a) defines the distribution over next-states s? to which the agent may transition
after taking action a from state s. The observation function ?(o|s? , a) is a distribution over observations o that may occur in state s? after taking action a. The reward function R(s, a) specifies the
immediate reward for each state-action pair, while ? ? [0, 1) is the discount factor. We focus on
learning discrete state, observation, and action spaces.
Bayesian RL In Bayesian RL, the agent starts with a prior distribution P (M ) over possible
POMDP models. Given data D from an unknown , the agent can compute a posterior over possible worlds P (M |D) ? P (D|M )P (M ). The model prior can encode both vague notions, such as
?favor simpler models,? and strong structural assumptions, such as topological constraints among
states. Bayesian nonparametric approaches are well-suited for partially observable environments
because they can also infer the dimensionality of the underlying state space. For example, the recent infinite POMDP (iPOMDP) [12] model, built from HDP-HMMs [13, 14], places prior over
POMDPs with infinite states but introduces a strong locality bias towards exploring only a few.
The decision-theoretic approach to acting in the Bayesian RL setting is to treat the model M as
additional hidden state in a larger ?model-uncertainty? POMDP and plan in the joint space of models
and states. Here, P (M ) represents a belief over models. Computing a Bayes-optimal policy is
computationally intractable; methods approximate the optimal policy by sampling a single model
and following that model?s optimal policy for a fixed period of time [8]; by sampling multiple
models and choosing actions based on a vote or stochastic forward search [1, 4, 12, 2]; and by trying
to approximate the value function for the full model-uncertainty POMDP analytically [7]. Other
approaches [15, 16, 9] try to balance the off-line computation of a good policy (the computational
complexity) and the cost of getting data online (the sample complexity).
Finite State Controllers Another possibility for choosing actions?including in our partiallyobservable reinforcement learning setting?is to consider a parametric family of policies, and attempt to estimate the optimal policy parameters from data. This is the approach underlying, for
example, much work on policy gradients. In this work, we focus on the popular case of a finite-state
controller, or FSC [11]. An FSC consists of the n-tuple {N ,A,O,?,?}. N , A, and O are sets of
nodes, actions, and observations. The node transition function ?(n? |n, o) defines the distribution
over next-nodes n? to which the agent may transition after taking action a from node n. The policy
function ?(a|n) is a distribution over actions that the finite state controller may output in node n.
Nodes are discrete; we again focus on discrete observation and action spaces.
3
Nonparametric Bayesian Policy Priors
We now describe our framework for combining world models and expert data. Recall that our key
assumption is that the expert used knowledge about the underlying world to derive his policy. Fig. 1
2
Figure 1: Two graphical models of expert data generation. Left: the prior only addresses world
dynamics and rewards. Right: the prior addresses both world dynamics and controllable policies.
shows the two graphical models that summarize our approaches. Let M denote the (unknown) world
model. Combined with the world model M , the expert?s policy ?e and agent?s policy ?a produce the
expert?s and agent?s data De and Da . The data consist of a sequence of histories, where a history ht
is a sequence of actions a1 , ? ? ? , at , observations o1 , ? ? ? , ot , and rewards r1 , ? ? ? , rt . The agent has
access to all histories, but the true world model and optimal policy are hidden.
Both graphical models assume that a particular world M is sampled from a prior over POMDPs,
gM (M ). In what would be the standard application of Bayesian RL with expert data (Fig. 1(a)), the
prior gM (M ) fully encapsulates our initial belief over world models. An expert, who knows the true
world model M , executes a planning algorithm plan(M ) to construct an optimal policy ?e . The
expert then executes the policy to generate expert data De , distributed according to p(De |M, ?e ),
where ?e = plan(M ).
However, the graphical model in Fig. 1(a) does not easily allow us to encode a prior bias toward
more controllable world models. In Fig. 1(b), we introduce a new graphical model in which we
allow additional parameters in the distribution p(?e ). In particular, if we choose a distribution of the
form
p(?e |M ) ? fM (?e )g? (?e )
(1)
where we interpret g? (?e ) as a prior over policies and fM (?e ) as a likelihood of a policy given a
model. We can write the distribution over world models as
Z
p(M ) ?
fM (?e )g? (?e )gM (M )
(2)
?e
If fM (?e ) is a delta function on plan(M ), then the integral in Eq. 2 reduces to
p(M ) ? g? (?eM )gM (M )
(3)
?eM
= plan(M ), and we see that we have a prior that provides input on both the world?s
where
dynamics and the world?s controllability. For example, if the policy class is the set of finite state
controllers as discussed in Sec. 2, the policy prior g? (?e ) might encode preferences for a smaller
number of nodes used the policy, while gM (M ) might encode preferences for a smaller number of
visited states in the world. The function fM (?e ) can also be made more general to encode how
likely it is that the expert uses the policy ?e given world model M .
Finally, we note that p(De |M, ?) factors as p(Dea |?)p(Deo,r |M ), where Dea are the actions in the
histories De and Deo,r are the observations and rewards. Therefore, the conditional distribution over
world models given data De and Da is:
Z
p(M |De , Da ) ? p(Deo,r , Da |M )gM (M )
p(Dea |?e )g? (?e )fM (?e )
(4)
?e
The model in Fig. 1(a) corresponds to setting a uniform prior on g? (?e ). Similarly, the conditional
distribution over policies given data De and Da is
Z
p(?e |De , Da ) ? g? (?e )p(Dea |?e )
fM (?e )p(Deo,r , Da |M )gM (M )
(5)
M
We next describe three inference approaches for using Eqs. 4 and 5 to learn.
3
#1: Uniform Policy Priors (Bayesian RL with Expert Data). If fM (?e ) = ?(plan(M )) and we
believe that all policies are equally likely (graphical model 1(a)), then we can leverage the expert?s
data by simply considering how well that world model?s policy plan(M ) matches the expert?s
actions for a particular world model M . Eq. 4 allows us to compute a posterior over world models
that accounts for the quality of this match. We can then use that posterior as part of a planner by
using it to evaluate candidate actions. The expected value of an action1 q(a) with respect to this
posterior is given by:
Z
E [q(a)] =
q(a|M )p(M |Deo,r , Da )
M
Z
q(a|M )p(Deo,r , Da |M )gM (M )p(Dea |plan(M ))
(6)
=
M
We assume that we can draw samples from p(M |Deo,r , Da ) ? p(Deo,r , Da |M )gM (M ), a common
assumption in Bayesian RL [12, 9]; for our iPOMDP-based case, we can draw these samples using
the beam sampler of [17]. We then weight those samples by p(Dea |?e ), where ?e = plan(M ), to
yield the importance-weighted estimator
X
E [q(a)] ?
q(a|Mi )p(Dea |Mi , ?e ), Mi ? p(M |Deo,r , Da ).
i
Finally, we can also sample values for q(a) by first sampling a world model given the importanceweighted distribution above and recording the q(a) value associated with that model.
#2: Policy Priors with Model-based Inference. The uniform policy prior implied by standard
Bayesian RL does not allow us to encode prior biases about the policy. With a more general prior
(graphical model 1(b) in Fig. 1), the expectation in Eq. 6 becomes
Z
E [q(a)] =
q(a|M )p(Deo,r , Da |M )gM (M )g? (plan(M ))p(Dea |plan(M ))
(7)
M
where we still assume that the expert uses an optimal policy, that is, fM (?e ) = ?(plan(M )). Using
Eq. 7 can result in somewhat brittle and computationally intensive inference, however, as we must
compute ?e for each sampled world model M . It also assumes that the expert used the optimal
policy, whereas a more realistic assumption might be that the expert uses a near-optimal policy.
We now discuss an alternative that relaxes fM (?e ) = ?(plan(M )): let fM (?e ) be a function that
prefers policies that achieve higher rewards in world model M : fM (?e ) ? exp {V (?e |M )}, where
V (?e |M ) is the value of the policy ?e on world M ; indicating a belief that the expert tends to sample
policies that yield high value. Substituting this fM (?e ) into Eq. 4, the expected value of an action is
Z
E [q(a)] =
q(a|M )p(Dea |?e ) exp {V (?e |M )} g? (?e )p(Deo,r , Da|M )gM (M )
M,?e
We again assume that we can draw samples from p(M |Deo,r , Da ) ? p(Deo,r , Da |M )gM (M ), and
additionally assume that we can draw samples from p(?e |Dea ) ? p(Dea |?e )g? (?e ), yielding:
X
X
E [q(a)] ?
q(a|Mi )
exp V (?e j |Mi ) , Mi ? p(M |Deo,r , Da ), ?e j ? p(?e |Dea ) (8)
i
j
As in the case with standard Bayesian RL, we can also use our weighted world models to draw
samples from q(a).
#3: Policy Priors with Joint Model-Policy Inference. While the model-based inference for policy priors is correct, using importance weights often suffers when the proposal distribution is not
near the true posterior. In particular, sampling world models and policies?both very high dimensional objects?from distributions that ignore large parts of the evidence means that large numbers
of samples may be needed to get accurate estimates. We now describe an inference approach that
alternates sampling models and policies that both avoids importance sampling and can be used even
1
We omit the belief over world states b(s) from the equations that follow for clarity; all references to q(a|M )
are q(a|bM (s), M ).
4
in cases where fM (?e ) = ?(plan(M )). Once we have a set of sampled models we can compute the
expectation E[q(a)] simply as the average over the action values q(a|Mi ) for each sampled model.
The inference proceeds in two alternating stages: first, we sample a new policy given a sampled
model. Given a world model, Eq. 5 becomes
p(?e |De , Da , M ) ? g? (?e )p(Dea |?e )fM (?e )
(9)
where making g? (?e ) and p(Dea |?e ) conjugate is generally an easy design choice?for example,
in Sec. 3.1, we use the iPOMDP [12] as a conjugate prior over policies encoded as finite state
controllers. We then approximate fM (?e ) with a function in the same conjugate family: in the case
of the iPOMDP prior and count data Dea , we also approximate fM with a set of Dirichlet counts
scaled by some temperature parameter a. As a is increased, we recover the desired fM (?e ) =
?(plan(M )); the initial approximation speeds up the inference and does not affect its correctness.
Next we sample a new world model given the policy. Given a policy, Eq. 4 reduces to
p(M |De , Da ) ? p(Deo,r , Da |M )gM (M )fM (?e ).
(10)
We apply a Metropolis-Hastings (MH) step to sample new world models, drawing a new model M ?
? (?e )
from p(Deo,r , Da |M )gM (M ) and accepting it with ratio ffM
. If fM (?e ) is highly peaked, then
M (?e )
this ratio is likely to be ill-defined; as when sampling policies, we apply a tempering scheme in the
inference to smooth fM (?e ). For example, if we desired fM (?e ) = ?(plan(M )), then we could use
smoothed version fM ?(?e ) ? exp(a?(V (?e |M )?V (?eM |M ))2 ), where b is a temperature parameter
for the inference. While applying MH can suffer from the same issues as the importance sampling in
the model-based approach, Gibbs sampling new policies removes one set of proposal distributions
from the inference, resulting in better estimates with fewer samples.
3.1
Priors over State Controller Policies
We now turn to the definition of the policy prior p(?e ). In theory, any policy prior can be used, but
there are some practical considerations. Mathematically, the policy prior serves as a regularizer to
avoid overfitting the expert data, so it should encode a preference toward simple policies. It should
also allow computationally tractable sampling from the posterior p(?e |De ) ? p(De |?e )p(?e ).
In discrete domains, one choice for the policy prior (as well as the model prior) is the iPOMDP [12].
To use the iPOMDP as a model prior (its intended use), we treat actions as inputs and observations
as outputs. The iPOMDP posits that there are an infinite number of states s but a few popular states
are visited most of the time; the beam sampler [17] can efficiently draw samples of state transition,
observation, and reward models for visited states. Joint inference over the model parameters T, ?, R
and the state sequence s allows us to infer the number of visited states from the data.
To use the iPOMDP as a policy prior, we simply reverse the roles of actions and observations,
treating the observations as inputs and the actions as outputs. Now, the iPOMDP posits that there is
a state controller with an infinite number of nodes n, but probable polices use only a small subset
of the nodes a majority of the time. We perform joint inference over the node transition and policy
parameters ? and ? as well as the visited nodes n. The ?policy state? representation learned is not
the world state, rather it is a summary of previous observations which is sufficient to predict actions.
Assuming that the training action sequences are drawn from the optimal policy, the learner will
learn just enough ?policy state? to control the system optimally. As in the model prior application,
using the iPOMDP as a policy prior biases the agent towards simpler policies?those that visit fewer
nodes?but allows the number of nodes to grow as with new expert experience.
3.2
Consistency and Correctness
In all three inference approaches, the sampled models and policies are an unbiased representation
of the true posterior and are consistent in that in the limit of infinite samples, we will recover the
true model and policy posteriors conditioned on their respective data Da , Deo,r and Dea . There are
some mild conditions on the world and policy priors to ensure consistency: since the policy prior
and model prior are specified independently, we require that there exist models for which both the
policy prior and model prior are non-zero in the limit of data. Formally, we also require that the
expert provide optimal trajectories; in practice, we see that this assumption can be relaxed.
5
Rewards for Snakes
120
iPOMDP
Approach 1
Approach 2
Approach 3
4000
iPOMDP
Inference #1
Inference #2
Inference #3
3000
100
Cumulative Reward
Cumulative Reward
Rewards for Multicolored Gridworld
2000
1000
0
80
60
40
20
?1000
0
1000
2000
3000
Iterations of Experience
0
0
1000
2000
3000
4000
5000
6000
7000
8000
Iterations of Experience
Figure 2: Learning curves for the multicolored gridworld (left) and snake (right). Error bars are 95%
confidence intervals of the mean. On the far right is the snake robot.
3.3
Planning with Distributions over Policies and Models
All the approaches in Sec. 3 output samples of models or policies to be used for planning. As
noted in Section 2, computing the Bayes optimal action is typically intractable. Following similar
work [4, 1, 2, 12], we interpret these samples as beliefs. In the model-based approaches, we first
solve each model (all of which are generally small) using standard POMDP planners. During the
testing phase, the internal belief state of the models (in the model-based approaches) or the internal
node state of the policies (in the policy-based approaches), is updated after each action-observation
pair. Models are also reweighted using standard importance weights so that they continue to be an
unbiased approximation of the true belief. Actions are chosen by first selecting, depending on the
approach, a model or policy based on their weights, and then performing its most preferred action.
While this approach is clearly approximate (it considers state uncertainty but not model uncertainty),
we found empirically that this simple, fast approach to action selection produced nearly identical
results to the much slower (but asymptotically Bayes optimal) stochastic forward search in [12].2
4
Experiments
We first describe a pair of demonstrations that show two important properties of using policy priors:
(1) that policy priors can be useful even in the absence of expert data and (2) that our approach
works even when the expert trajectories are not optimal. We then compare policy priors with the
basic iPOMDP [12] and finite-state model learner trained with EM on several standard problems. In
all cases, the tasks were episodic. Since episodes could be of variable length?specifically, experts
generally completed the task in fewer iterations?we allowed each approach N = 2500 iterations, or
interactions with the world, during each learning trial. The agent was provided with an expert tran
jectory with probability .5 N
, where n was the current amount of experience. No expert trajectories
were provided in the last quarter of the iterations. We ran each approach for 10 learning trials.
Models and policies were updated every 100 iterations, and each episode was capped at 50 iterations
(though it could be shorter, if the task was achieved in fewer iterations). Following each update, we
ran 50 test episodes (not included in the agent's experience) with the new models and policies to empirically evaluate the current value of the agent's policy. For all of the nonparametric approaches,
50 samples were collected, 10 iterations apart, after a burn-in of 500 iterations. Sampled models
were solved using 25 backups of PBVI [18] with 500 sampled beliefs. One iteration of bounded
policy iteration [19] was performed per sampled model. The finite-state learner was trained using
min(25, |S|) states, where |S| was the true number of underlying states. Both the nonparametric and
finite learners were trained from scratch during each update; we found empirically that starting from
random points made the learner more robust than starting it at potentially poor local optima.
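For readability, the experimental protocol just described can be condensed into a sketch. The agent, expert, and env objects are hypothetical interfaces, and expert_prob stands in for the decaying demonstration probability quoted above; the constants mirror the numbers in the text:

```python
import numpy as np

N_ITERS, UPDATE_EVERY, EPISODE_CAP, N_TEST = 2500, 100, 50, 50

def learning_trial(agent, expert, env, expert_prob, rng=np.random.default_rng()):
    for n in range(N_ITERS):
        # Demonstrations arrive with a probability that shrinks with
        # experience, and none at all in the last quarter of learning.
        if n < 0.75 * N_ITERS and rng.random() < expert_prob(n, N_ITERS):
            agent.add_expert_trajectory(expert.demonstrate(env, EPISODE_CAP))
        agent.interact(env, episode_cap=EPISODE_CAP)  # one unit of experience
        if (n + 1) % UPDATE_EVERY == 0:
            agent.update_models_and_policies()        # redo the MCMC inference
            # 50 held-out test episodes, excluded from the agent's data
            rewards = [agent.run_test_episode(env, EPISODE_CAP)
                       for _ in range(N_TEST)]
            yield np.mean(rewards)
```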
Policy Priors with No Expert Data The combined policy and model prior can be used to encode
a prior bias towards models with simpler control policies. This interpretation of policy priors can
be useful even without expert data: the left pane of Fig. 2 shows the performance of the policy
prior-biased approaches and the standard iPOMDP on a gridworld problem in which observations
correspond to both the adjacent walls (relevant for planning) and the color of the square (not relevant
for planning). This domain has 26 states, 4 colors, standard NSEW actions, and an 80% chance of
a successful action. The optimal policy for this gridworld was simple: go east until the agent hits
a wall, then go south. However, the varied observations made the iPOMDP infer many underlying
states, none of which it could train well, and these models also confused the policy-inference in
Approach 3. Without expert data, Approach 1 cannot do better than iPOMDP. By biasing the agent
towards worlds that admit simpler policies, the model-based inference with policy priors (Approach
2) creates a faster learner.
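To make the simplicity bias tangible, the optimal gridworld behavior above is exactly a two-node finite-state controller; a sketch, with illustrative observation and action names:

```python
# Two-node deterministic finite-state controller for the multicolored
# gridworld: move east until a wall is observed to the east, then move
# south forever. The controller never consults the square's color, which
# is why a prior favoring small controllers also favors world models
# that treat color as irrelevant.
FSC = {
    "go_east":  ("E", lambda obs: "go_south" if "wall_east" in obs else "go_east"),
    "go_south": ("S", lambda obs: "go_south"),
}

def act(node, obs):
    action, transition = FSC[node]
    return action, transition(obs)
```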
Policy Priors with Imperfect Experts While we focused on optimal expert data, in practice policy priors can be applied even if the expert is imperfect. Fig. 2(b) shows learning curves for a simulated snake manipulation problem with a 40-dimensional continuous state space, corresponding
to (x,y) positions and velocities of 10 body segments. Actions are 9-dimensional continuous vectors, corresponding to desired joint angles between segments. The snake is rewarded based on the
distance it travels along a twisty linear 'maze', encouraging it to wiggle forward and turn corners.
We generated expert data by first deriving 16 motor primitives for the action space using a clustering technique on a near-optimal trajectory produced by a rapidly-exploring random tree (RRT). A
reasonable, but not optimal, controller was then designed using alternative policy-learning techniques on the action space of motor primitives. Trajectories from this controller were treated as
expert data for our policy prior model. Although the trajectories and primitives are suboptimal,
Fig. 2(b) shows that knowledge of feasible solutions boosts performance when using the policy-based technique.
Tests on Standard Problems We also tested the approaches on ten problems: tiger [20] (2 states),
network [20] (7 states), shuttle [21] (8 states), an adapted version of gridworld [20] (26 states),
an adapted version of follow [2] (26 states), hallway [20] (57 states), beach (100 states), rocksample(4,4) [22] (257 states), tag [18] (870 states), and image-search (16321 states). In the beach
problem, the agent needed to track a beach ball on a 2D grid. The image-search problem involved
identifying a unique pixel in an 8x8 grid with three types of filters with varying costs and scales.
We compared our inference approaches with two approaches that did not leverage the expert data:
expectation-maximization (EM) used to learn a finite world model of the correct size and the infinite
POMDP [12], which placed the same nonparametric prior over world models as we did.
[Figure 3: ten panels of learning curves ('Rewards for tiger', 'network', 'shuttle', 'follow', 'gridworld', 'hallway', 'beach', 'rocksample', 'tag', 'image'); x-axis: Iterations of Experience, y-axis: Cumulative Reward; legend: iPOMDP, Inference #1, Inference #2, Inference #3, EM.]
Figure 3: Performance on several standard problems, with 95% confidence intervals of the mean.
Fig. 3 shows the learning curves for our policy priors approaches (problems ordered by state space
size); the cumulative rewards and final values are shown in Table 1. As expected, approaches that
leverage expert trajectories generally perform better than those that ignore the near-optimality of the
expert data. The policy-based approach is successful even among the larger problems. Here, even
though the inferred state spaces could grow large, policies remained relatively simple. The optimization used in the policy-based approach (recall we use the stochastic search to find a probable policy) was also key to producing reasonable policies with limited computation.
Cumulative Reward
Problem      iPOMDP    App. 1    App. 2    App. 3    EM
tiger        -2.2e3    -1.4e3    -5.3e2    -2.2e2    -3.0e3
network      -1.5e4    -6.3e3    -2.1e3     1.9e4    -2.6e3
shuttle      -5.3e1     7.9e1     1.5e2     5.1e1     0.0
follow       -6.3e3    -2.3e3    -1.9e3    -1.6e3    -5.0e3
gridworld    -2.0e3    -6.2e2    -7.0e2     4.6e2    -3.7e3
hallway       2.0e-1    1.4       1.6       6.6       0.0
beach         1.9e2     1.4e2     1.8e2     1.9e2     3.5e2
rocksample   -3.2e3    -1.7e3    -1.8e3    -1.0e3    -3.5e3
tag          -1.6e4    -6.9e3    -7.4e3    -3.5e3    -
image        -7.8e3    -5.3e3    -6.1e3    -3.9e3    -

Final Reward
Problem      iPOMDP    App. 1    App. 2    App. 3    EM
tiger        -2.0e1    -2.0e1    -1.0e1    -2.3       1.6
network      -1.1e1    -4.7      -1.2e1    -4.0e-1    1.1e1
shuttle       1.7e-1    0.0       3.3e-1    6.5e-1    8.6e-1
follow       -5.9      -5.0      -3.1      -1.4      -1.1
gridworld    -1.3      -2.1       5.3e-1    1.8       2.3
hallway       8.6e-4    0.0       7.4e-3    1.4e-2    1.9e-2
beach         2.0e-1    3.4e-1    1.1e-1    1.4e-1    2.7e-1
rocksample   -1.6      -2.0      -5.3e-1   -1.3       1.2
tag          -9.4      -9.1      -2.8      -4.1      -1.7
image        -5.0      -5.0      -3.6      -4.2       1.3e1

Table 1: Cumulative and final rewards on several problems. Bold values highlight best performers.
5 Discussion and Related Work
Several Bayesian approaches have been developed for RL in partially observable domains. These
include [7], which uses a set of Gaussian approximations to allow for analytic value function updates
in the POMDP space; [2], which jointly reasons over the space of Dirichlet parameters and states
when planning in discrete POMDPs; and [12], which samples models from a nonparametric prior.
Both [1, 4] describe how expert data can augment learning. The first [1] lets the agent query a state
oracle during the learning process. The computational benefit of a state oracle is that the information can be used to directly update a prior over models. However, in large or complex domains, the
agent's state might be difficult to define. In contrast, [4] lets the agent query an expert for optimal actions. While such policy information may be much easier to specify, incorporating the result of a single query into the prior over models is challenging, and the particle-filtering approach of [4] can be brittle
as model-spaces grow large. Our policy priors approach uses entire trajectories; by learning policies
rather than single actions, we can generalize better and evaluate models more holistically. By working with models and policies, rather than just models as in [4], we can also consider larger problems
which still have simple policies. Targeted criteria for asking for expert trajectories, especially criteria with performance guarantees such as those in [4], would be an interesting extension to our approach.
6 Conclusion
We addressed a key gap in the learning-by-demonstration literature: learning from both expert and
agent data in a partially observable setting. Prior work used expert data in MDP and imitation-learning cases, but less work exists for the general POMDP case. Our Bayesian approach combined
priors over the world models and policies, connecting information about world dynamics and expert
trajectories. Taken together, these priors are a new way to think about specifying priors over models:
instead of simply putting a prior over the dynamics, our prior provides a bias towards models with
simple dynamics and simple optimal policies. We show that with our approach expert data never reduces
performance, and our extra bias towards controllability improves performance even without expert
data. Our policy priors over nonparametric finite state controllers were relatively simple; designing richer classes of priors to address more problems is an interesting direction for future work.
References
[1] R. Jaulmes, J. Pineau, and D. Precup. Learning in non-stationary partially observable Markov
decision processes. ECML Workshop, 2005.
[2] Stephane Ross, Brahim Chaib-draa, and Joelle Pineau. Bayes-adaptive POMDPs. In Neural
Information Processing Systems (NIPS), 2008.
[3] Stephane Ross, Brahim Chaib-draa, and Joelle Pineau. Bayesian reinforcement learning in
continuous POMDPs with application to robot navigation. In ICRA, 2008.
[4] Finale Doshi, Joelle Pineau, and Nicholas Roy. Reinforcement learning with limited reinforcement: Using Bayes risk for active learning in POMDPs. In International Conference on
Machine Learning, volume 25, 2008.
[5] Pieter Abbeel, Morgan Quigley, and Andrew Y. Ng. Using inaccurate models in reinforcement
learning. In International Conference on Machine Learning (ICML), Pittsburgh, pages 1–8.
ACM Press, 2006.
[6] Nathan Ratliff, Brian Ziebart, Kevin Peterson, J. Andrew Bagnell, Martial Hebert, Anind K.
Dey, and Siddhartha Srinivasa. Inverse optimal heuristic control for imitation learning. In
Proc. AISTATS, pages 424–431, 2009.
[7] P. Poupart and N. Vlassis. Model-based Bayesian reinforcement learning in partially observable domains. In ISAIM, 2008.
[8] M. Strens. A Bayesian framework for reinforcement learning. In ICML, 2000.
[9] John Asmuth, Lihong Li, Michael Littman, Ali Nouri, and David Wingate. A Bayesian sampling approach to exploration in reinforcement learning. In Uncertainty in Artificial Intelligence (UAI), 2009.
[10] R. Dearden, N. Friedman, and D. Andre. Model based Bayesian exploration. In Uncertainty in Artificial Intelligence (UAI), pages 150–159, 1999.
[11] E. J. Sondik. The Optimal Control of Partially Observable Markov Processes. PhD thesis,
Stanford University, 1971.
[12] Finale Doshi-Velez. The infinite partially observable Markov decision process. In Y. Bengio,
D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural
Information Processing Systems 22, pages 477–485, 2009.
[13] Matthew J. Beal, Zoubin Ghahramani, and Carl E. Rasmussen. The infinite hidden Markov
model. In Machine Learning, pages 29–245. MIT Press, 2002.
[14] Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet
processes. Journal of the American Statistical Association, 101:1566–1581, 2006.
[15] Tao Wang, Daniel Lizotte, Michael Bowling, and Dale Schuurmans. Bayesian sparse sampling
for on-line reward optimization. In International Conference on Machine Learning (ICML),
2005.
[16] J. Zico Kolter and Andrew Ng. Near-Bayesian exploration in polynomial time. In International
Conference on Machine Learning (ICML), 2009.
[17] J. van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden
Markov model. In ICML, volume 25, 2008.
[18] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for
POMDPs. IJCAI, 2003.
[19] Pascal Poupart and Craig Boutilier. Bounded finite state controllers. In Neural Information
Processing Systems, 2003.
[20] M. L. Littman, A. R. Cassandra, and L. P. Kaelbling. Learning policies for partially observable
environments: scaling up. ICML, 1995.
[21] Lonnie Chrisman. Reinforcement learning with perceptual aliasing: The perceptual distinctions approach. In Proceedings of the Tenth National Conference on Artificial Intelligence, pages 183–188. AAAI Press, 1992.
[22] T. Smith and R. Simmons. Heuristic search value iteration for POMDPs. In Proc. of UAI 2004,
Banff, Alberta, 2004.
3,304 | 3,993 | Block Variable Selection in Multivariate Regression
and High-dimensional Causal Inference
Aur?elie C. Lozano, Vikas Sindhwani
IBM T.J. Watson Research Center,
1101 Kitchawan Road,
Yorktown Heights NY 10598,USA
{aclozano,vsindhw}@us.ibm.com
Abstract
We consider multivariate regression problems involving high-dimensional predictor and response spaces. To efficiently address such problems, we propose a variable selection method, Multivariate Group Orthogonal Matching Pursuit, which
extends the standard Orthogonal Matching Pursuit technique. This extension accounts for arbitrary sparsity patterns induced by domain-specific groupings over
both input and output variables, while also taking advantage of the correlation that
may exist between the multiple outputs. Within this framework, we then formulate
the problem of inferring causal relationships over a collection of high-dimensional
time series variables. When applied to time-evolving social media content, our
models yield a new family of causality-based influence measures that may be seen
as an alternative to the classic PageRank algorithm traditionally applied to hyperlink graphs. Theoretical guarantees, extensive simulations and empirical studies
confirm the generality and value of our framework.
1 Introduction
The broad goal of supervised learning is to effectively learn unknown functional dependencies between a set of input variables and a set of output variables, given a finite collection of training
examples. This paper is at the intersection of two key topics that arise in this context.
The first topic is Multivariate Regression [4, 2, 24] which generalizes basic single-output regression
to settings involving multiple output variables with potentially significant correlations between them.
Applications of multivariate regression models include chemometrics, econometrics and computational biology. Multivariate Regression may be viewed as the classical precursor to many modern
techniques in machine learning such as multi-task learning [15, 16, 1] and structured output prediction [18, 10, 22]. These techniques are output-centric in the sense that they attempt to exploit
dependencies between output variables to learn joint models that generalize better than those that
treat outputs independently.
The second topic is that of sparsity [3], variable selection and the broader notion of regularization [20]. The view here is input-centric in the following specific sense. In very high dimensional
problems where the number of input variables may exceed the number of examples, the only hope
for avoiding overfitting is via some form of aggressive capacity control over the family of dependencies being explored by the learning algorithm. This capacity control may be implemented in various
ways, e.g., via dimensionality reduction, input variable selection or regularized risk minimization.
Estimation of sparse models that are supported on a small set of input variables is a highly active
and very successful strand of research in machine learning. It encompasses l1 regularization (e.g.,
Lasso [19]) and matching pursuit techniques [13] which come with theoretical guarantees on the
recovery of the exact support under certain conditions. Particularly pertinent to this paper is the
notion of group sparsity. In many problems involving very high-dimensional datasets, it is natural
to enforce the prior knowledge that the support of the model should be a union over domain-specific
groups of features. For instance, Group Lasso [23] extends Lasso, and Group-OMP [12, 9] extends
matching pursuit techniques to this setting.
In view of these two topics, we consider here very high dimensional problems involving a large
number of output variables. We address the problem of enforcing sparsity via variable selection
in multivariate linear models where regularization becomes crucial since the number of parameters
grows not only with the data dimensionality but also the number of outputs. Our approach is guided
by the following desiderata: (a) performing variable selection for each output in isolation may be
highly suboptimal since the input variables which are relevant to (a subset of) the outputs may only
exhibit weak correlation with each individual output. It is also desirable to leverage information
on the relatedness between outputs, so as to guide the decision on the relevance of a certain input
variable to a certain output, using additional evidence based on the relevance to related outputs. (b)
It is desirable to take into account any grouping structure that may exist between input and output
variables. In the presence of noisy data, inclusion decisions made at the group level may be more
robust than those at the level of individual variables.
To efficiently satisfy the above desiderata, we propose Multivariate Group Orthogonal Matching
Pursuit (MGOMP) for enforcing arbitrary block sparsity patterns in multivariate regression coefficients. These patterns are specified by groups defined over both input and output variables. In
particular, MGOMP can handle cases where the set of relevant features may differ from one response (group) to another, and is thus more general than simultaneous variable selection procedures
(e.g. S-OMP of [21]), as simultaneity of the selection in MGOMP is enforced within groups of
related output variables rather than the entire set of outputs. MGOMP also generalizes the Group-OMP algorithm of [12] to the multivariate regression case. We provide theoretical guarantees on the
quality of the model in terms of correctness of group variable selection and regression coefficient
estimation. We present empirical results on simulated datasets that illustrate the strength of our
technique.
We then focus on applying MGOMP to high-dimensional multivariate time series analysis problems. Specifically, we propose a novel application of multivariate regression methods with variable
selection, namely that of inferring key influencers in online social communities, a problem of increasing importance with the rise of planetary scale web 2.0 platforms such as Facebook, Twitter,
and innumerable discussion forums and blog sites. We rigorously map this problem to that of inferring causal influence relationships. Using special cases of MGOMP, we extend the classical notion
of Granger Causality [7] which provides an operational notion of causality in time series analysis,
to apply to a collection of multivariate time series variables representing the evolving textual content of a community of bloggers. The sparsity structure of the resulting model induces a weighted
causal graph that encodes influence relationships. While we use blog communities to concretize
the application of our models, our ideas hold more generally to a wider class of spatio temporal
causal modeling problems. In particular, our formulation gives rise to a new class of influence measures that we call GrangerRanks, that may be seen as causality-based alternatives to hyperlink-based
ranking techniques like the PageRank [17], popularized by Google in the early days of the internet.
Empirical results on a diverse collection of real-world key influencer problems clearly show the
value of our models.
2 Variable Group Selection in Multivariate Regression
Let us begin by recalling the multivariate regression model, $Y = X\bar{A} + E$, where $Y \in \mathbb{R}^{n \times K}$ is the output matrix formed by n training examples on K output variables, $X \in \mathbb{R}^{n \times p}$ is the data matrix whose rows are p-dimensional feature vectors for the n training examples, $\bar{A}$ is the $p \times K$ matrix formed by the true regression coefficients one wishes to estimate, and E is the $n \times K$ error matrix. The row vectors of E are assumed to be independently sampled from $N(0, \Sigma)$, where $\Sigma$ is the $K \times K$ error covariance matrix. For simplicity of notation we assume without loss of generality that the columns of X and Y have been centered so we need not deal with intercept terms.
The negative log-likelihood function (up to a constant) corresponding to the aforementioned model can be expressed as
$$-l(A, \Sigma) = \mathrm{tr}\left[ (Y - XA)^T (Y - XA)\,\Sigma^{-1} \right] - n \log |\Sigma^{-1}|, \qquad (1)$$
where A is any estimate of $\bar{A}$, and $|\cdot|$ denotes the determinant of a matrix. The maximum likelihood estimator is the Ordinary Least Squares (OLS) estimator $\hat{A}_{OLS} = (X^T X)^{-1} X^T Y$, namely, the concatenation of the OLS estimates for each of the K outputs taken separately, irrespective of $\Sigma$.
This suggests its suboptimality as the relatedness of the responses is disregarded. Also the OLS
estimator is known to perform poorly in the case of high dimensional predictors and/or when the
predictors are highly correlated. To alleviate these issues, several methods have been proposed
that are based on dimension reduction. Among those, variable selection methods are most popular
as they lead to parsimonious and interpretable models, which is desirable in many applications.
Clearly, however, variable selection in multiple output regression is particularly challenging in the
presence of high dimensional feature vectors as well as possibly a large number of responses.
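As a point of reference, here is a short NumPy sketch of the OLS estimator; note that column k of the solution depends only on column k of Y, so the K regressions decouple and Σ plays no role:

```python
import numpy as np

def ols_multivariate(X, Y):
    # A_OLS = (X^T X)^{-1} X^T Y, assuming X^T X is invertible (n >= p,
    # full column rank); equivalent to K independent univariate fits.
    return np.linalg.solve(X.T @ X, X.T @ Y)
```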
In many applications, including the high-dimensional time series analysis and causal modeling settings showcased later in this paper, it is possible to provide domain-specific guidance for variable selection by imposing a sparsity structure on A. Let $I = \{I_1, \dots, I_L\}$ denote the set formed by L (possibly overlapping) groups of input variables, where $I_k \subseteq \{1, \dots, p\}$, $k = 1, \dots, L$. Let $O = \{O_1, \dots, O_M\}$ denote the set formed by M (possibly overlapping) groups of output variables, where $O_k \subseteq \{1, \dots, K\}$, $k = 1, \dots, M$. Note that if certain variables do not belong to any group, they may be considered to be groups of size 1. These group definitions specify a block sparsity/support pattern on A. Without loss of generality, we assume that column indices are permuted so that groups go over contiguous indices. We now outline a novel algorithm, Multivariate Group Orthogonal Matching Pursuit (MGOMP), that seeks to minimize the negative log-likelihood associated with the multivariate regression model subject to the constraint that the support (set of non-zeros) of the regression coefficient matrix, A, is a union of blocks formed by input and output variable groupings (see Footnote 1).
2.1 Multivariate Group Orthogonal Matching Pursuit
The MGOMP procedure performs greedy pursuit with respect to the loss function
$$L_C(A) = \mathrm{tr}\left[ (Y - XA)^T (Y - XA)\, C \right], \qquad (2)$$
where C is an estimate of the precision matrix $\Sigma^{-1}$, given as input. Possible estimates include the sample estimate using the residual error obtained from running univariate Group-OMP for each response individually. In addition to leveraging the grouping information via block sparsity constraints, MGOMP is able to incorporate additional information on the relatedness among output variables as implicitly encoded in the error covariance matrix $\Sigma$, noting that the latter is also the covariance matrix of the response Y conditioned on the predictor matrix X. Existing variable selection methods often ignore this information and deal instead with (regularized versions of) the simplified objective $\mathrm{tr}\left[ (Y - XA)^T (Y - XA) \right]$, thereby implicitly assuming that $\Sigma = I$.
Before outlining the details of MGOMP, we first need to introduce some notation. For any set of
output variables $O \subseteq \{1, \dots, K\}$, denote by $C_O$ the restriction of the $K \times K$ precision matrix C to columns corresponding to the output variables in O, and by $C_{O,O}$ the similar restriction to both columns and rows. For any set of input variables $I \subseteq \{1, \dots, p\}$, denote by $X_I$ the restriction of X to columns corresponding to the input variables in I. Furthermore, to simplify the exposition, we assume in the remainder of the paper that for each group of input variables $I_s \in I$, $X_{I_s}$ is orthonormalized, i.e., $X_{I_s}^T X_{I_s} = I$. Denote by $A^{(m)}$ the estimate of the regression coefficient matrix at iteration m, and by $R^{(m)}$ the corresponding matrix of residuals, i.e., $R^{(m)} = Y - XA^{(m)}$. The MGOMP procedure iterates between two steps: (a) block variable selection and (b) coefficient matrix re-estimation with the selected block. We now outline the details of these two steps.
Block Variable Selection: In this step, each block $(I_r, O_s)$ is evaluated with respect to how much its introduction into $A^{(m-1)}$ can reduce the residual loss. Namely, at round m, the procedure selects the block $(I_r, O_s)$ that minimizes
$$\arg\min_{1 \le r \le L,\; 1 \le s \le M} \;\; \min_{A:\, A_{v,w} = 0,\; v \notin I_r,\; w \notin O_s} \left( L_C(A^{(m-1)} + A) - L_C(A^{(m-1)}) \right).$$
[Footnote 1: We note that we could easily generalize this setting and MGOMP to deal with the more general case where there may be a different grouping structure for each output group, namely for each $O_k$, we could consider a different set $I_{O_k}$ of input variable groups.]
Note that when the minimum attained falls below a threshold $\epsilon$, the algorithm is stopped. Using standard linear algebra, the block variable selection criterion simplifies to
$$(r^{(m)}, s^{(m)}) = \arg\max_{r,s} \; \mathrm{tr}\left[ (X_{I_r}^T R^{(m-1)} C_{O_s})^T (X_{I_r}^T R^{(m-1)} C_{O_s}) \, (C_{O_s,O_s})^{-1} \right]. \qquad (3)$$
From the above equation, it is clear that the relatedness between output variables is taken into account in the block selection process.
Coefficient Re-estimation: Let $M^{(m-1)}$ be the set of blocks selected up to iteration $m-1$. The set is now updated to include the selected block of variables $(I_{r^{(m)}}, O_{s^{(m)}})$, i.e., $M^{(m)} = M^{(m-1)} \cup \{(I_{r^{(m)}}, O_{s^{(m)}})\}$. The regression coefficient matrix is then re-estimated as $A^{(m)} = \hat{A}_X(M^{(m)}, Y)$, where
$$\hat{A}_X(M^{(m)}, Y) = \arg\min_{A \in \mathbb{R}^{p \times K}} L_C(A) \quad \text{subject to} \quad \mathrm{supp}(A) \subseteq M^{(m)}. \qquad (4)$$
Since certain features are only relevant to a subset of responses, here the precision matrix estimate C comes into play, and the problem cannot be decoupled. However, a closed-form solution for (4) can be derived by recalling the following matrix identities [8],
$$\mathrm{tr}(M_1^T M_2 M_3 M_4^T) = \mathrm{vec}(M_1)^T (M_4 \otimes M_2)\, \mathrm{vec}(M_3), \qquad (5)$$
$$\mathrm{vec}(M_1 M_2) = (I \otimes M_1)\, \mathrm{vec}(M_2), \qquad (6)$$
where vec denotes matrix vectorization, $\otimes$ the Kronecker product, and I the identity matrix. From (5), we have
$$\mathrm{tr}\left[ (Y - XA)^T (Y - XA)\, C \right] = (\mathrm{vec}(Y - XA))^T (C \otimes I_n)(\mathrm{vec}(Y - XA)). \qquad (7)$$
For a set of selected blocks, say M, denote by O(M) the union of the output groups in M. Let $\tilde{C} = C_{O(M),O(M)} \otimes I_n$ and $\tilde{Y} = \mathrm{vec}(Y_{O(M)})$. For each output group $O_s$ in M, let $I(O_s) = \cup_{(I_r,O_s) \in M}\, I_r$. Finally define $\tilde{X} = \mathrm{diag}\left( I_{|O_s|} \otimes X_{I(O_s)},\; O_s \in O(M) \right)$. Using (7) and (6) one can show that the non-zero entries of $\mathrm{vec}(\hat{A}_X(M, Y))$, namely those corresponding to the support induced by M, are given by $\hat{\beta} = (\tilde{X}^T \tilde{C} \tilde{X})^{-1} \tilde{X}^T \tilde{C} \tilde{Y}$, thus providing a closed-form formula for the coefficient re-estimation step.
To conclude this section, we note that we could also consider performing alternating optimization of the objective in (1) over A and $\Sigma$, using MGOMP to optimize over A for a fixed estimate of $\Sigma$, and using a covariance estimation algorithm (e.g., the Graphical Lasso [5]) to estimate $\Sigma$ with A fixed.
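A compact NumPy sketch of the two MGOMP steps is given below, assuming non-overlapping groups and orthonormalized blocks $X_{I_r}$ as in the text. It implements the selection criterion (3) directly and solves the re-estimation problem (4) through its normal equations rather than by forming the Kronecker matrices explicitly; this is a readable reference under these assumptions, not the authors' implementation:

```python
import numpy as np

def mgomp(X, Y, C, in_groups, out_groups, eps, max_iter=100):
    """X: n x p, Y: n x K, C: K x K precision estimate.
    in_groups / out_groups: lists of integer index arrays."""
    p, K = X.shape[1], Y.shape[1]
    A = np.zeros((p, K))
    selected = []
    for _ in range(max_iter):
        R = Y - X @ A                                     # residual matrix
        best, best_gain = None, -np.inf
        for r, I in enumerate(in_groups):
            for s, O in enumerate(out_groups):
                G = X[:, I].T @ R @ C[:, O]               # X_I^T R C_O
                gain = np.trace(G.T @ G @ np.linalg.inv(C[np.ix_(O, O)]))
                if gain > best_gain:
                    best, best_gain = (r, s), gain
        if best_gain < eps or best in selected:           # stopping rule
            break
        selected.append(best)
        A = refit(X, Y, C, in_groups, out_groups, selected)
    return A, selected

def refit(X, Y, C, in_groups, out_groups, selected):
    # Solve (4): the stationarity condition of tr[(Y-XA)^T (Y-XA) C]
    # restricted to the support is (X^T X A C)_{jk} = (X^T Y C)_{jk}
    # for every supported coordinate (j, k).
    p, K = X.shape[1], Y.shape[1]
    coords = sorted({(j, k) for (r, s) in selected
                     for j in in_groups[r] for k in out_groups[s]})
    G, B = X.T @ X, X.T @ Y @ C
    H = np.array([[C[k, k2] * G[j, j2] for (j2, k2) in coords]
                  for (j, k) in coords])
    sol = np.linalg.solve(H, np.array([B[j, k] for (j, k) in coords]))
    A = np.zeros((p, K))
    for (j, k), v in zip(coords, sol):
        A[j, k] = v
    return A
```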
2.2 Theoretical Performance Guarantees for MGOMP
In this section we show that under certain conditions MGOMP can identify the correct blocks of
variables and provide an upper bound on the maximum absolute difference between the estimated and true regression coefficients. We assume that the estimate of the error precision matrix, C, is in agreement with the specification of the output groups, namely that $C_{i,j} = 0$ if i and j belong to
different output groups.
For each output variable group $O_k$, denote by $G_{good}(k)$ the set formed by the input groups included in the true model for the regressions in $O_k$, and let $G_{bad}(k)$ be the set formed by all the input groups that are not included. Similarly, denote by $M_{good}$ the set formed by the pairs of input and output variable groups included in the true model, and by $M_{bad}$ the set formed by all the pairs that are not included.
Before we can state the theorem, we need to define the parameters that are key in the conditions for consistency. Let
$$\rho_X(M_{good}) = \min_{k \in \{1,\dots,M\}} \inf_{\beta} \left\{ \|X\beta\|_2^2 / \|\beta\|_2^2 : \mathrm{supp}(\beta) \subseteq G_{good}(k) \right\},$$
namely $\rho_X(M_{good})$ is the minimum over the output groups $O_k$ of the smallest eigenvalue of $X_{G_{good}(k)}^T X_{G_{good}(k)}$.
For each output group $O_k$, define generally for any $u = \{u_1, \dots, u_{|G_{good}(k)|}\}$ and $v = \{v_1, \dots, v_{|G_{bad}(k)|}\}$,
$$\|u\|_{(2,1)}^{good(k)} = \sum_{G_i \in G_{good}(k)} \sqrt{\textstyle\sum_{j \in G_i} u_j^2}, \qquad \|v\|_{(2,1)}^{bad(k)} = \sum_{G_i \in G_{bad}(k)} \sqrt{\textstyle\sum_{j \in G_i} v_j^2}.$$
For any matrix $M \in \mathbb{R}^{|G_{good}(k)| \times |G_{bad}(k)|}$, let $\|M\|_{(2,1)}^{good/bad(k)} = \sup_{\|v\|_{(2,1)}^{bad(k)} = 1} \|Mv\|_{(2,1)}^{good(k)}$.
Then we define $\mu_X(M_{good}) = \max_{k \in \{1,\dots,M\}} \|X_{G_{good}(k)}^{+} X_{G_{bad}(k)}\|_{(2,1)}^{good/bad(k)}$, where $X^{+}$ denotes the Moore-Penrose pseudoinverse of X. We are now able to state the consistency theorem.
Theorem 1. Assume that $\mu_X(M_{good}) < 1$ and $0 < \rho_X(M_{good}) \le 1$. For any $\eta \in (0, 1/2)$, with probability at least $1 - 2\eta$, if the stopping criterion of MGOMP is such that
$$\epsilon > \frac{1}{1 - \mu_X(M_{good})} \sqrt{2pK \ln(2pK/\eta)}$$
and $\min_{k \in \{1,\dots,M\},\, I_j \in G_{good}(k)} \|\bar{A}_{I_j, O_k}\|_F \ge 8\, \rho_X(M_{good})^{-1}\, \epsilon$, then when the algorithm stops $M^{(m-1)} = M_{good}$ and
$$\|A^{(m-1)} - \bar{A}\|_{\max} \le \sqrt{2 \ln(2|M_{good}|/\eta)} \,/\, \rho_X(M_{good}).$$
Proof. The multivariate regression model $Y = X\bar{A} + E$ can be rewritten in an equivalent univariate form with white noise: $\tilde{Y} = (I_K \otimes X)\bar{\alpha} + \tilde{\epsilon}$, where $\bar{\alpha} = \mathrm{vec}(\bar{A})$, $\tilde{Y} = \mathrm{diag}\left( C_{k,k}^{-1/2} I_n \right)_{k=1}^{K} \mathrm{vec}(YC^{1/2})$, and $\tilde{\epsilon}$ is formed by i.i.d. samples from N(0, 1). We can see that applying the MGOMP procedure is equivalent to applying the Group-OMP procedure [12] to the above vectorized regression model, using as grouping structure that naturally induced by the input-output groups originally considered for MGOMP. The theorem then follows from Theorem 3 in [12] by translating the univariate conditions for consistency into their multivariate counterparts via $\rho_X(M_{good})$ and $\mu_X(M_{good})$. Since C is such that $C_{i,j} = 0$ for any i, j belonging to distinct groups, the entries in $\tilde{Y}$ do not mix components of Y from different output groups and hence the error covariance matrix does not appear in the consistency conditions.
Note that the theorem can also be re-stated with an alternative condition on the amplitude of the true regression coefficients: $\min_{k \in \{1,\dots,M\},\, I_j \in G_{good}(k)} \min_{s \in O_k} \|\bar{A}_{I_j, s}\|_2 \ge 8\, \rho_X(M_{good})^{-1}\, \epsilon / \sqrt{|O_k|}$, which suggests that the amplitude of the true regression coefficients is allowed to be smaller in MGOMP compared to Group-OMP on individual regressions. Intuitively, through MGOMP we are combining information from multiple regressions, thus improving our capability to identify the correct groups.
2.3 Simulation Results
We empirically evaluate the performance of our method against representative variable selection
methods, in terms of accuracy of prediction and variable (group) selection. As a measure of variable selection accuracy we use the F1 measure, which is defined as $F_1 = \frac{2PR}{P+R}$, where P denotes the precision and R denotes the recall. To compute the variable group F1 of a variable selection method,
precision and R denotes the recall. To compute the variable group F1 of a variable selection method,
we consider a group to be selected if any of the variables in the group is selected. As a measure of
prediction accuracy we use the average squared error on a test set. For all the greedy pursuit methods, we consider the 'holdout validated' estimates. Namely, we select the iteration number that
minimizes the average squared error on a validation set. For univariate methods, we consider individual selection of the iteration number for each univariate regression (joint selection of a common
iteration number across the univariate regressions led to worse results in the setting considered). For
each setting, we ran 50 runs, each with 50 observations for training, 50 for validation and 50 for
testing.
We consider an $n \times p$ predictor matrix X, where the rows are generated independently according to $N_p(0, S)$, with $S_{i,j} = 0.7^{|i-j|}$. The $n \times K$ error matrix E is generated according to $N_K(0, \Sigma)$, with $\Sigma_{i,j} = \rho^{|i-j|}$, where $\rho \in \{0, 0.5, 0.7, 0.9\}$. We consider a model with 3rd order polynomial expansion: $[Y_{T_1}, \dots, Y_{T_M}] = X[A_{1,T_1}, \dots, A_{1,T_M}] + X^2[A_{2,T_1}, \dots, A_{2,T_M}] + X^3[A_{3,T_1}, \dots, A_{3,T_M}] + E$. Here we abuse notation to denote by $X^q$ the matrix such that $X^q_{i,j} = (X_{i,j})^q$. $T_1, \dots, T_M$ are the target groups. For each k, each row of $[A_{1,T_k}, \dots, A_{3,T_k}]$ is either all non-zero or all zero, according to Bernoulli draws with success probability 0.1. Then for each non-zero entry of $A_{i,T_k}$, independently, we set its value according to N(0, 1). The number of features for X is set to 20. Hence we consider 60 variables grouped into 20 groups corresponding to the 3rd degree polynomial expansion. The number of regressions is set to 60. We consider 20 regression groups ($T_1, \dots, T_{20}$), each of size 3.
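The simulated data can be generated as follows (a sketch matching the description above; the seed is illustrative, and n = 150 covers the 50/50/50 train/validation/test split):

```python
import numpy as np

def simulate(n=150, p=20, K=60, rho=0.7, rng=np.random.default_rng(0)):
    S = 0.7 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    Sigma = rho ** np.abs(np.subtract.outer(np.arange(K), np.arange(K)))
    X = rng.multivariate_normal(np.zeros(p), S, size=n)
    Xpoly = np.hstack([X, X**2, X**3])       # 60 features, 20 groups {x, x^2, x^3}
    A = np.zeros((3 * p, K))
    for k in range(0, K, 3):                 # 20 target groups of size 3
        active = rng.random(p) < 0.1         # whole feature rows on or off
        rows = np.concatenate([np.where(active)[0] + d * p for d in range(3)])
        A[np.ix_(rows, range(k, k + 3))] = rng.standard_normal((len(rows), 3))
    E = rng.multivariate_normal(np.zeros(K), Sigma, size=n)
    return Xpoly, Xpoly @ A + E
```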
Method            Parallel runs   (p, L)   (K, M)      Precision matrix estimate
OMP [13]          K               (p, p)   (1, 1)      Not applicable
Group-OMP [12]    K               (p, L)   (1, 1)      Not applicable
S-OMP [21]        1               (p, p)   (K, 1)      Identity matrix
MGOMP(Id)         1               (p, L)   (K, M)      Identity matrix
MGOMP(C)          1               (p, L)   (K, M)      Estimate from univariate OMP fits
MGOMP(Parallel)   M               (p, L)   (|Os|, 1)   Identity matrix

Table 1: Various matching pursuit methods and their corresponding parameters.
Average F1 score
rho    MGOMP(C)         MGOMP(Id)        MGOMP(Parallel)   Group-OMP        OMP
0.9    0.863 ± 0.003    0.818 ± 0.003    0.762 ± 0.003     0.646 ± 0.007    0.517 ± 0.006
0.7    0.850 ± 0.002    0.806 ± 0.003    0.757 ± 0.003     0.631 ± 0.008    0.517 ± 0.007
0.5    0.850 ± 0.003    0.802 ± 0.004    0.766 ± 0.004     0.641 ± 0.006    0.525 ± 0.007
0      0.847 ± 0.004    0.848 ± 0.004    0.783 ± 0.004     0.651 ± 0.007    0.525 ± 0.007

Average test set squared error
rho    MGOMP(C)         MGOMP(Id)        MGOMP(Parallel)   Group-OMP        OMP
0.9    3.009 ± 0.234    3.324 ± 0.273    4.086 ± 0.169     6.165 ± 0.317    6.978 ± 0.206
0.7    3.114 ± 0.252    3.555 ± 0.287    4.461 ± 0.159     8.170 ± 0.328    8.14 ± 0.390
0.5    3.117 ± 0.234    3.630 ± 0.281    4.499 ± 0.288     7.305 ± 0.331    8.098 ± 0.323
0      3.124 ± 0.256    3.123 ± 0.262    3.852 ± 0.185     6.137 ± 0.330    7.414 ± 0.331

Table 2: Average F1 score (top) and average test set squared error (bottom) for the models output by variants of MGOMP, Group-OMP and OMP under the settings of Table 1.
A dictionary of various matching pursuit methods and their corresponding parameters is provided in
Table 1. In the table, note that MGOMP(Parallel) consists of running MGOMP separately for each regression group with C set to the identity (using C estimated from univariate OMP fits has negligible impact on performance and hence is omitted for conciseness). The results are presented in Table 2.
Overall, in all the settings considered, MGOMP is superior both in terms of prediction and variable selection accuracy, and more so when the correlation between responses increases. Note that
MGOMP is stable with respect to the choice of the precision matrix estimate. Indeed the advantage
of MGOMP persists under imperfect estimates (Identity and sample estimate from univariate OMP
fits) and varying degrees of error correlation. In addition, model selection appears to be more robust
for MGOMP, which has only one stopping point (MGOMP has one path interleaving input variables for the various regressions, while Group-OMP and OMP have K paths, one per univariate regression).
3 Granger Causality with Block Sparsity in Vector Autoregressive Models
3.1 Model Formulation
We begin by motivating our main application. The emergence of the web2.0 phenomenon has set in
place a planetary-scale infrastructure for rapid proliferation of information and ideas. Social media
platforms such as blogs, twitter accounts and online discussion sites are large-scale forums where
every individual can voice a potentially influential public opinion. This unprecedented scale of unstructured user-generated web content presents new challenges to both consumers and companies
alike. Which blogs or twitter accounts should a consumer follow in order to get a gist of the community opinion as a whole? How can a company identify bloggers whose commentary can change
brand perceptions across this universe, so that marketing interventions can be effectively strategized?
The problem of finding key influencers and authorities in online communities is central to any viable information triage solution, and is therefore attracting increasing attention [14, 6]. A traditional
approach to this problem would treat it no different from the problem of ranking web-pages in a
hyperlinked environment. Seminal ideas such as the PageRank [17] and Hubs-and-Authorities [11]
were developed in this context, and in fact even celebrated as bringing a semblance of order to the
web. However, the mechanics of opinion exchange and adoption makes the problem of inferring
authority and influence in social media settings somewhat different from the problem of ranking
generic web-pages. Consider the following example that typifies the process of opinion adoption. A
consumer is looking to buy a laptop. She initiates a web search for the laptop model and browses
several discussion and blog sites where that model has been reviewed. The reviews bring to her
attention that among other nice features, the laptop also has excellent speaker quality. Next she buys
the laptop and in a few days herself blogs about it. Arguably, conditional on being made aware of
6
speaker quality in the reviews she had read, she is more likely to herself comment on that aspect
without necessarily attempting to find those sites again in order to link to them in her blog. In other
words, the actual post content is the only trace that the opinion was implicitly absorbed. Moreover,
the temporal order of events in this interaction is indicative of the direction of causal influence.
We formulate these intuitions rigorously in terms of the notion of Granger Causality [7] and then
employ MGOMP for its implementation. For scalability, we work with MGOMP(Parallel); see Table 1. Introduced by the Nobel prize-winning economist Clive Granger, this notion has proven
useful as an operational notion of causality in time series analysis. It is based on the intuition that
a cause should necessarily precede its effect, and in particular if a time series variable X causally
affects another Y , then the past values of X should be helpful in predicting the future values of Y ,
beyond what can be predicted based on the past values of Y alone.
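In its classical bivariate form, the test reduces to comparing nested autoregressions; here is a minimal sketch using an OLS F-test (the textbook test, standing in for the group-selection formulation developed below):

```python
import numpy as np
from scipy import stats

def granger_test(x, y, d):
    """p-value of the F-test: do the d past values of x improve the
    prediction of y over an order-d autoregression of y alone?"""
    T = len(y)
    lags = lambda s: np.array([s[t - d:t][::-1] for t in range(d, T)])
    target = y[d:]
    Z0 = np.column_stack([np.ones(T - d), lags(y)])   # restricted model
    Z1 = np.column_stack([Z0, lags(x)])               # adds x's past
    rss = lambda Z: np.sum((target - Z @ np.linalg.lstsq(Z, target, rcond=None)[0]) ** 2)
    rss0, rss1 = rss(Z0), rss(Z1)
    dof = (T - d) - Z1.shape[1]
    F = ((rss0 - rss1) / d) / (rss1 / dof)
    return stats.f.sf(F, d, dof)   # small p-value: x Granger-causes y
```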
Let $B_1, \dots, B_G$ denote a community of G bloggers. With each blogger, we associate content variables, which consist of frequencies of words relevant to a topic across time. Specifically, given a dictionary of K words and the time-stamp of each blog post, we record $w_i^{k,t}$, the frequency of the kth word for blogger $B_i$ at time t. Then, the content of blogger $B_i$ at time t can be represented as $B_i^t = [w_i^{1,t}, \dots, w_i^{K,t}]$. The input to our model is a collection of multivariate time series, $\{B_i^t\}_{t=1}^{T}$ ($1 \le i \le G$), where T is the timespan of our analysis. Our key intuition is that authorities and influencers are causal drivers of future discussions and opinions in the community. This may be phrased in the following terms:
Granger Causality: A collection of bloggers is said to influence blogger $B_i$ if their collective past content (blog posts) is predictive of the future content of blogger $B_i$, with statistical significance, and more so than the past content of blogger $B_i$ alone.
The influence problem can thus be mapped to a variable group selection problem in a vector autoregressive model, i.e., in multivariate regression with the $G \times K$ responses $\{B_j^t,\; j = 1, 2, \dots, G\}$ expressed in terms of the variable groups $\{B_j^{t-l}\}_{l=1}^{d}$, $j = 1, 2, \dots, G$:
$$[B_1^t, \dots, B_G^t] = [B_1^{t-1}, \dots, B_1^{t-d}, \dots, B_G^{t-1}, \dots, B_G^{t-d}]\, A + E.$$
We can then conclude that a certain blogger $B_i$ influences blogger $B_j$ if the variable group $\{B_i^{t-l}\}_{l \in \{1,\dots,d\}}$ is selected by the variable selection method for the responses concerning blogger $B_j$. For each blogger $B_j$, this can be viewed as an application of a Granger test on $B_j$ against bloggers $B_1, B_2, \dots, B_G$. This induces a directed weighted graph over bloggers, which we call the causal graph, where edge weights are derived from the underlying regression coefficients. We refer to influence measures on causal graphs as GrangerRanks. For example, GrangerPageRank refers to applying PageRank on the causal graph, while GrangerOutDegree refers to computing out-degrees of nodes as a measure of causal influence.
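The lagged design matrix and the induced causal graph can be assembled as follows (a sketch; MGOMP or any group selector supplies the selected (input group, output group) pairs):

```python
import numpy as np

def var_design(B, d):
    # B: list of G arrays, each T x K (one blogger's word counts over time).
    # Returns the (T-d) x (G*K) response matrix, the (T-d) x (G*K*d) lagged
    # design matrix, and the input/output groups (one group per blogger).
    T, K = B[0].shape
    Y = np.hstack([b[d:] for b in B])
    X = np.hstack([np.hstack([b[d - l:T - l] for l in range(1, d + 1)])
                   for b in B])
    in_groups = [np.arange(i * K * d, (i + 1) * K * d) for i in range(len(B))]
    out_groups = [np.arange(j * K, (j + 1) * K) for j in range(len(B))]
    return Y, X, in_groups, out_groups

def causal_edges(selected, A, in_groups, out_groups):
    # Edge i -> j when blogger i's lag group was selected for blogger j's
    # responses; the weight is the Frobenius norm of those coefficients.
    return {(i, j): np.linalg.norm(A[np.ix_(in_groups[i], out_groups[j])])
            for (i, j) in selected}
```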
3.2 Application: Causal Influence in Online Social Communities
Proof of concept: Key Influencers in Theoretical Physics: Drawn from a KDD Cup 2003 task,
this dataset is publicly available at: http://www.cs.cornell.edu/projects/kddcup/datasets.html. It consists of the LaTeX sources of all papers in the hep-th portion of the arXiv (http://arxiv.org). In consultation with a theoretical physicist, we did our analysis at a time granularity of 1 month. In
total, the data spans 137 months. We created document term matrices using standard text processing
techniques, over a vocabulary of 463 words chosen by running an unsupervised topic model. For
each of the 9200 authors, we created a word-time matrix of size 463x137, which is the usage of the
topic-specific key words across time. We considered one year, i.e., d = 12 months, as the maximum time lag. Our model produces the causal graph shown in Figure 1, showing influence relationships
amongst high-energy physicists. The table on the right side of Figure 1 lists the top 20 authors according to GrangerOutDegree (also marked on the graph), GrangerPageRank and Citation Count. The model correctly identifies several leading figures, such as Edward Witten and Cumrun Vafa, as authorities in theoretical physics. In this domain, the number of citations is commonly viewed as a valid
measure of authority given disciplined scholarly practice of citing prior related work. Thus, we
consider citation-count based ranking as the 'ground truth'. We also find that GrangerPageRank
and GrangerOutDegree have high positive rank correlation with citation counts (0.728 and 0.384
respectively). This experiment confirms that our model agrees with how this community recognizes
its authorities.
[Figure 1, left: causal graph over high-energy physicists, with labeled nodes including E.Witten, C.Vafa, Arkady Tseytlin, Michael Douglas, Igor Klebanov, R.J.Szabo, P.K.Townsend, G.Moore, Jacob Sonnenschein, Ian Kogan, S.Gukov, C.S.Chu, I.Antoniadis, R.Tatar, Per Kraus, S.Theisen, J.L.F.Barbon, and S.Ferrara.]

GrangerOutDegree      GrangerPageRank       Citation Count
E.Witten              E.Witten              E.Witten
C.Vafa                C.Vafa                N.Seiberg
Alex Kehagias         Alex Kehagias         C.Vafa
Arkady Tseytlin       Arkady Tseytlin       J.M.Maldacena
P.K.Townsend          P.K.Townsend          A.A.Sen
Jacob Sonnenschein    Jacob Sonnenschein    Andrew Strominger
Igor Klebanov         R.J.Szabo             Igor Klebanov
R.J.Szabo             G.Moore               Michael Douglas
G.Moore               Igor Klebanov         Arkady Tseytlin
Michael Douglas       Ian Kogan             L.Susskind
                                            Alex Kehagias
                                            M.Berkooz

Figure 1: Causal Graph and top authors in High-Energy Physics according to various measures.
[Figure 2: (a) Causal Graph; (b) Hyperlink Graph.]
Figure 2: Causal and hyperlink graphs for the Lotus blog dataset.
Real application: IBM Lotus Bloggers: We crawled blogs pertaining to the IBM Lotus software
brand. Our crawl process ran in conjunction with a relevance classifier that continuously filtered out
posts irrelevant to Lotus discussions. Due to lack of space we omit preprocessing details that are
similar to the previous application. In all, this dataset represents a Lotus blogging community of
684 bloggers, each associated with multiple time series describing the frequency of 96 words over a
time period of 376 days. We considered one week, i.e., d = 7 days, as the maximum time lag in this application. Figure 2 shows the causal graph learnt by our models on the left, and the hyperlink graph
on the right. We notice that the causal graph is sparser than the hyperlink graph. By identifying the
most significant causal relationships between bloggers, our causal graphs allow clearer inspection
of the authorities and also appear to better expose striking sub-community structures in this blog
community. We also computed the correlation between PageRank and Outdegrees computed over
our causal graph and the hyperlink graph (0.44 and 0.65 respectively). We observe positive correlations indicating that measures computed on either graph partially capture related latent rankings,
but at the same time are also sufficiently different from each other. Our results were also validated
by domain experts.
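Given the weighted adjacency matrix of the causal graph, GrangerOutDegree and GrangerPageRank follow directly; a power-iteration sketch (the edge-reversal convention and the 0.85 damping constant are our assumptions, not details specified in the text):

```python
import numpy as np

def granger_ranks(W, damping=0.85, iters=100):
    # W[i, j] >= 0 weights the causal edge "blogger i influences blogger j".
    # GrangerOutDegree: total outgoing influence of each node. For
    # GrangerPageRank we run PageRank on the reversed graph, so that rank
    # flows from the influenced back to the influencer, mirroring how
    # hyperlinks confer authority on the page they point to.
    out_degree = W.sum(axis=1)
    G = W.T.astype(float)          # reversed edges: influenced -> influencer
    n = G.shape[0]
    out = G.sum(axis=1)
    out[out == 0] = 1.0            # dangling nodes: avoid divide-by-zero
    M = (G / out[:, None]).T       # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (M @ r)
    return out_degree, r
```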
4 Conclusion and Perspectives
We have provided a framework for learning sparse multivariate regression models, where the sparsity
structure is induced by groupings defined over both input and output variables. We have shown that
extended notions of Granger Causality for causal inference over high-dimensional time series can
naturally be cast in this framework. This allows us to develop a causality-based perspective on the
problem of identifying key influencers in online communities, leading to a new family of influence
measures called GrangerRanks. We list several directions of interest for future work: optimizing
time-lag selection; considering hierarchical group selection to identify pertinent causal relationships
not only between bloggers but also between communities of bloggers; incorporating the hyperlink
graph in the causal modeling; adapting our approach to produce topic specific rankings; developing
online learning versions; and conducting further empirical studies on the properties of the causal
graph in various applications of multivariate regression.
Acknowledgments
We would like to thank Naoki Abe, Rick Lawrence, Estepan Meliksetian, Prem Melville and Grzegorz Swirszcz for their contributions to this work in a variety of ways.
References
[1] Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243–272, 2008.
[2] Leo Breiman and Jerome H. Friedman. Predicting multivariate responses in multiple linear regression. Journal of the Royal Statistical Society: Series B, (1):1369–7412, 1997.
[3] M. Elad. Sparse and Redundant Representations: From Theory to Applications in Signal and Image Processing. Springer, 2010.
[4] Ildiko E. Frank and Jerome H. Friedman. A statistical view of some chemometrics regression tools. Technometrics, 35(2):109–135, 1993.
[5] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, July 2008.
[6] M. Gomez-Rodriguez, J. Leskovec, and A. Krause. Inferring networks of diffusion and influence. KDD 2010.
[7] C. Granger. Testing for causality: A personal viewpoint. Journal of Economic Dynamics and Control, 2:329–352, 1980.
[8] D. Harville. Matrix Algebra from a Statistician's Perspective. Springer, 1997.
[9] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. ICML 2009.
[10] T. Joachims. Structured output prediction with support vector machines. In Joint IAPR International Workshops on Structural and Syntactic Pattern Recognition (SSPR) and Statistical Techniques in Pattern Recognition (SPR), pages 1–7, 2006.
[11] Jon M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM, 46:668–677, 1999.
[12] A.C. Lozano, G. Swirszcz, and N. Abe. Grouped orthogonal matching pursuit for variable selection and prediction. Advances in Neural Information Processing Systems 22, 2009.
[13] S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 1993.
[14] P. Melville, K. Subbian, C. Perlich, R. Lawrence, and E. Meliksetian. A predictive perspective on measures of influence in networks. Workshop on Information in Networks (WIN-10), New York, September 2010.
[15] Charles A. Micchelli and Massimiliano Pontil. Kernels for multi-task learning. In NIPS, 2004.
[16] G. Obozinski, B. Taskar, and M. Jordan. Multi-task feature selection. Technical report, 2006.
[17] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank citation ranking: Bringing order to the web. Technical Report, Stanford Digital Libraries, 1998.
[18] Elisa Ricci, Tijl De Bie, and Nello Cristianini. Magic moments for structured output prediction. Journal of Machine Learning Research, 9:2803–2846, December 2008.
[19] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267–288, 1994.
[20] A.N. Tikhonov. Regularization of incorrectly posed problems. Sov. Math. Dokl., 4:1624–1627, 1963.
[21] J.A. Tropp, A.C. Gilbert, and M.J. Strauss. Algorithms for simultaneous sparse approximation: Part I: Greedy pursuit. Sig. Proc., 86(3):572–588, 2006.
[22] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In International Conference on Machine Learning (ICML), pages 104–112, 2004.
[23] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68:49–67, 2006.
[24] Ming Yuan, Ali Ekici, Zhaosong Lu, and Renato Monteiro. Dimension reduction and coefficient estimation in multivariate linear regression. Journal of the Royal Statistical Society, Series B, 69(3):329–346, 2007.
9
| 3993 |@word determinant:1 version:2 polynomial:2 confirms:1 simulation:2 seek:1 covariance:6 jacob:3 thereby:1 tr:6 moment:1 reduction:3 celebrated:1 series:14 score:1 document:1 past:4 existing:1 kmk:1 ka:3 com:1 si:1 bie:1 chu:1 kdd:2 hofmann:1 pertinent:2 interpretable:1 gist:1 alone:2 greedy:3 selected:7 vafa:5 antoniadis:1 indicative:1 inspection:1 prize:1 record:1 filtered:1 infrastructure:1 provides:1 iterates:1 authority:8 node:1 triage:1 theodoros:1 org:1 math:1 zhang:2 height:1 driver:1 ik:2 viable:1 yuan:2 consists:2 kraus:1 introduce:1 indeed:1 rapid:1 proliferation:1 mechanic:1 multi:4 ming:1 company:2 actual:1 precursor:1 considering:1 increasing:2 becomes:1 begin:2 provided:2 notation:3 moreover:1 underlying:1 project:1 medium:3 laptop:4 what:1 biostatistics:1 minimizes:2 developed:1 finding:1 guarantee:4 temporal:2 every:1 k2:1 clive:1 classifier:1 control:3 intervention:1 appear:2 causally:1 arguably:1 omit:1 before:2 t1:5 negligible:1 persists:1 treat:2 positive:2 naoki:1 physicist:2 tijl:1 ak:1 id:3 path:3 abuse:1 suggests:2 challenging:1 co:6 bi:6 adoption:2 elie:1 directed:1 acknowledgment:1 testing:2 union:3 block:17 practice:1 x3:1 susskind:1 procedure:6 pontil:2 empirical:4 evolving:2 xggood:1 adapting:1 matching:11 word:8 road:1 refers:2 altun:1 get:1 selection:35 tsochantaridis:1 context:2 influence:16 risk:1 applying:4 intercept:1 restriction:3 optimize:1 map:1 equivalent:1 center:1 seminal:1 www:1 go:1 attention:2 gilbert:1 independently:4 citing:1 convex:1 formulate:2 simplicity:1 recovery:1 unstructured:1 identifying:2 m2:4 estimator:3 classic:1 handle:1 notion:8 traditionally:1 sov:1 updated:1 target:1 play:1 mallat:1 user:1 exact:1 sig:1 agreement:1 associate:1 recognition:2 particularly:2 econometrics:1 winograd:1 bottom:1 taskar:1 u2j:1 capture:1 kogan:2 ran:2 intuition:3 environment:2 cristianini:1 rigorously:2 dynamic:1 personal:1 algebra:2 iapr:1 predictive:2 ali:1 easily:1 joint:3 various:6 herself:2 represented:1 leo:1 distinct:1 massimiliano:2 pertaining:1 whose:2 encoded:1 lag:3 elad:1 stanford:1 say:1 posed:1 gbad:4 melville:2 gi:4 syntactic:1 emergence:1 noisy:1 online:6 advantage:2 eigenvalue:1 unprecedented:1 hyperlinked:2 sen:1 propose:3 blogger:21 interaction:1 product:1 remainder:1 relevant:4 combining:1 poorly:1 inputoutput:1 scalability:1 chemometrics:2 motwani:1 produce:2 tk:4 wider:1 illustrate:1 andrew:1 clearer:1 develop:1 ij:3 edward:1 implemented:1 predicted:1 c:1 come:2 differ:1 direction:2 guided:1 correct:2 centered:1 translating:1 public:1 opinion:6 brin:1 exchange:1 ricci:1 f1:4 showcased:1 alleviate:1 extension:1 hold:1 sufficiently:1 considered:6 ground:1 lawrence:2 bj:4 week:1 dictionary:3 early:1 smallest:1 a2:2 omitted:1 estimation:9 proc:1 wi1:1 applicable:2 precede:1 expose:1 individually:1 grouped:3 agrees:1 correctness:1 tool:1 weighted:2 minimization:1 orthonormalized:1 hope:1 clearly:2 rather:1 ck:1 cornell:1 breiman:1 varying:1 rick:1 broader:1 crawled:1 conjunction:1 shrinkage:1 derived:2 focus:1 yo:1 joachim:2 she:4 validated:2 lotus:5 likelihood:3 bernoulli:1 rank:1 perlich:1 sense:2 am:1 helpful:1 inference:2 twitter:3 stopping:2 publically:1 entire:1 bt:2 her:2 i1:1 selects:1 monteiro:1 arg:3 issue:1 aforementioned:1 overall:1 among:3 html:1 platform:2 special:1 aware:1 evgeniou:1 biology:1 represents:1 broad:1 unsupervised:1 igor:4 icml:2 jon:1 future:4 np:1 report:2 simplify:1 few:1 employ:1 modern:1 individual:5 m4:1 szabo:3 statistician:1 attempt:1 recalling:2 friedman:3 technometrics:1 interest:1 highly:3 
zhaosong:1 ekici:1 kvk:2 edge:1 orthogonal:6 decoupled:1 re:5 causal:26 guidance:1 theoretical:7 leskovec:1 stopped:1 instance:1 column:5 modeling:3 hep:1 contiguous:1 ordinary:1 subset:2 entry:3 predictor:5 successful:1 motivating:1 dependency:3 aclozano:1 learnt:1 international:2 aur:1 physic:3 influencers:5 michael:3 continuously:1 squared:3 central:1 again:1 huang:1 possibly:3 worse:1 expert:1 leading:2 supp:2 account:5 aggressive:1 upperbound:1 de:1 b2:1 coefficient:14 satisfy:1 kehagias:3 ranking:7 bg:4 later:1 view:3 closed:2 sup:1 portion:1 capability:1 parallel:6 p2p:1 ytm:1 contribution:1 om:1 formed:10 square:1 il:1 ir:7 minimize:1 conducting:1 efficiently:2 accuracy:4 yield:1 identify:4 generalize:2 weak:1 metaxas:1 lu:1 simultaneous:2 facebook:1 definition:1 against:2 web2:1 energy:2 frequency:4 naturally:2 associated:2 conciseness:1 proof:1 latex:1 sampled:1 stop:1 holdout:1 dataset:3 popular:1 consultation:1 recall:1 knowledge:1 dimensionality:2 amplitude:2 scholarly:1 elisa:1 centric:2 ok:8 appears:1 attained:1 originally:1 supervised:1 day:4 follow:1 response:11 specify:1 disciplined:1 formulation:2 evaluated:1 generality:3 furthermore:1 xa:13 marketing:1 correlation:8 jerome:2 lent:1 web:7 tropp:1 o:10 overlapping:2 lack:1 google:1 rodriguez:1 quality:3 grows:1 usage:1 usa:1 k22:2 effect:1 true:6 concept:1 counterpart:1 lozano:2 regularization:4 hence:3 read:1 moore:4 nonzero:1 deal:3 white:1 round:1 speaker:2 yorktown:1 suboptimality:1 criterion:2 xqi:1 outline:2 tt:1 performs:1 l1:1 bring:1 btj:1 image:1 novel:2 outdegrees:1 charles:1 ols:4 common:1 superior:1 permuted:1 functional:1 witten:5 empirically:1 extend:1 belong:2 m1:3 significant:2 refer:1 cup:1 imposing:1 vec:11 ai:1 rd:2 consistency:4 similarly:1 inclusion:1 had:1 specification:1 stable:1 attracting:1 bti:2 influencer:1 multivariate:26 perspective:4 optimizing:1 inf:1 irrelevant:1 tikhonov:1 certain:7 browse:1 blog:12 watson:1 success:1 seen:2 minimum:2 additional:2 commentary:1 somewhat:1 omp:18 period:1 redundant:1 signal:2 july:1 multiple:6 desirable:3 mix:1 technical:2 lin:1 concerning:1 post:4 a1:3 impact:1 prediction:7 involving:4 regression:42 basic:1 desideratum:2 variant:1 arxiv:2 iteration:5 kernel:1 addition:2 separately:2 gomp:1 krause:1 source:2 crucial:1 bringing:2 comment:1 induced:4 subject:2 december:1 leveraging:1 jordan:1 call:2 structural:1 leverage:1 presence:2 exceed:1 noting:1 granularity:1 variety:1 affect:1 isolation:1 fit:3 hastie:1 lasso:6 suboptimal:1 reduce:1 idea:3 simplifies:1 tm:4 imperfect:1 andreas:1 economic:1 york:1 cause:1 generally:2 useful:1 clear:1 induces:2 http:2 exist:2 simultaneity:1 notice:1 estimated:3 per:2 correctly:1 tibshirani:2 diverse:1 group:55 key:9 drawn:1 harville:1 douglas:3 kuk:1 diffusion:1 v1:1 graph:22 year:1 enforced:1 run:2 inverse:1 striking:1 extends:3 family:3 place:1 yt1:1 parsimonious:1 draw:1 decision:2 renato:1 internet:1 gomez:1 strength:1 constraint:2 kronecker:1 alex:3 x2:1 software:1 encodes:1 phrased:1 kleinberg:1 u1:1 aspect:1 min:4 span:1 performing:1 attempting:1 structured:5 influential:1 according:6 popularized:1 alternate:1 developing:1 belonging:1 smaller:1 across:4 alike:1 intuitively:1 taken:2 ln:2 equation:1 describing:1 granger:8 count:4 initiate:1 pursuit:14 generalizes:2 rewritten:1 available:1 apply:1 observe:1 hierarchical:1 enforce:1 generic:1 alternative:3 voice:1 rp:3 vikas:1 denotes:4 running:3 include:3 top:3 recognizes:1 graphical:2 exploit:1 classical:2 forum:2 society:4 micchelli:1 objective:2 sspr:1 
traditional:1 said:1 exhibit:1 amongst:1 kth:1 win:1 september:1 link:1 mapped:1 simulated:1 capacity:2 concatenation:1 thank:1 topic:8 nello:1 nobel:1 enforcing:2 assuming:1 w6:1 consumer:3 o1:1 index:2 relationship:6 economist:1 providing:1 potentially:2 frank:1 trace:1 negative:2 rise:2 mink:3 xgbad:1 stated:1 implementation:1 magic:1 collective:1 unknown:1 perform:1 av:1 observation:1 datasets:3 groupomp:1 finite:1 incorrectly:1 maxk:1 innumerable:1 looking:1 extended:1 vj2:1 rn:2 arbitrary:2 community:14 abe:2 grzegorz:1 introduced:1 namely:8 pair:3 specified:1 extensive:1 cast:1 textual:1 planetary:2 swirszcz:2 nip:1 address:2 able:2 beyond:1 dokl:1 below:1 pattern:6 perception:1 sparsity:12 hyperlink:8 challenge:1 encompasses:1 pagerank:6 t20:1 including:1 max:2 royal:4 event:1 natural:1 regularized:2 concretize:1 predicting:2 townsend:2 residual:3 representing:1 wik:2 library:1 identifies:1 irrespective:1 mt1:1 created:2 xq:1 text:1 prior:2 review:2 nice:1 interdependent:1 kf:1 loss:4 yc1:1 subbian:1 proven:1 outlining:1 validation:2 digital:1 authoritative:1 degree:3 vectorized:1 blogging:1 viewpoint:1 ibm:4 row:5 vsindhw:1 supported:1 guide:1 side:1 allow:1 fall:1 taking:1 absolute:1 sparse:5 dimension:2 vocabulary:1 world:1 valid:1 crawl:1 autoregressive:2 author:3 collection:6 made:2 commonly:1 simplified:1 preprocessing:1 social:5 transaction:1 citation:6 ignore:1 relatedness:4 implicitly:3 confirm:1 pseudoinverse:1 overfitting:1 active:1 buy:2 b1:5 assumed:1 conclude:2 spatio:1 xi:6 kddcup:1 search:1 vectorization:1 latent:1 table:8 reviewed:1 learn:2 robust:2 klebanov:4 operational:2 improving:1 expansion:2 spr:1 excellent:1 necessarily:2 timespan:1 domain:5 diag:2 did:1 pk:2 main:1 significance:1 universe:1 whole:1 noise:1 arise:1 allowed:1 causality:11 site:4 representative:1 ny:1 lc:4 precision:7 sub:1 inferring:5 wish:1 winning:1 stamp:1 interleaving:1 ian:2 formula:1 theorem:6 bad:4 specific:6 xt:2 hub:1 showing:1 explored:1 list:2 evidence:1 grouping:6 a3:3 consist:1 incorporating:1 workshop:2 strauss:1 effectively:2 importance:1 ci:2 conditioned:1 disregarded:1 kx:2 nk:1 sparser:1 intersection:1 tc:2 led:1 univariate:10 likely:1 absorbed:1 penrose:1 expressed:1 strand:1 v6:1 partially:1 sindhwani:1 ggood:8 springer:2 truth:1 acm:1 obozinski:1 conditional:1 goal:1 viewed:3 identity:7 month:3 exposition:1 marked:1 content:9 change:1 included:4 specifically:2 total:1 called:1 m3:2 brand:2 indicating:1 select:1 support:7 latter:1 relevance:3 prem:1 avoiding:1 incorporate:1 evaluate:1 argyriou:1 phenomenon:1 correlated:1 |
LSTD with Random Projections
Mohammad Ghavamzadeh, Alessandro Lazaric, Odalric-Ambrym Maillard, Rémi Munos
INRIA Lille - Nord Europe, Team SequeL, France
Abstract
We consider the problem of reinforcement learning in high-dimensional spaces
when the number of features is bigger than the number of samples. In particular,
we study the least-squares temporal difference (LSTD) learning algorithm when
a space of low dimension is generated with a random projection from a high-dimensional space. We provide a thorough theoretical analysis of the LSTD with
random projections and derive performance bounds for the resulting algorithm.
We also show how the error of LSTD with random projections is propagated
through the iterations of a policy iteration algorithm and provide a performance
bound for the resulting least-squares policy iteration (LSPI) algorithm.
1 Introduction
Least-squares temporal difference (LSTD) learning [3, 2] is a widely used reinforcement learning
(RL) algorithm for learning the value function V ? of a given policy ?. LSTD has been successfully
applied to a number of problems especially after the development of the least-squares policy iteration
(LSPI) algorithm [9], which extends LSTD to control problems by using it in the policy evaluation
step of policy iteration. More precisely, LSTD computes the fixed point of the operator $\Pi T^\pi$, where
$T^\pi$ is the Bellman operator of policy $\pi$ and $\Pi$ is the projection operator onto a linear function
space. The choice of the linear function space has a major impact on the accuracy of the value
function estimated by LSTD, and thus, on the quality of the policy learned by LSPI. The problem
of finding the right space, or in other words the problems of feature selection and discovery, is an
important challenge in many areas of machine learning including RL, or more specifically, linear
value function approximation in RL.
To address this issue in RL, many researchers have focused on feature extraction and learning.
Mahadevan [13] proposed a constructive method for generating features based on the eigenfunctions
of the Laplace-Beltrami operator of the graph built from observed system trajectories. Menache et
al. [16] presented a method that starts with a set of features and then tunes both features and the
weights using either gradient descent or the cross-entropy method. Keller et al. [7] proposed an
algorithm in which the state space is repeatedly projected onto a lower dimensional space based on
the Bellman error and then states are aggregated in this space to define new features. Finally, Parr et
al. [17] presented a method that iteratively adds features to a linear approximation architecture such
that each new feature is derived from the Bellman error of the existing set of features.
A more recent approach to feature selection and discovery in value function approximation in RL is
to solve RL in high-dimensional feature spaces. The basic idea here is to use a large number of features and then exploit the regularities in the problem to solve it efficiently in this high-dimensional
space. Theoretically speaking, increasing the size of the function space can reduce the approximation error (the distance between the target function and the space) at the cost of a growth in the
estimation error. In practice, in the typical high-dimensional learning scenario when the number of
features is larger than the number of samples, this often leads to the overfitting problem and poor
prediction performance. To overcome this problem, several approaches have been proposed including regularization. Both `1 and `2 regularizations have been studied in value function approximation
in RL. Farahmand et al. presented several `2 -regularized RL algorithms by adding `2 -regularization
to LSTD and modified Bellman residual minimization [4] as well as fitted value iteration [5], and
proved finite-sample performance bounds for their algorithms. There have also been algorithmic
work on adding `1 -penalties to the TD [12], LSTD [8], and linear programming [18] algorithms.
1
In this paper, we follow a different approach based on random projections [21]. In particular, we
study the performance of LSTD with random projections (LSTD-RP). Given a high-dimensional
linear space F, LSTD-RP learns the value function of a given policy from a small (relative to the
dimension of F) number of samples in a space G of lower dimension obtained by linear random
projection of the features of F. We prove that solving the problem in the low dimensional random
space instead of the original high-dimensional space reduces the estimation error at the price of a
?controlled? increase in the approximation error of the original space F. We present the LSTDRP algorithm and discuss its computational complexity in Section 3. In Section 4, we provide the
finite-sample analysis of the algorithm. Finally in Section 5, we show how the error of LSTD-RP is
propagated through the iterations of LSPI.
2 Preliminaries
For a measurable space with domain $\mathcal{X}$, we let $S(\mathcal{X})$ and $B(\mathcal{X};L)$ denote the set of probability measures over $\mathcal{X}$ and the space of measurable functions with domain $\mathcal{X}$ bounded in absolute value by $0 < L < \infty$, respectively. For a measure $\mu \in S(\mathcal{X})$ and a measurable function $f : \mathcal{X} \to \mathbb{R}$, we define the $\ell_2(\mu)$-norm of $f$ as $\|f\|_\mu^2 = \int f(x)^2 \mu(dx)$, the supremum norm of $f$ as $\|f\|_\infty = \sup_{x\in\mathcal{X}} |f(x)|$, and for a set of $n$ states $X_1, \dots, X_n \in \mathcal{X}$ the empirical norm of $f$ as $\|f\|_n^2 = \frac{1}{n}\sum_{t=1}^n f(X_t)^2$. Moreover, for a vector $u \in \mathbb{R}^n$ we write its $\ell_2$-norm as $\|u\|_2^2 = \sum_{i=1}^n u_i^2$.
We consider the standard RL framework [20] in which a learning agent interacts with a stochastic environment and this interaction is modeled as a discrete-time discounted Markov decision process (MDP). A discounted MDP is a tuple $M = \langle\mathcal{X}, \mathcal{A}, r, P, \gamma\rangle$ where the state space $\mathcal{X}$ is a bounded closed subset of a Euclidean space, $\mathcal{A}$ is a finite ($|\mathcal{A}| < \infty$) action space, the reward function $r : \mathcal{X}\times\mathcal{A} \to \mathbb{R}$ is uniformly bounded by $R_{\max}$, the transition kernel $P$ is such that for all $x \in \mathcal{X}$ and $a \in \mathcal{A}$, $P(\cdot|x,a)$ is a distribution over $\mathcal{X}$, and $\gamma \in (0,1)$ is a discount factor. A deterministic policy $\pi : \mathcal{X} \to \mathcal{A}$ is a mapping from states to actions. Under a policy $\pi$, the MDP $M$ is reduced to a Markov chain $M^\pi = \langle\mathcal{X}, R^\pi, P^\pi, \gamma\rangle$ with reward $R^\pi(x) = r(x, \pi(x))$, transition kernel $P^\pi(\cdot|x) = P(\cdot|x, \pi(x))$, and stationary distribution $\rho^\pi$ (if it admits one). The value function of a policy $\pi$, $V^\pi$, is the unique fixed point of the Bellman operator $T^\pi : B(\mathcal{X}; V_{\max} = \frac{R_{\max}}{1-\gamma}) \to B(\mathcal{X}; V_{\max})$ defined by $(T^\pi V)(x) = R^\pi(x) + \gamma \int_{\mathcal{X}} P^\pi(dy|x) V(y)$. We also define the optimal value function $V^*$ as the unique fixed point of the optimal Bellman operator $T^* : B(\mathcal{X}; V_{\max}) \to B(\mathcal{X}; V_{\max})$ defined by $(T^* V)(x) = \max_{a\in\mathcal{A}} \big[ r(x,a) + \gamma \int_{\mathcal{X}} P(dy|x,a) V(y) \big]$. Finally, we denote by $\mathcal{T}$ the truncation operator at threshold $V_{\max}$, i.e., if $|f(x)| > V_{\max}$ then $\mathcal{T}(f)(x) = \mathrm{sgn}\big(f(x)\big)\, V_{\max}$.
To approximate a value function $V \in B(\mathcal{X}; V_{\max})$, we first define a linear function space $\mathcal{F}$ spanned by the basis functions $\varphi_j \in B(\mathcal{X}; L)$, $j = 1, \dots, D$, i.e., $\mathcal{F} = \{f_\alpha \mid f_\alpha(\cdot) = \varphi(\cdot)^\top\alpha,\ \alpha \in \mathbb{R}^D\}$, where $\varphi(\cdot) = \big(\varphi_1(\cdot), \dots, \varphi_D(\cdot)\big)^\top$ is the feature vector. We define the orthogonal projection of $V$ onto the space $\mathcal{F}$ w.r.t. norm $\mu$ as $\Pi_F V = \arg\min_{f\in\mathcal{F}} \|V - f\|_\mu$. From $\mathcal{F}$ we can generate a $d$-dimensional ($d < D$) random space $\mathcal{G} = \{g_\beta \mid g_\beta(\cdot) = \psi(\cdot)^\top\beta,\ \beta \in \mathbb{R}^d\}$, where the feature vector $\psi(\cdot) = \big(\psi_1(\cdot), \dots, \psi_d(\cdot)\big)^\top$ is defined as $\psi(\cdot) = A\varphi(\cdot)$, with $A \in \mathbb{R}^{d\times D}$ a random matrix whose elements are drawn i.i.d. from a suitable distribution, e.g., Gaussian $\mathcal{N}(0, 1/d)$. Similar to the space $\mathcal{F}$, we define the orthogonal projection of $V$ onto the space $\mathcal{G}$ w.r.t. norm $\mu$ as $\Pi_G V = \arg\min_{g\in\mathcal{G}} \|V - g\|_\mu$. Finally, for any function $f_\alpha \in \mathcal{F}$, we define $m(f_\alpha) = \|\alpha\|_2 \sup_{x\in\mathcal{X}} \|\varphi(x)\|_2$.
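As a concrete illustration of this construction, the following minimal sketch (our own Python/NumPy illustration, not part of the original paper; the feature map and dimensions are hypothetical placeholders) builds the random features $\psi(x) = A\varphi(x)$:

```python
import numpy as np

def make_projected_features(phi, D, d, seed=None):
    """Build psi(x) = A phi(x) with A in R^{d x D}, entries i.i.d. N(0, 1/d)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(loc=0.0, scale=np.sqrt(1.0 / d), size=(d, D))

    def psi(x):
        # low-dimensional random feature vector of a state x
        return A @ phi(x)

    return psi, A

# hypothetical high-dimensional feature map and usage:
# phi = lambda x: np.cos(x * np.arange(1, 10001))   # D = 10000 features
# psi, A = make_projected_features(phi, D=10000, d=100, seed=0)
```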
3 LSTD with Random Projections
The objective of LSTD with random projections (LSTD-RP) is to learn the value function of a
given policy from a small (relative to the dimension of the original space) number of samples in a
low-dimensional linear space defined by a random projection of the high-dimensional space. We
show that solving the problem in the low dimensional space instead of the original high-dimensional
space reduces the estimation error at the price of a "controlled" increase in the approximation error. We present the LSTD-RP algorithm and discuss its computational complexity in Section 3. In Section 4, we provide the
complexity. In Section 4, we provide the finite-sample analysis of the algorithm.
We use the linear spaces F and G with dimensions D and d (d < D) as defined in Section 2. Since in
the following the policy is fixed, we drop the dependency of $R^\pi$, $P^\pi$, $V^\pi$, and $T^\pi$ on $\pi$ and simply
use $R$, $P$, $V$, and $T$. Let $\{X_t\}_{t=1}^n$ be a sample path (or trajectory) of size $n$ generated by the Markov
chain $M^\pi$, and let $v \in \mathbb{R}^n$ and $r \in \mathbb{R}^n$, defined as $v_t = V(X_t)$ and $r_t = R(X_t)$, be the value and reward vectors of this trajectory. Also, let $\Psi = [\psi(X_1)^\top; \dots; \psi(X_n)^\top]$ be the feature matrix defined at these $n$ states and $\mathcal{G}_n = \{\Psi\beta \mid \beta \in \mathbb{R}^d\} \subset \mathbb{R}^n$ be the corresponding vector space. We denote by $\hat\Pi_G : \mathbb{R}^n \to \mathcal{G}_n$ the orthogonal projection onto $\mathcal{G}_n$, defined by $\hat\Pi_G y = \arg\min_{z\in\mathcal{G}_n} \|y - z\|_n$, where $\|y\|_n^2 = \frac{1}{n}\sum_{t=1}^n y_t^2$. Similarly, we can define the orthogonal projection onto $\mathcal{F}_n = \{\Phi\alpha \mid \alpha \in \mathbb{R}^D\}$ as $\hat\Pi_F y = \arg\min_{z\in\mathcal{F}_n} \|y - z\|_n$, where $\Phi = [\varphi(X_1)^\top; \dots; \varphi(X_n)^\top]$ is the feature matrix defined at $\{X_t\}_{t=1}^n$. Note that for any $y \in \mathbb{R}^n$, the orthogonal projections $\hat\Pi_G y$ and $\hat\Pi_F y$ exist and are unique.
We consider the pathwise-LSTD algorithm introduced in [11]. Pathwise-LSTD takes a single trajectory $\{X_t\}_{t=1}^n$ of size $n$ generated by the Markov chain as input and returns the fixed point of the empirical operator $\hat\Pi_G \hat T$, where $\hat T$ is the pathwise Bellman operator defined as $\hat T y = r + \gamma \hat P y$. The operator $\hat P : \mathbb{R}^n \to \mathbb{R}^n$ is defined as $(\hat P y)_t = y_{t+1}$ for $1 \le t < n$ and $(\hat P y)_n = 0$. As shown in [11], $\hat T$ is a $\gamma$-contraction in $\ell_2$-norm, thus together with the non-expansive property of $\hat\Pi_G$, it guarantees the existence and uniqueness of the pathwise-LSTD fixed point $\tilde v \in \mathbb{R}^n$, $\tilde v = \hat\Pi_G \hat T \tilde v$. Note that the uniqueness of $\tilde v$ does not imply the uniqueness of the parameter $\tilde\beta$ such that $\tilde v = \Psi\tilde\beta$.
LSTD-RP$(D, d, \{X_t\}_{t=1}^n, \{R(X_t)\}_{t=1}^n, \varphi, \gamma)$                                                      Cost
Compute
  - the reward vector $r_{n\times 1}$; $r_t = R(X_t)$                                                                      $O(n)$
  - the high-dimensional feature matrix $\Phi_{n\times D} = [\varphi(X_1)^\top; \dots; \varphi(X_n)^\top]$                 $O(nD)$
  - the projection matrix $A_{d\times D}$ whose elements are i.i.d. samples from $\mathcal{N}(0, 1/d)$                     $O(dD)$
  - the low-dim feature matrix $\Psi_{n\times d} = [\psi(X_1)^\top; \dots; \psi(X_n)^\top]$; $\psi(\cdot) = A\varphi(\cdot)$   $O(ndD)$
  - the matrix $\hat P\Psi = \Psi'_{n\times d} = [\psi(X_2)^\top; \dots; \psi(X_n)^\top; 0^\top]$                           $O(nd)$
  - $\tilde A_{d\times d} = \Psi^\top(\Psi - \gamma\Psi')$, $\tilde b_{d\times 1} = \Psi^\top r$                            $O(nd + nd^2) + O(nd)$
return either $\tilde\beta = \tilde A^{-1}\tilde b$ or $\tilde\beta = \tilde A^+\tilde b$ ($\tilde A^+$ is the Moore-Penrose pseudo-inverse of $\tilde A$)   $O(d^2 + d^3)$
Figure 1: The pseudo-code of the LSTD with random projections (LSTD-RP) algorithm.
Figure 1 contains the pseudo-code and the computational cost of the LSTD-RP algorithm. The total computational cost of LSTD-RP is $O(d^3 + ndD)$, while the computational cost of LSTD in the high-dimensional space $\mathcal{F}$ is $O(D^3 + nD^2)$. As we will see, the analysis of Section 4 suggests that the value of $d$ should be set to $O(\sqrt{n})$. In this case the numerical complexity of LSTD-RP is $O(n^{3/2} D)$, which is better than $O(D^3)$, the cost of LSTD in $\mathcal{F}$ when $n < D$ (the case considered in this paper). Note that the cost of making a prediction is $D$ in LSTD in $\mathcal{F}$ and $dD$ in LSTD-RP.
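A minimal Python/NumPy rendering of the pseudo-code in Figure 1 is given below; it is our own sketch rather than code from the paper, and it assumes the trajectory is already summarized by the high-dimensional feature matrix (one row per state) and the reward vector:

```python
import numpy as np

def lstd_rp(Phi, rewards, d, gamma, seed=None):
    """Pathwise LSTD with random projections (sketch of Figure 1).

    Phi     : (n, D) array, rows phi(X_1)^T, ..., phi(X_n)^T
    rewards : (n,) array, R(X_1), ..., R(X_n)
    d       : dimension of the random space G (d < D)
    gamma   : discount factor
    """
    n, D = Phi.shape
    rng = np.random.default_rng(seed)
    A = rng.normal(0.0, np.sqrt(1.0 / d), size=(d, D))  # projection, O(dD)
    Psi = Phi @ A.T                                     # psi(X_t) = A phi(X_t)
    Psi_next = np.vstack([Psi[1:], np.zeros(d)])        # pathwise shift, last row 0
    A_tilde = Psi.T @ (Psi - gamma * Psi_next)          # d x d system
    b_tilde = Psi.T @ rewards
    beta = np.linalg.pinv(A_tilde) @ b_tilde            # Moore-Penrose option
    return beta, A

# value predictions on the trajectory: v_tilde = (Phi @ A.T) @ beta
```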
4 Finite-Sample Analysis of LSTD with Random Projections
In this section, we report the main theoretical results of the paper. In particular, we derive a performance bound for LSTD-RP in the Markov design setting, i.e., when the LSTD-RP solution is
compared to the true value function only at the states belonging to the trajectory used by the algorithm (see Section 4 in [11] for a more detailed discussion). We then derive a condition on the
number of samples to guarantee the uniqueness of the LSTD-RP solution. Finally, from the Markov
design bound we obtain generalization bounds when the Markov chain has a stationary distribution.
4.1 Markov Design Bound
Theorem 1. Let $\mathcal{F}$ and $\mathcal{G}$ be linear spaces with dimensions $D$ and $d$ ($d < D$) as defined in Section 2. Let $\{X_t\}_{t=1}^n$ be a sample path generated by the Markov chain $M^\pi$, and $v, \tilde v \in \mathbb{R}^n$ be the vectors whose components are the value function and the LSTD-RP solution at $\{X_t\}_{t=1}^n$. Then for any $\delta > 0$, whenever $d \ge 15\log(8n/\delta)$, with probability $1-\delta$ (the randomness is w.r.t. both the random sample path and the random projection), $\tilde v$ satisfies

$$\|v - \tilde v\|_n \le \frac{1}{\sqrt{1-\gamma^2}}\left[\|v - \hat\Pi_F v\|_n + \sqrt{\frac{8\log(8n/\delta)}{d}}\, m(\hat\Pi_F v)\right] + \frac{\gamma V_{\max} L}{1-\gamma}\sqrt{\frac{d}{\nu_n}}\left(\sqrt{\frac{8\log(4d/\delta)}{n}} + \frac{1}{n}\right), \quad (1)$$

where the random variable $\nu_n$ is the smallest strictly positive eigenvalue of the sample-based Gram matrix $\frac{1}{n}\Psi^\top\Psi$. Note that $m(\hat\Pi_F v) = m(f_\alpha)$ with $f_\alpha$ any function in $\mathcal{F}$ such that $f_\alpha(X_t) = (\hat\Pi_F v)_t$ for $1 \le t \le n$.
Before stating the proof of Theorem 1, we need to prove the following lemma.
Lemma 1. Let $\mathcal{F}$ and $\mathcal{G}$ be linear spaces with dimensions $D$ and $d$ ($d < D$) as defined in Section 2. Let $\{X_i\}_{i=1}^n$ be $n$ states and $f_\alpha \in \mathcal{F}$. Then for any $\delta > 0$, whenever $d \ge 15\log(4n/\delta)$, with probability $1-\delta$ (the randomness is w.r.t. the random projection), we have

$$\inf_{g\in\mathcal{G}} \|f_\alpha - g\|_n^2 \le \frac{8\log(4n/\delta)}{d}\, m(f_\alpha)^2. \quad (2)$$

Proof. The proof relies on the application of a variant of the Johnson-Lindenstrauss (JL) lemma, which states that inner products are approximately preserved by the application of the random matrix $A$ (see e.g., Proposition 1 in [14]). For any $\delta > 0$, we set $\epsilon^2 = \frac{8}{d}\log(4n/\delta)$. Thus for $d \ge 15\log(4n/\delta)$, we have $\epsilon \le 3/4$ and as a result $\epsilon^2/4 - \epsilon^3/6 \ge \epsilon^2/8$ and $d \ge \frac{\log(4n/\delta)}{\epsilon^2/4 - \epsilon^3/6}$. Thus, from Proposition 1 in [14], for all $1 \le i \le n$, we have $|\varphi(X_i)\cdot\alpha - A\varphi(X_i)\cdot A\alpha| \le \epsilon\,\|\alpha\|_2\,\|\varphi(X_i)\|_2 \le \epsilon\, m(f_\alpha)$ with high probability. From this result, we deduce that with probability $1-\delta$

$$\inf_{g\in\mathcal{G}} \|f_\alpha - g\|_n^2 \le \|f_\alpha - g_{A\alpha}\|_n^2 = \frac{1}{n}\sum_{i=1}^n |\varphi(X_i)\cdot\alpha - A\varphi(X_i)\cdot A\alpha|^2 \le \frac{8\log(4n/\delta)}{d}\, m(f_\alpha)^2.$$

Proof of Theorem 1. For any fixed space $\mathcal{G}$, the performance of the LSTD-RP solution can be bounded according to Theorem 1 in [10] as

$$\|v - \tilde v\|_n \le \frac{1}{\sqrt{1-\gamma^2}}\|v - \hat\Pi_G v\|_n + \frac{\gamma V_{\max} L}{1-\gamma}\sqrt{\frac{d}{\nu_n}}\left(\sqrt{\frac{8\log(2d/\delta')}{n}} + \frac{1}{n}\right), \quad (3)$$

with probability $1-\delta'$ (w.r.t. the random sample path). From the triangle inequality, we have

$$\|v - \hat\Pi_G v\|_n \le \|v - \hat\Pi_F v\|_n + \|\hat\Pi_F v - \hat\Pi_G v\|_n = \|v - \hat\Pi_F v\|_n + \|\hat\Pi_F v - \hat\Pi_G(\hat\Pi_F v)\|_n. \quad (4)$$

The equality in Eq. 4 comes from the fact that for any vector $g \in \mathcal{G}_n$, we can write $\|v - g\|_n^2 = \|v - \hat\Pi_F v\|_n^2 + \|\hat\Pi_F v - g\|_n^2$. Since $\|v - \hat\Pi_F v\|_n$ is independent of $g$, we have $\arg\inf_{g\in\mathcal{G}_n} \|v - g\|_n^2 = \arg\inf_{g\in\mathcal{G}_n} \|\hat\Pi_F v - g\|_n^2$, and thus, $\hat\Pi_G v = \hat\Pi_G(\hat\Pi_F v)$. From Lemma 1, if $d \ge 15\log(4n/\delta'')$, with probability $1-\delta''$ (w.r.t. the choice of $A$), we have

$$\|\hat\Pi_F v - \hat\Pi_G(\hat\Pi_F v)\|_n \le \sqrt{\frac{8\log(4n/\delta'')}{d}}\, m(\hat\Pi_F v). \quad (5)$$

We conclude from a union bound argument that Eqs. 3 and 5 hold simultaneously with probability at least $1 - \delta' - \delta''$. The claim follows by combining Eqs. 3-5, and setting $\delta' = \delta'' = \delta/2$.
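The inner-product preservation borrowed from the JL lemma in the proof of Lemma 1 is easy to probe numerically. The following Monte-Carlo sketch (our own illustration; the Gaussian features are stand-ins for an arbitrary $\varphi$) measures how often the distortion exceeds $\epsilon\, m(f_\alpha)$ with $\epsilon^2 = 8\log(4n/\delta)/d$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, d, delta = 200, 2000, 300, 0.1
eps = np.sqrt(8 * np.log(4 * n / delta) / d)

phis = rng.normal(size=(n, D))   # stand-ins for phi(X_1), ..., phi(X_n)
alpha = rng.normal(size=D)
A = rng.normal(0.0, np.sqrt(1.0 / d), size=(d, D))

m_f = np.linalg.norm(alpha) * np.linalg.norm(phis, axis=1).max()
distortion = np.abs(phis @ alpha - (phis @ A.T) @ (A @ alpha))
print("fraction above eps * m(f):", np.mean(distortion > eps * m_f))
# with these settings the fraction is typically 0, i.e. well below delta
```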
Remark 1. Using Theorem 1, we can compare the performance of LSTD-RP with the performance of LSTD directly applied in the high-dimensional space $\mathcal{F}$. Let $\hat v$ be the LSTD solution in $\mathcal{F}$; then up to constants, logarithmic, and dominated factors, with high probability, $\hat v$ satisfies

$$\|v - \hat v\|_n \le \frac{1}{\sqrt{1-\gamma^2}}\|v - \hat\Pi_F v\|_n + \frac{1}{1-\gamma}\, O\big(\sqrt{D/n}\big). \quad (6)$$

By comparing Eqs. 1 and 6, we notice that 1) the estimation error of $\tilde v$ is of order $O(\sqrt{d/n})$, and thus, is smaller than the estimation error of $\hat v$, which is of order $O(\sqrt{D/n})$, and 2) the approximation error of $\tilde v$ is the approximation error of $\hat v$, $\|v - \hat\Pi_F v\|_n$, plus an additional term that depends on $m(\hat\Pi_F v)$ and decreases with $d$, the dimensionality of $\mathcal{G}$, at the rate $O(\sqrt{1/d})$. Hence, LSTD-RP may have a better performance than solving LSTD in $\mathcal{F}$ whenever this additional term is smaller than the gain achieved in the estimation error. Note that $m(\hat\Pi_F v)$ highly depends on the value function $V$ that is being approximated and on the features of the space $\mathcal{F}$. It is important to carefully tune the value of $d$ as both the estimation error and the additional approximation error in Eq. 1 depend on $d$. For instance, while a small value of $d$ significantly reduces the estimation error (and the need for samples), it may amplify the additional approximation error term, and thus, reduce the advantage of LSTD-RP over LSTD. We may get an idea on how to select the value of $d$ by optimizing the bound

$$d = \frac{m(\hat\Pi_F v)}{\gamma V_{\max} L}\sqrt{\frac{n\,\nu_n\,(1-\gamma)}{1+\gamma}}. \quad (7)$$

Therefore, when $n$ samples are available the optimal value for $d$ is of the order $O(\sqrt{n})$. Using the value of $d$ in Eq. 7, we can rewrite the bound of Eq. 1 as (up to the dominated term $1/n$)

$$\|v - \tilde v\|_n \le \frac{1}{\sqrt{1-\gamma^2}}\|v - \hat\Pi_F v\|_n + \frac{\sqrt{8\log(8n/\delta)}}{1-\gamma}\sqrt{\gamma V_{\max} L\, m(\hat\Pi_F v)}\left(\frac{1-\gamma}{n\,\nu_n\,(1+\gamma)}\right)^{1/4}. \quad (8)$$

Using Eqs. 6 and 8, it would be easier to compare the performance of LSTD-RP and LSTD in space $\mathcal{F}$, and observe the role of the term $m(\hat\Pi_F v)$. For further discussion on $m(\hat\Pi_F v)$ refer to [14] and for the case of $D = \infty$ to Section 4.3 of this paper.
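The trade-off expressed by Eq. 7 is easy to evaluate numerically; the helper below (our own sketch, with made-up example values) computes the optimal random dimension and clips it to the admissible range:

```python
import numpy as np

def optimal_d(m_proj, gamma, v_max, L, n, nu_n, D):
    """Optimal dimension of G according to Eq. 7, kept inside [1, D-1]."""
    d = (m_proj / (gamma * v_max * L)) * np.sqrt(n * nu_n * (1 - gamma) / (1 + gamma))
    return int(np.clip(round(d), 1, D - 1))

# hypothetical numbers: the returned d grows as O(sqrt(n)), as noted above
# optimal_d(m_proj=5.0, gamma=0.9, v_max=10.0, L=1.0, n=10000, nu_n=0.1, D=5000)
```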
Remark 2. As discussed in the introduction, when the dimensionality $D$ of $\mathcal{F}$ is much bigger than the number of samples $n$, the learning algorithms are likely to overfit the data. In this case, it is reasonable to assume that the target vector $v$ itself belongs to the vector space $\mathcal{F}_n$. We state this condition using the following assumption:
Assumption 1. (Overfitting). For any set of $n$ points $\{X_i\}_{i=1}^n$, there exists a function $f \in \mathcal{F}$ such that $f(X_i) = V(X_i)$, $1 \le i \le n$.
Assumption 1 is equivalent to requiring that the rank of the empirical Gram matrix $\frac{1}{n}\Phi^\top\Phi$ be equal to $n$. Note that Assumption 1 is likely to hold whenever $D \gg n$, because in this case we can expect the features to be independent enough on $\{X_i\}_{i=1}^n$ so that the rank of $\frac{1}{n}\Phi^\top\Phi$ equals $n$ (e.g., if the features are linearly independent on the samples, it is sufficient to have $D \ge n$). Under Assumption 1 we can remove the empirical approximation error term in Theorem 1 and deduce the following result.
Corollary 1. Under Assumption 1 and the conditions of Theorem 1, with probability $1-\delta$ (w.r.t. the random sample path and the random space), $\tilde v$ satisfies

$$\|v - \tilde v\|_n \le \frac{1}{\sqrt{1-\gamma^2}}\sqrt{\frac{8\log(8n/\delta)}{d}}\, m(\hat\Pi_F v) + \frac{\gamma V_{\max} L}{1-\gamma}\sqrt{\frac{d}{\nu_n}}\left(\sqrt{\frac{8\log(4d/\delta)}{n}} + \frac{1}{n}\right).$$

4.2 Uniqueness of the LSTD-RP Solution
While the results in the previous section hold for any Markov chain, in this section we assume that the Markov chain $M^\pi$ admits a stationary distribution $\rho$ and is exponentially fast $\beta$-mixing with parameters $\bar\beta, b, \kappa$, i.e., its $\beta$-mixing coefficients satisfy $\beta_i \le \bar\beta\exp(-b\,i^\kappa)$ (see e.g., Sections 8.2 and 8.3 in [10] for a more detailed definition of $\beta$-mixing processes). As shown in [11, 10], if $\rho$ exists, it would be possible to derive a condition for the existence and uniqueness of the LSTD solution depending on the number of samples and the smallest eigenvalue of the Gram matrix defined according to the stationary distribution $\rho$, i.e., $G \in \mathbb{R}^{D\times D}$, $G_{ij} = \int \varphi_i(x)\varphi_j(x)\rho(dx)$. We now discuss the existence and uniqueness of the LSTD-RP solution. Note that as $D$ increases, the smallest eigenvalue of $G$ is likely to become smaller and smaller. In fact, the more features in $\mathcal{F}$, the higher the chance for some of them to be correlated under $\rho$, thus leading to an ill-conditioned matrix $G$. On the other hand, since $d < D$, the probability that $d$ independent random combinations of $\varphi_i$ lead to highly correlated features $\psi_j$ is relatively small. In the following we prove that the smallest eigenvalue of the Gram matrix $H \in \mathbb{R}^{d\times d}$, $H_{ij} = \int \psi_i(x)\psi_j(x)\rho(dx)$, in the random space $\mathcal{G}$ is indeed bigger than the smallest eigenvalue of $G$ with high probability.
Lemma 2. Let $\delta > 0$ and $\mathcal{F}$ and $\mathcal{G}$ be linear spaces with dimensions $D$ and $d$ ($d < D$) as defined in Section 2 with $D > d + 2\sqrt{2d\log(2/\delta)} + 2\log(2/\delta)$. Let the elements of the projection matrix $A$ be Gaussian random variables drawn from $\mathcal{N}(0, 1/d)$. Let the Markov chain $M^\pi$ admit a stationary distribution $\rho$. Let $G$ and $H$ be the Gram matrices according to $\rho$ for the spaces $\mathcal{F}$ and $\mathcal{G}$, and $\omega$ and $\xi$ be their smallest eigenvalues. We have with probability $1-\delta$ (w.r.t. the random space)

$$\xi \ge \frac{D}{d}\,\omega\left(1 - \sqrt{\frac{d}{D}} - \sqrt{\frac{2\log(2/\delta)}{D}}\right)^2. \quad (9)$$
Proof. Let $\beta \in \mathbb{R}^d$ be the eigenvector associated with the smallest eigenvalue $\xi$ of $H$. From the definition of the features $\psi$ of $\mathcal{G}$ ($H = AGA^\top$) and linear algebra, we obtain

$$\xi\|\beta\|_2^2 = \beta^\top H\beta = \beta^\top AGA^\top\beta \ge \omega\,\|A^\top\beta\|_2^2 = \omega\,\beta^\top AA^\top\beta \ge \omega\,\sigma\,\|\beta\|_2^2, \quad (10)$$

where $\sigma$ is the smallest eigenvalue of the random matrix $AA^\top$, or in other words, $\sqrt{\sigma}$ is the smallest singular value of the $D\times d$ random matrix $A^\top$, i.e., $s_{\min}(A^\top) = \sqrt{\sigma}$. We now define $B = \sqrt{d}\,A$. Note that if the elements of $A$ are drawn from the Gaussian distribution $\mathcal{N}(0, 1/d)$, the elements of $B$ are standard Gaussian random variables, and thus, the smallest eigenvalue of $AA^\top$, $\sigma$, can be written as $\sigma = s_{\min}^2(B^\top)/d$. There has been extensive work on extreme singular values of random matrices (see e.g., [19]). For a $D\times d$ random matrix with independent standard normal random variables, such as $B^\top$, we have with probability $1-\delta$ (see [19] for more details)

$$s_{\min}(B^\top) \ge \sqrt{D} - \sqrt{d} - \sqrt{2\log(2/\delta)}. \quad (11)$$

From Eq. 11 and the relation between $\sigma$ and $s_{\min}(B^\top)$, we obtain

$$\sigma \ge \frac{D}{d}\left(1 - \sqrt{\frac{d}{D}} - \sqrt{\frac{2\log(2/\delta)}{D}}\right)^2, \quad (12)$$

with probability $1-\delta$. The claim follows by replacing the bound for $\sigma$ from Eq. 12 in Eq. 10.
The result of Lemma 2 is for Gaussian random matrices. However, it would be possible to extend this result using non-asymptotic bounds for the extreme singular values of more general random matrices [19]. Note that in Eq. 9, $D/d$ is always greater than 1 and the term in the parenthesis approaches 1 for large values of $D$. Thus, we can conclude that with high probability the smallest eigenvalue $\xi$ of the Gram matrix $H$ of the randomly generated low-dimensional space $\mathcal{G}$ is bigger than the smallest eigenvalue $\omega$ of the Gram matrix $G$ of the high-dimensional space $\mathcal{F}$.
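The non-asymptotic bound of Eq. 11 can be sanity-checked by direct sampling. The snippet below (our own illustration, not from the paper) compares the empirical smallest singular value of a $D\times d$ standard Gaussian matrix with the bound $\sqrt{D} - \sqrt{d} - \sqrt{2\log(2/\delta)}$:

```python
import numpy as np

rng = np.random.default_rng(1)
D, d, delta, trials = 2000, 100, 0.05, 50

bound = np.sqrt(D) - np.sqrt(d) - np.sqrt(2 * np.log(2 / delta))
s_min = [np.linalg.svd(rng.normal(size=(D, d)), compute_uv=False)[-1]
         for _ in range(trials)]
print("empirical minimum:", min(s_min), " bound:", bound)
# the bound should fail in at most (roughly) a delta-fraction of the trials
```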
Lemma 3. Let $\delta > 0$ and $\mathcal{F}$ and $\mathcal{G}$ be linear spaces with dimensions $D$ and $d$ ($d < D$) as defined in Section 2 with $D > d + 2\sqrt{2d\log(2/\delta)} + 2\log(2/\delta)$. Let the elements of the projection matrix $A$ be Gaussian random variables drawn from $\mathcal{N}(0, 1/d)$. Let the Markov chain $M^\pi$ admit a stationary distribution $\rho$. Let $G$ be the Gram matrix according to $\rho$ for space $\mathcal{F}$ and $\omega$ be its smallest eigenvalue. Let $\{X_t\}_{t=1}^n$ be a trajectory of length $n$ generated by a stationary $\beta$-mixing process with stationary distribution $\rho$. If the number of samples $n$ satisfies

$$n > \frac{288\,L^2\,d\,\Lambda(n,d,\delta/2)}{\omega\,D}\,\max\left\{\frac{\Lambda(n,d,\delta/2)}{b},\,1\right\}^{1/\kappa}\left(1 - \sqrt{\frac{d}{D}} - \sqrt{\frac{2\log(2/\delta)}{D}}\right)^{-2}, \quad (13)$$

where $\Lambda(n,d,\delta) = 2(d+1)\log n + \log\frac{e}{\delta} + \log^+\big(\max\{18(6e)^{2(d+1)}, \bar\beta\}\big)$, then with probability $1-\delta$, the features $\psi_1, \dots, \psi_d$ are linearly independent on the states $\{X_t\}_{t=1}^n$, i.e., $\|g_\beta\|_n = 0$ implies $\beta = 0$, and the smallest eigenvalue $\nu_n$ of the sample-based Gram matrix $\frac{1}{n}\Psi^\top\Psi$ satisfies

$$\nu_n \ge \nu = \left(\frac{1}{2}\sqrt{\frac{D\,\omega}{d}}\left(1 - \sqrt{\frac{d}{D}} - \sqrt{\frac{2\log(2/\delta)}{D}}\right) - 6L\sqrt{\frac{2\Lambda(n,d,\delta/2)}{n}\max\left\{\frac{\Lambda(n,d,\delta/2)}{b},\,1\right\}^{1/\kappa}}\right)^2 > 0. \quad (14)$$
Proof. The proof follows similar steps as in Lemma 4 in [10]. A sketch of the proof is available
in [6].
By comparing Eq. 13 with Eq. 13 in [10], we can see that the number of samples needed for the empirical Gram matrix $\frac{1}{n}\Psi^\top\Psi$ in $\mathcal{G}$ to be invertible with high probability is less than that for its counterpart $\frac{1}{n}\Phi^\top\Phi$ in the high-dimensional space $\mathcal{F}$.
4.3 Generalization Bound
In this section, we show how Theorem 1 can be generalized to the entire state space $\mathcal{X}$ when the Markov chain $M^\pi$ has a stationary distribution $\rho$. We consider the case in which the samples $\{X_t\}_{t=1}^n$ are obtained by following a single trajectory in the stationary regime of $M^\pi$, i.e., when $X_1$ is drawn from $\rho$. As discussed in Remark 2 of Section 4.1, it is reasonable to assume that the high-dimensional space $\mathcal{F}$ contains functions that are able to perfectly fit the value function $V$ at any finite number $n$ ($n < D$) of states $\{X_t\}_{t=1}^n$, thus we state the following theorem under Assumption 1.
Theorem 2. Let $\delta > 0$ and $\mathcal{F}$ and $\mathcal{G}$ be linear spaces with dimensions $D$ and $d$ ($d < D$) as defined in Section 2 with $d \ge 15\log(8n/\delta)$. Let $\{X_t\}_{t=1}^n$ be a path generated by a stationary $\beta$-mixing process with stationary distribution $\rho$. Let $\tilde V$ be the LSTD-RP solution in the random space $\mathcal{G}$. Then under Assumption 1, with probability $1-\delta$ (w.r.t. the random sample path and the random space),

$$\|V - \mathcal{T}(\tilde V)\|_\rho \le \frac{2}{\sqrt{1-\gamma^2}}\sqrt{\frac{8\log(24n/\delta)}{d}}\, m(\Pi_F V) + \frac{2\gamma V_{\max} L}{1-\gamma}\sqrt{\frac{d}{\nu}}\left(\sqrt{\frac{8\log(12d/\delta)}{n}} + \frac{1}{n}\right) + \epsilon, \quad (15)$$

where $\nu$ is a lower bound on the eigenvalues of the Gram matrix $\frac{1}{n}\Psi^\top\Psi$ defined by Eq. 14 and

$$\epsilon = 24\,V_{\max}\sqrt{\frac{2\Lambda(n,d,\delta/3)}{n}\max\left\{\frac{\Lambda(n,d,\delta/3)}{b},\,1\right\}^{1/\kappa}},$$

with $\Lambda(n,d,\delta)$ defined as in Lemma 3. Note that $\mathcal{T}$ in Eq. 15 is the truncation operator defined in Section 2.

Proof. The proof is a consequence of applying concentration-of-measures inequalities for $\beta$-mixing processes and linear spaces (see Corollary 18 in [10]) to the term $\|V - \mathcal{T}(\tilde V)\|_n$, using the fact that $\|V - \mathcal{T}(\tilde V)\|_n \le \|V - \tilde V\|_n$, and using the bound of Corollary 1. The bound of Corollary 1 and the lower bound on $\nu$ each hold with probability $1-\delta'$, thus the statement of the theorem (Eq. 15) holds with probability $1-\delta$ by setting $\delta = 3\delta'$.
Remark 1. An interesting property of the bound in Theorem 2 is that the approximation error of $V$ in space $\mathcal{F}$, $\|V - \Pi_F V\|_\rho$, does not appear, and the error of the LSTD solution in the randomly projected space only depends on the dimensionality $d$ of $\mathcal{G}$ and the number of samples $n$. However this property is valid only when Assumption 1 holds, i.e., at most for $n \le D$. An interesting case here is when the dimension of $\mathcal{F}$ is infinite ($D = \infty$), so that the bound is valid for any number of samples $n$. In [15], two approximation spaces $\mathcal{F}$ of infinite dimension were constructed based on a multi-resolution set of features that are rescaled and translated versions of a given mother function. In the case that the mother function is a wavelet, the resulting features, called scrambled wavelets, are linear combinations of wavelets at all scales weighted by Gaussian coefficients. As a result, the corresponding approximation space is a Sobolev space $H^s(\mathcal{X})$ with smoothness of order $s > p/2$, where $p$ is the dimension of the state space $\mathcal{X}$. In this case, for a function $f_\alpha \in H^s(\mathcal{X})$, it is proved that the $\ell_2$-norm of the parameter $\alpha$ is equal to the norm of the function in $H^s(\mathcal{X})$, i.e., $\|\alpha\|_2 = \|f_\alpha\|_{H^s(\mathcal{X})}$. We do not describe those results further and refer the interested readers to [15]. What is important about the results of [15] is that they show that it is possible to consider infinite-dimensional function spaces for which $\sup_x \|\varphi(x)\|_2$ is finite and $\|\alpha\|_2$ is expressed in terms of the norm of $f_\alpha$ in $\mathcal{F}$. In such cases, $m(\Pi_F V)$ is finite and the bound of Theorem 2, which does not contain any approximation error of $V$ in $\mathcal{F}$, holds for any $n$. Nonetheless, further investigation is needed to better understand the role of $\|f_\alpha\|_{H^s(\mathcal{X})}$ in the final bound.
Remark 2. As discussed in the introduction, regularization methods have been studied for solving high-dimensional RL problems. Therefore, it is interesting to compare our results for LSTD-RP with those reported in [4] for $\ell_2$-regularized LSTD. Under Assumption 1, when $D = \infty$, by selecting the features as described in the previous remark and optimizing the value of $d$ as in Eq. 7, we obtain

$$\|V - \mathcal{T}(\tilde V)\|_\rho \le O\Big(\sqrt{\|f_\alpha\|_{H^s(\mathcal{X})}}\; n^{-1/4}\Big). \quad (16)$$

Although the setting considered in [4] is different than ours (e.g., the samples are i.i.d.), a qualitative comparison of Eq. 16 with the bound in Theorem 2 of [4] shows a striking similarity in the performance of the two algorithms. In fact, they both contain the Sobolev norm of the target function and have a similar dependency on the number of samples, with a convergence rate of $O(n^{-1/4})$ (when the smoothness of the Sobolev space in [4] is chosen to be half of the dimensionality of $\mathcal{X}$). This similarity asks for further investigation of the difference between $\ell_2$-regularized methods and random projections in terms of prediction performance and computational complexity.
5 LSPI with Random Projections
In this section, we move from policy evaluation to policy iteration and provide a performance bound for LSPI with random projections (LSPI-RP), i.e., a policy iteration algorithm that uses LSTD-RP at each iteration. LSPI-RP starts with an arbitrary initial value function $\tilde V_{-1} \in B(\mathcal{X}; V_{\max})$ and its corresponding greedy policy $\pi_0$. At the first iteration, it approximates $V^{\pi_0}$ using LSTD-RP and returns a function $\tilde V_0$, whose truncated version $\bar V_0 = \mathcal{T}(\tilde V_0)$ is used to build the policy for the second iteration. More precisely, $\pi_1$ is a greedy policy w.r.t. $\bar V_0$. So, at each iteration $k$, a function $\tilde V_{k-1}$ is computed as an approximation to $V^{\pi_{k-1}}$, and then truncated, $\bar V_{k-1}$, and used to build the policy $\pi_k$.¹ Note that in general, the measure $\mu \in S(\mathcal{X})$ used to evaluate the final performance of the LSPI-RP algorithm might be different from the distribution used to generate samples at each iteration. Moreover, the LSTD-RP performance bounds require the samples to be collected by following the policy under evaluation. Thus, we need Assumptions 1-3 in [10] in order to 1) define a lower-bounding distribution $\rho$ with constant $C < \infty$, 2) guarantee that with high probability a unique LSTD-RP solution exists at each iteration, and 3) define the slowest $\beta$-mixing process among all the mixing processes $M^{\pi_k}$ with $0 \le k < K$.
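Algorithmically, LSPI-RP is a thin loop around the LSTD-RP routine of Section 3. The pseudo-Python below is our own sketch of that loop; `collect_trajectory` and `greedy_policy` are hypothetical helpers, and `lstd_rp` is the routine sketched earlier:

```python
import numpy as np

def lspi_rp(env, phi, d, gamma, v_max, K, n, seed=None):
    """K iterations of policy iteration with LSTD-RP policy evaluation."""
    policy = None  # stands for a greedy policy w.r.t. an arbitrary initial V
    for k in range(K):
        states, rewards = collect_trajectory(env, policy, n)  # path under pi_k
        Phi = np.array([phi(s) for s in states])              # (n, D) features
        beta, A = lstd_rp(Phi, rewards, d, gamma, seed)       # evaluate pi_k

        def v_bar(x, beta=beta, A=A):
            # truncated value function, cf. the operator T
            return float(np.clip((A @ phi(x)) @ beta, -v_max, v_max))

        policy = greedy_policy(env, v_bar, gamma)             # pi_{k+1}
    return policy
```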
Theorem 3. Let $\delta > 0$ and $\mathcal{F}$ and $\mathcal{G}$ be linear spaces with dimensions $D$ and $d$ ($d < D$) as defined in Section 2 with $d \ge 15\log(8Kn/\delta)$. At each iteration $k$, we generate a path of size $n$ from the stationary $\beta$-mixing process with stationary distribution $\rho_{k-1} = \rho^{\pi_{k-1}}$. Let $n$ satisfy the condition in Eq. 13 for the slower $\beta$-mixing process. Let $\tilde V_{-1}$ be an arbitrary initial value function, $\tilde V_0, \dots, \tilde V_{K-1}$ ($\bar V_0, \dots, \bar V_{K-1}$) be the sequence of value functions (truncated value functions) generated by LSPI-RP, and $\pi_K$ be the greedy policy w.r.t. $\bar V_{K-1}$. Then, under Assumption 1 and Assumptions 1-3 in [10], with probability $1-\delta$ (w.r.t. the random samples and the random spaces), we have

$$\|V^* - V^{\pi_K}\|_\mu \le \frac{4\gamma}{(1-\gamma)^2}\Bigg\{(1+\gamma)\sqrt{CC_{\mu,\rho}}\Bigg[\frac{2V_{\max}\sqrt{C}}{\sqrt{1-\gamma^2}\,\sqrt{\omega_\rho}}\,\sup_{x\in\mathcal{X}}\|\varphi(x)\|_2\,\sqrt{\frac{8\log(24Kn/\delta)}{d}} + \frac{2\gamma V_{\max} L}{1-\gamma}\sqrt{\frac{d}{\nu_\rho}}\left(\sqrt{\frac{8\log(12Kd/\delta)}{n}} + \frac{1}{n}\right) + E\Bigg] + \gamma^{\frac{K-1}{2}}\, R_{\max}\Bigg\}, \quad (17)$$

where $C_{\mu,\rho}$ is the concentrability term from Definition 2 in [1], $\omega_\rho$ is the smallest eigenvalue of the Gram matrix of space $\mathcal{F}$ w.r.t. $\rho$, $\nu_\rho$ is $\nu$ from Eq. 14 in which $\omega$ is replaced by $\omega_\rho$, and $E$ is $\epsilon$ from Theorem 2 written for the slowest $\beta$-mixing process.
Proof. The proof follows similar lines as in the proof of Thm. 8 in [10] and is available in [6].
Remark. The most critical issue about Theorem 3 is the validity of Assumptions 1-3 in [10]. It is important to note that Assumption 1 is needed to bound the performance of LSPI independently of the use of random projections (see [10]). On the other hand, Assumption 2 is explicitly related to random projections and allows us to bound the term $m(\Pi_F V)$. In order for this assumption to hold, the features $\{\varphi_j\}_{j=1}^D$ of the high-dimensional space $\mathcal{F}$ should be carefully chosen so as to be linearly independent w.r.t. $\rho$.
6 Conclusions
Learning in high-dimensional linear spaces is particularly appealing in RL because it allows to have
a very accurate approximation of value functions. Nonetheless, the larger the space, the higher
the need of samples and the risk of overfitting. In this paper, we introduced an algorithm, called
LSTD-RP, in which LSTD is run in a low-dimensional space obtained by a random projection of
the original high-dimensional space. We theoretically analyzed the performance of LSTD-RP and
showed that it solves the problem of overfitting (i.e., the estimation error depends on the value of
the low dimension) at the cost of a slight worsening in the approximation accuracy compared to the
high-dimensional space. We also analyzed the performance of LSPI-RP, a policy iteration algorithm
that uses LSTD-RP for policy evaluation. The analysis reported in the paper opens a number of interesting research directions such as: 1) comparison of LSTD-RP to $\ell_2$- and $\ell_1$-regularized approaches, and 2) a thorough analysis of the case when $D = \infty$ and the role of $\|f_\alpha\|_{H^s(\mathcal{X})}$ in the bound.
Acknowledgments This work was supported by French National Research Agency through the
projects EXPLO-RA n° ANR-08-COSI-004 and LAMPADA n° ANR-09-EMER-007, by Ministry
of Higher Education and Research, Nord-Pas de Calais Regional Council and FEDER through the
"contrat de projets état région 2007-2013", and by the PASCAL2 European Network of Excellence.
¹ Note that the MDP model is needed to generate a greedy policy $\pi_k$. In order to avoid the need for the model, we can simply move to LSTD-Q with random projections. Although the analysis of LSTD-RP can be extended to action-value functions and LSTD-RP-Q, for simplicity we use value functions in the following.
References
[1] A. Antos, Cs. Szepesvári, and R. Munos. Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path. Machine Learning Journal, 71:89-129, 2008.
[2] J. Boyan. Least-squares temporal difference learning. Proceedings of the 16th International Conference on Machine Learning, pages 49-56, 1999.
[3] S. Bradtke and A. Barto. Linear least-squares algorithms for temporal difference learning. Machine Learning, 22:33-57, 1996.
[4] A. M. Farahmand, M. Ghavamzadeh, Cs. Szepesvári, and S. Mannor. Regularized policy iteration. In Proceedings of Advances in Neural Information Processing Systems 21, pages 441-448. MIT Press, 2008.
[5] A. M. Farahmand, M. Ghavamzadeh, Cs. Szepesvári, and S. Mannor. Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems. In Proceedings of the American Control Conference, pages 725-730, 2009.
[6] M. Ghavamzadeh, A. Lazaric, O. Maillard, and R. Munos. LSPI with random projections. Technical Report inria-00530762, INRIA, 2010.
[7] P. Keller, S. Mannor, and D. Precup. Automatic basis function construction for approximate dynamic programming and reinforcement learning. In Proceedings of the Twenty-Third International Conference on Machine Learning, pages 449-456, 2006.
[8] Z. Kolter and A. Ng. Regularization and feature selection in least-squares temporal difference learning. In Proceedings of the Twenty-Sixth International Conference on Machine Learning, pages 521-528, 2009.
[9] M. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107-1149, 2003.
[10] A. Lazaric, M. Ghavamzadeh, and R. Munos. Finite-sample analysis of least-squares policy iteration. Technical Report inria-00528596, INRIA, 2010.
[11] A. Lazaric, M. Ghavamzadeh, and R. Munos. Finite-sample analysis of LSTD. In Proceedings of the Twenty-Seventh International Conference on Machine Learning, pages 615-622, 2010.
[12] M. Loth, M. Davy, and P. Preux. Sparse temporal difference learning using lasso. In IEEE Symposium on Approximate Dynamic Programming and Reinforcement Learning, pages 352-359, 2007.
[13] S. Mahadevan. Representation policy iteration. In Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, pages 372-379, 2005.
[14] O. Maillard and R. Munos. Compressed least-squares regression. In Proceedings of Advances in Neural Information Processing Systems 22, pages 1213-1221, 2009.
[15] O. Maillard and R. Munos. Brownian motions and scrambled wavelets for least-squares regression. Technical Report inria-00483014, INRIA, 2010.
[16] I. Menache, S. Mannor, and N. Shimkin. Basis function adaptation in temporal difference reinforcement learning. Annals of Operations Research, 134:215-238, 2005.
[17] R. Parr, C. Painter-Wakefield, L. Li, and M. Littman. Analyzing feature generation for value-function approximation. In Proceedings of the Twenty-Fourth International Conference on Machine Learning, pages 737-744, 2007.
[18] M. Petrik, G. Taylor, R. Parr, and S. Zilberstein. Feature selection using regularization in approximate linear programs for Markov decision processes. In Proceedings of the Twenty-Seventh International Conference on Machine Learning, pages 871-878, 2010.
[19] M. Rudelson and R. Vershynin. Non-asymptotic theory of random matrices: extreme singular values. In Proceedings of the International Congress of Mathematicians, 2010.
[20] R. Sutton and A. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[21] S. Vempala. The Random Projection Method. American Mathematical Society, 2004.
A VLSI Implementation of the Adaptive Exponential
Integrate-and-Fire Neuron Model
Sebastian Millner, Andreas Grübl, Karlheinz Meier,
Johannes Schemmel and Marc-Olivier Schwartz
Kirchhoff-Institut für Physik
Ruprecht-Karls-Universität Heidelberg
[email protected]
Abstract
We describe an accelerated hardware neuron being capable of emulating the adaptive exponential integrate-and-fire neuron model. Firing patterns of the membrane
stimulated by a step current are analyzed in transistor level simulations and in
silicon on a prototype chip. The neuron is destined to be the hardware neuron
of a highly integrated wafer-scale system reaching out for new computational
paradigms and opening new experimentation possibilities. As the neuron is dedicated as a universal device for neuroscientific experiments, the focus lays on parameterizability and reproduction of the analytical model.
1 Introduction
Since the beginning of neuromorphic engineering [1, 2] designers have had great success in building VLSI¹ neurons mimicking the behavior of biological neurons using analog circuits [3-8]. The
design approaches are quite different though, as the desired functions constrain the design.
It has been argued [4] whether it is best to emulate an established model or to create a new one using
analog circuits. The second way is taken by [3-7] for instance, aiming at the low power consumption and fault tolerance of neural computation to be used in a computational device in robotics, for
example. This can be done most effectively by the technology-driven design of a new model, fitted
directly to biological results. We approach gaining access to the computational power of neural systems and creating a device being able to emulate biologically relevant spiking neural networks that
can be reproduced in a traditional simulation environment for modeling. The use of a commonly
known model enables modelers to do experiments on neuromorphic hardware and compare them
to simulations. This design methodology has been applied successfully in [8, 9], implementing the
conductance-based integrate-and-fire model [10]. The software framework PyNN [11, 12] even allows for directly switching between a simulator and the neuromorphic hardware device, allowing
modelers to access the hardware on a high level without knowing all implementation details.
The hardware neuron presented here can emulate the adaptive exponential integrate-and-fire neuron
model (AdEx) [13], developed within the FACETS-project [14]. The AdEx model can produce
complex firing patterns observed in biology [15], like spike-frequency-adaptation, bursting, regular
spiking, irregular spiking and transient spiking by tuning a limited number of parameters [16].
¹ Very large scale integration
Completed by the reset conditions, the model can be described by the following two differential
equations for the membrane voltage V and the adaptation variable w:
$$-C_m \frac{dV}{dt} = g_l(V - E_l) - g_l\,\Delta_t\, e^{\frac{V - V_t}{\Delta_t}} + g_e(t)(V - E_e) + g_i(t)(V - E_i) + w; \quad (1)$$
$$-\tau_w \frac{dw}{dt} = w - a(V - E_l). \quad (2)$$
$C_m$, $g_l$, $g_e$ and $g_i$ are the membrane capacitance, the leakage conductance and the conductances for excitatory and inhibitory synaptic inputs, where $g_e$ and $g_i$ depend on time and the inputs from other neurons. $E_l$, $E_i$ and $E_e$ are the leakage reversal potential and the synaptic reversal potentials. The parameters $V_t$ and $\Delta_t$ are the effective threshold potential and the threshold slope factor. The time constant of the adaptation variable is $\tau_w$ and $a$ is called the adaptation parameter. It has the dimension of a conductance.
If the membrane voltage crosses a certain threshold voltage $\Theta$, the neuron is reset:
$$V \to V_{\mathrm{reset}}; \quad (3)$$
$$w \to w + b. \quad (4)$$
The parameter $b$ is responsible for spike-triggered adaptation. Due to the sharp rise created by the exponential term in equation 1, the exact value of $\Theta$ is not critical for the determination of the moment of a spike [13].
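For readers who want to reproduce the firing patterns discussed below, here is a forward-Euler sketch of Eqs. 1-4 (our own Python illustration, not part of the paper; the parameter values are placeholders in the range commonly used for the AdEx model, and the synaptic conductances are omitted since a pure step-current stimulus is considered):

```python
import numpy as np

def simulate_adex(stim, dt=1e-5, T=0.3, C_m=2.81e-10, g_l=3.0e-8,
                  E_l=-70.6e-3, V_t=-50.4e-3, delta_t=2.0e-3,
                  tau_w=0.144, a=4.0e-9, b=8.05e-11,
                  theta=-30.0e-3, V_reset=-70.6e-3):
    """Forward-Euler integration of the AdEx model for a current stimulus stim(t)."""
    steps = int(T / dt)
    V, w = E_l, 0.0
    trace = np.empty(steps)
    for i in range(steps):
        dV = (-g_l * (V - E_l) + g_l * delta_t * np.exp((V - V_t) / delta_t)
              - w + stim(i * dt)) / C_m
        dw = (a * (V - E_l) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V > theta:              # reset conditions, Eqs. 3 and 4
            V, w = V_reset, w + b
        trace[i] = V
    return trace

# step current of 0.5 nA switched on at t = 50 ms:
# v = simulate_adex(lambda t: 5e-10 if t > 0.05 else 0.0)
```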
[Figure 1: phase-plane plot, $w$ [pA] versus $V$ [mV], showing the V-nullcline, the w-nullcline, and the line $V = V_t$]
Figure 1: Phase plane of the AdEx model with parameters according to figure 4 d) from [16], stimulus excluded. V and w will be rising below their nullclines and falling above.
Figure 1 shows the phase plane of the AdEx model with its nullclines. The nullcline of a variable is the curve along which its time derivative is zero. The crossing of the nullclines on the left is the stable fixed point, where the trajectory rests. A constant current stimulus will lift the V-nullcline. For $V > V_t$ below the V-nullcline, the exponential term dominates the derivative of $V$, and $V$ diverges until $\Theta$ is reached.
The neuron is integrated on a prototype chip called HICANN² [17–19] (figure 2), which was produced in 2009. Each HICANN contains 512 dendrite membrane (DenMem) circuits (figure 3), each connected to 224 dynamic input synapses. Neurons are built from DenMems by shorting their membrane capacitances, gaining up to 14336 input synapses for a single neuron. The HICANN is prepared for integration into the FACETS wafer-scale system [17–19], which interconnects 384 HICANNs on an uncut silicon wafer via a high-speed bus system, so networks of up to 196,608 neurons can be emulated on a single wafer.
A major feature of the described hardware neuron is that the size of its components allows working with an acceleration factor of 10³ up to 10⁵ compared to biological real time, enabling the operator to complete several runs of an experiment in a short time, carry out large parameter sweeps, and gain better statistics. Effects occurring on longer timescales, like long-term synaptic plasticity, can be emulated. This way the wafer-scale system can emerge as an alternative and an enhancement to traditional computer simulations in neuroscience. Another VLSI neuron designed with a time scaling factor is presented in [7]. That implementation is capable of reproducing many different firing patterns of cortical neurons, but has no direct correspondence to a neuron model from the modeling community.
²High Input Count Analog Neural Network

Figure 2: Photograph of the HICANN chip (die width about 10 mm), with the bus system, synapse array, neuron block and floating gates annotated.

2 Neuron implementation

2.1 Neuron
The smallest part of a neuron is a DenMem, which implements the terms of the AdEx neuron described above. Each term is constructed by a single circuit using operational amplifiers (OPs) and operational transconductance amplifiers (OTAs) and can be switched off separately, so less complex models, like the leaky integrate-and-fire model implemented in [9], can be emulated. OTAs directly model conductances for small input differences; the conductance is proportional to a biasing current. A first, not completely implemented version of the neuron was proposed in [17]. Some simulation results of the actual neuron can be found in [19].
Figure 3: Schematic diagram of the AdEx neuron circuit: leak, two synaptic input (SynIn), exponential, adaptation, reset and spiking/connection blocks around the membrane capacitance, with connections to neighbour neurons, the STDP/network logic, a current input and a membrane output.

Figure 3 shows a block diagram of a DenMem. During normal operation, the neuron receives rectangular current pulses as input from the synapse array (figure 2) at one of the two synaptic input circuits. Inside these circuits the current is integrated by a leaky-integrator OP circuit, resulting in a voltage that is transformed into a current by an OTA. Using this current as bias for another OTA, a sharply rising and exponentially decaying synaptic input conductance is created. Each DenMem is equipped with two synaptic input circuits, each having its own connection to the synapse array. The output of a synapse can be routed to either of them, which allows for two independent synaptic channels that can be inhibitory or excitatory.
The leakage term of equation 1 can be implemented directly using an OTA, building a conductance between the leakage potential E_l and the membrane voltage V.

Replacing the adaptation variable w in equation 2 by a(V_adapt − E_l) results in:

$$-\tau_w \frac{dV_{adapt}}{dt} = V_{adapt} - V. \quad (5)$$

Now the time constant τ_w shall be created by a capacitance C_adapt and a conductance g_adapt, and we get:

$$-C_{adapt} \frac{dV_{adapt}}{dt} = g_{adapt}\,(V_{adapt} - V). \quad (6)$$

We need to transform b into a voltage using the conductance a and get

$$\frac{I_b\, t_{pulse}}{C_{adapt}} = \frac{b}{a}, \quad (7)$$

where the fixed t_pulse is the time during which a current I_b increases V_adapt on C_adapt at each detected spike of a neuron. These resulting equations for adaptation can be directly implemented as a circuit.
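As a small worked example (added; not from the paper, and relying on the reconstruction of equation 7 above), the circuit quantities relate as follows; the numeric values are illustrative and only roughly within the ranges of Table 1:

```python
# Relations implied by equations 5-7; illustrative values, not chip data.
Cadapt = 2e-12    # adaptation capacitance [F], fixed at 2 pF
gadapt = 100e-9   # adaptation conductance [S]
tpulse = 18e-9    # fixed current-pulse duration [s]
a      = 50e-9    # adaptation parameter [S]
b      = 100e-12  # spike-triggered adaptation increment [A]

tau_w = Cadapt / gadapt            # time constant from eq. 6 matching eq. 5
Ib    = b * Cadapt / (a * tpulse)  # pulse current realizing w -> w + b (eq. 7)
print(f"tau_w = {tau_w * 1e6:.0f} us on chip, Ib = {Ib * 1e9:.0f} nA")
```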
A MOSFET³ connected as a diode is used to emulate the exponential positive feedback of equation 1 (figure 4). To generate the correct gate-source voltage, a non-inverting amplifier multiplies the difference between the membrane voltage and a voltage V_t by an adjustable factor. A simplified version of the circuit can be seen in figure 4. The gate-source voltage of M1 is:

$$V_{GS,M1} = \frac{R_1}{R_2}\,(V - V_t). \quad (8)$$

Inserted into the equation for a MOSFET in sub-threshold mode, this results in a current depending exponentially on V, following equation 1, where Δ_t can be adjusted via the resistors R_1 and R_2. The factor g_l Δ_t in front of the exponential and the V_t of the model can be changed by shifting the circuit's V_t. To realize huge (hundreds of kΩ) variable resistors, the slope of the output characteristic of a MOSFET biased in saturation is used as a replacement for R_1.
Figure 4: Simplified schematic of the exponential circuit
Our neuron detects a spike at a directly adjustable threshold voltage Θ. This is especially necessary as the circuit can implement not only the AdEx model, but also less complex models. In a model without a sharp spike, like the one created by the positive feedback of the exponential term, spike timing depends strongly on the exact voltage Θ. A detected spike triggers a reset of the membrane by a current pulse to a potential V_reset for an adjustable time. Therefore our circuit supports basic modeling of a refractory period in addition to the modeling by the adaptation variable.
2.2 Parameterization

In contrast to most other systems, we use analog floating-gate memories similar to [20] as storage devices for the analog parameters of a neuron. Due to the small size of these cells, we are capable of providing most parameters individually for a single DenMem circuit. This way, matching issues can be counterbalanced, and different types of neurons can be implemented on a single chip, enhancing the universality of the wafer-scale system.

Table 1 shows the parameters used in the implemented AdEx model and the parameter ranges targeted during design. Technical biasing parameters and parameters of the synaptic input circuits are excluded. Parameter ranges spanning several orders of magnitude are necessary, as our neurons can work at different time scalings relative to real time. This is achieved by switching between different multiplication factors for biasing currents. As these switches are set globally, the parameters of a neuron group (one quarter of a HICANN) need to lie within the same order of magnitude.
³Metal-oxide-semiconductor field-effect transistor

Table 1: Neuron parameters

PARAMETER   SHARING      RANGE
g_l         individual   34 nS .. 4 µS
a           individual   34 nS .. 4 µS
g_adapt     individual   5 nS .. 2 µS
I_b         individual   200 nA .. 5 µA
t_pulse     fixed        18 ns
V_reset     global       0 V .. 1.8 V
V_exp       individual   0 V .. 1.8 V
t_reset     global       25 ns .. 500 ns
C_mem       global       400 fF or 2 pF
C_adapt     fixed        2 pF
Δ_t         individual   ~10 mV
Θ           individual   0 V .. 1.8 V
As a starting point for the parameter ranges, [13] and [21] have been used. The chosen ranges allow leakage time constants τ_mem = C_mem/g_l, at an acceleration factor of 10⁴, between 1 ms and 588 ms, and an adaptation time constant τ_w between 10 ms and 5 s in terms of biological real time. The parameters used in [22], for instance, are thus easily reached. When switching to other acceleration modes, the regime of biologically realistic operation narrows, as the needed time constants are shifted by one order of magnitude.
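The bookkeeping behind these numbers can be checked with a short computation (added sketch, not from the paper); the example reproduces the 588 ms endpoint quoted above:

```python
# Convert a hardware leakage time constant to biological real time.
speedup = 1e4      # chosen acceleration factor
Cmem    = 2e-12    # membrane capacitance [F]
gl      = 34e-9    # smallest leakage conductance [S] from Table 1

tau_hw  = Cmem / gl           # about 58.8 us on the chip
tau_bio = speedup * tau_hw    # about 0.588 s, i.e. the 588 ms quoted above
print(f"tau_mem: {tau_hw * 1e6:.1f} us on chip -> {tau_bio * 1e3:.0f} ms biological")
```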
As OTAs are used for modeling conductances, and linear operation of this type of device can only be achieved for small voltage differences, it is necessary to limit the operating range of the variables V and V_adapt to some hundreds of millivolts. If this range is left, the OTAs will no longer act as conductances but as constant current sources; hence, for example, there will be no more spike-triggered adaptation. A neuron can be composed of up to 64 DenMem circuits; hence several different adaptation variables, each with its own time constant, are possible.
2.3 Parameter mapping

For a given set of parameters from the AdEx model, we want to reproduce the exact same behavior with our hardware neuron. Therefore, a simple two-step procedure was developed to translate biological parameters of the AdEx model into hardware parameters. The translation procedure is summarized in figure 5:

Figure 5: Biology-to-hardware parameter translation: biological AdEx parameters → (scaling) → scaled AdEx parameters → (translation) → hardware parameters.
The first step is to scale the biological AdEx parameters in terms of time and voltage. At this stage, the desired time acceleration factor is chosen and applied to the two time constants of the model. Then a voltage scaling factor is defined, by which the biological voltage parameters are multiplied. This factor has to be high enough to improve the signal-to-noise ratio in the hardware, but low enough to stay within the operating range of the OTAs.
The second step is to translate the parameters from the scaled AdEx model to hardware parameters.
For this purpose, each part of the DenMem circuit was characterized in transistor-level simulations
using a circuit simulator. This theoretical characterization was then used to establish mathematical
relations between scaled AdEx parameters and hardware parameters.
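To make the first step concrete, a hypothetical Python sketch of the scaling stage could look as follows (added for illustration; the dictionary keys, the voltage scale of 10 and the 0.9 V offset are assumptions, not the authors' actual translation tables, which are derived from transistor-level simulations as described above):

```python
# Step 1 of the translation (figure 5): scale biological AdEx parameters
# in time and voltage. Keys and scaling constants are assumptions.
def scale_adex(bio, speedup=1e4, v_scale=10.0, v_offset=0.9):
    """bio: dict of biological AdEx parameters in SI units."""
    scaled = dict(bio)
    # Time constants shrink by the chosen acceleration factor.
    for key in ("tau_m", "tau_w"):
        scaled[key] = bio[key] / speedup
    # Voltages are scaled up (better signal-to-noise ratio) and shifted
    # into the operating range of the OTAs; dT is a voltage difference,
    # so it is scaled but not shifted.
    for key in ("El", "Vt", "Vreset", "theta"):
        scaled[key] = bio[key] * v_scale + v_offset
    scaled["dT"] = bio["dT"] * v_scale
    return scaled
```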
2.4 Measurement capabilities

For neuron measurements, the membrane can be stimulated either by incoming events from the synapse array (as an additional feature, a Poisson event source is implemented on the chip) or by a programmable current. This current can be programmed up to a few µA, replaying 129 10-bit values using a sequencer and a digital-to-analog converter. Four current sources are implemented on the chip, allowing adjacent neurons to be stimulated individually. Currently, the maximum period of a current stimulus is limited to 33 µs, but this can easily be extended, as the HICANN host interface allows updating the value storage in real time.

The membrane voltage and all parameters stored in the floating gates can be measured directly via one of the two analog outputs of the HICANN chip. The membrane voltages of two arbitrary neurons can be read out at the same time.

To characterize the chip, parameters like the membrane capacitance need to be measured indirectly, for example using the OTA emulating g_l as a current source.
3 Results

Different firing patterns have been reproduced with our hardware neuron and the current stimulus, both in circuit simulation and in silicon, by injecting a periodic step current onto the membrane. The examined neuron consists of two DenMem circuits with their membrane capacitances switched to 2 pF each. Figure 6 shows some of the reproduced patterns according to [23] and [16], each next to its phase-plane trajectory of V and V_adapt. As the simulation describes an electronic circuit, the trajectories are continuous. All graphs have been recorded injecting a step current of 600 nA onto the membrane. g_adapt and g_l have been chosen equal in all simulations except tonic spiking, to simplify the nullclines:

$$V_{adapt} = -\frac{g_l}{a}(V - E_l) + \frac{g_l \Delta_T}{a}\, e^{\frac{V - V_T}{\Delta_T}} + E_l + \frac{I}{a} \quad (9)$$

$$V_{adapt} = V. \quad (10)$$
As described in [16], the AdEx model allows different types of spike afterpotentials (SAPs). Sharp SAPs are reached if the reset after a spike sets the trajectory to a point below the V-nullcline. If the reset ends in a point above the V-nullcline, the membrane voltage will be pulled down below the reset voltage V_reset by the adaptation current.

The first pattern, tonic spiking with a sharp reset, can be reached either by setting b to a small value and shrinking the adaptation time constant so that V_adapt follows V very quickly (at least, the adaptation time constant must be small enough for V_adapt to regenerate b in the inter-spike interval (ISI)), or by setting a to zero. Here, a has been set to zero, while g_l has been doubled to keep the total conductance at a similar level. Parameters between simulation and measurement are only roughly matched, as the precise mapping algorithm is still in progress; on a real chip there is a variation of transistor parameters which still needs to be counterbalanced by parameter choice.
Spike-frequency adaptation is caused by enlarging V_adapt at each detected spike, while still staying below the V-nullcline (equation 9). As a metric for adaptation, [24] and [16] use the accommodation index:

$$A = \frac{1}{N - k - 1} \sum_{i=k}^{N} \frac{ISI_i - ISI_{i-1}}{ISI_i + ISI_{i-1}} \quad (11)$$

Here k determines the number of ISIs excluded from A to remove transient behavior [15, 24] and can be chosen as one fifth for small numbers of ISIs [24]. The metric averages the difference between neighboring ISIs weighted by their sum, so it should be zero for ideal tonic spiking. For our results we get an accommodation index of 0 ± 0.0003 for fast-spiking neurons in simulation and −0.0004 ± 0.001 in measurement; for adaptation the values are 0.1256 ± 0.0002 and 0.039 ± 0.001, respectively.
Figure 6: Phase-plane (V_adapt versus V) and transient (V versus time) plots from simulations and measurement results of the neuron stimulated by a step current of 600 nA: (a) tonic spiking, (b) spike-frequency adaptation, (c) phasic burst, (d) tonic burst.
As the parameters have been chosen to reproduce the patterns clearly (adaptation is switched off for tonic spiking and strong for spike-frequency adaptation), they are somewhat extreme in comparison to the values calculated in [24], which are 0.0045 ± 0.0023 for fast-spiking interneurons and 0.017 ± 0.004 for adapting neurons.
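For concreteness, a small implementation of the accommodation index of equation 11 could look as follows (added sketch, not from the paper; the handling of the excluded transient ISIs is one reasonable reading of the indexing in the formula):

```python
# Accommodation index A of equation 11, computed from spike times.
import numpy as np

def accommodation_index(spike_times, k=None):
    isi = np.diff(np.asarray(spike_times))  # inter-spike intervals
    N = len(isi)
    if k is None:
        k = max(1, N // 5)  # exclude about one fifth of the ISIs [24]
    num = isi[k:] - isi[k - 1:-1]           # ISI_i - ISI_{i-1}
    den = isi[k:] + isi[k - 1:-1]           # ISI_i + ISI_{i-1}
    return float(np.sum(num / den)) / (N - k - 1)

# Lengthening ISIs give a positive index (adapting neuron):
print(accommodation_index([0.0, 0.010, 0.021, 0.033, 0.046, 0.060]))
```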
It is ambiguous to define a burst by looking only at the spike frequency. We follow the definition used in [16] and define a burst as one or more sharp resets followed by a broad reset. The bursting results can also be found in figure 6. To generate bursting behavior, the reset has to be set to a value above the exponential threshold, so that V is pulled upwards by the exponential directly after a spike.

As can be seen in figure 1, depending on the sharpness Δ_t of the exponential term, the exact reset voltage V_r can be critical for bursting when resetting above the exponential threshold, where the nullcline is already steep. The AdEx model is capable of irregular spiking, in contrast to the Izhikevich neuron [25], which uses a quadratic term to simulate the rise at a spike. The chaotic spiking capability of the AdEx model has been shown in [16]. In hardware, we observe that it is common to reach regimes where the exact number of spikes in a burst is not constant; thus the distance to the next spike or burst may differ in the next period. Another effect is that if the equilibrium potential (the potential where the nullclines cross) is near V_t, noise may cause the membrane to cross V_t and hence generate a spike (compare the phase planes in figure 6 c) and d)).

Figure 6 shows tonic bursting and phasic bursting. In phasic bursting, the nullclines still cross in a stable fixed point: the resting potential determined by adaptation, leakage and stimulus is below the firing threshold of the exponential. Patterns reproduced in experiment and simulation but not shown here are phasic spiking and initial bursting.
4 Discussion

The main feature of our neuron is its capability of directly reproducing the AdEx model. It is neither optimized to be low power nor small in size, in contrast to the postulates of Livi in [6]. Nevertheless, it is low power in comparison to simulation on a supercomputer (an estimated 100 µW, compared to 370 mW on a Blue Gene/P [26] at an acceleration factor of 10⁴; the computing time of the Izhikevich neuron model [23] is used as the estimate), and it does not consume much chip area in comparison to the synapse array and communication infrastructure on the HICANN (figure 2). Complex individual parameterization allows adaptation to different models. As our model works on an accelerated time scale of up to 10⁵ times faster than biological real time, it is neither possible nor desirable to interact with systems relying on biological real time. Instead, by scaling the system up to about a million neurons, it will be possible to do experiments which have never been feasible so far due to the long duration of numerical simulations at this scale, i.e., allowing large parameter sweeps, dense real-world stimuli, as well as many repetitions of experiments for gaining statistics.

Due to the design approach (implementing an established model instead of developing a new model best fitted to hardware devices), we gain a neuron that allows neuroscientists to do experiments without being hardware specialists.
5 Outlook

The neuron topology, in which several DenMems are interconnected to form a neuron, is predestined to be enhanced to a multi-compartment model. This will be the next design step.

The simulations and measurements in this work qualitatively reproduce patterns observed in biology and reproduced by the AdEx model in [16]. A method to directly map the parameters of the AdEx model quantitatively to the simulations has already been developed. This method needs to be extended to a mapping onto the real hardware, counterbalancing mismatch and accounting for limited parameter resolution.

Nested in the FACETS wafer-scale system, our neuron will complete the universality of the system as a versatile core for analog computation. Encapsulation of the parameter mapping in low-level software and PyNN [12] integration of the system will allow computational neuroscientists to run experiments on the hardware and compare them to simulations, or to do large experiments currently not implementable in a simulation.
Acknowledgments
This work is supported in part by the European Union under the grant no. IST-2005-15879
(FACETS).
References
[1] Carver A. Mead and M. A. Mahowald. A silicon model of early visual processing. Neural Networks, 1(1):91–97, 1988.
[2] C. A. Mead. Analog VLSI and Neural Systems. Addison Wesley, Reading, MA, 1989.
[3] Misha Mahowald and Rodney Douglas. A silicon neuron. Nature, 354(6354):515–518, Dec 1991.
[4] E. Farquhar and P. Hasler. A bio-physically inspired silicon neuron. Circuits and Systems I: Regular Papers, IEEE Transactions on, 52(3):477–488, March 2005.
[5] J.V. Arthur and K. Boahen. Silicon neurons that inhibit to synchronize. In Circuits and Systems, 2007. ISCAS 2007. IEEE International Symposium on, pages 1186–1186, 27-30 2007.
[6] P. Livi and G. Indiveri. A current-mode conductance-based silicon neuron for address-event neuromorphic systems. In Circuits and Systems, 2009. ISCAS 2009. IEEE International Symposium on, pages 2898–2901, 24-27 2009.
[7] Jayawan H.B. Wijekoon and Piotr Dudek. Compact silicon neuron circuit with spiking and bursting behaviour. Neural Networks, 21(2-3):524–534, 2008. Advances in Neural Networks Research: IJCNN '07, 2007 International Joint Conference on Neural Networks IJCNN '07.
[8] J. Schemmel, A. Grübl, K. Meier, and E. Muller. Implementing synaptic plasticity in a VLSI spiking neural network model. In Proceedings of the 2006 International Joint Conference on Neural Networks (IJCNN). IEEE Press, 2006.
[9] J. Schemmel, D. Brüderle, K. Meier, and B. Ostendorf. Modeling synaptic plasticity within networks of highly accelerated I&F neurons. In Proceedings of the 2007 IEEE International Symposium on Circuits and Systems (ISCAS), pages 3367–3370. IEEE Press, 2007.
[10] Alain Destexhe. Conductance-based integrate-and-fire models. Neural Comput., 9(3):503–514, 1997.
[11] Daniel Brüderle, Eric Müller, Andrew Davison, Eilif Muller, Johannes Schemmel, and Karlheinz Meier. Establishing a novel modeling tool: A python-based interface for a neuromorphic hardware system. Front. Neuroinform., 3(17), 2009.
[12] A. P. Davison, D. Brüderle, J. Eppler, J. Kremkow, E. Muller, D. Pecevski, L. Perrinet, and P. Yger. PyNN: a common interface for neuronal network simulators. Front. Neuroinform., 2(11), 2008.
[13] R. Brette and W. Gerstner. Adaptive exponential integrate-and-fire model as an effective description of neuronal activity. J. Neurophysiol., 94:3637–3642, 2005.
[14] FACETS. Fast Analog Computing with Emergent Transient States, project website. http://www.facets-project.org, 2010.
[15] Henry Markram, Maria Toledo-Rodriguez, Yun Wang, Anirudh Gupta, Gilad Silberberg, and Caizhi Wu. Interneurons of the neocortical inhibitory system. Nat Rev Neurosci, 5(10):793–807, Oct 2004.
[16] Richard Naud, Nicolas Marcille, Claudia Clopath, and Wulfram Gerstner. Firing patterns in the adaptive exponential integrate-and-fire model. Biological Cybernetics, 99(4):335–347, Nov 2008.
[17] J. Schemmel, J. Fieres, and K. Meier. Wafer-scale integration of analog neural networks. In Proceedings of the 2008 International Joint Conference on Neural Networks (IJCNN), 2008.
[18] J. Fieres, J. Schemmel, and K. Meier. Realizing biological spiking network models in a configurable wafer-scale hardware system. In Proceedings of the 2008 International Joint Conference on Neural Networks (IJCNN), 2008.
[19] J. Schemmel, D. Brüderle, A. Grübl, M. Hock, K. Meier, and S. Millner. A wafer-scale neuromorphic hardware system for large-scale neural modeling. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS), pages 1947–1950, 2010.
[20] T.S. Lande, H. Ranjbar, M. Ismail, and Y. Berg. An analog floating-gate memory in a standard digital technology. In Microelectronics for Neural Networks, 1996., Proceedings of Fifth International Conference on, pages 271–276, 12-14 1996.
[21] Alain Destexhe, Diego Contreras, and Mircea Steriade. Mechanisms underlying the synchronizing action of corticothalamic feedback through inhibition of thalamic relay cells. Journal of Neurophysiology, 79:999–1016, 1998.
[22] Martin Pospischil, Maria Toledo-Rodriguez, Cyril Monier, Zuzanna Piwkowska, Thierry Bal, Yves Frégnac, Henry Markram, and Alain Destexhe. Minimal Hodgkin–Huxley type models for different classes of cortical and thalamic neurons. Biological Cybernetics, 99(4):427–441, Nov 2008.
[23] Eugene M. Izhikevich. Which Model to Use for Cortical Spiking Neurons? IEEE Transactions on Neural Networks, 15:1063–1070, 2004.
[24] Shaul Druckmann, Yoav Banitt, Albert Gidon, Felix Schürmann, Henry Markram, and Idan Segev. A novel multiple objective optimization framework for constraining conductance-based neuron models by experimental data. Front Neurosci, 1(1):7–18, Nov 2007.
[25] Eugene M. Izhikevich. Simple Model of Spiking Neurons. IEEE Transactions on Neural Networks, 14:1569–1572, 2003.
[26] IBM. System blue gene solution. ibm.com/systems/deepcomputing/bluegene/, 2010.
Michael I. Jordan
University of California, Berkeley
[email protected]
Fabian L. Wauthier
University of California, Berkeley
[email protected]
Abstract
Heavy-tailed distributions are often used to enhance the robustness of regression
and classification methods to outliers in output space. Often, however, we are confronted with ?outliers? in input space, which are isolated observations in sparsely
populated regions. We show that heavy-tailed stochastic processes (which we construct from Gaussian processes via a copula), can be used to improve robustness
of regression and classification estimators to such outliers by selectively shrinking
them more strongly in sparse regions than in dense regions. We carry out a theoretical analysis to show that selective shrinkage occurs when the marginals of the
heavy-tailed process have sufficiently heavy tails. The analysis is complemented
by experiments on biological data which indicate significant improvements of estimates in sparse regions while producing competitive results in dense regions.
1
Introduction
Gaussian process classifiers (GPCs) [12] provide a Bayesian approach to nonparametric classification with the key advantage of producing predictive class probabilities. Unfortunately, when training
data are unevenly sampled in input space, GPCs tend to overfit in the sparsely populated regions.
Our work is motivated by an application to protein folding where this presents a major difficulty.
In particular, while Nature provides samples of protein configurations near the global minima of
free energy functions, protein-folding algorithms, which imitate Nature by minimizing an estimated
energy function, necessarily explore regions far from the minimum. If the estimate of free energy is
poor in those sparsely-sampled regions then the algorithm has a poor guide towards the minimum.
More generally this problem can be viewed as one of "covariate shift," where the sampling pattern
differs in the training and testing phase.
In this paper we investigate a GPC-based approach that addresses overfitting by shrinking predictive
class probabilities towards conservative values. For an unevenly sampled input space it is natural
to consider a selective shrinkage strategy: we wish to shrink probability estimates more strongly in
sparse regions than in dense regions. To this end several approaches could be considered. If sparse
regions can be readily identified, selective shrinkage could be induced by tailoring the Gaussian
process (GP) kernel to reflect that information. In the absence of such knowledge, Goldberg and
Williams [5] showed that Gaussian process regression (GPR) can be augmented with a GP on the
log noise level. More recent work has focused on partitioning input space into discrete regions
and defining different kernel functions on each. Treed Gaussian process regression [6] and Treed
Gaussian process classification [1] represent advanced variations of this theme that define a prior
distribution over partitions and their respective kernel hyperparameters. Another line of research
which could be adapted to this problem posits that the covariate space is a nonlinear deformation
of another space on which a Gaussian process prior is placed [3, 13]. Instead of directly modifying
the kernel matrix, the observed non-uniformity of measurements is interpreted as being caused by
the spatial deformation. A difficulty with all these approaches is that posterior inference is based on
MCMC, which can be overly slow for the large-scale problems that we aim to address.
This paper shows that selective shrinkage can be more elegantly introduced by replacing the Gaussian process underlying GPC with a stochastic process that has heavy-tailed marginals (e.g., Laplace,
hyperbolic secant, or Student-t). While heavy-tailed marginals are generally viewed as providing robustness to outliers in the output space (i.e., the response space), selective shrinkage can be viewed
as a form of robustness to outliers in the input space (i.e., the covariate space). Indeed, selective
shrinkage means the data points that are far from other data points in the input space are regularized
more strongly. We provide a theoretical analysis and empirical results to show that inference based
on stochastic processes with heavy-tailed marginals yields precisely this kind of shrinkage.
The paper is structured as follows: Section 2 provides background on GPCs and highlights how
selective shrinkage can arise. We present a construction of heavy-tailed processes in Section 3 and
show that inference reduces to standard computations in a Gaussian process. An analysis of our
approach is presented in Section 4 and details on inference algorithms are presented in Section 5.
Experiments on biological data in Section 6 demonstrate that heavy-tailed process classification
substantially outperforms GPC in sparse regions while performing competitively in dense regions.
The paper concludes with an overview of related research and final remarks in Sections 7 and 8.
2 Gaussian process classification and shrinkage

A Gaussian process (GP) [12] is a prior on functions z : X → R defined through a mean function (usually identically zero) and a symmetric positive semidefinite kernel k(·,·). For a finite set of locations X = (x_1, ..., x_n) we write z(X) ∼ p(z(X)) = N(0, K(X,X)) as a random variable distributed according to the GP with finite-dimensional kernel matrix [K(X,X)]_{i,j} = k(x_i, x_j). Let y denote an n-vector of binary class labels associated with measurement locations X¹. For Gaussian process classification (GPC) [12] the probability that a test point x_* is labeled as class y_* = 1, given training data (X, y), is computed as

$$p(y_* = 1 \mid X, y, x_*) = \mathbb{E}_{p(z(x_*)\mid X, y, x_*)}\left[\frac{1}{1 + \exp\{-z(x_*)\}}\right] \quad (1)$$

$$p(z(x_*)\mid X, y, x_*) = \int p(z(x_*)\mid X, z(X), x_*)\, p(z(X)\mid X, y)\, dz(X).$$

The predictive distribution p(z(x_*)|X, y, x_*) represents a regression on z(x_*) with a complicated observation model y|z. The central observation from Eq. (1) is that we could selectively shrink the prediction p(y_* = 1|X, y, x_*) towards a conservative value 1/2 by selectively shrinking p(z(x_*)|X, y, x_*) closer to a point mass at zero.
3 Heavy-tailed process priors via the Gaussian copula

In this section we construct the heavy-tailed stochastic process by transforming a GP. As with the GP, we will treat the new process as a prior on functions. Suppose that diag(K(X,X)) = σ²1. We define the heavy-tailed process f(X) with marginal c.d.f. G_b as

$$z(X) \sim N(0, K(X,X)) \quad (2)$$
$$u(X) = \Phi_{0,\sigma^2}(z(X)) \quad (3)$$
$$f(X) = G_b^{-1}(u(X)) = G_b^{-1}(\Phi_{0,\sigma^2}(z(X))).$$

Here the function Φ_{0,σ²}(·) is the c.d.f. of a centered Gaussian with variance σ². Presently, we only consider the case when G_b is the (continuous) c.d.f. of a heavy-tailed density g_b with scale parameter b that is symmetric about the origin. Examples include the Laplace, hyperbolic secant and Student-t distributions. We note that other authors have considered asymmetric or even discrete distributions [2, 11, 16], while Snelson et al. [15] use arbitrary monotonic transformations in place of G_b^{-1}(Φ_{0,σ²}(·)). The process u(X) has the density of a Gaussian copula [10, 16] and is critical in transferring the correlation structure encoded by K(X,X) from z(X) to f(X). If we define z(f(X)) = Φ_{0,σ²}^{-1}(G_b(f(X))), it is well known [7, 9, 11, 15, 16] that the density of f(X) satisfies

$$p(f(X)) = \frac{\prod_{i=1}^{n} g_b(f(x_i))}{|K(X,X)/\sigma^2|^{1/2}} \exp\left\{-\frac{1}{2}\, z(f(X))^\top \left(K(X,X)^{-1} - \frac{I}{\sigma^2}\right) z(f(X))\right\}. \quad (4)$$

¹To improve the clarity of exposition, we only deal with binary classification for now. A full multiclass classification model is used in our experiments.
Observe that if K(X,X) = σ²I then p(f(X)) = ∏ᵢ g_b(f(xᵢ)). Also note that if G_b were chosen to be Gaussian, we would recover the Gaussian process. The predictive distribution p(f(x_*)|X, f(X), x_*) can be interpreted as heavy-tailed process regression (HPR). It is easy to see that its computation can be reduced to standard computations in a Gaussian model by nonlinearly transforming observations f(X) into z-space. The predictive distribution in z-space satisfies

$$p(z(x_*) \mid X, f(X), x_*) = N(\mu_*, \Sigma_*) \quad (5)$$
$$\mu_* = K(x_*, X)\, K(X,X)^{-1}\, z(f(X)) \quad (6)$$
$$\Sigma_* = K(x_*, x_*) - K(x_*, X)\, K(X,X)^{-1}\, K(X, x_*). \quad (7)$$
The corresponding distribution in f -space follows by another change of variables. Having defined
the heavy-tailed stochastic process in general we now turn to an analysis of its shrinkage properties.
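To make the construction and the z-space computations concrete, here is an illustrative Python sketch (added; not from the paper), using Laplace marginals G_b; SciPy's norm and laplace supply Φ_{0,σ²} and G_b:

```python
# Sampling from the heavy-tailed process (Eqs. 2-3) and the z-space
# predictive computation of HPR (Eqs. 5-7), with Laplace marginals.
import numpy as np
from scipy.stats import norm, laplace

def sample_htp(K, b, rng):
    """Draw f(X) given kernel matrix K with diag(K) = sigma^2 * I."""
    sigma = np.sqrt(K[0, 0])
    z = rng.multivariate_normal(np.zeros(len(K)), K)   # Eq. 2
    u = norm.cdf(z, scale=sigma)                       # Eq. 3: u = Phi(z)
    return laplace.ppf(u, scale=b)                     # f = G_b^{-1}(u)

def hpr_predict(K, Ks, kss, f, b):
    """Predictive mean and variance of z(x_*) given f(X) (Eqs. 5-7)."""
    sigma = np.sqrt(K[0, 0])
    z = norm.ppf(laplace.cdf(f, scale=b), scale=sigma)  # z(f(X))
    mu = Ks @ np.linalg.solve(K, z)                     # Eq. 6
    var = kss - Ks @ np.linalg.solve(K, Ks)             # Eq. 7
    return mu, var
```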
4 Selective shrinkage
By "selective shrinkage" we mean that the degree of shrinkage applied to a collection of estimators
varies across estimators. As motivated in Section 2, we are specifically interested in selectively
shrinking posterior distributions near isolated observations more strongly than in dense regions.
This section shows that we can achieve this by changing the form of prior marginals (heavy-tailed
instead of Gaussian) and that this induces stronger selective shrinkage than any GPR could induce.
Since HPR uses a GP in its construction, which can induce some selective shrinkage on its own, care
must be taken to investigate only the additional benefits the transformation G_b^{-1}(Φ_{0,σ²}(·)) has on
shrinkage. For this reason we assume a particular GP prior which leads to a special type of shrinkage
in GPR and then check how an HPR model built on top of that GP changes the observed behavior.
In this section we provide an idealized analysis that allows us to compare the selective shrinkage
obtained by GPR and HPR. Note that we focus on regression in this section so that we can obtain
analytical results. We work with n measurement locations, X = (x_1, ..., x_n), whose index set {1, ..., n} can be partitioned into a "dense" set D with |D| = n − 1 and a single "sparse" index s ∉ D. Assume that x_d = x_{d'}, ∀d, d' ∈ D, so that we may let (without loss of generality) K̃(x_d, x_{d'}) = 1, ∀d ≠ d' ∈ D. We also assert that x_d ≠ x_s ∀d ∈ D and let K̃(x_d, x_s) = K̃(x_s, x_d) = 0 ∀d ∈ D. Assuming that n > 2, we fix the remaining entry K̃(x_s, x_s) = ε/(ε + n − 2), for some ε > 0. We interpret ε as a noise variance and let K = K̃ + εI.
Denote any distributions computed under the GPR model by p_gp(·) and those computed in HPR by p_hp(·). Using K(X,X) = K, define z(X) as in Eq. (2). Let y denote a vector of real-valued measurements for a regression task. The posterior distribution of z(x_i) given y, with x_i ∈ X, is derived by standard Gaussian computations as

$$p_{gp}(z(x_i) \mid X, y) = N(\mu_i, \sigma_i^2)$$
$$\mu_i = \tilde K(x_i, X)\, K(X,X)^{-1}\, y$$
$$\sigma_i^2 = \tilde K(x_i, x_i) - \tilde K(x_i, X)\, K(X,X)^{-1}\, \tilde K(X, x_i).$$
For our choice of K(X,X) one can show that σ_d² = σ_s² for d ∈ D. To ensure that the posterior distributions agree at the two locations we require μ_d = μ_s, which holds if measurements y satisfy

$$y \in Y_{gp} \triangleq \left\{ y \;\middle|\; \left[\tilde K(x_d, X) - \tilde K(x_s, X)\right] K(X,X)^{-1} y = 0 \right\} = \left\{ y \;\middle|\; \sum_{d \in D} y_d = y_s \right\}.$$
A similar analysis can be carried out for the induced HPR model. By Eqs. (5)–(7), HPR inference leads to identical distributions p_hp(z(x_d)|X, y′) = p_hp(z(x_s)|X, y′) with d ∈ D if measurements y′ in f-space satisfy

$$y' \in Y_{hp} \triangleq \left\{ y' \;\middle|\; \left[\tilde K(x_d, X) - \tilde K(x_s, X)\right] K(X,X)^{-1}\, \Phi_{0,\sigma^2}^{-1}(G_b(y')) = 0 \right\} = \left\{ y' = G_b^{-1}(\Phi_{0,\sigma^2}(y)) \;\middle|\; y \in Y_{gp} \right\}.$$
Figure 1: Illustration of G_b^{-1}(Φ_{0,σ²}(x)) for σ = 1.0, with G_b the c.d.f. of (a) the Laplace distribution, g_b(x) = (1/2b) exp{−|x|/b}; (b) the hyperbolic secant distribution, g_b(x) = (1/2b) sech(πx/(2b)); (c) a Student-t inspired distribution, g_b(x) = 1/(b(2 + (x/b)²)^{3/2}), all with scale parameter b. Each plot shows three curves (dotted, dashed, solid) for growing b. As b increases the distributions become heavy-tailed and the gradient of G_b^{-1}(Φ_{0,σ²}(x)) increases.
To compare the shrinkage properties of GPR and HPR we analyze select pairs of measurements in Y_gp and Y_hp. The derivation requires that G_b^{-1}(Φ_{0,σ²}(·)) is strongly concave on (−∞, 0], strongly convex on [0, +∞) and has gradient > 1 on R. To see intuitively why this should hold, note that for G_b with fatter tails than a Gaussian, |G_b^{-1}(Φ_{0,σ²}(x))| should eventually dominate |Φ_{0,b²}^{-1}(Φ_{0,σ²}(x))| = (b/σ)|x|. Figure 1 demonstrates graphically that the assumption holds for several choices of G_b, provided b is large enough, i.e., that g_b has sufficiently heavy tails. Indeed, it can be shown that for scale parameters b > 0, the first and second derivatives of G_b^{-1}(Φ_{0,σ²}(·)) scale linearly with b. Consider a measurement 0 ≠ y ∈ Y_gp with sign(y(x_d)) = sign(y(x_{d'})), ∀d, d' ∈ D. Analyzing such y is relevant, as we are most interested in comparing how multiple reinforcing observations at clustered locations and a single isolated observation are absorbed during inference. By definition of Y_gp, for d* = argmax_{d∈D} |y_d| we have |y_{d*}| < |y_s| as long as n > 2. The corresponding element y′ = G_b^{-1}(Φ_{0,σ²}(y)) ∈ Y_hp then satisfies

$$|y'(x_s)| = \left|G_b^{-1}(\Phi_{0,\sigma^2}(y(x_s)))\right| > \left|\frac{G_b^{-1}(\Phi_{0,\sigma^2}(y(x_{d^*})))}{y(x_{d^*})}\, y(x_s)\right| = \left|\frac{y'(x_{d^*})}{y(x_{d^*})}\, y(x_s)\right|. \quad (8)$$

Thus HPR inference leads to identical predictive distributions in f-space at the two locations even though the isolated observation y′(x_s) has disproportionately larger magnitude than y′(x_{d*}), relative to the GPR measurements y(x_s) and y(x_{d*}). As this statement holds for any y ∈ Y_gp satisfying our earlier sign requirement, it indicates that HPR systematically shrinks isolated observations more strongly than GPR. Since the second derivative of G_b^{-1}(Φ_{0,σ²}(·)) scales linearly with the scale b > 0, an intuitive connection suggests itself when looking at inequality (8): the heavier the marginal tails, the stronger the inequality and thus the stronger the selective shrinkage effect.
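A quick numerical sanity check of these properties for Laplace marginals (added here; not part of the paper):

```python
# For large enough b, G_b^{-1}(Phi_{0,sigma^2}(x)) should be convex on
# [0, inf) with gradient exceeding 1, as assumed in the derivation above.
import numpy as np
from scipy.stats import norm, laplace

sigma, b = 1.0, 3.0
x = np.linspace(0.0, 3.0, 301)
g = laplace.ppf(norm.cdf(x, scale=sigma), scale=b)

grad = np.gradient(g, x)
print("min gradient on [0, 3]:", grad.min())           # > 1 for this b
print("convex:", bool(np.all(np.diff(grad) > -1e-6)))  # nondecreasing slope
```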
The previous derivation exemplifies in an idealized setting that HPR leads to improved shrinkage of
predictive distributions near isolated observations. More generally, because GPR transforms measurements only linearly, while HPR additionally pre-transforms measurements nonlinearly, our analysis suggests that for any GPR we can find an HPR model which leads to stronger selective shrinkage. The result has intuitive parallels to the parametric case: just as ℓ1-regularization improves
shrinkage of parametric estimators, heavy-tailed processes improve shrinkage of nonparametric estimators. We note that although our analysis kept K(X, X) fixed for GPR and HPR, in practice we
are free to tune the kernel to yield a desired scale of predictive distributions. The above analysis
has been carried out for regression, but motivates us to now explore heavy-tailed processes in the
classification case.
5 Heavy-tailed process classification

The derivation of heavy-tailed process classification (HPC) is similar to that of standard multiclass GPC with Laplace approximation in Rasmussen and Williams [12]. However, due to the nonlinear transformations involved, some nice properties of their derivation are lost. We revert notation and let y denote a vector of class labels. For a C-class classification problem with n training points we introduce a vector of nC latent function measurements (f_1^1, ..., f_n^1, f_1^2, ..., f_n^2, ..., f_1^C, ..., f_n^C)^⊤. For each block c ∈ {1, ..., C} of n variables we define an independent heavy-tailed process prior using Eq. (4) with kernel matrix K_c. Equivalently, we can define the prior jointly on f by letting K be a block-diagonal kernel matrix with blocks K_1, ..., K_C. Each kernel matrix K_c is defined by a (possibly different) symmetric positive semidefinite kernel with its own set of parameters. The following construction relaxes the earlier condition that diag(K) = σ²1 and instead views Φ_{0,σ²}(·) as some nonlinear transformation with parameter σ². By this relaxation we effectively adopt Liu et al.'s [9] interpretation that Eq. (4) defines the copula. The scale parameters b could in principle vary across the nC variables, but we keep them constant at least within each block of n. Labels y are represented in 1-of-n form and generated by the following observation model
$$p(y_i^c = 1 \mid f_i) = \pi_i^c = \frac{\exp\{f_i^c\}}{\sum_{c'} \exp\{f_i^{c'}\}}. \quad (9)$$

For inference we are ultimately interested in computing

$$p(y_*^c = 1 \mid X, y, x_*) = \mathbb{E}_{p(f_* \mid X, y, x_*)}\left[\frac{\exp\{f_*^c\}}{\sum_{c'} \exp\{f_*^{c'}\}}\right], \quad (10)$$

where f_* = (f_*^1, ..., f_*^C)^⊤. The previous section motivates that improved selective shrinkage will occur in p(f_*|X, y, x_*), provided the prior marginals have sufficiently heavy tails.

5.1 Inference

As in GPC, most of the intractability lies in computing the predictive distribution p(f_*|X, y, x_*). We use the Laplace approximation to address this issue: a Gaussian approximation to p(z|X, y) is found and then combined with the Gaussian p(z_*|X, z, x_*) to give an approximation to p(z_*|X, y, x_*). This is then transformed to a (typically non-Gaussian) distribution in f-space using a change of variables. Hence we first seek a mode and corresponding Hessian matrix of the log posterior log p(z|X, y). Recalling the relation f = G_b^{-1}(Φ_{0,σ²}(z)), the log posterior can be written as
$$J(z) \triangleq \log p(y|z) + \log p(z) = y^\top f - \sum_i \log \sum_c \exp\{f_i^c\} - \frac{1}{2} z^\top K^{-1} z - \frac{1}{2} \log|K| + \text{const.}$$

Let Π be an nC × n matrix of stacked diagonal matrices diag(π^c) for the n-subvectors π^c of π. With W = diag(π) − ΠΠ^⊤, the gradients are

$$\nabla J(z) = \mathrm{diag}\left(\frac{df}{dz}\right)(y - \pi) - K^{-1} z$$

$$\nabla^2 J(z) = \mathrm{diag}\left(\frac{d^2 f}{dz^2}\right)\mathrm{diag}(y - \pi) - \mathrm{diag}\left(\frac{df}{dz}\right) W\, \mathrm{diag}\left(\frac{df}{dz}\right) - K^{-1}.$$

Unlike in Rasmussen and Williams [12], −∇²J(z) is not generally positive definite owing to its first term. For that reason we cannot use a Newton step to find the mode and instead resort to a simpler gradient method. Once the mode ẑ has been found we approximate the posterior as

$$p(z \mid X, y) \approx q(z \mid X, y) = N\left(\hat z,\, -\nabla^2 J(\hat z)^{-1}\right),$$

and use this to approximate the predictive distribution by

$$q(z_* \mid X, y, x_*) = \int p(z_* \mid X, z, x_*)\, q(z \mid X, y)\, dz.$$

Since we arranged for both distributions in the integral to be Gaussian, the resulting Gaussian can be straightforwardly evaluated. Finally, to approximate the one-dimensional integral with respect to p(f_*|X, y, x_*) in Eq. (10) we can either use a quadrature method, or generate samples from q(z_*|X, y, x_*), convert them to f-space using G_b^{-1}(Φ_{0,σ²}(·)), and then approximate the expectation by an average. We have compared predictions of the latter method with those of a Gibbs sampler; the Laplace approximation matched the Gibbs results well, while being much faster to compute.
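A minimal sketch of the gradient method just described (added for illustration; the transformation f(z), its derivative, and the softmax probabilities are assumed to be supplied as functions):

```python
# Gradient ascent on J(z) using the gradient stated above.
import numpy as np

def find_mode(K, y, f_of_z, dfdz, pi_of_f, lr=1e-2, iters=5000, tol=1e-8):
    """Return an approximate mode z-hat of J(z) = log p(y|z) + log p(z)."""
    Kinv = np.linalg.inv(K)
    z = np.zeros(len(y))
    for _ in range(iters):
        # grad J(z) = diag(df/dz)(y - pi) - K^{-1} z
        grad = dfdz(z) * (y - pi_of_f(f_of_z(z))) - Kinv @ z
        z += lr * grad
        if np.linalg.norm(grad) < tol:
            break
    return z
```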
5
pi
r=1
r=2
r=3
O
C
H
?
0
pi/2
Rotamer r ? {1, 2, 3}
C?
?
?
Residue
ue
id
Res
Re
sid
ue
0
N
C0
N
?pi/2
H
H
O
?pi
?pi
?pi/2
0
?
pi/2
pi
(b)
(a)
Figure 2: (a) Schematic of a protein segment. The backbone is the sequence of C 0 , N, C? , C 0 , N
atoms. An amino-acid-specific sidechain extends from the C? atom at one of three discrete angles known as ?rotamers.? (b) Ramachandran plot of 400 (?, ?) measurements and corresponding
rotamers (by shapes/colors) for amino-acid arginine (arg). The dark shading indicates the sparse
region we considered in producing results in Figure 3. Progressively lighter shadings indicate how
the sparse region was grown to produce Figure 4.
5.2 Parameter estimation

Using a derivation similar to that in [12], we have for f̂ = G_b^{-1}(Φ_{0,σ²}(ẑ)) that the Laplace approximation of the marginal log likelihood is

$$\log p(y|x) \approx \log q(y|x) = J(\hat z) - \frac{1}{2}\log\left|-(2\pi)^{-1}\nabla^2 J(\hat z)\right| \quad (11)$$

$$= y^\top \hat f - \sum_i \log \sum_c \exp\{\hat f_i^c\} - \frac{1}{2}\hat z^\top K^{-1}\hat z - \frac{1}{2}\log|K| - \frac{1}{2}\log\left|-\nabla^2 J(\hat z)\right| + \text{const.}$$

We optimize the kernel parameters θ by taking gradient steps on log q(y|x). The derivative needs to take into account that perturbing the parameters can also perturb the mode ẑ found for the Laplace approximation. At an optimum, ∇J(ẑ) must be zero, so that

$$\hat z = K\, \mathrm{diag}\!\left(\frac{d\hat f}{d\hat z}\right)(y - \hat\pi), \quad (12)$$

where π̂ is defined as in Eq. (9) but using f̂ rather than f. Taking derivatives of this equation allows us to compute the gradient dẑ/dθ. Differentiating the marginal likelihood we have

$$\frac{d\log q(y|x)}{d\theta} = \left((y-\hat\pi)^\top \mathrm{diag}\!\left(\frac{d\hat f}{d\hat z}\right) - \hat z^\top K^{-1}\right)\frac{d\hat z}{d\theta} + \frac{1}{2}\hat z^\top K^{-1}\frac{dK}{d\theta}K^{-1}\hat z - \frac{1}{2}\mathrm{tr}\!\left(K^{-1}\frac{dK}{d\theta}\right) - \frac{1}{2}\mathrm{tr}\!\left(\nabla^2 J(\hat z)^{-1}\frac{d\nabla^2 J(\hat z)}{d\theta}\right).$$

The remaining gradient computations are straightforward, albeit tedious. In addition to optimizing the kernel parameters, it may also be of interest to optimize the scale parameter b of the marginals G_b. Again, differentiating Eq. (12) with respect to b allows us to compute dẑ/db. We note that when perturbing b we change f̂ by changing the underlying mode ẑ as well as by changing the parameter b which is used to compute f̂ from ẑ. Suppressing the detailed computations, the derivative of the marginal log likelihood with respect to b is

$$\frac{d\log q(y|x)}{db} = (y-\hat\pi)^\top\frac{d\hat f}{db} - \frac{d\hat z^\top}{db} K^{-1}\hat z - \frac{1}{2}\mathrm{tr}\!\left(\nabla^2 J(\hat z)^{-1}\frac{d\nabla^2 J(\hat z)}{db}\right).$$
Figure 3: Rotamer prediction rates in percent in (a) sparse and (b) dense regions, per residue (trp, tyr, ser, phe, glu, asn, leu, thr, his, asp, arg, cys, lys, met, gln, ile, val), for HPC with hyperbolic secant marginals, HPC with Laplace marginals, and GPC. Both flavors of HPC significantly outperform GPC in sparse regions while performing competitively in dense regions.

6 Experiments

To a first approximation, the three-dimensional structure of a folded protein is defined by pairs of continuous backbone angles (φ, ψ), one pair for each amino-acid, as well as discrete angles, so-called rotamers, that define the conformations of the amino-acid sidechains that extend from the backbone. The geometry is outlined in Figure 2(a). There is a strong dependence between backbone angles (φ, ψ) and rotamer values; this is illustrated in the "Ramachandran plot" shown in Figure 2(b), which plots the backbone angles for each rotamer (indicated by the shapes/colors).
The dependence is exploited in computational approaches to protein structure prediction, where
estimates of rotamer probabilities given backbone angles are used as one term in an energy function
that models native protein states as minima of the energy. Poor estimates of rotamer probabilities
in sparse regions can derail the prediction procedure. Indeed, sparsity has been a serious problem
in state-of-the-art rotamer models based on kernel density estimates (Roland Dunbrack, personal
communication). Unfortunately, we have found that GPC is not immune to the sparsity problem.
To evaluate our algorithm we consider rotamer-prediction tasks on the 17 amino-acids (out of 20) that have three rotamers at the first dihedral angle along the sidechain². Our previous work thus applies with the number of classes C = 3 and the covariates being (φ, ψ) angle pairs. Since the input space is a torus, we defined GPC and HPC using the following von Mises-inspired kernel for d-dimensional angular data:

$$k(x_i, x_j) = \gamma \exp\left\{\kappa\left(\sum_{k=1}^{d} \cos(x_{i,k} - x_{j,k}) - d\right)\right\},$$

where x_{i,k}, x_{j,k} ∈ [0, 2π] and γ, κ ≥ 0³.
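An illustrative implementation of this kernel (added sketch; the parameter names γ and κ follow the reconstruction of the equation above):

```python
# Von Mises-inspired kernel for d-dimensional angular data.
import numpy as np

def vonmises_kernel(xi, xj, gamma=1.0, kappa=1.0):
    """xi, xj: arrays of angles in [0, 2*pi] of equal length d."""
    xi, xj = np.asarray(xi), np.asarray(xj)
    return gamma * np.exp(kappa * (np.sum(np.cos(xi - xj)) - len(xi)))
```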
To find good GPC kernel parameters we optimize an ℓ2-regularized version of the Laplace approximation to the log marginal likelihood reported in
Eq. 3.44 of [12]. For HPC we let Gb be either the centered Laplace distribution or the hyperbolic
secant distribution with scale parameter b. We estimate HPC kernel parameters as well as b by
similarly maximizing an ℓ2-regularized form of Eq. (11). In both cases we restricted the algorithms
to training sets of only 100 datapoints. Since good regularization parameters for the objectives are
not known a priori we train with and test them on a grid for each of the 17 rotameric residues in
ten-fold cross-validation. To find good regularization parameters for a particular residue we look up
that combination which, averaged over the ten folds of the remaining 16 residues, produced the best
test results. Having chosen the regularization constants we report average test results computed in
ten-fold cross validation.
We evaluate the algorithms on predefined sparse and dense regions in the Ramachandran plot, as
indicated by the background shading in Figure 2(b). Across 17 residues the sparse regions usually
contained more than 70 measurements (and often more than 150), each of which appears in one
of the 10 cross validations. Figure 3 compares the label prediction rates on the dense and sparse
²Residues alanine and glycine are non-discrete, while proline has two rotamers at the first dihedral angle.
³The function cos(x_{i,k} − x_{j,k}) = [cos(x_{i,k}), sin(x_{i,k})] [cos(x_{j,k}), sin(x_{j,k})]^⊤ is a symmetric positive semi-definite kernel. By Propositions 3.22 (i) and (ii) and Proposition 3.25 in Shawe-Taylor and Cristianini [14], so is k(x_i, x_j) above.
Figure 4: Average rotamer prediction rate in the sparse region for two flavors of HPC (hyperbolic secant and Laplace marginals), standard GPC, as well as CTGP [1], as a function of the average number of points per residue in the sparse region (155 to 3906).
regions. Averaged over all 17 residues HPC outperforms GPC by 5.79% with Laplace and 7.89%
with hyperbolic secant marginals. With Laplace marginals HPC underperforms GPC on only two
residues in sparse regions: by 8.22% on glutamine (gln), and by 2.53% on histidine (his). On
dense regions HPC lies within 0.5% on 16 residues and only degrades once by 3.64% on his.
Using hyperbolic secant marginals HPC often improves GPC by more than 10% on sparse regions
and degrades by more than 5% only on cysteine (cys) and his. On dense regions HPC usually
performs within 1.5% of GPC. In Figure 4 we show how the average rotamer prediction rate across
17 residues changes for HPC, GPC, as well as CTGP [1] as we grow the sparse region to include
more measurements from dense regions. The growth of the sparse region is indicated by progressively lighter shadings in Figure 2(b). As more points are included the significant advantage of HPC
lessens. Eventually GPC does marginally better than HPC and much better than CTGP. The values
reported in Figure 3 correspond to the dark shaded region, with an average of 155 measurements.
7 Related research
Copulas [10] allow convenient modelling of multivariate correlation structures as separate from
marginal distributions. Early work by Song [16] used the Gaussian copula to generate complex
multivariate distributions by complementing a simple copula form with marginal distributions of
choice. Popularity of the Gaussian copula in the financial literature is generally credited to Li [8]
who used it to model correlation structure for pairs of random variables with known marginals. More
recently, the Gaussian process has been modified in a similar way to ours by Snelson et al. [15].
They demonstrate that posterior distributions can better approximate the true noise distribution if
the transformation defining the warped process is learned. Jaimungal and Ng [7] have extended
this work to model multiple parallel time series with marginally non-Gaussian stochastic processes.
Their work uses a "binding copula" to combine several subordinate copulas into a joint model.
Bayesian approaches focusing on estimation of the Gaussian copula covariance matrix for a given
dataset are given in [4, 11]. Research also focused on estimation in high-dimensional settings [9].
8 Conclusions
This paper analyzed learning scenarios where outliers are observed in the input space, rather than
the output space as commonly discussed in the literature. We illustrated heavy-tailed processes as
a straightforward extension of GPs and an economical way to improve the robustness of estimators
in sparse regions beyond those of GP-based methods. Importantly, because these processes are
based on a GP, they inherit many of its favorable computational properties; predictive inference
in regression, for instance, is straightforward. Moreover, because heavy-tailed processes have a
parsimonious representation, they can be used as building blocks in more complicated models where
currently GPs are used. In this way the benefits of heavy-tailed processes extend to any GP-based
model that struggles with covariate shift.
Acknowledgements
We thank Roland Dunbrack for helpful discussions and providing access to the rotamer datasets.
References
[1] Tamara Broderick and Robert B. Gramacy. Classification and Categorical Inputs with Treed Gaussian Process Models. Journal of Classification. To appear.
[2] Wei Chu and Zoubin Ghahramani. Gaussian Processes for Ordinal Regression. Journal of Machine Learning Research, 6:1019–1041, 2005.
[3] Doris Damian, Paul D. Sampson, and Peter Guttorp. Bayesian Estimation of Semi-Parametric Non-Stationary Spatial Covariance Structures. Environmetrics, 12:161–178.
[4] Adrian Dobra and Alex Lenkoski. Copula Gaussian Graphical Models. Technical report, Department of Statistics, University of Washington, 2009.
[5] Paul W. Goldberg, Christopher K. I. Williams, and Christopher M. Bishop. Regression with Input-dependent Noise: A Gaussian Process Treatment. In Advances in Neural Information Processing Systems, volume 10, pages 493–499. MIT Press, 1998.
[6] Robert B. Gramacy and Herbert K. H. Lee. Bayesian Treed Gaussian Process Models with an Application to Computer Modeling. Journal of the American Statistical Association, 2007.
[7] Sebastian Jaimungal and Eddie K. Ng. Kernel-based Copula Processes. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases, pages 628–643. Springer-Verlag, 2009.
[8] David X. Li. On Default Correlation: A Copula Function Approach. Technical Report 99-07, RiskMetrics Group, New York, April 2000.
[9] Han Liu, John Lafferty, and Larry Wasserman. The Nonparanormal: Semiparametric Estimation of High Dimensional Undirected Graphs. Journal of Machine Learning Research, 10:1–37, 2009.
[10] Roger B. Nelsen. An Introduction to Copulas. Springer, 1999.
[11] Michael Pitt, David Chan, and Robert J. Kohn. Efficient Bayesian Inference for Gaussian Copula Regression Models. Biometrika, 93(3):537–554, 2006.
[12] Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[13] Alexandra M. Schmidt and Anthony O'Hagan. Bayesian Inference for Nonstationary Spatial Covariance Structure via Spatial Deformations. Journal of the Royal Statistical Society, Ser. B, 65(3):743–758, 2003.
[14] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[15] Ed Snelson, Carl E. Rasmussen, and Zoubin Ghahramani. Warped Gaussian Processes. In Advances in Neural Information Processing Systems, volume 16, pages 337–344, 2004.
[16] Peter Xue-Kun Song. Multivariate Dispersion Models Generated From Gaussian Copula. Scandinavian Journal of Statistics, 27(2):305–320, 2000.
3,308 | 3,997 | Group Sparse Coding with a Laplacian Scale Mixture
Prior
Bruno A. Olshausen
Helen Wills Neuroscience Institute
School of Optometry
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Pierre J. Garrigues
IQ Engines, Inc.
Berkeley, CA 94704
[email protected]
Abstract
We propose a class of sparse coding models that utilizes a Laplacian Scale Mixture
(LSM) prior to model dependencies among coefficients. Each coefficient is modeled as a Laplacian distribution with a variable scale parameter, with a Gamma
distribution prior over the scale parameter. We show that, due to the conjugacy
of the Gamma prior, it is possible to derive efficient inference procedures for both
the coefficients and the scale parameter. When the scale parameters of a group of
coefficients are combined into a single variable, it is possible to describe the dependencies that occur due to common amplitude fluctuations among coefficients,
which have been shown to constitute a large fraction of the redundancy in natural images [1]. We show that, as a consequence of this group sparse coding, the
resulting inference of the coefficients follows a divisive normalization rule, and
that this may be efficiently implemented in a network architecture similar to that
which has been proposed to occur in primary visual cortex. We also demonstrate
improvements in image coding and compressive sensing recovery using the LSM
model.
1 Introduction
The concept of sparsity is widely used in the signal processing, machine learning and statistics
communities for model fitting and solving inverse problems. It is also important in neuroscience as
it is thought to underlie the neural representations used by the brain. The operation to compute the
sparse representation of a signal x ∈ ℝ^n with respect to a dictionary of basis functions Φ ∈ ℝ^{n×m}
can be implemented via an ℓ1-penalized least-square problem commonly referred to as Basis Pursuit
Denoising (BPDN) [2] or Lasso [3]
min_s  (1/2) ‖x − Φs‖²₂ + λ ‖s‖₁,   (1)
where λ is a regularization parameter that controls the tradeoff between the quality of the reconstruction and the sparsity. This approach has been applied to problems such as image coding, compressive
sensing [4], or classification [5]. The ℓ1 penalty leads to solutions where typically a large number
of coefficients are exactly zero, which is a desirable property to achieve model selection or data
compression, or for obtaining interpretable results. The cost function of BPDN is convex, and many
efficient algorithms have been recently developed to solve this problem [6, 7, 8, 9].
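As an aside for readers who want to experiment with problem (1), a minimal iterative soft-thresholding (ISTA) solver is sketched below. It is one generic proximal method, added here for illustration; it is not the specific algorithm of [6, 7, 8, 9], and the iteration count is an arbitrary placeholder.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bpdn_ista(Phi, x, lam, n_iter=500):
    """Minimize 0.5 * ||x - Phi @ s||_2**2 + lam * ||s||_1 by ISTA."""
    L = np.linalg.norm(Phi, 2) ** 2   # Lipschitz constant of the gradient
    s = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ s - x)
        s = soft_threshold(s - grad / L, lam / L)
    return s
```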
Minimizing the cost function of BPDN corresponds to MAP inference in a probabilistic model
where the coefficients are independent and have Laplacian priors p(s_i) = (λ/2) e^{−λ|s_i|}. Hence, the
signal model assumed by BPDN is linear, generative, and the basis function coefficients are independent. In the context of analysis-based models of natural images (for a review on analysis-based
1
and synthesis-based or generative models see [10]), it has been shown that the linear responses of
natural images to Gabor-like filters have kurtotic histograms, and that there can be strong dependencies among these responses in the form of common amplitude fluctuations [11, 12, 13, 14]. It has
also been observed in the context of generative image models that the inferred sparse coefficients
exhibit pronounced statistical dependencies [15, 16], and therefore the independence assumption is
violated. It has been proposed in block-`1 methods to account for dependencies among the coefficients by dividing them into subspaces such that dependencies within the subspaces are allowed, but
not across the subspaces [17] . This approach can produce blocking artifacts and has recently been
generalized to overlapping subspaces in [18]. Another approach is to only allow certain configurations of active coefficients [19].
We propose in this paper a new class of prior on the basis function coefficients that makes it possible
to model their statistical dependencies in a probabilistic generative model, whose inferred representations are more sparse than those obtained with the factorial Laplacian prior, and for which we have
efficient inference algorithms. Our approach consists of introducing for each coefficient a hyperprior
on the inverse scale parameter λ_i of the Laplacian distribution. The coefficient prior is thus a mixture
of Laplacian distributions which we denote a "Laplacian Scale Mixture" (LSM), which is an analogy
to the Gaussian scale mixture (GSM) [12]. Higher-order dependencies of feedforward responses of
wavelet coefficients [12] or basis functions learned using independent component analysis [14] have
been captured using GSMs, and we extend this approach to a generative sparse coding model using
LSMs.
We define the Laplacian scale mixture in Section 2, and we describe the inference algorithms in the
resulting sparse coding models with an LSM prior on the coefficients in Section 3. We present an
example of a factorial LSM model in Section 4, and of a non-factorial LSM model in Section 5 that
is particularly well suited to signals having the ?group sparsity? property. We show that the nonfactorial LSM results in a divisive normalization rule for inferring the coefficients. When the groups
are organized topographically and the basis is trained on natural images, the resulting model resembles the neighborhood divisive normalization that has been hypothesized to occur in visual cortex.
We also demonstrate that the proposed LSM inference algorithm provides superior performance in
image coding and compressive sensing recovery.
2 The Laplacian Scale Mixture distribution
A random variable s_i is a Laplacian scale mixture if it can be written s_i = λ_i^{−1} u_i, where u_i has
a Laplacian distribution with scale 1, i.e. p(u_i) = (1/2) e^{−|u_i|}, and the multiplier variable λ_i is a
positive random variable with probability density p(λ_i). We also suppose that λ_i and u_i are independent.
Conditioned on the parameter λ_i, the coefficient s_i has a Laplacian distribution with inverse scale λ_i,
i.e. p(s_i | λ_i) = (λ_i/2) e^{−λ_i |s_i|}. The distribution over s_i is therefore a continuous mixture of Laplacian
distributions with different inverse scales, and it can be computed by integrating out λ_i:

p(s_i) = ∫₀^∞ p(s_i | λ_i) p(λ_i) dλ_i = ∫₀^∞ (λ_i/2) e^{−λ_i |s_i|} p(λ_i) dλ_i.
Note that for most choices of p(λ_i) we do not have an analytical expression for p(s_i). We denote
such a distribution a Laplacian Scale Mixture (LSM). It is a special case of the Gaussian Scale
Mixture (GSM) [12] as the Laplacian distribution can be written as a GSM.
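To make the definition concrete, the snippet below draws samples from an LSM with the Gamma hyperprior analyzed in Section 4; it is a sketch we add for illustration, with arbitrary parameter values.

```python
import numpy as np

def sample_lsm(alpha, beta, size=10000, seed=0):
    """Draw s = u / lam with lam ~ Gamma(shape=alpha, rate=beta) and
    u ~ Laplace(0, 1) independent, so that s is an LSM variate."""
    rng = np.random.default_rng(seed)
    lam = rng.gamma(shape=alpha, scale=1.0 / beta, size=size)
    u = rng.laplace(loc=0.0, scale=1.0, size=size)
    return u / lam

s = sample_lsm(alpha=3.0, beta=1.0)
# The samples are noticeably heavier-tailed than a plain Laplacian draw.
```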
3 Inference in a sparse coding model with LSM prior
We propose the linear generative model
x = Φs + ε = Σ_{i=1}^{m} s_i φ_i + ε,   (2)

where x ∈ ℝ^n, Φ = [φ_1, . . . , φ_m] ∈ ℝ^{n×m} is an overcomplete transform or basis set, and the
columns φ_i are its basis functions. ε ∼ N(0, σ² I_n) is small Gaussian noise. The coefficients are
endowed with LSM distributions. They can be used to reconstruct x and are called the synthesis
coefficients.
2
Given a signal x, we wish to infer its sparse representation s in the dictionary ?. We consider in this
section the computation of the maximum a posteriori (MAP) estimate of the coefficients s given the
input signal x. Using Bayes? rule we have p(s | x) ? p(x | s)p(s), and therefore the MAP estimate
s? is given by
s? = arg min {? log p(s | x)} = arg min {? log p(x | s) ? log p(s)}.
(3)
s
s
In general it is difficult to compute the MAP estimate with an LSM prior on s since we do not
necessarily have an analytical expression for the log-likelihood log p(s). However, we can compute
the complete log-likelihood log p(s, λ) analytically:

log p(s, λ) = log p(s | λ) + log p(λ) = Σ_i ( −λ_i |s_i| + log(λ_i / 2) ) + log p(λ).
Hence, if we also observed the latent variable λ, we would have an objective function that can be
maximized with respect to s. The standard approach in machine learning when confronted with
such a problem is the Expectation-Maximization (EM) algorithm, and we derive in this Section an
EM algorithm for the MAP estimation of the coefficients. We use Jensen's inequality and obtain the
following upper bound on the posterior likelihood:

− log p(s | x) ≤ − log p(x | s) − ∫_λ q(λ) log [ p(s, λ) / q(λ) ] dλ := L(q, s),   (4)
which is true for any probability distribution q(λ). Performing coordinate descent in the auxiliary
function L(q, s) leads to the following updates that are usually called the E step and the M step.
E step:   q^{(t+1)} = arg min_q L(q, s^{(t)})   (5)

M step:   s^{(t+1)} = arg min_s L(q^{(t+1)}, s)   (6)
Let ⟨·⟩_q denote the expectation with respect to q(λ). The M step (6) simplifies to

s^{(t+1)} = arg min_s  (1 / 2σ²) ‖x − Φs‖²₂ + Σ_{i=1}^{m} ⟨λ_i⟩_{q^{(t+1)}} |s_i|,   (7)
which is a least-square problem regularized by a weighted sum of the absolute values of the coefficients. It is a quadratic program very similar to BPDN, and we can therefore use efficient algorithms
developed for BPDN that take advantage of the sparsity of the solution. This presents a significant
computational advantage over the GSM prior where the inferred coefficients are not exactly sparse.
We have equality in the Jensen inequality if q(λ) = p(λ | s). The inequality (4) is therefore tight for
this particular choice of q, which implies that the E step reduces to q^{(t+1)}(λ) = p(λ | s^{(t)}). Note
that in the M step we only need to compute the expectation of λ_i with respect to the maximizing
distribution in the E step. Hence we only need to compute the sufficient statistics
⟨λ_i⟩_{p(λ | s^{(t)})} = ∫_λ λ_i p(λ | s^{(t)}) dλ.   (8)
Note that the posterior of the multiplier given the coefficient p(λ | s) might be hard to compute. We
will see in Section 4.1 that it is tractable if the prior on λ is factorial and each λ_i has a Gamma distribution,
as the Laplacian distribution and the Gamma distribution are conjugate. We can apply the
efficient algorithms developed for BPDN to solve (7). Furthermore, warm-start capable algorithms
are particularly interesting in this context as we can initialize the algorithm with s^{(t)}, and we do not
expect the solution to change much after a few iterations of EM.
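Putting the two steps together, MAP inference is a loop of expectation updates and weighted ℓ1 solves. The scaffold below is our own sketch; `weighted_bpdn` is a hypothetical stand-in for any warm-startable weighted BPDN solver, and the E step anticipates the Gamma-prior expectation derived in Section 4.1.

```python
import numpy as np

def lsm_map_em(Phi, x, sigma2, alpha, beta, weighted_bpdn, n_em=5):
    """MAP inference of the coefficients under a factorial LSM prior.

    weighted_bpdn(Phi, x, w, s_init) should minimize
        1/(2*sigma2) * ||x - Phi @ s||_2**2 + sum_i w[i] * |s[i]|,
    warm-started at s_init.
    """
    s = np.zeros(Phi.shape[1])
    for _ in range(n_em):
        w = (alpha + 1.0) / (beta + np.abs(s))   # E step, Eq. (9)
        s = weighted_bpdn(Phi, x, w, s)          # M step, Eq. (7)
    return s
```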
4 Sparse coding with a factorial LSM prior
We propose in this Section a sparse coding model where the distribution of the multipliers is factorial,
and each multiplier has a Gamma distribution, i.e. p(λ_i) = (β^α / Γ(α)) λ_i^{α−1} e^{−βλ_i}, where α is
the shape parameter and β is the inverse scale parameter. With this particular choice of a prior on
the multiplier, we can compute the probability distribution of s_i analytically:

p(s_i) = α β^α / ( 2 (β + |s_i|)^{α+1} ).

This distribution has heavier tails than the Laplacian distribution. The graphical model corresponding to this generative model is shown in Figure 1.
4.1 Conjugacy
The Gamma distribution and Laplacian distribution are conjugate, i.e. the posterior probability of
λ_i given s_i is also a Gamma distribution when the prior over λ_i is a Gamma distribution and the
conditional probability of s_i given λ_i is a Laplace distribution with inverse scale λ_i. Hence, the
posterior of λ_i given s_i is a Gamma distribution with parameters α + 1 and β + |s_i|.
The conjugacy is a key property that we can use in our EM algorithm proposed in Section 3. We saw
that the solution of the E step is given by q^{(t+1)}(λ) = p(λ | s^{(t)}). In the factorial model we have
p(λ | s) = Π_i p(λ_i | s_i). The solution of the E step is therefore a product of Gamma distributions
with parameters α + 1 and β + |s_i^{(t)}|, and the sufficient statistics (8) are given by

⟨λ_i⟩_{p(λ_i | s_i^{(t)})} = (α + 1) / (β + |s_i^{(t)}|).   (9)
A coefficient that has a small value after t iterations but is not exactly zero will have in the next
iteration a large reweighting factor λ_i^{(t+1)}, which increases the chance that it will be set to zero
in the next iteration, resulting in a sparser representation. On the other hand, a coefficient having
a large value after t iterations corresponds to a feature that is very salient in the signal x. It is
therefore beneficial to reduce its corresponding inverse scale λ_i^{(t+1)} such that it is not penalized and
can account for as much information as possible.
We saw that with the Gamma prior we can compute the distribution of si analytically, and therefore
we can compute the gradient of log p(s | x) with respect to s. Hence another inference algorithm
is to descend the cost function in (3) directly using a method such as conjugate gradient, or the
method proposed in [20] where the authors also exploit the conjugacy of the Laplacian and Gamma
priors. We argue here that the EM algorithm is in fact more efficient. The solution of (7) indeed has
typically few elements that are non-zero, and the computational complexity scales with the number
of non-zero coefficients [6, 7]. On the other hand, a gradient-based method will have a harder time
identifying the support of the solution, and therefore the required computations will involve all the
coefficients, which is computationally expensive.
The update formula (9) is coincidentally equivalent to the reweighted ℓ1 minimization scheme proposed by Candès et al. [21]. They solve the following sequence of problems

s^{(t+1)} = arg min_s  Σ_{i=1}^{m} λ_i^{(t)} |s_i|   subject to   ‖x − Φs‖₂ ≤ δ,   (10)

with update λ_i^{(t+1)} = 1 / (β + |s_i^{(t)}|) (which is identical to our rule when α = 0). The authors show
all i. Whereas they derive this rule from mathematical intuitions regarding the L1 ball, we show
that this update rule follows from from Bayesian inference assuming a Gamma prior over ?. It
was also shown that evidence maximization in a sparse coding model with an automatic relevance
determination prior can also be solved via a sequence of reweighted `1 optimization problems [22].
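The correspondence can be seen directly in code: the loop below alternates a weighted Basis Pursuit solve with the update (9), and setting alpha = 0 recovers the scheme of Candès et al. This is our own sketch; `weighted_bp` is a hypothetical placeholder for any solver of the constrained problem in (10).

```python
import numpy as np

def reweighted_l1(Phi, x, delta, alpha, beta, weighted_bp, n_iter=5):
    """Sequence of weighted l1 problems as in Eq. (10).

    weighted_bp(Phi, x, w, delta) should return
        argmin_s sum_i w[i] * |s[i]|  s.t.  ||x - Phi @ s||_2 <= delta.
    """
    w = np.ones(Phi.shape[1])
    for _ in range(n_iter):
        s = weighted_bp(Phi, x, w, delta)
        w = (alpha + 1.0) / (beta + np.abs(s))  # update (9); Candes et al. when alpha = 0
    return s
```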
4.2 Application to image coding
It has been shown that the convex relaxation consisting of replacing the ℓ0 norm with the ℓ1 norm is
able to identify the sparsest solution under some conditions on the dictionary of basis functions [23].
However, these conditions are typically not verified for the dictionaries learned from the statistics
of natural images [24]. For instance, it was observed in [16] that it is possible to infer sparser
representations with a prior over the coefficients that is a mixture of a delta function at zero and a
Gaussian distribution than with the Laplacian prior. We show that our proposed inference algorithm
also leads to representations that are more sparse, as the LSM prior with Gamma hyperprior has
heavier tails than the Laplacian distribution. We selected 1000 16 × 16 image patches at random,
and computed their sparse representations in a dictionary with 256 basis functions using both the
conventional Laplacian prior and our LSM prior. The dictionary is learned from the statistics of
natural images [24] using a Laplacian prior over the coefficients. To ensure that the reconstruction
error is the same in both cases, we solve the constrained version of the problem as in [21], where we
require that the signal-to-noise ratio of the reconstruction is equal to 10. We choose β = 0.01 and 5
EM iterations. We can see in Figure 2 that the representations using the LSM prior are indeed more
sparse by approximately a factor of 2. Note that the computational complexity to compute these
sparse representations is much lower than that of [16].
[Figure 1: graphical model with multipliers λ_1, λ_2, ..., λ_m at the top, coefficients s_1, s_2, ..., s_m below them, weights Φ_ij, and observed variables x_1, ..., x_n at the bottom.]
Figure 1: Graphical model representation of our proposed generative model where the multipliers distribution is factorial.

[Figure 2: scatter plot titled "Sparsity of the representation"; both axes range from 0 to 140.]
Figure 2: Sparsity comparison. On the x-axis (resp. y-axis) is the ℓ0 norm of the representation inferred with the Laplacian prior (resp. LSM prior).

5 Sparse coding with a non-factorial model
It has been shown that many natural signals such as sound or images have a particular type of
higher-order, sparse structure in which active coefficients occur in groups corresponding to basis
functions having similar properties (position, orientation, or frequency tuning) [25, 1]. We focus in
this Section on a class of signals that has a particular type of higher-order structure where the active
coefficients occur in groups. We show here that the LSM prior can be used to capture this group
structure in natural images, and we propose an efficient inference algorithm for this case.
5.1 Group sparsity
We consider a dictionary Φ such that the basis functions can be divided into a set of disjoint groups
or neighborhoods indexed by N_k, i.e. {1, . . . , m} = ∪_{k∈Γ} N_k, and N_i ∩ N_j = ∅ if i ≠ j. A
signal having the group sparsity property is such that the sparse coefficients occur in groups, i.e. the
indices of the nonzero coefficients are given by ∪_{k∈Δ} N_k, where Δ is a subset of Γ.
The group sparsity structure can be captured with the LSM prior by having all the coefficients in a
group share the same inverse scale parameter, i.e. for all i ∈ N_k, λ_i = λ_{(k)}. The corresponding
graphical model is shown in Figure 3. This addresses the case where dependencies are allowed
within groups, but not across groups as in the block-ℓ1 method [17]. Note that for some types of
dictionaries it is more natural to consider overlapping groups to avoid blocking artifacts. We propose
in Section 5.2 inference algorithms for both overlapping and non-overlapping cases.
[Figure 3: graphical model showing two multipliers λ_(k) and λ_(l), each connected to the coefficients of its own group.]
Figure 3: The two groups N_(k) = {i − 2, i − 1, i} and N_(l) = {i + 1, i + 2, i + 3} are non-overlapping.

[Figure 4: graphical model showing overlapping neighborhoods; each multiplier λ_i is connected to the coefficients s_{i−1}, s_i, s_{i+1}.]
Figure 4: The basis function coefficients in the neighborhood defined by N(i) = {i − 1, i, i + 1} share the same multiplier λ_i.

5.2 Inference
In the EM algorithm we proposed in Section 3, the sufficient statistics that are computed in the E
step are ⟨λ_i⟩_{p(λ_i | s^{(t)})} for all i. We suppose as in Section 4.1 that the prior on λ_{(k)} is Gamma with
parameters α and β. Using the structure of the dependencies in the probabilistic model shown in
Figure 3, we have
⟨λ_i⟩_{p(λ_i | s^{(t)})} = ⟨λ_{(k)}⟩_{p(λ_{(k)} | s_{N_k}^{(t)})},   (11)

where the index i is in the group N_k, and s_{N_k} = (s_j)_{j∈N_k} is the vector containing all the coefficients
in the group. Using the conjugacy of the Laplacian and Gamma distributions, the distribution of λ_{(k)}
given all the coefficients in the neighborhood is a Gamma distribution with parameters α + |N_k| and
β + Σ_{j∈N_k} |s_j|, where |N_k| denotes the size of the neighborhood. Hence (11) can be rewritten as
follows:

λ_{(k)}^{(t+1)} = (α + |N_k|) / (β + Σ_{j∈N_k} |s_j^{(t)}|).   (12)
The resulting update rule is a form of divisive normalization. We saw in Section 2 that we can write
s_k = λ_{(k)}^{−1} u_k, where u_k is a Laplacian random variable with scale 1, and thus after convergence we
have u_k^{(∞)} = (α + |N_k|) s_k^{(∞)} / (β + Σ_{j∈N_k} |s_j^{(∞)}|). Such rescaling operations are also thought to
play an important role in the visual system [25].
Now let us consider the case where coefficient neighborhoods are allowed to overlap. Let N(i)
denote the indices of the neighborhood that is centered around s_i (see Figure 4 for an example). We
propose to estimate the scale parameter λ_i by only considering the coefficients in N(i), and suppose
that they all share the same multiplier λ_i. In this case the EM update is given by

λ_i^{(t+1)} = (α + |N(i)|) / (β + Σ_{j∈N(i)} |s_j^{(t)}|).   (13)
Note that we have not derived this rule from a proper probabilistic model. A coefficient is indeed a
member of many neighborhoods as shown in Figure 4, and the structure of the dependencies implies
p(λ_i | s) ≠ p(λ_i | s_{N(i)}). However, we show experimentally that estimating the multiplier using
(13) gives good performance.
(13) gives good performance. A similar approximation is used in the GSM analysis-based model
[26]. Note that the noise shaping algorithm, which bears similarities with the iterative thresholding
algorithm developed for BPDN [7], is modified in [27] using an update that is essentially inversely
proportional to ours. The authors show improved coding efficiency in the context of natural images.
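The overlapping-neighborhood update (13) is just a local sum of absolute coefficient values followed by a division, i.e. divisive normalization. A minimal sketch for coefficients arranged on a 1-D topography (our own illustration; Section 5.4 uses a 2-D grid, for which the convolutions below become 2-D):

```python
import numpy as np

def multiplier_update_1d(s, half_width, alpha, beta):
    """Eq. (13) for 1-D neighborhoods N(i) = {i - h, ..., i + h}."""
    kernel = np.ones(2 * half_width + 1)
    # Local sums of |s_j| over each neighborhood (truncated at the borders).
    local_abs = np.convolve(np.abs(s), kernel, mode="same")
    sizes = np.convolve(np.ones_like(s), kernel, mode="same")  # |N(i)|
    return (alpha + sizes) / (beta + local_abs)
```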
5.3 Compressive sensing recovery
In compressed sensing, we observe a number n of random projections of a signal s^0 ∈ ℝ^m, and
it is in principle impossible to recover s^0 if n < m. However, if s^0 has p non-zero coefficients, it
has been shown in [28] that it is sufficient to use n ∝ p log m such measurements. We denote by
W ∈ ℝ^{n×m} the measurement matrix and let y = W s^0 be the observations. A standard method to
obtain the reconstruction is to use the solution of the Basis Pursuit (BP) problem
obtain the reconstruction is to use the solution of the Basis Pursuit (BP) problem
ŝ = arg min_s ‖s‖₁   subject to   W s = y.   (14)

Note that the solution of BP is the solution of BPDN as λ converges to zero in (1), or δ = 0 in (10).
If the signal has structure beyond sparsity, one can in principle recover the signal with even fewer
measurements using an algorithm that exploits this structure [19, 29]. We therefore compare the
performance of BP with the performance of our proposed LSM inference algorithms
s^{(t+1)} = arg min_s  Σ_{i=1}^{m} λ_i^{(t)} |s_i|   subject to   W s = y.   (15)
We denote by RWBP the algorithm with the factorial update (9), and RW3BP (resp. RW5BP) the
algorithm with our proposed divisive normalization update (13) with group size 3 (resp. 5). We
consider 50-dimensional signals that are sparse in the canonical basis and where the neighborhood
size is 3. To sample such a signal s ∈ ℝ^{50}, we draw a number d of "centroids" i, and we sample three
values for s_{i−1}, s_i and s_{i+1} using a normal distribution of variance 1. The groups are thus allowed
to overlap. A compressive sensing recovery problem is parameterized by (m, n, d). To explore the
problem space we display the results using phase plots as in [30], which plots performance as a
function of different parameter settings. We fix m = 50 and parameterize the phase plots using
the indeterminacy of the system indexed by δ = n/m, and the approximate sparsity of the system
indexed by ρ = 3d/m. We vary δ and ρ in the range [.1, .9] using a 30 by 30 grid. For a given
value (ρ, δ) on the grid, we sample 10 sparse signals using the corresponding (m, n, d) parameters.
The underlying sparse signal is recovered using the three algorithms and we average the recovery
error ‖ŝ − s^0‖₂ / ‖s^0‖₂ for each of them. We show in Figure 5 that RW3BP clearly outperforms
RWBP. There is a slight improvement by going from BP to RWBP (see supplementary material),
but this improvement is rather small as compared with going from RWBP to RW3BP and RW5BP.
This illustrates the importance of using the higher-order structure of the signals in the inference
algorithm. The performance of RW3BP and RW5BP is comparable (see supplementary material),
which shows that our algorithm is not very sensitive to the choice of the neighborhood size.
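For reproducibility, the synthetic signals just described can be generated as follows. This is our reading of the protocol; the seed handling and the clipping of neighborhoods at the borders are assumptions.

```python
import numpy as np

def sample_group_sparse(m=50, d=3, rng=None):
    """Signal with d 'centroids'; each centroid i activates s[i-1:i+2]
    with independent N(0, 1) values (indices clipped at the borders)."""
    if rng is None:
        rng = np.random.default_rng()
    s = np.zeros(m)
    for i in rng.choice(m, size=d, replace=False):
        lo, hi = max(i - 1, 0), min(i + 2, m)
        s[lo:hi] = rng.normal(size=hi - lo)
    return s
```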
[Figure 5: two phase-plot panels titled RWBP and RW3BP; x-axis ρ from 0.1 to 0.9, y-axis δ from 0.1 to 0.9, with a color scale from 0.0 to 1.0 showing the average recovery error.]
Figure 5: Compressive sensing recovery results using synthetic data. Shown are the phase plots for
a sequence of BP problems with the factorial update (RWBP), and a sequence of BP problems with
the divisive normalization update with neighborhood size 3 (RW3BP). On the x-axis is the sparsity
of the system indexed by ρ = 3d/m, and on the y-axis is the indeterminacy of the system indexed
by δ = n/m. At each point (ρ, δ) in the phase plot we display the average recovery error.
5.4 Application to natural images
It has been shown that adapting a dictionary of basis functions to the statistics of natural images so
as to maximize sparsity in the coefficients results in a set of dictionary elements whose spatial properties match those of V1 (primary visual cortex) receptive fields [24]. However, the basis functions
are learned under a probabilistic model where the probability density over the basis functions coefficients is factorial, whereas the sparse coefficients exhibit statistical dependencies [15, 16]. Hence,
a generative model with factorial LSM is not rich enough to capture the complex statistics of natural
images. We propose here to model these dependencies using a non-factorial LSM model. We fix
a topography where the basis function coefficients are arranged on a 2D grid, and with overlapping
neighborhoods of fixed size 3 × 3. The corresponding inference algorithm uses the divisive
normalization update (13).
We learn the optimal dictionary of basis functions Φ using the learning rule ΔΦ = η (x − Φŝ) ŝ^⊤
as in [24], where η is the learning rate, ŝ are the basis function coefficients inferred under the model
(13), and the average is taken over a batch of size 100. We fix n = m = 256, and sample 16 × 16
image patches from a set of whitened images, using a total of 100000 batches. The learned basis
functions are shown in Figure 6. We see here that the neighborhoods of size 3 × 3 group basis
functions at a similar position, scale and orientation. The topography is similar to how neurons are
arranged in the visual cortex, and is reminiscent of the results obtained in topographic ICA [13] and
topographic mixture of experts models [31]. An important difference is that our model is based on a
generative sparse coding model in which both inference and learning can be implemented via local
network interactions [7]. Because of the topographic organization, we also obtain a neighborhood-based divisive normalization rule.
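One iteration of this learning procedure can be sketched as follows. Here `infer_coefficients` is a hypothetical placeholder for the EM inference with the divisive normalization update (13), and the final column renormalization is a standard stabilization step that we add as an assumption, since the text does not spell it out.

```python
import numpy as np

def dictionary_step(Phi, X_batch, eta, infer_coefficients):
    """One step Delta Phi = eta * <(x - Phi s) s^T>, averaged over a batch."""
    dPhi = np.zeros_like(Phi)
    for x in X_batch:                    # X_batch: iterable of image patches
        s = infer_coefficients(Phi, x)   # coefficients inferred under the model
        dPhi += np.outer(x - Phi @ s, s)
    Phi = Phi + eta * dPhi / len(X_batch)
    Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)  # unit-norm basis functions
    return Phi
```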
Does the proposed non-factorial model represent image structure more efficiently than those with
factorial priors? To answer this question we measured the models' ability to recover sparse structure in the compressed sensing setting. We note that the basis functions are learned such that they
represent the sparse structure in images, as opposed to representing the images exactly (there is a
noise term in the generative model (2)). Hence, we design our experiment such that we measure
the recovery of this sparse structure. Using the basis functions shown in Figure 6, we first infer the
sparse coefficients s^0 of an image patch x such that ‖x − Φs^0‖₂ < ε using the inference algorithm
corresponding to the model. We fix ε such that the SNR is 10, and thus the three sparse approximations
for the three models contain the same amount of signal power. We then compute random
projections y = W̃Φs^0, where W̃ is the random measurement matrix. We attempt to recover the
sparse coefficients as in Section 5.3 by substituting W := W̃Φ and y := W̃Φs^0. We compare the
recovery performance ‖Φŝ − Φs^0‖₂ / ‖Φs^0‖₂ for 100 16 × 16 image patches selected at random, and
we use 110 random projections. We can see in Figure 7 that the model with non-factorial LSM prior
outperforms the other models as it is able to capture the group sparsity structure in natural images.
Figure 6: Basis functions learned in a non-factorial LSM model with overlapping groups of size 3 × 3.

Figure 7: Compressive sensing recovery. On the x-axis is the recovery performance for the factorial
LSM model (RWBP), and on the y-axis the recovery performance for the non-factorial LSM model
with 3 × 3 overlapping groups (RW3×3BP). RW3×3BP outperforms RWBP. See supplementary
material for the comparison between RW3×3BP and BP as well as between RWBP and BP.

6 Conclusion
We introduced a new class of probability densities that can be used as a prior for the coefficients in a
generative sparse coding model of images. By exploiting the conjugacy of the Gamma and Laplacian
prior, we were able to derive an efficient inference algorithm that consists of solving a sequence of
reweighted ℓ1 least-square problems, thus leveraging the multitude of algorithms already developed
for BPDN. Our framework also makes it possible to capture higher-order dependencies through
group sparsity. When applied to natural images, the learned basis functions of the model may be
topographically organized according to the specified group structure. We also showed that exploiting
the group sparsity results in performance gains for compressive sensing recovery on natural images.
An open question is the learning of group structure, which is a topic of ongoing work.
We wish to acknowledge support from NSF grant IIS-0705939.
References
[1] S. Lyu and E. P. Simoncelli. Statistical modeling of images with fields of Gaussian scale mixtures. In Advances in Neural Information Processing Systems (NIPS), Vancouver, Canada, 2006.
[2] S.S. Chen, D.L. Donoho, and M.A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20(1):33–61, 1999.
[3] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[4] Y. Tsaig and D.L. Donoho. Extensions of compressed sensing. Signal Processing, 86(3):549–571, 2006.
[5] R. Raina, A. Battle, H. Lee, B. Packer, and A.Y. Ng. Self-taught learning: Transfer learning from unlabeled data. Proceedings of the Twenty-fourth International Conference on Machine Learning, 2007.
[6] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[7] C.J. Rozell, D.H. Johnson, R.G. Baraniuk, and B.A. Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural Computation, 20(10):2526–2563, October 2008.
[8] J. Friedman, T. Hastie, H. Hoefling, and R. Tibshirani. Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2):302–332, 2007.
[9] M. Figueiredo, R. Nowak, and S. Wright. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing, 1(4):586–597, 2007.
[10] M. Elad, P. Milanfar, and R. Rubinstein. Analysis versus synthesis in signal priors. Inverse Problems, 23(3):947–968, June 2007.
[11] C. Zetzsche, G. Krieger, and B. Wegmann. The atoms of vision: Cartesian or polar? Journal of the Optical Society of America A, 16(7):1554–1565, 1999.
[12] M.J. Wainwright, E.P. Simoncelli, and A.S. Willsky. Random cascades on wavelet trees and their use in modeling and analyzing natural imagery. Applied and Computational Harmonic Analysis, 11(1), July 2001.
[13] A. Hyvärinen, P.O. Hoyer, and M. Inki. Topographic independent component analysis. Neural Computation, 13(7):1527–1558, 2001.
[14] Y. Karklin and M.S. Lewicki. A hierarchical Bayesian model for learning nonlinear statistical regularities in nonstationary natural signals. Neural Computation, 17(2):397–423, February 2005.
[15] P. Hoyer and A. Hyvärinen. A multi-layer sparse coding network learns contour coding from natural images. Vision Research, 42:1593–1605, 2002.
[16] P.J. Garrigues and B.A. Olshausen. Learning horizontal connections in a sparse coding model of natural images. In Advances in Neural Information Processing Systems (NIPS), Vancouver, Canada, 2007.
[17] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, February 2006.
[18] L. Jacob, G. Obozinski, and J.-P. Vert. Group lasso with overlap and graph lasso. In International Conference on Machine Learning (ICML), 2009.
[19] R.G. Baraniuk, V. Cevher, M.F. Duarte, and C. Hegde. Model-based compressive sensing. Preprint, August 2008.
[20] I. Ramirez, F. Lecumberry, and G. Sapiro. Universal priors for sparse modeling. CAMSAP, December 2009.
[21] E.J. Candès, M.B. Wakin, and S.P. Boyd. Enhancing sparsity by reweighted l1 minimization. Journal of Fourier Analysis and Applications, to appear, 2008.
[22] D. Wipf and S. Nagarajan. A new view of automatic relevance determination. In Advances in Neural Information Processing Systems 20, 2008.
[23] J.A. Tropp. Just relax: convex programming methods for identifying sparse signals in noise. IEEE Transactions on Information Theory, 52(3):1030–1051, 2006.
[24] B.A. Olshausen and D.J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, June 1996.
[25] M.J. Wainwright, O. Schwartz, and E.P. Simoncelli. Natural image statistics and divisive normalization: Modeling nonlinearity and adaptation in cortical neurons. In R. Rao, B.A. Olshausen, and M.S. Lewicki, editors, Statistical Theories of the Brain. MIT Press, 2001.
[26] J. Portilla, V. Strela, M.J. Wainwright, and E.P. Simoncelli. Image denoising using scale mixtures of Gaussians in the wavelet domain. IEEE Transactions on Image Processing, 12(11):1338–1351, 2003.
[27] R.M. Figueras and E.P. Simoncelli. Statistically driven sparse image representation. In Proc. 14th IEEE Int'l Conf. on Image Processing, volume 6, pages 29–32, September 2007.
[28] E. Candès. Compressive sampling. Proceedings of the International Congress of Mathematicians, 2006.
[29] V. Cevher, M.F. Duarte, C. Hegde, and R.G. Baraniuk. Sparse signal recovery using Markov random fields. In Advances in Neural Information Processing Systems (NIPS), Vancouver, B.C., Canada, 2008.
[30] D. Donoho and Y. Tsaig. Fast solution of l1-norm minimization problems when the solution may be sparse. Preprint, 2006.
[31] S. Osindero, M. Welling, and G.E. Hinton. Topographic product models applied to natural scene statistics. Neural Computation, 18(2):381–414, 2006.
3,309 | 3,998 | Worst-Case Linear Discriminant Analysis
Yu Zhang and Dit-Yan Yeung
Department of Computer Science and Engineering
Hong Kong University of Science and Technology
{zhangyu,dyyeung}@cse.ust.hk
Abstract
Dimensionality reduction is often needed in many applications due to the high
dimensionality of the data involved. In this paper, we first analyze the scatter
measures used in the conventional linear discriminant analysis (LDA) model and
note that the formulation is based on the average-case view. Based on this analysis,
we then propose a new dimensionality reduction method called worst-case linear
discriminant analysis (WLDA) by defining new between-class and within-class
scatter measures. This new model adopts the worst-case view which arguably is
more suitable for applications such as classification. When the number of training
data points or the number of features is not very large, we relax the optimization problem involved and formulate it as a metric learning problem. Otherwise,
we take a greedy approach by finding one direction of the transformation at a
time. Moreover, we also analyze a special case of WLDA to show its relationship
with conventional LDA. Experiments conducted on several benchmark datasets
demonstrate the effectiveness of WLDA when compared with some related dimensionality reduction methods.
1 Introduction
With the development of advanced data collection techniques, large quantities of high-dimensional
data are commonly available in many applications. While high-dimensional data can bring us more
information, processing and storing such data poses many challenges. From the machine learning
perspective, we need a very large number of training data points to learn an accurate model due
to the so-called "curse of dimensionality". To alleviate these problems, one common approach is
to perform dimensionality reduction on the data. An assumption underlying many dimensionality
reduction techniques is that the most useful information in many high-dimensional datasets resides
in a low-dimensional latent space. Principal component analysis (PCA) [8] and linear discriminant
analysis (LDA) [7] are two classical dimensionality reduction methods that are still widely used in
many applications. PCA, as an unsupervised linear dimensionality reduction method, finds a low-dimensional subspace that preserves as much of the data variance as possible. On the other hand,
LDA is a supervised linear dimensionality reduction method which seeks to find a low-dimensional
subspace that keeps data points from different classes far apart and those from the same class as
close as possible.
The focus of this paper is on the supervised dimensionality reduction setting like that for LDA. To set
the stage, we first analyze the between-class and within-class scatter measures used in conventional
LDA. We then establish that conventional LDA seeks to maximize the average pairwise distance
between class means and minimize the average within-class pairwise distance over all classes. Note
that if the purpose of applying LDA is to increase the accuracy of the subsequent classification task,
then it is desirable for every pairwise distance between two class means to be as large as possible and
every within-class pairwise distance to be as small as possible, but not just the average distances.
To put this thinking into practice, we incorporate a worst-case view to define a new between-class
scatter measure as the minimum of the pairwise distances between class means, and a new within-class scatter measure as the maximum of the within-class pairwise distances over all classes. Based
on the new scatter measures, we propose a novel dimensionality reduction method called worst-case
linear discriminant analysis (WLDA). WLDA solves an optimization problem which simultaneously
maximizes the worst-case between-class scatter measure and minimizes the worst-case within-class
scatter measure. If the number of training data points or the number of features is not very large,
e.g., below 100, we propose to relax the optimization problem and formulate it as a metric learning
problem. In case both the number of training data points and the number of features are large, we
propose a greedy approach based on the constrained concave-convex procedure (CCCP) [24, 18] to
find one direction of the transformation at a time with the other directions fixed. Moreover, we also
analyze a special case of WLDA to show its relationship with conventional LDA. We will report
experiments conducted on several benchmark datasets.
2 Worst-Case Linear Discriminant Analysis
We are given a training set of $n$ data points, $\mathcal{D} = \{x_1, \ldots, x_n\} \subset \mathbb{R}^d$. Let $\mathcal{D}$ be partitioned into
$c \geq 2$ disjoint classes $\Pi_i$, $i = 1, \ldots, c$, where class $\Pi_i$ contains $n_i$ examples. We perform linear
dimensionality reduction by finding a transformation matrix $W \in \mathbb{R}^{d \times r}$.
2.1 Objective Function
We first briefly review the conventional LDA. The between-class scatter matrix and within-class
scatter matrix are defined as
$$S_b = \sum_{i=1}^{c} \frac{n_i}{n} (\bar{m}_i - \bar{m})(\bar{m}_i - \bar{m})^T, \qquad
S_w = \sum_{i=1}^{c} \frac{n_i}{n} \cdot \frac{1}{n_i} \sum_{x_j \in \Pi_i} (x_j - \bar{m}_i)(x_j - \bar{m}_i)^T,$$
where $\bar{m}_i = \frac{1}{n_i} \sum_{x_j \in \Pi_i} x_j$ is the class mean of the $i$th class $\Pi_i$ and $\bar{m} = \frac{1}{n} \sum_{j=1}^{n} x_j$ is the
sample mean of all data points. Based on the scatter matrices, the between-class scatter measure and
within-class scatter measure are defined as
$$f_b = \mathrm{tr}\left(W^T S_b W\right), \qquad f_w = \mathrm{tr}\left(W^T S_w W\right),$$
where $\mathrm{tr}(\cdot)$ denotes the trace of a square matrix. LDA seeks to find the optimal solution of W that
maximizes the ratio $f_b / f_w$ as the optimality criterion.
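As a concrete illustration, the following numpy sketch (ours, not from the paper; function names are hypothetical) computes the two scatter matrices and the LDA criterion:

```python
import numpy as np

def lda_scatter(X, labels):
    """X: (n, d) data matrix; labels: (n,) class indices. Returns (S_b, S_w)."""
    n, d = X.shape
    m_bar = X.mean(axis=0)                       # sample mean of all data points
    S_b = np.zeros((d, d))
    S_w = np.zeros((d, d))
    for k in np.unique(labels):
        X_k = X[labels == k]
        n_k = X_k.shape[0]
        m_k = X_k.mean(axis=0)                   # class mean
        diff = (m_k - m_bar)[:, None]
        S_b += (n_k / n) * (diff @ diff.T)       # between-class scatter
        S_w += (X_k - m_k).T @ (X_k - m_k) / n   # within-class scatter
    return S_b, S_w

def lda_criterion(W, S_b, S_w):
    """LDA optimality criterion tr(W^T S_b W) / tr(W^T S_w W)."""
    return np.trace(W.T @ S_b @ W) / np.trace(W.T @ S_w @ W)
```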
By using the fact that $\bar{m} = \frac{1}{n} \sum_{i=1}^{c} n_i \bar{m}_i$, we can rewrite $S_b$ as
$$S_b = \frac{1}{2n^2} \sum_{i=1}^{c} \sum_{j=1}^{c} n_i n_j (\bar{m}_i - \bar{m}_j)(\bar{m}_i - \bar{m}_j)^T.$$
According to this and the definition of the within-class scatter measure, we can see that LDA tries
to maximize the average pairwise distance between class means {m
? ? } and minimize the average
within-class pairwise distance over all classes. Instead of taking this average-case view, our WLDA
model adopts a worst-case view which arguably is more suitable for classification applications.
We define the sample covariance matrix for the $i$th class $\Pi_i$ as
$$S_i = \frac{1}{n_i} \sum_{x_j \in \Pi_i} (x_j - \bar{m}_i)(x_j - \bar{m}_i)^T. \quad (1)$$
Unlike LDA which uses the average of the distances between each class mean and the sample mean
as the between-class scatter measure, here we use the minimum of the pairwise distances between
class means as the between-class scatter measure:
$$f_b = \min_{i,j} \left\{ \mathrm{tr}\left(W^T (\bar{m}_i - \bar{m}_j)(\bar{m}_i - \bar{m}_j)^T W\right) \right\}. \quad (2)$$
Also, we define the new within-class scatter measure as
$$f_w = \max_{i} \left\{ \mathrm{tr}\left(W^T S_i W\right) \right\}, \quad (3)$$
which is the maximum of the average within-class pairwise distances.
Similar to LDA, we define the optimality criterion of WLDA as the ratio of the between-class scatter
measure to the within-class scatter measure:
$$\max_{W} \; \rho(W) = \frac{f_b}{f_w} \quad \text{s.t.} \quad W^T W = I_r, \quad (4)$$
where $I_r$ denotes the $r \times r$ identity matrix. The orthonormality constraint in problem (4) is widely
used by many existing dimensionality reduction methods. Its role is to limit the scale of each column
of W and eliminate the redundancy among all columns of W.
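For comparison with the LDA sketch above, a minimal sketch (ours, with hypothetical names) of the worst-case measures (2) and (3) and the WLDA criterion of problem (4), for a candidate orthonormal W:

```python
import numpy as np
from itertools import combinations

def wlda_criterion(W, class_means, class_covs):
    """class_means: list of (d,) class means; class_covs: list of (d, d) matrices S_i of Eq. (1)."""
    # worst-case between-class measure (2): minimum projected distance between class means
    f_b = min(
        np.trace(W.T @ np.outer(m_i - m_j, m_i - m_j) @ W)
        for m_i, m_j in combinations(class_means, 2)
    )
    # worst-case within-class measure (3): maximum projected class covariance
    f_w = max(np.trace(W.T @ S_i @ W) for S_i in class_covs)
    return f_b / f_w
```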
2.2 Optimization Procedure
Since problem (4) is not easy to optimize with respect to W, we resort to formulating this dimensionality reduction problem as a metric learning problem [22, 21, 4]. We define a new variable
$\Omega = WW^T$ which can be used to define a metric. Then we express $f_b$ and $f_w$ in terms of $\Omega$ as
$$f_b = \min_{i,j} \left\{ \mathrm{tr}\left((\bar{m}_i - \bar{m}_j)(\bar{m}_i - \bar{m}_j)^T \Omega\right) \right\}, \qquad
f_w = \max_{i} \left\{ \mathrm{tr}\left(S_i \Omega\right) \right\},$$
due to a property of the matrix trace that $\mathrm{tr}(AB) = \mathrm{tr}(BA)$ for any matrices A and B with proper
sizes. The orthonormality constraint in problem (4) is non-convex with respect to W and cannot be
expressed in terms of $\Omega$.
We define a set $\mathcal{O}_1$ as
$$\mathcal{O}_1 = \left\{ M_1 \mid M_1 = WW^T, \; W^T W = I_r, \; W \in \mathbb{R}^{d \times r} \right\}.$$
Apparently $\Omega \in \mathcal{O}_1$. It has been shown in [16] that the convex hull of $\mathcal{O}_1$ can be precisely
expressed as a convex set $\mathcal{O}_2$ given by
$$\mathcal{O}_2 = \left\{ M_2 \mid \mathrm{tr}(M_2) = r, \; 0 \preceq M_2 \preceq I_d \right\},$$
where 0 denotes the zero vector or matrix of appropriate size and $A \preceq B$ means that the matrix
$B - A$ is positive semidefinite. Each element in $\mathcal{O}_1$ is referred to as an extreme point of $\mathcal{O}_2$.
Since $\mathcal{O}_2$ consists of all convex combinations of the elements in $\mathcal{O}_1$, $\mathcal{O}_2$ is the smallest convex
set that contains $\mathcal{O}_1$, and hence $\mathcal{O}_1 \subset \mathcal{O}_2$. Then problem (4) can be relaxed as
$$\max_{\Omega} \; \tilde{\rho}(\Omega) = \frac{\min_{i,j}\left\{\mathrm{tr}\left(S_{ij}\Omega\right)\right\}}{\max_{i}\left\{\mathrm{tr}\left(S_i\Omega\right)\right\}}
\quad \text{s.t.} \quad \mathrm{tr}(\Omega) = r, \; 0 \preceq \Omega \preceq I_d, \quad (5)$$
where $S_{ij} = (\bar{m}_i - \bar{m}_j)(\bar{m}_i - \bar{m}_j)^T$. For notational simplicity, we denote the constraint set as
$\mathcal{C} = \{\Omega \mid \mathrm{tr}(\Omega) = r, \; 0 \preceq \Omega \preceq I_d\}$. Table 1 shows an iterative algorithm for solving problem (5).
Table 1: Algorithm for solving optimization problem (5)
Input: $\{\bar{m}_i\}$, $\{S_i\}$ and $r$
1: Initialize $\Omega^{(0)}$;
2: For $t = 1, \ldots, T_{\mathrm{iter}}$
   2.1: Compute the ratio $\rho_t$ from $\Omega^{(t-1)}$ as: $\rho_t = \tilde{\rho}(\Omega^{(t-1)})$;
   2.2: Solve the optimization problem
        $\Omega^{(t)} = \arg\max_{\Omega \in \mathcal{C}} \left\{ \min_{i,j}\left\{\mathrm{tr}\left(S_{ij}\Omega\right)\right\} - \rho_t \max_{i}\left\{\mathrm{tr}\left(S_i\Omega\right)\right\} \right\}$;
   2.3: If $\|\Omega^{(t)} - \Omega^{(t-1)}\| \leq \varepsilon$ (here we set $\varepsilon = 10^{-4}$)
        break;
Output: $\Omega$
We now present the solution of the optimization problem in step 2.2. It is equivalent to the following
problem
$$\min_{\Omega \in \mathcal{C}} \; \rho_t \max_{i}\left\{\mathrm{tr}\left(S_i \Omega\right)\right\} - \min_{i,j}\left\{\mathrm{tr}\left(S_{ij} \Omega\right)\right\}. \quad (6)$$
According to [3], we know that $\max_{i}\{\mathrm{tr}(S_i\Omega)\}$ is a convex function because it is the maximum of
several convex functions, and $\min_{i,j}\{\mathrm{tr}(S_{ij}\Omega)\}$ is a concave function because it is the minimum of
several concave functions. Moreover, $\rho_t$ is a positive scalar since $\rho_t = \tilde{\rho}(\Omega^{(t-1)})$. So problem (6)
is a convex optimization problem. We introduce new variables $u$ and $v$ to simplify problem (6) as
$$\begin{aligned}
\min_{\Omega, u, v} \quad & \rho_t u - v \\
\text{s.t.} \quad & \mathrm{tr}\left(S_i \Omega\right) \leq u, \; \forall i \\
& \mathrm{tr}\left(S_{ij} \Omega\right) \geq v > 0, \; \forall i, j \\
& \mathrm{tr}(\Omega) = r, \; 0 \preceq \Omega \preceq I_d. \quad (7)
\end{aligned}$$
Note that problem (7) is a semidefinite programming (SDP) problem [19] which can be solved using
a standard SDP solver. After obtaining the optimal $\Omega^\star$, we can recover the optimal $W^\star$ as the top $r$
eigenvectors of $\Omega^\star$. In the following, we will prove the convergence of the algorithm in Table 1.
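A sketch of one iteration of Table 1, posing step 2.2 as the SDP (7) with cvxpy; the default solver, the small tolerance standing in for the strict inequality v > 0, and all names are our choices, not prescribed by the paper.

```python
import numpy as np
import cvxpy as cp

def wlda_sdp_step(S_pairs, S_within, r, rho_t):
    """S_pairs: list of S_ij = (m_i - m_j)(m_i - m_j)^T; S_within: list of S_i."""
    d = S_within[0].shape[0]
    Omega = cp.Variable((d, d), symmetric=True)
    u = cp.Variable()  # upper bound on max_i tr(S_i Omega)
    v = cp.Variable()  # lower bound on min_{i,j} tr(S_ij Omega)
    cons = [cp.trace(Omega) == r, Omega >> 0, np.eye(d) - Omega >> 0,
            v >= 1e-8]  # assumption: approximates the strict constraint v > 0
    cons += [cp.trace(S_i @ Omega) <= u for S_i in S_within]
    cons += [cp.trace(S_ij @ Omega) >= v for S_ij in S_pairs]
    cp.Problem(cp.Minimize(rho_t * u - v), cons).solve()
    return Omega.value

def recover_W(Omega, r):
    """Top-r eigenvectors of the optimal Omega give the transformation W."""
    eigvals, eigvecs = np.linalg.eigh(Omega)
    return eigvecs[:, np.argsort(eigvals)[::-1][:r]]
```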
Theorem 1 For the algorithm in Table 1, we have $\tilde{\rho}(\Omega^{(t)}) \geq \tilde{\rho}(\Omega^{(t-1)})$.
Proof: We define $J(\Omega) = \min_{i,j}\left\{\mathrm{tr}\left(S_{ij}\Omega\right)\right\} - \rho_t \max_{i}\left\{\mathrm{tr}\left(S_i\Omega\right)\right\}$. Then $J(\Omega^{(t-1)}) = 0$ since
$\rho_t = \frac{\min_{i,j}\{\mathrm{tr}(S_{ij}\Omega^{(t-1)})\}}{\max_{i}\{\mathrm{tr}(S_i\Omega^{(t-1)})\}}$. Because $\Omega^{(t)} = \arg\max_{\Omega \in \mathcal{C}} J(\Omega)$ and $\Omega^{(t-1)} \in \mathcal{C}$, we have
$J(\Omega^{(t)}) \geq J(\Omega^{(t-1)}) = 0$. This means
$$\frac{\min_{i,j}\left\{\mathrm{tr}\left(S_{ij}\Omega^{(t)}\right)\right\}}{\max_{i}\left\{\mathrm{tr}\left(S_i\Omega^{(t)}\right)\right\}} \geq \rho_t,$$
which implies that $\tilde{\rho}(\Omega^{(t)}) \geq \tilde{\rho}(\Omega^{(t-1)})$. □
Theorem 2 For any $\Omega \in \mathcal{C}$, we have $0 \leq \tilde{\rho}(\Omega) \leq \frac{2\,\mathrm{tr}(S_b)}{\sum_{i=1}^{r} \lambda_{d-i+1}}$, where $\lambda_i$ is the $i$th largest eigenvalue of $S_w$.
Proof: It is obvious that $\tilde{\rho}(\Omega) \geq 0$. The numerator of $\tilde{\rho}(\Omega)$ can be upper-bounded as
$$\min_{i,j}\left\{\mathrm{tr}\left(S_{ij}\Omega\right)\right\} \leq \frac{\sum_{i=1}^{c}\sum_{j=1}^{c} n_i n_j \,\mathrm{tr}\left(S_{ij}\Omega\right)}{\sum_{i=1}^{c} n_i \sum_{j=1}^{c} n_j} = 2\,\mathrm{tr}(S_b\Omega) \leq 2\,\mathrm{tr}(S_b). \quad (8)$$
Moreover, the denominator of $\tilde{\rho}(\Omega)$ can be lower-bounded as
$$\max_{i}\left\{\mathrm{tr}\left(S_i\Omega\right)\right\} \geq \frac{\sum_{i=1}^{c} n_i \,\mathrm{tr}\left(S_i\Omega\right)}{\sum_{i=1}^{c} n_i} = \mathrm{tr}(S_w\Omega) \geq \sum_{i=1}^{d} \lambda_i \tilde{\lambda}_{d-i+1} \geq \sum_{i=1}^{r} \lambda_{d-i+1}, \quad (9)$$
where $\tilde{\lambda}_i$ is the $i$th largest eigenvalue of $\Omega$ and satisfies $0 \leq \tilde{\lambda}_i \leq 1$ and $\sum_{i=1}^{d} \tilde{\lambda}_i = r$ due to the
constraints $\mathcal{C}$ on $\Omega$. By utilizing Eqs. (8) and (9), we can reach the conclusion. □
From Theorem 2, we can see that $\tilde{\rho}(\Omega)$ is bounded and our method is non-decreasing. So our
method can achieve a local optimum when converged.
2.3 Optimization in Dual Form
In the previous subsection, we need to solve the SDP problem in problem (7). However, SDP is
not scalable to high dimensionality $d$. In many real-world applications to which dimensionality
reduction is applied, the number of data points $n$ is much smaller than the dimensionality $d$. Under
such circumstances, speedup can be obtained by solving the dual form of problem (4) instead.
It is easy to show that the solution of problem (4) satisfies $W = XA$ [14], where $X = (x_1, \ldots, x_n)$
is the data matrix and $A \in \mathbb{R}^{n \times r}$. Then problem (4) can be formulated as
$$\max_{A} \; \frac{\min_{i,j}\left\{\mathrm{tr}\left(A^T X^T S_{ij} X A\right)\right\}}{\max_{i}\left\{\mathrm{tr}\left(A^T X^T S_i X A\right)\right\}}
\quad \text{s.t.} \quad A^T K A = I_r, \quad (10)$$
where $K = X^T X$ is the linear kernel matrix. Here we assume that K is positive definite since the
data points are independent and identically distributed and $d$ is much larger than $n$. We define a new
variable $B = K^{\frac{1}{2}} A$ and problem (10) can be reformulated as
$$\max_{B} \; \frac{\min_{i,j}\left\{\mathrm{tr}\left(B^T K^{-\frac{1}{2}} X^T S_{ij} X K^{-\frac{1}{2}} B\right)\right\}}{\max_{i}\left\{\mathrm{tr}\left(B^T K^{-\frac{1}{2}} X^T S_i X K^{-\frac{1}{2}} B\right)\right\}}
\quad \text{s.t.} \quad B^T B = I_r. \quad (11)$$
Note that problem (11) is almost the same as problem (4) and so we can use the same relaxation
technique above to solve problem (11). In the relaxed problem, the variable $\tilde{\Omega} = BB^T$ used to
define the metric in the dual form is of size $n \times n$, which is much smaller than that ($d \times d$) of $\Omega$ in
the primal form when $n < d$. So solving the problem in the dual form is more efficient. Moreover,
the dual form facilitates kernel extension of our method.
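A small sketch (ours) of the dual-form reduction, mapping the $d \times d$ problem matrices of problem (5) into the $n \times n$ matrices of problem (11); computing $K^{-\frac{1}{2}}$ via scipy is an implementation choice, not specified in the paper.

```python
import numpy as np
from scipy.linalg import sqrtm

def dual_form_matrices(X, S_pairs, S_within):
    """X: (d, n) data matrix with columns x_1, ..., x_n."""
    K = X.T @ X                                   # linear kernel matrix, assumed positive definite
    K_inv_sqrt = np.linalg.inv(np.real(sqrtm(K))) # K^{-1/2}
    project = lambda S: K_inv_sqrt @ X.T @ S @ X @ K_inv_sqrt
    return [project(S) for S in S_pairs], [project(S) for S in S_within]
```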
2.4 Alternative Optimization Procedure
In case the number of training data points $n$ and the dimensionality $d$ are both large, the above
optimization procedures will be infeasible. Here we introduce yet another optimization procedure
based on a greedy approach to solve problem (4) when both $n$ and $d$ are large.
We find the first column of W by solving problem (4) where W is a vector, then find the second
column of W by assuming the first column is fixed, and so on. This procedure consists of $r$ steps.
In the $k$th step, we assume that the first $k-1$ columns of W have been obtained and we find the
$k$th column according to problem (4). We use $W_{k-1}$ to denote the matrix in which the first $k-1$
columns are already known, and the constraint in problem (4) becomes
$$w_k^T w_k = 1, \qquad W_{k-1}^T w_k = 0.$$
When $k = 1$, $W_{k-1}$ can be viewed as an empty matrix and the constraint $W_{k-1}^T w_k = 0$ does not
exist. So in the $k$th step, we need to solve the following problem
$$\begin{aligned}
\min_{w_k, t, s} \quad & \frac{t}{s} \\
\text{s.t.} \quad & w_k^T S_i w_k + a_i - t \leq 0, \; \forall i \\
& s - w_k^T (\bar{m}_i - \bar{m}_j)(\bar{m}_i - \bar{m}_j)^T w_k - b_{ij} \leq 0, \; \forall i, j \\
& t, s > 0 \\
& w_k^T w_k \leq 1, \quad W_{k-1}^T w_k = 0, \quad (12)
\end{aligned}$$
where $a_i = \mathrm{tr}\left(W_{k-1}^T S_i W_{k-1}\right)$ and $b_{ij} = \mathrm{tr}\left(W_{k-1}^T (\bar{m}_i - \bar{m}_j)(\bar{m}_i - \bar{m}_j)^T W_{k-1}\right)$. In the last
constraint of problem (12), we relax the constraint on $w_k$ as $w_k^T w_k \leq 1$ to make it convex.
The function $\frac{t}{s}$ is not convex with respect to $(t, s)^T$ since the Hessian matrix is not positive semidefinite. So the objective function of problem (12) is non-convex. Moreover, the second constraint in
problem (12), which is the difference of two convex functions, is also non-convex. We rewrite the
objective function as
$$\frac{t}{s} = \frac{(t+1)^2}{4s} - \frac{(t-1)^2}{4s},$$
which is also the difference of two convex functions, since $f(a, b) = \frac{(a+b)^2}{b}$ for $b > 0$ is convex
with respect to $a$ and $b$ according to [3]. Then we can use the constrained concave-convex procedure (CCCP) [24, 18] to optimize problem (12). More specifically, in the $(p+1)$th iteration of
CCCP, we replace the non-convex parts of the objective function and the second constraint with
their first-order Taylor expansions at the solution $\{t^{(p)}, s^{(p)}, w_k^{(p)}\}$ of the $p$th iteration and solve the
following problem
$$\begin{aligned}
\min_{w_k, t, s} \quad & \frac{(t+1)^2}{4s} - \alpha t + \alpha^2 s \\
\text{s.t.} \quad & w_k^T S_i w_k + a_i - t \leq 0, \; \forall i \\
& s - 2 (w_k^{(p)})^T (\bar{m}_i - \bar{m}_j)(\bar{m}_i - \bar{m}_j)^T w_k + c_{ij}^{(p)} - b_{ij} \leq 0, \; \forall i, j \\
& t, s > 0 \\
& w_k^T w_k \leq 1, \quad W_{k-1}^T w_k = 0, \quad (13)
\end{aligned}$$
where $\alpha = \frac{t^{(p)} - 1}{2 s^{(p)}}$ and $c_{ij}^{(p)} = (w_k^{(p)})^T (\bar{m}_i - \bar{m}_j)(\bar{m}_i - \bar{m}_j)^T w_k^{(p)}$. By putting an upper bound on
$\frac{(t+1)^2}{4s} \leq u$, and using the fact that
$$\frac{(t+1)^2}{4s} \leq u \;\; (u, s > 0) \iff \left\| \begin{pmatrix} t+1 \\ u-s \end{pmatrix} \right\|_2 \leq u + s,$$
where $\|\cdot\|_2$ denotes the 2-norm of a vector, we can reformulate problem (13) into a second-order
cone programming (SOCP) problem [12] which is more efficient than SDP:
$$\begin{aligned}
\min_{w_k, t, s, u} \quad & u - \alpha t + \alpha^2 s \\
\text{s.t.} \quad & w_k^T S_i w_k + a_i - t \leq 0, \; \forall i \\
& s - 2 (w_k^{(p)})^T (\bar{m}_i - \bar{m}_j)(\bar{m}_i - \bar{m}_j)^T w_k + c_{ij}^{(p)} - b_{ij} \leq 0, \; \forall i, j \\
& \left\| \begin{pmatrix} t+1 \\ u-s \end{pmatrix} \right\|_2 \leq u + s \;\; \text{with} \;\; t, s, u > 0 \\
& w_k^T w_k \leq 1, \quad W_{k-1}^T w_k = 0. \quad (14)
\end{aligned}$$
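The SOCP (14) can be written almost verbatim in cvxpy. The sketch below is ours; it assumes the class covariances $S_i$ are positive semidefinite and replaces the strict inequalities by small tolerances. It solves one CCCP iteration for the $k$th direction:

```python
import numpy as np
import cvxpy as cp

def cccp_socp_step(S_within, mean_diffs, W_prev, a, b, w_prev, t_prev, s_prev):
    """mean_diffs: list of vectors (m_i - m_j) over class pairs; a, b: constants a_i, b_ij of (12)."""
    d = len(w_prev)
    alpha = (t_prev - 1) / (2 * s_prev)          # Taylor coefficient at the previous solution
    w = cp.Variable(d)
    t, s, u = cp.Variable(), cp.Variable(), cp.Variable()
    cons = [cp.quad_form(w, S_i) + a_i - t <= 0 for S_i, a_i in zip(S_within, a)]
    cons += [  # linearized between-class constraints of (13)/(14)
        s - 2 * float(w_prev @ dm) * (dm @ w) + float(w_prev @ dm) ** 2 - b_ij <= 0
        for dm, b_ij in zip(mean_diffs, b)
    ]
    cons += [cp.SOC(u + s, cp.hstack([t + 1, u - s])),   # (t+1)^2 / (4s) <= u
             t >= 1e-8, s >= 1e-8, u >= 1e-8,            # assumption: stand-ins for t, s, u > 0
             cp.sum_squares(w) <= 1]
    if W_prev is not None:                               # orthogonality to earlier directions
        cons.append(W_prev.T @ w == 0)
    cp.Problem(cp.Minimize(u - alpha * t + alpha**2 * s), cons).solve()
    return w.value, t.value, s.value
```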
2.5 Analysis
It is well known that in binary classification problems when both classes are normally distributed
with the same covariance matrix, the solution given by conventional LDA is the Bayes optimal
solution. We will show here that this property still holds for WLDA.
The objective function for WLDA in a binary classification problem is formulated as
$$\max_{w} \; \frac{w^T (\bar{m}_1 - \bar{m}_2)(\bar{m}_1 - \bar{m}_2)^T w}{\max\left\{w^T S_1 w, \; w^T S_2 w\right\}}
\quad \text{s.t.} \quad w \in \mathbb{R}^d, \; w^T w \leq 1. \quad (15)$$
Here, similar to conventional LDA, the reduced dimensionality $r$ is set to 1. When the two classes
have the same covariance matrix, i.e., $S_1 = S_2$, the problem degenerates to the optimization problem
of conventional LDA since $w^T S_1 w = w^T S_2 w$ for any $w$, and $w$ is the solution of conventional
LDA.¹ So WLDA also gives the same Bayes optimal solution as conventional LDA.
Since the scale of w does not affect the final solution in problem (15), we simplify problem (15) as
$$\max_{w} \; w^T (\bar{m}_1 - \bar{m}_2)(\bar{m}_1 - \bar{m}_2)^T w
\quad \text{s.t.} \quad w^T S_1 w \leq 1, \; w^T S_2 w \leq 1. \quad (16)$$
Since problem (16) is to maximize a convex function, it is not a convex problem. We can still use
CCCP to optimize problem (16). In the $(p+1)$th iteration of CCCP, we need to solve the following
problem
$$\max_{w} \; (w^{(p)})^T (\bar{m}_1 - \bar{m}_2)(\bar{m}_1 - \bar{m}_2)^T w
\quad \text{s.t.} \quad w^T S_1 w \leq 1, \; w^T S_2 w \leq 1. \quad (17)$$
The Lagrangian is given by
$$L = -(w^{(p)})^T (\bar{m}_1 - \bar{m}_2)(\bar{m}_1 - \bar{m}_2)^T w + \lambda (w^T S_1 w - 1) + \gamma (w^T S_2 w - 1),$$
where $\lambda \geq 0$ and $\gamma \geq 0$. We calculate the gradient of $L$ with respect to $w$ and set it to 0 to
obtain
$$w = (2\lambda S_1 + 2\gamma S_2)^{-1} (\bar{m}_1 - \bar{m}_2)(\bar{m}_1 - \bar{m}_2)^T w^{(p)}.$$
From this, we can see that when the algorithm converges, the optimal $w^\star$ satisfies
$$w^\star \propto (2\lambda^\star S_1 + 2\gamma^\star S_2)^{-1} (\bar{m}_1 - \bar{m}_2).$$
This is similar to the following property of the optimal solution in conventional LDA:
$$w^\star \propto S_w^{-1} (\bar{m}_1 - \bar{m}_2) \propto (\eta_1 S_1 + \eta_2 S_2)^{-1} (\bar{m}_1 - \bar{m}_2),$$
where $\eta_i = n_i / n$.
¹The constraint $w^T w \leq 1$ in problem (15) only serves to limit the scale of $w$.
However, in our method, $\lambda^\star$ and $\gamma^\star$ are not fixed but learned from the following dual problem
$$\min_{\lambda, \gamma} \; \frac{h}{4} (\bar{m}_1 - \bar{m}_2)^T (\lambda S_1 + \gamma S_2)^{-1} (\bar{m}_1 - \bar{m}_2) + \lambda + \gamma
\quad \text{s.t.} \quad \lambda \geq 0, \; \gamma \geq 0, \quad (18)$$
where $h = \left((\bar{m}_1 - \bar{m}_2)^T w^{(p)}\right)^2$. Note that the first term in the objective function of problem (18)
is just the scaled optimality criterion of conventional LDA when we assume the within-class scatter
matrix $S_w$ to be $S_w = \lambda S_1 + \gamma S_2$. From this view, WLDA seeks to find a linear combination of $S_1$
and $S_2$ as the within-class scatter matrix to maximize the optimality criterion of conventional LDA
while controlling the complexity of the within-class scatter matrix as reflected by the second and
third terms of the objective function in problem (18).
3 Related Work
In [11], Li et al. proposed a maximum margin criterion for dimensionality reduction by changing
the optimization problem of conventional LDA to $\max_{W} \mathrm{tr}\left(W^T (S_b - S_w) W\right)$. The objective
function has a physical meaning similar to that of LDA which favors a large between-class scatter
measure and a small within-class scatter measure. However, similar to LDA, the maximum margin
criterion also uses the average distances to describe the between-class and within-class scatter measures. Kocsor et al. [10] proposed another maximum margin criterion for dimensionality reduction.
The objective function in [10] is identical to that of support vector machine (SVM) and it treats the
decision function in SVM as one direction in the transformation matrix W.
In [9], Kim et al. proposed a robust LDA algorithm to deal with data uncertainty in classification
applications by formulating the problem as a convex problem. However, in many applications, it is
not easy to obtain the information about data uncertainty. Moreover, its limitation is that it can only
handle binary classification problems but not more general multi-class problems.
The orthogonality constraint on the transformation matrix W has been widely used by dimensionality reduction methods, such as Foley-Sammon LDA (FSLDA) [6, 5] and orthogonal LDA [23].
The orthogonality constraint can help to eliminate the redundant information in W. This has been
shown to be effective for dimensionality reduction.
4 Experimental Validation
In this section, we evaluate WLDA empirically on some benchmark datasets and compare WLDA
with several related methods, including conventional LDA, trace-ratio LDA [20], FSLDA [6, 5], and
MarginLDA [11]. For fair comparison with conventional LDA, we set the reduced dimensionality
of each method compared to $c - 1$, where $c$ is the number of classes in the dataset. After dimensionality reduction, we use a simple nearest-neighbor classifier to perform classification. Our choice
of the optimization procedure follows this strategy: when the number of features $d$ or the number
of training data points $n$ is smaller than 100, the optimization method in Section 2.2 or 2.3 is used
depending on which one is smaller; otherwise, we use the greedy method in Section 2.4.
4.1 Experiments on UCI Datasets
Ten UCI datasets [1] are used in the first set of experiments. For each dataset, we randomly select
70% to form the training set and the rest for the test set. We perform 10 random splits and report
in Table 2 the average results across the 10 trials. For each setting, the lowest classification error is
shown in bold. We can see that WLDA gives the best result for most datasets. For some datasets,
e.g., balance-scale and hayes-roth, even though WLDA is not the best, the difference between it and
the best one is very small. Thus it is fair to say that the results obtained demonstrate convincingly
the effectiveness of WLDA.
Table 2: Average classification errors on the UCI datasets. Here tr-LDA denotes the trace-ratio
LDA [20].

Dataset         LDA      tr-LDA   FSLDA    MarginLDA   WLDA
diabetes        0.3233   0.3143   0.4039   0.4143      0.2996
heart           0.2448   0.2259   0.4395   0.2407      0.2157
liver           0.4001   0.3933   0.4365   0.5058      0.3779
sonar           0.2806   0.2895   0.3694   0.2806      0.2661
spambase        0.1279   0.1301   0.3093   0.1440      0.1260
balance-scale   0.1193   0.1198   0.1176   0.1150      0.1174
iris            0.0244   0.0267   0.0622   0.0644      0.0211
hayes-roth      0.3125   0.3104   0.3104   0.2958      0.3050
waveform        0.1861   0.1865   0.2261   0.2303      0.1671
mfeat-factors   0.0732   0.0518   0.0868   0.0817      0.0250

4.2 Experiments on Face and Object Datasets

Dimensionality reduction methods have been widely used for face and object recognition applications. Previous research found that face and object images usually lie in a low-dimensional subspace
of the ambient image space. Fisherface (based on LDA) [2] is one representative dimensionality reduction method. We use three face databases, ORL [2], PIE [17] and AR [13], and one object
database, COIL [15], in our experiments. In the AR face database, 2,600 images of 100 persons
(50 men and 50 women) are used. Before the experiment, each image is converted to gray scale
and normalized to a size of 33 × 24 pixels. The ORL face database contains 400 face images of
40 persons, each having 10 images. Each image is preprocessed to a size of 28 × 23 pixels. In our
experiment, we choose the frontal pose from the PIE database with varying lighting and illumination conditions. There are about 49 images for each subject. Before the experiment, we resize each
image to a resolution of 32 × 32 pixels. The COIL database contains 1,440 grayscale images with
black background for 20 objects with each object having 72 different images.
In face and object recognition applications, the size of the training set is usually not very large since
labeling data is very laborious and costly. To simulate this realistic situation, we randomly choose
4 images of a person or object in the database to form the training set and the remaining images
to form the test set. We perform 10 random splits and report the average classification error rates
across the 10 trials in Table 3. From the result, we can see that WLDA is comparable to or even
better than the other methods compared.
Table 3: Average classification errors on the face and object datasets. Here tr-LDA denotes the
trace-ratio LDA [20].
Dataset   LDA      tr-LDA   FSLDA    MarginLDA   WLDA
ORL       0.1529   0.1042   0.0654   0.0536      0.0446
PIE       0.4305   0.2527   0.6715   0.2936      0.2469
AR        0.2498   0.1919   0.7726   0.4282      0.1965
COIL      0.2554   0.1737   0.1726   0.1653      0.1593
5 Conclusion
In this paper, we have presented a new supervised dimensionality reduction method by exploiting
the worst-case view instead of the average-case view in the formulation. One interesting direction of
our future work is to extend WLDA to handle tensors for 2D or higher-order data. Moreover, we
will investigate the semi-supervised extension of WLDA to exploit the useful information contained
in the unlabeled data available in some applications.
Acknowledgement
This research has been supported by General Research Fund 621407 from the Research Grants
Council of Hong Kong.
References
[1] A. Asuncion and D. J. Newman. UCI machine learning repository, 2007.
[2] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711–720, 1997.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, 2004.
[4] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In Proceedings of the Twenty-Fourth International Conference on Machine Learning, pages 209–216, Corvallis, Oregon, USA, 2007.
[5] J. Duchene and S. Leclercq. An optimal transformation for discriminant and principal component analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 10(6):978–983, 1988.
[6] D. H. Foley and J. W. Sammon. An optimal set of discriminant vectors. IEEE Transactions on Computers, 24(3):281–289, 1975.
[7] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, New York, 1991.
[8] I. T. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 2nd edition, 2002.
[9] S.-J. Kim, A. Magnani, and S. Boyd. Robust Fisher discriminant analysis. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 659–666, Vancouver, British Columbia, Canada, 2006.
[10] A. Kocsor, K. Kovács, and C. Szepesvári. Margin maximizing discriminant analysis. In Proceedings of the 15th European Conference on Machine Learning, pages 227–238, Pisa, Italy, 2004.
[11] H. Li, T. Jiang, and K. Zhang. Efficient and robust feature extraction by maximum margin criterion. In S. Thrun, L. K. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16, Vancouver, British Columbia, Canada, 2003.
[12] M. S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second-order cone programming. Linear Algebra and its Applications, 284:193–228, 1998.
[13] A. M. Martínez and R. Benavente. The AR-face database. Technical Report 24, CVC, 1998.
[14] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, A. J. Smola, and K.-R. Müller. Constructing descriptive and discriminative nonlinear features: Rayleigh coefficients in kernel feature spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5):623–633, 2003.
[15] S. A. Nene, S. K. Nayar, and H. Murase. Columbia object image library (COIL-20). Technical Report 005, CUCS, 1996.
[16] M. L. Overton and R. S. Womersley. Optimality conditions and duality theory for minimizing sums of the largest eigenvalues of symmetric matrices. Math Programming, 62(2):321–357, 1993.
[17] T. Sim, S. Baker, and M. Bsat. The CMU pose, illumination and expression database. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(12):1615–1618, 2003.
[18] A. J. Smola, S. V. N. Vishwanathan, and T. Hofmann. Kernel methods for missing variables. In Proceedings of the Tenth International Workshop on Artificial Intelligence and Statistics, Barbados, 2005.
[19] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, 1996.
[20] H. Wang, S. Yan, D. Xu, X. Tang, and T. Huang. Trace ratio vs. ratio trace for dimensionality reduction. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1–8, Minneapolis, Minnesota, USA, 2007.
[21] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1473–1480, Vancouver, British Columbia, Canada, 2005.
[22] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. J. Russell. Distance metric learning with application to clustering with side-information. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 505–512, Vancouver, British Columbia, Canada, 2002.
[23] J. Ye and T. Xiong. Computational and theoretical analysis of null space and orthogonal linear discriminant analysis. Journal of Machine Learning Research, 7:1183–1204, 2006.
[24] A. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15(4):915–936, 2003.
Christoph Sawade, Niels Landwehr, and Tobias Scheffer
University of Potsdam
Department of Computer Science
August-Bebel-Strasse 89, 14482 Potsdam, Germany
{sawade, landwehr, scheffer}@cs.uni-potsdam.de
Abstract
We address the problem of estimating the $F_\alpha$-measure of a given model as accurately as possible on a fixed labeling budget. This problem occurs whenever an
estimate cannot be obtained from held-out training data; for instance, when data
that have been used to train the model are held back for reasons of privacy or do
not reflect the test distribution. In this case, new test instances have to be drawn
and labeled at a cost. An active estimation procedure selects instances according
to an instrumental sampling distribution. An analysis of the sources of estimation
error leads to an optimal sampling distribution that minimizes estimator variance.
We explore conditions under which active estimates of $F_\alpha$-measures are more accurate than estimates based on instances sampled from the test distribution.
1 Introduction
This paper addresses the problem of evaluating a given model in terms of its predictive performance.
In practice, it is not always possible to evaluate a model on held-out training data; consider, for
instance, the following scenarios. When a readily trained model is shipped and deployed, training
data may be held back for reasons of privacy. Secondly, training data may have been created under
laboratory conditions and may not entirely reflect the test distribution. Finally, when a model has
been trained actively, the labeled data is biased towards small-margin instances which would incur
a pessimistic bias on any cross-validation estimate.
This problem has recently been studied for risks, i.e., for performance measures which are integrals
of a loss function over an instance space [7]. However, several performance measures cannot be
expressed as a risk. Perhaps the most prominent such measure is the $F_\alpha$-measure [10]. For a given
binary classifier and sample of size $n$, let $n_{tp}$ and $n_{fp}$ denote the number of true and false positives,
respectively, and $n_{fn}$ the number of false negatives. Then the classifier's $F_\alpha$-measure on the sample
is defined as
$$F_\alpha = \frac{n_{tp}}{\alpha\,(n_{tp} + n_{fp}) + (1-\alpha)\,(n_{tp} + n_{fn})}. \quad (1)$$
Precision and recall are special cases for $\alpha = 1$ and $\alpha = 0$, respectively. The $F_\alpha$-measure is
defined as an estimator in terms of empirical quantities. This is unintuitive from a statistical point of
view and raises the question which quantity of the underlying distribution the F-measure actually
estimates. We will now introduce the class of generalized risk functionals that we study in this paper.
We will then show that $F_\alpha$ is a consistent estimate of a quantity that falls into this class.
Let $\mathcal{X}$ denote the feature space and $\mathcal{Y}$ the label space. An unknown test distribution $p(x, y)$ is defined
over $\mathcal{X} \times \mathcal{Y}$. Let $p(y|x; \theta)$ be a given $\theta$-parameterized model of $p(y|x)$ and let $f_\theta : \mathcal{X} \to \mathcal{Y}$ with
$f_\theta(x) = \arg\max_y p(y|x; \theta)$ be the corresponding hypothesis.
Like any risk functional, the generalized risk is parameterized with a function $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$
determining either the loss or, alternatively, the gain that is incurred for a pair of predicted and
true label. In addition, the generalized risk is parameterized with a function $w$ that assigns a weight
$w(x, y, f_\theta)$ to each instance. For instance, precision sums over instances with $f_\theta(x) = 1$ with
weight 1 and gives no consideration to other instances. Equation 2 defines the generalized risk:
$$G = \frac{\iint \ell(f_\theta(x), y)\, w(x, y, f_\theta)\, p(x, y)\, dy\, dx}{\iint w(x, y, f_\theta)\, p(x, y)\, dy\, dx}. \quad (2)$$
The integral over $\mathcal{Y}$ is replaced by a sum in the case of a discrete label space $\mathcal{Y}$. Note that the
generalized risk (Equation 2) reduces to the regular risk for $w(x, y, f_\theta) = 1$. On a sample of size
$n$, a consistent estimator can be obtained by replacing the cumulative distribution function with the
empirical distribution function.
Proposition 1. Let $(x_1, y_1), \ldots, (x_n, y_n)$ be drawn iid according to $p(x, y)$. The quantity
$$\hat{G}_n = \frac{\sum_{i=1}^{n} \ell(f_\theta(x_i), y_i)\, w(x_i, y_i, f_\theta)}{\sum_{i=1}^{n} w(x_i, y_i, f_\theta)} \quad (3)$$
is a consistent estimate of the generalized risk $G$ defined by Equation 2.
Proof. The proposition follows from Slutsky's theorem [3] applied to the numerator and denominator of Equation 3.
Consistency means asymptotic unbiasedness; that is, the expected value of the estimate $\hat{G}_n$ converges in distribution to the true risk $G$ for $n \to \infty$. We now observe that $F_\alpha$-measures, including
precision and recall, are consistent empirical estimates of generalized risks for appropriately chosen functions $w$.
Corollary 1. $F_\alpha$ is a consistent estimate of the generalized risk with $\mathcal{Y} = \{0, 1\}$, $w(x, y, f_\theta) = \alpha f_\theta(x) + (1 - \alpha) y$ and $\ell = 1 - \ell_{0/1}$, where $\ell_{0/1}$ denotes the zero-one loss.
Proof. The claim follows from Proposition 1 since
$$\hat{G}_n = \frac{\sum_{i=1}^{n} \left(1 - \ell_{0/1}(f_\theta(x_i), y_i)\right)\left(\alpha f_\theta(x_i) + (1-\alpha) y_i\right)}{\sum_{i=1}^{n} \left(\alpha f_\theta(x_i) + (1-\alpha) y_i\right)}
= \frac{\sum_{i=1}^{n} f_\theta(x_i)\, y_i}{\alpha \sum_{i=1}^{n} f_\theta(x_i) + (1-\alpha) \sum_{i=1}^{n} y_i}
= \frac{n_{tp}}{\alpha\,(n_{tp} + n_{fp}) + (1-\alpha)\,(n_{tp} + n_{fn})}. \;\; \square$$
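Corollary 1 can be checked numerically. In the following sketch (ours), f_alpha implements Equation 1 and generalized_risk_estimate implements Equation 3 with the stated choices of $w$ and $\ell$:

```python
import numpy as np

def f_alpha(y_pred, y_true, alpha):
    n_tp = np.sum((y_pred == 1) & (y_true == 1))
    n_fp = np.sum((y_pred == 1) & (y_true == 0))
    n_fn = np.sum((y_pred == 0) & (y_true == 1))
    return n_tp / (alpha * (n_tp + n_fp) + (1 - alpha) * (n_tp + n_fn))

def generalized_risk_estimate(y_pred, y_true, alpha):
    w = alpha * y_pred + (1 - alpha) * y_true    # weight function of Corollary 1
    gain = (y_pred == y_true).astype(float)      # 1 - zero-one loss
    return np.sum(gain * w) / np.sum(w)

y_pred = np.array([1, 1, 0, 0, 1])
y_true = np.array([1, 0, 1, 0, 1])
assert np.isclose(f_alpha(y_pred, y_true, 0.5),
                  generalized_risk_estimate(y_pred, y_true, 0.5))
```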
Having established and motivated the generalized risk functional, we now turn towards the problem
of acquiring a consistent estimate with minimal estimation error on a fixed labeling budget n. Test
instances x1 , ..., xn need not necessarily be drawn according to the distribution p. Instead, we study
an active estimation process that selects test instances according to an instrumental distribution q.
When instances are sampled from $q$, an estimator of the generalized risk can be defined as
$$\hat{G}_{n,q} = \frac{\sum_{i=1}^{n} \frac{p(x_i)}{q(x_i)}\, \ell(f_\theta(x_i), y_i)\, w(x_i, y_i, f_\theta)}{\sum_{i=1}^{n} \frac{p(x_i)}{q(x_i)}\, w(x_i, y_i, f_\theta)}, \quad (4)$$
where $(x_i, y_i)$ are drawn from $q(x)\,p(y|x)$. Weighting factors $\frac{p(x_i)}{q(x_i)}$ compensate for the discrepancy
between test and instrumental distributions. Because of the weighting factors, Slutsky's theorem
again implies that Equation 4 defines a consistent estimator for $G$, under the precondition that for all
$x \in \mathcal{X}$ with $p(x) > 0$ it holds that $q(x) > 0$. Note that Equation 3 is a special case of Equation 4,
using the instrumental distribution $q = p$.
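A direct transcription of Equation 4 (our sketch; the array names are hypothetical):

```python
import numpy as np

def weighted_risk_estimate(gain, w, p_x, q_x):
    """gain[i] = l(f(x_i), y_i); w[i] = w(x_i, y_i, f); p_x, q_x: densities at x_i."""
    v = p_x / q_x                      # importance weights p(x_i) / q(x_i)
    return np.sum(v * gain * w) / np.sum(v * w)
```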
The estimate $\hat{G}_{n,q}$ given by Equation 4 depends on the selected instances $(x_i, y_i)$, which are drawn
according to the distribution $q(x)\,p(y|x)$. Thus, $\hat{G}_{n,q}$ is a random variable whose distribution depends on $q$. Our overall goal is to determine the instrumental distribution $q$ such that the expected
deviation from the generalized risk is minimal for fixed labeling costs $n$:
$$q^* = \arg\min_{q} \; \mathrm{E}\left[\left(\hat{G}_{n,q} - G\right)^2\right].$$
2 Active Estimation through Variance Minimization
The bias-variance decomposition expresses the estimation error as a sum of a squared bias and a
variance term [5]:
$$\mathrm{E}\left[\left(\hat{G}_{n,q} - G\right)^2\right]
= \left(\mathrm{E}\left[\hat{G}_{n,q}\right] - G\right)^2 + \mathrm{E}\left[\left(\hat{G}_{n,q} - \mathrm{E}\left[\hat{G}_{n,q}\right]\right)^2\right] \quad (5)$$
$$= \mathrm{Bias}^2\left[\hat{G}_{n,q}\right] + \mathrm{Var}\left[\hat{G}_{n,q}\right]. \quad (6)$$
Because $\hat{G}_{n,q}$ is consistent, both $\mathrm{Bias}^2[\hat{G}_{n,q}]$ and $\mathrm{Var}[\hat{G}_{n,q}]$ vanish for $n \to \infty$. More specifically,
Lemma 1 shows that $\mathrm{Bias}^2[\hat{G}_{n,q}]$ is of order $\frac{1}{n^2}$.
Lemma 1 (Bias of Estimator). Let $\hat{G}_{n,q}$ be as defined in Equation 4. Then there exists $C \geq 0$ with
$$\left| \mathrm{E}\left[\hat{G}_{n,q}\right] - G \right| \leq \frac{C}{n}. \quad (7)$$
The proof can be found in the online appendix. Lemma 2 states that the active risk estimator $\hat{G}_{n,q}$
is asymptotically normally distributed, and characterizes its variance in the limit.
Lemma 2 (Asymptotic Distribution of Estimator). Let $\hat{G}_{n,q}$ be defined as in Equation 4. Then,
$$\sqrt{n}\left(\hat{G}_{n,q} - G\right) \xrightarrow{\;n\to\infty\;} \mathcal{N}\left(0, \sigma_q^2\right) \quad (8)$$
with asymptotic variance
$$\sigma_q^2 = \int \frac{p(x)}{q(x)} \left[\int w(x, y, f_\theta)^2 \left(\ell(f_\theta(x), y) - G\right)^2 p(y|x)\, dy\right] p(x)\, dx, \quad (9)$$
where $\xrightarrow{\;n\to\infty\;}$ denotes convergence in distribution.
A proof of Lemma 2 can be found in the appendix. Taking the variance of Equation 8, we obtain
$$n \,\mathrm{Var}\left[\hat{G}_{n,q}\right] \xrightarrow{\;n\to\infty\;} \sigma_q^2, \quad (10)$$
thus $\mathrm{Var}[\hat{G}_{n,q}]$ is of order $\frac{1}{n}$. As the bias term vanishes with $\frac{1}{n^2}$, the expected estimation error
$\mathrm{E}[(\hat{G}_{n,q} - G)^2]$ will be dominated by $\mathrm{Var}[\hat{G}_{n,q}]$. Moreover, Equation 10 indicates that $\mathrm{Var}[\hat{G}_{n,q}]$
can be approximately minimized by minimizing $\sigma_q^2$. In the following, we will consequently derive a
sampling distribution $q^*$ that minimizes the asymptotic variance $\sigma_q^2$ of the estimator $\hat{G}_{n,q}$.
2.1 Optimal Sampling Distribution
The following theorem derives the sampling distribution that minimizes the asymptotic variance $\sigma_q^2$:
Theorem 1 (Optimal Sampling Distribution). The instrumental distribution that minimizes the
asymptotic variance $\sigma_q^2$ of the generalized risk estimator $\hat{G}_{n,q}$ is given by
$$q^*(x) \propto p(x) \sqrt{\int w(x, y, f_\theta)^2 \left(\ell(f_\theta(x), y) - G\right)^2 p(y|x)\, dy}. \quad (11)$$
A proof of Theorem 1 is given in the appendix. Since F-measures are estimators of generalized
risks according to Corollary 1, we can now derive their variance-minimizing sampling distributions.
Corollary 2 (Optimal Sampling for $F_\alpha$). The sampling distribution that minimizes the asymptotic
variance of the $F_\alpha$-estimator resolves to
$$q^*(x) \propto \begin{cases}
p(x) \sqrt{p(f_\theta(x)|x)\,(1-G)^2 + \alpha^2 \left(1 - p(f_\theta(x)|x)\right) G^2} & : f_\theta(x) = 1 \\
p(x)\,(1-\alpha) \sqrt{\left(1 - p(f_\theta(x)|x)\right) G^2} & : f_\theta(x) = 0
\end{cases} \quad (12)$$
Algorithm 1 Active Estimation of $F_\alpha$-Measures
input Model parameters $\theta$, pool $D$, labeling costs $n$.
output Generalized risk estimate $\hat{G}_{n,q^*}$.
1: Compute optimal sampling distribution $q^*$ according to Corollary 2, 3, or 4, respectively.
2: for $i = 1, \ldots, n$ do
3:   Draw $x_i \sim q^*(x)$ from $D$ with replacement.
4:   Query label $y_i \sim p(y|x_i)$ from oracle.
5: end for
6: return $\frac{\sum_{i=1}^{n} \frac{1}{q(x_i)}\, \ell(f_\theta(x_i), y_i)\, w(x_i, y_i, f_\theta)}{\sum_{i=1}^{n} \frac{1}{q(x_i)}\, w(x_i, y_i, f_\theta)}$
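A sketch of Algorithm 1 on a finite pool (ours, assuming $p(x) = \frac{1}{m}$ as introduced in Section 2.2; oracle is a hypothetical labeling function that is queried at a cost, and the fixed random seed is our choice):

```python
import numpy as np

def active_f_estimate(pool_pred, q_star, oracle, alpha, n):
    """pool_pred: f_theta(x) over the pool; q_star: sampling weights over the pool."""
    q = q_star / q_star.sum()
    rng = np.random.default_rng(0)
    idx = rng.choice(len(pool_pred), size=n, replace=True, p=q)  # draw with replacement
    y = np.array([oracle(i) for i in idx])             # query labels at a cost
    f = pool_pred[idx]
    v = (1.0 / len(pool_pred)) / q[idx]                # p(x_i)/q(x_i) with p = 1/m
    w = alpha * f + (1 - alpha) * y                    # weight for F_alpha (Corollary 1)
    gain = (f == y).astype(float)                      # 1 - zero-one loss
    return np.sum(v * gain * w) / np.sum(v * w)
```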
Proof. According to Corollary 1, $F_\alpha$ estimates a generalized risk with $\mathcal{Y} = \{0, 1\}$, $w(x, y, f_\theta) =
\alpha f_\theta(x) + (1-\alpha) y$ and $\ell = 1 - \ell_{0/1}$. Starting from Theorem 1, we derive
$$q^*(x) \propto p(x) \sqrt{\sum_{y \in \{0,1\}} \left(\alpha f_\theta(x) + (1-\alpha) y\right)^2 \left(1 - \ell_{0/1}(f_\theta(x), y) - G\right)^2 p(y|x)} \quad (13)$$
$$= p(x) \left[\alpha^2 f_\theta(x) \left((1 - f_\theta(x)) - G\right)^2 p(y=0|x)
+ \left(1 - \alpha(1 - f_\theta(x))\right)^2 \left(f_\theta(x) - G\right)^2 p(y=1|x)\right]^{\frac{1}{2}} \quad (14)$$
The claim follows by case differentiation according to the value of $f_\theta(x)$.
Corollary 3 (Optimal Sampling for Recall). The sampling distribution that minimizes $\sigma_q^2$ for recall
resolves to
$$q^*(x) \propto \begin{cases}
p(x) \sqrt{p(f_\theta(x)|x)\,(1-G)^2} & : f_\theta(x) = 1 \\
p(x) \sqrt{\left(1 - p(f_\theta(x)|x)\right) G^2} & : f_\theta(x) = 0.
\end{cases} \quad (15)$$
Corollary 4 (Optimal Sampling for Precision). The sampling distribution that minimizes $\sigma_q^2$ for
precision resolves to
$$q^*(x) \propto p(x)\, f_\theta(x) \sqrt{(1 - 2G)\, p(f_\theta(x)|x) + G^2}. \quad (16)$$
Corollaries 3 and 4 directly follow from Corollary 2 for $\alpha = 0$ and $\alpha = 1$. Note that for standard
risks (that is, $w = 1$) Theorem 1 coincides with the optimal sampling distribution derived in [7].
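The following sketch (ours) evaluates the sampling weights of Corollary 2 over a pool; setting alpha = 0 or alpha = 1 recovers Corollaries 3 and 4. Since the true $p(y|x)$ and $G$ are unknown, the arguments p_conf and G stand in for the model-based approximations introduced in Section 2.2 below.

```python
import numpy as np

def q_star_f_alpha(pool_pred, p_conf, G, alpha):
    """pool_pred[i] = f_theta(x_i); p_conf[i] approximates p(f_theta(x_i)|x_i)."""
    pos = pool_pred == 1
    q = np.empty(len(pool_pred))
    q[pos] = np.sqrt(p_conf[pos] * (1 - G)**2 + alpha**2 * (1 - p_conf[pos]) * G**2)
    q[~pos] = (1 - alpha) * np.sqrt((1 - p_conf[~pos]) * G**2)
    return q / q.sum()   # the uniform factor p(x) = 1/m is absorbed by the normalization
```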
2.2 Empirical Sampling Distribution
Theorem 1 and Corollaries 2, 3, and 4 depend on the unknown test distribution p(x). We now turn
towards a setting in which a large pool D of unlabeled test instances is available. Instances from this
pool can be sampled and then labeled at a cost. Drawing instances from the pool replaces generating
them under the test distribution; that is, $p(x) = \frac{1}{m}$ for all $x \in D$.
Theorem 1 and its corollaries also depend on the true conditional $p(y|x)$. To implement the method,
we have to approximate the true conditional $p(y|x)$; we use the model $p(y|x; \theta)$. This approximation
constitutes an analogy to active learning: in active learning, the model-based output probability
$p(y|x; \theta)$ serves as the basis on which the least confident instances are selected. Note that as long as
$p(x) > 0$ implies $q(x) > 0$, the weighting factors ensure that such approximations do not introduce
an asymptotic bias in our estimator (Equation 4). Finally, Theorem 1 and its corollaries depend on
the true generalized risk $G$. $G$ is replaced by an intrinsic generalized risk calculated from Equation 2,
where the integral over $\mathcal{X}$ is replaced by a sum over the pool, $p(x) = \frac{1}{m}$, and $p(y|x) \approx p(y|x; \theta)$.
Algorithm 1 summarizes the procedure for active estimation of F-measures. A special case occurs
when the labeling process is deterministic. Since instances are sampled with replacement, elements
may be drawn more than once. In this case, labels can be looked up rather than be queried from the
deterministic labeling oracle repeatedly. The loop may then be continued until the labeling budget
is exhausted. Note that F-measures are undefined when the denominator is zero, which is the case
when all drawn examples have a weight $w$ of zero. For instance, precision is undefined when no
positive examples have been drawn.
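The intrinsic generalized risk used above can be computed from the pool as follows (our sketch; p_pos stands for the model probability $p(y=1|x; \theta)$):

```python
import numpy as np

def intrinsic_g(pool_pred, p_pos, alpha):
    """Equation 2 over the pool with p(x) = 1/m and p(y|x) replaced by p(y|x; theta)."""
    num, den = 0.0, 0.0
    for f, p1 in zip(pool_pred, p_pos):
        for y, p_y in ((0, 1.0 - p1), (1, p1)):
            w = alpha * f + (1 - alpha) * y
            num += (1.0 if f == y else 0.0) * w * p_y   # gain = 1 - zero-one loss
            den += w * p_y
    return num / den
```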
2.3 Confidence Intervals
Lemma 2 shows that the estimator $\hat{G}_{n,q}$ is asymptotically normally distributed and characterizes its asymptotic variance. A consistent estimate of $\sigma_q^2$ is obtained from the labeled sample
$(x_1, y_1), \ldots, (x_n, y_n)$ drawn from the distribution $q(x)\,p(y|x)$ by computing the empirical variance
$$S_{n,q}^2 = \frac{1}{\sum_{i=1}^{n} \frac{p(x_i)}{q(x_i)}} \sum_{i=1}^{n} \frac{p(x_i)}{q(x_i)}\, w(x_i, y_i, f_\theta)^2 \left(\ell(f_\theta(x_i), y_i) - \hat{G}_{n,q}\right)^2.$$
A two-sided confidence interval $[\hat{G}_{n,q} - z, \hat{G}_{n,q} + z]$ with coverage $1 - \delta$ is now given by
$z = F_n^{-1}\left(1 - \frac{\delta}{2}\right) \frac{S_{n,q}}{\sqrt{n}}$, where $F_n^{-1}$ is the inverse cumulative distribution function of the Student's t
distribution. As in the standard case of drawing test instances $x_i$ from the original distribution $p$,
such confidence intervals are approximate for finite $n$, but become exact for $n \to \infty$.
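A sketch of the interval computation (ours); using $n - 1$ degrees of freedom for the Student's t quantile is our assumption, as the section does not specify the degrees of freedom.

```python
import numpy as np
from scipy import stats

def confidence_interval(gain, w, p_x, q_x, delta=0.05):
    n = len(gain)
    v = p_x / q_x
    g_hat = np.sum(v * gain * w) / np.sum(v * w)            # estimator of Equation 4
    s2 = np.sum(v * w**2 * (gain - g_hat)**2) / np.sum(v)   # empirical variance S^2_{n,q}
    z = stats.t.ppf(1 - delta / 2, df=n - 1) * np.sqrt(s2 / n)
    return g_hat - z, g_hat + z
```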
3 Empirical Studies
We compare active estimation of $F_\alpha$-measures according to Algorithm 1 (denoted activeF) to estimation based on a sample of instances drawn uniformly from the pool (denoted passive). We also
consider the active estimator for risks presented in [7]. Instances are drawn according to the optimal sampling distribution $q^*_{0/1}$ for zero-one risk (Derivation 1 in [7]); the $F_\alpha$-measure is computed
according to Equation 4 using $q = q^*_{0/1}$ (denoted activeerr).
3.1 Experimental Setting and Domains
For each experimental domain, data is split into a training set and a pool of test instances. We
train a kernelized regularized logistic regression model $p(y|x; \theta)$ (using the implementation of Yamada [11]). All methods operate on identical labeling budgets n. The evaluation process is averaged
over 1,000 repetitions. In case one of the repetitions results in an undefined estimate, the entire experiment is discarded (i.e., there is no data point for the method in the corresponding diagram).
Spam filtering domain. Spammers impose a shift on the distribution over time as they implement
new templates and generators. In our experiments, a filter trained in the past has to be evaluated
with respect to a present distribution of emails. We collect 169,612 emails from an email service
provider between June 2007 and April 2010; of these, 42,165 emails received by February 2008 are
used for training. Emails are represented by 541,713 binary bag-of-word features. Approximately
5% of all emails fall into the positive class non-spam.
Text classification domain. The Reuters-21578 text classification task [4] allows us to study the
effect of class skew, and serves as a prototypical domain for active learning. We experiment on the
ten most frequently occurring topics. We employ an active learner that always queries the example
with minimal functional margin p(f? (x)|x; ?) ? maxy6=f? (x) p(y|x; ?) [9]. The learning process is
initialized with one labeled training instance from each class, another 200 class labels are queried.
Digit recognition domain. We also study a digit recognition domain in which training and test data
originate from different sources. A detailed description is included in the online appendix.
3.2 Empirical Results
We study the performance of active and passive estimates as a function of (a) the precision-recall
trade-off parameter $\alpha$, (b) the discrepancy between training and test distribution, and (c) class skew
in the test distribution. Point (b) is of interest because active estimates require the approximation
$p(y|x) \approx p(y|x; \theta)$; this assumption is violated when training and test distributions differ.
Effect of the trade-off parameter $\alpha$. For the spam filtering domain, Figure 1 shows the average
absolute estimation error for F0 (recall), F0.5 , and F1 (precision) estimates on a test set of 33,296
emails received between February 2008 and October 2008. The active generalized risk estimate
activeF significantly outperforms the passive estimate passive for all three measures. In order
to reach the estimation accuracy of passive with a labeling budget of n = 800, activeF requires
fewer than 150 (recall), 200 (F0.5 ), or 100 (precision) labeled test instances. Estimates obtained from
activeF are at least as accurate as those of activeerr, and more accurate for high $\alpha$ values. Results
obtained in the digit recognition domain are consistent with these findings (see online appendix).

[Figure 1 contains three panels (Recall, F0.5-measure, Precision), each plotting the absolute estimation error over labeling costs n for passive, activeF, and activeerr.]
Figure 1: Spam filtering: Estimation error over labeling costs. Error bars indicate the standard error.

[Figure 2 contains three panels: the optimal sampling distribution m q*(x) over the log-odds (left); the ratio of estimation errors and the likelihood over time (center); and the F0.5 estimation error (n = 500) over the positive class fraction (right).]
Figure 2: Spam filtering: Optimal sampling distribution $q^*$ for $F_\alpha$ over log-odds (left). Ratio of
passive and active estimation error, error bars indicate standard deviation (center). Estimation error
over class ratio, logarithmic scale, error bars indicate standard errors (right).
Figure 2 (left) shows the sampling distribution $q^*(x)$ for recall, precision and $F_{0.5}$-measure in the
spam filtering domain as a function of the classifier's confidence, characterized by the log-odds ratio
$\log \frac{p(y=1|x;\theta)}{p(y=0|x;\theta)}$. The figure also shows the optimal sampling distribution for zero-one risk as used
in activeerr (denoted "0/1-Risk"). We observe that the precision estimator dismisses all examples
with $f_\theta(x) = 0$; this is intuitive because precision is a function of true-positive and false-positive
examples only. By contrast, the recall estimator selects examples on both sides of the decision
boundary, as it has to estimate both the true positive and the false negative rate. The optimal sampling
distribution for zero-one risk is symmetric, it prefers instances close to the decision boundary.
Effect of discrepancy between training and test distribution. We keep the training set of emails
fixed and move the time interval from which test instances are drawn increasingly further away
into the future, thereby creating a growing gap between training and test distribution. Specifically,
we divide 127,447 emails received between February 2008 and April 2010 into ten different test
sets spanning approximately 2.5 months each. Figure 2 (center, red curve) shows the discrepancy
between training and test distribution measured in terms of the exponentiated average log-likelihood
of the test labels given the model parameters ?. The likelihood at first continually decreases. It grows
again for the two most recent batches; this coincides with a recent wave of text-based vintage spam.
?
Figure 2 (center, blue curve) also shows the ratio of passive-to-active estimation errors |G?|Gn ?G|
.A
n,q ? ?G|
value above one indicates that the active estimate is more accurate than a passive estimate. The active
estimate consistently outperforms the passive estimate; its advantage diminishes when training and
test distributions diverge and the assumption of p(y|x) ? p(y|x; ?) becomes less accurate.
Effect of class skew. In the spam filtering domain we artificially sub-sampled data to different ratios
of spam and non-spam emails. Figure 2 (right) shows the performance of activeF , passive, and
activeerr for F0.5 estimation as a function of class skew. We observe that activeF outperforms
passive consistently. Furthermore, activeF outperforms activeerr for imbalanced classes, while
the approaches perform comparably when classes are balanced. This finding is consistent with the
intuition that accuracy and F -measure diverge more strongly for imbalanced classes.
6
F0.5?measure (class fraction: 4.4%)
F0.5?measure (class fraction: 51.0%)
activeerr
0.06
0.04
0.02
200
400
600
labeling costs n
800
passive
activeF
activeerr
0.015
0.01
0.005
0
200
400
600
labeling costs n
800
estimation error (absolute)
0.08
0
F0.5?measure (n=800)
0.02
passive
activeF
estimation error (absolute)
estimation error (absolute)
0.1
passive
activeF
0.1
activeerr
0.01
0.01
0.03
0.1
positive class fraction
0.32
Figure 3: Text classification: Estimation error over number of labeled data for infrequent (left) and
frequent (center) class. Estimation error over class ratio for all ten classes, logarithmic scale (right).
Error bars indicate the standard error.
In the text classification domain we estimate the F0.5 -measure for ten one-versus-rest classifiers.
Figure 3 shows the estimation error of activeF , passive, and activeerr for an infrequent class
("crude", 4.41%, left) and a frequent class ("earn", 51.0%, center). These results are representative
for other frequent and infrequent classes; all results are included in the online appendix. Figure 3
(right) shows the estimation error of activeF, passive, and activeerr on all ten one-versus-rest
problems as a function of the problem's class skew. We again observe that activeF outperforms
passive consistently, and activeF outperforms activeerr for strongly skewed class distributions.
4 Related Work
Sawade et al. [7] derive a variance-minimizing sampling distribution for risks. Their result does not
cover F-measures. Our experimental findings show that for estimating F-measures their variance-minimizing sampling distribution performs worse than the sampling distributions characterized by
Theorem 1, especially for skewed class distributions.
Active estimation of generalized risks can be considered to be a dual problem of active learning; in
active learning, the goal of the selection process is to minimize the variance of the predictions or
the variance of the model parameters, while in active evaluation the variance of the risk estimate
is reduced. The variance-minimizing sampling distribution derived in Section 2.1 depends on the
unknown conditional distribution p(y|x). We use the model itself to approximate this distribution
and decide on instances whose class labels are queried. This is analogous to many active learning
algorithms. Specifically, Bach derives a sampling distribution for active learning under the assumption that the current model gives a good approximation to the conditional probability p(y|x) [1]. To
compensate for the bias incurred by the instrumental distribution, several active learning algorithms
use importance weighting: for regression [8], exponential family models [1], or SVMs [2].
Finally, the proposed active estimation approach can be considered an instance of the general principle of importance sampling [6], which we employ in the context of generalized risk estimation.
5 Conclusions
$F_\alpha$-measures are defined as empirical estimates; we have shown that they are consistent estimates
of a generalized risk functional which Proposition 1 identifies. Generalized risks can be estimated
actively by sampling test instances from an instrumental distribution q. An analysis of the sources
of estimation error leads to an instrumental distribution $q^*$ that minimizes estimator variance. The
optimal sampling distribution depends on the unknown conditional p(y|x); the active generalized
risk estimator approximates this conditional by the model to be evaluated.
Our empirical study supports the conclusion that the advantage of active over passive evaluation
is particularly strong for skewed classes. The advantage of active evaluation is also correlated
to the quality of the model as measured by the model-based likelihood of the test labels. In our
experiments, active evaluation consistently outperformed passive evaluation, even for the greatest
divergence between training and test distribution that we could observe.
Appendix
Proof of Lemma 2
Let $(x_1, y_1), \ldots, (x_n, y_n)$ be drawn according to $q(x)\,p(y|x)$. Let $\hat{G}'_{n,q} = \sum_{i=1}^{n} v_i \ell_i w_i$ and
$W_n = \sum_{i=1}^{n} v_i w_i$ with $v_i = \frac{p(x_i)}{q(x_i)}$, $w_i = w(x_i, y_i, f_\theta)$ and $\ell_i = \ell(f_\theta(x_i), y_i)$. We note that
$\mathrm{E}[\hat{G}'_{n,q}] = n G\, \mathrm{E}[w_i]$ and $\mathrm{E}[W_n] = n\, \mathrm{E}[w_i]$. The random variables $w_1 v_1, \ldots, w_n v_n$ and
$w_1 \ell_1 v_1, \ldots, w_n \ell_n v_n$ are iid, therefore the central limit theorem implies that $\frac{1}{n} \hat{G}'_{n,q}$ and $\frac{1}{n} W_n$ are asymptotically normally
distributed with
$$\sqrt{n}\left(\frac{1}{n} \hat{G}'_{n,q} - G\, \mathrm{E}[w_i]\right) \xrightarrow{\;n\to\infty\;} \mathcal{N}\left(0, \mathrm{Var}[w_i \ell_i v_i]\right) \quad (17)$$
$$\sqrt{n}\left(\frac{1}{n} W_n - \mathrm{E}[w_i]\right) \xrightarrow{\;n\to\infty\;} \mathcal{N}\left(0, \mathrm{Var}[w_i v_i]\right) \quad (18)$$
where $\xrightarrow{\;n\to\infty\;}$ denotes convergence in distribution. Application of the delta method to the function $f(x, y) = \frac{x}{y}$ yields
$$\sqrt{n}\left(\frac{\frac{1}{n} \hat{G}'_{n,q}}{\frac{1}{n} W_n} - G\right) \xrightarrow{\;n\to\infty\;} \mathcal{N}\left(0, \nabla f(G\,\mathrm{E}[w_i], \mathrm{E}[w_i])^T\, \Sigma\, \nabla f(G\,\mathrm{E}[w_i], \mathrm{E}[w_i])\right),$$
where $\nabla f$ denotes the gradient of $f$ and $\Sigma$ is the asymptotic covariance matrix of the input arguments
$$\Sigma = \begin{pmatrix} \mathrm{Var}[w_i \ell_i v_i] & \mathrm{Cov}[w_i \ell_i v_i, w_i v_i] \\ \mathrm{Cov}[w_i \ell_i v_i, w_i v_i] & \mathrm{Var}[w_i v_i] \end{pmatrix}.$$
Furthermore,
$$\begin{aligned}
\nabla f(G\,\mathrm{E}[w_i], \mathrm{E}[w_i])^T\, \Sigma\, \nabla f(G\,\mathrm{E}[w_i], \mathrm{E}[w_i])
&= \mathrm{Var}[w_i \ell_i v_i] - 2G\, \mathrm{Cov}[w_i v_i, w_i \ell_i v_i] + G^2\, \mathrm{Var}[w_i v_i] \\
&= \mathrm{E}\left[w_i^2 \ell_i^2 v_i^2\right] - 2G\, \mathrm{E}\left[w_i^2 \ell_i v_i^2\right] + G^2\, \mathrm{E}\left[w_i^2 v_i^2\right] \\
&= \iint \left(\frac{p(x)}{q(x)}\right)^2 w(x, y, f_\theta)^2 \left(\ell(f_\theta(x), y) - G\right)^2 p(y|x)\, q(x)\, dy\, dx.
\end{aligned}$$
From this, the claim follows by canceling $q(x)$.
Proof of Theorem 1
To minimize the variance with respect to the function $q$ under the normalization constraint
$\int q(x)\, dx = 1$, we define the Lagrangian with Lagrange multiplier $\lambda$:
$$L[q, \lambda] = \int \frac{c(x)}{q(x)}\, dx + \lambda \left(\int q(x)\, dx - 1\right)
= \int \underbrace{\left(\frac{c(x)}{q(x)} + \lambda q(x)\right)}_{= K(q(x),\, x)} dx - \lambda, \quad (19)$$
where $c(x) = p(x)^2 \int w(x, y, f_\theta)^2 \left(\ell(f_\theta(x), y) - G\right)^2 p(y|x)\, dy$. The optimal function for the constrained problem satisfies the Euler-Lagrange equation $\frac{\partial K}{\partial q(x)} = -\frac{c(x)}{q(x)^2} + \lambda = 0$. A solution for this
equation under the side condition is given by
$$q^*(x) = \frac{\sqrt{c(x)}}{\int \sqrt{c(x')}\, dx'}. \quad (20)$$
Note that we dismiss the negative solution, since $q(x)$ is a probability density function. Resubstitution of $c$ in Equation 20 implies the theorem.
Acknowledgments
We gratefully acknowledge that this work was supported by a Google Research Award. We wish to
thank Michael Br?uckner for his help with the experiments on spam data.
References
[1] F. Bach. Active learning for misspecified generalized linear models. In Advances in Neural Information Processing Systems, 2007.
[2] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In Proceedings of the International Conference on Machine Learning, 2009.
[3] H. Cramér. Mathematical Methods of Statistics, chapter 20. Princeton University Press, 1946.
[4] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[5] S. Geman, E. Bienenstock, and R. Doursat. Neural networks and the bias/variance dilemma. Neural Computation, 4:1–58, 1992.
[6] J. Hammersley and D. Handscomb. Monte Carlo Methods. Taylor & Francis, 1964.
[7] C. Sawade, N. Landwehr, S. Bickel, and T. Scheffer. Active risk estimation. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[8] M. Sugiyama. Active learning in approximately linear regression based on conditional expectation of generalization error. Journal of Machine Learning Research, 7:141–166, 2006.
[9] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. Journal of Machine Learning Research, pages 45–66, 2002.
[10] C. van Rijsbergen. Information Retrieval. Butterworths, 2nd edition, 1979.
[11] M. Yamada, M. Sugiyama, and T. Matsui. Semi-supervised speaker identification under covariate shift. Signal Processing, 90(8):2353–2361, 2010.
Constrained Differential Optimization
John C. Platt
Alan H. Barr
California Institute of Technology, Pasadena, CA 91125
Abstract
Many optimization models of neural networks need constraints to restrict the space of outputs to
a subspace which satisfies external criteria. Optimizations using energy methods yield "forces" which
act upon the state of the neural network. The penalty method, in which quadratic energy constraints
are added to an existing optimization energy, has become popular recently, but is not guaranteed
to satisfy the constraint conditions when there are other forces on the neural model or when there
are multiple constraints. In this paper, we present the basic differential multiplier method (BDMM),
which satisfies constraints exactly; we create forces which gradually apply the constraints over time,
using "neurons" that estimate Lagrange multipliers.
The basic differential multiplier method is a differential version of the method of multipliers
from Numerical Analysis. We prove that the differential equations locally converge to a constrained
minimum.
Examples of applications of the differential method of multipliers include enforcing permutation
codewords in the analog decoding problem and enforcing valid tours in the traveling salesman problem.
1. Introduction
Optimization is ubiquitous in the field of neural networks. Many learning algorithms, such as
back-propagation,18 optimize by minimizing the difference between expected solutions and observed
solutions. Other neural algorithms use differential equations which minimize an energy to solve
a specified computational problem, such as associative memory,9 differential solution of the traveling salesman problem,5,10 analog decoding,15 and linear programming.19 Furthermore, Lyapunov
methods show that various models of neural behavior find minima of particular functions.4,9
Solutions to a constrained optimization problem are restricted to a subset of the solutions of the
corresponding unconstrained optimization problem. For example, a mutual inhibition circuit6 requires
one neuron to be "on" and the rest to be "off". Another example is the traveling salesman problem,13
where a salesman tries to minimize his travel distance, subject to the constraint that he must visit
every city exactly once. A third example is the curve fitting problem, where elastic splines are as
smooth as possible, while still going through data points.3 Finally, when digital decisions are being
made on analog data, the answer is constrained to be bits, either 0 or 1.14
A constrained optimization problem can be stated as
minimize f(x),
subject to g(x) = 0,   (1)
where x is the state of the neural network, a position vector in a high-dimensional space; f(x) is a
scalar energy, which can be imagined as the height of a landscape as a function of position x; g(x) = 0
is a scalar equation describing a subspace of the state space. During constrained optimization, the
state should be attracted to the subspace g(x) = 0, then slide along the subspace until it reaches the
locally smallest value of f(x) on g(x) = 0.
In section 2 of the paper, we describe classical methods of constrained optimization, such as the
penalty method and Lagrange multipliers.
Section 3 introduces the basic differential multiplier method (BDMM) for constrained optimization, which calculates a good local minimum. If the constrained optimization problem is convex, then
the local minimum is the global minimum; in general, finding the global minimum of non-convex
problems is fairly difficult.
In section 4, we show a Lyapunov function for the BDMM by drawing on an analogy from
physics.
In section 5, augmented Lagrangians, an idea from optimization theory, enhance the convergence
properties of the BDMM.
In section 6, we apply the differential algorithm to two neural problems, and discuss the insensitivity of BDMM to choice of parameters. Parameter sensitivity is a persistent problem in neural
networks.
2. Classical Methods of Constrained Optimization
This section discusses two methods of constrained optimization, the penalty method and Lagrange
multipliers. The penalty method has been previously used in differential optimization. The basic
differential multiplier method developed in this paper applies Lagrange multipliers to differential
optimization.
2.1. The Penalty Method
The penalty method is analogous to adding a rubber band which attracts the neural state to
the subspace g(x) = 0. The penalty method adds a quadratic energy term which penalizes violations of constraints.8 Thus, the constrained minimization problem (1) is converted to the following
unconstrained minimization problem:
E_penalty(x) = f(x) + c (g(x))².   (2)
Figure 1. The penalty method makes a trough in state space
The penalty method can be extended to fulfill multiple constraints by using more than one rubber
band. Namely, the constrained optimization problem
minimize f(x),
subject to g_α(x) = 0,   α = 1, 2, ..., n;   (3)
is converted into the unconstrained optimization problem
minimize E_penalty(x) = f(x) + Σ_{α=1}^{n} c_α (g_α(x))².   (4)
The penalty method has several convenient features. First, it is easy to use. Second, it is globally
convergent to the correct answer as c_α → ∞.8 Third, it allows compromises between constraints. For
example, in the case of a spline curve fitting input data, there can be a compromise between fitting
the data and making a smooth spline.
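As a concrete illustration of the penalty method of equation (2) (a sketch we add here, not code from the paper), the following runs plain gradient descent on E_penalty for the toy problem of finding the point on the line x + y = 1 nearest the origin; the step size and penalty strength c are arbitrary choices.

```python
def penalty_descent(c=10.0, lr=0.01, steps=5000):
    # minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0
    x = y = 0.0
    for _ in range(steps):
        g = x + y - 1.0
        # gradient of E_penalty = f + c * g^2
        x -= lr * (2 * x + 2 * c * g)
        y -= lr * (2 * y + 2 * c * g)
    return x, y

x, y = penalty_descent()
print(x, y, x + y)  # near (0.5, 0.5), but x + y falls short of 1 for finite c
```

The residual in the printed constraint value illustrates the first disadvantage discussed next.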
However, the penalty method has a number of disadvantages. First, for finite constraint strengths
it doesn't fulfill the constraints exactly. Using multiple rubber band constraints is like building
a machine out of rubber bands: the machine would not hold together perfectly. Second, as more
constraints are added, the constraint strengths get harder to set, especially when the size of the
network (the dimensionality of x) gets large.
In addition, there is a dilemma to the setting of the constraint strengths. If the strengths are small,
then the system finds a deep local minimum, but does not fulfill all the constraints. If the strengths
are large, then the system quickly fulfills the constraints, but gets stuck in a poor local minimum.
2.2. Lagrange Multipliers
Lagrange multiplier methods also convert constrained optimization problems into unconstrained
extremization problems. Namely, a solution to equation (1) is also a critical point of the energy
E_Lagrange(x, λ) = f(x) + λ g(x).   (5)
λ is called the Lagrange multiplier for the constraint g(x) = 0.8
A direct consequence of equation (5) is that the gradient of f is collinear to the gradient of g at
the constrained extrema (see Figure 2). The constant of proportionality between ∇f and ∇g is −λ:
∇E_Lagrange = 0 = ∇f + λ ∇g.   (6)
We use the collinearity of ∇f and ∇g in the design of the BDMM.
Figure 2. At the constrained minimum, ∇f = −λ ∇g
A simple example shows that Lagrange multipliers provide the extra degrees of freedom necessary
to solve constrained optimization problems. Consider the problem of finding a point (x, y) on the
line x + y = 1 that is closest to the origin. Using Lagrange multipliers,
E_Lagrange = x² + y² + λ(x + y − 1).   (7)
Now, take the derivative with respect to all variables, x, y, and λ:
∂E_Lagrange/∂x = 2x + λ = 0,
∂E_Lagrange/∂y = 2y + λ = 0,
∂E_Lagrange/∂λ = x + y − 1 = 0.   (8)
With the extra variable λ, there are now three equations in three unknowns; solving them gives x = y = 1/2 and λ = −1. In addition, the last equation is precisely the constraint equation.
3. The Basic Differential Multiplier Method for Constrained Optimization
This section presents a new "neural" algorithm for constrained optimization, consisting of differential equations which estimate Lagrange multipliers. The neural algorithm is a variation of the
method of multipliers, first presented by Hestenes7 and Powell16.
3.1. Gradient Descent does not work with Lagrange Multipliers
The simplest differential optimization algorithm is gradient descent, where the state variables of
the network slide downhill, opposite the gradient. Applying gradient descent to the energy in equation
(5) yields
ẋᵢ = −∂E_Lagrange/∂xᵢ = −∂f/∂xᵢ − λ ∂g/∂xᵢ,
λ̇ = −∂E_Lagrange/∂λ = −g(x).   (9)
Note that there is an auxiliary differential equation for λ, which is an additional "neuron" necessary
to apply the constraint g(x) = 0. Also, recall that when the system is at a constrained extremum,
∇f = −λ ∇g, hence ẋ = 0.
Energies involving Lagrange multipliers, however, have critical points which tend to be saddle
points. Consider the energy in equation (5). If x is frozen, the energy can be decreased by sending
λ to +∞ or −∞.
Gradient descent does not work with Lagrange multipliers, because a critical point of the energy
in equation (5) need not be an attractor for (9). A stationary point must be a local minimum in order
for gradient descent to converge.
3.2. The New Algorithm: the Basic Differential Multiplier Method
We present an alternative to differential gradient descent that estimates the Lagrange multipliers,
so that the constrained minima are attractors of the differential equations, instead of "repulsors." The
differential equations that solve (1) are
ẋᵢ = −∂f/∂xᵢ − λ ∂g/∂xᵢ,
λ̇ = +g(x).   (10)
Equation (10) is similar to equation (9). As in equation (9), constrained extrema of the energy
(5) are stationary points of equation (10). Notice, however, the sign inversion in the equation for λ̇,
as compared to equation (9). Equation (10) performs gradient ascent on λ. The sign flip
makes the BDMM stable, as shown in section 4.
Equation (10) corresponds to a neural network with anti-symmetric connections between the λ
neuron and all of the x neurons.
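The BDMM of equation (10) can be simulated directly. Below is a minimal sketch (ours) using forward Euler steps on the toy problem above (f = x² + y², g = x + y − 1); the step size is an arbitrary choice, and a real implementation would use a proper ODE integrator.

```python
def bdmm(lr=0.01, steps=20000):
    x = y = lam = 0.0
    for _ in range(steps):
        g = x + y - 1.0
        dx = -2 * x - lam          # -df/dx - lambda * dg/dx
        dy = -2 * y - lam
        dlam = g                   # gradient *ascent* on lambda
        x += lr * dx
        y += lr * dy
        lam += lr * dlam
    return x, y, lam

print(bdmm())  # converges to x = y = 0.5, lambda = -1, with g = 0 exactly
```

Note that the recovered λ = −1 matches the Lagrange-multiplier solution of equation (8), and the constraint is satisfied exactly rather than approximately as in the penalty method.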
3.3. Extensions to the Algorithm
One extension to equation (10) is an algorithm for constrained minimization with multiple constraints. Adding an extra neuron for every equality constraint and summing all of the constraint forces
creates the energy
E_multiple = f(x) + Σ_α λ_α g_α(x),   (11)
which yields the differential equations
ẋᵢ = −∂f/∂xᵢ − Σ_α λ_α ∂g_α/∂xᵢ,
λ̇_α = +g_α(x).   (12)
Another extension is constrained minimization with inequality constraints. As in traditional
optimization theory,8 one uses extra slack variables to convert inequality constraints into equality
constraints. Namely, a constraint of the form h(x) ≥ 0 can be expressed as
g(x, z) = h(x) − z² = 0.   (13)
Since z² is always nonnegative, h(x) is constrained to be nonnegative. The slack variable z is
treated like a component of x in equation (10). An inequality constraint requires two extra neurons,
one for the slack variable z and one for the Lagrange multiplier λ.
Alternatively, the inequality constraint can be represented as an equality constraint. For example,
if h(x) ≥ 0 is required, then the optimization can be constrained with g(x) = h(x) when h(x) ≤ 0, and
g(x) = 0 otherwise.
4. Why the algorithm works
The system of differential equations (10) (the BDMM) gradually fulfills the constraints. Notice
that the function g(x) can be replaced by kg(x), without changing the location of the constrained
minimum. As k is increased, the state begins to undergo damped oscillation about the constraint
subspace g(x) = 0. As k is increased further, the frequency of the oscillations increases, and the time
to convergence increases.
Figure 3. The state is attracted to the constraint subspace
The damped oscillations of equation (10) can be explained by combining both of the differential
equations into one second-order differential equation.
ẍᵢ + Σⱼ Aᵢⱼ ẋⱼ + g(x) ∂g/∂xᵢ = 0.   (14)
Equation (14) is the equation for a damped mass system, with an inertia term ẍᵢ, a damping matrix
Aᵢⱼ = ∂²f/∂xᵢ∂xⱼ + λ ∂²g/∂xᵢ∂xⱼ,   (15)
and an internal force, g ∂g/∂xᵢ, which is the derivative of the internal energy
U = ½ (g(x))².   (16)
If the system is damped and the state remains bounded, the state falls into a constrained minimum.
As in physics, we can construct a total energy of the system, which is the sum of the kinetic and
potential energies.
E = T + U = Σᵢ ½(ẋᵢ)² + ½(g(x))².   (17)
If the total energy is decreasing with time and the state remains bounded, then the system will
dissipate any extra energy, and will settle down into the state where
ẋᵢ = 0,   g(x) = 0,   (18)
which is a constrained extremum of the original problem in equation (1).
The time derivative of the total energy in equation (17) is
Ė = −Σᵢ,ⱼ ẋᵢ Aᵢⱼ ẋⱼ.   (19)
If the damping matrix Aᵢⱼ is positive definite, the system converges to fulfill the constraints.
BDMM always converges for a special case of constrained optimization: quadratic programming.
A quadratic programming problem has a quadratic function f(x) and a piecewise linear continuous
function g(x), such that
f(x) = ½ Σᵢ,ⱼ aᵢⱼ xᵢ xⱼ + Σᵢ bᵢ xᵢ,  with aᵢⱼ positive definite and g(x) piecewise linear and continuous.   (20)
Under these circumstances, the damping matrix Aᵢⱼ is positive definite for all x and λ, so that the
system converges to the constraints.
4.1. Multiple constraints
For the case of multiple constraints, the total energy for equation (12) is
E = T + U = Σᵢ ½(ẋᵢ)² + Σ_α ½ g_α(x)²,   (21)
and the time derivative is
Ė = −Σᵢ,ⱼ ẋᵢ Aᵢⱼ ẋⱼ.   (22)
Again, BDMM solves a quadratic programming problem, if a solution exists. However, it is
possible to pose a problem that has contradictory constraints. For example,
g₁(x) = x = 0,   g₂(x) = x − 1 = 0.   (23)
In the case of conflicting constraints, the BDMM compromises, trying to make each constraint g_α as
small as possible. However, the Lagrange multipliers λ_α go to ±∞ as the constraints oppose each
other. It is possible, however, to arbitrarily limit the λ_α at some large absolute value.
LaSalle's invariance theorem 12 is used to prove that the BDMM eventually fulfills the constraints.
Let G be an open subset of Rⁿ. Let F be the subset of G*, the closure of G, where the system of
differential equations (12) is at an equilibrium:
F = { (x, λ) ∈ G* : ẋᵢ = 0 and λ̇_α = 0 for all i, α }.   (24)
If the damping matrix
Aᵢⱼ = ∂²f/∂xᵢ∂xⱼ + Σ_α λ_α ∂²g_α/∂xᵢ∂xⱼ   (25)
is positive definite in G, if xᵢ(t) and λ_α(t) are bounded and remain in G for all time, and if F
is non-empty, then F is the largest invariant set in G*; hence, by LaSalle's invariance theorem, the
system xᵢ(t), λ_α(t) approaches F as t → ∞.
5. The Modified Differential Method of Multipliers
This section presents the modified differential multiplier method (MDMM), which is a modification of the BDMM with more robust convergence properties. For a given constrained optimization
problem, it is frequently necessary to alter the BDMM to have a region of positive damping surrounding the constrained minima. The non-differential method of multipliers from Numerical Analysis also
has this difficulty. 2 Numerical Analysis combines the multiplier method with the penalty method to
yield a modified multiplier method that is locally convergent around constrained minima. 2
The BDMM is completely compatible with the penalty method. If one adds a penalty force to
equation (10) corresponding to a quadratic energy
E_penalty = (c/2)(g(x))²,   (26)
then the set of differential equations for the MDMM is
ẋᵢ = −∂f/∂xᵢ − λ ∂g/∂xᵢ − c g ∂g/∂xᵢ,
λ̇ = g(x).   (27)
The extra force from the penalty does not change the position of the stationary points of the differential
equations, because the penalty force is 0 when g(x) = 0. The damping matrix is modified by the
penalty force to be
Aᵢⱼ = ∂²f/∂xᵢ∂xⱼ + λ ∂²g/∂xᵢ∂xⱼ + c (∂g/∂xᵢ)(∂g/∂xⱼ) + c g ∂²g/∂xᵢ∂xⱼ.   (28)
There is a theorem 1 that states that there exists a c* > 0 such that if c > c*, the damping matrix
in equation (28) is positive definite at constrained minima. Using continuity, the damping matrix is
positive definite in a region R surrounding each constrained minimum. If the system starts in the
region R and remains bounded and in R, then the convergence theorem at the end of section 4 is
applicable, and MDMM will converge to a constrained minimum.
The minimum necessary penalty strength c for the MDMM is usually much less than the strength
needed by the penalty method alone. 2
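The MDMM of equation (27) adds the penalty damping force to the same update. The sketch below (ours; the value of c is an arbitrary choice assumed to exceed the theorem's c*) shows the one-line change relative to the BDMM sketch in section 3.2.

```python
def mdmm(c=1.0, lr=0.01, steps=20000):
    # same toy problem: f = x^2 + y^2, g = x + y - 1
    x = y = lam = 0.0
    for _ in range(steps):
        g = x + y - 1.0
        dx = -2 * x - lam - c * g  # extra damping term -c * g * dg/dx
        dy = -2 * y - lam - c * g
        x += lr * dx
        y += lr * dy
        lam += lr * g
    return x, y, lam

print(mdmm())  # same fixed point as the BDMM, with damped oscillations
```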
6. Examples
This section contains two examples which illustrate the use of the BDMM and the MDMM. First,
the BDMM is used to find a good solution to the planar traveling salesman problem. Second, the
MDMM is used to enforce mutual inhibition and digital results in the task of analog decoding.
6.1. Planar Traveling Salesman
The traveling salesman problem (TSP) is, given a set of cities lying in the plane, find the shortest
closed path that goes through every city exactly once. Finding the shortest path is NP-complete.
Finding a nearly optimal path, however, is much easier than finding a globally optimal path. There
exist many heuristic algorithms for approximately solving the traveling salesman problem. 5,10,11,13
The solution presented in this section is moderately effective and illustrates the insensitivity of the
BDMM to changes in parameters.
Following Durbin and Willshaw,5 we use an elastic snake to solve the TSP. A snake is a discretized
curve which lies on the plane. The elements of the snake are points on the plane, (xᵢ, yᵢ). A snake
is a locally connected neural network, whose neural outputs are positions on the plane.
The snake minimizes its length
Σᵢ [(xᵢ₊₁ − xᵢ)² + (yᵢ₊₁ − yᵢ)²],   (29)
subject to the constraint that the snake must lie on the cities:
k(x* − x_c) = 0,   k(y* − y_c) = 0,   (30)
where (x*, y*) are city coordinates, (x_c, y_c) is the closest snake point to the city, and k is the constraint
strength.
The minimization in equation (29) is quadratic and the constraints in equation (30) are piecewise
linear, corresponding to a C⁰ continuous potential energy in equation (21). Thus, the damping is
positive definite, and the system converges to a state where the constraints are fulfilled.
In practice, the snake starts out as a circle. Groups of cities grab onto the snake, deforming
it. As the snake gets close to groups of cities, it grabs onto a specific ordering of cities that locally
minimize its length (see Figure 4).
The system of differential equations that solves equations (29) and (30) is piecewise linear. The
differential equations for xᵢ and yᵢ are solved with the implicit Euler method, using tridiagonal LU
decomposition to solve the linear system. 17 The points of the snake are sorted into bins that divide
the plane, so that the computation of finding the nearest point is simplified.
Figure 4. The snake eventually attaches to the cities
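Each implicit Euler step for the snake reduces to a tridiagonal linear solve per coordinate. The sketch below (ours) implements the standard Thomas algorithm for such systems; the smoothing coefficients and open-ended boundary conditions are schematic assumptions, not the exact choices of the paper (a closed snake would need a circulant variant).

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a, b, c are the sub-, main, super-diagonals."""
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Implicit Euler for dx/dt = k * (x_{i+1} - 2 x_i + x_{i-1}):
# (I - dt*k*L) x_new = x_old, with L the discrete Laplacian (tridiagonal).
n, dt, k = 5, 0.1, 1.0
x_old = np.linspace(0.0, 1.0, n)
sub = np.full(n, -dt * k); sup = np.full(n, -dt * k)
main = np.full(n, 1 + 2 * dt * k)
sub[0] = sup[-1] = 0.0
print(thomas(sub, main, sup, x_old))
```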
The constrained minimization in equations (29) and (30) is a reasonable method for approximately
solving the TSP. For 120 cities distributed in the unit square, and 600 snake points, a numerical step
size of 100 time units, and a constraint strength of 5 × 10⁻³, the tour lengths are 6% ± 2% longer
than that yielded by simulated annealing.11 Empirically, for 30 to 240 cities, the time needed to
compute the final city ordering scales as N^1.6, as compared to the Kernighan-Lin method,13 which
scales roughly as N^2.2.
The constraint strength is usable for both a 30 city problem and a 240 city problem. Although
changing the constraint strength affects the performance, the snake attaches to the cities for any nonzero constraint strength. Parameter adjustment does not seem to be an issue as the number of cities
increases, unlike the penalty method.
6.2. Analog Decoding
Analog decoding uses analog signals from a noisy channel to reconstruct codewords. Analog
decoding has been performed neurally,15 with a code space of permutation matrices, out of the
possible space of binary matrices.
To perform the decoding of permutation matrices, the nearest permutation matrix to the signal
matrix must be found. In other words, find the nearest matrix to the signal matrix, subject to the
constraint that the matrix has on/off binary elements, and has exactly one "on" per row and one "on"
per column. If the signal matrix is Iᵢⱼ and the result is Vᵢⱼ, then minimize
−Σᵢ,ⱼ Iᵢⱼ Vᵢⱼ,   (31)
subject to constraints
Vᵢⱼ(1 − Vᵢⱼ) = 0;   Σᵢ Vᵢⱼ − 1 = 0;   Σⱼ Vᵢⱼ − 1 = 0.   (32)
In this example, the first constraint in equation (32) forces crisp digital decisions. The second
and third constraints are mutual inhibition along the rows and columns of the matrix.
The optimization in equation (31) is not quadratic, it is linear. In addition, the first constraint in
equation (32) is non-linear. Using the BDMM results in undamped oscillations. In order to converge
onto a constrained minimum, the MDMM must be used. For both a 5 × 5 and a 20 × 20 system,
c = 0.2 is adequate for damping the oscillations. The choice of c seems to be reasonably insensitive
to the size of the system, and a wide range of c, from 0.02 to 2.0, damps the oscillations.
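For reference, the constrained minimum of equation (31) subject to (32) can also be computed exactly as a linear assignment problem. The sketch below (ours, not part of the paper) uses scipy's off-the-shelf solver as a check on the network's output.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def nearest_permutation(signal):
    """Maximize sum_ij I_ij V_ij over permutation matrices V."""
    rows, cols = linear_sum_assignment(signal, maximize=True)
    V = np.zeros_like(signal)
    V[rows, cols] = 1.0
    return V

rng = np.random.default_rng(1)
I = np.eye(4) + 0.3 * rng.standard_normal((4, 4))  # permutation plus noise
print(nearest_permutation(I))
```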
Figure 5. The decoder finds the nearest permutation matrix
In a test of the MDMM, a signal matrix which is a permutation matrix plus some noise, with
a signal-to-noise ratio of 4, is supplied to the network. In Figure 5, the system has turned on the
correct neurons but also many incorrect neurons. The constraints start to be applied, and eventually
the system reaches a permutation matrix. The differential equations do not need to be reset. If a new
signal matrix is applied to the network, the neural state will move towards the new solution.
7. Conclusions
In the field of neural networks, there are differential optimization algorithms which find local
solutions to non-convex problems. The basic differential multiplier method is a modification of a
standard constrained optimization algorithm, which improves the capability of neural networks to
perform constrained optimization.
The BDMM and the MDMM offer many advantages over the penalty method. First, the differential equations (10) are much less stiff than those of the penalty method. Very large quadratic terms
are not needed by the MDMM in order to strongly enforce the constraints. The energy terrain for the
penalty method looks like steep canyons, with gentle floors; finding minima of these types of energy
surfaces is numerically difficult. In addition, the steepness of the penalty terms is usually sensitive
to the dimensionality of the space. The differential multiplier methods are promising techniques for
alleviating stiffness.
The differential multiplier methods separate the speed of fulfilling the constraints from the accuracy of fulfilling the constraints. In the penalty method, as the strength of a constraint goes to
∞, the constraint is fulfilled, but the energy has many undesirable local minima. The differential
multiplier methods allow one to choose how quickly to fulfill the constraints.
The BDMM fulfills constraints exactly and is compatible with the penalty method. Addition of
penalty terms in the MDMM does not change the stationary points of the algorithm, and sometimes
helps to damp oscillations and improve convergence.
Since the BDMM and the MDMM are in the form of first-order differential equations, they can
be directly implemented in hardware. Performing constrained optimization at the raw speed of analog
VLSI seems like a promising technique for solving difficult perception problems. 14
There exist Lyapunov functions for the BDMM and the MDMM. The BDMM converges globally for quadratic programming. The MDMM is provably convergent in a local region around the
constrained minima. Other optimization algorithms, such as Newton's method,17 have similar local convergence properties. The global convergence properties of the BDMM and the MDMM are
currently under investigation.
In summary, the differential method of multipliers is a useful way of enforcing constraints on
neural networks for enforcing syntax of solutions, encouraging desirable properties of solutions, and
making crisp decisions.
Acknowledgments
This paper was supported by an AT&T Bell Laboratories fellowship (JCP).
References
1. K. J. Arrow, L. Hurwicz, H. Uzawa, Studies in Linear and Nonlinear Programming. (Stanford
University Press, Stanford, CA, 1958).
2. D. P. Bertsekas, Automatica, 12, 133-145, (1976).
3. C. de Boor, A Practical Guide to Splines. (Springer-Verlag, NY, 1978).
4. M. A. Cohen, S. Grossberg, IEEE Trans. Systems, Man, and Cybernetics, 13, 815-826, (1983).
5. R. Durbin, D. Willshaw, Nature, 326, 689-691, (1987).
6. J. C. Eccles, The Physiology of Nerve Cells, (Johns Hopkins Press, Baltimore, 1957).
7. M. R. Hestenes, J. Opt. Theory Appl., 4, 303-320, (1969).
8. M. R. Hestenes, Optimization Theory, (Wiley & Sons, NY, 1975).
9. J. J. Hopfield, PNAS, 81, 3088, (1984).
10. J. J. Hopfield, D. W. Tank, Biological Cybernetics, 52, 141, (1985).
11. S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Science, 220, 671-680, (1983).
12. J. LaSalle, The Stability of Dynamical Systems, (SIAM, Philadelphia, 1976).
13. S. Lin, B. W. Kernighan, Oper. Res., 21,498-516 (1973).
14. C. A. Mead, Analog VLSI and Neural Systems, (Addison-Wesley, Reading, MA, TBA).
15. J. C. Platt, J. J. Hopfield, in AIP Conf. Proc. 151: Neural Networks for Computing (J. Denker, ed.), 364-369, (American Institute of Physics, NY, 1986).
16. M. J. D. Powell, in Optimization, (R. Fletcher, ed.), 283-298, (Academic Press, NY, 1969).
17. W. H. Press, B. P. Flannery, S. A. Teukolsky, W. T. Vetterling, Numerical Recipes, (Cambridge University Press, Cambridge, 1986).
18. D. Rumelhart, G. Hinton, R. Williams, in Parallel Distributed Processing, (D. Rumelhart,
ed), 1, 318-362, (MIT Press, Cambridge, MA, 1986).
19. D. W. Tank, J. J. Hopfield, IEEE Trans. Cir. & Sys., CAS-33, no. 5,533-541 (1986).
NEURAL NETWORKS FOR TEMPLATE MATCHING:
APPLICATION TO REAL-TIME CLASSIFICATION
OF THE ACTION POTENTIALS OF REAL NEURONS
Yiu-fai Wong†, Jashojiban Banik† and James M. Bower‡
†Division of Engineering and Applied Science
‡Division of Biology
California Institute of Technology
Pasadena, CA 91125
ABSTRACT
Much experimental study of real neural networks relies on the proper classification of
extracellulary sampled neural signals (i .e. action potentials) recorded from the brains of experimental animals. In most neurophysiology laboratories this classification task is simplified
by limiting investigations to single, electrically well-isolated neurons recorded one at a time.
However, for those interested in sampling the activities of many single neurons simultaneously,
waveform classification becomes a serious concern. In this paper we describe and constrast
three approaches to this problem each designed not only to recognize isolated neural events,
but also to separately classify temporally overlapping events in real time. First we present two
formulations of waveform classification using a neural network template matching approach.
These two formulations are then compared to a simple template matching implementation.
Analysis with real neural signals reveals that simple template matching is a better solution to
this problem than either neural network approach.
INTRODUCTION
For many years, neurobiologists have been studying the nervous system by
using single electrodes to serially sample the electrical activity of single neurons in the brain. However, as physiologists and theorists have become more
aware of the complex, nonlinear dynamics of these networks, it has become
apparent that serial sampling strategies may not provide all the information
necessary to understand functional organization. In addition, it will likely be
necessary to develop new techniques which sample the activities of multiple
neurons simultaneously.1 Over the last several years, we have developed two
different methods to acquire multineuron data. Our initial design involved
the placement of many tiny micro electrodes individually in a tightly packed
pseudo-floating configuration within the brain.2 More recently we have been
developing a more sophisticated approach which utilizes recent advances in
silicon technology to fabricate multi-ported silicon based electrodes (Fig. 1) .
Using these electrodes we expect to be able to readily record the activity patterns of larger number of neurons.
As research in multi-single neuron recording techniques continues, it has become very clear that whatever technique is used to acquire neural signals from
many brain locations, the technical difficulties associated with sampling, data
compressing, storing, analyzing and interpreting these signals largely dwarf the
development of the sampling device itself. In this report we specifically consider
the need to assure that neural action potentials (also known as "spikes") on
each of many parallel recording channels are correctly classified, which is just
one aspect of the problem of post-processing multi-single neuron data. With
more traditional single electrode/single neuron recordings, this task usually involves passing analog signals through a Schmidt trigger whose output indicates
the occurence of an event to a computer, at the same time as it triggers an
oscilloscope sweep of the analog data. The experimenter visually monitors the
oscilloscope to verify the accuracy of the discrimination as a well-discriminated
signal from a single neuron will overlap on successive oscilloscope traces (Fig.
1c). Obviously this approach is impractical when large numbers of channels
are recorded at the same time. Instead, it is necessary to automate this classification procedure. In this paper we will describe and contrast three approaches
we have developed to do this .
Fig. 1. Silicon probe being developed in our laboratory for multi-single unit recording
in cerebellar cortex. a) a complete probe; b) surface view of one recording tip; c) several
superimposed neuronal action potentials recorded from such a silicon electrode in cerebellar
cortex.
While our principal design objective is the assurance that neural waveforms
are adequately discriminated on multiple channels, technically the overall objective of this research project is to sample from as many single neurons as
possible. Therefore, it is a natural extension of our effort to develop a neural
waveform classification scheme robust enough to allow us to distinguish activities arising from more than one neuron per recording site. To do this, however,
we now not only have to determine that a particular signal is neural in origin,
but also from which of several possible neurons it arose (see Fig. 2a). While
in general signals from different neurons have different waveforms aiding in
the classification, neurons recorded on the same channel firing simultaneously
or nearly simultaneously will produce novel combination waveforms (Fig. 2b)
which also need to be classified. It is this last complication which particularly
bedevils previous efforts to classify neural signals (For review see 5, also see
3-4). In summary, then, our objective was to design a circuit that would:
1. distinguish different waveforms even though neuronal discharges tend
to be quite similar in shape (Fig. 2a);
2. recognize the same waveform even though unavoidable movements
such as animal respiration often result in periodic changes in the amplitude
of a recorded signal by moving the brain relative to the tip of the electrode;
3. be considerably robust to recording noise which variably corrupts all
neural recordings (Fig. 2);
4. resolve overlapping waveforms, which are likely to be particularly interesting events from a neurobiological point of view;
5. provide real-time performance allowing the experimenter to detect
problems with discrimination and monitor the progress of the experiment;
6. be implementable in hardware due to the need to classify neural signals on many channels simultaneously. Simply duplicating a software-based
algorithm for each channel will not work, but rather, multiple, small, independent, and programmable hardware devices need to be constructed.
Fig. 2. a) Schematic diagram of an electrode recording from two neuronal cell bodies b) An
actual multi-neuron recording. Note the similarities in the two waveforms and the overlapping
event. c) and d) Synthesized data with different noise levels for testing classification algorithms
(c: 0.3 NSR; d: 1.1 NSR).
METHODS
The problem of detecting and classifying multiple neural signals on single voltage records involves two steps. First, the waveforms that are present
in a particular signal must be identified and the templates be generated;
second, these waveforms must be detected and classified in ongoing data
records. To accomplish the first step we have modified the principal component analysis procedure described by Abeles and Goldstein 3 to automatically extract templates of the distinct waveforms found in an initial sample of the digitized analog data. This will not be discussed further as it is
the means of accomplishing the second step which concerns us here. Specifically, in this paper we compare three new approaches to ongoing waveform classification which deal explicitly with overlapping spikes and variably meet other design criteria outlined above. These approaches consist of
a modified template matching scheme, and two applied neural network implementations. We will first consider the neural network approaches. On
a point of nomenclature, to avoid confusion in what follows, the real neurons whose signals we want to classify will be referred to as "neurons" while
computing elements in the applied neural networks will be called "Hopons."
Neural Network Approach - Overall, the problem of classifying neural
waveforms can best be seen as an optimization problem in the presence of
noise. Much recent work on neural-type network algorithms has demonstrated
that these networks work quite well on problems of this sort.6-8 In particular,
in a recent paper Hopfield and Tank describe an A/D converter network and
suggest how to map the problem of template matching into a similar context.8
The energy functional for the network they propose has the form:
E = −½ Σᵢ Σⱼ Tᵢⱼ Vᵢ Vⱼ − Σᵢ Vᵢ Iᵢ,   (1)
where Tᵢⱼ = connectivity between Hopon i and Hopon j, Vᵢ = voltage output
of Hopon i, Iᵢ = input current to Hopon i, and each Hopon has a sigmoid
input-output characteristic V = g(u) = 1/(1 + exp(−au)).
If the equation of motion is set to be:
duᵢ/dt = −∂E/∂Vᵢ = Σⱼ Tᵢⱼ Vⱼ + Iᵢ,   (1a)
then we see that dE/dt = −Σᵢ (Σⱼ Tᵢⱼ Vⱼ + Iᵢ) dVᵢ/dt = −Σᵢ (duᵢ/dt)(dVᵢ/dt) =
−Σᵢ g′(uᵢ)(duᵢ/dt)² ≤ 0. Hence E will go to a minimum which, in a network
constructed as described below, will correspond to a proposed solution to a
particular waveform classification problem.
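A minimal sketch (ours) of equations (1) and (1a): forward Euler integration of the Hopfield dynamics for a two-Hopon mutual-inhibition network. The gain a, the step size, and the weights are arbitrary illustrative choices.

```python
import numpy as np

def run_hopfield(T, I, a=5.0, dt=0.01, steps=5000):
    """Integrate du/dt = T @ g(u) + I with sigmoid output V = g(u)."""
    g = lambda u: 1.0 / (1.0 + np.exp(-a * u))
    u = np.zeros(len(I))
    for _ in range(steps):
        u += dt * (T @ g(u) + I)
    return g(u)

# Two mutually inhibitory Hopons: the one with the larger input wins.
T = np.array([[0.0, -2.0], [-2.0, 0.0]])
I = np.array([1.2, 0.8])
print(run_hopfield(T, I))  # approaches V close to (1, 0)
```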
Template Matching using a Hopfield-type Neural Net - We have
taken the following approach to template matching using a neural network. For
simplicity, we initially restricted the classification problem to one involving two
waveforms and have accordingly constructed a neural network made up of two
groups of Hopons, each concerned with discriminating one or the other waveform. The classification procedure works as follows: first, a Schmidt trigger
is used to detect the presence of a voltage on the signal channel above a set
threshold. When this threshold is crossed, implying the presence of a possible
neural signal, 2 msecs of data around the crossing are stored in a buffer (40
samples at 20 KHz). Note that biophysical limitations assure that a single real
neuron cannot discharge more than once in this time period, so only one waveform of a particular type can occur in this data sample. Also, action potentials
are of the order of 1 msec in duration, so the 2 msec window will include the full
signal for single or overlapped waveforms. In the next step (explained later)
the data values are correlated and passed into a Hopfield network designed to
minimize the mean-square error between the actual data and the linear combination of different delays of the templates. Each Hopon in the set of Hopons
concerned with one waveform represents a particular temporal delay in the
occurrence of that waveform in the buffer. To express the network in terms of
an energy function formulation: Let x(t) = input waveform amplitude in the
tth time bin, sⱼ(t) = amplitude of the jth template, and Vⱼₖ denote whether sⱼ(t−k) (the jth
template delayed by k time bins) is present in the input waveform. Then the
appropriate energy function is:
E = ½ Σₜ [x(t) − Σⱼ Σₖ Vⱼₖ sⱼ(t−k)]² + ½ Σⱼ Σₖ [Σₜ sⱼ²(t−k)] Vⱼₖ(1 − Vⱼₖ) + μ Σⱼ Σ_{k≠k′} Vⱼₖ Vⱼₖ′,   (2)
where μ > 0 sets the strength of the mutual inhibition.
The first term is designed to minimize the mean-square error and specifies
the best match. Since V ∈ [0, 1], the second term is minimized only when each
Vⱼₖ assumes values 0 or 1. It also sets the diagonal elements Tᵢⱼ to 0. The
third term creates mutual inhibition among the processing nodes evaluating
the same neuronal signal, which as described above can only occur once per
sample.
Expanding and simplifying expression (2), the connection matrix is
T_{jk,j′k′} = −Σₜ sⱼ(t−k) sⱼ′(t−k′) − 2μ δ_{jj′}(1 − δ_{kk′}),  with T_{jk,jk} = 0,   (3a)
and the input current
I_{jk} = Σₜ x(t) sⱼ(t−k) − ½ Σₜ sⱼ²(t−k).   (3b)
As it can be seen, the inputs are the correlations between the actual data and
the various delays of the templates subtracting a constant term.
Modified Hopfield Network - As documented in more detail in Fig.
3-4, the above full Hopfield-type network works well for temporally isolated
spikes at moderate noise levels, but for overlapping spikes it has a local minima
problem. This is more severe with more than two waveforms in the network.
Further, we need to build our network in hardware and the full Hopfield network is difficult to implement with current technology (see below). For these
reasons, we developed a modified neural network approach which significantly
reduces the necessary hardware complexity and also has improved performance.
To understand how this works, let us look at the information contained in the
quantities Tij and Iij (eq. 3a and 3b ) and make some use of them. These
quantities have to be calculated at a pre-processing stage before being loaded
into the Hopfield network. If after calculating these quantities, we can quickly
rule out a large number of possible template combinations, then we can significantly reduce the size of the problem and thus use a much smaller (and
hence more efficient) neural network to find the optimal solution. To make the
derivation simple, we define slightly modified versions of Tᵢⱼ and Iᵢⱼ (eq. 4a
and 4b) for the two-template case:
Tᵢⱼ = Σₜ s₁(t−i) s₂(t−j)   (4a)
Iᵢⱼ = Σₜ x(t) [½ s₁(t−i) + ½ s₂(t−j)] − ¼ Σₜ s₁²(t−i) − ¼ Σₜ s₂²(t−j)   (4b)
In the case of overlapping spikes the Tᵢⱼ's are the cross-correlations between s₁(t)
and s₂(t) with different delays, and the Iᵢⱼ's are the cross-correlations between the input
x(t) and a weighted combination of s₁(t) and s₂(t). Now if x(t) = s₁(t−i) +
s₂(t−j) (i.e. the overlap of the first template with i time bin delay and the
second template with j time bin delay), then Δᵢⱼ = |Tᵢⱼ − Iᵢⱼ| = 0. However,
in the presence of noise, Δᵢⱼ will not be identically zero, but will equal the
noise, and if Δᵢⱼ > ΔTᵢⱼ (where ΔTᵢⱼ = |Tᵢⱼ − Tᵢ′ⱼ′| for i ≠ i′ and j ≠ j′) this
simple algorithm may make unacceptable errors. A solution to this problem
for overlapping spikes will be described below, but now let us consider the
problem of classifying non-overlapping spikes. In this case, we can compare
the input cross-correlation with the auto-correlations (eq. 4c and 4d).
T′ᵢ = Σₜ s₁²(t−i);   T″ᵢ = Σₜ s₂²(t−i)   (4c)
I′ᵢ = Σₜ x(t) s₁(t−i);   I″ᵢ = Σₜ x(t) s₂(t−i)   (4d)
So for non-overlapping cases, if x(t) = s₁(t−i), then Δ′ᵢ = |T′ᵢ − I′ᵢ| = 0. If
x(t) = s₂(t−i), then Δ″ᵢ = |T″ᵢ − I″ᵢ| = 0.
In the absence of noise, the minimum of Δᵢⱼ, Δ′ᵢ, and Δ″ᵢ represents the
correct classification. However, in the presence of noise, none of these quantities
will be identically zero, but will equal the noise in the input x(t) which will
give rise to unacceptable errors. Our solution to this noise-related problem is
to choose a few minima (three are chosen in our case) instead of one. For
each minimum there is either a known corresponding linear combination of
templates for overlapping cases or a simple template for non-overlapping cases.
A three neuron Hopfield-type network is then programmed so that each neuron
corresponds to each of the cases. The input x(t) is fed to this tiny network to
resolve whatever confusion remains after the first step of "cross-correlation"
comparisons. (Note: Simple template matching as described below can also be
used in the place of the tiny Hopfield type network.)
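The first, cross-correlation stage of the modified method can be sketched as follows (our code; the constants follow our reconstruction of equations (4a)-(4d) and may differ from the original, and the templates and delay range are placeholders). It returns the three smallest-Δ candidates, which would then be resolved by the small Hopfield network or by the direct matching of the next section.

```python
import numpy as np

def delayed(s, k, n):
    """Template s delayed by k time bins, zero-padded to buffer length n."""
    out = np.zeros(n)
    out[k:k + len(s)] = s[:max(0, n - k)]
    return out

def finalists(x, s1, s2, max_delay, keep=3):
    """Stage 1: rank candidate interpretations by the Delta quantities."""
    n = len(x)
    cands = []
    for i in range(max_delay):
        a = delayed(s1, i, n)
        b = delayed(s2, i, n)
        cands.append((abs(a @ a - x @ a), ("s1", i)))    # Delta'_i, eqs. 4c-4d
        cands.append((abs(b @ b - x @ b), ("s2", i)))    # Delta''_i
        for j in range(max_delay):
            b = delayed(s2, j, n)
            T = a @ b                                     # eq. 4a (reconstructed)
            I = x @ (0.5 * a + 0.5 * b) - 0.25 * (a @ a) - 0.25 * (b @ b)
            cands.append((abs(T - I), ("s1+s2", i, j)))   # Delta_ij
    return sorted(cands, key=lambda c: c[0])[:keep]
```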
Simple Template Matching - To evaluate the performances of these
neural network approaches, we decided to implement a simple template matching scheme, which we will now describe. However, as documented below, this
approach turned out to be the most accurate and require the least complex
hardware of any of the three approaches. The first step is, again, to fill a buffer
with data based on the detection of a possible neural signal. Then we calculate
the difference between the recorded waveform and all possible combinations of
the two previously identified templates. Formally, this consists of calculating
the distances between the input x(t) and all possible cases generated by all
the combinations of the two templates.
dᵢⱼ = Σₜ |x(t) − {s₁(t−i) + s₂(t−j)}|
d′ᵢ = Σₜ |x(t) − s₁(t−i)|;   d″ᵢ = Σₜ |x(t) − s₂(t−i)|
d_min = min(dᵢⱼ, d′ᵢ, d″ᵢ)
d_min gives the best fit of all possible combinations of templates to the actual
voltage signal.
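A direct sketch (ours) of the simple template-matching rule above: exhaustively score every single-template delay and every pairwise combination, and return the minimum-distance interpretation; the buffer length and delay range are arbitrary assumptions.

```python
import numpy as np

def simple_match(x, s1, s2, max_delay):
    """Return (distance, label) minimizing d_ij, d'_i, and d''_i."""
    n = len(x)
    def shifted(s, k):
        out = np.zeros(n)
        out[k:k + len(s)] = s[:max(0, n - k)]
        return out
    best = (np.inf, None)
    for i in range(max_delay):
        a = shifted(s1, i)
        b = shifted(s2, i)
        best = min(best, (np.abs(x - a).sum(), ("s1", i)))   # d'_i
        best = min(best, (np.abs(x - b).sum(), ("s2", i)))   # d''_i
        for j in range(max_delay):
            d = np.abs(x - a - shifted(s2, j)).sum()         # d_ij
            best = min(best, (d, ("s1+s2", i, j)))
    return best
```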
TESTING PROCEDURES
To compare the performance of each of the three approaches, we devised a
common set of test data using the following procedures. First, we used the principal component method of Abeles and Goldstein 3 to generate two templates
from a digitized analog record of neural activity recorded in the cerebellum
of the rat. The two actual spike waveform templates we decided to use had
a peak-to-peak ratio of 1.375. From a second set of analog recordings made
from a site in the cerebellum in which no action potential events were evident,
we determined the spectral characteristics of the recording noise. These two
components derived from real neural recordings were then digitally combined,
the objective being to construct realistic records, while also knowing absolutely
what the correct solution to the template matching problem was for each occurring spike. As shown in Fig. 2c and 2d, data sets corresponding to different
noise to signal ratios were constructed. We also carried out simulations with
the amplitudes of the templates themselves varied in the synthesized records to
simulate waveform changes due to brain movements often seen in real recordings. In addition to two waveform test sets, we also constructed three waveform
sets by generating a third template that was the average of the first two templates. To further quantify the comparisons of the three different approaches
described above we considered non-overlapping and overlapping spikes separately. To quantify the performance of the three different approaches, two
standards for classification were devised. In the first and hardest case, to be
judged a correct classification, the precise order and timing of two waveforms
had to be reconstructed. In the second and looser scheme, classification was
judged correct if the order of two waveforms was correct but timing was allowed to vary by ±100 μsec (i.e. ±2 time bins), which for most neurobiological
applications is probably sufficient resolution. Figs. 3-4 compare the performance results for the three approaches to waveform classification implemented
as digital simulations.
PERFORMANCE COMPARISON
Two templates - non-overlapping waveforms: As shown in Fig. 3a, at
low noise-to-signal ratios (NSRs below .2) each of the three approaches were
comparable in performance reaching close to 100% accuracy for each criterion.
As the ratio was increased, however, the neural network implementations did
less and less well with respect to the simple template matching algorithm with
the full Hopfield type network doing considerably worse than the modified
network. In the range of NSR most often found in real data (.2 - .4) simple
template matching performed considerably better than either of the neural
network approaches. Also it is to be noted that simple template matching
gives an estimate of the goodness of fit between the waveform and the closest
template which could be used to identify events that should not be classified
(e.g. signals due to noise).
[Figure 3 appears here: three panels. Panels (a) and (b) plot classification performance against noise level (3σ/peak amplitude); panel (c) plots performance against degrees of overlap. Legend: light line - absolute criteria; heavy line - less stringent criteria; curves for simple template matching, the Hopfield network, and the modified Hopfield network.]
Fig. 3. Comparisons of the three approaches detecting two non-overlapping (a) and overlapping (b) waveforms; (c) compares the performances of the neural network approaches for different degrees of waveform overlap.
Two templates - overlapping waveforms: Fig. 3b and 3c compare performances when waveforms overlapped. In Fig. 3b the serious local minima problem encountered in the full neural network is demonstrated, as is the improved
performance of the modified network. Again, overall performance in physiological ranges of noise is clearly best for simple template matching. When
the noise level is low, the modified approach is the better of the two neural
networks due to the reliability of the correlation number which reflects the
resemblance between the input data and the template. When the noise level
is high, errors in the correlation numbers may exclude the right combination
from the smaller network. In this case its performance is actually a little worse
than the larger Hopfield network. Fig. 3c documents in detail which degrees
of overlap produce the most trouble for the neural network approaches at average NSR levels found in real neural data. It can be seen that for the neural
networks, the most serious problem is encountered when the delays between
the two waveforms are small enough that the resulting waveform looks like the
larger waveform with some perturbation.
Three templates - overlapping and non-overlapping: In Fig. 4 are shown
the comparisons between the full Hopfield network approach and the simple
template matching approach. For nonoverlapping waveforms, the performance
of these two approaches is much more comparable than for the two waveform
case (Fig. 4a), although simple template matching is still the optimal method.
In the overlapping waveform condition, however, the neural network approach
fails badly (Fig. 4b and 4c). For this particular application and implementation, the neural network approach does not scale well.
[Figure 4 appears here: three panels (a-c), each plotting performance against noise level (3σ/peak amplitude). Legend: light line - absolute criteria; heavy line - less stringent criteria; curves for the Hopfield network and simple template matching; σ = variance of the noise.]
Fig. 4. Comparisons of performance for three waveforms. a) nonoverlapping waveforms; b)
two waveforms overlapping; c) three waveforms overlapping.
HARDWARE COMPARISONS
As described earlier, an important design requirement for this work was the
ability to detect neural signals in analog records in real-time originating from
many simultaneously active sampling electrodes. Because it is not feasible to
run the algorithms in a computer in real time for all the channels simultaneously, it is necessary to design and build dedicated hardware for each channel.
To do this, we have decided to design VLSI implementations of our circuitry.
In this regard, it is well recognized that large modifiable neural networks need
very elaborate hardware implementations. Let us consider, for example, implementing hardware for a two-template case for comparison. Let n = no.
of neurons per template (one neuron for each delay of the template), m =
no. of iterations to reach the stable state (in simulating the discretized differential equation, with step size = 0.05), and l = no. of samples in a template.
Then, the number of connections in the full Hopfield network will be
4n². The total no. of synaptic calculations = 4mn². So, for two templates
and n = 16, m = 100, 4mn² = 102,400. Thus building the full Hopfield-type
network digitally requires a system too large to be put in a single VLSI chip
which will work in real time. If we want to build an analog system, we need
to have many (O(4n²)) easily modifiable synapses. As yet this technology is
not available for nets of this size. The modified Hopfield-type network on the
other hand is less technically demanding . To do the preprocessing to obtain
the minimum values we have to do about n² = 256 additions to find all possible
I_ij's and require 256 subtractions and comparisons to find three minima. The
costs associated with doing input cross-correlations are the same as for the full
neural network (i.e. 2nl = 768 (l = 24) multiplications). The saving with the
modified approach is that the network used is small and fast (120 multiplications and 120 additions to construct the modifiable synapses, no. of synaptic
calculations = 90 with m = 10, n = 3).
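The operation counts quoted in this section can be reproduced with a small calculation (a sketch using the values of n, m and l stated above):

```python
n, m, l = 16, 100, 24   # neurons per template, iterations, samples per template

full_connections = 4 * n**2         # 1,024 connections in the full network
full_synaptic_ops = 4 * m * n**2    # 102,400 synaptic calculations
cross_corr_mults = 2 * n * l        # 768 multiplications (both networks)

# modified network: a small 3-unit net run for about 10 iterations
mod_preprocess_adds = n**2          # 256 additions for the I_ij values
mod_synaptic_ops = 3**2 * 10        # 90 synaptic calculations (n=3, m=10)

print(full_synaptic_ops, cross_corr_mults, mod_synaptic_ops)
```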
In contrast to the neural networks, simple template matching is simple
indeed. For example, it must perform about n²l + n = 10,496 additions and
n² = 256 comparisons to find the minimum d_ij. Additions are considerably less
costly in time and hardware than multiplications. In fact, because this method
needs only addition operations, our preliminary design work suggests it can be
built on a single chip and will be able to do the two-template classification
in as little as 20 microseconds. This actually raises the possibility that with
switching and buffering one chip might be able to service more than one channel
in essentially real time.
CONCLUSIONS
Template matching using a full Hopfield-type neural network is found to
be robust to noise and changes in signal waveform for the two neural waveform
classification problem. However, for a three-waveform case, the network does
not perform well. Further, the network requires many modifiable connections
and therefore results in an elaborate hardware implementation. The overall
performance of the modified neural network approach is better than the full
Hopfield network approach. The computation has been reduced largely and
the hardware requirements are considerably less demanding, demonstrating the
value of designing a specific network to a specified problem. However, even the
modified neural network performs less well than a simple template-matching
algorithm which also has the simplest hardware implementation. Using the
simple template matching algorithm, our simulations suggest it will be possible to build a two or three waveform classifier on a single VLSI chip using
CMOS technology that works in real time with excellent error characteristics.
Further, such a chip will be able to accurately classify variably overlapping
neural signals.
REFERENCES
[1] G. L. Gerstein, M. J. Bloom, I. E. Espinosa, S. Evanczuk & M. R. Turner, IEEE Trans. Sys. Man Cybern., SMC-13, 668 (1983).
[2] J. M. Bower & R. Llinas, Soc. Neurosci. Abst., 607 (1983).
[3] M. Abeles & M. H. Goldstein, Proc. IEEE, 65, 762 (1977).
[4] W. M. Roberts & D. K. Hartline, Brain Res., 94, 141 (1976).
[5] E. M. Schmidt, J. of Neurosci. Methods, 12, 95 (1984).
[6] J. J. Hopfield, Proc. Natl. Acad. Sci. (USA), 81, 3088 (1984).
[7] J. J. Hopfield & D. W. Tank, Biol. Cybern., 52, 141 (1985).
[8] D. W. Tank & J. J. Hopfield, IEEE Trans. Circuits Syst., CAS-33, 533 (1986).
ACKNOWLEDGEMENTS
We would like to acknowledge the contribution of Dr. Mark Nelson to the intellectual
development of these projects and the able assistance of Herb Adams, Mike Walshe and John
Powers in designing and constructing support equipment. This work was supported by NIH
grant NS22205, the Whitaker Foundation and the Joseph Drown Foundation.
3,313 | 400 | Note on Learning Rate Schedules for Stochastic
Optimization
Christian Darken and John Moody
Yale University
P.O. Box 2158 Yale Station
New Haven, CT 06520
Email: [email protected]
Abstract
We present and compare learning rate schedules for stochastic gradient
descent, a general algorithm which includes LMS, on-line backpropagation and k-means clustering as special cases. We introduce "search-thenconverge" type schedules which outperform the classical constant and
"running average" (1ft) schedules both in speed of convergence and quality
of solution.
1
Introduction: Stochastic Gradient Descent
The optimization task is to find a parameter vector W which minimizes a func?x E(W, X), i.e.
tion G(W). In the context of learning systems typically G(W)
G is the average of an objective function over the exemplars, labeled E and X
respectively. The stochastic gradient descent algorithm is
=
Ll Wet) = -1](t)V'w E(W(t), X(t)).
where t is the "time", and X(t) is the most recent independently-chosen random
exemplar. For comparison, the deterministic gradient descent algorithm is
Ll Wet) = -1](t)V'w?x E(W(t), X).
832
Note on Learning Rate Schedules for Stochastic Optimization
Ia' ~---_=--=------------------------
HI
Figure 1: Comparison of the shapes of the schedules. Dashed line = constant, Solid line
= search-then-converge, Dotted line = "running-average"
While on average the stochastic step is equal to the deterministic step, for any
particular exemplar X(t) the stochastic step may be in any direction, even uphill
in ?x E(W(t), X). Despite its noisiness, the stochastic algorithm may be preferable
when the exemplar set is large, making the average over exemplars expensive to
compute.
The issue addressed by this paper is: which function should one choose for 7](t)
(the learning rate schedule) in order to obtain fast convergence to a good local
minimum? The schedules compared in this paper are the following (Fig. 1):
? Constant: 7](t)
= 7]0
? "Running Average": 7](t) = 7]0/(1
+ t)
? Search-Then-Converge: 7](t) = 7]0/(1
+ tlr)
"Search-then-converge" is the name of a novel class of schedules which we introducein this paper. The specific equation above is merely one member of this class and
was chosen for comparison because it is the simplest member of that class. We find
that the new schedules typically outperform the classical constant and running average schedules. Furthermore the new schedules are capable of attaining the optimal
asymptotic convergence rate for any objective function and exemplar distribution.
The classical schedules cannot.
Adaptive schedules are beyond the scope of this short paper (see however Darken
and Moody, 1991). Nonetheless, all of the adaptive schedules in the literature of
which we are aware are either second order, and thus too expensive to compute for
large numbers of parameters, or make no claim to asymptotic optimality.
833
834
Darken and Moody
2
Example Task: K-Means Clustering
As our sample gradient-descent task we choose a k-means clustering problem . Clustering is a good sample problem to study, both for its inherent usefulness and its
illustrative qualities. Under the name of vector-quantization, clustering is an important technique for signal compression in communications engineering. In the
machine learning field, clustering has been used as a front-end for function learning
and speech recognition systems. Clustering also has many features to recommend it
as an illustrative stochastic optimization problem. The adaptive law is very simple,
and there are often many local minima even for small problems. Most significantly
however, if the means live in a low dimensional space, visualization of the parameter
vector is simple: it has the interpretation of being a set of low-dimensional points
which can be easily plotted and understood.
The k-means task is to locate k points (called "means") to minimize the expected distance between a new random exemplar and the nearest mean to that
exemplar. Thus, the function being minimized in k-means is ?xllX - A1nr8t112,
where M nr8t is the nearest mean to exemplar X.
An equivalent form is
2
J dX P(X) E:=l Ia(X)IIX - Ma11 , where P(X) is the density of the exemplar
distribution and Ia(X) is the indicator function of the Veronois region corresponding to the ath mean. The stochastic gradient descent algorithm for this function
IS
~Mnr8t(t)
= -7](tnr8t)[Mnr6t(t) -
X(t)),
i.e. the nearest mean to the latest exemplar moves directly towards the exemplar
a fractional distance 7](t nr6t ). In a slight generalization from the stochastic gradient descent algorithm above, t nr6t is the total number of exemplars (including the
current one) which have been assigned to mean Mnr6t .
As a specific example problem to compare various schedules across, we take k = 9
(9 means) and X uniformly distributed over the unit square. Although this would
appear to be a simple problem, it has several observed local minima. The global
minimum is where the means are located at the centers of a uniform 3x3 grid over
the square. Simulation results are presented in figures 2 and 3.
3
Constant Schedule
A constant learning rate has been the traditional choice for LMS and backpropagation. However, a constant rate generally does not allow the parameter vector
(the "means" in the case of clustering) to converge. Instead, the parameters hover
around a minimum at an average distance proportional to 7] and to a variance which
depends on the objective function and the exemplar set. Since the statistics of the
exemplars are generally assumed to be unknown, this residual misadjustment cannot
be predicted . The resulting degradation of other measures of system performance,
mean squared classification error for instance, is still more difficult to predict. Thus
the study of how to make the parameters converge is of significant practical interest.
Current practice for backpropagation, when large misadjustment is suspected, is to
restart learning with a smaller 7]. Shrinking 7] does result in less residual misadjustment, but at the same time the speed of convergence drops. In our example
Note on Learning Rate Schedules for Stochastic Optimization
clustering problem, a new phenomenon appears as 71 drops-metastable local minima. Here the parameter vector hovers around a relatively poor solution for a very
long time before slowly transiting to a better one.
4
Running Average Schedule
=
The running average schedule (71(t) 710/(1 + t)) is the staple of the stochastic approximation literature (Robbins and Monro, 1951) and of k-means clustering (with
710
1) (Macqueen, 1967). This schedule is optimal for k = 1 (1 mean), but performs very poorly for moderate to large k (like our example problem with 9 means).
From the example run (Fig. 2A), it is clear that 71 must decrease more slowly in
order for a good solution to be reached. Still, an advantage of this schedule is that
the parameter vector has been proven to converge to a local minimum (Macqueen,
1967). We would like a class of schedules which is guaranteed to converge, and yet
converges as quickly as possible.
=
5
Stochastic Approximation Theory
In the stochastic approximation literature, which has grown steadily since it began
in 1951 with the Robbins and Monro paper, we find conditions on the learning rate
to ensure convergence with optimal speed 1.
From (Ljung, 1977), we find that 71(t) --+ Ar p asymptotically for any 1 > P > 0,
is sufficient to guarantee convergence. Power law schedules may work quite well in
practice (Darken and Moody, 1990), however from (Goldstein, 1987) we find that in
order to converge at an optimal rate, we must have 71(t) --+ cit asymptotically, for c
~reater than some threshold which depends on the objective function and exemplars
. When the optimal convergence rate is achieved, IIW - W?W goes like lit.
The running average schedule goes as 710lt asymptotically. Unfortunately, the convergence rate of the running average schedule often cannot be improved by enlarging
710, because the resulting instability for small t can outweigh the improvements in
asymptotic convergence rate.
6
Search-Then-Converge Schedules
We now introduce a new class of schedules which are guaranteed to converge and
furthermore, can achieve the optimal lit convergence rate without stability problems. These schedules are characterized by the following features. The learning
rate stays high for a "search time" T in which it is hoped that the parameters will
find and hover about a good minimum. Then, for times greater than T, the learning
rate decreases as cit, and the parameters converge.
IThe cited theory generally does not directly apply to the full nonlinear setting of
interest in much practical work. For more details on the relation of the theory to practical
applications and a complete quantitative theory of asymptotic misadjustment, see (Darken
and Moody, 1991).
2This choice of asymptotic 11 satisfies the necessary conditions given in (White, 1989).
835
836
Darken and Moody
We choose the simplest of this class of schedules for study, the "short-term linear"
schedule (7](t) = 7]0/(1 +tIT)), so called because the learning rate decreases linearly
during the search phase. This schedule has c
T7]o and reduces to the running
1.
average schedule for T
=
7
=
Conclusions
We have introduced the new class of "search-then-converge" learning rate schedules.
Stochastic approximation theory indicates that for large enough T, these schedules
can achieve optimally fast asymptotic convergence for any exemplar distribution
and objective function. Neither constant nor "running average" (lIt) schedules
can achieve this. Empirical measurements on k-means clustering tasks are consistent with this expectation. Furthermore asymptotic conditions obtain surprisingly
quickly. Additionally, the search-then-converge schedule improves the observed likelihood of escaping bad local minima.
As implied above, k-means clustering is merely one example of a stochastic gradient
descent algorithm. LMS and on-line backpropagation are others of great interest
to the learning systems community. Due to space limitations, experiments in these
settings will be published elsewhere (Darken and Moody, 1991). Preliminaryexperiments seem to confirm the generality of the above conclusions.
Extensions to this work in progress includes application to algorithms more sophisticated than simple gradient descent, and adaptive search-then-converge algorithms
which automatically determine the search time.
Acknowledgements
The authors wish to thank Hal White for useful conversations and Jon Kauffman for
developing the animator which was used to produce figure 2. This work was supported by
ONR Grant N00014-89-J-1228 and AFOSR Grant 89-0478.
References
C. Darken and J. Moody. (1990) Fast Adaptive K-Means Clustering: Some Empirical
Results. In International Joint Conference on Neural Networks 1990, 2:233-238. IEEE
Neural Networks Council.
C. Darken and J. Moody. (1991) Learning Rate Schedules for Stochastic Optimization. In
preparation.
L. Goldstein. (1987) Mean square optimality in the continuous time Robbins Monro procedure. Technical Report DRB-306. Department of Mathematics, University of Southern
California.
L. Ljung. (1977) Analysis of Recursive Stochastic Algorithms. IEEE Trans. on Automatic
Control. AC-22( 4):551-575.
J. MacQueen. (1967) Some methods for classification and analysis of multivariate observations. In Proc. 5th Berkeley Symp. Math. Stat. Prob. 3:281.
H. Robbins and S. Monro. (1951) A Stochastic Approximation Method. Ann. Math. Stat.
22:400-407.
Note on Learning Rate Schedules for Stochastic Optimization
H. White. (1989) Learning in Artificial Neural Networks: A Statistical Perspective. Neural
Computation. 1:425-464.
A
B
1
..
--
'.
'-?~I~
..
c
D
.AIL., ?..
~
...
-.
.9.
.,
'f;
?...
~
.;
.,.
? t
:.a:
Figure 2: Example runs with classical schedules on 9-means clustering task. Exemplars
are uniformly distributed over the square. Dots indicate previous locations of the means.
The triangles (barely visible) are the final locations of the means. (A) "Running average"
loOk exemplars. Means are far from any minimum and proschedule ('11 = 1/(1 +
gressing very slowly. (B) Large constant schedule ('11=0.1), lOOk exemplars. Means hover
around global minimum at large average distance. (C) Small constant schedule (71=0 .01) ,
50k exemplars. Means stuck in metastable local minimum. (D) Small constant schedule ('11=0 .01), lOOk exemplars (later in the run pictured in C). Means tunnel out of loc al
minimum and hover around global minimum.
t?,
837
838
Darken and Moody
10-'
10"
B
10"
10"
~
~
~
-'!
"
~
..!:.
;j
;j
" 10-?
~ 10'"
1
1
j
j
a
10'"
10"
10"
10"'0
aampl_/elu.oter
1
I
.ampl ??/ e1uat??
10-1
10"
C
10"
10'"
o.
o.
..
':>
j
""..
'0
~
i
'i'
? 10"
~
i
il
10'"
?
:I
10'"
10"
10'" !'1.J"or-'-'............';'J;IO..--'-.............~lo=r-............w:,'k-'--'-'-'~r-'--'u..L.U'fh-J
IOUIlpl.../ctuter
nmpl ??/du.tu
Figure 3: Comparison of 10 runs over the various schedules on the 9-means clustering task (as described under Fig. 1). The exemplars are the same for each schedule.
Misadjustment is defined as IIW - W be "tIl2. (A) Small constant schedule (1]=0.01).
Note the well-defined transitions out of metastable local minima and large misadjustment late in the runs. (B) "Running average" schedule (T}
1/(1 + t)). 6
out of 10 runs stick in a local minimum. The others slowly head for the global
minimum. (C) Search-then-converge schedule (T} = 1/(1 + t/4)). All but one run
head for global minimum, but at a suboptimal rate (asymptotic slope less than -1).
(D) Search-then-converge schedule (T}
1/(1 + t/32)). All runs head for global
minimum at optimally quick rate (asymptotic slope of -1).
=
3,314 | 4,000 | Sufficient Conditions for Generating Group Level
Sparsity in a Robust Minimax Framework
Hongbo Zhou and Qiang Cheng
Computer Science department,
Southern Illinois University Carbondale, IL, 62901
[email protected], [email protected]
Abstract
Regularization technique has become a principled tool for statistics and machine
learning research and practice. However, in most situations, these regularization
terms are not well interpreted, especially on how they are related to the loss function and data. In this paper, we propose a robust minimax framework to interpret
the relationship between data and regularization terms for a large class of loss
functions. We show that various regularization terms are essentially corresponding to different distortions to the original data matrix. This minimax framework
includes ridge regression, lasso, elastic net, fused lasso, group lasso, local coordinate coding, multiple kernel learning, etc., as special cases. Within this minimax
framework, we further give mathematically exact definition for a novel representation called sparse grouping representation (SGR), and prove a set of sufficient
conditions for generating such group level sparsity. Under these sufficient conditions, a large set of consistent regularization terms can be designed. This SGR
is essentially different from group lasso in the way of using class or group information, and it outperforms group lasso when there appears group label noise. We
also provide some generalization bounds in a classification setting.
1
Introduction
A general form of estimating a quantity w ? Rn from an empirical measurement set X by minimizing a regularized or penalized functional is
w
? = argmin{L(Iw (X )) + ?J (w)},
(1)
w
where Iw (X ) ? Rm expresses the relationship between w and data X , L(.) := Rm ? R+ is a
loss function, J (.) := Rn ? R+ is a regularization term and ? ? R is a weight. Positive integers
n, m represent the dimensions of the associated Euclidean spaces. Varying in specific applications,
the loss function L has lots of forms, and the most often used are these induced (A is induced
by B, means B is the core part of A) by squared Euclidean norm or squared Hilbertian norms.
Empirically, the functional J is often interpreted as smoothing function, model bias or uncertainty.
Although Equation (1) has been widely used, it is difficult to establish a general mathematically
exact relationship between L and J . This directly encumbers the interpretability of parameters in
the model selection. It would be desirable if we can represent Equation (1) by a simpler form
?
?
w
? = argminL (Iw (X )).
(2)
w
Obviously, Equation (2) provides a better interpretability for the regularization term in Equation (1)
by explicitly expressing the model bias or uncertainty as a variable of the relationship functional. In
this paper, we introduce a minimax framework and show that for a large family of Euclidean norm
induced loss functions, an equivalence relationship between Equation (1) and Equation (2) can be
1
established. Moreover, the model bias or uncertainty will be expressed as distortions associated
with certain functional spaces. We will give a series of corollaries to show that well-studied lasso,
group lasso, local coordinate coding, multiple kernel learning, etc., are all special cases of this novel
framework. As a result, we shall see that various regularization terms associated with lasso, group
lasso, etc., can be interpreted as distortions that belong to different distortion sets.
Within this framework, we further investigate a large family of distortion sets which can generate
a special type of group level sparsity which we call sparse grouping representation (SGR). Instead
of merely designing one specific regularization term, we give sufficient conditions for the distortion
sets to generate the SGR. Under these sufficient conditions, a large set of consistent regularization
terms can be designed. Compared with the well-known group lasso which uses group distribution
information in a supervised learning setting, the SGR is an unsupervised one and thus essentially
different from the group lasso. In a novel fault-tolerance classification application, where there
appears class or group label noise, we show that the SGR outperforms the group lasso. This is not
surprising because the class or group label information is used as a core part of the group lasso while
the group sparsity produced by the SGR is intrinsic, in that the SGR does not need the class label
information as priors. Finally, we also note that the group level sparsity is of great interests due to
its wide applications in various supervised learning settings.
In this paper, we will state our results in a classification setting. In Section 2 we will review some
closely related work, and we will introduce the robust minimax framework in Section 3. In Section
4, we will define the sparse grouping representation and prove a set of sufficient conditions for
generating group level sparsity. An experimental verification on a low resolution face recognition
task will be reported in Section 5.
2
Related Work
In this paper, we will mainly work with the penalized linear regression problem and we shall review
some closely related work here. For penalized linear regression, several well-studied regularization
procedures are ridge regression or Tikhonov regularization [15], bridge regression [10], lasso [19]
and subset selection [5], fused lasso [20], elastic net [27], group lasso [25], multiple kernel learning
[3, 2], local coordinate coding [24], etc. The lasso has at least three prominent features to make itself
a principled tool among all of these procedures: continuous shrinkage and automatic variable selection at the same time, computational tractability (can be solved by linear programming methods) as
well as inducing sparsity. Recent results show that lasso can recover the solution of l0 regularization
under certain regularity conditions [8, 6, 7]. Recent advances such as fused lasso [20], elastic net
[27], group lasso [25] and local coordinate coding [24] are motivated by lasso [19].
Two concepts closely related to our work are the elastic net or grouping effect observed by [27] and
the group lasso [25]. The elastic net model hybridizes lasso and ridge regression to preserve some
redundancy for the variable selection, and it can be viewed as a stabilized version of lasso [27] and
hence it is still biased. The group lasso can produce group level sparsity [25, 2] but it requires the
group label information as prior. We shall see that in a novel classification application when there
appears class label noise [22, 18, 17, 26], the group lasso fails. We will discuss the differences of
various regularization procedures in a classification setting. We will use the basic schema for the
sparse representation classification (SRC) algorithm proposed in [21], and different regularization
procedures will be used to replace the lasso in the SRC.
The proposed framework reveals a fundamental connection between robust linear regression and
various regularized techniques using regularization terms of l0 , l1 , l2 , etc. Although [11] first introduced a robust model for least square problem with uncertain data and [23] discussed a robust model
for lasso, our results allow for using any positive regularization functions and a large family of loss
functions.
3
Minimax Framework for Robust Linear Regression
In this section, we will start with taking the loss function L as squared Euclidean norm, and we will
generalize the results to other loss functions in section 3.4.
2
3.1
Notations and Problem Statement
In a general M (M > 1)-classes classification setting, we are given a training dataset T =
{(xi , gi )}ni=1 , where xi ? Rp is the feature vector and gi ? {1, ? ? ? , M } is the class label for
the ith observation. A data (observation) matrix is formed as A = [x1 , ? ? ? , xn ] of size p ? n. Given
a test example y, the goal is to determine its class label.
3.2
Distortion Models
(j)
Assume that the jth class C_j has n_j observations x_1^(j), . . . , x_{n_j}^(j). If x belongs to the jth class, then x ∈ span{x_1^(j), . . . , x_{n_j}^(j)}. We approximate y by a linear combination of the training examples:
T
(3)
p
where w = [w1 , w2 , ? ? ? , wn ] is a vector of combining coefficients; and ? ? R represents a vector
of additive zero-mean noise. We assume a Gaussian model v ? N (0, ? 2 I) for this additive noise,
so a least squares estimator can be used to compute the combining coefficients.
The observed training dataset T may have undergone various noise or distortions. We define the
following two classes of distortion models.
Definition 1: A random matrix ?A is called bounded example-wise (or attribute) distortion
(BED) with a bound ?, denoted as BED(?), if ?A := [d1 , ? ? ? , dn ], dk ? Rp , ||dk ||2 ? ?, k =
1, ? ? ? , n. where ? is a positive parameter.
This distortion model assumes that each observation (signal) is distorted independently from the
other observations, and the distortion has a uniformly upper bounded energy (?uniformity? refers to
the fact that all the examples have the same bound). BED includes attribute noise defined in [22,
26], and some examples of BED include Gaussian noise and sampling noise in face recognition.
Definition 2: A random matrix ?A is called bounded coefficient distortion (BCD) with bound f ,
denoted as BCD(f ), if ||?Aw||2 ? f (w), ?w ? Rp , where f (w) ? R+ .
The above definition allows for any distortion with or without inter-observation dependency. For
example, we can take f (w) = ?||w||2 , and Definition 2 with this f (w) means that the maximum
eigenvalue of ?A is upper limited by ?. This can be easily seen as follows. Denote the maximum
eigenvalue of ?A by ?max (?A). Then we have
||?Au||2
uT ?Av
= sup
,
||u||2
u6=0
u,v6=0 ||u||2 ||v||2
?max (?A) = sup
which is a standard result from the singular value decomposition (SVD) [12]. That is, the condition
of ||?Aw||2 ? ?||w||2 is equivalent to the condition that the maximum eigenvalue of ?A is upper
bounded by ?. In fact, BED is a subset of BCD by using triangular inequality and taking special
forms of f (w). We will use D := BCD to represent the distortion model.
Besides the additive residue ? generated from fitting models, to account for the above distortion
models, we shall consider multiplicative noise by extending Equation (3) as follows:
y = (A + ?A)w + ?,
(4)
where ?A ? D represents a possible distortion imposed to the observations.
3.3
Fundamental Theorem of Distortion
Now with the above refined linear model that incorporates a distortion model, we estimate the model
parameters w by minimizing the variance of Gaussian residues for the worst distortions within a
permissible distortion set D. Thus our robust model is
min max ||y ? (A + ?A)w||2 .
w?Rp ?A?D
(5)
The above minimax estimation will be used in our robust framework.
An advantage of this model is that it considers additive noise as well as multiplicative one within
a class of allowable noise models. As the optimal estimation of the model parameter in Equation
3
(5), w? , is derived for the worst distortion in D, w? will be insensitive to any deviation from the
underlying (unknown) noise-free examples, provided the deviation is limited to the tolerance level
given by D. The estimate w? thus is applicable to any A + ?A with ?A ? D. In brief, the
robustness of our framework is offered by modeling possible multiplicative noise as well as the
consequent insensitivity of the estimated parameter to any deviations (within D) from the noise-free
underlying (unknown) data. Moreover, this model can seamlessly incorporate either example-wise
noise or class noise, or both.
Equation (5) provides a clear interpretation of the robust model. In the following, we will give a
theorem to show an equivalence relationship between the robust minimax model of Equation (5)
and a general form of regularized linear regression procedure.
Theorem 1. Equation (5) with distortion set D(f ) is equivalent to the following generalized regularized minimization problem:
minp ||y ? Aw||2 + f (w).
(6)
w?R
?
Sketch of the proof: Fix w = w and establish equality between upper bound and lower bound.
||y ? (A + ?A)w? ||2 ? ||y ? Aw? ||2 + ||?Aw? ||2
? ||y ? Aw? ||2 + f (w? ).
In the above we have used the triangle inequality of norms. If y ? Aw? 6= 0, we define u =
(y ? Aw? )/||y ? Aw? ||2 . Since max f (?A) ? f (?A? ), by taking ?A? = ?uf (w? )t(w? )T /k,
?A?D
where t(wi? ) = 1/wi? for wi? 6= 0, t(wi? ) = 0 for wi? = 0 and k is the number of non-zero wi? (note
that w? is fixed so we can define t(w? )), we can actually attain the upper bound. It is easily verified
that the expression is also valid if y ? Aw? = 0.
Theorem 1 gives an equivalence relationship between general regularized least squares problems
and the robust regression under certain distortions. It should be noted that Equation (6) involves
min ||.||2 , and the standard form for least squares problem uses min ||.||22 as a loss function. It
is known that these two coincide up to a change of the regularization coefficient so the following
conclusions are valid for both of them. Several corollaries related to l0 , l1 , l2 , elastic net, group
lasso, local coordinate coding, etc., can be derived based on Theorem 1.
Corollary 1: l0 regularized regression is equivalent to taking a distortion set D(f l0 ) where
f l0 (w) = t(w)wT , t(wi ) = 1/wi for wi 6= 0, t(wi ) = 0 for wi = 0.
Corollary 2: l1 regularized regression (lasso) is equivalent to taking a distortion set D(f l1 ) where
f l1 (w) = ?||w||1 .
Corollary 3: Ridge regression (l2 ) is equivalent to taking a distortion set D(f l2 ) where f l2 (w) =
?||w||2 .
Corollary 4: Elastic net regression [27] (l2 + l1 ) is equivalent to taking a distortion set D(f e )
where f e (w) = ?1 ||w||1 + ?2 ||w||22 , with ?1 > 0, ?2 > 0.
gl1
Corollary 5: Group
Pmlasso [25] (grouped l1 of l2 ) is equivalent to taking a distortion set D(f )
where f gl1 (w) = j=1 dj ||wj ||2 , dj is the weight for jth group and m is the number of group.
lcc
Corollary 6:
PnLocal coordinate2 coding [24] is equivalent to taking a distortion set D(f ) where
lcc
f (w) = i=1 |wi |||xi ? y||2 , xi is ith basis, n is the number of basis, y is the test example.
Similar results can be derived for multiple kernel learning [3, 2], overlapped group lasso [16], etc.
3.4
Generalization to Other Loss Functions
From the proof of Theorem 1, we can see the Euclidean norm used in Theorem 1 can be generalized
to other loss functions too. We only require the loss function is a proper norm in a normed vector
space. Thus, we have the following Theorem for a general form of Equation (1).
Theorem 2. Given the relationship function Iw (X ) = y ? Aw and J ? R+ in a normed vector
space, if the loss functional L is a norm, then Equation (1) is equivalent to the following minimax
estimation with a distortion set D(J ):
(7)
minp max L(y ? (A + ?A)w).
w?R ?A?D(J )
4
4.1
Sparse Grouping Representation
Definition of SGR
We consider a classification application where class noise is present. The class noise can be viewed
as inter-example distortions. The following novel representation is proposed to deal with such distortions.
Definition 3. Assume all examples are standardized with zero mean and unit variance. Let ?ij =
xTi xj be the correlation for any two examples xi , xj ? T. Given a test example y, w ? Rn is defined
as a sparse grouping representation for y, if both of the following two conditions are satisfied,
(a) If wi ? ? and ?ij > ?, then |wi ? wj | ? 0 (when ? ? 1) for all i and j.
(b) If wi < ? and ?ij > ?, then wj ? 0 (when ? ? 1) for all i and j.
Especially, ? is the sparsity threshold, and ? is the grouping threshold.
This definition requires that if two examples are highly correlated, then the resulted coefficients
tend to be identical. Condition (b) produces sparsity by requiring that these small coefficients will
be automatically thresholded to zero. Condition (a) preserves grouping effects [27] by selecting
all these coefficients which are larger than a certain threshold. In the following we will provide
sufficient conditions for the distortion set D(J ) to produce this group level sparsity.
4.2
Group Level Sparsity
As known, D(l1 ) or lasso can only select arbitrarily one example from many identical candidates
[27]. This leads to the sensitivity to the class noise as the example lasso chooses may be mislabeled.
As a consequence, the sparse representation classification (SRC), a lasso based classification schema
[21], is not suitable for applications in the presence of class noise. The group lasso can produce
group level sparsity, but it uses group label information to restrict the distribution of the coefficients.
When there exists group label noise or class noise, group lasso will fail because it cannot correctly
determine the group. Definition 3 says that the SGR is defined by example correlations and thus it
will not be affected by class noise.
In the general situation where the examples are not identical but have high within-class correlations,
we give the following theorem to show that the grouping is robust in terms of data correlation. From
now on, for distortion set D(f (w)), we require that f (w) = 0 for w = 0 and we use a special form
of f (w), which is a sum of components fj (w),
f (w) = ?
n
X
fj (wj ).
j=1
Theorem 3. Assume all examples are standardized. Let ?ij = xTi xj be the correlation for any two
examples. For a given test example y, if both fi 6= 0 and fj 6= 0 have first order derivatives, we
have
q
?
?
2||y||2
2(1 ? ?ij ).
(8)
|fi ? fj | ?
?
P
Sketch of the proof: By differentiating ||y ? Aw||22 + fj with respect to wi and wj respectively,
?
?
we have ?2xTi {y ? Aw} + ?fi = 0 and ?2xTj {y ? Aw} + ?fj = 0. The difference of these two
?
?
2(xT ?xT )r
equations is fi ? fj = i ? j where r = y ? Aw is the residual vector. Since all examples are
standardized, we have ||xTi ? xTj ||22 = 2(1 ? ?ij ) where ? = xTi xj . For a particular value w = 0, we
have ||r||2 = ||y||2 , and thus we can get ||r||2 ? ||y||2 for the optimal value of w. Combining r and
||xTi ? xTj ||2 , we proved the Theorem 3.
This theorem is different from the Theorem 1 in [27] in the following aspects: a) we have no restrictions on the sign of the wi or wj ; b) we use a family of functions which give us more choices to
bound the coefficients. As aforementioned, it is not necessary for fi to be the same with fj and we
?
even can use different growth rates for different components; and c) fi (wi ) does not have to be wi
and a monotonous function with very small growth rate would be enough.
5
As an illustrative example, we can choose fi (wi ) or fj (wj ) to be a second order function with
?
?
respect to wi or wj . Then the resulted |fi ? fj | will be the difference of the coefficients ?|wi ? wj |
with a constant ?. If the two examples are highly correlated and ? is sufficiently large, then we can
conclude that the difference of the coefficients will be close to zero.
The sparsity implies an automatic thresholding ability with which all small estimated coefficients
will be shrunk to zero, that is, f (w) has to be singular at the point w = 0 [9]. Incorporating this
requirement with Theorem 3, we can achieve group level sparsity: if some of the group coefficients
are small and automatically thresholded to zero, all other coefficients within this group will be reset
to zero too. This correlation based group level sparsity does not require any prior information on the
distribution of group labels.
To make a good estimator, there are still two properties we have to consider: continuity and unbiasedness [9]. In short, to avoid instability, we always require the resulted estimator for w be a
?
continuous function; and a sufficient condition for unbiasedness is that f (|w|) = 0 when |w| is
large. Generally, the requirement of stability is not consistent with that of sparsity. Smoothness
determines the stability and singularity at zero measures the degree of sparsity. As an extreme example, l1 can produce sparsity while l2 does not because l1 is singular while l2 is smooth at zero;
at the same time, l2 is more stable than l1 . More details regarding these conditions can be found in
[1, 9].
4.3
Sufficient Condition for SGR
Based on the above discussion, we can readily construct a sparse grouping representation based
on Equation (5) where we only need to specify a distortion set D(f ? (w)) satisfying the following
sufficient conditions:
Lemma?? 1: Sufficient condition
for SGR.
?
(a). fj? ? R+ for all fj 6= 0.
(b). fj? is continuous and singular at zero with respect to wj for all j.
?
(c). fj? (|wj |) = 0 for large |wj | for all j.
Proof: Together with Theorem 3, it is easy to be verified.
As we can see, the regularization term ?l1 + (1 ? ?)l22 proposed by [27] satisfies the above condition
(a) and (b), but it fails to comply with (c). So, it may become biased for large |w|. Based on
these conditions, we can easily construct regularization terms f ? to generate the sparse grouping
representation. We will call these f ? as core functions for producing the SGR. As some concrete
examples, we can construct a large family of clipped ?1 Lq + ?2 l22 where 0 < q ? 1 by restricting
fi? = wi I(|wi | < ?) + c for some constant ? and c. Also, SCAD [9] satisfies all three conditions
so it belongs to f ? . This gives more theoretic justifications for previous empirical success of using
SCAD.
4.4
Generalization Bounds for Presence of Class Noise
We will follow the algorithm given in [21] and merely replace the lasso with the SGR or group lasso.
After estimating the (minimax) optimal combining coefficient vector w? by the SGR or group lasso,
we may calculate the distance from the new test data y to the projected point in the subspace spanned
by class Ci :
di (A, w? |Ci ) = di (A|Ci , w? ) = ||y ? Aw? |Ci ||2
(9)
?
where w |Ci represents restricting w? to the ith class Ci ; that is, (w? |Ci )j = wj? 1(xj ? Ci ), where
1(?) is an indicator function; and similarly A|Ci represents restricting A to the ith class Ci .
A decision rule may be obtained by choosing the class with the minimum distance:
?i = argmini?{1,??? ,M } {di }.
(10)
Based on these notations, we now have the following generalization bounds for the SGR in the
presence of class noise in the training data.
Theorem 4. All examples are standardized to be zero mean and unit variance. For an arbitrary
class Ci of N examples, we have p (p < 0.5) percent (fault level) of labels mis-classified into class
6
Ck 6= Ci . We assume w is a sparse grouping representation for any test example y and ?ij > ? (?
is in Definition 3) for any two examples. Under the distance function d(A|Ci , w) = d(A, w|Ci ) =
?
||y ? Aw|Ci ||2 and fj = w for all j, we have confidence threshold ? to give correct estimation ?i for
y, where
(1 ? p) ? N ? (w0 )2
,
??
d
where w0 is a constant and the confidence threshold is defined as ? = di (A|Ci ) ? di (A|Ck ).
Sketch of the proof: Assume y is in class C_i. The correctly labeled (mislabeled, respectively) subset of C_i is C_i1 (C_i2, respectively), and the size of set C_i1 is larger than that of C_i2. We use A_1w to denote Aw|C_i1 and A_2w to denote Aw|C_i2. By the triangle inequality, we have
τ = ||y − Aw|C_i1||_2 − ||y − Aw|C_i2||_2 ≤ ||A_1w − A_2w||_2.
For each k ∈ C_i1, we differentiate with respect to w_k and follow the same procedure as in the proof of Theorem 3. Then we sum all equalities for C_i1 and repeat the same procedure for each i ∈ C_i2. Finally we subtract the summation over C_i2 from the summation over C_i1. Using the conditions that w is a sparse grouping representation and ρ_ij > δ, combining Definition 3, all w_k in class C_i should be the same constant w_0 while the others → 0. By taking the l_2-norm of both sides, we have ||A_1w − A_2w||_2 ≤ (1 − p)N(w_0)²/d.
This theorem gives an upper bound for the fault-tolerance against class noise. By this theorem, we
can see that the class noise must be smaller than a certain value to guarantee a given fault correction
confidence level τ.
5
Experimental Verification
In this section, we compare several methods on a challenging low-resolution face recognition task
(multi-class classification) in the presence of class noise. We use the Yale database [4] which consists
of 165 gray scale images of 15 individuals (each person is a class). There are 11 images per subject,
one per different facial expression or configuration: center-light, w/glasses, happy, left-light, w/no glasses, normal, right-light, sad, sleepy, surprised, and wink. Starting from the original 64 × 64 images, all images are down-sampled to have a dimension of 49. A training/test data set is generated
by uniformly selecting 8 images per individual to form the training set, and the rest of the database
is used as the test set; repeating this procedure to generate five random split copies of training/test
data sets. Five class noise levels are tested. Class noise level=p means there are p percent of labels
(uniformly drawn from all labels of each class) mislabeled for each class.
For SVM, we use the standard implementation of multi-class (one-vs-all) LibSVM in MatlabArsenal¹. For lasso based SRC, we use the CVX software [13, 14] to solve the corresponding
convex optimization problems. The group lasso based classifier is implemented in the same way as
the SRC. We use a clipped αl₁ + (1 − α)l₂ as an illustrative example of the SGR, and the corresponding classifier is denoted as SGRC. For the lasso, group lasso and SGR based classifiers, we
run through λ ∈ {0.001, 0.005, 0.01, 0.05, 0.1, 0.2} and report the best results for each classifier.
Figure 1(b) shows the parameter range of λ that is appropriate for the lasso, group lasso and SGR
based classifiers. Figure 1(a) shows that the SGR based classifier is more robust than the lasso or group
lasso based classifiers in terms of class noise. These results verify that in a novel application where
class noise exists in the training data, the SGR is more suitable than group lasso for generating
group level sparsity.
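The evaluation protocol just described (inject class noise per class, sweep the regularization weight, keep the best test error) can be sketched as follows; fit_and_score is a placeholder for any of the lasso, group-lasso or SGR based classifiers, and all names are illustrative:

import numpy as np

def inject_class_noise(labels, p, rng):
    # Flip a fraction p of the labels in each class to a different, random class.
    labels = labels.copy()
    classes = np.unique(labels)
    for c in classes:
        idx = np.flatnonzero(labels == c)
        flip = rng.choice(idx, size=int(p * len(idx)), replace=False)
        for i in flip:
            labels[i] = rng.choice(classes[classes != c])
    return labels

def best_over_lambda(fit_and_score, Xtr, ytr, Xte, yte,
                     lambdas=(0.001, 0.005, 0.01, 0.05, 0.1, 0.2)):
    # Report the best test error over the regularization grid, as in the text.
    return min(fit_and_score(lam, Xtr, ytr, Xte, yte) for lam in lambdas)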
6 Conclusion
Towards a better understanding of various regularized procedures in robust linear regression, we
introduce a robust minimax framework which considers both additive and multiplicative noise or
distortions. Within this unified framework, various regularization terms correspond to different
[Footnote]
1. A Matlab package for classification algorithms, which can be downloaded from http://www.informedia.cs.cmu.edu/yanrong/MATLABArsenal/MATLABArsenal.htm.
[Figure 1 plots: (a) classification error rate vs. class noise level, with curves for SVM, SRC, SGRC and Group lasso; (b) classification error rate vs. λ, with curves for SRC, SGRC and Group lasso. Axis ticks omitted.]
Figure 1: (a) Comparison of SVM, SRC (lasso), SGRC and group lasso based classifiers on the low
resolution Yale face database. At each level of class noise, the error rate is averaged over five copies
of training/test datasets for each classifier. For each classifier, the variance bars for each class noise
level are plotted. (b) Illustration of the paths for SRC (lasso), SGRC and group lasso. λ is the weight
of the regularization term. All data points are averaged over five copies with the same class noise level
of 0.2.
distortions to the original data matrix. We further investigate a novel sparse grouping representation
(SGR) and prove sufficient conditions for generating such group level sparsity. We also provide a
generalization bound for the SGR. In a novel classification application where class noise exists
in the training examples, we show that the SGR is more robust than group lasso. The SCAD and
clipped elastic net are special instances of the SGR.
References
[1] A. Antoniadis and J. Fan. Regularization of wavelet approximations. J. the American Statistical Association, 96:939–967, 2001.
[2] F. Bach. Consistency of the group lasso and multiple kernel learning. Journal of Machine Learning Research, 9:1179–1225, 2008.
[3] F. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proceedings of the Twenty-first International Conference on Machine Learning, 2004.
[4] P. N. Belhumeur, J. Hespanha, and D. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intelligence, 17(7):711–720, 1997.
[5] L. Breiman. Heuristics of instability and stabilization in model selection. Ann. Statist., 24:2350–2383, 1996.
[6] E. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. on Pure and Applied Math, 59(8):1207–1233, 2006.
[7] E. Candès and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Information Theory, 52(12):5406–5425, 2006.
[8] D. Donoho. For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Comm. on Pure and Applied Math, 59(6):797–829, 2006.
[9] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Statist. Ass., 96:1348–1360, 2001.
[10] I. Frank and J. Friedman. A statistical view of some chemometrics regression tools. Technometrics, 35:109–148, 1993.
[11] L. El Ghaoui and H. Lebret. Robust solutions to least-squares problems with uncertain data. SIAM Journal Matrix Analysis and Applications, 18:1035–1064, 1997.
[12] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins Univ Pr, 1996.
[13] M. Grant and S. Boyd. Graph implementations for nonsmooth convex programs. Recent Advances in Learning and Control, Lecture Notes in Control and Information Sciences, pages 95–110, 2008.
[14] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, 2009.
[15] A. Hoerl and R. Kennard. Ridge regression. Encyclopedia of Statistical Science, 8:129–136, 1988.
[16] L. Jacob, G. Obozinski, and J.-P. Vert. Group lasso with overlap and graph lasso. In Proceedings of the Twenty-sixth International Conference on Machine Learning, pages 433–440, 2009.
[17] J. Maletic and A. Marcus. Data cleansing: Beyond integrity analysis. In Proceedings of the Conference on Information Quality, 2000.
[18] K. Orr. Data quality and systems theory. Communications of the ACM, 41(2):66–71, 1998.
[19] R. Tibshirani. Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B, 58:267–288, 1996.
[20] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. J. R. Statist. Soc. B, 67:91–108, 2005.
[21] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, pages 210–227, 2009.
[22] X. Wu. Knowledge Acquisition from Databases. Ablex Publishing Corp, Greenwich, CT, USA, 1995.
[23] H. Xu, C. Caramanis, and S. Mannor. Robust regression and lasso. In NIPS, 2008.
[24] K. Yu, T. Zhang, and Y. Gong. Nonlinear learning using local coordinate coding. In Advances in Neural Information Processing Systems, volume 22, 2009.
[25] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society Series B, 68(1):49–67, 2006.
[26] X. Zhu, X. Wu, and S. Chen. Eliminating class noise in large datasets. In Proceedings of the 20th ICML International Conference on Machine Learning, Washington D.C., USA, March 2003.
[27] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. J. R. Statist. Soc. B, 67(2):301–320, 2005.
3,315 | 4,001 | Switched Latent Force Models
for Movement Segmentation
Mauricio A. Álvarez¹, Jan Peters², Bernhard Schölkopf², Neil D. Lawrence³,⁴
¹ School of Computer Science, University of Manchester, Manchester, UK M13 9PL
² Max Planck Institute for Biological Cybernetics, Tübingen, Germany 72076
³ School of Computer Science, University of Sheffield, Sheffield, UK S1 4DP
⁴ The Sheffield Institute for Translational Neuroscience, Sheffield, UK S10 2HQ
Abstract
Latent force models encode the interaction between multiple related dynamical
systems in the form of a kernel or covariance function. Each variable to be modeled is represented as the output of a differential equation and each differential
equation is driven by a weighted sum of latent functions with uncertainty given
by a Gaussian process prior. In this paper we consider employing the latent force
model framework for the problem of determining robot motor primitives. To deal
with discontinuities in the dynamical systems or the latent driving force we introduce an extension of the basic latent force model, that switches between different
latent functions and potentially different dynamical systems. This creates a versatile representation for robot movements that can capture discrete changes and
non-linearities in the dynamics. We give illustrative examples on both synthetic
data and for striking movements recorded using a Barrett WAM robot as haptic input device. Our inspiration is robot motor primitives, but we expect our model to
have wide application for dynamical systems including models for human motion
capture data and systems biology.
1 Introduction
Latent force models [1] are a new approach for modeling data that allows combining dimensionality
reduction with systems of differential equations. The basic idea is to assume an observed set of
D correlated functions to arise from an unobserved set of R forcing functions. The assumption is
that the R forcing functions drive the D observed functions through a set of differential equation
models. Each differential equation is driven by a weighted mix of latent forcing functions. Sets
of coupled differential equations arise in many physics and engineering problems particularly when
the temporal evolution of a system needs to be described. Learning such differential equations has
important applications, e.g., in the study of human motor control and in robotics [6]. A latent force
model differs from classical approaches as it places a probabilistic process prior over the latent
functions and hence can make statements about the uncertainty in the system. A joint Gaussian
process model over the latent forcing functions and the observed data functions can be recovered
using a Gaussian process prior in conjunction with linear differential equations [1]. The resulting
latent force modeling framework allows the combination of the knowledge of the systems dynamics
with a data driven model. Such generative models can be used to good effect, for example in ranked
target prediction for transcription factors [5].
If a single Gaussian process prior is used to represent each latent function then the models we consider are limited to smooth driving functions. However, discontinuities and segmented latent forces
are omnipresent in real-world data. For example, impact forces due to contacts in a mechanical
dynamical system (when grasping an object or when the feet touch the ground) or a switch in an
electrical circuit result in discontinuous latent forces. Similarly, most non-rhythmic natural motor skills consist of a sequence of segmented, discrete movements. If these segments are separate
time-series, they should be treated as such and not be modeled by the same Gaussian process model.
In this paper, we extract a sequence of dynamical systems motor primitives modeled by second
order linear differential equations in conjunction with forcing functions (as in [1, 6]) from human
movement to be used as demonstrations of elementary movements for an anthropomorphic robot.
As human trajectories have a large variability: both due to planned uncertainty of the human?s
movement policy, as well as due to motor execution errors [7], a probabilistic model is needed to
capture the underlying motor primitives. A set of second order differential equations is employed
as mechanical systems are of the same type and a temporal Gaussian process prior is used to allow
probabilistic modeling [1]. To be able to obtain a sequence of dynamical systems, we augment the
latent force model to include discontinuities in the latent function and change dynamics. We introduce discontinuities by switching between different Gaussian process models (superficially similar
to a mixture of Gaussian processes; however, the switching times are modeled as parameters so that
at any instant a single Gaussian process is driving the system). Continuity of the observed functions
is then ensured by constraining the relevant state variables (for example in a second order differential
equation velocity and displacement) to be continuous across the switching points. This allows us
to model highly non stationary multivariate time series. We demonstrate our approach on synthetic
data and real world movement data.
2 Review of Latent force models (LFM)
Latent force models [1] are hybrid models that combine mechanistic principles and Gaussian processes as a flexible way to introduce prior knowledge for data modeling. A set of D functions
{y_d(t)}_{d=1}^D is modeled as the set of output functions of a series of coupled differential equations,
whose common input is a linear combination of R latent functions, {u_r(t)}_{r=1}^R. Here we focus on a
second order ordinary differential equation (ODE). We assume the output y_d(t) is described by

  A_d d²y_d(t)/dt² + C_d dy_d(t)/dt + κ_d y_d(t) = Σ_{r=1}^R S_{d,r} u_r(t),
where, for a mass-spring-damper system, A_d would represent the mass, C_d the damper and κ_d the
spring constant associated with the output d. We refer to the variables S_{d,r} as the sensitivity parameters.
They are used to represent the relative strength that the latent force r exerts over the output d. For
simplicity we now focus on the case where R = 1, although our derivations apply more generally.
Note that models that learn a forcing function to drive a linear system have proven to be well-suited
for imitation learning for robot systems [6]. The solution of the second order ODE follows
  y_d(t) = y_d(0) c_d(t) + ẏ_d(0) e_d(t) + f_d(t, u),    (1)

where y_d(0) and ẏ_d(0) are the output and the velocity at time t = 0, respectively, known as the
initial conditions (IC). The angular frequency is given by ω_d = √((4A_d κ_d − C_d²)/(4A_d²)) and the
remaining variables are given by

  c_d(t) = e^{−α_d t} [cos(ω_d t) + (α_d/ω_d) sin(ω_d t)],    e_d(t) = (e^{−α_d t}/ω_d) sin(ω_d t),

  f_d(t, u) = (S_d/(A_d ω_d)) ∫₀ᵗ G_d(t − τ) u(τ) dτ = (S_d/(A_d ω_d)) ∫₀ᵗ e^{−α_d(t−τ)} sin[(t − τ)ω_d] u(τ) dτ,

with α_d = C_d/(2A_d). Note that f_d(t, u) has an implicit dependence on the latent function u(t). The
uncertainty in the model of Eq. (1) is due to the fact that the latent force u(t) and the initial conditions
y_d(0) and ẏ_d(0) are not known. We will assume that the latent function u(t) is sampled from a zero
mean Gaussian process prior, u(t) ∼ GP(0, k_{u,u}(t, t′)), with covariance function k_{u,u}(t, t′).
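For concreteness, the deterministic pieces c_d(t) and e_d(t) can be evaluated directly from A_d, C_d and κ_d. A minimal numpy sketch, assuming the underdamped regime 4A_dκ_d > C_d² so that ω_d is real:

import numpy as np

def lfm_basis(t, A_d, C_d, kappa_d):
    # Homogeneous-solution bases c_d(t), e_d(t) of the second-order ODE above.
    alpha = C_d / (2.0 * A_d)
    omega = np.sqrt((4.0 * A_d * kappa_d - C_d**2) / (4.0 * A_d**2))
    decay = np.exp(-alpha * t)
    c = decay * (np.cos(omega * t) + (alpha / omega) * np.sin(omega * t))
    e = decay * np.sin(omega * t) / omega
    return c, e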
If the initial conditions, y_IC = [y_1(0), y_2(0), ..., y_D(0), v_1(0), v_2(0), ..., v_D(0)]^⊤, are independent of u(t) and distributed as a zero mean Gaussian with covariance K_IC, the covariance function
between any two output functions, d and d′, at any two times, t and t′, k_{y_d,y_{d′}}(t, t′), is given by

  c_d(t) c_{d′}(t′) σ_{y_d,y_{d′}} + c_d(t) e_{d′}(t′) σ_{y_d,v_{d′}} + e_d(t) c_{d′}(t′) σ_{v_d,y_{d′}} + e_d(t) e_{d′}(t′) σ_{v_d,v_{d′}} + k_{f_d,f_{d′}}(t, t′),

where σ_{y_d,y_{d′}}, σ_{y_d,v_{d′}}, σ_{v_d,y_{d′}} and σ_{v_d,v_{d′}} are entries of the covariance matrix K_IC and
  k_{f_d,f_{d′}}(t, t′) = K₀ ∫₀ᵗ G_d(t − τ) ∫₀^{t′} G_{d′}(t′ − τ′) k_{u,u}(τ, τ′) dτ′ dτ,    (2)
where K₀ = S_d S_{d′}/(A_d A_{d′} ω_d ω_{d′}). So the covariance function k_{f_d,f_{d′}}(t, t′) depends on the covariance function of the latent force u(t). If we assume the latent function has a radial basis function
(RBF) covariance, k_{u,u}(t, t′) = exp[−(t − t′)²/ℓ²], then k_{f_d,f_{d′}}(t, t′) can be computed analytically [1] (see also supplementary material). The latent force model induces a joint Gaussian process
model across all the outputs. The parameters of the covariance function are given by the parameters
of the differential equations and the length scale of the latent force. Given a multivariate time series
data set, these parameters may be determined by maximum likelihood.
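As a sanity check of Eq. (2), the double integral can also be evaluated numerically for a single output (d = d′); in practice the analytic closed form of [1] should be preferred. A brute-force scipy sketch with illustrative parameter names:

import numpy as np
from scipy.integrate import dblquad

def k_ff_numeric(t, t_prime, A, C, kappa, S, ell):
    # Numerical version of Eq. (2) for one output, RBF latent-force covariance.
    alpha = C / (2.0 * A)
    omega = np.sqrt((4.0 * A * kappa - C**2) / (4.0 * A**2))
    G = lambda s: np.exp(-alpha * s) * np.sin(omega * s)
    k_uu = lambda tau, tau_p: np.exp(-(tau - tau_p) ** 2 / ell**2)
    K0 = S * S / (A * A * omega * omega)
    # dblquad integrates the first argument (tau_p) on the inner limits.
    integrand = lambda tau_p, tau: G(t - tau) * G(t_prime - tau_p) * k_uu(tau, tau_p)
    val, _ = dblquad(integrand, 0.0, t, 0.0, t_prime)
    return K0 * val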
The model can be thought of as a set of mass-spring-dampers being driven by a function sampled
from a Gaussian process. In this paper we look to extend the framework to the case where there can
be discontinuities in the latent functions. We do this through switching between different Gaussian
process models to drive the system.
3 Switching dynamical latent force models (SDLFM)
We now consider switching the system between different latent forces. This allows us to change the
dynamical system and the driving force for each segment. By constraining the displacement and
velocity at each switching time to be the same, the output functions remain continuous.
3.1 Definition of the model
We assume that the input space is divided into a series of non-overlapping intervals [t_{q−1}, t_q], q = 1, ..., Q.
During each interval, only one force u_{q−1}(t) out of the Q forces {u_{q−1}(t)}_{q=1}^Q is active.
The force u_{q−1}(t) is activated after time t_{q−1} (switched on) and deactivated (switched off)
after time t_q. We can use the basic model in equation (1) to describe the contribution to the output
due to the sequential activation of these forces. A particular output z_d(t) at a particular time instant
t, in the interval (t_{q−1}, t_q), is expressed as

  z_d(t) = y_d^q(t − t_{q−1}) = c_d^q(t − t_{q−1}) y_d^q(t_{q−1}) + e_d^q(t − t_{q−1}) ẏ_d^q(t_{q−1}) + f_d^q(t − t_{q−1}, u_{q−1}).
This equation is assumed to be valid for describing the output only inside the interval (t_{q−1}, t_q).
Here we highlighted this idea by including the superscript q in y_d^q(t − t_{q−1}) to represent the interval
q for which the equation holds, although later we will omit it to keep the notation uncluttered. Note
that for Q = 1 and t_0 = 0, we recover the original latent force model given in equation (1). We also
define the velocity ż_d(t) at each time interval (t_{q−1}, t_q) as

  ż_d(t) = ẏ_d^q(t − t_{q−1}) = g_d^q(t − t_{q−1}) y_d^q(t_{q−1}) + h_d^q(t − t_{q−1}) ẏ_d^q(t_{q−1}) + m_d^q(t − t_{q−1}, u_{q−1}),

where g_d(t) = −e^{−α_d t} sin(ω_d t)(α_d² ω_d^{−1} + ω_d) and

  h_d(t) = −e^{−α_d t} [(α_d/ω_d) sin(ω_d t) − cos(ω_d t)],    m_d(t) = (S_d/(A_d ω_d)) d/dt [∫₀ᵗ G_d(t − τ) u(τ) dτ].
Given the parameters θ = {{A_d, C_d, κ_d, S_d}_{d=1}^D, {ℓ_{q−1}}_{q=1}^Q}, the uncertainty in the outputs is
induced by the prior over the initial conditions y_d^q(t_{q−1}), ẏ_d^q(t_{q−1}) for all values of t_{q−1} and the
prior over the latent force u_{q−1}(t) that is active during (t_{q−1}, t_q). We place independent Gaussian
process priors over each of these latent forces u_{q−1}(t), assuming independence between them.
For the initial conditions y_d^q(t_{q−1}), ẏ_d^q(t_{q−1}), we could assume that they are either parameters to
be estimated or random variables with uncertainty governed by independent Gaussian distributions with covariance matrices K_IC^q, as described in the last section. However, for the class
of applications we will consider, mechanical systems, the outputs should be continuous across
the switching points. We therefore assume that the uncertainties about the initial conditions
for the interval q, y_d^q(t_{q−1}), ẏ_d^q(t_{q−1}), are prescribed by the Gaussian process that describes the
outputs z_d(t) and velocities ż_d(t) in the previous interval q − 1. In particular, we assume
y_d^q(t_{q−1}), ẏ_d^q(t_{q−1}) are Gaussian-distributed with mean values given by y_d^{q−1}(t_{q−1} − t_{q−2}) and
ẏ_d^{q−1}(t_{q−1} − t_{q−2}) and covariances k_{z_d,z_{d′}}(t_{q−1}, t_{q−1}) = cov[y_d^{q−1}(t_{q−1} − t_{q−2}), y_{d′}^{q−1}(t_{q−1} − t_{q−2})]
and k_{ż_d,ż_{d′}}(t_{q−1}, t_{q−1}) = cov[ẏ_d^{q−1}(t_{q−1} − t_{q−2}), ẏ_{d′}^{q−1}(t_{q−1} − t_{q−2})]. We also consider
covariances between z_d(t_{q−1}) and ż_{d′}(t_{q′−1}), that is, between positions and velocities for different
values of q and d.
Example 1. Let us assume we have one output (D = 1) and three switching intervals (Q = 3)
with switching points t0, t1 and t2. At t0, we assume that y_IC follows a Gaussian distribution with
mean zero and covariance K_IC. From t0 to t1, the output z(t) is described by

  z(t) = y¹(t − t0) = c¹(t − t0) y¹(t0) + e¹(t − t0) ẏ¹(t0) + f¹(t − t0, u0).

The initial condition for the position in the interval (t1, t2) is given by the last equation evaluated at
t1, that is, z(t1) = y²(t1) = y¹(t1 − t0). A similar analysis is used to obtain the initial condition
associated with the velocity, ż(t1) = ẏ²(t1) = ẏ¹(t1 − t0). Then, from t1 to t2, the output z(t) is

  z(t) = y²(t − t1) = c²(t − t1) y²(t1) + e²(t − t1) ẏ²(t1) + f²(t − t1, u1)
       = c²(t − t1) y¹(t1 − t0) + e²(t − t1) ẏ¹(t1 − t0) + f²(t − t1, u1).

Following the same train of thought, the output z(t) from t2 is given as

  z(t) = y³(t − t2) = c³(t − t2) y³(t2) + e³(t − t2) ẏ³(t2) + f³(t − t2, u2),

where y³(t2) = y²(t2 − t1) and ẏ³(t2) = ẏ²(t2 − t1). Figure 1 shows an example of the switching
dynamical latent force model scenario. To ensure the continuity of the outputs, the initial condition
is forced to be equal to the output of the last interval evaluated at the switching point.
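The recursion of Example 1 (propagate the position/velocity initial conditions across each switching point, then evaluate the current interval's solution) can be written generically. A sketch with illustrative names, where c, e, g, h are lists of per-interval basis functions and f[q], m[q] give the forced position and velocity responses driven by u_q (in the full model these are random, GP-distributed quantities):

def switched_output(t, t_switch, c, e, g, h, f, m, y0, v0):
    # Evaluate z(t) for Q intervals with switching points t_switch[0..Q-1].
    y, v = y0, v0                                   # ICs at t_switch[0]
    for q in range(len(t_switch)):
        right = t_switch[q + 1] if q + 1 < len(t_switch) else float("inf")
        if t < right:                               # t falls in interval q
            s = t - t_switch[q]
            return c[q](s) * y + e[q](s) * v + f[q](s)
        s = right - t_switch[q]                     # propagate ICs across the switch
        y, v = (c[q](s) * y + e[q](s) * v + f[q](s),
                g[q](s) * y + h[q](s) * v + m[q](s))
    raise ValueError("t could not be placed in any interval")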
3.2 The covariance function

The derivation of the covariance function for the switching model is rather involved. For continuous output signals, we must take into account constraints at each switching time. This causes initial conditions for each interval to be dependent on final conditions for the previous interval and induces correlations across the intervals. This effort is worthwhile though, as the resulting model is very flexible and can take advantage of the switching dynamics to represent a range of signals.

[Figure 1 diagram: an output z(t) assembled from three interval pieces y¹(t − t0), y²(t − t1), y³(t − t2) with switching points t0, t1, t2; the initial conditions y²(t1) and y³(t2) match the previous pieces y¹(t1 − t0) and y²(t2 − t1). Plot omitted.]

Figure 1: Representation of an output constructed through a switching dynamical latent force model with Q = 3. The initial conditions y^q(t_{q−1}) for each interval are matched to the value of the output in the last interval, evaluated at the switching point t_{q−1}, that is, y^q(t_{q−1}) = y^{q−1}(t_{q−1} − t_{q−2}).
force model with D = 1 and Q = 3. Note that while the latent forces (a and c) are discrete,
the outputs (b and d) are continuous and have matching gradients at the switching points. The
outputs are highly nonstationary. The switching times turn out to be parameters of the covariance
function. They can be optimized along with the dynamical system parameters to match the location
of the nonstationarities. We now give an overview of the covariance function derivation. Details are
provided in the supplementary material.
[Figure 2 plots: (a) System 1, samples from the latent force; (b) System 1, samples from the output; (c) System 2, samples from the latent force; (d) System 2, samples from the output. Axis ticks omitted.]
Figure 2: Joint samples of a switching dynamical LFM model with one output, D = 1, and three intervals,
Q = 3, for two different systems. Dashed lines indicate the presence of switching points. While system 2
responds instantaneously to the input force, system 1 delays its reaction due to larger inertia.
In general, we need to compute the covariance k_{z_d,z_{d′}}(t, t′) = cov[z_d(t), z_{d′}(t′)] for z_d(t) in time
interval (t_{q−1}, t_q) and z_{d′}(t′) in time interval (t_{q′−1}, t_{q′}). By definition, this covariance follows

  cov[z_d(t), z_{d′}(t′)] = cov[y_d^q(t − t_{q−1}), y_{d′}^{q′}(t′ − t_{q′−1})].
We assume independence between the latent forces u_q(t) and independence between the initial
conditions y_IC and the latent forces u_q(t).¹ With these conditions, it can be shown² that the covariance function³ for q = q′ is given as

  c_d^q(t − t_{q−1}) c_{d′}^q(t′ − t_{q−1}) k_{z_d,z_{d′}}(t_{q−1}, t_{q−1}) + c_d^q(t − t_{q−1}) e_{d′}^q(t′ − t_{q−1}) k_{z_d,ż_{d′}}(t_{q−1}, t_{q−1})
  + e_d^q(t − t_{q−1}) c_{d′}^q(t′ − t_{q−1}) k_{ż_d,z_{d′}}(t_{q−1}, t_{q−1}) + e_d^q(t − t_{q−1}) e_{d′}^q(t′ − t_{q−1}) k_{ż_d,ż_{d′}}(t_{q−1}, t_{q−1})
  + k_{f_d,f_{d′}}^q(t, t′),    (3)
where

  k_{z_d,z_{d′}}(t_{q−1}, t_{q−1}) = cov[y_d^q(t_{q−1}) y_{d′}^q(t_{q−1})],
  k_{z_d,ż_{d′}}(t_{q−1}, t_{q−1}) = cov[y_d^q(t_{q−1}) ẏ_{d′}^q(t_{q−1})],
  k_{ż_d,z_{d′}}(t_{q−1}, t_{q−1}) = cov[ẏ_d^q(t_{q−1}) y_{d′}^q(t_{q−1})],
  k_{ż_d,ż_{d′}}(t_{q−1}, t_{q−1}) = cov[ẏ_d^q(t_{q−1}) ẏ_{d′}^q(t_{q−1})],
  k_{f_d,f_{d′}}^q(t, t′) = cov[f_d^q(t − t_{q−1}) f_{d′}^q(t′ − t_{q−1})].
In expression (3), k_{z_d,z_{d′}}(t_{q−1}, t_{q−1}) = cov[y_d^{q−1}(t_{q−1} − t_{q−2}), y_{d′}^{q−1}(t_{q−1} − t_{q−2})], and values
for k_{z_d,ż_{d′}}(t_{q−1}, t_{q−1}), k_{ż_d,z_{d′}}(t_{q−1}, t_{q−1}) and k_{ż_d,ż_{d′}}(t_{q−1}, t_{q−1}) can be obtained by similar expressions. The covariance k_{f_d,f_{d′}}^q(t, t′) follows a similar expression to the one for k_{f_d,f_{d′}}(t, t′) in
equation (2), now depending on the covariance k_{u_{q−1},u_{q−1}}(t, t′). We will assume that the covariances for the latent forces follow the RBF form, with length-scale ℓ_q.
When q > q′, we have to take into account the correlation between the initial conditions y_d^q(t_{q−1}),
ẏ_d^q(t_{q−1}) and the latent force u_{q′−1}(t′). This correlation appears because of the contribution of
u_{q′−1}(t′) to the generation of the initial conditions y_d^q(t_{q−1}), ẏ_d^q(t_{q−1}). It can be shown⁴ that the
covariance function cov[z_d(t), z_{d′}(t′)] for q > q′ follows

  c_d^q(t − t_{q−1}) c_{d′}^{q′}(t′ − t_{q′−1}) k_{z_d,z_{d′}}(t_{q−1}, t_{q′−1}) + c_d^q(t − t_{q−1}) e_{d′}^{q′}(t′ − t_{q′−1}) k_{z_d,ż_{d′}}(t_{q−1}, t_{q′−1})
  + e_d^q(t − t_{q−1}) c_{d′}^{q′}(t′ − t_{q′−1}) k_{ż_d,z_{d′}}(t_{q−1}, t_{q′−1}) + e_d^q(t − t_{q−1}) e_{d′}^{q′}(t′ − t_{q′−1}) k_{ż_d,ż_{d′}}(t_{q−1}, t_{q′−1})
  + c_d^q(t − t_{q−1}) X_d^1 k_{f_d,f_{d′}}^{q′}(t_{q′−1}, t′) + c_d^q(t − t_{q−1}) X_d^2 k_{m_d,f_{d′}}^{q′}(t_{q′−1}, t′)
  + e_d^q(t − t_{q−1}) X_d^3 k_{f_d,f_{d′}}^{q′}(t_{q′−1}, t′) + e_d^q(t − t_{q−1}) X_d^4 k_{m_d,f_{d′}}^{q′}(t_{q′−1}, t′),    (4)

where

  k_{z_d,z_{d′}}(t_{q−1}, t_{q′−1}) = cov[y_d^q(t_{q−1}) y_{d′}^{q′}(t_{q′−1})],
  k_{z_d,ż_{d′}}(t_{q−1}, t_{q′−1}) = cov[y_d^q(t_{q−1}) ẏ_{d′}^{q′}(t_{q′−1})],
  k_{ż_d,z_{d′}}(t_{q−1}, t_{q′−1}) = cov[ẏ_d^q(t_{q−1}) y_{d′}^{q′}(t_{q′−1})],
  k_{ż_d,ż_{d′}}(t_{q−1}, t_{q′−1}) = cov[ẏ_d^q(t_{q−1}) ẏ_{d′}^{q′}(t_{q′−1})],
  k_{m_d,f_{d′}}^q(t, t′) = cov[m_d^q(t − t_{q−1}) f_{d′}^{q′}(t′ − t_{q−1})],

and X_d^1, X_d^2, X_d^3 and X_d^4 are functions of the form Σ_{n=2}^{q−q′} Π_{i=2}^{q−q′} x_d^{q−i+1}(t_{q−i+1} − t_{q−i}), with
x_d^{q−i+1} being equal to c_d^{q−i+1}, e_d^{q−i+1}, g_d^{q−i+1} or h_d^{q−i+1}, depending on the values of q and q′.
A similar expression to (4) can be obtained for q′ > q. Examples of these functions for specific
values of q and q′, together with more details, are given in the supplementary material.
4 Related work
There has been a recent interest in employing Gaussian processes for detection of change points in
time series analysis, an area of study that relates to some extent to our model. Some machine learning
related papers include [3, 4, 9]. References [3, 4] deal specifically with how to construct covariance functions
[Footnotes]
1. Derivations of these equations are rather involved. In the supplementary material, section 2, we include a detailed description of how to obtain equations (3) and (4).
2. See supplementary material, section 2.2.1.
3. We will write f_d^q(t − t_{q−1}, u_{q−1}) as f_d^q(t − t_{q−1}) for notational simplicity.
4. See supplementary material, section 2.2.2.
in the presence of change points (see [3], section 4). The authors propose different alternatives
according to the type of change point. From these alternatives, the closest ones to our work appear
in subsections 4.2, 4.3 and 4.4. In subsection 4.2, a mechanism is proposed to keep continuity in a covariance
function when there are two regimes described by different GPs. The authors call this
covariance the continuous conditionally independent covariance function. In our switched latent force
model, a more natural option is to use the initial conditions as the way to transition smoothly between
different regimes. In subsections 4.3 and 4.4, the authors propose covariances that account for a
sudden change in the input scale and a sudden change in the output scale. Both types of changes
are automatically included in our model due to the latent force model construction: the changes in
the input scale are accounted for by the different length-scales of the latent force GP process, and the
changes in the output scale are accounted for by the different sensitivity parameters. Importantly, we
are also concerned with multiple output systems.
On the other hand, [9] proposes an efficient inference procedure for Bayesian Online Change Point
Detection (BOCPD) in which the underlying predictive model (UPM) is a GP. This reference is less
concerned about the particular type of change that is represented by the model: in our application
scenario, the continuity of the covariance function between two regimes must be assured beforehand.
5 Implementation
In this section, we describe additional details on the implementation, i.e., covariance function, hyperparameters, sparse approximations.
Additional covariance functions. The covariance functions k_{ż_d,z_{d′}}(t, t′), k_{z_d,ż_{d′}}(t, t′) and
k_{ż_d,ż_{d′}}(t, t′) are obtained by taking derivatives of k_{z_d,z_{d′}}(t, t′) with respect to t and t′ [10].
Estimation of hyperparameters. Given the number of outputs D and the number of intervals
Q, we estimate the parameters θ by maximizing the marginal likelihood of the joint Gaussian process {z_d(t)}_{d=1}^D using gradient-descent methods. With a set of input points, t = {t_n}_{n=1}^N, the
marginal likelihood is given as p(z|θ) = N(z | 0, K_{z,z} + Σ), where z = [z_1^⊤, ..., z_D^⊤]^⊤, with
z_d = [z_d(t_1), ..., z_d(t_N)]^⊤, and K_{z,z} is a D × D block-partitioned matrix with blocks K_{z_d,z_{d′}}. The
entries in each of these blocks are evaluated using k_{z_d,z_{d′}}(t, t′). Furthermore, k_{z_d,z_{d′}}(t, t′) is computed using expressions (3) and (4), according to the relative values of q and q′.
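Given an assembled covariance matrix, evaluating the negative log marginal likelihood is a standard Cholesky computation. A minimal sketch, where K is the DN × DN matrix built from k_{z_d,z_{d′}}(t, t′) and Σ = noise_var · I is assumed diagonal:

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def neg_log_marginal_likelihood(z, K, noise_var):
    # -log p(z | theta) for z ~ N(0, K + noise_var * I).
    n = z.size
    L = cho_factor(K + noise_var * np.eye(n), lower=True)
    alpha = cho_solve(L, z)
    log_det = 2.0 * np.sum(np.log(np.diag(L[0])))
    return 0.5 * (z @ alpha + log_det + n * np.log(2.0 * np.pi))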
Efficient approximations. Optimizing the marginal likelihood involves the inversion of the matrix K_{z,z}, an inversion whose complexity grows as O(D³N³). We use a sparse approximation based
on variational methods presented in [2], a generalization of [11] for multiple output Gaussian
processes. The approximations establish a lower bound on the marginal likelihood and reduce the computational complexity to O(DNK²), where K is a reduced number of points used to represent u(t).
6 Experimental results
We now show results with artificial data and data recorded from a robot performing a basic set of
actions appearing in table tennis.
6.1 Toy example
Using the model, we generate samples from the GP with covariance function as explained before.
In the first experiment, we sample from a model with D = 2, R = 1 and Q = 3, with switching
points t0 = −1, t1 = 5 and t2 = 12. For the outputs, we have A1 = A2 = 0.1, C1 = 0.4, C2 = 1,
κ1 = 2, κ2 = 3. We restrict the latent forces to have the same length-scale value ℓ0 = ℓ1 = ℓ2 =
1×10⁻³, but change the values of the sensitivity parameters as S1,1 = 10, S2,1 = 1, S1,2 = 10, S2,2 =
5, S1,3 = −10 and S2,3 = 1, where the first subindex refers to the output d and the second subindex
refers to the force in the interval q. In this first experiment, we wanted to show the ability of the
model to detect changes in the sensitivities of the forces, while keeping the length scales equal along
the intervals. We sampled 5 times from the model, with each output having 500 data points, and added
some noise with variance equal to ten percent of the variance of each sampled output. In each of the
five repetitions, we took N = 200 data points for training and the remaining 300 for testing.
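The two reported metrics follow [8]; a short sketch of how they can be computed for Gaussian predictive distributions (per-test-point mean and var), with illustrative names:

import numpy as np

def smse(y_true, y_pred):
    # Standardized MSE: MSE divided by the variance of the test targets [8].
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def msll(y_true, mean, var, train_mean, train_var):
    # Mean standardized log loss [8]: negative log predictive density minus
    # that of a trivial Gaussian fitted to the training targets.
    nlpd = 0.5 * np.log(2 * np.pi * var) + (y_true - mean) ** 2 / (2 * var)
    trivial = (0.5 * np.log(2 * np.pi * train_var)
               + (y_true - train_mean) ** 2 / (2 * train_var))
    return np.mean(nlpd - trivial)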
             Q=1            Q=2            Q=3           Q=4           Q=5
Toy 1 SMSE   76.27±35.63    14.66±11.74    0.30±0.02     0.31±0.03     0.72±0.56
Toy 1 MSLL   −0.98±0.46     −1.79±0.26     −2.90±0.03    −2.87±0.04    −2.55±0.41
Toy 2 SMSE   7.27±6.88      1.08±0.05      1.10±0.05     1.06±0.05     1.10±0.09
Toy 2 MSLL   −1.79±0.28     −2.26±0.02     −2.25±0.02    −2.27±0.03    −2.26±0.06

Table 1: Standardized mean square error (SMSE) and mean standardized log loss (MSLL) using different values
of Q for both toy examples. The figures for the SMSE must be multiplied by 10⁻². See the text for details.
[Figure 4 plots: (a) latent force, toy example 1; (b) output 1, toy example 1; (c) output 2, toy example 1; (d) latent force, toy example 2; (e) output 1, toy example 2; (f) output 3, toy example 2. Axis ticks omitted.]
Figure 4: Mean and two standard deviations for the predictions over the latent force and two of the three outputs
in the test set. Dashed lines indicate the final value of the switching points after optimization. Dots indicate
training data.
Optimization of the hyperparameters (including t1 and t2) is done by maximization of the marginal
likelihood through scaled conjugate gradient. We train models for Q = 1, 2, 3, 4 and 5 and measure
the mean standardized log loss (MSLL) and the standardized mean square error (SMSE) [8] over the
test set for each value of Q. Table 1, first two rows, shows the corresponding average results over
the 5 repetitions together with one standard deviation. Notice that at Q = 3 the model achieves its
best performance for the first time, a performance that is repeated for Q = 4. The SMSE performance
remains approximately equal for values of Q greater than 3. Figures 4(a), 4(b) and 4(c) show the kind
of predictions made by the model for Q = 3.

Figure 3: Data collection was performed using a Barrett WAM robot as haptic input device.
We also generated a different toy example, in which the length-scales of the intervals are different.
For the second toy experiment, we assume D = 3, Q = 2 and switching points t0 = −2 and
t1 = 8. The parameters of the outputs are A1 = A2 = A3 = 0.1, C1 = 2, C2 = 3, C3 = 0.5,
κ1 = 0.4, κ2 = 1, κ3 = 1, and length scales ℓ0 = 1×10⁻³ and ℓ1 = 1. Sensitivities in this case are
S1,1 = 1, S2,1 = 5, S3,1 = 1, S1,2 = 5, S2,2 = 1 and S3,2 = 1. We follow the same evaluation
setup as in toy example 1. Table 1, last two rows, shows the performance again in terms of MSLL
and SMSE. We see that for values of Q > 2, the MSLL and SMSE remain similar. Figures 4(d),
4(e) and 4(f) show the inferred latent force and the predictions made for two of the three outputs.
6.2 Segmentation of human movement data for robot imitation learning
In this section, we evaluate the feasibility of the model for motion segmentation with possible applications in the analysis of human movement data and imitation learning. To do so, we had a human
teacher take the robot by the hand and have him demonstrate striking movements in a cooperative
game of table tennis with another human being as shown in Figure 3. We recorded joint positions,
[Figure 5 plots: (a) log-likelihood vs. number of intervals, Try 1; (b) latent force, Try 1; (c) HR output, Try 1; (d) log-likelihood vs. number of intervals, Try 2; (e) latent force, Try 2; (f) SFE output, Try 2. Axis ticks omitted.]
Figure 5: Employing the switching dynamical LFM model on the human movement data collected as in
Fig.3 leads to plausible segmentations of the demonstrated trajectories. The first row corresponds to the loglikelihood, latent force and one of four outputs for trial one. Second row shows the same quantities for trial two.
Crosses in the bottom of the figure refer to the number of points used for the approximation of the Gaussian
process, in this case K = 50.
angular velocities, and angular acceleration of the robot for two independent trials of the same table tennis exercise. For each trial, we selected four output positions and trained several models for
different values of Q, including the latent force model without switches (Q = 1). We evaluate the
quality of the segmentation in terms of the log-likelihood. Figure 5 shows the log-likelihood, the
inferred latent force and one output for trial one (first row) and the corresponding quantities for trial
two (second row). Figures 5(a) and 5(d) show peaks for the log-likelihood at Q = 9 for trial one and
Q = 10 for trial two. As the movement has few gaps and the data has several output dimensions,
it is hard even for a human being to detect the transitions between movements (unless it is visualized as in a movie). Nevertheless, the model found a maximum for the log-likelihood at the correct
instances in time where the human transits between two movements. At these instances the human
usually reacts due to an external stimulus with a large jerk causing a jump in the forces. As a result,
we obtained not only a segmentation of the movement but also a generative model for table tennis
striking movements.
7 Conclusion

We have introduced a new probabilistic model that develops the latent force modeling framework
with switched Gaussian processes. This allows for discontinuities in the latent space of forces. We
have shown the application of the model in toy examples and on a real world robot problem, in
which we were interested in finding and representing striking movements. Other applications of the
switching latent force model that we envisage include modeling human motion capture data using
the second order ODE, and a first order ODE for modeling of complex circuits in biological networks.
To find the order of the model, that is, the number of intervals, we have used cross-validation. Future
work includes proposing a less expensive model selection criterion.
Acknowledgments
MA and NL are very grateful for support from a Google Research Award ?Mechanistically Inspired
Convolution Processes for Learning? and the EPSRC Grant No EP/F005687/1 ?Gaussian Processes
for Systems Identification with Applications in Systems Biology?. MA also thanks PASCAL2 Internal Visiting Programme. We also thank to three anonymous reviewers for their helpful comments.
References
[1] Mauricio A. Álvarez, David Luengo, and Neil D. Lawrence. Latent force models. In David van Dyk and Max Welling, editors, Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, pages 9–16, Clearwater Beach, Florida, 16–18 April 2009. JMLR W&CP 5.
[2] Mauricio A. Álvarez, David Luengo, Michalis K. Titsias, and Neil D. Lawrence. Efficient multioutput Gaussian processes through variational inducing kernels. In JMLR: W&CP 9, pages 25–32, 2010.
[3] Roman Garnett, Michael A. Osborne, Steven Reece, Alex Rogers, and Stephen J. Roberts. Sequential Bayesian prediction in the presence of changepoints and faults. The Computer Journal, 2010. Advance Access published February 1, 2010.
[4] Roman Garnett, Michael A. Osborne, and Stephen J. Roberts. Sequential Bayesian prediction in the presence of changepoints. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 345–352, 2009.
[5] Antti Honkela, Charles Girardot, E. Hilary Gustafson, Ya-Hsin Liu, Eileen E. M. Furlong, Neil D. Lawrence, and Magnus Rattray. Model-based method for transcription factor target identification with limited data. PNAS, 107(17):7793–7798, 2010.
[6] A. Ijspeert, J. Nakanishi, and S. Schaal. Learning attractor landscapes for learning motor primitives. In Advances in Neural Information Processing Systems 15, 2003.
[7] T. Oyama, Y. Uno, and S. Hosoe. Analysis of variability of human reaching movements based on the similarity preservation of arm trajectories. In International Conference on Neural Information Processing (ICONIP), pages 923–932, 2007.
[8] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, 2006.
[9] Yunus Saatçi, Ryan Turner, and Carl Edward Rasmussen. Gaussian process change point models. In Proceedings of the 27th Annual International Conference on Machine Learning, pages 927–934, 2010.
[10] E. Solak, R. Murray-Smith, W. E. Leithead, D. J. Leith, and C. E. Rasmussen. Derivative observations in Gaussian process models of dynamic systems. In Sue Becker, Sebastian Thrun, and Klaus Obermayer, editors, NIPS, volume 15, pages 1033–1040, Cambridge, MA, 2003. MIT Press.
[11] Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In JMLR: W&CP 5, pages 567–574, 2009.
3,316 | 4,002 | Evidence-Specific Structures for Rich Tractable CRFs
Anton Chechetka
Carnegie Mellon University
antonc@cs.cmu.edu

Carlos Guestrin
Carnegie Mellon University
guestrin@cs.cmu.edu
Abstract
We present a simple and effective approach to learning tractable conditional random fields with structure that depends on the evidence. Our approach retains the
advantages of tractable discriminative models, namely efficient exact inference
and arbitrarily accurate parameter learning in polynomial time. At the same time,
our algorithm does not suffer a large expressive power penalty inherent to fixed
tractable structures. On real-life relational datasets, our approach matches or exceeds state of the art accuracy of the dense models, and at the same time provides
an order of magnitude speedup.
1 Introduction
Conditional random fields (CRFs, [1]) have been successful in modeling complex systems, with
applications from speech tagging [1] to heart motion abnormality detection [2]. A key advantage
of CRFs over other probabilistic graphical models (PGMs, [3]) stems from the observation that in
almost all applications, some variables are unknown at test time (we will denote such variables X ),
but others, called the evidence E, are known at test time. While other PGM formulations model the
joint distribution P (X , E), CRFs directly model conditional distributions P (X | E).
The discriminative approach adopted by CRFs allows for better approximation quality of the learned
conditional distribution P (X | E), because the representational power of the model is not ?wasted?
on modeling P (E). However, the better approximation comes at a cost of increased computational
complexity for both structure [4] and parameter learning [1] as compared to generative models. In
particular, unlike Bayesian networks or junction trees [3], (a) the likelihood of a CRF structure does
not decompose into a combination of small subcomponent scores, making many existing approaches
to structure learning inapplicable, and, (b) instead of computing optimal parameters in closed form,
with CRFs one has to resort to gradient-based methods. Moreover, computing the gradient of the
log-likelihood with respect to the CRF parameters requires inference in the current model to be done
for every training datapoint. For high-treewidth models, even approximate inference is NP-hard [5].
To overcome the extra computational challenges posed by the conditional random fields, practitioners usually resort to several of the following approximations throughout the process:
• CRF structure is specified by hand, leading to suboptimal structures.
• Approximate inference during parameter learning results in suboptimal parameters.
• Approximate inference at test time results in suboptimal results [5].
• Replacing the CRF conditional likelihood objective with a more tractable one (e.g. [6]) results in suboptimal models (both in terms of learned structure and parameters).
Not only do all of the above approximation techniques lack any quality guarantees, but also combining several of them in the same system serves to further compound the errors.
A well-known way to avoid approximations in CRF parameter learning is to restrict the models to
have low treewidth, where the dependencies between the variables X have a tree-like structure. For
such models, parameter learning and inference can be done exactly¹; only structure learning involves
approximations. The important dependencies between the variables X , however, usually cannot all
be captured with a single tree-like structure, so low-treewidth CRFs are rarely used in practice.
In this paper, we argue that it is the commitment to a single CRF structure irrespective of the evidence
E that makes tree-like CRFs an inferior option. We show that tree CRFs with evidence-dependent
structure, learned by a generalization of the Chow-Liu algorithm [7], (a) yield results equal to or
significantly better than densely-connected CRFs on real-life datasets, and (b) are an order of magnitude faster than the dense models. More specifically, our contributions are as follows:
• Formally define CRFs with evidence-specific (ES) structure.
• Observe that, given the ES structures, CRF feature weights can be learned exactly.
• Generalize the Chow-Liu algorithm [7] to learn evidence-specific structures for tree CRFs.
• Generalize tree CRFs with evidence-specific structure (ESS-CRFs) to the relational setting.
• Demonstrate empirically the superior performance of ESS-CRFs over densely connected models in terms of both accuracy and runtime on real-life relational models.

2 Conditional random fields
A conditional random field with pairwise features² defines a conditional distribution P(X | E) as
\[ P(X \mid E) = Z^{-1}(E)\, \exp\Big( \sum_{(i,j) \in T} \sum_k w_{ijk} f_{ijk}(X_i, X_j, E) \Big), \tag{1} \]
where functions f are called features, w are feature weights, Z(E) is the normalization constant (which depends on the evidence), and T is the set of edges of the model. To reflect the fact that P(X | E) depends on the weights w, we will write P(X | E, w). To apply a CRF model, one first defines the set of features f. A typical feature may mean that two pixels i and j in the same image segment tend to have similar colors: f(X_i, X_j, E) ≡ I(X_i = X_j, |color_i − color_j| < ε), where I(·) is an indicator function. Given the features f and training data D that consists of fully observed assignments to X and E, the optimal feature weights w* maximize the conditional log-likelihood (CLLH) of the data:
\[ w^* = \arg\max_w \sum_{(X,E) \in D} \log P(X \mid E, w) = \arg\max_w \sum_{(X,E) \in D} \Big( \sum_{(i,j) \in T,\, k} w_{ijk} f_{ijk}(X_i, X_j, E) - \log Z(E, w) \Big). \tag{2} \]
The problem (2) does not have a closed form solution, but has a unique global optimum that can be
found using any gradient-based optimization technique because of the following fact [1]:
Fact 1 Conditional log-likelihood (2), abbreviated CLLH, is concave in w. Moreover,
\[ \frac{\partial \log P(X \mid E, w)}{\partial w_{ijk}} = f_{ijk}(X_i, X_j, E) - \mathbb{E}_{P(X_i, X_j \mid E, w)}\big[f_{ijk}(X_i, X_j, E)\big], \tag{3} \]
where \(\mathbb{E}_P\) denotes expectation with respect to a distribution P.
Convexity of the negative CLLH objective and the closed-form expression for the gradient let us use convex optimization techniques such as L-BFGS [9] to find the unique optimum w*. However, the gradient (3) contains the conditional distribution over X_i, X_j, so computing (3) requires inference in the model for every datapoint. Time complexity of exact inference is exponential in the treewidth of the graph defined by edges T [5]. Therefore, exact evaluation of the CLLH objective (2) and
gradient (3) and exact inference at test time are all only feasible for models with low-treewidth T.
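As a concrete illustration, the per-edge gradient of Eq. (3) can be sketched as follows; the interface (feature functions and a marginals table produced by an external exact-inference routine) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def cllh_gradient_edge(features, x_i, x_j, E, edge_marginals):
    """Eq. (3) for one edge and one datapoint: observed feature value minus
    its expectation under the current model.  `features` is a list of
    functions f_k(x_i, x_j, E); `edge_marginals` maps each joint state (a, b)
    to P(Xi = a, Xj = b | E, w), assumed to come from an exact-inference
    routine run on the (low-treewidth) model.  Illustrative interface."""
    grad = np.zeros(len(features))
    for k, f in enumerate(features):
        observed = f(x_i, x_j, E)
        expected = sum(p * f(a, b, E) for (a, b), p in edge_marginals.items())
        grad[k] = observed - expected
    return grad
```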
Unfortunately, restricting the space of models to only those with low treewidth severely decreases
the expressive power of CRFs. Complex dependencies of real-life distributions usually cannot be
adequately captured by a single tree-like structure, so most of the models used in practice have high
treewidth, making exact inference infeasible. Instead, approximate inference techniques, such as
¹ Here and in the rest of the paper, by "exact parameter learning" we will mean "with arbitrary accuracy in polynomial time" using standard convex optimization techniques. This is in contrast to the closed-form exact parameter learning possible for generative low-treewidth models representing the joint distribution P(X, E).
² In this paper, we only consider the case of pairwise dependencies, that is, features f that depend on at most two variables from X (but may depend on arbitrarily many variables from E). Our approach can in principle be extended to CRFs with higher-order dependencies, but the Chow-Liu algorithm for structure learning will have to be replaced with an algorithm that learns low-treewidth junction trees, such as [8].
belief propagation [10, 11] or sampling [12] are used for parameter learning and at test time. Approximate inference is NP-hard [5], so approximate inference algorithms have very few result quality
guarantees. Greater expressive power of the models is thus obtained at the expense of worse quality
of estimated parameters and inference. Here, we show an alternative way to increase expressive
power of tree-like structured CRFs without sacrificing optimal weights learning and exact inference
at test time. In practice, our approach is much better suited for relational than for propositional
settings, because of the much higher parameter dimensionality in the propositional case. However, we
first present in detail the propositional case theory to better convey the key high-level ideas.
3 Evidence-specific structure for CRFs
Observe that, given a particular evidence value E, the set of edges T in the CRF formulation (1) actually can be viewed as a supergraph of the conditional model over X. An edge (r, s) ∈ T can be "disabled" in the following sense: if for E = E the edge features are identically zero, f_{rsk}(X_r, X_s, E) ≡ 0 regardless of the values of X_r and X_s, then
\[ \sum_{(i,j) \in T} \sum_k w_{ijk} f_{ijk}(X_i, X_j, E) \equiv \sum_{(i,j) \in T \setminus (r,s)} \sum_k w_{ijk} f_{ijk}(X_i, X_j, E), \]
and so for evidence value E, the model (1) with edges T is equivalent to (1) with (r, s) removed from T. The following notion of effective CRF structure captures the extra sparsity:
Definition 2 Given the CRF model (1) and evidence value E = E, the effective conditional model structure T(E = E) is the set of edges corresponding to features that are not identically zero:
\[ T(E = E) = \{(i, j) \mid (i, j) \in T,\ \exists k, x_i, x_j\ \text{s.t.}\ f_{ijk}(x_i, x_j, E) \neq 0\}. \]
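For intuition, the effective structure of Definition 2 can be computed by brute force over candidate states; a minimal sketch, where the `features` and `states` containers are illustrative assumptions, not the paper's code:

```python
def effective_structure(edges, features, states, E):
    """Definition 2: keep edge (i, j) only if some feature f_ijk is nonzero
    for some joint assignment (x_i, x_j) given the evidence value E.
    `features[(i, j)]` lists the feature functions of the edge; `states[i]`
    lists the candidate values of variable X_i.  Illustrative interface."""
    return [(i, j) for (i, j) in edges
            if any(f(xi, xj, E) != 0
                   for f in features[(i, j)]
                   for xi in states[i]
                   for xj in states[j])]
```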
If T (E) has low treewidth for all values of E, inference and parameter learning using the effective
structure are tractable, even if a priori structure T has high treewidth. Unfortunately, in practice the
treewidth of T (E) is usually not much smaller than the treewidth of T. Low-treewidth effective structures are rarely used, because treewidth is a global property of the graph (even computing treewidth
is NP-complete [13]), while feature design is a local process. In fact, it is the ability to learn optimal
weights for a set of mutually correlated features without first understanding the inter-feature dependencies that is the key advantage of CRFs over other PGM formulations. Achieving low treewidth
for the effective structures requires elaborate feature design, making model construction very difficult. Instead, in this work, we separate construction of low-treewidth effective structures from
feature design and weight learning, to combine the advantages of exact inference and discriminative
weights learning, high expressive power of high-treewidth models, and local feature design.
Observe that the CRF definition (1) can be written equivalently as
\[ P(X \mid E, w) = Z^{-1}(E, w)\, \exp\Big\{ \sum_{ij} \sum_k w_{ijk} \cdot \big( I((i,j) \in T) \cdot f_{ijk}(X_i, X_j, E) \big) \Big\}. \tag{4} \]
Even though (1) and (4) are equivalent, in (4) the structure of the model is explicitly encoded as a multiplicative component of the features. In addition to the feature values f, the effective structure of the model is now controlled by the indicator functions I(·). These indicator functions provide us with a way to control the treewidth of the effective structures independently of the features.
Traditionally, it has been assumed that the a priori structure T of a CRF model is fixed. However,
such an assumption is not necessary. In this work, we assume that the structure is determined by the
evidence E and some parameters u : T = T (E, u). The resulting model, which we call a CRF with
evidence-specific structure (ESS-CRF), defines a conditional distribution P(X | E, w, u) as follows:
\[ P(X \mid E, w, u) = Z^{-1}(E, w, u)\, \exp\Big\{ \sum_{ij} \sum_k w_{ijk} \big( I((i,j) \in T(E, u)) \cdot f_{ijk}(X_i, X_j, E) \big) \Big\}. \tag{5} \]
The dependence of the structure T on E and u can have different forms. We will provide one example
of an algorithm for constructing evidence-specific CRF structures shortly.
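As a small illustration of (5), the unnormalized log-score of an assignment collects contributions only from the edges that the structure algorithm kept; the containers below are an illustrative sketch, not the authors' code.

```python
def ess_crf_log_score(weights, features, structure, x, E):
    """Unnormalized log-score of Eq. (5): each candidate edge's features are
    gated by the indicator I((i, j) in T(E, u)), so edges outside the
    evidence-specific structure contribute nothing.  Illustrative interface:
    `weights[(i, j)][k]` and `features[(i, j)][k]` per candidate edge."""
    active = set(structure)                    # T(E, u), e.g. from Alg. 3
    score = 0.0
    for (i, j), feats in features.items():
        if (i, j) in active:
            for k, f in enumerate(feats):
                score += weights[(i, j)][k] * f(x[i], x[j], E)
    return score
```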
ESS-CRFs have an important advantage over the traditional parametrization: in (5) the parameters
u that determine the model structure are decoupled from the feature weights w. As a result, the
problem of structure learning (i.e., optimizing u) can be decoupled from feature selection (choosing
f ) and feature weights learning (optimizing w). Such a decoupling makes it much easier to guarantee
that the effective structure of the model has low treewidth by relegating all the necessary global
computation to the structure construction algorithm T = T(E, u). For any fixed choice of a structure construction algorithm T(·, ·) and structure parameters u, as long as T(·, ·) is guaranteed to return low-treewidth structures, learning optimal feature weights w* and inference at test time can be done exactly, because Fact 1 directly extends to feature weights w in ESS-CRFs:
Algorithm 1: Standard CRF approach
1  Define features f_ijk(X_i, X_j, E), implicitly defining the high-treewidth CRF structure T.
2  Optimize weights w to maximize conditional LLH (2) of the training data.
   Use approximate inference to compute CLLH objective (2) and gradient (3).
3  foreach E in test data do
4    Use conditional model (1) to define the conditional distribution P(X | E, w).
     Use approximate inference to compute the marginals or the most likely assignment to X.
Algorithm 2: CRF with evidence-specific structures approach
1  Define features f_ijk(X_i, X_j, E).
   Choose a structure learning algorithm T(E, u) that is guaranteed to return low-treewidth structures.
   Define or learn from data the parameters u for the structure construction algorithm T(·, ·).
2  Optimize weights w to maximize conditional LLH log P(X | E, u, w) of the training data.
   Use exact inference to compute CLLH objective (2) and gradient (3).
3  foreach E in test data do
4    Use conditional model (5) to define the conditional distribution P(X | E, w, u).
     Use exact inference to compute the marginals or the most likely assignment to X.
Observation 3 Conditional log-likelihood log P(X | E, w, u) of ESS-CRFs (5) is concave in w. Also,
\[ \frac{\partial \log P(X \mid E, w, u)}{\partial w_{ijk}} = I((i,j) \in T(E, u)) \Big( f_{ijk}(X_i, X_j, E) - \mathbb{E}_{P(X_i, X_j \mid E, w, u)}\big[f_{ijk}(X_i, X_j, E)\big] \Big). \tag{6} \]
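In code, the ESS-CRF gradient of (6) is simply the ordinary CRF gradient of Eq. (3) masked by the evidence-specific tree; a sketch, where the `plain_gradients` map is an assumed interface:

```python
import numpy as np

def ess_crf_gradient(tree_edges, plain_gradients):
    """Eq. (6): zero out the Eq. (3) gradient of every edge that the
    evidence-specific structure T(E, u) did not retain.  `plain_gradients`
    maps each candidate edge (i, j) to its f - E[f] vector (illustrative)."""
    active = set(tree_edges)
    return {edge: (g if edge in active else np.zeros_like(g))
            for edge, g in plain_gradients.items()}
```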
To summarize, instead of the standard CRF workflow (Alg. 1), we propose ESS-CRFs (Alg. 2).
The standard approach has approximations (with little, if any, guarantees on the result quality) at
every stage (lines 1,2,4), while in our ESS-CRF approach only structure selection (line 1) involves
an approximation. Next, we present a simple but effective algorithm for learning evidence-specific
tree structures, based on an existing algorithm for generative models. Many other existing structure
learning algorithms can be similarly adapted to learn evidence-specific models of higher treewidth.
4 Conditional Chow-Liu algorithm for tractable evidence-specific structures
Learning the most likely PGM structure from data is in most cases intractable. Even for Markov
random fields (MRFs), which are a special case of CRFs with no evidence, learning the most likely
structure is NP-hard (cf. [8]). However, for one very simple class of MRFs, namely tree-structured
models, an efficient algorithm exists [7] that finds the most likely structure. In this section, we adapt
this algorithm (called the Chow-Liu algorithm) to learning evidence-specific structures for CRFs.
Pairwise Markov random fields are graphical models that define a distribution over X as a normalized product of low-dimensional potentials: \(P(X) = Z^{-1} \prod_{(i,j) \in T} \psi(X_i, X_j)\). Notice that pairwise MRFs are a special case of CRFs with \(f_{ij} = \log \psi_{ij}\), \(w_{ij} = 1\) and \(E = \emptyset\). Unlike tree CRFs, however, the likelihood of tree MRF structures decomposes into contributions of individual edges:
\[ LLH(T) = \sum_{(i,j) \in T} I(X_i, X_j) - \sum_{X_i \in X} H(X_i), \tag{7} \]
where I(·, ·) is the mutual information and H(·) is the entropy. Therefore, as shown in [7], the most likely structure can be obtained by taking the maximum spanning tree of a fully connected graph, where the weight of an edge ij is I(X_i, X_j). Pairwise marginals have relatively low dimensionality, so the marginals and corresponding mutual informations can be estimated from data accurately, which makes the Chow-Liu algorithm a useful one for learning tree-structured models.
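For reference, a minimal sketch of the Chow-Liu construction, assuming discrete data in a NumPy array and using networkx for the maximum spanning tree; production implementations would add count smoothing.

```python
import numpy as np
import networkx as nx

def mutual_information(xi, xj):
    """Plug-in estimate of I(Xi; Xj) from empirical counts (no smoothing)."""
    n = len(xi)
    joint = {}
    for a, b in zip(xi, xj):
        joint[(a, b)] = joint.get((a, b), 0) + 1
    pi = {a: np.mean(xi == a) for a in set(xi)}
    pj = {b: np.mean(xj == b) for b in set(xj)}
    return sum((c / n) * np.log((c / n) / (pi[a] * pj[b]))
               for (a, b), c in joint.items())

def chow_liu_tree(data):
    """Chow-Liu algorithm [7]: maximum spanning tree of the fully connected
    graph whose edge (i, j) is weighted by the empirical mutual information
    I(Xi, Xj).  `data` is a (samples x variables) array of discrete values."""
    n_vars = data.shape[1]
    g = nx.Graph()
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            g.add_edge(i, j, weight=mutual_information(data[:, i], data[:, j]))
    return list(nx.maximum_spanning_tree(g).edges())
```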
Given the concrete value E of the evidence E, one can write down the conditional version of the tree structure likelihood (7) for that particular value of evidence:
\[ LLH(T \mid E) = \sum_{(i,j) \in T} I_{P(\cdot \mid E)}(X_i, X_j) - \sum_{X_i \in X} H_{P(\cdot \mid E)}(X_i). \tag{8} \]
If exact conditional distributions P(X_i, X_j | E) were available, then the same Chow-Liu algorithm would find the optimal conditional structure. Unfortunately, estimating conditional distributions P(X_i, X_j | E) with fixed accuracy in general requires an amount of data exponential in the dimensionality of E [14]. However, we can still plug in approximate conditionals P̂(· | E) learned from
Algorithm 3: Conditional Chow-Liu algorithm for learning evidence-specific tree structures
   // Parameter learning stage. u* is found e.g. using L-BFGS with P̂(·) as in (9)
1  foreach X_i, X_j ∈ X do  u*_{ij} ← argmax_{u_{ij}} Σ_{(X,E) ∈ D_train} log P̂(X_i, X_j | E, u_{ij})
   // Constructing structures at test time
2  foreach E ∈ D_test do
3    foreach X_i, X_j ∈ X do  set edge weight r_{ij}(E, u*_{ij}) ← I_{P̂(X_i, X_j | E, u*_{ij})}(X_i, X_j)
4    T(E, u*) ← maximum_spanning_tree(r(E, u*))
Algorithm 4: Relational ESS-CRF algorithm - parameter learning stage
1  Learn structure parameters u* using the conditional Chow-Liu algorithm (Alg. 3)
2  Let P(X | E, R, w, u*) be defined as in (11)
3  w* ← argmax_w log P(X | E, R, w, u*)   // Find e.g. with L-BFGS using the gradient (12)
data using any standard density estimation technique³. In particular, with the same features f_{ijk} that are used in the CRF model, one can train a logistic regression model for P̂(· | E):
\[ \hat{P}(X_i, X_j \mid E, u_{ij}) = Z_{ij}^{-1}(E, u_{ij})\, \exp\Big\{ \sum_k u_{ijk} f_{ijk}(X_i, X_j, E) \Big\}. \tag{9} \]
Essentially, a logistic regression model is a small CRF over only two variables. Exact optimal
weights u* can be found efficiently using standard convex optimization techniques.
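A minimal sketch of fitting (9) for one edge, assuming the features are pre-evaluated for every candidate joint state and letting L-BFGS-B approximate the gradient numerically; the array layout is an illustrative assumption, not the paper's code.

```python
import numpy as np
from scipy.optimize import minimize

def fit_edge_conditional(phi, y):
    """Fit the per-edge logistic model of Eq. (9): P(Xi, Xj | E, u_ij) is
    proportional to exp(sum_k u_ijk f_ijk(Xi, Xj, E)).  `phi[n, s, k]` holds
    feature k evaluated at joint state s of (Xi, Xj) for datapoint n, and
    `y[n]` is the index of the observed joint state.  Illustrative layout."""
    n, s, k = phi.shape

    def neg_llh(u):
        scores = phi @ u                          # (n, s): score of each state
        log_z = np.logaddexp.reduce(scores, axis=1)
        return -(scores[np.arange(n), y] - log_z).sum()

    res = minimize(neg_llh, np.zeros(k), method="L-BFGS-B")
    return res.x                                  # u*_ij
```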
The resulting evidence-specific structure learning algorithm T(E, u) is summarized in Alg. 3. Alg. 3 always returns a tree, and the better the quality of the estimators (9), the better the quality of the resulting structures. Importantly, Alg. 3 is by no means the only choice for the ESS-CRF approach. Other edge scores, e.g. from [4], and edge selection procedures, e.g. [8, 15] for higher-treewidth junction trees, can be used as components in the same way as the Chow-Liu algorithm is used in Alg. 3.
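The test phase of Alg. 3 then reduces to scoring candidate edges by conditional mutual information and taking a maximum spanning tree; a sketch, assuming each learned edge model can return its conditional joint table:

```python
import numpy as np
import networkx as nx

def evidence_specific_tree(edge_models, evidence):
    """Test phase of Alg. 3: weight every candidate edge by the mutual
    information of the conditional joint P(Xi, Xj | E, u*_ij) predicted by
    its learned edge model, then return the maximum spanning tree.
    `edge_models[(i, j)](evidence)` is assumed to return that conditional
    joint probability table as a 2-D array (illustrative interface)."""
    g = nx.Graph()
    for (i, j), model in edge_models.items():
        p = model(evidence)                       # conditional joint table
        pi, pj = p.sum(axis=1), p.sum(axis=0)     # conditional marginals
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = p * np.log(p / np.outer(pi, pj))
        g.add_edge(i, j, weight=np.nansum(terms))
    return list(nx.maximum_spanning_tree(g).edges())
```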
5 Relational CRFs with evidence-specific structure
Traditional (also called propositional) PGMs are not well suited for dealing with relational data,
where every variable is an entity of some type, and entities are related to each other via different types
of links. Usually, there are relatively few entity types and link types. For example, the webpages on
the internet are linked via hyperlinks, and social networks link people via friendship relationships.
Relational data violates the i.i.d. data assumption of traditional PGMs, and huge dimensionalities of
relational datasets preclude learning meaningful propositional models. Instead, several formulations
of relational PGMs have been proposed [16] to work with relational data, including relational CRFs.
The key property of all these formulations is that the model is defined using a few template potentials
defined on the abstract level of variable types and replicated as necessary for concrete entities.
More concretely, in relational CRFs every variable Xi is assigned a type mi out of the set M of
possible types. A binary relation R ∈ R, corresponding to a specific type of link between two variables, specifies the types of its input arguments, a set of features f_k^R(·, ·, E), and feature weights w_k^R. We will write X_i, X_j ∈ inst(R, X) if the types of X_i and X_j match the input types specified by the relation R and there is a link of type R between X_i and X_j in the data (for example,
a hyperlink between two webpages). The conditional distribution P(X | E) is then generalized from the propositional CRF (1) by copying the template potentials for every instance of a relation:
\[ P(X \mid E, R, w) = Z^{-1}(E, w)\, \exp\Big\{ \sum_{R \in \mathcal{R}} \sum_{X_i, X_j \in inst(R, X)} \sum_k w_k^R f_k^R(X_i, X_j, E) \Big\} \tag{10} \]
Observe that the only meaningful difference of the relational CRF (10) from the propositional formulation (1) is that the former shares the same parameters between different edges. By accounting for
parameter sharing, it is straightforward to adapt our ESS-CRF formulation to the relational setting.
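To illustrate the template sharing in (10), the unnormalized log-score copies one weight vector per relation onto all of its edge instances; the containers below are illustrative assumptions.

```python
def relational_log_score(relations, instances, weights, feats, x, E):
    """Unnormalized log-score of the relational CRF (10): a single template
    weight vector w^R per relation R is copied onto every instantiated edge
    of that relation.  `instances[R]` lists the (i, j) pairs linked by R and
    `feats[R]` the template feature functions (illustrative interface)."""
    score = 0.0
    for R in relations:
        for (i, j) in instances[R]:
            for k, f in enumerate(feats[R]):
                score += weights[R][k] * f(x[i], x[j], E)
    return score
```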
We define the relational ESS-CRF conditional distribution as
\[ P(X \mid E, R, w, u) \propto \exp\Big\{ \sum_{R \in \mathcal{R}} \sum_{X_i, X_j \in inst(R, X)} I((i,j) \in T(E, u)) \sum_k w_k^R f_k^R(X_i, X_j, E) \Big\} \tag{11} \]
³ Notice that the approximation error from P̂(·) is the only source of approximation in our entire approach.
[Figure 1 plots: the left and middle panels show test LLH vs. train set size on TEMPERATURE and TRAFFIC for Chow-Liu CRF, ESS-CRF, and ESS-CRF + structure reg.; the right panel shows WebKB classification error for SVM, RMN, ESS-CRF, and M3N.]
Figure 1: Left: test LLH for TEMPERATURE. Middle: TRAFFIC. Right: classification errors for WebKB.
Given a structure learning algorithm T(·, ·) that is guaranteed to return low-treewidth structures, one can learn optimal feature weights w* and perform inference at test time exactly:
Observation 4 Relational ESS-CRF log-likelihood is concave with respect to w. Moreover,
\[ \frac{\partial \log P(X \mid E, R, w, u)}{\partial w_k^R} = \sum_{X_i, X_j \in inst(R, X)} I((i,j) \in T(E, u)) \Big( f_k^R(X_i, X_j, E) - \mathbb{E}_{P(\cdot \mid E, R, w, u)}\big[f_k^R(X_i, X_j, E)\big] \Big). \tag{12} \]
The conditional Chow-Liu algorithm (Alg. 3) can also be extended to the relational setting by using templated logistic regression weights for estimating edge conditionals. The resulting algorithm is shown as Alg. 4. Observe that the test phase of Alg. 4 is exactly the same as for Alg. 3. In the relational setting, one only needs to learn O(|R|) parameters, regardless of the dataset size, for both structure selection and feature weights, as opposed to O(|X|²) parameters for the propositional case. Thus, relational ESS-CRFs are typically much less prone to overfitting than propositional ones.
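In code, the shared template weights mean that the Eq. (12) gradient for a relation accumulates over its retained edge instances; a sketch with illustrative containers:

```python
import numpy as np

def relational_gradient(instances_R, tree_edges, obs_feats, exp_feats, n_feats):
    """Eq. (12) for one relation R: because the template weights w^R are
    shared, gradients f - E[f] accumulate over every instantiated edge of R
    that the evidence-specific tree retained.  `obs_feats[e]` is the observed
    feature vector of instance e and `exp_feats[e]` its model expectation;
    all names here are illustrative assumptions."""
    grad = np.zeros(n_feats)
    active = set(tree_edges)
    for e in instances_R:              # e = (i, j), an instantiated edge of R
        if e in active:
            grad += obs_feats[e] - exp_feats[e]
    return grad
```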
6 Experiments
We have tested the ESS-CRF approach on both propositional and relational data. With the large number of parameters needed for the propositional case (O(|X|²)), our approach is only practical for cases of abundant data. So our experiments with propositional data serve only to prove the concept, verifying that ESS-CRF can successfully learn a model better than a single-tree baseline. In contrast to the propositional settings, in the relational cases the relatively low parameter space dimensionality (O(|R|²)) almost eliminates the overfitting problem. As a result, on relational datasets ESS-CRF is a very attractive approach in practice. Our experiments show ESS-CRFs comfortably outperforming state-of-the-art high-treewidth discriminative models on several real-life relational datasets.
6.1 Propositional models
We compare ESS-CRFs with fixed tree CRFs, where the tree structure is learned by the Chow-Liu algorithm using P(X). We used TEMPERATURE sensor network data [17] (52 discretized variables) and San Francisco TRAFFIC data [18] (we selected 32 variables). In both cases, 5 variables were used as evidence E and the rest as unknowns X. The results are in Fig. 1. We have found it useful to regularize the conditional Chow-Liu algorithm (Alg. 3) by only choosing at test time from the edges that have been selected often enough during training. In Fig. 1 we plot results for both regularized (red) and unregularized (blue) versions. One can see that in the limit of plentiful data ESS-CRF does indeed outperform the fixed tree baseline. However, because the space of available models is much larger for ESS-CRF, overfitting becomes an important issue and regularization is important.
6.2 Relational models
Face recognition. We evaluate ESS-CRFs on two relational models. The first model, called FACES,
aims to improve face recognition in collections of related images using information about similarity
between different faces in addition to the standard single-face features. The key idea is that whenever
two people in different images look similar, they are more likely to be the same person. Our model
has a variable Xi , denoting the label, for every face blob. Pairwise features f (Xi , Xj , E), based
on blob color similarity, indicate how close two faces are in appearance. Single-variable features
f (Xi , E) encode information such as the output of an off-the-shelf standalone face classifier or face
location within the image (see [19] for details). The model is used in a semi-supervised way: at test
time, a PGM is instantiated jointly over the train and test entities, values of the train entities are fixed
to the ground truth, and inference finds the (approximately) most likely labels for the test entities.
[Figure 2 plots: the top row shows classification accuracy vs. inference time (seconds) on FACES 1-3 for ESS-CRF, MLN sum/max, and MLN+ sum/max; the bottom row shows time to convergence for each method, split into parameter learning and inference.]
Figure 2: Results for FACES datasets. Top: evolution of classification accuracy as inference progresses over
time. Stars show the moment when ESS-CRF finishes running. Horizontal dashed line indicates resulting
accuracy. For FACES 3, sum-product and max-product gave the same accuracy. Bottom: time to convergence.
We compare ESS-CRFs with a dense relational PGM encoded by a Markov logic network (MLN, [20]) using the same features. We used a state-of-the-art MLN implementation in the Alchemy package [21] with the MC-SAT sampling algorithm for discriminative parameter learning, and belief propagation [22] for inference. For the MLN, we had to threshold the pairwise features indicating the likelihood of label agreement and set those under the threshold to 0 to prevent (a) oversmoothing and (b) very long inference times. Also, to prevent oversmoothing by the MLN, we have found it useful to scale down the pairwise feature weights learned during training, thus weakening the smoothing effect of any single edge in the model⁴. We denote models with weights adjusted in this way as MLN+. No thresholding or weight adjustment was done for ESS-CRFs.
Figure 2 shows the results on three separate datasets: FACES 1 with 1720 images, 4 unique people and 100 training images in every fold; FACES 2 with 245 images, 9 unique people and 50 training images; and FACES 3 with 352 images, 24 unique people and 70 training images. We tried both sum-product and max-product BP for inference, denoted as sum and max correspondingly in Fig. 2. For ESS-CRF the choice made no difference. One can see that (a) the ESS-CRF model provides superior (FACES 2 and 3) or equal (FACES 1) accuracy to the dense MLN model, even with extra heuristic weight tweaking for the MLN, and (b) ESS-CRF is more than an order of magnitude faster. One can see that for the FACES model, ESS-CRF is clearly superior to the high-treewidth alternative.
Hypertext data. For WebKB data (see [23] for details), the task is to label webpages from four
computer science departments as course, faculty, student, project, or other, given
their text and link structure. We compare ESS-CRFs to high-treewidth relational Markov networks
(RMNs, [23]), max-margin Markov networks (M3Ns, [24]) and a standalone SVM classifier. All
the relational PGMs use the same single-variable features encoding the webpage text, and pairwise
features encoding the link structure. The baseline SVM classifier only uses single-variable features.
RMNs and ESS-CRFs are trained to maximize the conditional likelihood of the labels, while M3Ns
maximize the margin in likelihood between the correct assignment and all of the incorrect ones,
explicitly targeting the classification. The results are in Fig. 1. Observe that ESS-CRF matches the
accuracy of high-treewidth RMNs, again showing that the smaller expressive power of tree models
can be fully compensated by exact parameter learning and inference. ESS-CRF is much faster than
the RMN, taking only 50 sec. to train and 0.3 sec. to test on a single core of a 2.7GHz Opteron
CPU. RMN and M3N models take about 1500 sec. each to train on a 700MHz Pentium III. Even
accounting for the CPU speed difference, the speedup is significant. ESS-CRF does not achieve the
accuracy of M3Ns, which use a different objective more directly related to the classification problem
as opposed to density estimation. Still, the RMN results indicate that it may be possible to match the
M3N accuracy with much faster tractable ESS models by replacing the CRF conditional likelihood
objective with the max-margin objective, which is an important direction of future work.
⁴ Because the number of pairwise relations in the model grows quadratically with the number of variables, the "per-variable force of smoothing" grows with the dataset size, hence the need to adjust.
7 Related work and conclusions
Related work. Two cornerstones of our ESS-CRF approach, namely using models that become
more sparse when evidence is instantiated, and using multiple tractable models to avoid restrictions
on the expressive power inherent to low-treewidth models, have been discussed in the existing literature. First, context-specific independence (CSI, [25]) has been long used both for speeding up
inference [25] and regularizing the model parameters [26]. However, so far CSI has been treated
as a local property of the model, which made reasoning about the resulting treewidth of evidence-specific models impossible. Thus, the full potential of exact inference for models with CSI remained
unused. Our work is a step towards fully exploiting that potential. Multiple tractable models, such
as trees, are widely used as components of mixtures (e.g. [27]), including mixtures of all possible
trees [28], to approximate distributions with rich inherent structure. Unlike the mixture models, our
approach of selecting a single structure for any given evidence value has the advantage of allowing
for efficient exact decoding of the most probable assignment to the unknowns X using the Viterbi
algorithm [29]. Both for the mixture models and our approach, joint optimization of the structure
and weights (u and w in our notation) is infeasible due to many local optima of the objective. Our
one-shot structure learning algorithm, as we empirically demonstrated, works well in practice. It is also much faster than expectation maximization [30], the standard way to train mixture models.
Learning the CRF structure in general is NP-hard, which follows from the hardness results for the generative models (cf. [8]). Moreover, CRF structure learning is further complicated by the fact that the CRF structure likelihood does not decompose into scores of local graph components, as do
scores for some generative models [3]. Existing work on CRF structure learning thus provides only
local guarantees. In practice, the hardness of CRF structure learning leads to high popularity of
heuristics: chain and skip-chain [32] structures are often used, as well as grid-like structures. All
the approaches that do learn structure from data can be broadly divided into three categories. First,
the CRF structure can be defined via the sparsity pattern of the feature weights, so one can use L1
regularization penalty to achieve sparsity during weight learning [2]. The second type of approaches
greedily adds the features to the CRF model so as to maximize the immediate improvement in the
(approximate) model likelihood (e.g. [31]). Finally, one can try to approximate the CRF structure
score as a combination of local scores [15, 4] and use an algorithm for learning generative structures
(where the score actually decomposes). ESS-CRF also falls in this category of approaches. Although
there are some negative theoretical results about learnability of even the simplest CRF structures
using local scores [4], such approaches often work well in practice [15].
Learning the weights is straightforward for tractable CRFs, because the log-likelihood is concave [1]
and the gradient (3) can be used with mature convex optimization techniques. So far, exact weights
learning was mostly used for special hand-crafted structures, such as chains [1, 32], but in this work
we use arbitrary trees. For dense structures, computing the gradient (3) exactly is intractable as
even approximate inference in general models is NP-hard [5]. As a result, approximate inference
techniques, such as belief propagation [10, 11] or Gibbs sampling [12] are employed, without guarantees on the quality of the result. Alternatively, an approximation of the objective (e.g. [6]) is used,
also yielding suboptimal weights. Our experiments showed that exact weight learning for tractable
models gives an advantage in approximation quality and efficiency over dense structures.
Conclusions and future work. To summarize, we have shown that in both propositional and relational settings, tractable CRFs with evidence-specific structures, a class of models with expressive power greater than any single tree-structured model, can be constructed by relying only on the globally optimal results of efficient algorithms (logistic regression, Chow-Liu algorithm, exact inference in tree-structured models, L-BFGS for convex differentiable functions). Whereas the traditional CRF workflow (Alg. 1) involves approximations without any quality guarantees at multiple stages of the process, our approach, ESS-CRF (Alg. 2), has just one source of approximation, namely the conditional structure scores. We have demonstrated on real-life relational datasets that our approach matches or exceeds the accuracy of state-of-the-art dense discriminative models, and at the same time provides more than an order of magnitude speedup. Important future work directions are generalizing ESS-CRF to larger treewidths and max-margin weight learning for better classification.
Acknowledgements. This work is supported by NSF Career IIS-0644225 and ARO MURI
W911NF0710287 and W911NF0810242. We thank Ben Taskar for sharing the WebKB data.
FACES model and data were developed jointly with Denver Dash and Matthai Philipose.
References
[1] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for
segmenting and labeling sequence data. In ICML, 2001.
[2] M. Schmidt, K. Murphy, G. Fung, and R. Rosales. Structure learning in random fields for heart motion
abnormality detection. In CVPR, 2008.
[3] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. 2009.
[4] J. K. Bradley and C. Guestrin. Learning tree conditional random fields. In ICML, to appear, 2010.
[5] D. Roth. On the hardness of approximate reasoning. Artificial Intelligence, 82(1-2), 1996.
[6] C. Sutton and A. McCallum. Piecewise pseudolikelihood for efficient CRF training. In ICML, 2007.
[7] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Trans.
on Inf. Theory, 14(3), 1968.
[8] D. Karger and N. Srebro. Learning Markov networks: Maximum bounded tree-width graphs. In SODA, 2001.
[9] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(3), 1989.
[10] J. Pearl. Probabilistic reasoning in intelligent systems: networks of plausible inference. 1988.
[11] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Generalized belief propagation. In NIPS, 2000.
[12] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of
images. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-6(6), 1984.
[13] S. Arnborg, D. G. Corneil, and A. Proskurowski. Complexity of finding embeddings in a k-tree. SIAM
Journal on Algebraic and Discrete Methods, 8(2), 1987.
[14] W. Härdle, M. Müller, S. Sperlich, and A. Werwatz. Nonparametric and Semiparametric Models. 2004.
[15] D. Shahaf, A. Chechetka, and C. Guestrin. Learning thin junction trees via graph cuts. In AISTATS, 2009.
[16] L. Getoor and B. Taskar. Introduction to Statistical Relational Learning. The MIT Press, 2007.
[17] A. Deshpande, C. Guestrin, S. Madden, J. Hellerstein, and W. Hong. Model-driven data acquisition in
sensor networks. In VLDB, 2004.
[18] A. Krause and C. Guestrin. Near-optimal nonmyopic value of information in graphical models. In UAI, 2005.
[19] A. Chechetka, D. Dash, and M. Philipose. Relational learning for collective classification of entities in
images. In AAAI Workshop on Statistical Relational AI, 2010.
[20] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62(1-2), 2006.
[21] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, and P. Domingos. The alchemy system
for statistical relational AI. Technical report, University of Washington, Seattle, WA., 2009.
[22] J. Gonzalez, Y. Low, and C. Guestrin. Residual splash for optimally parallelizing belief propagation. In
AISTATS, 2009.
[23] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In UAI, 2002.
[24] B. Taskar, C. Guestrin, and D. Koller. Max-margin markov networks. In NIPS, 2003.
[25] C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller. Context-specific independence in Bayesian
networks. In UAI, 1996.
[26] M. desJardins, P. Rathod, and L. Getoor. Bayesian network learning with abstraction hierarchies and
context-specific independence. In ECML, 2005.
[27] B. Thiesson, C. Meek, D. Chickering, and D. Heckerman. Learning mixtures of DAG models. In UAI, 1997.
[28] M. Meilă and M. I. Jordan. Learning with mixtures of trees. JMLR, 1, 2001.
[29] A. J. Viterbi. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm.
IEEE Transactions on Information Theory, IT-13, 1967.
[30] S. L. Lauritzen. The EM algorithm for graphical association models with missing data. Computational
Statistics & Data Analysis, 19(2), 1995.
[31] A. Torralba, K. P. Murphy, and W. T. Freeman. Contextual models for object detection using boosted
random fields. In NIPS, 2004.
[32] C. Sutton and A. McCallum. Collective segmentation and labeling of distant entities in information
extraction. In ICML Workshop on Statistical Relational Learning and Its Connections, 2004.
3,317 | 4,003 | Towards Holistic Scene Understanding:
Feedback Enabled Cascaded Classification Models
Congcong Li, Adarsh Kowdle, Ashutosh Saxena, Tsuhan Chen
Cornell University, Ithaca, NY.
{cl758,apk64}@cornell.edu,
[email protected], [email protected]
Abstract
In many machine learning domains (such as scene understanding), several related
sub-tasks (such as scene categorization, depth estimation, object detection) operate on the same raw data and provide correlated outputs. Each of these tasks is
often notoriously hard, and state-of-the-art classifiers already exist for many sub-tasks. It is desirable to have an algorithm that can capture such correlation without
requiring any changes to the inner workings of any classifier.
We propose Feedback Enabled Cascaded Classification Models (FE-CCM), which
maximizes the joint likelihood of the sub-tasks, while requiring only a "black-box"
interface to the original classifier for each sub-task. We use a two-layer cascade of
classifiers, which are repeated instantiations of the original ones, with the output
of the first layer fed into the second layer as input. Our training method involves
a feedback step that allows later classifiers to provide earlier classifiers information about what error modes to focus on. We show that our method significantly
improves performance in all the sub-tasks in two different domains: (i) scene
understanding, where we consider depth estimation, scene categorization, event
categorization, object detection, geometric labeling and saliency detection, and
(ii) robotic grasping, where we consider grasp point detection and object classification.
1 Introduction
In many machine learning domains, several sub-tasks operate on the same raw data to provide correlated outputs. Each of these sub-tasks is often notoriously hard, and state-of-the-art classifiers
already exist for many of them. In the domain of scene understanding for example, several independent efforts have resulted in good classifiers for tasks such as scene categorization, depth estimation,
object detection, etc. In practice, we see that these sub-tasks are coupled; for example, if we know
that the scene is indoors, it would help us estimate depth more accurately from that single image.
In another example in the robotic grasping domain, if we know what object it is, then it is easier
for a robot to figure out how to pick it up. In this paper, we propose a unified model which jointly
optimizes for all the sub-tasks, allowing them to share information and guide the classifiers towards
a joint optimal. We show that this can be seamlessly applied across different machine learning
domains.
Recently, several approaches have tried to combine these different classifiers for related tasks in
vision [19, 25, 35]; however, most of them tend to be ad-hoc (i.e., a hard-coded rule is used) and
often intimate knowledge of the inner workings of the individual classifiers is required. Even beyond
vision, in most other domains, state-of-the-art classifiers already exist for many sub-tasks. However,
these carefully engineered models are often tricky to modify, or even to simply re-implement from
the available descriptions. Heitz et. al. [17] recently developed a framework for scene understanding called Cascaded Classification Models (CCM) treating each classifier as a ?black-box?. Each
classifier is repeatedly instantiated with the next layer using the outputs of the previous classifiers
as inputs. While this work proposed a method of combining the classifiers in a way that increased
the performance in all of the four tasks they considered, it had the drawback that it optimized for each
task independently and there was no way of feeding back information from later classifiers to earlier
classifiers during training. This feedback can help the CCM achieve a better solution.
In our work, we propose Feedback Enabled Cascaded Classification Models (FE-CCM), which provides feedback from the later classifiers to the earlier ones during the training phase. This feedback provides earlier stages information about what error modes should be focused on, or what can be
ignored without hurting the performance of the later classifiers. For example, misclassifying a street
scene as highway would not hurt as much as misclassifying a street scene as open country. Therefore
we prefer the first layer classifier to focus on fixing the latter error instead of optimizing the training
accuracy. In another example, allowing the depth estimation to focus on some specific regions can
help perform better scene categorization. For instance, the open country scene is characterized by
a wide sky area in its upper part. Therefore, estimating the depth well in that region, even at the cost of some
regions at the bottom, may help an image be assigned to the correct category. In detail, we do
so by jointly maximizing the likelihood of all the tasks; the outputs of the first layers are treated as
latent variables and training is done by using an iterative algorithm. Another benefit of our method
is that each of the classifiers can be trained using its own independent training dataset, i.e., our
model does not require a datapoint to have labels for all the tasks, and hence it scales well with
heterogeneous datasets.
In our approach, we treat each classifier as a "black box", with no restrictions on its operation other
than requiring the ability to train on data and expose an input/output interface. Often each of these individual classifiers could be quite complex, e.g., producing labelings over pixels in an entire image.
Therefore, our method is applicable to many tasks that have different but correlated outputs.
In extensive experiments, we show that our method achieves significant improvements in the performance of all the sub-tasks in two different domains: (i) scene understanding, where we consider
six tasks: depth estimation, object detection, scene categorization, event categorization, geometric
labeling and saliency detection, and (ii) robotic grasping, where we consider two tasks: grasp point
detection and object classification.
The rest of the paper is organized as follows. We discuss the related work in Section 2. We describe
our FE-CCM method in Section 3 followed by the implementation of the classifiers in Section 4.
We present the experiments and results in Section 5. We finally conclude in Section 6.
2 Related Work
The idea of using information from related tasks to improve the performance of the task in question
has been studied in various fields of machine learning and vision. The idea of cascading layers of
classifiers to aid the final task was first introduced with neural networks as multi-level perceptrons
where, the output of the first layer of perceptrons is passed on as input to the next hidden layer
[16, 12, 6]. However, it is often hard to train neural networks and gain an insight into their operation,
thus making it hard to work for complicated tasks.
There has been a huge body of work in the area of sensor fusion where classifiers work with different modalities, each one giving additional information and thus improving the performance, e.g.,
in biometrics, data from voice recognition and face recognition is combined [21]. However, in
our scenario, we consider multiple tasks where each classifier is tackling a different problem (i.e.,
predicting different labels), with the same input being provided to all the classifiers.
The idea of improving classification performance by combining outputs of many classifiers is used in
methods such as Boosting [13], where many weak learners are combined to obtain a more accurate
classifier; this has been applied to tasks such as face detection [4, 40]. However, unlike the CCM
framework which focuses on contextual benefits, their motivation was computational efficiency.
Tu [39] used pixel-level label maps to learn a contextual model for pixel-level labeling, through a
cascaded classifier approach, but such works considered only the interactions between labels of the
same type.
While the above combine classifiers to predict the same labels, there are a group of works that combine classifiers, and use them as components in large systems. Kumar and Hebert [23] developed a
large MRF-based probabilistic model to link multi-class segmentation and object detection. Similar
efforts have been made in the field of natural language processing. Sutton and McCallum [36] combined a parsing model with a semantic role labeling model into a unified probabilistic framework
that solved both simultaneously. However, it is hard to fit existing state-of-the-art classifiers into
these technically-sound probabilistic representations because they require knowledge of the inner
Figure 1: Combining related classifiers using the proposed FE-CCM model (for all i ∈ {1, 2, . . . , n}: φ_i(X) = features corresponding to Classifier_i extracted from image X, Z_i = output of Classifier_i in the first stage parameterized by θ_i, Y_i = output of Classifier_i in the second stage parameterized by ω_i): (a) Cascaded classification model (CCM), where the output from the previous stage of the classifier is used in the subsequent stage along with image features; the model optimizes the output of each Classifier_j on the second stage independently. (b) Proposed feedback enabled cascaded classification model (FE-CCM), where there is feedback from the latter stages to help achieve a model which optimizes all the tasks considered, jointly. (Note that different colors of lines are used only to make the figure more readable.)
workings of the individual classifiers. Structured learning (e.g., [38]) could also be a viable option
for our setting; however, it needs a fully-labeled dataset, which is not available in vision tasks.
There have been many works which show that with a well-designed model, one can improve the
performance of a particular task by using cues from other tasks (e.g., [29, 37, 2]). Saxena et al.
manually designed the terms in an MRF to combine depth estimation with object detection [34] and
stereo cues [33]. Sudderth et al. [35] used object recognition to help 3D structure estimation. Hoiem
et al. [19] proposed an innovative but ad-hoc system that combined boundary detection and surface
labeling by sharing some low-level information between the classifiers. Li et al. [25, 24] combined
image classification, annotation and segmentation with a hierarchical graphical model. However,
these methods required considerable attention to each classifier, and considerable insight into the
inner workings of each task and also the connections between tasks. This limits the generality of the
approaches in introducing new tasks easily or being applied to other domains.
There is also a large body of work in the area of deep learning, and we refer the reader to Bengio
and LeCun [3] for a nice overview of deep learning architectures and Caruana [5] for multitask learning with shared representation. While most works in deep learning (e.g., [15, 18, 41]) are different
from our work in that, those works focus on one particular task (same labels) by building different
classifier architectures, as compared to our setting of different tasks with different labels. Hinton et
al. [18] used unsupervised learning to obtain an initial configuration of the parameters. This provides
a good initialization and hence their multi-layered architecture does not suffer from local minima
during optimization. At a high-level, we can also look at our work as a multi-layered architecture
(where each node typically produces complex outputs, e.g., labels over the pixels in the image); and
initialization in our case comes from existing state-of-the-art individual classifiers. Given this initialization, our training procedure finds parameters that (consistently) improve performance across
all the sub-tasks.
3 Feedback Enabled Cascaded Classification Models
We will describe the proposed model for combining and training the classifiers in this section.
We consider related sub-tasks denoted by Classifier_i, where i ∈ {1, 2, . . . , n}, for a total of n tasks (Figure 1). Let φ_i(X) correspond to the features extracted from image X for Classifier_i. Our cascade is composed of two layers, where the outputs from classifiers on the first layer go as input into the classifiers in the second layer. We do this by appending all the outputs from the first layer to the features for that task. θ_i represents the parameters of the first level of Classifier_i with output Z_i, and ω_i represents the parameters of the second level of Classifier_i with output Y_i.
We model the conditional joint log-likelihood of all the classifiers, i.e., log P(Y_1, Y_2, . . . , Y_n | X), where X is an image belonging to the training set Ω:
\[ \log \prod_{X \in \Omega} P(Y_1, Y_2, \ldots, Y_n \mid X; \theta_1, \theta_2, \ldots, \theta_n, \omega_1, \omega_2, \ldots, \omega_n) \tag{1} \]
During training, Y_1, Y_2, . . . , Y_n are all observed (because the ground-truth labels are available). However, Z_1, Z_2, . . . , Z_n (output of layer 1 and input to layer 2) are hidden, and this makes training each classifier as a black box hard. Heitz et al. [17] assume that each layer is independent and
that each layer produces the best output independently (without consideration for other layers), and
therefore use the ground-truth labels for Z1 , Z2 , . . . , Zn for training the classifiers.
On the other hand, we want our classifiers to learn jointly, i.e., the first-layer classifiers need not perform their best (w.r.t. ground truth), but rather focus on error modes, which would allow the second layer's output (Y_1, Y_2, . . . , Y_n) to become the best. Therefore, we expand Equation 1 as follows, using the independencies represented by the directed graphical model in Figure 1(b):
=
X
X??
=
X
X??
X
log
P (Y1 , . . . , Yn , Z1 , . . . , Zn |X; ?1 , . . . , ?n , ?1 , . . . , ?n )
(2)
Z1 ,...,Zn
X
n
Y
Z1 ,...,Zn
i=1
log
P (Yi |?i (X), Z1 , . . . , Zn ; ?i )P (Zi |?i (X); ?i )
(3)
However, the summation inside the log makes it difficult to learn the parameters. Motivated by
the Expectation Maximization [8] algorithm, we use an iterative algorithm where we first fix the
latent variables $Z_i$'s and learn the parameters in the first step (feed-forward step), and estimate the
latent variables $Z_i$'s in the second step (feed-back step). We then iterate between these two steps.
While this algorithm is not guaranteed to converge to the global maximum, in practice, we find it gives
good results. The results of our algorithm are always better than [17] which in our formulation is
equivalent to fixing the latent variables to ground-truth permanently (thus highlighting the impact of
the feedback).
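To make the alternation concrete, below is a minimal sketch of the procedure on a toy stand-in in which both layers are linear least-squares models; the function names, array shapes, and the linear-Gaussian assumption are ours for illustration, not the paper's actual black-box classifiers.

```python
import numpy as np

def fit_linear(X, Y):
    """Least-squares stand-in for training one black-box classifier."""
    return np.linalg.lstsq(X, Y, rcond=None)[0]

def train_fe_ccm(X, Y, num_iters=10):
    """Alternate the feed-forward step (fit parameters with Z fixed) and the
    feed-back step (re-estimate the latent layer-1 outputs Z)."""
    n_tasks = Y.shape[1]
    Z = Y.astype(float).copy()            # initialize latent outputs to the ground truth (CCM)
    for _ in range(num_iters):
        # Feed-forward step: Eqns. 5 and 6 decouple over the classifiers.
        theta = fit_linear(X, Z)          # layer 1: predicts Z from features
        X2 = np.hstack([X, Z])            # layer-2 input: features + layer-1 outputs
        omega = fit_linear(X2, Y)         # layer 2: predicts Y
        # Feed-back step (Eqn. 7): MAP-estimate Z, trading off P(Z|X) against P(Y|X,Z).
        W_x, W_z = omega[:X.shape[1]], omega[X.shape[1]:]
        for n in range(X.shape[0]):
            A = np.vstack([np.eye(n_tasks), W_z.T])
            b = np.concatenate([X[n] @ theta, Y[n] - X[n] @ W_x])
            Z[n] = np.linalg.lstsq(A, b, rcond=None)[0]
    return theta, omega
```

With Gaussian likelihoods, the feed-back objective of Eqn. 7 is a sum of two quadratic terms in Z, which is why the MAP step above reduces to a per-datapoint least-squares solve.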
Initialization: We initialize this process by setting the latent variables $Z_i$'s to the ground truth.
Training with this initialization, our cascade is equivalent to CCM in [17], where the classifiers (and
the parameters) in the first layer are similar to the original state-of-the-art classifier and the classifiers
in the second layer use the outputs of the first layer in addition to the original features.
Feed-forward Step: In this step, we estimate the parameters. We assume that the latent variables
$Z_i$'s are known (and $Y_i$'s are known anyway because they are the ground truth). This results in
$$\underset{\theta_1, \ldots, \theta_n, \omega_1, \ldots, \omega_n}{\text{maximize}} \;\; \sum_{X \in \Omega} \log \prod_{i=1}^n P(Y_i \mid \Phi_i(X), Z_1, \ldots, Z_n; \omega_i)\, P(Z_i \mid \Phi_i(X); \theta_i) \qquad (4)$$
Now in this feed-forward step, the terms for maximizing the different parameters turn out to be
independent. So, for the $i$th classifier we have:
$$\underset{\omega_i}{\text{maximize}} \;\; \sum_{X \in \Omega} \log P(Y_i \mid \Phi_i(X), Z_1, \ldots, Z_n; \omega_i) \qquad (5)$$
$$\underset{\theta_i}{\text{maximize}} \;\; \sum_{X \in \Omega} \log P(Z_i \mid \Phi_i(X); \theta_i) \qquad (6)$$
Note that the optimization problem nicely breaks down into the sub-problems of training the individual classifier for the respective sub-tasks. Depending on the specific form of the classifier used
for the sub-task (see Section 4 for our implementation), we can use the appropriate training method
for each of them. For example, we can use the same training algorithm as the original black-box
classifier. Therefore, we consider the original classifiers as black-box and we do not need any low
level information about the particular tasks or knowledge of the inner workings of the classifier.
Feed-back Step: In this second step, we will estimate the values of the latent variables $Z_i$'s, assuming that the parameters are fixed (and $Y_i$'s are given because the ground truth is available). This
feed-back step is the crux that informs the first-layer classifiers which error modes should be focused
on and which can be ignored without hurting the final performance.
We will perform MAP inference on the $Z_i$'s (and not marginalization). This can be considered as
a special variant of the general EM framework (hard EM, [26]). Using Equation 3, we get the
following optimization problem for the feed-back step:
$$\underset{Z_1, \ldots, Z_n}{\text{maximize}} \;\; \log P(Y_1, \ldots, Y_n, Z_1, \ldots, Z_n \mid X; \theta_1, \ldots, \theta_n, \omega_1, \ldots, \omega_n) \qquad (7)$$
$$\equiv \underset{Z_1, \ldots, Z_n}{\text{maximize}} \;\; \sum_{i=1}^n \log P(Z_i \mid \Phi_i(X); \theta_i) + \log P(Y_i \mid \Phi_i(X), Z_1, \ldots, Z_n; \omega_i)$$
This maximization problem requires that we have access to the characterization of the individual
black-box classifiers in a probabilistic form. While at first blush this may seem like asking a lot,
our method can even handle classifiers for which log likelihood is not available. We can do this
by taking the output of the previous classifiers and modeling their log-odds as a Gaussian (partly
motivated by variational approximation methods [14]). Parameters of the Gaussians are empirically
estimated when the actual model is not available.
In some cases, the classifier log-likelihoods in the problem in Equation 7 actually turn out to be
convex. For example, if the individual classifiers are linear or logistic classifiers, the minimization
problem is convex and can be solved by using a gradient descent (or any similar method).
Figure 2: Results showing improvement using the proposed model. All depth maps in depth estimation are
at the same scale (black means near and white means far); salient regions in saliency detection are indicated in
cyan; Geometric labeling - Green = Support, Blue = Sky and Red = Vertical (Best viewed in color).
Inference. Our FE-CCM is a directed model and inference in these models is straightforward.
Maximizing the conditional log likelihood P (Y1 , Y2 , . . . , Yn |X) corresponds to performing inference over the first layer (using the same inference techniques for the respective black-box classifiers),
followed by inference on the second layer.
Sparsity and Scaling with a large number of tasks. In Equation 4 we use weight decay (with an L1
penalty on the weights, $\|\omega\|_1$) to enforce sparsity in the $\omega$'s. With a large number of sub-tasks, the
number of the weights in the second layer increases, and our sparsity term results in a few non-zero
connections between sub-tasks that are active.
Training with heterogeneous datasets. Often real datasets are disjoint for different tasks, i.e.,
each datapoint does not have the labels for all the tasks. Our formulation handles this scenario
well. We showed our formulation for the general case, where we use $\Omega_i$ as the dataset that has
labels for the $i$th task. Now, we maximize the joint likelihood over all the datapoints, i.e.,
$\log \prod_{i=1}^n \prod_{X \in \Omega_i} P(Y_1, \ldots, Y_n \mid X)$. Equation 3 reduces to maximizing the terms below, which is
solved using the equations in Section 3 with corresponding modifications:
$$\sum_{i=1}^n \gamma_i \sum_{X \in \Omega_i} \log \sum_{Z_1, \ldots, Z_n} P(Y_i \mid \Phi_i(X), Z_1, \ldots, Z_n; \omega_i) \prod_{j=1}^n P(Z_j \mid \Phi_j(X); \theta_j) \qquad (8)$$
Here $\gamma_i$ is a tuning parameter that balances the amount of data in different datasets ($n = 6$ in our
experiments).
4
Scene Understanding: Implementation
Here we briefly describe the implementation details for our instantiation of FE-CCMs for scene
understanding.$^1$ Each of the classifiers described below for the sub-tasks is our 'base-model'
shown in Table 1. In some sub-tasks, our base-model will be simpler than the state-of-the-art models
(that are often hand-tuned for the specific sub-tasks respectively). However, even when using basemodels in our FE-CCM, our comparison will still be against the state-of-the-art models for the
respective sub-tasks (and on the same standard respective datasets) in Section 5.
In our preliminary work [22], where we optimized for each target task independently, we considered four vision tasks: scene categorization, depth estimation, event categorization and saliency
detection. Please refer to Section 4 in [22] for implementation details. In this work, we add object
detection and geometric labeling, and jointly optimize all six tasks.
Scene Categorization. For scene categorization, we classify an image into one of the 8 categories
defined by Torralba et al. [28]: tall building, inside city, street, highway, coast, open-country, mountain and forest. We define the output of a scene classifier to be an 8-dimensional vector with each
element representing the score for each category. We evaluate the performance by measuring the
accuracy of assigning the correct scene label to an image on the MIT outdoor scene dataset [28].
Depth Estimation. For the single image depth estimation task, we want to estimate the depth
$d \in \mathbb{R}^+$ of every pixel in an image (Figure 2a). We evaluate the performance of the estimation by
computing the root mean square error of the estimated depth with respect to ground truth laser scan
depth using the Make3D Range Image dataset [30, 31].
$^1$ Space constraints do not allow us to describe each sub-task in detail here, but please refer to the respective
state-of-the-art algorithm. Note that the power of our method is in not needing to know the details of the
internals of each sub-task.
Event Categorization. For event categorization, we classify an image into one of the 8 sports events
as defined by Li et al. [24]: bocce, badminton, polo, rowing, snowboarding, croquet, sailing and
rock-climbing. We define the output of an event classifier to be an 8-dimensional vector with each
element representing the log-odds score for each category. For evaluation, we compute the accuracy
assigning the correct event label to an image.
Saliency Detection. Here, we want to classify each pixel in the image to be either salient or nonsalient (Figure 2c). We define the output of the classifier as a scalar indicating the saliency confidence score of each pixel. We threshold this saliency score to determine whether the point is salient
(+1) or not (-1). For evaluation, we compute the accuracy of assigning a pixel as a salient point.
Object Detection. We consider the following object categories: car, person, horse and cow. A
sample image with the object detections is shown in Figure 2b. We use the train-set and test-set
of PASCAL 2006 [9] for our experiments. Our object detection module builds on the part-based
detector of Felzenszwalb et al. [10]. We first generate 5 to 100 candidate windows for each image
by applying the part-based detector with a low threshold (over-detection). We then extract HOG features [7] on every candidate window and learn a RBF-kernel SVM model as the first layer classifier.
The classifier assigns each window a +1 or -1 label indicating whether the window belongs to the
object or not. For the second-layer classifier, we learn a logistic model over the feature vector constituted by the outputs of all first-level tasks and the original HOG feature. We use average precision
to quantitatively measure the performance.
Geometric labeling. The geometric labeling task refers to assigning each pixel to one of three
geometric classes: support, vertical and sky (Figure 2d), as defined by Hoiem et al. [20]. We use
the dataset and the algorithm by [20] as the first-layer geometric labeling module. In order to reduce
the computational time, we avoid the multiple segmentation and instead use a single segmentation
with about 100 segments/image. For the second-layer, we learn a logistic model over the a feature
vector which is constituted by the outputs of all first-level tasks and the features used in the first
layer. For evaluation, we compute the accuracy of assigning the correct geometric label to a pixel.
5
Experiments and Results
The proposed FE-CCM model is a unified model which jointly optimizes for all sub-tasks. We
believe this is a powerful algorithm in that, while independent efforts towards each sub-task have led
to state-of-the-art algorithms that require intricate modeling for that specific sub-task, the proposed
approach is a unified model which can beat the state-of-the-art performance in each sub-task and
can be seamlessly applied across different machine learning domains.
We evaluate our proposed method on two different domains: scene understanding and robotic grasping. We use the same proposed algorithm in both domains. For each of the sub-task in each of the
domains, we evaluate our performance on the standard dataset for that sub-task (and compare against
the specifically designed state-of-the-art algorithm for that dataset). Note that, with such disjoint yet
practical datasets, no image would have ground truth available for more than one task. Our model
handles this well.
In our experiments, we evaluate the following algorithms (see Table 1):
- Base model: Our implementation (Section 4) of the algorithm for the sub-task, which serves
as a base model for our FE-CCM. (The base model uses less information than state-of-the-art algorithms for some sub-tasks.)
- All-features-direct: A classifier that takes all the features of all sub-tasks, appends them
together, and builds a separate classifier for each task.
- State-of-the-art model: The state-of-the-art algorithm for each sub-task respectively on that
specific dataset.
- CCM: The cascaded classifier model by Heitz et al. [17], which we re-implement for six
sub-tasks.
- FE-CCM (unified): This is our proposed model. Note that this is one single model which
maximizes the joint likelihood of all sub-tasks.
- FE-CCM (target specific): Here, we train a specific FE-CCM for each sub-task, by using
cross-validation to estimate the $\gamma_i$'s in Equation 8. Different values for the $\gamma_i$'s result in different
parameters learned for each FE-CCM.
Note that both CCM and All-features-direct use information from all sub-tasks, and state-of-the-art
models also use carefully designed models that implicitly capture information from other sub-tasks.
Table 1: Summary of results for the SIX vision tasks. Our method improves performance in every single task.
(Note: Bold face corresponds to our model performing better than state-of-the-art.)
Model                     Event Categorization  Depth Estimation  Scene Categorization  Saliency Detection  Geometric Labeling
                          (% Accuracy)          (RMSE in m)       (% Accuracy)          (% Accuracy)        (% Accuracy)
Images in testset         1579                  400               2688                  1000                300
Chance                    22.5                  24.6              22.5                  50                  33.3
Our base-model            71.8 (±0.8)           16.7 (±0.4)       83.8 (±0.2)           85.2 (±0.2)         86.2 (±0.2)
All-features-direct       72.7 (±0.8)           16.4 (±0.4)       83.8 (±0.4)           85.7 (±0.2)         87.0 (±0.6)
State-of-the-art model    73.4                  16.7 (MRF)        83.8                  82.5 (±0.2)         88.1
(reported)                Li [24]               Saxena [31]       Torralba [27]         Achanta [1]         Hoiem [20]
CCM [17] (our impl.)      73.3 (±1.6)           16.4 (±0.4)       83.8 (±0.6)           85.6 (±0.2)         87.0 (±0.6)
FE-CCM (unified)          74.3 (±0.6)           15.5 (±0.2)       85.9 (±0.3)           86.2 (±0.2)         88.6 (±0.2)
FE-CCM (target specific)  74.7 (±0.6)           15.2 (±0.2)       86.1 (±0.2)           87.6 (±0.2)         88.9 (±0.2)

Object detection (% Average precision; 2686 test images):
Model                     Car    Person  Horse  Cow    Mean
Our base-model            62.4   36.3    39.0   39.9   44.4
All-features-direct       62.3   36.8    38.8   40.0   44.5
State-of-the-art model    61.5   36.3    39.2   40.7   44.4   (Felzenszwalb et al. [11], base)
CCM [17] (our impl.)      62.2   37.0    38.8   40.1   44.5
FE-CCM (unified)          63.2   37.6    40.1   40.5   45.4
FE-CCM (target specific)  63.2   38.0    40.1   40.7   45.5

5.1
Scene Understanding
Datasets: The datasets used are mentioned in Section 4, and the number of test images in each
dataset is in Table 1. For each dataset we use the same number of training images as the state-of-the-art algorithm (for comparison). We perform 6-fold cross-validation on the whole model
with 5 of 6 sub-tasks to evaluate the performance on each task. We do not do cross-validation
on object detection as it is standard on the PASCAL 2006 [9] dataset (1277 train and 2686 test
images respectively).
Results and discussion:
To quantitatively evaluate our method for each of the sub-tasks, we consider the metrics appropriate
to each of the six tasks in Section 4. Table 1 shows that FE-CCM not only beats the state of the art in all
the tasks but also does it jointly as one single unified model.
In detail, we see that all-features-direct improves over the base model because it uses features from
all the tasks. The state-of-the-art classifiers improve on the base model by explicitly hand-designing
the task-specific probabilistic model [24, 31] or by using ad hoc methods to implicitly use information
from other tasks [20]. Our FE-CCM model, which is a single model that was not given any manually
designed task-specific insight, achieves a more significant improvement over the base model.
We also observe that our target-specific FE-CCM, which is optimized for each task independently
achieves the best performance, and this is a more fair comparison to the state-of-the-art because each
state-of-the-art model is trained specifically to the respective task. Furthermore, Table 1 shows the
results for CCM (which is a cascade without feedback information) and all-features-direct (which
uses features from all the tasks). This indicates that the improvement is strictly due to the proposed
feedback and not just because of having more information.
We show some visual improvements due to the proposed FE-CCM, in Figure 2. In comparison
to CCM, FE-CCM leads to better depth estimation of the sky and the ground, and it leads to better
coverage and accurate labeling of the salient region in the image, and it also leads to better geometric
labeling and object detection. More visual results are provided in the supplementary material.
FE-CCM allows each classifier in the second layer to learn which information from the other firstlayer sub-tasks is useful in the form of weights (in contrast to manually using the information shared
across sub-tasks in some prior works). We provide a visualization of the weights for the 6 vision
tasks in Figure 3-left. We see that the model agrees with our intuitions that high weights are assigned to the outputs of the same task from the first layer classifier (see high weights assigned to
the diagonals in the categorization tasks), though saliency detection is an exception which depends
more on its original features (not shown here) and the geometric labeling output. We also observe
that the weights are sparse. This is an advantage of our approach since the algorithm automatically
figures out which outputs from the first level classifiers are useful for the second level classifier to
achieve the best performance.
Figure 3-right provides a closer look at the positive weights given to the various outputs for a second-level geometric classifier. We observe that high positive weights are assigned to 'mountain', 'forest', 'tall building', etc. for supporting the geometric class 'vertical', and similarly 'coast', 'sailing' and 'depth' for supporting the 'sky' class. These illustrate some of the relationships the model
learns automatically without any manual intricate modeling.
5.2
Robotic Grasping
In order to show the applicability of our FE-CCM to problems across different machine learning
domains, we also considered the problem of a robot autonomously grasping objects. Given an
image and a depthmap, the goal of the learning algorithm is to select a point at which to grasp the
Figure 3: (Left) The absolute values of the weight vectors for second-level classifiers, i.e. ?. Each column
shows the contribution of the various tasks towards a certain task. (Right) Detailed illustration of the positive
values in the weight vector for a second-level geometric classifier. (Note: Blue is low and Red is high)
object (this location is called grasp point, [32]). It turns out that different categories of objects could
have different strategies for grasping, and therefore in this work, we use our FE-CCM to combine
object classification and grasping point detection.
Implementation: We work with the labeled synthetic dataset by Saxena et al. [32], which spans
6 object categories and also includes an aligned pixel level depth map for each image. For grasp
point detection, we use a regression over features computed from the image [32]. The output of
the regression is a score for each point giving the confidence of the point being a good grasping
point. For object detection, we use a logistic classifier to perform the classification. The output of
the classifier is a 6-dimensional vector representing the log odds score for each category.
Results: We evaluate our algorithm on dataset published in [32], and perform cross-validation to
evaluate the performance on each task. Table 2 shows the results for our algorithm?s ability to predict
the grasping point, given an image and the depths observed by the robot using its sensors. We see
that our FE-CCM obtains significantly better performance over all-features-direct and CCM (our
implementation). Figure 4 shows our robot grasping an object using our algorithm.
Table 2: Summary of results for the robotic grasping experiment. Our method improves performance in every single task.

Model                  Grasping point Detection (% accuracy)   Object Classification (% accuracy)
Images in testset      6000                                    1200
Chance                 50                                      16.7
All-features-direct    87.7                                    45.8
Our base-model         87.7                                    45.8
CCM (Heitz et al.)     90.5                                    49.5
FE-CCM                 92.2                                    49.7

Figure 4: Our robot grasping an object using our algorithm.

6
Conclusions
We propose a method for combining existing classifiers for different but related tasks. We only
consider the individual classifiers as a 'black-box' (thus not needing to know the inner workings of
the classifier) and propose learning techniques for combining them (thus not needing to know how
to combine the tasks). Our method introduces feedback in the training process from the later stage
to the earlier one, so that a later classifier can provide the earlier classifiers information about what
error modes to focus on, or what can be ignored without hurting the joint performance.
We consider two domains: scene understanding and robotic grasping. Our unified model (a single
FE-CCM trained for all the sub-tasks in that domain) improves performance significantly across all
the sub-tasks considered over the respective state-of-the-art classifiers. We show that this was the
result of our feedback process. The classifier actually learns meaningful relationships between the
tasks automatically. We believe that this is a small step towards holistic scene understanding.
Acknowledgements
We thank Industrial Technology Research Institute in Taiwan and Kodak for their financial support
in this research. We thank Anish Nahar, Matthew Cong and Colin Ponce for help with the robotic
experiments. We also thank John Platt and Daphne Koller for useful discussions.
References
[1] R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk. Frequency-tuned Salient Region Detection. In
CVPR, 2009.
[2] A. Agarwal and B. Triggs. Monocular human motion capture with a mixture of regressors. In IEEE
Workshop Vision for HCI, CVPR, 2005.
[3] Y. Bengio and Y. LeCun. Scaling learning algorithms towards ai. In Large-Scale Kernel Machines, 2007.
[4] S. C. Brubaker, J. Wu, J. Sun, M. D. Mullin, and J. M. Rehg. On the design of cascades of boosted
ensembles for face detection. IJCV, 77(1-3):65-86, 2008.
[5] R. Caruana. Multitask learning. Machine Learning, 28:41-75, 1997.
[6] R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural networks
with multitask learning. In ICML, 2008.
[7] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. CVPR, 2005.
[8] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the em
algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1-38, 1977.
[9] M. Everingham, A. Zisserman, C. K. I. Williams, and L. Van Gool. The pascal voc2006 results.
[10] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Discriminatively trained deformable
part models, release 3. http://people.cs.uchicago.edu/~pff/latent-release3/.
[11] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. PAMI, 2009.
[12] Y. Freund and R. E. Schapire. Cascaded neural networks based image classifier. In ICASSP, 1993.
[13] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application
to boosting. In EuroCOLT, 1995.
[14] M. Gibbs and D. Mackay. Variational gaussian process classifiers. Neural Networks, IEEE Trans, 2000.
[15] I. Goodfellow, Q. Le, A. Saxena, H. Lee, and A. Ng. Measuring invariances in deep networks. In NIPS,
2009.
[16] L. Hansen and P. Salamon. Neural network ensembles. PAMI, 12(10):993-1001, 1990.
[17] G. Heitz, S. Gould, A. Saxena, and D. Koller. Cascaded classification models: Combining models for
holistic scene understanding. In NIPS, 2008.
[18] G. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. In N. Comp, 2006.
[19] D. Hoiem, A. A. Efros, and M. Hebert. Closing the loop on scene interpretation. In CVPR, 2008.
[20] D. Hoiem, A. A. Efros, and M. Hebert. Putting objects in perspective. IJCV, 2008.
[21] J. Kittler, M. Hatef, R. P. Duin, and J. Matas. On combining classifiers. PAMI, 20:226-239, 1998.
[22] A. Kowdle, C. Li, A. Saxena, and T. Chen. A generic model to compose vision modules for holistic scene
understanding. In Workshop on Parts and Attributes, ECCV, 2010.
[23] S. Kumar and M. Hebert. A hierarchical field framework for unified context-based classification. In
ICCV, 2005.
[24] L. Li and L. Fei-Fei. What, where and who? classifying event by scene and object recognition. In ICCV,
2007.
[25] L.-J. Li, R. Socher, and L. Fei-Fei. Towards total scene understanding: Classification, annotation and
segmentation in an automatic framework. In CVPR, 2009.
[26] R. Neal and G. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants.
Learning in graphical models, 89:355-368, 1998.
[27] A. Oliva and A. Torralba. Mit outdoor scene dataset. http://people.csail.mit.edu/torralba/code/spatialenvelope/.
[28] A. Oliva and A. Torralba. Modeling the shape of the scene: A holistic representation of the spatial
envelope. IJCV, 42:145-175, 2001.
[29] D. Parikh, C. Zitnick, and T. Chen. From appearance to context-based recognition: Dense labeling in
small images. CVPR, 2008.
[30] A. Saxena, S. H. Chung, and A. Y. Ng. Learning depth from single monocular images. In NIPS, 2005.
[31] A. Saxena, S. H. Chung, and A. Y. Ng. 3-d depth reconstruction from a single still image. IJCV, 76, 2007.
[32] A. Saxena, J. Driemeyer, J. Kearns, and A. Y. Ng. Robotic grasping of novel objects. In NIPS, 2006.
[33] A. Saxena, J. Schulte, and A. Y. Ng. Depth estimation using monocular and stereo cues. In IJCAI, 2007.
[34] A. Saxena, M. Sun, and A. Y. Ng. Make3d: Learning 3d scene structure from a single still image. IEEE
PAMI, 30(5), 2009.
[35] E. B. Sudderth, A. Torralba, W. T. Freeman, and A. S. Willsky. Depth from familiar objects: A hierarchical
model for 3d scenes. In CVPR, 2006.
[36] C. Sutton and A. McCallum. Joint parsing and semantic role labeling. In CoNLL, 2005.
[37] A. Toshev, B. Taskar, and K. Daniilidis. Object detection via boundary structure segmentation. In CVPR,
2010.
[38] I. Tsochantaridis, T. Hofmann, and T. Joachims. Support vector machine learning for interdependent and
structured output spaces. In ICML, 2004.
[39] Z. Tu. Auto-context and its application to high-level vision tasks. In CVPR, 2008.
[40] P. Viola and M. J. Jones. Robust real-time face detection. IJCV, 57(2):137-154, 2004.
[41] M. Zeiler, D. Krishnan, G. Taylor, and R. Fergus. Deconvolutional networks. In CVPR, 2010.
Andrew E. Gelfand, Yutian Chen, Max Welling
Department of Computer Science
University of California, Irvine
{agelfand,yutianc,welling}@ics.uci.edu
Laurens van der Maaten
Department of CSE, UC San Diego
PRB Lab, Delft University of Tech.
[email protected]
Abstract
The paper develops a connection between traditional perceptron algorithms and
recently introduced herding algorithms. It is shown that both algorithms can be
viewed as an application of the perceptron cycling theorem. This connection
strengthens some herding results and suggests new (supervised) herding algorithms that, like CRFs or discriminative RBMs, make predictions by conditioning
on the input attributes. We develop and investigate variants of conditional herding, and show that conditional herding leads to practical algorithms that perform
better than or on par with related classifiers such as the voted perceptron and the
discriminative RBM.
1
Introduction
The invention of the perceptron [12] goes back to the very beginning of AI more than half a century
ago. Rosenblatt?s very simple, neurally plausible learning rule made it an attractive algorithm for
learning relations in data: for every input $\mathbf{x}_i$, make a linear prediction about its label, $y^*_i = \mathbf{w}^T \mathbf{x}_i$,
and update the weights as:
$$\mathbf{w} \leftarrow \mathbf{w} + \mathbf{x}_i (y_i - y^*_i) \qquad (1)$$
A critical evaluation by Minsky and Papert [11] revealed the perceptron?s limited representational
power. This fact is reflected in the behavior of Rosenblatt?s learning rule: if the data is linearly
separable, then the learning rule converges to the correct solution in a number of iterations that can
be bounded by $(R/\gamma)^2$, where $R$ represents the norm of the largest input vector and $\gamma$ represents the
margin between the decision boundary and the closest data-case. However, 'for data sets that are
not linearly separable, the perceptron learning algorithm will never converge' (quoted from [1]).
While the above result is true, the theorem in question has something much more powerful to say.
The 'perceptron cycling theorem' (PCT) [2, 11] states that for the inseparable case the weights remain bounded and do not diverge to infinity. In this paper, we show that the implication of this
theorem is that certain moments are conserved on average. Denoting the data-case selected at iteration t by it (note that the same data-case can be picked multiple times), the corresponding attribute
vector and label by $(\mathbf{x}_{i_t}, y_{i_t})$ with $\mathbf{x}_i \in \mathcal{X}$, and the label predicted by the perceptron at iteration $t$
for data-case $i_t$ by $y^*_{i_t}$, we obtain the following result:
$$\left\| \frac{1}{T} \sum_{t=1}^T \mathbf{x}_{i_t} y_{i_t} - \frac{1}{T} \sum_{t=1}^T \mathbf{x}_{i_t} y^*_{i_t} \right\| \leq O(1/T) \qquad (2)$$
This result implies that, even though the perceptron learning algorithm does not converge in the
inseparable case, it generates predictions that correlate with the attributes in the same way as the true
labels do. More importantly, the correlations converge to the sample mean with a rate $1/T$, which is
much faster than sampling-based algorithms that converge at a rate $1/\sqrt{T}$. By using general features
$\phi(\mathbf{x})$, the above result can be extended to the matching of arbitrarily complicated statistics between
data and predictions.
In the inseparable case, we can interpret the perceptron as a bagging procedure and average predictions instead of picking the single best (or last) weights found during training. Although not directly
motivated by the PCT and Eqn. 2, this is exactly what the voted perceptron (VP) [5] does. Interesting generalization bounds for the voted perceptron have been derived in [5]. Extensions of VP to
chain models have been explored in, e.g. [4].
Herding is a seemingly unrelated family of algorithms for unsupervised learning [15, 14, 16, 3]. In
traditional methods for learning Markov Random Field (MRF) models, the goal is to converge to
a single parameter estimate and then perform (approximate) inference in the resulting model. In
contrast, herding combines the learning and inference phases by treating the weights as dynamic
quantities and defining a deterministic set of updates such that averaging predictions preserves certain moments of the training data. The herding algorithm generates a weakly chaotic sequence of
weights and a sequence of states of both hidden and visible variables of the MRF model. The intermediate states produced by herding are really 'representative points' of an implicit model that
interpolates between data cases. We can view these states as pseudo-samples, which analogously to
Eqn. 2, satisfy certain constraints on their average sufficient statistics. However, unlike in perceptron
learning, the non-convergence of the weights is needed to generate long, non-periodic trajectories of
states that can be averaged over.
In this paper, we show that supervised perceptron algorithms and unsupervised herding algorithms
can all be derived from the PCT. This connection allows us to strengthen existing herding results.
For instance, we prove fast convergence rates of sample averages when we use small mini-batches
for making updates, or when we use incomplete optimization algorithms to run herding. Moreover,
the connection suggests new algorithms that lie between supervised perceptron and unsupervised
herding algorithms. We refer to these algorithms as 'conditional herding' (CH) because, like conditional random fields, they condition on the input features. From the perceptron perspective, conditional herding can be understood as 'voted perceptrons with hidden units'. Conditional herding can
also be interpreted as the zero temperature limit of discriminative RBMs (dRBMs) [10].
2
Perceptrons, Herding and the Perceptron Cycling Theorem
We first review the perceptron cycling theorem that was initially introduced in [11] with a gap in the
proof that was fixed in [2]. A sequence of vectors $\{\mathbf{w}_t\}$, $\mathbf{w}_t \in \mathbb{R}^D$, $t = 0, 1, \ldots$ is generated by the
following iterative procedure: $\mathbf{w}_{t+1} = \mathbf{w}_t + \mathbf{v}_t$, where $\mathbf{v}_t$ is an element of a finite set $\mathcal{V}$, and the
norm of $\mathbf{v}_t$ is bounded: $\max_i \|\mathbf{v}_i\| = R < \infty$.
Perceptron Cycling Theorem (PCT). $\forall t \geq 0$: If $\mathbf{w}_t^T \mathbf{v}_t \leq 0$, then there exists a constant $M > 0$
such that $\|\mathbf{w}_t - \mathbf{w}_0\| < M$.
The theorem still holds when $\mathcal{V}$ is a finite set in a Hilbert space. The PCT immediately leads to the
following result:
Convergence Theorem. If PCT holds, then: $\|\frac{1}{T} \sum_{t=1}^T \mathbf{v}_t\| \leq O(1/T)$.
This result is easily shown by observing that $\|\mathbf{w}_{T+1} - \mathbf{w}_0\| = \|\sum_{t=1}^T \Delta\mathbf{w}_t\| = \|\sum_{t=1}^T \mathbf{v}_t\| < M$,
and dividing all terms by $T$.
2.1
Voted Perceptron and Moment Matching
The voted perceptron (VP) algorithm [5] repeatedly applies the update rule in Eqn. 1. Predictions
of test labels are made after each update and final label predictions are taken as an average of all
intermediate predictions. The PCT convergence theorem leads to the result of Eqn. 2, where we
identify $\mathcal{V} = \{\mathbf{x}_i (y_i - y^*_i) \mid y_i = \pm 1, y^*_i = \pm 1, i = 1, \ldots, N\}$. For the VP algorithm, the PCT
thus guarantees that the moments $\langle \mathbf{x} y \rangle_{\hat{p}(\mathbf{x}, y)}$ (with $\hat{p}$ the empirical distribution) are matched with
$\langle \mathbf{x} y^* \rangle_{p(y^*|\mathbf{x})\hat{p}(\mathbf{x})}$, where $p(y^*|\mathbf{x})$ is the model distribution implied by how VP generates $y^*$.
In maximum entropy models, one seeks a model that satisfies a set of expectation constraints (moments) from the training data, while maximizing the entropy of the remaining degrees of freedom [9]. In contrast, a single perceptron strives to learn a deterministic mapping $p(y^*|\mathbf{x}) = \delta[y^* - \arg\max_y (y \mathbf{w}^T \mathbf{x})]$ that has zero entropy and gets every prediction on every training case
correct (where $\delta$ is the delta function). Entropy is created in $p(y^*|\mathbf{x})$ only when the weights $\mathbf{w}_t$ do
not converge (i.e. for inseparable data sets). Thus, VP and maximum entropy methods are related,
but differ in how they handle the degrees of freedom that are unconstrained by moment matching.
2.2
Herding
A new class of unsupervised learning algorithms, known as 'herding', was introduced in [15].
Rather than learning a single 'best' MRF model that can be sampled from to estimate quantities
of interest, herding combines learning and inference into a single process. In particular, herding
produces a trajectory of weights and states that reproduce the moments of the training data.
Consider a fully observed MRF with features $\phi(\mathbf{x})$, $\mathbf{x} \in \mathcal{X} = [1, \ldots, K]^m$, with $K$ the number of
states for each variable $x_j$ ($j = 1, \ldots, m$) and with an energy function $E(\mathbf{x})$ given by:
$$E(\mathbf{x}) = -\mathbf{w}^T \phi(\mathbf{x}). \qquad (3)$$
In herding [15], the parameters $\mathbf{w}$ are updated as:
$$\mathbf{w}_{t+1} = \mathbf{w}_t + \bar{\phi} - \phi(\mathbf{x}^*_t), \qquad (4)$$
where $\bar{\phi} = \frac{1}{N} \sum_i \phi(\mathbf{x}_i)$ and $\mathbf{x}^*_t = \arg\max_{\mathbf{x}} \mathbf{w}_t^T \phi(\mathbf{x})$. Eqn. 4 looks like a maximum likelihood (ML) gradient update, with constant learning rate and maximization in place of expectation in
the right-hand side. This follows from taking the zero temperature limit of the ML objective (see
Section 2.5). The maximization prevents the herding sequence from converging to a single point
estimate on this alternative objective.
P
Let {wt } denote the sequence of weights and {x?t } denote the sequence of states (pseudo-samples)
produced by herding. We can apply the PCT to herding by identifying V = {? ? ?(x? )| x? ? X }.
It is now easy to see that, in general, herding does not converge because under very mild conditions
we can always find an x?t such that wtT vt < 0. From the PCT convergence theorem, we also see
PT
that ||? ? T1 t=1 ?(x?t )|| ? O(1/T ), i.e. the pseudo-sample averages of the features converge
to the data averages ? at a rate 1/T 1 . This is considerably faster
? than i.i.d. sampling from the
corresponding MRF model, which would converge at a rate of 1/ T .
Since the cardinality of the set $\mathcal{V}$ is exponentially large (i.e. $|\mathcal{V}| = K^m$), finding the maximizing
state $\mathbf{x}^*_t$ at each update may be hard. However, the PCT only requires us to find some state $\mathbf{x}^*_t$
such that $\mathbf{w}_t^T \mathbf{v}_t \leq 0$, and in most cases this can easily be verified. Hence, the PCT provides a
theoretical justification for using a local search algorithm that performs partial energy maximization.
For example, we may start the local search from the state we ended up in during the previous
iteration (a so-called persistent chain [13, 17]). Or, one may consider contrastive divergence-like
algorithms [8], in which the sampling or mean field approximation is replaced by a maximization.
In this case, maximizations are initialized on all data-cases and the weights are updated by the
difference between the average over the data-cases minus the average over the {x?i } found after
(partial) maximization. In this case, the set $\mathcal{V}$ is given by: $\mathcal{V} = \{\bar{\phi} - \frac{1}{N} \sum_i \phi(\mathbf{x}^*_i) \mid \mathbf{x}^*_i \in \mathcal{X}\ \forall i\}$.
For obvious reasons, it is now guaranteed that $\mathbf{w}_t^T \mathbf{v}_t \leq 0$.
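A sketch of such a partial maximization, written so it could replace the brute-force search in the herd() sketch above: greedy coordinate ascent on $\mathbf{w}^T \phi(\mathbf{x})$, warm-started from a given state (a persistent chain). The interface is our own assumption.

```python
import numpy as np

def local_max(x, w, phi, num_states, m):
    """Greedy coordinate ascent on w^T phi(x), started from the previous state x (a tuple)."""
    improved = True
    while improved:
        improved = False
        for j in range(m):
            for v in range(num_states):
                cand = x[:j] + (v,) + x[j + 1:]
                if w @ phi(cand) > w @ phi(x):
                    x, improved = cand, True
    return x
```

Because the search only moves uphill, the returned state satisfies $\mathbf{w}^T \phi(\mathbf{x}^*) \geq \mathbf{w}^T \phi(\mathbf{x}_{\text{start}})$; starting from the data-cases, this is exactly what is needed to guarantee $\mathbf{w}_t^T \mathbf{v}_t \leq 0$ in the contrastive-divergence-like variant above.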
In practice, we often use mini-batches of size $n < N$ instead of the full data set. In this case, the
cardinality of the set $\mathcal{V}$ is enlarged to $|\mathcal{V}| = C(n, N) K^m$, with $C(n, N)$ representing the number of
ways to choose $n$ data-cases out of $N$ to compute the sample mean $\bar{\phi}_{(n)}$. The negative term
remains unaltered. Since the PCT still applies: $\|\frac{1}{T} \sum_{t=1}^T \bar{\phi}_{(n),t} - \frac{1}{T} \sum_{t=1}^T \phi(\mathbf{x}^*_t)\| \leq O(1/T)$.
Depending on how the mini-batches are picked, convergence onto the overall mean $\bar{\phi}$ can be either
$O(1/\sqrt{T})$ (random sampling with replacement) or $O(1/T)$ (sampling without replacement, which
has picked all data-cases after $\lceil N/n \rceil$ rounds).
2.3
Hidden Variables
The discussion so far has considered only constant features: $\phi(\mathbf{x}, y) = \mathbf{x} y$ for VP and $\phi(\mathbf{x})$ for
herding. However, the PCT allows us to consider more general features that depend on the weights
$^1$ Similar convergence could also be achieved (without concern for generalization performance) by sampling directly from the training data. However, herding converges with rate $1/T$ and is regularized by the weights to prevent overfitting.
cardinality. In [14], such features took the form of ?hidden units?:
?(x, z),
z(x, w) = arg max wT ?(x, z0 )
(5)
z0
In this case, we identify the vector v as v = ?(x, z) ? ?(x? , z? ). In the left-hand term of this
expression, x is clamped to the data-cases and z is found as in Eqn. 5 by maximizing every data-case
separately; in the right-hand (or negative) term, x? , z? are found by jointly maximizing wT ?(x, z).
The quantity ?(x, z) denotes a sample average over the training cases. We note that ?(x, z) indeed
maps to a finite domain because it depends on the real parameter w only through the discrete state
z. We also notice again that wT v ? 0 because of the definition of (x? , z? ). From the convergence
PT
PT
theorem we find that, || T1 t=1 ?(x, zt ) ? T1 t=1 ?(x?t , z?t )|| ? O(1/T ). This result can be
extended to mini-batches as well.
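When the features are linear in the hidden units (as in the bilinear architecture used in the next section), the maximization over binary $\mathbf{z}$ decouples per unit. A minimal sketch, with array shapes assumed by us:

```python
import numpy as np

def max_hidden(x, W, theta):
    """argmax over z in {-1,+1}^M of x^T W z + theta^T z; it decouples per unit."""
    activation = x @ W + theta              # shape (M,); one score per hidden unit
    return np.where(activation >= 0, 1.0, -1.0)
```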
2.4
Conditional Herding
We are now ready to propose our new algorithm: conditional herding (CH). Like the VP algorithm,
CH is concerned with discriminative learning and, therefore, it conditions on the input attributes
{xi }. CH differs from VP in that it uses hidden variables, similar to the herder described in the
previous subsection. In the most general setting, CH uses features:
$$\phi(\mathbf{x}, \mathbf{y}, \mathbf{z}), \qquad \mathbf{z}(\mathbf{x}, \mathbf{y}, \mathbf{w}) = \arg\max_{\mathbf{z}'} \mathbf{w}^T \phi(\mathbf{x}, \mathbf{y}, \mathbf{z}'). \qquad (6)$$
In the experiments in Section 3, we use the explicit form:
$$\mathbf{w}^T \phi(\mathbf{x}, \mathbf{y}, \mathbf{z}) = \mathbf{x}^T \mathbf{W} \mathbf{z} + \mathbf{y}^T \mathbf{B} \mathbf{z} + \boldsymbol{\theta}^T \mathbf{z} + \boldsymbol{\alpha}^T \mathbf{y}. \qquad (7)$$
where $\mathbf{W}$, $\mathbf{B}$, $\boldsymbol{\theta}$ and $\boldsymbol{\alpha}$ are the weights, $\mathbf{z}$ is a binary vector and $\mathbf{y}$ is a binary vector in a 1-of-$K$
scheme (see Figure 1). At each iteration t, CH randomly samples a subset of the data-cases and their
labels $\mathcal{D}_t = \{\mathbf{x}_{i_t}, \mathbf{y}_{i_t}\} \subseteq \mathcal{D}$. For each member of this mini-batch it computes a hidden variable $\mathbf{z}_{i_t}$
using Eqn. 6. The parameters are then updated as:
$$\mathbf{w}_{t+1} = \mathbf{w}_t + \frac{\eta}{|\mathcal{D}_t|} \sum_{i_t \in \mathcal{D}_t} \left( \phi(\mathbf{x}_{i_t}, \mathbf{y}_{i_t}, \mathbf{z}_{i_t}) - \phi(\mathbf{x}_{i_t}, \mathbf{y}^*_{i_t}, \mathbf{z}^*_{i_t}) \right) \qquad (8)$$
In the positive term, $\mathbf{z}_{i_t}$ is found as in Eqn. 5. The negative term is obtained (similar to the perceptron) by making a prediction for the labels, keeping the input attributes fixed:
$$(\mathbf{y}^*_{i_t}, \mathbf{z}^*_{i_t}) = \arg\max_{\mathbf{y}', \mathbf{z}'} \mathbf{w}^T \phi(\mathbf{x}_{i_t}, \mathbf{y}', \mathbf{z}'), \qquad \forall i_t \in \mathcal{D}_t. \qquad (9)$$
For the PCT to apply to CH, the set $\mathcal{V}$ of update vectors must be finite. The inputs $\mathbf{x}$ can be real-valued because we condition on the inputs and there will be at most $N$ distinct values (one for each
data-case). However, since we maximize over y and z these states must be discrete for the PCT to
apply.
Eqn. 8 includes a potentially vector-valued stepsize $\eta$. Notice, however, that scaling $\mathbf{w} \rightarrow \gamma \mathbf{w}$ will
have no effect on the values of $\mathbf{z}$, $\mathbf{z}^*$ or $\mathbf{y}^*$ and hence on $\mathbf{v}$. Therefore, if we also scale $\eta \rightarrow \gamma \eta$, then
the sequence of discrete states $\mathbf{z}_t, \mathbf{z}^*_t, \mathbf{y}^*_t$ will not be affected either. Since $\mathbf{w}_t = \eta \sum_{t'=0}^{t-1} \mathbf{v}_{t'} + \mathbf{w}_0$,
the only scale that matters is the relative scale between $\mathbf{w}_0$ and $\eta$. If there were just a single
attractor set for the dynamics of $\mathbf{w}$, the initialization $\mathbf{w}_0$ would only represent a transient effect.
However, in practice the scale of $\mathbf{w}_0$ relative to that of $\eta$ does play an important role, indicating that
many different attractor sets exist for this system.
Irrespective of the attractor we end up in, the PCT guarantees that:
$$\left\| \frac{1}{T} \sum_{t=1}^T \frac{1}{|\mathcal{D}_t|} \sum_{i_t} \phi(\mathbf{x}_{i_t}, \mathbf{y}_{i_t}, \mathbf{z}_{i_t}) - \frac{1}{T} \sum_{t=1}^T \frac{1}{|\mathcal{D}_t|} \sum_{i_t} \phi(\mathbf{x}_{i_t}, \mathbf{y}^*_{i_t}, \mathbf{z}^*_{i_t}) \right\| \leq O(1/T). \qquad (10)$$
In general, herding systems perform better when we use normalized features: $\|\phi(\mathbf{x}, \mathbf{z}, \mathbf{y})\| = R,\ \forall (\mathbf{x}, \mathbf{z}, \mathbf{y})$.
The reason is that herding selects states by maximizing the inner product $\mathbf{w}^T \phi$, and features with
large norms will therefore become more likely to be selected. In fact, one can show that states inside
the convex hull of the $\phi(\mathbf{x}, \mathbf{y}, \mathbf{z})$ are never selected. For binary ($\pm 1$) variables all states live on the
convex hull, but this need not be true in general, especially when we use continuous attributes $\mathbf{x}$. To
remedy this, one can either normalize features or add one additional feature$^2$
$\phi_0(\mathbf{x}, \mathbf{y}, \mathbf{z}) = \sqrt{R^2_{\max} - \|\phi(\mathbf{x}, \mathbf{y}, \mathbf{z})\|^2}$, where $R_{\max} = \max_{\mathbf{x}, \mathbf{y}, \mathbf{z}} \|\phi(\mathbf{x}, \mathbf{y}, \mathbf{z})\|$ and $\mathbf{x}$ is only
allowed to vary over the data-cases.
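A sketch of this augmentation, assuming $R_{\max}$ is estimated from the training inputs and clipped at test time (cf. footnote 2):

```python
import numpy as np

def add_normalizing_feature(X):
    """Append phi_0(x) = sqrt(R_max^2 - ||x||^2) so all rows get norm R_max."""
    norms_sq = np.sum(X ** 2, axis=1)
    r_max_sq = norms_sq.max()
    phi0 = np.sqrt(np.maximum(r_max_sq - norms_sq, 0.0))   # imaginary values clipped to 0
    return np.hstack([X, phi0[:, None]])
```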
Finally, predictions on unseen test data are made by:
$$(\mathbf{y}^*_{tst,t}, \mathbf{z}^*_{tst,t}) = \arg\max_{\mathbf{y}', \mathbf{z}'} \mathbf{w}_t^T \phi(\mathbf{x}_{tst}, \mathbf{y}', \mathbf{z}'), \qquad (11)$$
The algorithm is summarized in the algorithm-box below.
Conditional Herding (CH)
1. Initialize w0 (with finite norm) and yavg,j = 0 for all test cases j.
2. For t ≥ 0:
(a) Choose a subset {xit , yit } = Dt ? D. For each (xit , yit ), choose a hidden state zit .
(b) Choose a set of 'negative states' $\{(\mathbf{x}^*_{i_t} = \mathbf{x}_{i_t}, \mathbf{y}^*_{i_t}, \mathbf{z}^*_{i_t})\}$, such that:
$$\frac{1}{|\mathcal{D}_t|} \sum_{i_t} \mathbf{w}_{t-1}^T \phi(\mathbf{x}_{i_t}, \mathbf{y}_{i_t}, \mathbf{z}_{i_t}) \geq \frac{1}{|\mathcal{D}_t|} \sum_{i_t} \mathbf{w}_{t-1}^T \phi(\mathbf{x}_{i_t}, \mathbf{y}^*_{i_t}, \mathbf{z}^*_{i_t}). \qquad (12)$$
3. Update wt according to Eqn. 8.
4. Predict on test data as follows:
(a) For every test case $\mathbf{x}_{tst,j}$ at every iteration, choose negative states $(\mathbf{y}^*_{tst,jt}, \mathbf{z}^*_{tst,jt})$ in the
same way as for training data.
(b) Update online average over predictions, yavg,j , for all test cases j.
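A minimal sketch of one training update for the energy in Eqn. 7, assuming binary hidden units, a 1-of-K label encoding, and NumPy arrays with shapes x:(D,), W:(D,M), B:(K,M), theta:(M,), alpha:(K,) of our own choosing; initialization scales and the burn-in used in the experiments are omitted.

```python
import numpy as np

def negative_states(x, W, B, theta, alpha, K):
    """Jointly maximize over the K one-of-K labels y and binary z (Eqn. 9)."""
    best = None
    for k in range(K):
        y = -np.ones(K); y[k] = 1.0
        z = np.where(x @ W + y @ B + theta >= 0, 1.0, -1.0)    # optimal z given y
        score = x @ W @ z + y @ B @ z + theta @ z + alpha @ y
        if best is None or score > best[0]:
            best = (score, y, z)
    return best[1], best[2]

def ch_update(batch, W, B, theta, alpha, K, eta=0.01):
    """One conditional herding step (Eqn. 8) on a mini-batch of (x, y) pairs."""
    dW, dB = np.zeros_like(W), np.zeros_like(B)
    dth, dal = np.zeros_like(theta), np.zeros_like(alpha)
    for x, y in batch:
        z = np.where(x @ W + y @ B + theta >= 0, 1.0, -1.0)    # positive term (Eqn. 6)
        y_neg, z_neg = negative_states(x, W, B, theta, alpha, K)
        dW += np.outer(x, z) - np.outer(x, z_neg)
        dB += np.outer(y, z) - np.outer(y_neg, z_neg)
        dth += z - z_neg
        dal += y - y_neg
    n = float(len(batch))
    return (W + eta * dW / n, B + eta * dB / n,
            theta + eta * dth / n, alpha + eta * dal / n)
```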
2.5
Zero Temperature Limit of Discriminative MRF Learning
Regular herding can be understood as gradient descent on the zero temperature limit of an MRF
model. In this limit, gradient updates with constant step size never lead to convergence, irrespective
of how small the step size is. Analogously, CH can be viewed as constant step size gradient updates
on the zero temperature limit of discriminative MRFs (see [10] for the corresponding RBM model).
The finite temperature model is given by:
$$p(\mathbf{y}|\mathbf{x}) = \frac{\sum_{\mathbf{z}} \exp\left[\mathbf{w}^T \phi(\mathbf{y}, \mathbf{z}, \mathbf{x})\right]}{\sum_{\mathbf{z}', \mathbf{y}'} \exp\left[\mathbf{w}^T \phi(\mathbf{y}', \mathbf{z}', \mathbf{x})\right]}. \qquad (13)$$
Similar to herding [14], conditional herding introduces a temperature by replacing $\mathbf{w}$ by $\mathbf{w}/T$ and
takes the limit $T \rightarrow 0$ of $\ell_T \triangleq T \ell$, where $\ell = \sum_i \log p(\mathbf{y}_i|\mathbf{x}_i)$.
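To see where the maximizations come from, note the standard zero-temperature identity (a sketch of the limiting argument):

```latex
\lim_{T \to 0} \; T \log \sum_{\mathbf{z}} \exp\!\left[\mathbf{w}^T \phi(\mathbf{y}, \mathbf{z}, \mathbf{x}) / T\right]
  \;=\; \max_{\mathbf{z}} \; \mathbf{w}^T \phi(\mathbf{y}, \mathbf{z}, \mathbf{x}).
```

Applying this to both the numerator and the denominator of Eqn. 13, $\ell_T$ becomes a difference of maximizations, and taking constant-step-size (sub)gradients of that difference recovers the update of Eqn. 8.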
3
Experiments
We studied the behavior of conditional herding on two artificial and four real-world data sets, comparing its performance to that of the voted perceptron [5] and that of discriminative RBMs [10]. The
experiments on artificial and real-world data are discussed separately in Section 3.1 and 3.2.
We studied conditional herding in the discriminative RBM architecture illustrated in Figure 1 (i.e.,
we use the energy function in Eqn. 7). Per the discussion in Section 2.4, we added an additional
feature $\phi_0(\mathbf{x}) = \sqrt{R^2_{\max} - \|\mathbf{x}\|^2}$ with $R_{\max} = \max_i \|\mathbf{x}_i\|$ in all experiments.
$^2$ If in test data this extra feature becomes imaginary, we simply set it to zero.
Figure 1: Discriminative Restricted Boltzmann Machine model of the distribution $p(\mathbf{y}, \mathbf{z}|\mathbf{x})$: hidden units $z_{i1}, \ldots, z_{iK}$ connect to the inputs $x_{i1}, \ldots, x_{iD}$ through the weights $\mathbf{W}$ and to the labels $y_{i1}, \ldots, y_{iC}$ through the weights $\mathbf{B}$, with biases $\boldsymbol{\theta}$ and $\boldsymbol{\alpha}$.
Figure 2: Decision boundaries of VP, dRBMs, and CH on two artificial data sets: (a) the banana data set; (b) the Lithuanian data set.
3.1
Artificial Data
To investigate the characteristics of VP, dRBMs and CH, we used the techniques to construct decision boundaries on two artificial data sets: (1) the banana data set; and (2) the Lithuanian data
set. We ran VP and CH for 1,000 epochs using mini-batches of size 100. The decision boundary
for VP and CH is located where the sign of the prediction $y^*_{tst}$ changes. We used conditional
herders with 20 hidden units. The dRBMs also had 20 hidden units and were trained by running
conjugate gradients until convergence. The weights of the dRBMs were initialized by sampling from
a Gaussian distribution with a variance of $10^{-4}$. The decision boundary for the dRBMs is located
at the point where both class posteriors are equal, i.e., where $p(y^*_{tst} = -1|\tilde{\mathbf{x}}_{tst}) = p(y^*_{tst} = +1|\tilde{\mathbf{x}}_{tst}) = 0.5$.
Plots of the decision boundary for the artificial data sets are shown in Figure 2. The results on the
banana data set illustrate the representational advantages of hidden units. Since VP selects data
points at random to update the weights, on the banana data set, the weight vector of VP tends to
oscillate back and forth, yielding a nearly linear decision boundary.$^3$ This happens because VP can
regress on only 2 + 1 = 3 fixed features. In contrast, for CH the simple predictor in the top layer can
regress onto M = 20 hidden features. This prevents the same oscillatory behavior from occurring.
3.2
Real-World Data
In addition to the experiments on synthetic data, we also performed experiments on four real-world
data sets - namely, (1) the USPS data set, (2) the MNIST data set, (3) the UCI Pendigits data set, and
(4) the 20-Newsgroups data set. The USPS data set consists of 11,000, 16 × 16 grayscale images of
handwritten digits (1,100 images of each digit 0 through 9) with no fixed division. The MNIST data
set contains 70,000, 28 × 28 grayscale images of digits, with a fixed division into 60,000 training
and 10,000 test instances. The UCI Pendigits data set consists of 16 (integer-valued) features extracted from
the movement of a stylus. It contains 10,992 instances, with a fixed division into 7,494 training
and 3,498 test instances. The 20-Newsgroups data set contains bag-of-words representations of
18,774 documents gathered from 20 different newsgroups. Since the bag-of-words representation
$^3$ On the Lithuanian data set, VP constructs a good boundary by exploiting the added 'normalizing' feature.
comprises over 60, 000 words, we identified the 5, 000 most frequently occurring words. From this
set, we created a data set of 4, 900 binary word-presence features by binarizing the word counts and
removing the 100 most frequently occurring words. The 20-Newsgroups data has a fixed division
into 11, 269 training and 7, 505 test instances. On all data sets with real-valued input attributes we
used the ?normalizing? feature described above.
The data sets used in the experiments are multi-class. We adopted a 1-of-K encoding, where if yi
is the label for data point $\mathbf{x}_i$, then $\mathbf{y}_i = \{y_{i,1}, \ldots, y_{i,K}\}$ is a binary vector such that $y_{i,k} = 1$ if the
label of the $i$th data point is $k$ and $y_{i,k} = -1$ otherwise. Performing the maximization in Eqn. 9 is
difficult when K > 2. We investigated two different procedures for doing so. In the first procedure,
we reduce the multi-class problem to a series of binary decision problems using a one-versus-all
scheme. The prediction on a test point is taken as the label with the largest online average. In the
second procedure, we make predictions on all K labels jointly. To perform the maximization in
Eqn. 9, we explore all states of y in a one-of-K encoding - i.e. one unit is activated and all others
are inactive. This partial maximization is not a problem as long as the ensuing configuration satisfies
$\mathbf{w}_t^T \mathbf{v}_t \leq 0$.$^4$ The main difference between the two procedures is that in the second procedure the
weights W are shared amongst the K classifiers. The primary advantage of the latter procedure is
it less computationally demanding than the one-versus-all scheme.
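A sketch of the joint procedure's prediction step, which scans the K one-of-K label configurations (the `energy` callable is a hypothetical stand-in for the quantity maximized in Eqn. 9):

```python
import numpy as np

def predict_joint(x, energy, K):
    """Search over the K one-of-K label states y (one entry +1, rest -1)
    and return the label whose configuration maximizes the energy."""
    best_k, best_val = None, -np.inf
    for k in range(K):
        y = -np.ones(K)
        y[k] = 1.0                  # one unit active, all others inactive
        val = energy(x, y)          # assumed to maximize over z internally
        if val > best_val:
            best_k, best_val = k, val
    return best_k
```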
We trained the dRBMs by performing iterations of conjugate gradients (using 3 linesearches) on mini-batches of size 100 until the error on a small held-out validation set started increasing (i.e., we employed early stopping) or until the negative conditional log-likelihood on the training data stopped coming down. Following [10], we use L2-regularization on the weights of the dRBMs; the regularization parameter was determined based on the generalization error on the same held-out validation set. The weights of the dRBMs were initialized from a Gaussian distribution with variance of 10^-4.
CH used mini-batches of size 100. For the USPS and Pendigits data sets CH used a burn-in period of 1,000 updates; on MNIST it was 5,000 updates; and on 20-Newsgroups it was 20,000 updates. Herding was stopped when the error on the training set became zero.^5
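Schematically, the herding training loop with burn-in and the zero-training-error stopping rule looks as follows (all function names are placeholders of ours, not from the paper):

```python
def herd(data, update, predict, batches_per_epoch, burn_in):
    """Run conditional herding on mini-batches until the online-averaged
    predictions make no errors on the training set."""
    t, clean_run = 0, 0
    while True:
        batch = data.next_batch()            # fixed batch order (footnote 5)
        update(batch)                        # herding weight update
        t += 1
        if t <= burn_in:
            continue                         # discard burn-in iterations
        errors = sum(predict(x) != y for x, y in batch)
        clean_run = clean_run + 1 if errors == 0 else 0
        if clean_run >= batches_per_epoch:   # error 0 for ceil(N/K) batches
            return
```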
The parameters of the conditional herders were initialized by sampling from a Gaussian distribution. Ideally, we would like each of the terms in the energy function in Eqn. 7 to contribute equally during updating. However, since the dimension of the data is typically much greater than the number of classes, the dynamics of the conditional herding system will be largely driven by W. To negate this effect, we rescaled the standard deviation of the Gaussian by a factor 1/M, with M the total number of elements of the parameter involved (e.g., σ_W = σ/(dim(x) dim(z)), etc.). We also scale the step sizes η by the same factor so the updates will retain this scale during herding. The relative scale between σ and η was chosen by cross-validation. Recall that the absolute scale is unimportant (see Section 2.4 for details).

In addition, during the early stages of herding, we adapted the parameter update for the bias on the hidden units θ in such a way that the marginal distribution over the hidden units was nearly uniform. This has the advantage that it encourages high entropy in the hidden units, leading to more useful dynamics of the system. In practice, we update θ as

    θ^{t+1} = θ^t + (η/|D_t|) Σ_{i∈D_t} ( (1 - λ) ⟨z_i^t⟩ - z̄_i^t ),

where ⟨z_i^t⟩ is the batch mean. λ is initialized to 1 and we gradually halve its value every 500 updates, slowly moving from an entropy-encouraging update to the standard update for the biases of the hidden units.
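In code, this annealed bias update might look as follows (a sketch based on the update rule as reconstructed above; the λ-halving schedule is assumed to happen in the outer loop):

```python
import numpy as np

def update_hidden_bias(theta, z_batch, eta, lam):
    """theta <- theta + (eta/|D_t|) * sum_i ((1 - lam) * <z_i> - z_i).
    With lam near 1 this pushes the hidden-unit marginals towards uniform."""
    z_mean = z_batch.mean(axis=0)                       # batch mean <z_i>
    grad = ((1.0 - lam) * z_mean - z_batch).sum(axis=0)
    return theta + (eta / len(z_batch)) * grad
```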
VP was also run on mini-batches of size 100 (with step size of 1). VP was run until the predictor
started overfitting on a validation set. No burn-in was considered for VP.
The results of our experiments are shown in Table 1. In the table, the best performance on each data set using each procedure is marked with an asterisk. The results reveal that the addition of hidden
units to the voted perceptron leads to significant improvements in terms of generalization error.
Furthermore, the results of our experiments indicate that conditional herding performs on par with
discriminative RBMs on the MNIST and USPS data sets and better on the 20 Newsgroups data set.
The 20-Newsgroups data is high-dimensional and sparse, and both VP and CH appear to perform quite well in this regime.

^4 Local maxima can also be found by iterating over y_tst^{*,k}, z_tst,j^{*,k}, but the proposed procedure is more efficient.
^5 We use a fixed order of the mini-batches, so that if there are N data cases and the batch size is K, then if the training error is 0 for ⌈N/K⌉ iterations, the error for the whole training set is 0.
One-Versus-All Procedure

  Data Set        VP              dRBM (100)      dRBM (200)      CH (100)        CH (200)
  MNIST           7.69%           3.57%*          3.58%           3.97%           3.99%
  USPS            5.03% (0.4%)    3.97% (0.38%)   4.02% (0.68%)   3.49% (0.45%)   3.35%* (0.48%)
  UCI Pendigits   10.92%          5.32%           5.00%           3.37%           3.00%*
  20 Newsgroups   27.75%          34.78%          34.36%          29.78%          25.96%*

Joint Procedure

  Data Set        VP              dRBM (50)       dRBM (100)      dRBM (500)      CH (50)         CH (100)        CH (500)
  MNIST           8.84%           3.88%           2.93%           1.98%*          2.89%           2.09%           2.09%
  USPS            4.86% (0.52%)   3.13% (0.73%)   2.84% (0.59%)   4.06% (1.09%)   3.36% (0.48%)   3.07% (0.52%)   2.81%* (0.50%)
  UCI Pendigits   6.78%           3.80%           3.23%           8.89%           3.14%           2.57%*          2.86%
  20 Newsgroups   24.89%*         -               30.57%          30.07%          -               25.76%          24.93%
Table 1: Generalization errors of VP, dRBMs, and CH on 4 real-world data sets. dRBM and CH results are shown for various numbers of hidden units. The best performance on each data set is marked with an asterisk; missing values are shown as '-'. The std. dev. of the error on the 10-fold cross validation of the USPS data set is reported in parentheses.
Techniques to promote sparsity in the hidden layer when training dRBMs
exist (see [10]), but we did not investigate them here. It is also worth noting that CH is rather resilient
to overfitting. This is particularly evident in the low-dimensional UCI Pendigits data set, where the
dRBMs start to badly overfit with 500 hidden units, while the test error for CH remains level. This
phenomenon is the benefit of averaging over many different predictors.
4 Concluding Remarks
The main contribution of this paper is to expose a relationship between the PCT and herding algorithms. This has allowed us to strengthen certain results for herding, namely, theoretically validating herding with mini-batches and partial optimization. It also directly leads to the insight that non-convergent VPs and herding match moments between data and generated predictions at a rate much faster than random sampling (O(1/T) vs. O(1/√T)). From these insights, we have proposed a new conditional herding algorithm that is the zero-temperature limit of dRBMs [10].
The herding perspective provides a new way of looking at learning as a dynamical system. In fact, the PCT precisely specifies the conditions that need to hold for a herding system (in batch mode) to be a piecewise isometry [7]. A piecewise isometry is a weakly chaotic dynamical system that divides parameter space into cells and applies a different isometry in each cell. For herding, the isometry is given by a translation, and the cells are labeled by the states {x*, y*, z, z*}, whichever combination applies. Therefore, the requirement of the PCT that the space V must be of finite cardinality translates into the division of parameter space into a finite number of cells, each with its own isometry. Many interesting results about piecewise isometries have been proven in the mathematics literature, such as the fact that the sequence of sampled states grows algebraically with T and not exponentially as in systems with random or chaotic components [6]. We envision a fruitful cross-fertilization between the relevant research areas in mathematics and learning theory.
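To make the piecewise-isometry picture concrete: a single zero-temperature herding update first selects a discrete state (this determines the cell containing the current weights) and then translates the weights by a state-dependent constant, which is an isometry within that cell. A toy sketch:

```python
import numpy as np

def herding_step(w, states, moment):
    """One zero-temperature herding update: pick the state s maximizing
    <w, s> (this selects the cell that w lies in), then translate w by
    (moment - s). Within a cell, w -> w + constant is an isometry."""
    s = max(states, key=lambda s: np.dot(w, s))   # cell label
    return w + moment - s                         # translation (isometry)
```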
Acknowledgments
This work is supported by NSF grants 0447903, 0914783, 0928427 and 1018433 as well as
ONR/MURI grant 00014-06-1-073. LvdM acknowledges support by the Netherlands Organisation
for Scientific Research (grant no. 680.50.0908) and by EU-FP7 NoE on Social Signal Processing
(SSPNet).
References
[1] C.M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[2] H.D. Block and S.A. Levin. On the boundedness of an iterative procedure for solving a system of linear inequalities. Proceedings of the American Mathematical Society, 26(2):229-235, 1970.
[3] Y. Chen and M. Welling. Parametric herding. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010.
[4] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing, Volume 10, page 8. Association for Computational Linguistics, 2002.
[5] Y. Freund and R.E. Schapire. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296, 1999.
[6] A. Goetz. Perturbations of 8-attractors and births of satellite systems. Internat. J. Bifur. Chaos, Appl. Sci. Engrg., 8(10):1937-1956, 1998.
[7] A. Goetz. Global properties of a family of piecewise isometries. Ergodic Theory Dynam. Systems, 29(2):545-568, 2009.
[8] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771-1800, 2002.
[9] E.T. Jaynes. Information theory and statistical mechanics. Physical Review Series II, 106(4):620-663, 1957.
[10] H. Larochelle and Y. Bengio. Classification using discriminative Restricted Boltzmann Machines. In Proceedings of the 25th International Conference on Machine Learning, pages 536-543. ACM, 2008.
[11] M.L. Minsky and S. Papert. Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: MIT Press, 1969.
[12] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386-408, 1958.
[13] T. Tieleman. Training Restricted Boltzmann Machines using approximations to the likelihood gradient. In Proceedings of the 25th International Conference on Machine Learning, volume 25, pages 1064-1071, 2008.
[14] M. Welling. Herding dynamic weights for partially observed random field models. In Proc. of the Conf. on Uncertainty in Artificial Intelligence, Montreal, Quebec, CAN, 2009.
[15] M. Welling. Herding dynamical weights to learn. In Proceedings of the 21st International Conference on Machine Learning, Montreal, Quebec, CAN, 2009.
[16] M. Welling and Y. Chen. Statistical inference using weak chaos and infinite memory. In Proceedings of the Int'l Workshop on Statistical-Mechanical Informatics (IW-SMI 2010), pages 185-199, 2010.
[17] L. Younes. Parametric inference for imperfectly observed Gibbsian fields. Probability Theory and Related Fields, 82:625-645, 1989.
3,319 | 4,005 | Robust PCA via Outlier Pursuit
Huan Xu
Electrical and Computer Engineering
University of Texas at Austin
[email protected]
Constantine Caramanis
Electrical and Computer Engineering
University of Texas at Austin
[email protected]
Sujay Sanghavi
Electrical and Computer Engineering
University of Texas at Austin
[email protected]
Abstract
Singular Value Decomposition (and Principal Component Analysis) is one of the
most widely used techniques for dimensionality reduction: successful and efficiently computable, it is nevertheless plagued by a well-known, well-documented
sensitivity to outliers. Recent work has considered the setting where each point
has a few arbitrarily corrupted components. Yet, in applications of SVD or PCA
such as robust collaborative filtering or bioinformatics, malicious agents, defective genes, or simply corrupted or contaminated experiments may effectively yield
entire points that are completely corrupted.
We present an efficient convex optimization-based algorithm we call Outlier Pursuit, that under some mild assumptions on the uncorrupted points (satisfied, e.g.,
by the standard generative assumption in PCA problems) recovers the exact optimal low-dimensional subspace, and identifies the corrupted points. Such identification of corrupted points that do not conform to the low-dimensional approximation, is of paramount interest in bioinformatics and financial applications, and
beyond. Our techniques involve matrix decomposition using nuclear norm minimization, however, our results, setup, and approach, necessarily differ considerably from the existing line of work in matrix completion and matrix decomposition, since we develop an approach to recover the correct column space of the
uncorrupted matrix, rather than the exact matrix itself.
1 Introduction
This paper is about the following problem: suppose we are given a large data matrix M , and we
know it can be decomposed as
M = L0 + C0 ,
where L0 is a low-rank matrix, and C0 is non-zero in only a fraction of the columns. Aside from
these broad restrictions, both components are arbitrary. In particular we do not know the rank (or
the row/column space) of L0 , or the number and positions of the non-zero columns of C0 . Can we
recover the column-space of the low-rank matrix L0 , and the identities of the non-zero columns of
C0 , exactly and efficiently?
We are primarily motivated by Principal Component Analysis (PCA), arguably the most widely
used technique for dimensionality reduction in statistical data analysis. The canonical PCA problem [1], seeks to find the best (in the least-square-error sense) low-dimensional subspace approximation to high-dimensional points. Using the Singular Value Decomposition (SVD), PCA finds
the lower-dimensional approximating subspace by forming a low-rank approximation to the data
1
matrix, formed by considering each point as a column; the output of PCA is the (low-dimensional)
column space of this low-rank approximation.
It is well known (e.g., [2-4]) that standard PCA is extremely fragile to the presence of outliers: even
a single corrupted point can arbitrarily alter the quality of the approximation. Such non-probabilistic
or persistent data corruption may stem from sensor failures, malicious tampering, or the simple fact
that some of the available data may not conform to the presumed low-dimensional source / model.
In terms of the data matrix, this means that most of the column vectors will lie in a low-dimensional space (and hence the corresponding matrix L0 will be low-rank), while the remaining columns will be outliers, corresponding to the column-sparse matrix C. The natural question in this setting is to
ask if we can still (exactly or near-exactly) recover the column space of the uncorrupted points, and
the identities of the outliers. This is precisely our problem.
Recent years have seen a lot of work on both robust PCA [3, 5-12], and on the use of convex optimization for recovering low-dimensional structure [4, 13-15]. Our work lies at the intersection
of these two fields, but has several significant differences from work in either space. We compare
and relate our work to existing literature, and expand on the differences, in Section 3.3.
2 Problem Setup
The precise PCA-with-outliers problem that we consider is as follows: we are given n points in p-dimensional space. A fraction 1 - γ of the points lie on an r-dimensional true subspace of the ambient R^p, while the remaining γn points are arbitrarily located; we call these outliers/corrupted points. We do not have any prior information about the true subspace or its dimension r. Given the set of points, we would like to learn (a) the true subspace and (b) the identities of the outliers.
As is common practice, we collate the points into a p × n data matrix M, each of whose columns
is one of the points, and each of whose rows is one of the p coordinates. It is then clear that the data
matrix can be decomposed as
M = L0 + C0 .
Here L0 is the matrix corresponding to the non-outliers; thus rank(L0) = r. Consider its Singular Value Decomposition (SVD)

    L0 = U0 Σ0 V0^T.        (1)

Thus it is clear that the columns of U0 form an orthonormal basis for the r-dimensional true subspace. Also note that at most (1 - γ)n of the columns of L0 are non-zero (the rest correspond to the outliers). C0 is the matrix corresponding to the outliers; we will denote the set of non-zero columns of C0 by I0, with |I0| = γn. These non-zero columns are completely arbitrary.
With this notation, our intent is to exactly recover the column space of L0, and the set of outliers I0.
Clearly, this is not always going to be possible (regardless of the algorithm used) and thus we need
to impose a few additional assumptions. We develop these in Section 2.1 below.
We are also interested in the noisy case, where
M = L0 + C0 + N,
and N corresponds to any additional noise. In this case we are interested in approximate identification of both the true subspace and the outliers.
2.1 Incoherence: When does exact recovery make sense?
In general, our objective of splitting a low-rank matrix from a column-sparse one is not always well defined. As an extreme example, consider the case where the data matrix M is non-zero in only
one column. Such a matrix is both low-rank and column-sparse, thus the problem is unidentifiable.
To make the problem meaningful, we need to impose that the low-rank matrix L0 cannot itself be
column-sparse as well. This is done via the following incoherence condition.
Definition: A matrix L ∈ R^{p×n} with SVD as in (1), and (1 - γ)n of whose columns are non-zero, is said to be column-incoherent with parameter μ if

    max_i ‖V^T e_i‖^2 ≤ μr / ((1 - γ)n),

where {e_i} are the coordinate unit vectors.
Thus if V has a column aligned with a coordinate axis, then μ = (1 - γ)n/r. Similarly, if V is perfectly incoherent (e.g., if r = 1 and every non-zero entry of V has magnitude 1/√((1 - γ)n)) then μ = 1.

In the standard PCA setup, if the points are generated by some low-dimensional isometric Gaussian distribution, then with high probability one will have μ = O(max(1, log(n)/r)) [16]. Alternatively, if the points are generated by a uniform distribution over a bounded set, then μ = Θ(1).
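Given a concrete low-rank matrix, the incoherence parameter can be read off its SVD; a small numerical sketch (the helper is ours, not from the paper):

```python
import numpy as np

def column_incoherence(L, gamma):
    """Return mu such that max_i ||V^T e_i||^2 = mu * r / ((1 - gamma) * n),
    where L = U S V^T is the thin SVD and r = rank(L)."""
    n = L.shape[1]
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    r = int((s > 1e-10 * s[0]).sum())
    V = Vt[:r].T                               # n x r; row i is V^T e_i
    leverage = (V ** 2).sum(axis=1).max()      # max_i ||V^T e_i||^2
    return leverage * (1 - gamma) * n / r
```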
A small incoherence parameter μ essentially enforces that the matrix L0 will have column support that is spread out. Note that this is quite natural from the application perspective. Indeed, if the left hand side is as big as 1, it essentially means that one of the directions of the column space which we wish to recover is defined by only a single observation. Given the regime of a constant fraction of arbitrarily chosen and arbitrarily corrupted points, such a setting is not meaningful. Indeed, having a small incoherence μ is an assumption made in all methods based on nuclear norm minimization to date [4, 15-17].
We would like to identify the outliers, which can be arbitrary. However, clearly an 'outlier' point that lies in the true subspace is a meaningless concept. Thus, in matrix terms, we require that every column of C0 does not lie in the column space of L0.
The parameters μ and γ are not required for the execution of the algorithm, and do not need to be known a priori. They only arise in the analysis of our algorithm's performance.
Other Notation and Preliminaries: Capital letters such as A are used to represent matrices, and accordingly, A_i denotes the ith column vector. Letters U, V, I and their variants (complements, subscripts, etc.) are reserved for column space, row space and column support respectively. There are four associated projection operators we use throughout. The projection onto the column space U is denoted by P_U and given by P_U(A) = UU^T A, and similarly for the row space, P_V(A) = AVV^T. The matrix P_I(A) is obtained from A by setting column A_i to zero for all i ∉ I. Finally, P_T is the projection onto the space spanned by U and V, given by P_T(·) = P_U(·) + P_V(·) - P_U P_V(·). Note that P_T depends on U and V, and we suppress this notation wherever it is clear which U and V we are using. The complementary operators P_{U⊥}, P_{V⊥}, P_{T⊥} and P_{I^c} are defined as usual. The same notation is also used to represent a subspace of matrices: e.g., we write A ∈ P_U for any matrix A that satisfies P_U(A) = A. Five matrix norms are used: ‖A‖_* is the nuclear norm, ‖A‖ is the spectral norm, ‖A‖_{1,2} is the sum of the ℓ2 norms of the columns A_i, ‖A‖_{∞,2} is the largest ℓ2 norm of the columns, and ‖A‖_F is the Frobenius norm. The only vector norm used is ‖·‖_2, the ℓ2 norm. Depending on the context, I is either the identity matrix or the identity operator; e_i is the ith base vector. The SVD of L0 is U0 Σ0 V0^T. The rank of L0 is denoted by r, and we define γ := |I0|/n, i.e., the fraction of outliers.
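For later reference, the four projection operators and the two column norms take only a few lines to implement (a sketch in our own notation; U and V are assumed to have orthonormal columns):

```python
import numpy as np

def P_U(U, A):  return U @ (U.T @ A)                  # onto column space
def P_V(V, A):  return (A @ V) @ V.T                  # onto row space (V: n x r)
def P_T(U, V, A):
    return P_U(U, A) + P_V(V, A) - P_U(U, P_V(V, A))
def P_I(I, A):                                        # keep only columns in I
    B = np.zeros_like(A); B[:, I] = A[:, I]; return B

def norm_1_2(A):   return np.linalg.norm(A, axis=0).sum()    # ||A||_{1,2}
def norm_inf_2(A): return np.linalg.norm(A, axis=0).max()    # ||A||_{inf,2}
```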
3 Main Results and Consequences
While we do not recover the matrix L0 , we show that the goal of PCA can be attained: even under
our strong corruption model, with a constant fraction of points corrupted, we show that we can ?
under very weak assumptions ? exactly recover both the column space of L0 (i.e the low-dimensional
space the uncorrupted points lie on) and the column support of C0 (i.e. the identities of the outliers),
from M . If there is additional noise corrupting the data matrix, i.e. if we have M = L0 + C0 + N ,
a natural variant of our approach finds a good approximation.
3.1 Algorithm
Given the data matrix M, our algorithm, Outlier Pursuit, generates (a) a matrix Û, with orthonormal columns, that spans the low-dimensional true subspace we want to recover, and (b) a set of column indices Î corresponding to the outlier points. To ensure success, one choice of the tuning parameter is λ = 3/(7√(γn)), as Theorem 1 below suggests.
While in the noiseless case there are simple algorithms with similar performance, the benefit of the algorithm, and of the analysis, is the extension to more realistic and interesting situations where, in addition to gross corruption of some samples, there is additional noise.
Algorithm 1 Outlier Pursuit
Find (L̂, Ĉ), the optimum of the following convex optimization program:

    Minimize:   ‖L‖_* + λ‖C‖_{1,2}
    Subject to: M = L + C        (2)

Compute the SVD L̂ = U1 Σ1 V1^T and output Û = U1.
Output the set of non-zero columns of Ĉ, i.e., Î = {j : ĉ_{ij} ≠ 0 for some i}.
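For small instances, program (2) can be prototyped with a generic convex solver; the sketch below uses CVXPY and assumes a recent version providing the `normNuc` atom (this is our illustration, not the authors' implementation):

```python
import cvxpy as cp

def outlier_pursuit_cvx(M, lam):
    """Solve program (2): min ||L||_* + lam * ||C||_{1,2}  s.t.  L + C = M."""
    p, n = M.shape
    L = cp.Variable((p, n))
    C = cp.Variable((p, n))
    objective = cp.normNuc(L) + lam * cp.sum(cp.norm(C, 2, axis=0))
    prob = cp.Problem(cp.Minimize(objective), [L + C == M])
    prob.solve()
    return L.value, C.value
```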
Adapting the Outlier Pursuit algorithm, we have the following variant for the noisy case.
Noisy Outlier Pursuit:

    Minimize:   ‖L‖_* + λ‖C‖_{1,2}
    Subject to: ‖M - (L + C)‖_F ≤ ε        (3)
Outlier Pursuit (and its noisy variant) is a convex surrogate for the following natural (but combinatorial and intractable) first approach to the recovery problem:

    Minimize:   rank(L) + λ‖C‖_{0,c}
    Subject to: M = L + C        (4)

where ‖·‖_{0,c} stands for the number of non-zero columns of a matrix.
3.2 Performance
the low-rank matrix L0 , and the identities of the non-zero columns of outlier matrix C0 . The formal
statement appears below.
Theorem 1 (Noiseless Case). Suppose we observe M = L0 + C0, where L0 has rank r and incoherence parameter μ. Suppose further that C0 is supported on at most γn columns. Any output of Outlier Pursuit recovers the column space exactly, and identifies exactly the indices of columns corresponding to outliers not lying in the recovered column space, as long as the fraction of corrupted points, γ, satisfies

    γ / (1 - γ) ≤ c1 / (μr),        (5)

where c1 = 9/121. This can be achieved by setting the parameter λ in Outlier Pursuit to 3/(7√(γn)); indeed it holds for any λ in a specific range, which we provide below.

For the case where, in addition to the corrupted points, we have noisy observations M̂ = M + W, we have the following result.

Theorem 2 (Noisy Case). Suppose we observe M̂ = M + N = L0 + C0 + N, where

    γ / (1 - γ) ≤ c2 / (μr),        (6)

with c2 = 9/1024, and ‖N‖_F ≤ ε. Let the output of Noisy Outlier Pursuit be (L′, C′). Then there exist L̃, C̃ such that M = L̃ + C̃, L̃ has the correct column space, C̃ has the correct column support, and

    ‖L′ - L̃‖_F ≤ 10√n ε;   ‖C′ - C̃‖_F ≤ 9√n ε.
The conditions in this theorem are essentially tight in the following scaling sense (i.e., up to universal constants). If there is no additional structure imposed, beyond what we have stated above, then up to scaling, in the noiseless case, Outlier Pursuit can recover from as many outliers (i.e., the same fraction) as any possible algorithm with arbitrary complexity. In particular, it is easy to see that if the rank of the matrix L0 is r, and the fraction of outliers satisfies γ ≥ 1/(r + 1), then the problem is not identifiable, i.e., no algorithm can separate authentic and corrupted points.^1
^1 Note that this is no longer true in the presence of stronger assumptions, e.g., isometric distribution, on the authentic points [12].
3.3 Related Work
Robust PCA has a long history (e.g., [3, 5-11]). Each of these algorithms either performs standard
PCA on a robust estimate of the covariance matrix, or finds directions that maximize a robust estimate of the variance of the projected data. These algorithms seek to approximately recover the
column space, and moreover, no existing approach attempts to identify the set of outliers. This outlier identification, while outside the scope of traditional PCA algorithms, is important in a variety of
applications such as finance, bio-informatics, and more.
Many existing robust PCA algorithms suffer two pitfalls: performance degradation with dimension increase, and computational intractability. To wit, [18] shows several robust PCA algorithms
including M-estimator [19], Convex Peeling [20], Ellipsoidal Peeling [21], Classical Outlier Rejection [22], Iterative Deletion [23] and Iterative Trimming [24] have breakdown points proportional to
the inverse of dimensionality, and hence are useless in the high dimensional regime we consider.
Algorithms with non-diminishing breakdown point, such as Projection-Pursuit [25] are non-convex
or even combinatorial, and hence computationally intractable (NP-hard) as the size of the problem
scales. In contrast to these, the performance of Outlier Pursuit does not depend on p, and can be
solved in polynomial time.
Algorithms based on nuclear norm minimization to recover low rank matrices are now standard,
since the seminal paper [14]. Recent work [4,15] has taken the nuclear norm minimization approach
to the decomposition of a low-rank matrix and an overall sparse matrix. At a high level, these papers
are close in spirit to ours. However, there are critical differences in the problem setup, the results, and
in key analysis techniques. First, these algorithms fail in our setting as they cannot handle outliers ?
entire columns where every entry is corrupted. Second, from a technical and proof perspective, all
the above works investigate exact signal recovery ? the intended outcome is known ahead of time,
and one just needs to investigate the conditions needed for success. In our setting however, the
convex optimization cannot recover L0 itself exactly. This requires an auxiliary 'oracle problem' as well as different analysis techniques, on which we elaborate below.
4 Proof Outline and Comments
In this section we provide an outline of the proof of Theorem 1. The full proofs of all theorems appear in a full version available online [26]. The proof follows three main steps:

1. Identify the first-order necessary and sufficient conditions for any pair (L′, C′) to be the optimum of the convex program (2).
2. Consider a candidate pair (L̂, Ĉ) that is the optimum of an alternate optimization problem, often called the 'oracle problem'. The oracle problem ensures that the pair (L̂, Ĉ) has the desired column space and column support, respectively.
3. Show that this (L̂, Ĉ) is the optimum of Outlier Pursuit.
We remark that the aim of the matrix recovery papers [4, 15, 16] was exact recovery of the entire matrix, and thus the optimality conditions required are clear. Since our setup precludes exact recovery of L0 and C0,^2 our optimality conditions must imply the optimality for Outlier Pursuit of an as-of-yet-undetermined pair (L̂, Ĉ), the solution to the oracle problem. We now elaborate.
Optimality Conditions: We now specify the conditions a candidate optimum needs to satisfy; these arise from the standard subgradient conditions for the norms involved. Suppose the pair (L′, C′) is a feasible point of (2), i.e., we have L′ + C′ = M. Let the SVD of L′ be given by L′ = U′Σ′V′^T. For any matrix X, define P_{T′}(X) := U′U′^T X + XV′V′^T - U′U′^T XV′V′^T, the projection of X onto matrices that share the same column space or row space with L′.

Let I′ be the set of non-zero columns of C′, and let H′ be the column-normalized version of C′. That is, column H′_i = C′_i/‖C′_i‖_2 for all i ∈ I′, and H′_i = 0 for all i ∉ I′. Finally, for any matrix X let P_{I′}(X) denote the matrix with all columns in I′^c set to 0, and the columns in I′ left as-is.
^2 The optimum L̂ of (2) will be non-zero in every column of C0 that is not orthogonal to L0's column space.
Proposition 1. With notation as above, (L′, C′) is an optimum of the Outlier Pursuit program (2) if there exists a Q such that

    P_{T′}(Q) = U′V′^T
    P_{I′}(Q) = λH′
    ‖Q - P_{T′}(Q)‖ ≤ 1
    ‖Q - P_{I′}(Q)‖_{∞,2} ≤ λ.        (7)

Further, if both inequalities above are strict, dubbed Q strictly satisfies (7), then (L′, C′) is the unique optimum.
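The four conditions in (7) can be checked numerically for a candidate Q as follows (a sketch; the tolerance handling is ours):

```python
import numpy as np

def check_certificate(Q, U, V, I, H, lam, tol=1e-8):
    """Verify the subgradient conditions of Proposition 1.
    U (p x r) and V (n x r) are the SVD factors of L'; I is the column
    support of C'; H is the column-normalized version of C'."""
    PT_Q = U @ (U.T @ Q) + (Q @ V) @ V.T - U @ (U.T @ Q @ V) @ V.T
    PI_Q = np.zeros_like(Q); PI_Q[:, I] = Q[:, I]
    cond1 = np.allclose(PT_Q, U @ V.T, atol=tol)
    cond2 = np.allclose(PI_Q, lam * H, atol=tol)
    cond3 = np.linalg.norm(Q - PT_Q, 2) <= 1 + tol           # spectral norm
    cond4 = np.linalg.norm(Q - PI_Q, axis=0).max() <= lam + tol
    return cond1 and cond2 and cond3 and cond4
```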
Note that here ‖·‖ is the spectral norm (i.e., largest singular value) and ‖·‖_{∞,2} is the magnitude (i.e., ℓ2 norm) of the column with the largest magnitude.

Oracle Problem: We develop our candidate solution (L̂, Ĉ) by considering the alternate optimization problem where we add constraints to (2) based on what we hope its optimum should be. In particular, recall the SVD of the true L0 = U0 Σ0 V0^T and define, for any matrix X, the projection onto the space of all matrices with column space contained in U0 as P_{U0}(X) := U0 U0^T X. Similarly, for the column support I0 of the true C0, define the projection P_{I0}(X) to be the matrix that results when all the columns in I0^c are set to 0.
Note that U0 and I0 above correspond to the truth. Thus, with this notation, we would like the optimum of (2) to satisfy P_{U0}(L̂) = L̂, as this is nothing but the fact that L̂ has recovered the true subspace. Similarly, having Ĉ satisfy P_{I0}(Ĉ) = Ĉ means that we have succeeded in identifying the outliers. The oracle problem arises by imposing these as additional constraints in (2). Formally:

Oracle Problem:
    Minimize:   ‖L‖_* + λ‖C‖_{1,2}
    Subject to: M = L + C;  P_{U0}(L) = L;  P_{I0}(C) = C.        (8)

Obtaining Dual Certificates for Outlier Pursuit: We now construct a dual certificate of (L̂, Ĉ) to establish Theorem 1. Let the SVD of L̂ be Û Σ̂ V̂^T. It is easy to see that there exists an orthonormal matrix V̄ ∈ R^{r×n} such that Û V̂^T = U0 V̄^T, where U0 is the column space of L0. Moreover, it is easy to show that P_Û(·) = P_{U0}(·) and P_V̂(·) = P_{V̄}(·), and hence the operator P_T̂ defined by Û and V̂ obeys P_T̂(·) = P_{U0}(·) + P_{V̄}(·) - P_{U0}P_{V̄}(·). Let Ĥ be the matrix satisfying P_{I0^c}(Ĥ) = 0 and, for all i ∈ I0, Ĥ_i = Ĉ_i/‖Ĉ_i‖_2.
Define the matrix G ∈ R^{r×r} as

    G := P_{I0}(V̄)(P_{I0}(V̄))^T = Σ_{i∈I0} [(V̄)_i][(V̄)_i]^T,

and the constant c := ‖G‖. Further define the matrices Δ1 := λ P_{U0}(Ĥ) and

    Δ2 := P_{U0^⊥} P_{I0^c} P_{V̄} ( I + Σ_{i=1}^∞ (P_{V̄} P_{I0} P_{V̄})^i ) P_{V̄} (λĤ)
        = P_{I0^c} P_{V̄} ( I + Σ_{i=1}^∞ (P_{V̄} P_{I0} P_{V̄})^i ) P_{V̄} P_{U0^⊥} (λĤ).

Then we can define the dual certificate for strict optimality of the pair (L̂, Ĉ).

Proposition 2. If c < 1,

    γ / (1 - γ) < (1 - c)^2 / ((3 - c)^2 μr),

and

    √(μrγ/(1-γ)) / ( (1 - c - √(μrγ/(1-γ))) √(nγ) )  <  λ  <  (1 - c) / ( (2 - c) √(nγ) ),

then Q := U0 V̄^T + λĤ - Δ1 - Δ2 strictly satisfies Condition (7), i.e., it is the dual certificate.
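Numerically, G, c, and Δ1 are direct to compute, and Δ2's Neumann series can be truncated; the sketch below assumes c < 1 so the series converges, and follows the reconstruction of the formulas given above (all helper names are ours):

```python
import numpy as np

def certificate_pieces(V_bar, H_hat, U0, I0, lam, terms=50):
    """Compute G, c = ||G||, Delta_1, and a truncated Delta_2.
    V_bar is r x n with orthonormal rows; H_hat, U0 are p x n and p x r."""
    VI = V_bar.copy()
    mask = np.zeros(V_bar.shape[1], bool); mask[I0] = True
    VI[:, ~mask] = 0.0                        # P_I0(V_bar)
    G = VI @ VI.T
    c = np.linalg.norm(G, 2)
    Delta1 = lam * U0 @ (U0.T @ H_hat)
    # Neumann series (I + sum_i (P_V P_I0 P_V)^i) applied to P_V(lam * H_hat).
    X = (lam * H_hat) @ V_bar.T @ V_bar       # P_V(lam * H_hat)
    acc, term = X.copy(), X.copy()
    for _ in range(terms):
        term = (term[:, I0] @ VI[:, I0].T) @ V_bar   # one P_V P_I0 P_V step
        acc += term
    Delta2 = acc - U0 @ (U0.T @ acc)          # project onto U0-perp
    Delta2[:, I0] = 0.0                       # and onto I0^c
    return G, c, Delta1, Delta2
```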
Consider the (much) simpler case where the corrupted columns are assumed to be orthogonal to the column space of L0 which we seek to recover. Indeed, in that setting, where V0 = V̂ = V̄, we automatically satisfy the condition P_{I0} P_{V0^T} = {0}. In the general case, we require the condition c < 1 to recover the same property. Moreover, considering that the columns of Ĥ are either zero, or defined as normalizations of the columns of the matrix Ĉ (i.e., normalizations of outliers), that P_{U0}(Ĥ) = P_{V0}(Ĥ) = P_{T0}(Ĥ) = 0 is immediate, as is the condition that P_{I0}(U0 V0^T) = 0. For the general, non-orthogonal case, however, we require the matrices Δ1 and Δ2 to obtain these equalities, and the rest of the dual certificate properties. In the full version [26] we show in detail how these ideas and the oracle problem are used to construct the dual certificate Q. Extending these ideas, we then quickly obtain the proof for the noisy case.
5 Implementation issue and numerical experiments
Solving nuclear-norm minimizations naively requires the use of general-purpose SDP solvers, which unfortunately still have questionable scaling capabilities. Instead, we use proximal gradient algorithms [27], a.k.a. Singular Value Thresholding [28], to solve Outlier Pursuit. The algorithm converges with a rate of O(k^-2), where k is the number of iterations, and each iteration involves a singular value decomposition and thresholding, therefore requiring significantly less computational time than interior point methods.
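A self-contained version of this approach alternates singular value thresholding on L with column-wise shrinkage on C. The sketch below is a plain (unaccelerated) proximal-gradient loop on a penalized form of (2), so it converges at the slower O(1/k) rate; the step size and iteration count are our choices:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def col_shrink(X, tau):
    """Column-wise shrinkage: prox of tau * ||.||_{1,2}."""
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

def outlier_pursuit(M, lam, iters=500, step=0.5):
    """Proximal gradient on 0.5*||M - L - C||_F^2 + ||L||_* + lam*||C||_{1,2},
    a penalized version of program (2)."""
    L = np.zeros_like(M); C = np.zeros_like(M)
    for _ in range(iters):
        R = L + C - M                        # gradient of the smooth term
        L = svt(L - step * R, step)
        C = col_shrink(C - step * R, step * lam)
    return L, C
```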
Our first experiment investigates the phase-transition property of Outlier Pursuit, using randomly generated synthetic data. Fix n = p = 400. For different r and number of outliers γn, we generated matrices A ∈ R^{p×r} and B ∈ R^{(n-γn)×r} where each entry is an independent N(0,1) random variable, and then set L* := AB^T (the 'clean' part of M). Outliers, C* ∈ R^{p×γn}, are generated either neutrally, where each entry of C* is i.i.d. N(0,1), or adversarially, where every column is an identical copy of a random Gaussian vector. Outlier Pursuit succeeds if Ĉ ∈ P_I and L̂ ∈ P_U. Note that if a lot of outliers span a same direction, it would be difficult to identify whether they are all outliers, or just a new direction of the true space. Indeed, such a setup is order-wise worst, as we proved in the full version [26]: a matching lower bound is achieved when all outliers are identical.
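The synthetic data just described can be generated as follows (a sketch; the seeding is our choice):

```python
import numpy as np

def make_data(n=400, p=400, r=5, gamma=0.0125, identical=False, rng=None):
    """Low-rank inliers L* = A B^T plus gamma*n outlier columns."""
    rng = rng or np.random.default_rng(0)
    k = int(gamma * n)                        # number of outliers
    A = rng.standard_normal((p, r))
    B = rng.standard_normal((n - k, r))
    L_star = A @ B.T                          # clean part, p x (n - k)
    if identical:                             # adversarial: one repeated column
        C_star = np.tile(rng.standard_normal((p, 1)), (1, k))
    else:                                     # neutral: i.i.d. Gaussian entries
        C_star = rng.standard_normal((p, k))
    M = np.concatenate([L_star, C_star], axis=1)
    outlier_idx = np.arange(n - k, n)
    return M, outlier_idx
```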
Figure 1: Complete Observation: Results averaged over 10 trials. (a) Random Outlier; (b) Identical Outlier; (c) Noisy Outlier Detection (success rate against σ/s, for s = 10 and s = 20, with random and identical outliers).
Figure 1 shows the phase-transition property. We represent success in gray scale, with white denoting success, and black failure. When outliers are random (the easier case) Outlier Pursuit succeeds even when r = 20 with 100 outliers. In the adversarial case, we observe a phase transition: Outlier Pursuit succeeds when r · γ is small, and fails otherwise, consistent with our theory's predictions.

We then fix r = γn = 5 and examine the outlier-identification ability of Outlier Pursuit with noisy observations. We scale each outlier so that the ℓ2 distance of the outlier to the span of the true samples equals a pre-determined value s. Each true sample is thus corrupted with a Gaussian random vector with an ℓ2 magnitude σ. We perform (noiseless) Outlier Pursuit on this noisy observation matrix, and claim that the algorithm successfully identifies outliers if for the resulting Ĉ matrix, ‖Ĉ_j‖_2 < ‖Ĉ_i‖_2 for all j ∉ I and i ∈ I, i.e., there exists a threshold value to separate out outliers. Figure 1(c) shows the result: when σ/s ≤ 0.3 for the identical-outlier case, and σ/s ≤ 0.7 for the random-outlier case, Outlier Pursuit correctly identifies the outliers.
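The identification criterion used here reduces to a comparison of column norms of Ĉ (sketch):

```python
import numpy as np

def outliers_separable(C_hat, outlier_idx):
    """True if some threshold on the column norms of C_hat separates
    the outlier columns from the inlier columns."""
    norms = np.linalg.norm(C_hat, axis=0)
    mask = np.zeros(C_hat.shape[1], bool); mask[outlier_idx] = True
    return norms[~mask].max() < norms[mask].min()
```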
We further study the case of decomposing M under incomplete observation, which is motivated by robust collaborative filtering: we generate M as before, but only observe each entry with a given probability (independently). Letting Ω be the set of observed entries, we solve

    Minimize:   ‖L‖_* + λ‖C‖_{1,2};   Subject to:  P_Ω(L + C) = P_Ω(M).        (9)
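In the prototyping style used earlier, (9) only changes the data-fit constraint to act on the observed entries; a sketch with a 0/1 observation mask (again our illustration, assuming CVXPY):

```python
import cvxpy as cp

def outlier_pursuit_partial(M, obs, lam):
    """Program (9): agree with M on observed entries only (obs: 0/1 mask)."""
    W = obs.astype(float)
    p, n = M.shape
    L, C = cp.Variable((p, n)), cp.Variable((p, n))
    objective = cp.normNuc(L) + lam * cp.sum(cp.norm(C, 2, axis=0))
    constraints = [cp.multiply(W, L + C) == W * M]
    prob = cp.Problem(cp.Minimize(objective), constraints)
    prob.solve()
    return L.value, C.value
```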
The same success condition is used. Figure 2 shows a very promising result: the successful decomposition rate under incomplete observation is close to the complete-observation case even when only 30% of the entries are observed. Given this empirical result, a natural direction for future research is to understand the theoretical guarantees of (9) in the incomplete observation case.
Next we report some experiment results on the USPS digit data set. The goal of this experiment is to show that Outlier Pursuit can be used to identify anomalies within the dataset. We use the data from [29], and construct the observation matrix M as containing the first 220 samples of digit '1' and the last 11 samples of '7'. The learning objective is to correctly identify all the '7's. Note that throughout the experiment, label information is unavailable to the algorithm, i.e., there is no training stage. Since the columns of digit '1' are not exactly low rank, an exact decomposition
Figure 2: Partial Observation. (a) 30% entries observed; (b) 80% entries observed; (c) success rate against the fraction of observed entries.
is not possible. Hence, we use the ℓ2 norm of each column in the resulting Ĉ matrix to identify the outliers: a larger ℓ2 norm means that the sample is more likely to be an outlier; essentially, we apply thresholding after Ĉ is obtained. Figure 3(a) shows the ℓ2 norm of each column of the resulting Ĉ matrix. We see that all '7's are indeed identified. However, two '1' samples (columns 71 and 177) are also identified as outliers, due to the fact that these two samples are written in a way that is different from the rest of the '1's, as shown in Figure 4. Under the same setup, we also simulate the case where only 80% of the entries are observed. As Figure 3(b) and (c) show, results similar to those of the complete-observation case are obtained, i.e., all true '7's and also '1's No 71 and No 177 are identified.
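Ranking the columns of Ĉ by their ℓ2 norm, as done above, is a one-liner on top of the solver output (sketch):

```python
import numpy as np

def rank_outliers(C_hat, top_k):
    """Return the indices of the top_k columns of C_hat by l2 norm,
    together with all column scores."""
    scores = np.linalg.norm(C_hat, axis=0)
    return np.argsort(scores)[::-1][:top_k], scores
```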
Figure 3: Outlyingness: ℓ2 norm of Ĉ_i against column index i. (a) Complete Observation; (b) Partial Obs. (one run); (c) Partial Obs. (average).
Figure 4: Typical '1', '7' and abnormal '1' samples (No 71 and No 177).
6 Conclusion and Future Direction
Outlier Pursuit. Under some mild conditions, we show that Outlier Pursuit can exactly recover the
column support, and exactly identify outliers. This result is new, differing both from results in
Robust PCA, and also from results using nuclear-norm approaches for matrix completion and matrix
reconstruction. One central innovation we introduce is the use of an oracle problem. Whenever the
recovery concept (in this case, column space) does not uniquely correspond to a single matrix (we
believe many, if not most cases of interest, will fall under this description), the use of such a tool
will be quite useful. Immediate goals for future work include considering specific applications, in
particular, robust collaborative filtering (here, the goal is to decompose a partially observed column-corrupted matrix) and also obtaining tight bounds for outlier identification in the noisy case.
Acknowledgements H. Xu would like to acknowledge support from DTRA grant HDTRA1-080029. C. Caramanis would like to acknowledge support from NSF grants EFRI-0735905, CNS-0721532, CNS-0831580, and DTRA grant HDTRA1-08-0029. S. Sanghavi would like to acknowledge support from the NSF CAREER program, Grant 0954059.
References
[1] I.T. Jolliffe. Principal Component Analysis. Springer Series in Statistics, Berlin: Springer, 1986.
[2] P.J. Huber. Robust Statistics. John Wiley & Sons, New York, 1981.
[3] L. Xu and A.L. Yuille. Robust principal component analysis by self-organizing rules based on statistical physics approach. IEEE Trans. on Neural Networks, 6(1):131-143, 1995.
[4] E. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? ArXiv:0912.3599, 2009.
[5] S.J. Devlin, R. Gnanadesikan, and J.R. Kettenring. Robust estimation of dispersion matrices and principal components. Journal of the American Statistical Association, 76(374):354-362, 1981.
[6] T.N. Yang and S.D. Wang. Robust algorithms for principal component analysis. Pattern Recognition Letters, 20(9):927-933, 1999.
[7] C. Croux and G. Hasebroeck. Principal component analysis based on robust estimators of the covariance or correlation matrix: Influence functions and efficiencies. Biometrika, 87(3):603-618, 2000.
[8] F. De la Torre and M.J. Black. Robust principal component analysis for computer vision. In ICCV'01, pages 362-369, 2001.
[9] F. De la Torre and M.J. Black. A framework for robust subspace learning. International Journal of Computer Vision, 54(1/2/3):117-142, 2003.
[10] C. Croux, P. Filzmoser, and M. Oliveira. Algorithms for Projection-Pursuit robust principal component analysis. Chemometrics and Intelligent Laboratory Systems, 87(2):218-225, 2007.
[11] S.C. Brubaker. Robust PCA and clustering on noisy mixtures. In SODA'09, pages 1078-1087, 2009.
[12] H. Xu, C. Caramanis, and S. Mannor. Principal component analysis with contaminated data: The high dimensional case. In COLT'10, pages 490-502, 2010.
[13] E.J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. on Information Theory, 52(2):489-509, 2006.
[14] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum rank solutions to linear matrix equations via nuclear norm minimization. To appear in SIAM Review, 2010.
[15] V. Chandrasekaran, S. Sanghavi, P. Parrilo, and A. Willsky. Rank-sparsity incoherence for matrix decomposition. ArXiv:0906.2220, 2009.
[16] E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9:717-772, 2009.
[17] E. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. on Information Theory, 56:2053-2080, 2010.
[18] D.L. Donoho. Breakdown properties of multivariate location estimators. Qualifying paper, Harvard University, 1982.
[19] R. Maronna. Robust M-estimators of multivariate location and scatter. The Annals of Statistics, 4:51-67, 1976.
[20] V. Barnett. The ordering of multivariate data. Journal of the Royal Statistical Society, Series A, 138:318-344, 1976.
[21] D. Titterington. Estimation of correlation coefficients by ellipsoidal trimming. Applied Statistics, 27:227-234, 1978.
[22] V. Barnett and T. Lewis. Outliers in Statistical Data. Wiley, New York, 1978.
[23] A. Dempster and M. Gasko-Green. New tools for residual analysis. The Annals of Statistics, 9(5):945-959, 1981.
[24] S.J. Devlin, R. Gnanadesikan, and J.R. Kettenring. Robust estimation and outlier detection with correlation coefficients. Biometrika, 62:531-545, 1975.
[25] G. Li and Z. Chen. Projection-pursuit approach to robust dispersion matrices and principal components: Primary theory and Monte Carlo. Journal of the American Statistical Association, 80(391):759-766, 1985.
[26] H. Xu, C. Caramanis, and S. Sanghavi. Robust PCA via outlier pursuit. http://arxiv.org/abs/1010.4237, 2010.
[27] Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27:372-376, 1983.
[28] J.-F. Cai, E. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20:1956-1982, 2008.
[29] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
3,320 | 4,006 | Parallelized Stochastic Gradient Descent
Markus Weimer
Yahoo! Labs
Sunnyvale, CA 94089
[email protected]
Martin A. Zinkevich
Yahoo! Labs
Sunnyvale, CA 94089
[email protected]
Lihong Li
Yahoo! Labs
Sunnyvale, CA 94089
[email protected]
Alex Smola
Yahoo! Labs
Sunnyvale, CA 94089
[email protected]
Abstract
With the increase in available data parallel machine learning has become an increasingly pressing problem. In this paper we present the first parallel stochastic
gradient descent algorithm including a detailed analysis and experimental evidence. Unlike prior work on parallel optimization algorithms [5, 7] our variant
comes with parallel acceleration guarantees and it poses no overly tight latency
constraints, which might only be available in the multicore setting. Our analysis introduces a novel proof technique, contractive mappings, to quantify the
speed of convergence of parameter distributions to their asymptotic limits. As a
side effect this answers the question of how quickly stochastic gradient descent
algorithms reach the asymptotically normal regime [1, 8].
1 Introduction
Over the past decade the amount of available data has increased steadily. By now some industrial
scale datasets are approaching Petabytes. Given that the bandwidth of storage and network per
computer has not been able to keep up with the increase in data, the need to design data analysis
algorithms which are able to perform most steps in a distributed fashion without tight constraints
on communication has become ever more pressing. A simple example illustrates the dilemma. At
current disk bandwidth and capacity (2TB at 100MB/s throughput) it takes at least 6 hours to read
the content of a single harddisk. For a decade, the move from batch to online learning algorithms
was able to deal with increasing data set sizes, since it reduced the runtime behavior of inference
algorithms from cubic or quadratic to linear in the sample size. However, whenever we have more
than a single disk of data, it becomes computationally infeasible to process all data by stochastic
gradient descent which is an inherently sequential algorithm, at least if we want the result within a
matter of hours rather than days.
Three recent papers attempted to break this parallelization barrier, each of them with mixed success. [5] show that parallelization is easily possible for the multicore setting where we have a tight
coupling of the processing units, thus ensuring extremely low latency between the processors. In
particular, for non-adversarial settings it is possible to obtain algorithms which scale perfectly in
the number of processors, both in the case of bounded gradients and in the strongly convex case.
Unfortunately, these algorithms are not applicable to a MapReduce setting since the latter is fraught
with considerable latency and bandwidth constraints between the computers.
A more MapReduce friendly set of algorithms was proposed by [3, 9]. In a nutshell, they rely on
distributed computation of gradients locally on each computer which holds parts of the data and
subsequent aggregation of gradients to perform a global update step. This algorithm scales linearly
in the amount of data and log-linearly in the number of computers. That said, the overall cost in
terms of computation and network is very high: it requires many passes through the dataset for
convergence. Moreover, it requires many synchronization sweeps (i.e. MapReduce iterations). In
other words, this algorithm is computationally very wasteful when compared to online algorithms.
[7] attempted to deal with this issue by a rather ingenious strategy: solve the sub-problems exactly on
each processor and in the end average these solutions to obtain a joint solution. The key advantage
of this strategy is that only a single MapReduce pass is required, thus dramatically reducing the
amount of communication. Unfortunately their proposed algorithm has a number of drawbacks:
the theoretical guarantees they are able to obtain imply a significant variance reduction relative
to the single processor solution [7, Theorem 3, equation 13] but no bias reduction whatsoever [7,
Theorem 2, equation 9] relative to a single processor approach. Furthermore, their approach requires
a relatively expensive algorithm (a full batch solver) to run on each processor. A further drawback
of the analysis in [7] is that the convergence guarantees are very much dependent on the degree of
strong convexity as endowed by regularization. However, since regularization tends to decrease with
increasing sample size the guarantees become increasingly loose in practice as we see more data.
We attempt to combine the benefits of a single-average strategy as proposed by [7] with asymptotic
analysis [8] of online learning. Our proposed algorithm is strikingly simple: denote by c^i(w) a loss function indexed by i and with parameter w. Then each processor carries out stochastic gradient descent on the set of c^i(w) with a fixed learning rate η for T steps as described in Algorithm 1.
Algorithm 1 SGD({c^1, ..., c^m}, T, η, w_0)
  for t = 1 to T do
    Draw j ∈ {1, ..., m} uniformly at random.
    w_t ← w_{t−1} − η ∇_w c^j(w_{t−1}).
  end for
  return w_T.
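For concreteness, the following is a minimal Python sketch of Algorithm 1. The gradient oracle `grad(c, w)` and the list-of-examples representation are assumptions of this illustration, not part of the paper's notation.

```python
import random

def sgd(examples, T, eta, w0, grad):
    """Algorithm 1: serial SGD with a fixed learning rate eta.

    `grad(c, w)` is an assumed gradient oracle returning the gradient of
    loss c at parameter vector w (both plain Python lists here)."""
    w = list(w0)
    for _ in range(T):
        c = random.choice(examples)              # draw j uniformly from {1, ..., m}
        g = grad(c, w)
        w = [wi - eta * gi for wi, gi in zip(w, g)]
    return w
```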
On top of the SGD routine which is carried out on each computer we have a master-routine which
aggregates the solution in the same fashion as [7].
Algorithm 2 ParallelSGD({c^1, ..., c^m}, T, η, w_0, k)
  for all i ∈ {1, ..., k} parallel do
    v_i = SGD({c^1, ..., c^m}, T, η, w_0) on client
  end for
  Aggregate from all computers v = (1/k) Σ_{i=1}^k v_i and return v.
The key algorithmic difference to [7] is that the batch solver of the inner loop is replaced by a
stochastic gradient descent algorithm which digests not a fixed fraction of data but rather a random
fixed subset of data. This means that if we process T instances per machine, each processor ends up seeing T/m of the data, which is likely to exceed 1/k.
Algorithm                         | Latency tolerance | MapReduce | Network IO | Scalability
----------------------------------|-------------------|-----------|------------|------------
Distributed subgradient [3, 9]    | moderate          | yes       | high       | linear
Distributed convex solver [7]     | high              | yes       | low        | unclear
Multicore stochastic gradient [5] | low               | no        | n.a.       | linear
This paper                        | high              | yes       | low        | linear
A direct implementation of the algorithms above would place every example on every machine:
however, if T is much less than m, then it is only necessary for a machine to have access to the
data it actually touches. Large scale learning, as defined in [2], is when an algorithm is bounded
by the time available instead of by the amount of data available. Practically speaking, that means
that one can consider the actual data in the real dataset to be a subset of a virtually infinite set,
and drawing with replacement (as the theory here implies) and drawing without replacement on the
Algorithm 3 SimuParallelSGD(Examples {c^1, ..., c^m}, Learning Rate η, Machines k)
  Define T = ⌊m/k⌋
  Randomly partition the examples, giving T examples to each machine.
  for all i ∈ {1, ..., k} parallel do
    Randomly shuffle the data on machine i.
    Initialize w_{i,0} = 0.
    for all t ∈ {1, ..., T} do
      Get the t-th example on the i-th machine (this machine), c^{i,t}
      w_{i,t} ← w_{i,t−1} − η ∇_w c^{i,t}(w_{i,t−1})
    end for
  end for
  Aggregate from all computers v = (1/k) Σ_{i=1}^k w_{i,T} and return v.
infinite data set can both be simulated by shuffling the real data and accessing it sequentially. The
initial distribution and shuffling can be a part of how the data is saved. SimuParallelSGD fits very
well with the large scale learning paradigm as well as the MapReduce framework. Our paper applies
an anytime algorithm via stochastic gradient descent. The algorithm requires no communication
between machines until the end. This is perfectly suited to MapReduce settings. Asymptotically,
the error approaches zero. The amount of time required is independent of the number of examples,
only depending upon the regularization parameter and the desired error at the end.
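As a rough illustration of Algorithm 3, the sketch below simulates the k machines with a loop; the gradient oracle and the explicit `dim` argument are assumptions of this illustration. In a MapReduce job, each iteration of the outer loop would run as an independent mapper, with the final average computed in a single reducer.

```python
import numpy as np

def simu_parallel_sgd(examples, eta, k, grad, dim, seed=0):
    """Algorithm 3 sketch: partition the data, run one independent SGD chain
    per (simulated) machine with the same fixed learning rate, then average."""
    rng = np.random.default_rng(seed)
    T = len(examples) // k
    perm = rng.permutation(len(examples))        # random partition of the examples
    w_sum = np.zeros(dim)
    for i in range(k):                           # one loop body per machine
        shard = [examples[j] for j in perm[i * T:(i + 1) * T]]
        rng.shuffle(shard)                       # local shuffle on machine i
        w = np.zeros(dim)                        # w_{i,0} = 0
        for c in shard:                          # no communication until the end
            w = w - eta * grad(c, w)
        w_sum += w
    return w_sum / k                             # aggregate v = (1/k) sum_i w_{i,T}
```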
2 Formalism
In stark contrast to the simplicity of Algorithm 2, its convergence analysis is highly technical. Hence
we limit ourselves to presenting the main results in this extended abstract. Detailed proofs are given
in the appendix. Before delving into details we briefly outline the proof strategy:
- When performing stochastic gradient descent with fixed (and sufficiently small) learning rate η, the distribution of the parameter vector is asymptotically normal [1, 8]. Since all computers are drawing from the same data distribution, they all converge to the same limit.
- Averaging between the parameter vectors of k computers reduces variance by O(k^{−1/2}), similar to the result of [7]. However, it does not reduce bias (this is where [7] falls short).
- To show that the bias due to joint initialization decreases, we need to show that the distribution of parameters per machine converges sufficiently quickly to the limit distribution.
- Finally, we also need to show that the mean of the limit distribution for fixed learning rate is sufficiently close to the risk minimizer. That is, we need to take finite-size learning rate effects into account relative to the asymptotically normal regime.
2.1 Loss and Contractions
In this paper we consider estimation with convex loss functions c^i : ℓ₂ → [0, ∞). While our analysis extends to other Hilbert spaces such as RKHSs, we limit ourselves to this class of functions
for convenience. For instance, in the case of regularized risk minimization we have
$$c^i(w) = \frac{\lambda}{2}\,\|w\|^2 + L(x^i, y^i, w \cdot x^i) \qquad (1)$$

where L is a convex function in w · x^i, such as ½(y^i − w · x^i)² for regression or log[1 + exp(−y^i w · x^i)] for binary classification. The goal is to find an approximate minimizer of the overall risk

$$c(w) = \frac{1}{m} \sum_{i=1}^m c^i(w). \qquad (2)$$
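For instance, for the squared-error choice of L above, the per-example loss and its gradient take the following form (a sketch assuming numpy arrays; the function names are ours):

```python
import numpy as np

def c_i(w, x_i, y_i, lam):
    # Eq. (1) with L(x, y, w.x) = 1/2 * (y - w.x)^2
    return 0.5 * lam * (w @ w) + 0.5 * (y_i - w @ x_i) ** 2

def grad_c_i(w, x_i, y_i, lam):
    # gradient used in the SGD update w <- w - eta * grad
    return lam * w - (y_i - w @ x_i) * x_i

def risk(w, X, y, lam):
    # overall risk of Eq. (2): the average of the m per-example losses
    return np.mean([c_i(w, x, t, lam) for x, t in zip(X, y)])
```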
To deal with stochastic gradient descent we need tools for quantifying distributions over w.
Lipschitz continuity: A function f : X → ℝ is Lipschitz continuous with constant L with respect to a distance d if |f(x) − f(y)| ≤ L d(x, y) for all x, y ∈ X.

Hölder continuity: A function f is Hölder continuous with constant L and exponent α if |f(x) − f(y)| ≤ L d^α(x, y) for all x, y ∈ X.

Lipschitz seminorm: [10] introduce a seminorm. With minor modification we use

$$\|f\|_{\mathrm{Lip}} := \inf\,\{\,l \text{ where } |f(x) - f(y)| \le l\, d(x, y) \text{ for all } x, y \in X\,\}. \qquad (3)$$

That is, ‖f‖_Lip is the smallest constant for which Lipschitz continuity holds.

Hölder seminorm: Extending the Lipschitz norm for α ≤ 1:

$$\|f\|_{\mathrm{Lip}^\alpha} := \inf\,\{\,l \text{ where } |f(x) - f(y)| \le l\, d^\alpha(x, y) \text{ for all } x, y \in X\,\}. \qquad (4)$$

Contraction: For a metric space (M, d), f : M → M is a contraction mapping if ‖f‖_Lip < 1.

In the following we assume that ‖L(x, y, y′)‖_Lip ≤ G as a function of y′ for all occurring data (x, y) ∈ X × Y and for all values of w within a suitably chosen (often compact) domain.
Theorem 1 (Banach's Fixed Point Theorem) If (M, d) is a non-empty complete metric space, then any contraction mapping f on (M, d) has a unique fixed point x* = f(x*).

Corollary 2 The sequence x_t = f(x_{t−1}) converges linearly with d(x*, x_t) ≤ ‖f‖_Lip^t d(x_0, x*).
Our strategy is to show that the stochastic gradient descent mapping

$$w \mapsto \phi_i(w) := w - \eta\, \nabla c^i(w) \qquad (5)$$

is a contraction, where i is selected uniformly at random from {1, ..., m}. This would allow us to demonstrate exponentially fast convergence. Note that since the algorithm selects i at random, different runs with the same initial settings can produce different results. A key tool is the following:
Lemma 3 Let c̄ ≥ ‖∂_ŷ L(x^i, y^i, ŷ)‖_Lip be a Lipschitz bound on the loss gradient. Then if η ≤ (‖x^i‖² c̄ + λ)^{−1}, the update rule (5) is a contraction mapping in ℓ₂ with Lipschitz constant 1 − ηλ.

We prove this in Appendix B. If we choose η "low enough", gradient descent uniformly becomes a contraction. We define

$$\eta^* := \min_i \big(\|x^i\|^2\, \bar{c} + \lambda\big)^{-1}. \qquad (6)$$
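Computing η* from the data is straightforward; a minimal sketch, with c̄ supplied by the user as in Lemma 3:

```python
import numpy as np

def eta_star(X, c_bar, lam):
    """Eq. (6): eta* = min_i (||x^i||^2 * c_bar + lam)^(-1), where the rows
    of X are the examples x^i and c_bar bounds the loss-gradient Lipschitz
    constant as in Lemma 3."""
    norms_sq = (X ** 2).sum(axis=1)
    return 1.0 / (norms_sq.max() * c_bar + lam)
```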
2.2 Contraction for Distributions
For fixed learning rate η stochastic gradient descent is a Markov process with state vector w. While there is considerable research regarding the asymptotic properties of this process [1, 8], not much is known regarding the number of iterations required until the asymptotic regime is assumed. We now address the latter by extending the notion of contractions from mappings of points to mappings of distributions. For this we introduce the Monge-Kantorovich-Wasserstein earth mover's distance.
Definition 4 (Wasserstein metric) For a Radon space (M, d) let P(M, d) be the set of all distributions over the space. The Wasserstein distance between two distributions X, Y ∈ P(M, d) is

$$W_z(X, Y) = \left( \inf_{\gamma \in \Gamma(X, Y)} \int_{x,y} d^z(x, y)\, d\gamma(x, y) \right)^{1/z} \qquad (7)$$

where Γ(X, Y) is the set of probability distributions on (M, d) × (M, d) with marginals X and Y.
This metric has two very important properties: it is complete, and a contraction in (M, d) induces a contraction in (P(M, d), W_z). Given a mapping φ : M → M, we can construct p : P(M, d) → P(M, d) by applying φ pointwise to M. Let X ∈ P(M, d) and let X′ := p(X). Denote for any measurable event E its pre-image by φ^{−1}(E). Then we have that X′(E) = X(φ^{−1}(E)).
Lemma 5 Given a metric space (M, d) and a contraction mapping φ on (M, d) with constant c, p is a contraction mapping on (P(M, d), W_z) with constant c.
This is proven in Appendix C. This shows that any single mapping is a contraction. However, since we draw c^i at random, we need to show that a mixture of such mappings is a contraction, too. Here the fact that we operate on distributions comes in handy, since a mixture of mappings on distributions is itself a mapping on distributions.
Lemma 6 Given a Radon space (M, d), if p_1, ..., p_k are contraction mappings with constants c_1, ..., c_k with respect to W_z, and Σ_i a_i = 1 where a_i ≥ 0, then p = Σ_{i=1}^k a_i p_i is a contraction mapping with a constant of no more than [Σ_i a_i (c_i)^z]^{1/z}.

Corollary 7 If for all i, c_i ≤ c, then p is a contraction mapping with a constant of no more than c.
This is proven in Appendix C. We apply this to SGD as follows: Define p_η = (1/m) Σ_{i=1}^m p^i to be the stochastic operation in one step. Denote by D_η^0 the initial parameter distribution from which w_0 is drawn and by D_η^t the parameter distribution after t steps, which is obtained via D_η^t = p_η(D_η^{t−1}). Then the following holds:
Theorem 8 For any z ∈ ℕ, if η ≤ η*, then p_η is a contraction mapping on (M, W_z) with contraction rate (1 − ηλ). Moreover, there exists a unique fixed point D_η^∞ such that p_η(D_η^∞) = D_η^∞. Finally, if w_0 = 0 with probability 1, then W_z(D_η^0, D_η^∞) = G/λ, and W_z(D_η^T, D_η^∞) ≤ (G/λ)(1 − ηλ)^T.

This is proven in Appendix F. The contraction rate (1 − ηλ) can be proven by applying Lemma 3, Lemma 5, and Corollary 7. As we show later, w_t ≤ G/λ with probability 1, so Pr_{w∼D_η^∞}[d(0, w) ≤ G/λ] = 1, and since w_0 = 0, this implies W_z(D_η^0, D_η^∞) = G/λ. From this, Corollary 2 establishes W_z(D_η^T, D_η^∞) ≤ (G/λ)(1 − ηλ)^T.
This means that for a suitable choice of η we achieve exponentially fast convergence in T to some stationary distribution D_η^∞. Note that this distribution need not be centered at the risk minimizer of c(w). What the result does, though, is establish a guarantee that each computer carrying out Algorithm 1 will converge rapidly to the same distribution over w, which will allow us to obtain good bounds if we can bound the "bias" and "variance" of D_η^∞.
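A toy simulation makes the picture concrete: independent SGD chains with a fixed learning rate settle into the same stationary distribution, and averaging k of them shrinks its variance. The scalar model below is our own illustration, not an experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
eta, lam, T, k = 0.05, 0.1, 1000, 10

def sgd_chain():
    # one chain on c^i(w) = lam/2 * w^2 + 1/2 * (y_i - w)^2 with noisy targets
    w = 0.0
    for _ in range(T):
        y_i = 1.0 + rng.normal()
        w -= eta * (lam * w - (y_i - w))
    return w

singles = np.array([sgd_chain() for _ in range(100)])
averages = np.array([np.mean([sgd_chain() for _ in range(k)]) for _ in range(100)])
print(singles.var(), averages.var())   # averaging k chains gives roughly 1/k the variance
```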
2.3 Guarantees for the Stationary Distribution
At this point, we know there exists a stationary distribution, and our algorithms are converging to
that distribution exponentially fast. However, unlike in traditional gradient descent, the stationary
distribution is not necessarily just the optimal point. In particular, the harder parts of understanding
this algorithm involve understanding the properties of the stationary distribution. First, we show that
the mean of the stationary distribution has low error. Therefore, if we ran for a really long time and
averaged over many samples, the error would be low.
Theorem 9 c(E_{w∼D_η^∞}[w]) − min_{w∈ℝ^n} c(w) ≤ 2ηG².
Proven in Appendix G using techniques from regret minimization. Secondly, we show that the
squared distance from the optimal point, and therefore the variance, is low.
Theorem 10 The average squared distance of D_η^∞ from the optimal point is bounded by:

$$\mathbb{E}_{w \sim D_\eta^\infty}\big[(w - w^*)^2\big] \;\le\; \frac{4\eta G^2}{(2 - \eta\lambda)\lambda}.$$

In other words, the squared distance is bounded by O(ηG²/λ).
Proven in Appendix I using techniques from reinforcement learning. In what follows, if x ∈ M and Y ∈ P(M, d), we define W_z(x, Y) to be the W_z distance between Y and a distribution with a probability of 1 at x. Throughout the appendix, we develop tools to show that the distribution over the output vector of the algorithm is "near" μ_{D_η^∞}, the mean of the stationary distribution. In particular, if D_η^{T,k} is the distribution over the final vector of ParallelSGD after T iterations on each of k machines with a learning rate η, then W_2(μ_{D_η^∞}, D_η^{T,k}) = √(E_{x∼D_η^{T,k}}[(x − μ_{D_η^∞})²]) becomes small. Then, we need to connect the error of the mean of the stationary distribution to a distribution that is near to this mean.
Theorem 11 Given a cost function c such that ‖c‖_L and ‖∇c‖_L are bounded, and a distribution D such that σ_D is bounded, then, for any v:

$$\mathbb{E}_{w \sim D}[c(w)] - \min_w c(w) \;\le\; W_2(v, D)\,\sqrt{2\,\|\nabla c\|_L\,\big(c(v) - \min_w c(w)\big)} \;+\; \frac{\|\nabla c\|_L}{2}\,\big(W_2(v, D)\big)^2 \;+\; \big(c(v) - \min_w c(w)\big). \qquad (8)$$
This is proven in Appendix K. The proof is related to the Kantorovich-Rubinstein theorem, and bounds the Lipschitz constant of c near v based on c(v) − min_w c(w). At this point, we are ready to state the main theorem:
Theorem 12 If η ≤ η* and $T = \frac{\ln k - (\ln \eta + \ln \lambda)}{2\eta\lambda}$:

$$\mathbb{E}_{w \sim D_\eta^{T,k}}[c(w)] - \min_w c(w) \;\le\; \sqrt{\frac{8\eta G^2}{k\lambda}}\,\|\nabla c\|_L + \frac{8\eta G^2\,\|\nabla c\|_L}{k\lambda} + 2\eta G^2. \qquad (9)$$

This is proven in Appendix K.
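Plugging numbers into Theorem 12 is a useful sanity check; the helper below evaluates T and the right-hand side of (9). The defaults G = ‖∇c‖_L = 1 mirror the unit normalizations used in Section 2.4 and are our choice for illustration.

```python
import numpy as np

def theorem12(eta, lam, k, G=1.0, L=1.0):
    """T and the bound of Eq. (9); L stands for ||grad c||_L."""
    T = (np.log(k) - (np.log(eta) + np.log(lam))) / (2 * eta * lam)
    bound = (np.sqrt(8 * eta * G**2 / (k * lam)) * L
             + 8 * eta * G**2 * L / (k * lam)
             + 2 * eta * G**2)
    return T, bound

print(theorem12(eta=1e-3, lam=1e-3, k=100))
```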
2.4 Discussion of the Bound
The guarantee obtained in (9) appears rather unusual insofar as it does not have an explicit dependency on the sample size. This is to be expected since we obtained a bound in terms of risk minimization of the given corpus rather than a learning bound. Instead the runtime required depends
only on the accuracy of the solution itself.
In comparison to [2], we look at the number of iterations to reach accuracy ρ for SGD in Table 2. Ignoring the effect of the dimensions (such as ν and d), setting these parameters to 1, and assuming that the conditioning number κ = 1/λ, and ρ = η. In terms of our bound, we assume G = 1 and ‖∇c‖_L = 1. In order to make our error order η, we must set k = 1/η. So, the Bottou paper claims a bound of νκ²/ρ iterations, which we interpret as 1/(ηλ²). Modulo logarithmic factors, we require 1/η machines to run 1/(ηλ) time, which is the same order of computation, but a dramatic speedup of a factor of k in wall clock time.

Another important aspect of the algorithm is that it can be arbitrarily precise. By halving η and roughly doubling T, you can halve the error. Also, the bound captures how much parallelization can help. If k > ‖∇c‖_L/λ, then the last term 2ηG² will start to dominate.
3 Experiments
Data: We performed experiments on a proprietary dataset drawn from a major email system with labels y ∈ {±1} and binary, sparse features. The dataset contains 3,189,235 time-stamped instances, out of which the last 681,015 instances are used to form the test set, leaving 2,508,220 training points. We used hashing to compress the features into a 2^18-dimensional space. In total, the dataset contained 785,751,531 features after hashing, which means that each instance has about 313 features on average. Thus, the average sparsity of each data point is 0.0012. All instances have been normalized to unit length for the experiments.
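The hashing step can be sketched as follows; Python's built-in hash() stands in for whatever hash function was actually used, which the text does not specify.

```python
import numpy as np

def hash_features(tokens, dim=2**18):
    """Hashing trick: map sparse string features into a fixed 2^18-dimensional
    space and normalize to unit length, as described in the text."""
    v = np.zeros(dim)
    for tok in tokens:
        v[hash(tok) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```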
Figure 1: Relative training error with λ = 1e−3: Huber loss (left) and squared error (right)
Approach: In order to evaluate the parallelization ability of the proposed algorithm, we followed
the following procedure: For each configuration (see below), we trained up to 100 models, each on
an independent, random permutation of the full training data. During training, the model is stored on
disk after k = 10,000 · 2^i updates. We then averaged the models obtained for each i and evaluated
the resulting model. That way, we obtained the performance for the algorithm after each machine
has seen k samples. This approach is geared towards the estimation of the parallelization ability of
our optimization algorithm and its application to machine learning equally. This is in contrast to
the evaluation approach taken in [7] which focussed solely on the machine learning aspect without
studying the performance of the optimization approach.
Evaluation measures: We report both the normalized root mean squared error (RMSE) on the test
set and the normalized value of the objective function during training. We normalize the RMSE
such that 1.0 is the RMSE obtained by training a model in one single, sequential pass over the data.
The objective function values are normalized in much the same way such that the objective function
value of a single, full sequential pass over the data reaches the value 1.0.
Configurations: We studied both the Huber and the squared error loss. While the latter does not satisfy all the assumptions of our proofs (its gradient is unbounded), it is included due to its popularity. We chose to evaluate using two different regularization constants, λ = 1e−3 and λ = 1e−6, in order to estimate the performance characteristics both on smooth, "easy" problems (1e−3) and on high-variance, "hard" problems (1e−6). In all experiments, we fixed the learning rate to η = 1e−3.
3.1 Results and Discussion
Optimization: Figure 1 shows the relative objective function values for training using 1, 10 and
100 machines with λ = 1e−3. In terms of wall clock time, the models obtained on 100 machines
clearly outperform the ones obtained on 10 machines, which in turn outperform the model trained
on a single machine. There is no significant difference in behavior between the squared error and
the Huber loss in these experiments, despite the fact that the squared error is effectively unbounded.
Thus, the parallelization works in the sense that many machines obtain a better objective function
value after each machine has seen k instances. Additionally, the results also show that data-local
parallelized training is feasible and beneficial with the proposed algorithm in practice. Note that
the parallel training needs slightly more machine time to obtain the same objective function value,
which is to be expected. Also unsurprising, yet noteworthy, is the trade-off between the number of
machines and the quality of the solution: The solution obtained by 10 machines is much more of an
improvement over using one machine than using 100 machines is over 10.
Predictive Performance: Figure 2 shows the relative test RMSE for 1, 10 and 100 machines with
λ = 1e−3. As expected, the results are very similar to the objective function comparison: The
parallel training decreases wall clock time at the price of slightly higher machine time. Again, the
gain in performance between 1 and 10 machines is much higher than the one between 10 and 100.
Figure 2: Relative Test-RMSE with λ = 1e−3: Huber loss (left) and squared error (right)
Figure 3: Relative train-error using Huber loss: λ = 1e−3 (left), λ = 1e−6 (right)
Performance using different λ: The last experiment is conducted to study the effect of the regularization constant λ on the parallelization ability: Figure 3 shows the objective function plot using the Huber loss and λ = 1e−3 and λ = 1e−6. The lower regularization constant leads to more variance in the problem, which in turn should increase the benefit of the averaging algorithm. The plots exhibit exactly this characteristic: for λ = 1e−6, the loss for 10 and 100 machines not only
drops faster, but the final solution for both beats the solution found by a single pass, adding further
empirical evidence for the behaviour predicted by our theory.
4 Conclusion
In this paper, we propose a novel data-parallel stochastic gradient descent algorithm that enjoys a
number of key properties that make it highly suitable for parallel, large-scale machine learning: It
imposes very little I/O overhead: Training data is accessed locally and only the model is communicated at the very end. This also means that the algorithm is indifferent to I/O latency. These aspects
make the algorithm an ideal candidate for a MapReduce implementation. Thereby, it inherits the latter's superb data locality and fault tolerance properties. Our analysis of the algorithm's performance
is based on a novel technique that uses contraction theory to quantify finite-sample convergence
rate of stochastic gradient descent. We show worst-case bounds that are comparable to stochastic
gradient descent in terms of wall clock time, and vastly faster in terms of overall time. Lastly, our
experiments on a large-scale real world dataset show that the parallelization reduces the wall-clock
time needed to obtain a set solution quality. Unsurprisingly, we also see diminishing marginal utility of adding more machines. Finally, solving problems with more variance (smaller regularization
constant) benefits more from the parallelization.
References
[1] Shun-ichi Amari. A theory of adaptive pattern classifiers. IEEE Transactions on Electronic Computers, 16:299-307, 1967.
[2] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems, 2008.
[3] C. T. Chu, S. K. Kim, Y. A. Lin, Y. Y. Yu, G. Bradski, A. Ng, and K. Olukotun. Map-reduce for machine learning on multicore. In B. Schölkopf, J. Platt, and T. Hofmann, editors, Advances in Neural Information Processing Systems 19, 2007.
[4] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Computational Learning Theory, 2010.
[5] J. Langford, A. J. Smola, and M. Zinkevich. Slow learners are fast. In Neural Information Processing Systems, 2009.
[6] J. Langford, A. J. Smola, and M. Zinkevich. Slow learners are fast. arXiv:0911.0491, 2009.
[7] G. Mann, R. McDonald, M. Mohri, N. Silberman, and D. Walker. Efficient large-scale distributed training of conditional maximum entropy models. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1231-1239, 2009.
[8] N. Murata, S. Yoshizawa, and S. Amari. Network information criterion - determining the number of hidden units for artificial neural network models. IEEE Transactions on Neural Networks, 5:865-872, 1994.
[9] Choon Hui Teo, S. V. N. Vishwanathan, Alex J. Smola, and Quoc V. Le. Bundle methods for regularized risk minimization. Journal of Machine Learning Research, 11:311-365, January 2010.
[10] U. von Luxburg and O. Bousquet. Distance-based classification with Lipschitz functions. Journal of Machine Learning Research, 5:669-695, 2004.
[11] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proc. Intl. Conf. Machine Learning, pages 928-936, 2003.
3,321 | 4,007 | Space-Variant Single-Image Blind Deconvolution
for Removing Camera Shake
Stefan Harmeling, Michael Hirsch, and Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics, Tübingen, Germany
[email protected]
Abstract
Modelling camera shake as a space-invariant convolution simplifies the problem
of removing camera shake, but often insufficiently models actual motion blur such
as those due to camera rotation and movements outside the sensor plane or when
objects in the scene have different distances to the camera. In an effort to address
these limitations, (i) we introduce a taxonomy of camera shakes, (ii) we build on a
recently introduced framework for space-variant filtering by Hirsch et al. and a fast
algorithm for single image blind deconvolution for space-invariant filters by Cho
and Lee to construct a method for blind deconvolution in the case of space-variant
blur, and (iii), we present an experimental setup for evaluation that allows us to
take images with real camera shake while at the same time recording the space-variant point spread function corresponding to that blur. Finally, we demonstrate
that our method is able to deblur images degraded by spatially-varying blur originating from real camera shake, even without using additionally motion sensor
information.
1 Introduction
Camera shake is a common problem of handheld, longer exposed photographs occurring especially
in low light situations, e.g., inside buildings. With a few exceptions such as panning photography,
camera shake is unwanted, since it often destroys details and blurs the image. The effect of a
particular camera shake can be described by a linear transformation on the sharp image, i.e., the
image that would have been recorded using a tripod. Denoting for simplicity images as column
vectors, the recorded blurry image y can be written as a linear transformation of the sharp image
x, i.e., as y = Ax, where A is an unknown matrix describing the camera shake. The task of blind
image deblurring is to recover x given only the blurred image y, but not A.
Main contributions. (i) We present a taxonomy of camera shakes; (ii) we propose an algorithm
for deblurring space-variant camera shakes; and (iii) we introduce an experimental setup that allows
to simultaneously record images blurred by real camera shake and an image of the corresponding
spatially varying point spread functions (PSFs).
Related work. Our work combines ideas of three papers: (i) Hirsch et al.'s work [1] on efficient space-variant filtering, (ii) Cho and Lee's work [2] on single frame blind deconvolution, and (iii) Krishnan and Fergus's work [3] on fast non-blind deconvolution.
Previous approaches to single image blind deconvolution have dealt only with space-invariant blurs.
This includes the works of Fergus et al. [4], Shan et al. [5], as well as Cho and Lee [2] (see Kundur
and Hatzinakos [6] and Levin et al. [7] for overviews and further references).
Tai et al. [8] represent space-variant blurs as projective motion paths and propose a non-blind deconvolution method. Shan et al. [9] consider blindly deconvolving rotational object motion, yielding
a particular form of space-variant PSFs. Blind deconvolution of space-variant blurs in the context
of star fields has been considered by Bardsley et al. [10]. Their method estimates PSFs separately
(and not simultaneously) on image patches using phase diversity, and deconvolves the overall image
using [11]. Joshi et al. [12] recently proposed a method that estimates the motion path using inertial
sensors, leading to high-quality image reconstructions.
There exists also some work for images in which different segments have different blur: Levin [13]
and Cho et al. [14] segment images into layers where each layer has a different motion blur. Both
approaches consider uniform object motion, but not non-uniform ego-motion (of the camera). Hirsch et al. [1] require multiple images to perform blind deconvolution with space-variant blur, as do Šorel and Šroubek [15].
2 A taxonomy of camera shakes
Camera shake can be described from two perspectives: (i) how the PSF varies across the image, i.e.,
how point sources would be recorded at different locations on the sensor, and (ii) by the trajectory of
the camera and how the depth of the scene varies. Throughout this discussion we assume the scene
to be static, i.e., only the camera moves (only ego-motion), and none of the photographed objects
(no object motion).
PSF variation across the image. We distinguish three classes:
- Constant: The PSF is constant across the image. In this case the linear transformation is a convolution matrix. Most algorithms for blind deconvolution are restricted to this case.
- Smooth: The PSF is smoothly varying across the image. Here, the linear transformation is no longer a convolution matrix, but a more general framework is needed, such as the smoothly space-varying filters in the multi-frame method of Hirsch et al. [1]. For this case, our paper proposes an algorithm for single image deblurring.
- Segmented: The PSF varies smoothly within segments of the image, but between segments it may change abruptly.
Depth variation across the scene. The depth in a scene, i.e., the distance of the camera to objects
at different locations in the scene, can be classified into three categories:
- Constant: All objects have the same distance to the camera. Example: photographing a picture hanging on the wall.
- Smooth: The distance to the camera is smoothly varying across the scene. Example: photographing a wall at an angle.
- Segmented: The scene can be segmented into different objects each having a different distance to the camera. Example: photographing a scene with different objects partially occluding each other.
Camera trajectories. The motion of the camera can be represented by a six dimensional trajectory
with three spatial and three angular coordinates. We denote the two coordinates inside the sensor
plane as a and b, and the coordinate corresponding to the distance to the scene as c. Furthermore, α and β describe the camera tilting up/down and left/right, and γ the camera rotation around the optical axis.
It is instructive to picture how different trajectories correspond to different PSF variations in different
depth situations. Exemplarily we consider the following trajectories:
- Pure shift: The camera moves inside the sensor plane without rotation; only a and b vary.
- Rotated shift: The camera moves inside the sensor plane with rotation; a, b, and γ vary.
- Back and forth: The distance between camera and scene is changing; only c varies.
- Pure tilt: The camera is tilted up and down and left and right; only α and β vary.
- General trajectory: All coordinates might vary as a function of time.
Table 1 shows all possible combinations. Note that only "pure shifts" in combination with "constant depths" lead to a constant PSF across the image, which is the case most methods for camera
unshaking are proposed for. Thus, extending blind deconvolution to smoothly space-varying PSFs
can increases the range of possible applications. Furthermore, we see that for segmented scenes,
camera shake usually leads to blurs that are non-smoothly changing across the image. Even though
                | Pure shift | Rotated shift | Back and forth | Pure tilt | General trajectory
Constant depth  | constant   | smooth        | smooth         | smooth    | smooth
Smooth depth    | smooth     | smooth        | smooth         | smooth    | smooth
Segmented depth | segmented  | segmented     | segmented      | segmented | segmented
Table 1: How the PSF varies for different camera trajectories and for different depth situations.
in this case the model of smoothly varying PSFs is incorrect, it might still lead to better results than
constant PSFs.
3 Smoothly varying PSF as Efficient Filter Flow
To obtain a generalized image deblurring method we represent the linear transformation y = Ax by
the recently proposed efficient filter flow (EFF) method of Hirsch et al. [1] that can handle smoothly
varying PSFs. For convenience, we briefly describe EFF, using the notation and results from [1].
Space-invariant filters. As our starting point we consider space-invariant filters (aka convolutions),
which are an efficient, but restrictive class of linear transformations. We denote by y the recorded
image, represented as a column vector of length m, and by a a column vector of length k, representing the space-invariant PSF, and by x the true image, represented as a column vector of length
n = m + k − 1 (we consider the valid part of the convolution). Then the usual convolution can be written as $y_i = \sum_{j=0}^{k-1} a_j x_{i-j}$ for 0 ≤ i < m. This transformation is linear in x, and thus an
instance of the general linear transformation y = Ax, where the column vector a parametrizes the
transformation matrix A. Furthermore, the transformation is linear in a, which implies that there
exists a matrix X such that y = Ax = Xa. Using fast Fourier transforms (FFTs), these matrix-vector multiplications (MVMs) can be calculated in O(n log n).
Space-variant filters. Although being efficient, the (space-invariant) convolution applies only to
camera shakes which are pure shifts of flat scenes. This is generalized to space-variant filtering
by employing Stockham's overlap-add (OLA) trick [16]. The idea is (i) to cover the image with
overlapping patches, (ii) to apply to each patch a different PSF, and (iii) to add the patches to obtain
a single large image. The transformation can be written as
$$y_i = \sum_{r=0}^{p-1} \sum_{j=0}^{k-1} a_j^{(r)}\, w_{i-j}^{(r)}\, x_{i-j} \quad \text{for } 0 \le i < m, \qquad \text{where} \quad \sum_{r=0}^{p-1} w_i^{(r)} = 1 \text{ for } 0 \le i < m. \qquad (1)$$
Here, w^{(r)} ≥ 0 smoothly fades the r-th patch in and masks out the others. Note that at each pixel the weights must sum to one.
Note that this method does not simply apply a different PSF to different image regions, but instead
yields a different PSF for each pixel. The reason is that usually, the patches are chosen to overlap
at least 50%, so that the PSF at a pixel is a certain linear combination of several filters, where the
weights are chosen to smoothly blend filters in and out, and thus the PSF tends to be different at each
pixel. Fig. 1 shows that a PSF array as small as 3 × 3, corresponding to p = 9 and nine overlapping
patches (right panel of the bottom row), can parametrize smoothly varying blurs (middle column)
that closely mimic real camera shake (left column).
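A direct (unoptimized) implementation of the forward model (1) is short; the sketch below assumes `psfs` and `weights` are lists of 2-D numpy arrays, with the weights summing to one at every pixel. It convolves over the full image for each patch rather than exploiting the patch-local FFTs of Eqs. (2) and (3) below, so it is simpler but less efficient than EFF proper.

```python
import numpy as np
from scipy.signal import fftconvolve

def eff_forward(x, psfs, weights):
    """Space-variant filtering of Eq. (1): weight the image with each window
    w^(r), convolve with that patch's PSF a^(r), and sum the valid parts."""
    y = 0.0
    for a_r, w_r in zip(psfs, weights):
        y = y + fftconvolve(w_r * x, a_r, mode="valid")
    return y
```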
Efficient implementation. As is apparent from Eq. (1), EFF is linear in x and in a, the vector
obtained by stacking a^{(0)}, ..., a^{(p−1)}. This implies that there exist matrices A and X such that y = Ax = Xa. Using Stockham's ideas [16] to speed up large convolutions, Hirsch et al. derive
expressions for these matrices, namely
$$A = Z_y^T \sum_{r=0}^{p-1} C_r^T F^H \mathrm{Diag}(F Z_a a^{(r)})\, F\, C_r\, \mathrm{Diag}(w^{(r)}), \qquad (2)$$

$$X = Z_y^T \sum_{r=0}^{p-1} C_r^T F^H \mathrm{Diag}\big(F\, C_r\, \mathrm{Diag}(w^{(r)})\, x\big)\, F Z_a B_r, \qquad (3)$$

where Diag(w^{(r)}) is the diagonal matrix with vector w^{(r)} along its diagonal, C_r is a matrix that crops out the r-th patch, F is the discrete Fourier transform matrix, Z_a is a matrix that zero-pads
Figure 1: A small set of PSFs can parametrize smoothly varying blur: (left) grid photographed with real camera shake, (middle) grid blurred by the EFF framework, (right) the nine PSFs used for the artificial blur.
a^{(r)} to the size of the patch, F^H performs the inverse Fourier transform, and Z_y^T chops out the valid part of the space-variant convolution.
Reading Eqs. (2) and (3) forward and backward yields efficient implementations for A, A^T, X, and X^T with running times O(n log q), where q is the patch size; see [1] for details. The overlap increases the computational cost by a constant factor and is thus omitted. The EFF framework thus implements space-variant convolutions which are as efficient to compute as space-invariant convolutions, while being much more expressive.

Note that each of the MVMs with A, A^T, X, and X^T is needed for blind deconvolution: A and A^T for the estimation of x given a, and X and X^T for the estimation of a.
4 Blind deconvolution with smoothly varying PSF
We now outline a single image blind deconvolution algorithm for space-variant blur, generalizing
the method of Cho and Lee [2], that aims to recover a sharp image in two steps: (i) first estimate
the parameter vector a of the EFF transformation, and (ii) then perform space-variant non-blind
deconvolution by running a generalization of Krishnan and Fergus? algorithm [3].
(i) Estimation of the linear transformation: initializing x with the blurry image y, the estimation
of the linear transformation A parametrized as an EFF, is performed by iterating over the following
four steps:
- Prediction step: remove noise in flat regions of x by edge-preserving bilateral filtering and overemphasize edges by shock filtering. To counter the noise enhanced by shock filtering, we apply spatially adaptive gradient magnitude thresholding.
- PSF estimation step: update the PSFs given the blurry image y and the current estimate of the predicted x, using only the gradient images of x (resulting in a preconditioning effect) and enforcing smoothness between neighboring PSFs.
- Propagation step: identify regions of poorly estimated PSFs and replace them with neighboring PSFs.
- Image estimation step: update the current deblurred image x by minimizing a least-squares cost function using a smoothness prior on the gradient image.
(ii) Non-blind deblurring: given the linear transformation we estimate the final deblurred image x
by alternating between the following two steps:
- Latent variable estimation: estimate latent variables, regularized with a sparsity prior, that approximate the gradient of x. This can be efficiently solved with look-up tables; see the "w sub-problem" of [3] for details.
- Image estimation step: update the current deblurred image x by minimizing a least-squares cost function while penalizing the Euclidean distance of the gradient image to the latent variables of the previous step; see the "x sub-problem" of [3] for details.
The steps of (i) are repeated seven times on each scale of a multi-scale image pyramid. We always
start with flat PSFs of size 3 × 3 pixels and the correspondingly downsampled observed image. For
up- and downsampling we employ a simple linear interpolation scheme. The resulting PSFs in a
4
and the resulting image x at each scale are upsampled and initialize the next scale. The final output
of this iterative procedure are the PSFs that parametrize the spatially varying linear transformation.
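A toy 1-D analogue conveys the alternating structure of stage (i); it replaces the nonlinear prediction, propagation, and multi-scale machinery with plain quadratically regularized least-squares updates, and is our own illustration rather than the pipeline described above.

```python
import numpy as np

def conv_matrix_from_image(x, k):
    # matrix X with X @ a == np.convolve(x, a, mode="valid"), len(a) = k
    n = len(x)
    return np.array([[x[i + k - 1 - j] for j in range(k)]
                     for i in range(n - k + 1)])

def conv_matrix_from_kernel(a, n):
    # matrix A with A @ x == np.convolve(x, a, mode="valid"), len(x) = n
    k = len(a)
    A = np.zeros((n - k + 1, n))
    for i in range(n - k + 1):
        A[i, i:i + k] = a[::-1]
    return A

def blind_deconv_1d(y, k, iters=100, reg=1e-2):
    n = len(y) + k - 1
    x = np.concatenate([y, np.zeros(k - 1)])   # crude init with the blurry signal
    a = np.ones(k) / k                         # flat kernel, as in the multi-scale start
    for _ in range(iters):
        X = conv_matrix_from_image(x, k)       # kernel step (regularized least squares)
        a = np.linalg.solve(X.T @ X + reg * np.eye(k), X.T @ y)
        a = np.clip(a, 0.0, None)
        a /= max(a.sum(), 1e-12)               # keep a nonnegative, normalized blur
        A = conv_matrix_from_kernel(a, n)      # image step (regularized least squares)
        x = np.linalg.solve(A.T @ A + reg * np.eye(n), A.T @ y)
    return x, a
```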
Having obtained an estimate for the linear transformation in form of an array of PSFs, the alternating
steps of (ii) perform space variant non-blind deconvolution of the recorded image y using a natural
image statistics prior (as in [13]). To this end, we adapt the recently proposed method of Krishnan
and Fergus [3] to deal with linear transformations represented as EFF.
While our procedure is based on Cho and Lee's [2] and Krishnan and Fergus' [3] methods for
space-invariant single blind deconvolution, it differs in several important aspects which we presently
explain.
Details of the Prediction step. The prediction step of Cho and Lee [2] is a clever trick to avoid the
nonlinear optimizations which would be necessary if the image features emphasized by the nonlinear filtering operations (namely shock and bilateral filtering and gradient magnitude thresholding)
would have to be implemented by an image prior on x. Our procedure also profits from this trick
and we set the hyper-parameters exactly as Cho and Lee do (see [2] for details on the nonlinear
filtering operations). However, we note that for linear transformations represented as EFF, the gradient thresholding must be applied in a spatially adaptive manner, i.e., on each patch separately. This is necessary
because otherwise a large gradient in some region might totally wipe out the gradients in regions
that are less textured, leading to poor PSF estimates in those regions.
Details on the PSF estimation step. Given the thresholded gradient images of the nonlinear filtered
image x as the output of the prediction step, the PSF estimation minimizes a regularized least-squares cost function,

$$\sum_z \|\partial_z y - A\,\partial_z x\|^2 + \lambda \|a\|^2 + \nu\, g(a), \qquad (4)$$
where z ranges over the set {h, v, hh, vv, hv}, i.e., the first and second, horizontal and vertical
derivatives of y and x are considered. Omitting the zeroth derivative (i.e., the images x and y
themselves) has a preconditioning effect as discussed in Cho and Lee [2]. Matrix A depends on
the vector of PSFs a as well. For the EFF framework we added the regularization term g(a) which
encourages similarity between neighboring PSFs,
$$g(a) = \sum_{r=0}^{p-1} \sum_{s \in N(r)} \|a^{(r)} - a^{(s)}\|^2, \qquad (5)$$

where s ∈ N(r) if patches r and s are neighbors.
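For a rectangular grid of patches, g(a) can be evaluated with two shifted differences; the row-major layout of `psfs` is an assumption of this sketch.

```python
import numpy as np

def psf_smoothness(psfs, grid_shape):
    """Eq. (5) over 4-connected patch neighbors; `psfs` has shape
    (rows * cols, kernel_pixels), laid out row-major over the patch grid."""
    rows, cols = grid_shape
    a = psfs.reshape(rows, cols, -1)
    g = ((a[1:, :] - a[:-1, :]) ** 2).sum()    # vertical neighbor pairs
    g += ((a[:, 1:] - a[:, :-1]) ** 2).sum()   # horizontal neighbor pairs
    return 2.0 * g  # the double sum in Eq. (5) counts each unordered pair twice
```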
Details on the Propagation step. Since high-frequency information, i.e. image details are required
for PSF estimation, for images with less structured areas (such as sky) we can not estimate reasonable PSFs everywhere. The problem stems from the finding that even though some area might
be less informative about the local PSF, it can look blurred, and thus would require deconvolution.
These areas are identified by thresholding the entropy of the corresponding PSFs (similar to Šorel and Šroubek [15]). The rejected PSFs are replaced by the average of their neighboring PSFs. Since
there might be areas for which the neighboring PSFs have been rejected as well, we perform a simple
recursive procedure which propagates the accepted PSFs to the rejected ones.
Details on the Image estimation step. In both Cho and Lee's and Krishnan and Fergus' work, the image estimation step involves direct deconvolution, which corresponds to a simple pixel-wise division of the blurry image by the zero-padded PSF in the Fourier domain. Unfortunately, a direct deconvolution does not exist in general for linear transformations represented as EFF, since it involves summations over patches. However, we can replace the direct deconvolution by an optimization of some regularized least-squares cost function ‖y − Ax‖² + τ‖∇x‖_p.
While estimating the linear transformation in (i), the regularizer is Tikhonov on the gradient image,
i.e., p = 2. As the estimated x is subsequently processed in the prediction step, one might consider
regularization redundant in the image estimation step of (i). However, the regularization is crucial for
suppressing ringing due to insufficient estimation of a. In (ii) during the final non-blind deblurring
procedure we employ a sparsity prior for x by choosing p = 1/2.
The main difference in the image estimation steps to [2] and [3] is that the linear transformation A
is no longer a convolution but instead a space-variant filter implemented by the EFF framework.
Figure 2: How to simultaneously capture an image blurred with real camera shake and its space-varying PSF; (a) the true image and a grid of dots are combined into (b) an RGB image, which is (c) photographed with camera shake, and (d) split into blue and red channels to separate the PSF depicting the blur from the blurred image.
5 Experiments
We present results on several example images with space-variant blur, for which we are able to
recover a deblurred image, while a state-of-the-art method for single image blind deconvolution
does not. We begin by describing the image capture procedure.
Capturing a gray scale image blurred with real camera shake along with the set of spatially
varying PSFs. The idea is to create a color image where the gray scale image is shown in the red
channel, a grid of dots (for recording the PSFs) is shown in the blue channel, and the green channel
is set to zero. We display the resulting RGB image on a computer screen and take a photo with real hand shake. We split the recorded raw image into the red and blue parts. The red part only shows the image blurred with camera shake and the blue part shows the spatially varying PSFs that depict the effect of the camera shake. To avoid a Moiré effect, the distance between the camera and the computer screen must be chosen carefully such that the discrete structure of the computer screen cannot be resolved by the (discrete) image sensor of the camera. We verified that the spectral characteristics of the screen and the camera's Bayer array filters are such that there is no cross-talk, i.e., the blue PSFs are not visible in the red image. Fig. 2 shows the whole process.
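The post-processing of a capture reduces to splitting channels; the file name and the use of a demosaiced RGB file (rather than the RAW Bayer data mentioned above) are assumptions of this sketch.

```python
import numpy as np
import imageio.v3 as iio

rgb = iio.imread("shaken_capture.png").astype(np.float64)
blurred_image = rgb[..., 0]   # red channel: gray-scale image blurred by the shake
psf_image = rgb[..., 2]       # blue channel: grid of dots recording the local PSFs
```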
Three example images with real camera shake. We applied our method, Cho and Lee's [2]
method, and a custom patch-wise variant of Cho and Lee to three examples captured as explained
above. For all experiments, photos were taken with a hand-held Canon EOS 1000D digital single
lens reflex camera with a zoom lens (Canon zoom lens EF 24-70 mm 1:2.8 L USM). The exposure
time was 1/4 second, the distance to the screen was about two meters. The input to the deblurring
algorithm was only the red channel of the RAW file which we treat as if it were a captured gray-scale
image. The image sizes are: vintage car 455 × 635, butcher shop 615 × 415, elephant 625 × 455.
To assess the accuracy of estimating the linear transformation (i.e., of step (i) in Sec. 4), we compare
our estimated PSFs evaluated on a regular grid of dots to the true PSFs recorded in the blue channel
during the camera shake. This comparison has been made for the vintage car example and is included
in the supplementary material.
We compare with Cho and Lee's [2] method, which we consider currently the state-of-the-art method
for single image blind deconvolution. This method assumes space-invariant blurs, and thus we also
compare to a modified version of this algorithm that is applied to the patches of our method and that
finally blends the individually deblurred patches carefully to one final output image.
Fig. 3 shows, from top to bottom, the blurry captured image, the result of our method, Cho and Lee's [2] result, and a patch-wise variant of Cho and Lee. In our method we used for the linear transformation estimation step (step (i) in Sec. 4) for all examples the hyper-parameters detailed in [2]. Our additional hyper-parameters were set as follows: the regularization constant ν weighting the regularization term in cost function (4) that measures the similarity between neighboring PSFs is set to 5e4 for all three examples. The entropy threshold for identifying poorly estimated PSFs is set to 0.7, with the entropy normalized to range between zero and one. In all experiments, the size of a single PSF kernel is allowed to be 15 × 15 pixels. The space-variant blur was modelled for the
Figure 3: Deblurring results and comparison. Rows (top to bottom): hand-shaked photo, our result, Cho and Lee [2], patch-wise Cho and Lee; columns: Butcher Shop, Vintage Car, Elephant.
vintage car example by an array of 6 × 7 PSF kernels, for the butcher shop by an array of 4 × 6 PSF kernels, and for the elephant by an array of 5 × 6 PSF kernels. These settings were also used for the patch-wise Cho and Lee variant. For the blending function w^{(r)} in Eq. (1) we used a Bartlett-Hanning window with 75% overlap in the vintage car example and 50% in the butcher shop and elephant examples. We chose a larger overlap for the vintage car to keep the patch size reasonably large. For the final non-blind deconvolution (step (ii) in Sec. 4), hyper-parameter τ was set to 2e3 and p was set to 0.5. On the three example images our algorithm took about 30 minutes for space-variant
image restoration.
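To make the blending step concrete, the following is a minimal Python sketch of windowed overlap-add with a Bartlett-Hanning window; it assumes Eq. (1) amounts to normalizing each output pixel by the accumulated window weights, and the function name and the small epsilon guard are our own illustration rather than the authors' code.

    import numpy as np
    from scipy.signal.windows import barthann

    def blend_patches(patches, corners, image_shape):
        """Blend individually deblurred patches into one image by windowed
        overlap-add: each patch is weighted by a 2-D Bartlett-Hanning window,
        accumulated, and normalized by the summed window weights."""
        acc = np.zeros(image_shape)   # weighted sum of patch intensities
        wsum = np.zeros(image_shape)  # sum of window weights per pixel
        for patch, (r, c) in zip(patches, corners):
            h, w = patch.shape
            win = np.outer(barthann(h), barthann(w))  # separable 2-D window
            acc[r:r + h, c:c + w] += win * patch
            wsum[r:r + h, c:c + w] += win
        return acc / np.maximum(wsum, 1e-8)  # guard against zero weight at borders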
In summary, our experiments show that our method is able to deblur space-variant blurs that are too difficult for Cho and Lee's method. In particular, our results reveal greater detail and fewer restoration artifacts, especially noticeable in the regions of the close-up views. The comparison with the patch-wise version of Cho and Lee is interesting: looking at the details (such as the house number 117 at the butcher shop, the licence plate of the vintage car, or the trunk of the elephant), our method is better. At the door frame in the vintage car image, we see that the patch-wise version of Cho and Lee has alignment problems. Our experience was that this gets more severe for larger blur kernels.
Figure 4: Our blind method achieves results comparable to Joshi et al. [12], who additionally require motion sensor information, which we do not use. Rows, from top to bottom: blurry image, our result, Joshi et al. [12], Shan et al. [5], Fergus et al. [18]. All images apart from our own algorithm's results are taken from [12]. This figure is best viewed on screen rather than in print.
Comparison with Joshi et al.'s recent results. Fig. 4 compares the results from [12] with our method on their example images. Even though our method does not exploit the motion sensor data utilized by Joshi et al., we obtain comparable results.
Run-time. The running time of our method is about 30 minutes for the images in Fig. 3 and about 80 minutes on the larger images of [12] (1123 × 749 pixels in size). How does this compare with Cho and Lee's method for fast deblurring, which works in seconds? There are several reasons for the discrepancy: (i) Cho and Lee implemented their method on the GPU, while our implementation is in Matlab, logging lots of intermediate results for debugging and studying the code behaviour. (ii) A space-variant blur has more parameters; e.g., for 6 by 7 patches we need to estimate 42 times as many parameters as for a single kernel. Even though calculating the forward model is almost as fast as for the single kernel, convergence for that many parameters appeared to be slower. (iii) Cho and Lee are able to use direct deconvolution (division in Fourier space) for the image estimation step, while we have to solve an optimization problem, because we currently do not know how to perform direct deconvolution for the space-variant filters.
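For contrast, here is a minimal sketch of the direct deconvolution (division in Fourier space) that is available in the space-invariant case; the Wiener-style constant eps is a hypothetical stabilizer we add, and no such single transfer function exists for a space-variant blur, which is why an iterative solver is required there.

    import numpy as np

    def direct_deconv(blurred, psf, eps=1e-3):
        """Direct deconvolution by division in Fourier space; valid only when a
        single space-invariant PSF describes the whole image."""
        H = np.fft.fft2(psf, s=blurred.shape)        # transfer function of the PSF
        Y = np.fft.fft2(blurred)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + eps)  # regularized inverse filter
        return np.real(np.fft.ifft2(X))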
6 Discussion
Blind deconvolution of images degraded by space-variant blur is a much harder problem than simply
assuming space-invariant blurs. Our experiments show that even state-of-the-art algorithms such as Cho and Lee's [2] are not able to recover image details for such blurs without unpleasant artifacts.
We have proposed an algorithm that is able to tackle space-variant blurs with encouraging results.
Presently, the main limitation of our approach is that it can fail if the blurs are too large or if they vary
too quickly across the image. We believe there are two main reasons for this: (i) on the one hand,
if the blurs are large, the patches need to be large as well to obtain enough statistics for estimating
the blur. On the other hand, if at the same time the PSF is varying too quickly, the patches need to
be small enough. Our method only works if we can find a patch size and overlap setting that is a
good trade-off for both requirements. (ii) The method of Cho and Lee [2], which is an important
component of ours, does not work for all blurs. For instance, a PSF that looks like a thick horizontal
line is challenging, because the resulting image feature might be misunderstood by the prediction
step to be horizontal lines in the image. Improving the method of Cho and Lee [2] to deal with such
blurs would be worthwhile.
Another limitation of our method is image areas with little structure. On such patches it is difficult
to infer a reasonable blur kernel, and our method propagates the results from the neighboring patches
to these cases. However, this propagation is heuristic and we hope to find a more rigorous approach
to this problem in future work.
References
[1] M. Hirsch, S. Sra, B. Schölkopf, and S. Harmeling. Efficient Filter Flow for Space-Variant
Multiframe Blind Deconvolution. In Proceedings of the IEEE Conference on Computer Vision
and Pattern Recognition, 2010.
[2] S. Cho and S. Lee. Fast Motion Deblurring. ACM Transactions on Graphics (SIGGRAPH
ASIA 2009), 28(5), 2009.
[3] D. Krishnan and R. Fergus. Fast image deconvolution using hyper-Laplacian priors. In Advances in Neural Information Processing Systems (NIPS), 2009.
[4] R. Fergus, B. Singh, A. Hertzmann, S.T. Roweis, and W.T. Freeman. Removing camera shake
from a single photograph. In ACM SIGGRAPH, page 794. ACM, 2006.
[5] Q. Shan, J. Jia, and A. Agarwala. High-quality motion deblurring from a single image. ACM
Transactions on Graphics (SIGGRAPH), 2008.
[6] D. Kundur and D. Hatzinakos. Blind image deconvolution. IEEE Signal Processing Mag.,
13(3):43–64, May 1996.
[7] A. Levin, Y. Weiss, F. Durand, and W.T. Freeman. Understanding and evaluating blind deconvolution algorithms. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2009.
[8] Y. W. Tai, P. Tan, L. Gao, and M. S. Brown. Richardson-Lucy deblurring for scenes under
projective motion path. Technical report, KAIST, 2009.
[9] Qi Shan, Wei Xiong, and Jiaya Jia. Rotational motion deblurring of a rigid object from a single
image. In Proc. Int. Conf. on Computer Vision, 2007.
[10] J. Bardsley, S. Jeffries, J. Nagy, and B. Plemmons. A computational method for the restoration
of images with an unknown, spatially-varying blur. Optics Express, 14(5):1767–1782, 2006.
[11] J.G. Nagy and D.P. O'Leary. Restoring images degraded by spatially variant blur. SIAM
Journal on Scientific Computing, 19(4):1063–1082, 1998.
[12] N. Joshi, S.B. Kang, C.L. Zitnick, and R. Szeliski. Image deblurring using inertial measurement sensors. In ACM SIGGRAPH 2010 Papers. ACM, 2010.
[13] A. Levin. Blind motion deblurring using image statistics. In Advances in Neural Information
Processing Systems (NIPS), 2006.
[14] S. Cho, Y. Matsushita, and S. Lee. Removing non-uniform motion blur from images. In IEEE
11th International Conference on Computer Vision, 2007, 2007.
[15] M. Šorel and F. Šroubek. Space-variant deblurring using one blurred and one underexposed
image. In Proceedings of the International Conference on Image Processing (ICIP), 2009.
[16] T.G. Stockham Jr. High-speed convolution and correlation. In Proceedings of the Spring joint
computer conference, pages 229–233. ACM, 1966.
[17] N. Joshi, R. Szeliski, and D.J. Kriegman. Image/video deblurring using a hybrid camera. In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008.
[18] R. Fergus, B. Singh, A. Hertzmann, S.T. Roweis, and W.T. Freeman. Removing camera shake
from a single image. ACM Transactions on Graphics (SIGGRAPH), 2006.
Object Bank: A High-Level Image Representation for Scene
Classification & Semantic Feature Sparsification
Li-Jia Li*1, Hao Su*1, Eric P. Xing2, Li Fei-Fei1
1 Computer Science Department, Stanford University
2 Machine Learning Department, Carnegie Mellon University
Abstract
Robust low-level image features have been proven to be effective representations
for a variety of visual recognition tasks such as object recognition and scene classification; but pixels, or even local image patches, carry little semantic meaning.
For high level visual tasks, such low-level image representations are potentially
not enough. In this paper, we propose a high-level image representation, called the
Object Bank, where an image is represented as a scale-invariant response map of a
large number of pre-trained generic object detectors, blind to the testing dataset or
visual task. Leveraging the Object Bank representation, superior performances
on high level visual recognition tasks can be achieved with simple off-the-shelf
classifiers such as logistic regression and linear SVM. Sparsity algorithms make
our representation more efficient and scalable for large scene datasets, and reveal
semantically meaningful feature patterns.
1 Introduction
Understanding the meanings and contents of images remains one of the most challenging problems
in machine intelligence and statistical learning. In contrast to inference tasks in other domains, such
as NLP, where the basic feature space in which the data lie usually bears explicit human perceivable
meaning, e.g., each dimension of a document embedding space could correspond to a word [21], or
a topic, common representations of visual data seem to primarily build on raw physical metrics of
the pixels such as color and intensity, or their mathematical transformations such as various filters,
or simple image statistics such as shape, edge orientations, etc. Depending on the specific visual
inference task, such as classification, a predictive method is deployed to pool together and model the
statistics of the image features, and make use of them to build some hypothesis for the predictor. For
example, Fig.1 illustrates the gradient-based GIST features [25] and texture-based Spatial Pyramid
representation [19] of two different scenes (foresty mountain vs. street). But such schemes often
fail to offer sufficient discriminative power, as one can see from the very similar image statistics in
the examples in Fig.1.
Figure 1: (Best viewed in colors and magnification.) Comparison of object bank (OB) representation with
two low-level feature representations, GIST and SIFT-SPM of two types of images, mountain vs. city street.
From left to right, for each input image, we show the selected filter responses in the GIST representation [25],
a histogram of the SPM representation of SIFT patches [19], and a selected number of OB responses.
*indicates equal contributions.
While more sophisticated low-level feature engineering and recognition model design remain important sources of future development, we argue that the use of a semantically more meaningful feature space, such as one that is directly based on the content (e.g., objects) of the images, as words are for textual documents, may offer another promising venue to empower a computational visual recognizer to potentially handle arbitrary natural images, especially in our current era where visual knowledge of millions of common objects is readily available from various easy sources on the Internet.
In this paper, we propose "Object Bank" (OB), a new representation of natural images based on
objects, or more rigorously, a collection of object sensing filters built on a generic collection of labeled objects. We explore how a simple linear hypothesis classifier, combined with a sparse-coding
scheme, can leverage on this representation, despite its extreme high-dimensionality, to achieve
superior predictive power over similar linear prediction models trained on conventional representations. We show that an image representation based on objects can be very useful in high-level visual
recognition tasks for scenes cluttered with objects. It provides complementary information to that of
the low-level features. As illustrated in Fig.1, these two different scenes show very different image
responses to objects such as tree, street, water, sky, etc. Given the availability of large-scale image
datasets such as LabelMe [30] and ImageNet [5], it is no longer inconceivable to obtain trained object detectors for a large number of visual concepts. In fact, we envision the usage of thousands if not millions of these available object detectors as the building blocks of such an image representation in the future.
While the OB representation offers a rich, high-level description of images, a key technical challenge of this representation is the "curse of dimensionality", which is severe because of the size (i.e., the number of objects) of the object bank and the dimensionality of the response vector for each object. Typically, for a modest-sized picture, even hundreds of object detectors would result in a representation of tens of thousands of dimensions. Therefore, to achieve a robust predictor on practical datasets with typically only dozens or a couple of hundred instances per class, structural risk minimization via appropriate regularization of the predictive model is essential.
In this paper, we propose a regularized logistic regression method, akin to the group lasso approach
for structured sparsity, to explore both feature sparsity and object sparsity in the Object Bank representation for learning and classifying complex scenes. We show that by using this high-level image
representation and a simple sparse coding regularization, our algorithm not only achieves superior
image classification results in a number of challenging scene datasets, but also can discover semantically meaningful descriptions of the learned scene classes.
2 Related Work
A plethora of image descriptors have been developed for object recognition and image classification [25, 1, 23]. We particularly draw the analogy between our object bank and the texture filter
banks [26, 10].
Object detection and recognition also entail a large body of literature [7]. In this work, we mainly use the current state-of-the-art object detectors of Felzenszwalb et al. [9], as well as the geometric context classifiers ("stuff" detectors) of Hoiem et al. [13] for pre-training the object detectors.
The idea of using object detectors as the basic representation of images is analogous [12, 33, 35]. In
contrast to our work, in [12] and [33] each semantic concept is trained by using the entire images or
frames of video. As there is no localization of object concepts in scenes, understanding cluttered images composed of many objects will be challenging. In [35], a small number of concepts are trained
and only the most probable concept is used to form the representation for each region, whereas in
our approach all the detector responses are used to encode richer semantic information.
Combinations of a small set of (about a dozen) off-the-shelf object detectors with global scene context have been used to improve object detection [14, 28, 29]. Also related to our work is a very recent exploration of using attributes for recognition [17, 8, 16]. But we emphasize that such usage is not a
Figure 2: (Best viewed in colors and magnification.) Illustration of OB. A large number of object detectors (e.g., sailboat, water, bear) are first applied to an input image at multiple scales. For each object at each scale, a three-level spatial pyramid representation of the resulting object filter map is used, resulting in No.Objects × No.Scales × (1² + 2² + 4²) grids; the maximum response for each object in each grid is then computed, resulting in a No.Objects-length feature vector for each grid. A concatenation of the features in all grids leads to an OB descriptor for the image.
universal representation of images as we have proposed. To our knowledge, this is the first work that uses such high-level image features at different image locations and scales.
3 The Object Bank Representation of Images
Object Bank (OB) is an image representation constructed from the responses of many object detectors, which can be viewed as the response of a "generalized object convolution." We use two state-of-the-art detectors for this operation: the latent SVM object detectors [9] for most of the blobby objects such as tables, cars, humans, etc., and a texture classifier by Hoiem [13] for more texture- and material-based objects such as sky, road, sand, etc. We point out here that we use the word "object" in its very general form: while cars and dogs are objects, so are sky and water. Our image representation is agnostic to any specific type of object detector; we take the "outsourcing" approach and assume the availability of these pre-trained detectors.
Fig. 2 illustrates the general setup for obtaining the OB representation. A large number of object
detectors are run across an image at different scales. For each scale and each detector, we obtain an
initial response map of the image (see Appendix for more details of using the object detectors [9,
13]). In this paper, we use 200 object detectors at 12 detection scales and 3 spatial pyramid levels
(L=0,1,2) [19]. We note that this is a universal representation of any image for any task. We use
the same set of object detectors regardless of the scenes or the testing dataset.
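To make the pooling step concrete, here is a minimal Python sketch of the descriptor computation, assuming the detector response maps have already been computed and resampled to a common resolution; the array layout and function name are our own illustration, not the authors' implementation.

    import numpy as np

    def object_bank_descriptor(responses, grids=(1, 2, 4)):
        """Pool detector response maps into an Object Bank feature vector.

        responses -- array of shape (n_objects, n_scales, H, W), the response
                     map of every object detector at every detection scale.
        grids     -- spatial pyramid grid sizes (1x1, 2x2 and 4x4 for L=0,1,2).
        """
        n_obj, n_sc, H, W = responses.shape
        feats = []
        for g in grids:
            for rows in np.array_split(np.arange(H), g):
                for cols in np.array_split(np.arange(W), g):
                    cell = responses[:, :, rows[:, None], cols[None, :]]
                    feats.append(cell.max(axis=(2, 3)).ravel())  # max per object/scale
        return np.concatenate(feats)  # length n_objects * n_scales * (1 + 4 + 16)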
3.1 Implementation Details of Object Bank
So what are the "objects" to use in the object bank? And how many? An obvious answer to this question is to use all objects. As detectors become more robust, especially with the emergence of large-scale datasets such as LabelMe [30] and ImageNet [5], this goal becomes more reachable. But the time is not yet ripe to consider using all objects in, say, the LabelMe dataset. Not enough research has yet gone into building robust object detectors for tens of thousands of generic objects. And even more importantly, not all objects are of equal importance and prominence in natural images. As Fig. 1 in the Appendix shows, the distribution of objects follows Zipf's Law, which implies that a small proportion of object classes accounts for the majority of object instances.
For this paper, we choose a few hundred of the most useful (or popular) objects in images.1 An important practical consideration for our study is to ensure the availability of enough training images for each object detector. We therefore focus our attention on obtaining the objects from popular image datasets such as ESP [31], LabelMe [30], ImageNet [5] and the Flickr online photo sharing community. After ranking the objects according to their frequencies in each of these datasets, we take the intersection set of the most frequent 1000 objects, resulting in 200 objects, where the identities and semantic relations of some of them are illustrated in Fig. 2 in the Appendix. To train each of the 200 object detectors, we use 100–200 images and their object bounding box information from the LabelMe [30] (86 objects) and ImageNet [5] datasets (177 objects). We use a subset of the LabelMe scene dataset to evaluate object detector performance. Final object detectors are selected based on their performance on the validation set from LabelMe (see Appendix for more details).
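A minimal sketch of this selection procedure, assuming each source dataset has been summarized as a dictionary mapping object names to frequencies (the helper name is ours):

    def select_bank_objects(freq_by_dataset, top_k=1000):
        """Intersect the top-k most frequent object names across source datasets
        (e.g. ESP, LabelMe, ImageNet, Flickr) to pick the bank's objects."""
        tops = [set(sorted(freqs, key=freqs.get, reverse=True)[:top_k])
                for freqs in freq_by_dataset]
        return sorted(set.intersection(*tops))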
1 This criterion prevents us from using the Caltech101/256 datasets to train our object detectors [6, 11], where the objects are chosen without any particular considerations of their relevance to daily life pictures.
4 Scene Classification and Feature/Object Compression via Structured Regularized Learning
We envisage that, with the avalanche of annotated objects on the web, the number of object detectors in our object bank will increase quickly from hundreds to thousands or even millions, offering increasingly rich signatures for each image based on the identity, location, and scale of the object-based content of the scene. However, from a learning point of view, it also poses a challenge: how to train predictive models built on such a high-dimensional representation with a limited number of examples. We argue that, with an "overcomplete" OB representation, it is possible to compress the ultra-high dimensional image vector without losing semantic saliency. We refer to this semantic-preserving compression as content-based compression, to contrast it with conventional information-theoretic compression that aims at lossless reconstruction of the data.
In this paper, we intend to explore the power of the OB representation in the context of scene classification, and we are also interested in discovering meaningful (possibly small) subsets of dimensions during regularized learning for different classes of scenes. For simplicity, here we present our model in the context of a linear binary classifier in a 1-versus-all classification scheme for K classes. Generalization to a multiway softmax classifier is slightly more involved under structured regularization and thus deferred to future work. Let X = [x_1^T; x_2^T; ...; x_N^T] ∈ R^{N×J}, an N × J matrix, represent the design built on the J-dimensional object bank representation of N images; and let Y = (y_1, ..., y_N) ∈ {0,1}^N denote the binary classification labels of the N samples. A linear classifier is a function h_β : R^J → {0,1} defined as h_β(x) ≜ arg max_{y∈{0,1}} xβ, where β = (β_1, ..., β_J) ∈ R^J is a vector of parameters to be estimated. This leads to the learning problem min_{β∈R^J} λR(β) + (1/m) Σ_{i=1}^{m} L(β; x_i, y_i), where L(β; x, y) is some non-negative, convex loss, m is the number of training images, R(β) is a regularizer that avoids overfitting, and λ ∈ R is the regularization coefficient, whose value can be determined by cross validation.
A common choice of L is the log loss, L = log(1/P(y_i | x_i, β)), where P(y | x, β) is the logistic function P(y | x, β) = (1/Z) exp(½ y (xβ)). This leads to the popular logistic regression (LR) classifier2. Structural risk minimization schemes over LR via various forms of regularization have been widely studied and understood in the literature. In particular, recent asymptotic analysis of the ℓ1-norm and ℓ1/ℓ2 mixed-norm regularized LR proved that under certain conditions the estimated sparse coefficient vector β enjoys a property called sparsistency [34], suggesting their applicability for meaningful variable selection in high-dimensional feature spaces. In this paper, we employ an LR classifier for our scene classification problem. We investigate content-based compression of the high-dimensional OB representation that exploits raw feature-, object-, and (feature+object)-sparsity, respectively, using LR with appropriate regularization.
Feature sparsity via ℓ1-regularized LR (LR1). By letting R(β) ≜ ‖β‖₁ = Σ_{j=1}^{J} |β_j|, we obtain an estimator of β that is sparse. The shrinkage function on β is applied indistinguishably to all dimensions in the OB representation, and it does not have a mechanism to incorporate any potential coupling of multiple features that are possibly synergistic, e.g., features induced by the same object detector. We call such a sparsity pattern feature sparsity, and denote the resultant coefficient estimator by β^F.
Object sparsity via ℓ1/ℓ2 (group) regularized LR (LRG). Recently, a mixed-norm (e.g., ℓ1/ℓ2) regularization [36] has been used for recovery of joint sparsity across input dimensions. By letting R(β) ≜ ‖β‖_{1,2} = Σ_{j=1}^{J} ‖β_j‖₂, where β_j is the j-th group (i.e., the features grouped by an object j) and ‖·‖₂ is the vector ℓ2-norm, we set each feature group to correspond to all features induced by the same object in the OB. This shrinkage tends to encourage features in the same group to be jointly zero. Therefore, the sparsity is now imposed at the object level, rather than merely at the raw feature level. Such structured sparsity is often desired because it is expected to generate semantically more meaningful lossless compression, that is, out of all the objects in the OB, only a few are needed to represent any given natural image. We call such a sparsity pattern object sparsity, and denote the resultant coefficient estimator by β^O.
2 We choose not to use the popular SVM, which corresponds to L being a hinge loss and R(β) being an ℓ2-regularizer, because under SVM, content-based compression via structured regularization is much harder.
Figure 3: (Best viewed in colors and magnification.) Comparison of classification performance (average percent correctness) of different features (GIST vs. BOW vs. SPM vs. OB) and classifiers (SVM vs. LR) on the 15-Scene, LabelMe, UIUC-Sports and MIT-Indoor datasets. In the LabelMe dataset, the "ideal" classification accuracy is 90%, where we use the human ground-truth object identities to predict the labels of the scene classes. The blue bar in the last panel is the performance of a "pseudo" object bank representation extracted from the same number of "pseudo" object detectors. The values of the parameters in these "pseudo" detectors are generated without altering the original detector structures. In the case of the linear classifier, the weights of the classifier are randomly generated from a uniform distribution instead of learned. "Pseudo" OB is then extracted with exactly the same setting as OB.
Joint object/feature sparsity via ℓ1/ℓ2 + ℓ1 (sparse group) regularized LR (LRG1). The group-regularized LR does not, however, yield sparsity within a group (object) for those groups with non-zero total weights. That is, if a group of parameters is non-zero, they will all be non-zero. Translating to the OB representation, this means there is no scale or spatial location selection for an object. To remedy this, we propose a composite regularizer, R(β) ≜ λ₁‖β‖_{1,2} + λ₂‖β‖₁, which conjoins the sparsification effects of both shrinkage functions and yields sparsity at both the group and individual feature levels. This regularizer necessitates the determination of two regularization parameters λ₁ and λ₂, and is therefore more difficult to optimize. Furthermore, although the optimization problem for ℓ1/ℓ2 + ℓ1 regularized LR is convex, the non-smooth penalty function makes the optimization highly nontrivial. In the Appendix, we derive a coordinate descent algorithm for solving this problem. To conclude, we call this sparse group shrinkage pattern object/feature sparsity, and denote the resultant coefficient estimator by β^OF.
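As a concrete illustration, below is a minimal proximal-gradient sketch of the ℓ1/ℓ2 + ℓ1 regularized LR; it uses the standard sigmoid parametrization of the logistic loss and a fixed step size, and it is not the coordinate descent algorithm derived in the Appendix. Setting lam2 = 0 recovers LRG, and setting lam1 = 0 recovers LR1.

    import numpy as np

    def prox(beta, groups, lam1, lam2, step):
        """Proximal operator of step*(lam1*||.||_{1,2} + lam2*||.||_1):
        coordinate soft-thresholding followed by group soft-thresholding."""
        b = np.sign(beta) * np.maximum(np.abs(beta) - step * lam2, 0.0)
        for g in groups:                       # one index array per object
            norm = np.linalg.norm(b[g])
            b[g] = 0.0 if norm == 0.0 else b[g] * max(1.0 - step * lam1 / norm, 0.0)
        return b

    def fit_lrg1(X, y, groups, lam1, lam2, step=1e-3, iters=1000):
        """Fit beta for sparse-group regularized logistic regression."""
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-X @ beta))    # P(y = 1 | x, beta)
            grad = X.T @ (p - y) / len(y)          # gradient of the mean log loss
            beta = prox(beta - step * grad, groups, lam1, lam2, step)
        return beta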
5 Experiments and Results
Dataset. We evaluate the OB representation on 4 scene datasets, ranging from generic natural scene images (15-Scene, and the LabelMe 9-class scene dataset3), to cluttered indoor images (MIT Indoor Scene), and to complex event and activity images (UIUC-Sports). Scene classification performance is evaluated by the average multi-way classification accuracy over all scene classes in each dataset. We list below the experimental setting for each dataset:
• 15-Scene: This is a dataset of 15 natural scene classes. We use 100 images in each class for training and the rest for testing, following [19].
• LabelMe: This is a dataset of 9 classes. 50 images randomly drawn from each scene class are used for training and 50 for testing.
• MIT Indoor: This is a dataset of 15620 images over 67 indoor scenes assembled by [27]. We follow the experimental setting in [27] by using 80 images from each class for training and 20 for testing.
• UIUC-Sports: This is a dataset of 8 complex event classes. 70 randomly drawn images from each class are used for training and 60 for testing, following [22].
Experiment Setup. We compare OB in scene classification tasks with different types of conventional image features, such as SIFT-BoW [23, 3], GIST [25] and SPM [19]. An off-the-shelf SVM classifier and an in-house implementation of the logistic regression (LR) classifier were used on all feature representations being compared. We investigate the behaviors of different structural risk minimization schemes over LR on the OB representation. As introduced in Sec. 4, we experimented with ℓ1-regularized LR (LR1), ℓ1/ℓ2-regularized LR (LRG) and ℓ1/ℓ2 + ℓ1 regularized LR (LRG1).
5.1 Scene Classification
Fig. 3 summarizes the results on scene classification based on OB and a set of well-known low-level feature representations: GIST [25], Bag of Words (BOW) [3] and Spatial Pyramid Matching (SPM) [19] on four challenging scene datasets. We show the results of OB using both an LR classifier and a linear SVM.4 We achieve substantially superior performances on three out of four datasets, and are on par with the state of the art on the 15-Scene dataset. The substantial performance gain on the UIUC-Sports and the MIT-Indoor scene datasets illustrates the importance of using a semantically meaningful representation for complex scenes cluttered with objects. For example, the difference between a living room and a bedroom lies less in the overall texture (easily captured by BoW or GIST) and more in the different objects and their arrangements. This result underscores the effectiveness of OB, highlighting the fact that in high-level visual tasks such as complex scene recognition, a higher-level image representation can be very useful. We further decompose the spatial structure and semantic meaning encoded in OB by using a "pseudo" OB without semantic meaning. The significant improvement of OB in classification performance over the "pseudo" object bank is largely attributed to the effectiveness of using object detectors trained from images. For each of the existing scene datasets (UIUC-Sports, 15-Scene and MIT-Indoor), we also compare the reported state-of-the-art performances to our OB algorithm (using a standard LR classifier). This result is shown in Tab. 1.5

3 From 100 popular scene names, we obtained 9 classes from the LabelMe dataset in which there are more than 100 images: beach, mountain, bathroom, church, garage, office, sail, street, forest. The maximum number of images in those classes is 1000.
5.2 Control Experiment: Object Recognition by OB vs. Classemes [33]

OB is constructed from the responses of many objects, which encodes the semantic and spatial information of objects within images. It can be naturally applied to object recognition tasks. We compare the object recognition performance on the Caltech 256 dataset to [33], a high-level image representation obtained as the output of a large number of weakly trained object classifiers on the image. By encoding the spatial locations of the objects within an image, OB (39%) significantly outperforms [33] (36%) on the 256-way classification task, where performance is measured as the average of the diagonal values of a 256 × 256 confusion matrix.

                      15-Scene                 UIUC-Sports              MIT-Indoor
    state-of-the-art  72.2% [19], 81.1% [19]   66.0% [32], 73.4% [22]   26% [27]
    OB                80.9%                    76.3%                    37.6%

Table 1: Comparison of classification results using OB with reported state-of-the-art algorithms. Many of the algorithms use more complex models and supervised information, whereas our results are obtained by applying simple logistic regression.
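As a small illustration, this metric can be computed as below, assuming the confusion matrix counts the test images of each true class per row (our helper, not the authors' code):

    import numpy as np

    def mean_per_class_accuracy(confusion):
        """Average of the diagonal of a row-normalized confusion matrix,
        i.e. the mean per-class recognition accuracy (rows must be non-empty)."""
        rates = confusion / confusion.sum(axis=1, keepdims=True)
        return float(rates.diagonal().mean())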
5.3 Semantic Feature Sparsification Over OB
In this subsection, we systematically investigate semantic feature sparsification of the OB representation. We focus on the practical issues directly relevant to the effectiveness of the OB representation and the quality of feature sparsification, and study the following three aspects of the scene classifier: 1) robustness, 2) feasibility of lossless content-based compression, and 3) profitability over a growing OB, as well as the interpretability of the predictive features.
5.3.1 Robustness with Respect to Training Sample Size
Figure 4: (a) Classification performance (and standard deviation) w.r.t. the number of training images. Each pair of bars represents the performance of LR1 and LRG respectively; the x-axis is the ratio of the training images over the full training dataset (70 images/class). (b) Classification performance w.r.t. feature dimension; the x-axis is the size of the compressed feature dimension, represented as the ratio of the compressed feature dimension over the full OB representation dimension (44604). (c) Same as (b), shown on a log scale to contrast the performances of the different algorithms. (d) Classification performance w.r.t. the number of object filters; the x-axis is the number of object filters. Three rounds of randomized sampling are performed to choose the object filters from all the object detectors.
4 We also evaluate the classification performance of using the detected object location and its detection score of each object detector as the image representation. The classification performance of this representation is 62.0%, 48.3%, 25.1% and 54% on the 15-Scene, LabelMe, UIUC-Sports and MIT-Indoor datasets respectively.
5 We refer to the Appendix for a further discussion of the issue of comparing different algorithms based on different training strategies.

The intrinsic high dimensionality of the OB representation raises a legitimate concern about its demand on training sample size. We investigate the robustness of the logistic regression classifier built on features selected by LR1 and LRG in this experiment. We train LR1 and LRG on the UIUC-Sports
dataset by using multiple sizes of training examples, ranging from 25%, 50%, 75% to 100% of the
full training data.
As shown in Fig. 4(a), we observe only a moderate drop in performance when the number of training samples decreases from 100% to 25% of the training examples, suggesting that OB is a rich representation whose discriminating information resides in a lower-dimensional "informative" feature subspace, which is likely to be retained during feature sparsification, thereby ensuring robustness under small training data. We explore this issue further in the next experiment.
5.3.2 Near Losslessness of Content-based Compression via Regularized Learning
We believe that the OB can offer an overcomplete representation of any natural image. Therefore, there is great room for possibly (near) lossless content-based compression of the image features into a much lower-dimensional but equally discriminative subspace, where the key semantic information of the images is preserved and the quality of inference on images, such as scene classification, is not compromised significantly. Such compression can be attractive in reducing the representation cost of image queries and improving the speed of query inference.
In this experiment, we use the classification performance as a measurement to show how well different regularization schemes over LR preserve the discriminative power. For LR1, LRG and LRG1, cross-validation is used to decide the best regularization parameters. To study the extent of information loss as a function of the number of features being retained in the classifier, we re-train an LR classifier using features from the top x% of the rank list, where x is a compression scale ranging from 0.05% to 100%. One might think that LR itself, when fitted on the full input dimension, can also produce a rank list of features for subsequent selection. For comparison purposes, we also include results from the LR-ranked features; as can be seen in Fig. 4(b,c), its performance indeed drops faster than that of all the regularization methods.
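A minimal sketch of this retraining protocol, where fit_lr stands in for any plain (unregularized) logistic-regression fitter and is a hypothetical helper:

    import numpy as np

    def retrain_on_top_features(X, y, beta, keep_frac, fit_lr):
        """Rank features by |beta| from a regularized model, keep the top
        fraction, and refit a plain LR classifier on that subspace."""
        k = max(1, int(keep_frac * beta.size))
        top = np.argsort(-np.abs(beta))[:k]   # indices of the strongest features
        return top, fit_lr(X[:, top], y)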
In Fig. 4(b), we observe that the classification accuracy drops very slowly as the number of selected features decreases. By excluding 75% of the feature dimensions, the classification performance of each algorithm decreases by less than 3%. One point to notice here is that the non-zero entries only appear in dimensions corresponding to no more than 45 objects for LRG at this point. Even more surprisingly, LR1 and LRG preserve accuracies above 70% when 99% of the feature dimensions are excluded. Fig. 4(c) shows more detailed information in the low feature dimension range, which corresponds to a high compression ratio. We observe that the algorithms imposing sparsity on features (LR1, LRG, and LRG1) outperform the unregularized algorithm (LR) by a larger margin when the compression ratio becomes higher. This reflects that the sparsity learning algorithms are capable of learning a much lower-dimensional but highly discriminative subspace.
5.3.3 Profitability Over Growing OB
We envisage the Object Bank will grow rapidly and constantly as more and more labeled web images
become available. This will naturally lead to increasingly richer and higher-dimensional representations of images. We ask: are image inference tasks such as scene classification going to benefit from
this trend?
As group-regularized LR imposes sparsity at the object level, we choose to use it to investigate how the number of objects affects the discriminative power of the OB representation. To simulate what happens when the size of OB grows, we randomly sample subsets of object detectors at 1%, 5%, 10%, 25%, 50% and 75% of the total number of objects for multiple rounds. As shown in Fig. 4(d), the classification performance of LRG continuously increases as more objects are incorporated into the OB representation. We conjecture that this is due to the accumulation of discriminative object features, and we believe that future growth of OB will lead to stronger representation power and discriminability of image models built on OB.
5.4 Interpretability of the Compressed Representation
Intuitively, a few key objects can discriminate a scene class from another. In this experiment, we aim
to discover the object sparsity and investigate its interpretability. Again, we use group-regularized LR (LRG), since the sparsity is imposed at the object level and hence generates a more semantically meaningful compression.
Figure 6: Illustration of the learned β^OF by LRG1 within an object group. Columns from left to right correspond to "building" in the "church" scene, "tree" in "mountain", "cloud" in "beach", and "boat" in "sailing". Top row: weights of OB dimensions corresponding to different scales, from small to large. The weight of a scale is obtained by summing up the weights of all features corresponding to this scale in β^OF. Middle: heat map of feature weights in image space at the scale with the highest weight (purple bars above). We project the learned feature weights back to the image by reverting the OB extraction procedure. The purple bounding box shows the size of the object filter at this scale, centered at the peak of the heat map. Bottom: example scene images masked by the feature weights in image space (at the highest weighted scale), highlighting the most relevant object dimension.
We show in Fig. 5 the object-wise coefficients of the compression results for 4 sample scene classes (sailing, beach, church, mountain). The object weight is obtained by accumulating the coefficients of β^O over the feature dimensions of each object (at different scales and spatial locations) learned by LRG. Objects with all-zero coefficients in the resultant coefficient estimator are not displayed. Fig. 5 shows that objects that are "representative" for each scene are retained by LRG. For example, "sailboat", "boat", and "sky" are objects with very high weight in the "sailing" scene class. This suggests that the representation compression via LRG is virtually based upon the image content and is semantically meaningful; therefore, it is nearly "semantically lossless".

Figure 5: Object-wise coefficients given scene class (sailing, beach, church, mountain). Selected objects correspond to non-zero β values learned by LRG.
Knowing the important objects learned by the compression algorithm, we further investigate the discriminative dimensions within the object level. We use LRG1 to examine the learned weights within an object. In Sec. 3, we introduced that each feature dimension in the OB representation is directly related to a specific scale, geometric location and object identity. Hence, the weights in β^OF reflect the importance of an object at a certain scale and location. To verify this hypothesis, we examine the importance of objects across scales by summing up the weights of the related spatial locations and pyramid resolutions. We show one representative object in a scene and visualize the feature patterns within the object group. As shown in Fig. 6 (top), LRG1 has achieved joint object/feature sparsification by zeroing out less relevant scales; thus only the most discriminative scales are retained. To analyze how β^OF reflects geometric location, we further project the learned coefficients back to the image space by reversing the OB representation extraction procedure. In Fig. 6 (middle), we observe that the regions with high intensities are also the locations where the object frequently appears. For example, cloud usually appears in the upper half of a scene in the beach class.
6 Conclusion
As we try to tackle higher-level visual recognition problems, we show that the Object Bank representation is powerful on scene classification tasks because it carries rich semantic-level image information. We also apply structured regularization schemes on the OB representation, and achieve nearly lossless semantic-preserving compression. In the future, we will further test the OB representation in other useful vision applications, as well as other interesting structural regularization schemes.
Acknowledgments. L. F-F is partially supported by an NSF CAREER grant (IIS-0845230), a Google research award, and a Microsoft Research Fellowship. E. X is supported by AFOSR FA9550010247, ONR N0001140910758, NSF CAREER DBI-0546594, NSF IIS-0713379 and an Alfred P. Sloan Fellowship. We thank Wei Yu, Jia Deng, Olga Russakovsky, Bangpeng Yao, Barry Chai, Yongwhan Lim, and anonymous reviewers for helpful comments.
References
[1] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE PAMI, pages 509–522, 2002.
[2] L. Bourdev and J. Malik. Poselets: Body Part Detectors Trained Using 3D Human Pose Annotations.
ICCV, 2009.
[3] G. Csurka, C. Bray, C. Dance, and L. Fan. Visual categorization with bags of keypoints. Workshop on
Statistical Learning in Computer Vision, ECCV, 2004.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. CVPR, 2005.
[5] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical
Image Database. CVPR, 2009.
[6] L. Fei-Fei, R. Fergus, and P. Perona. One-Shot learning of object categories. TPAMI, 2006.
[7] L. Fei-Fei, R. Fergus, and A. Torralba. Recognizing and learning object categories. Short Course CVPR
[8] A. Farhadi, I. Endres, D. Hoiem and D. Forsyth. Describing objects by their attributes. CVPR, 2009.
[9] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object Detection with Discriminatively
Trained Part Based Models. JAIR, 29, 2007.
[10] W.T. Freeman and E.H. Adelson. The design and use of steerable filters. IEEE PAMI, 1991.
[11] G. Griffin, A. Holub, and P. Perona. Caltech-256 Object Category Dataset. 2007.
[12] A. Hauptmann, R. Yan, W. Lin, M. Christel, and H. Wactlar. Can high-level concepts fill the semantic
gap in video retrieval? a case study with broadcast news. IEEE TMM, 9(5):958, 2007.
[13] D. Hoiem, A.A. Efros, and M. Hebert. Automatic photo pop-up. SIGGRAPH 2005, 24(3):577–584, 2005.
[14] D. Hoiem, A.A. Efros, and M. Hebert. Putting Objects in Perspective. CVPR, 2006.
[15] T. Kadir and M. Brady. Scale, saliency and image description. IJCV, 45(2):83–105, 2001.
[16] N. Kumar, A. C. Berg, P. N. Belhumeur and S. K. Nayar. Attribute and Simile Classifiers for Face
Verification. ICCV, 2009.
[17] C.H. Lampert, H. Nickisch and S. Harmeling. Learning to detect unseen object classes by between-class
attribute transfer. CVPR, 2009.
[18] C.H. Lampert, M.B. Blaschko, T. Hofmann, and S. Zurich. Beyond sliding windows: Object localization
by efficient subwindow search. CVPR, 2008.
[19] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing
natural scene categories. CVPR, 2006.
[20] H. Lee, R. Grosse, R. Ranganath and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. ICML, 2009.
[21] D. Lewis. Naive (Bayes) at Forty: The Independence Assumption in Information Retrieval. ECML, 1998.
[22] L-J. Li and L. Fei-Fei. What, where and who? classifying events by scene and object recognition. ICCV,
2007.
[23] D. Lowe. Object recognition from local scale-invariant features. ICCV, 1999.
[24] K. Mikolajczyk and C. Schmid. An affine invariant interest point detector. ECCV, 2002.
[25] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. IJCV, 42, 2001.
[26] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. PAMI, 1990.
[27] A. Quattoni and A. Torralba. Recognizing indoor scenes. CVPR, 2009.
[28] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora and S. Belongie. Objects in context. ICCV,
2007.
[29] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for multi-class object layout. ICCV, 2009.
[30] B.C. Russell, A. Torralba, K.P. Murphy, and W.T. Freeman. Labelme: a database and web-based tool for
image annotation. MIT AI Lab Memo, 2005.
[31] L. von Ahn. Games with a purpose. Computer, 39(6):92–94, 2006.
[32] C. Wang, D. Blei, and L. Fei-Fei. Simultaneous image classification and annotation. CVPR, 2009.
[33] L. Torresani, M. Szummer, and A. Fitzgibbon. Efficient Object Category Recognition Using Classemes. European Conference on Computer Vision (ECCV), pages 776–789, 2010.
[34] P. Ravikumar, M. Wainwright, and J. Lafferty. High-Dimensional Ising Model Selection Using L1-Regularized Logistic Regression. Annals of Statistics, 2009.
[35] J. Vogel and B. Schiele. Semantic modeling of natural scenes for content-based image retrieval. International Journal of Computer Vision, 2007.
[36] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the
Royal Statistical Society: Series B (Statistical Methodology), 2006.
3,323 | 4,009 | Co-regularization Based Semi-supervised Domain Adaptation
Hal Daumé III
Department of Computer Science
University of Maryland CP, MD, USA
[email protected]
Abhishek Kumar
Department of Computer Science
University of Maryland CP, MD, USA
[email protected]
Avishek Saha
School Of Computing
University of Utah, UT, USA
[email protected]
Abstract
This paper presents a co-regularization based approach to semi-supervised domain adaptation. Our
proposed approach (EA++) builds on the notion of augmented space (introduced in EasyAdapt (EA) [1]) and harnesses unlabeled data in target domain to further assist the transfer of information
from source to target. This semi-supervised approach to domain adaptation is extremely simple to
implement and can be applied as a pre-processing step to any supervised learner. Our theoretical
analysis (in terms of Rademacher complexity) of EA and EA++ show that the hypothesis class of
EA++ has lower complexity (compared to EA) and hence results in tighter generalization bounds.
Experimental results on sentiment analysis tasks reinforce our theoretical findings and demonstrate
the efficacy of the proposed method when compared to EA as well as few other representative
baseline approaches.
1 Introduction
A domain adaptation approach for NLP tasks, termed EasyAdapt (EA), augments the source domain feature space
using features from labeled data in target domain [1]. EA is simple, easy to extend and implement as a preprocessing
step and most importantly is agnostic of the underlying classifier. However, EA requires labeled data in both source
and target, and hence applies to fully supervised domain adaptation settings only. In this paper,¹ we propose a semi-supervised² approach to leverage unlabeled data for EasyAdapt (which we call EA++) and theoretically, as well as
empirically, demonstrate its superior performance over EA.
There exists prior work on supervised domain adaptation (and multi-task learning) that can be related to EasyAdapt.
An algorithm for multi-task learning using shared parameters was proposed for multi-task regularization [3] wherein
each task parameter was represented as sum of a mean parameter (that stays same for all tasks) and its deviation
from this mean. SVMs were used as the base classifiers and the algorithm was formulated in the standard SVM dual
optimization setting. Subsequently, this framework was extended to online multi-domain setting in [4]. Prior work
on semi-supervised approaches to domain adaptation also exists in literature. Extraction of specific features from the
available dataset was proposed [5, 6] to facilitate the task of domain adaptation. Co-adaptation [7], a combination of
co-training and domain adaptation, can also be considered as a semi-supervised approach to domain adaptation. A
semi-supervised EM algorithm for domain adaptation was proposed in [8]. Similar to graph based semi-supervised
approaches, a label propagation method was proposed [9] to facilitate domain adaptation. Domain Adaptation Machine (DAM) [10] is a semi-supervised extension of SVMs for domain adaptation and presents extensive empirical
results. Nevertheless, in almost all of the above cases, the proposed methods either use specifics of the datasets or are
customized for some particular base classifier and hence it is not clear how the proposed methods can be extended to
other existing classifiers.
¹ A preliminary version [2] of this work appeared in the DANLP workshop at ACL 2010.
² We define supervised domain adaptation as having labeled data in both source and target and unsupervised domain adaptation as having labeled data in only source. In semi-supervised domain adaptation, we also have access to both labeled and unlabeled data in target.
As mentioned earlier, EA is remarkably general in the sense that it can be used as a pre-processing step in conjunction
with any base classifier. However, one of the prime limitations of EA is its incapability to leverage unlabeled data.
Given its simplicity and generality, it would be interesting to extend EA to semi-supervised settings. In this paper, we
propose EA++, a co-regularization based semi-supervised extension to EA. We also present Rademacher complexity based generalization bounds for EA and EA++. Our generalization bounds also apply to the approach proposed
in [3] for domain adaptation setting, where we are only concerned with the error on target domain. The closest to our
work is a recent paper [11] that theoretically analyzes EasyAdapt. Their paper investigates the necessity to combine supervised and unsupervised domain adaptation (which the authors refer to as labeled and unlabeled adaptation
frameworks, respectively) and analyzes the combination using mistake bounds (which is limited to perceptron-based
online scenarios). In addition, their work points out that EasyAdapt is limited to only supervised domain adaptation.
On the contrary, our work extends EasyAdapt to semi-supervised settings and presents generalization bound based
theoretical analysis which specifically demonstrate why EA++ is better than EA.
2 Background
In this section, we introduce notations and provide a brief overview of EasyAdapt [1].
2.1 Problem Setup and Notations
Let X ⊂ R^d denote the instance space and Y = {−1, +1} denote the label space. Let D_s(x, y) be the source
distribution and D_t(x, y) be the target distribution. We have a set of source labeled examples L_s (∼ D_s(x, y)) and a
set of target labeled examples L_t (∼ D_t(x, y)), where |L_s| = l_s ≫ |L_t| = l_t. We also have target unlabeled data
denoted by U_t (∼ D_t(x)), where |U_t| = u_t. Our goal is to learn a hypothesis h : X ↦ Y having low expected error
with respect to the target domain. In this paper, we consider linear hypotheses only. However, the proposed techniques
extend to non-linear hypotheses, as mentioned in [1]. Source and target empirical errors for hypothesis h are denoted
by ε̂_s(h, f_s) and ε̂_t(h, f_t) respectively, where f_s and f_t are the true source and target labeling functions. Similarly,
the corresponding expected errors are denoted by ε_s(h, f_s) and ε_t(h, f_t). We will use the shorthand notations ε̂_s, ε̂_t, ε_s
and ε_t wherever the intention is clear from context.
2.2 EasyAdapt (EA)
Let us denote R^d as the original space. EA operates in an augmented space denoted by X̆ ⊂ R^{3d} (for a single pair of
source and target domain). For k domains, the augmented space blows up to R^{(k+1)d}. The augmented feature maps
Φ_s, Φ_t : X ↦ X̆ for source and target domains are defined as Φ_s(x) = ⟨x, x, 0⟩ and Φ_t(x) = ⟨x, 0, x⟩, where x
and 0 are vectors in R^d, and 0 denotes a zero vector of dimension d. The first d-dimensional segment corresponds to
commonality between source and target, the second d-dimensional segment corresponds to the source domain while
the last segment corresponds to the target domain. Source and target domain examples are transformed using these
feature maps and the augmented features so constructed are passed onto the underlying supervised classifier. One of
the most appealing properties of EasyAdapt is that it is agnostic of the underlying supervised classifier being used to
learn in the augmented space. Almost any standard supervised learning approach (e.g., SVMs, perceptrons) can be
used to learn a linear hypothesis h̆ ∈ R^{3d} in the augmented space. Let us denote h̆ = ⟨g_c, g_s, g_t⟩, where each of g_c,
g_s, g_t is of dimension d, and they represent the common, source-specific and target-specific components of h̆, respectively.
During prediction on target data, the incoming target sample x is transformed to obtain Φ_t(x) and h̆ is applied on this
transformed sample. This is equivalent to applying (g_c + g_t) on x. An intuitive insight into why this simple algorithm
works so well in practice and outperforms most state-of-the-art algorithms is given in [1]. Briefly, it can be thought
to be simultaneously training two hypotheses: h_s = (g_c + g_s) for the source domain and h_t = (g_c + g_t) for the target
domain. The commonality between the domains is represented by g_c whereas g_s and g_t capture the idiosyncrasies of
the source and target domain, respectively.
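For concreteness, the two feature maps reduce to a few lines of array manipulation. The sketch below is ours (Python/NumPy; the helper names are our own, not from [1]):

    import numpy as np

    def augment_source(X):
        # Phi_s: map each source sample x to <x, x, 0> in R^{3d}
        return np.hstack([X, X, np.zeros_like(X)])

    def augment_target(X):
        # Phi_t: map each target sample x to <x, 0, x> in R^{3d}
        return np.hstack([X, np.zeros_like(X), X])

Any off-the-shelf linear learner can then be fit on the stacked augmented samples, and target test points are passed through augment_target before prediction.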
3 EA++: EA using unlabeled data
As discussed in the previous section, the EasyAdapt algorithm is attractive because it performs very well empirically
and can be used in conjunction with any underlying supervised linear classifier. One drawback of EasyAdapt is its
inability to leverage unlabeled target data which is usually available in large quantities in most practical scenarios. In
this section, we extend EA to semi-supervised settings while maintaining the desirable classifier-agnostic property.
3.1 Motivation
In the multi-view approach to semi-supervised learning [12], different hypotheses are learned using different views of
the dataset. Thereafter, unlabeled data is utilized to co-regularize these learned hypotheses by making them agree on
unlabeled samples. In domain adaptation, the source and target data come from two different distributions. However,
if the source and target domains are reasonably close, we can employ a similar form of regularization using unlabeled
data. A prior co-regularization based idea to harness unlabeled data in domain adaptation tasks demonstrated improved
empirical results [10]. However, their technique applies only to the particular base classifier they consider and hence does
not extend to other supervised classifiers.
3.2 EA++: EasyAdapt with unlabeled data
In our proposed semi-supervised approach, the source and target hypotheses are made to agree on unlabeled data.
We refer to this algorithm as EA++. Recall that EasyAdapt learns a linear hypothesis h̆ ∈ R^{3d} in the augmented
space. The hypothesis h̆ contains common, source-specific and target-specific sub-hypotheses and is expressed as
h̆ = ⟨g_c, g_s, g_t⟩. In the original space (ref. Section 2.2), this is equivalent to learning a source-specific hypothesis
h_s = (g_c + g_s) and a target-specific hypothesis h_t = (g_c + g_t).
In EA++, we want the source hypothesis h_s and the target hypothesis h_t to agree on the unlabeled data. For an
unlabeled target sample x_i ∈ U_t ⊂ R^d, the goal of EA++ is to make the predictions of h_s and h_t on x_i agree with
each other. Formally, it aims to achieve the following condition:

    h_s · x_i ≈ h_t · x_i ⟺ (g_c + g_s) · x_i ≈ (g_c + g_t) · x_i ⟺ (g_s − g_t) · x_i ≈ 0 ⟺ ⟨g_c, g_s, g_t⟩ · ⟨0, x_i, −x_i⟩ ≈ 0.   (3.1)

The above expression leads to the definition of a new feature map Φ_u : X ↦ X̆ for unlabeled data, given by Φ_u(x) = ⟨0, x, −x⟩. Every unlabeled target sample is transformed using the map Φ_u(·). The augmented feature space that
results from the application of the three feature maps, namely Φ_s(·), Φ_t(·) and Φ_u(·), on source labeled samples, target
labeled samples and target unlabeled samples is summarized in Figure 1(a).
As shown in Eq. 3.1, during the training phase, EA++ assigns a predicted value close to 0 to each unlabeled sample.
However, it is worth noting that during the test phase, EA++ predicts labels from two classes: +1 and −1. This
warrants further exposition of the implementation specifics, which is deferred until the next subsection.
Figure 1: (a) Diagrammatic representation of feature augmentation in EA and EA++. (b) Loss functions for class +1,
class −1, and their summation.
3.3 Implementation
In this section, we present implementation-specific details of EA++. For concreteness, we consider SVM as the base
supervised learner. However, these details hold for other supervised linear classifiers. In the dual form of the SVM
optimization function, the labels are multiplied with the features. Since we want the predicted labels for unlabeled data
to be 0 (according to Eq. 3.1), multiplication by zero would make the unlabeled samples ineffective in the dual form of
the cost function. To avoid this, we create as many copies of Φ_u(x) as there are labels and assign each label to one
copy of Φ_u(x). For the case of binary classification, we create two copies of every augmented unlabeled sample, and
assign the +1 label to one copy and −1 to the other. The learner attempts to balance the loss of the two copies, and tries
to make the prediction on each unlabeled sample equal to 0. Figure 1(b) shows the curves of the hinge loss for class +1,
class −1, and their summation. The effective loss for each unlabeled sample is the sum of the losses for the +1 and
−1 classes (shown as the summation curve in Figure 1(b)).
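The duplication trick can be sketched as follows (ours; LinearSVC from scikit-learn is one possible base learner, and the augment_* helpers are the hypothetical ones sketched earlier):

    import numpy as np
    from sklearn.svm import LinearSVC

    def train_eapp(Xs, ys, Xt, yt, Xu):
        # Stack augmented labeled data with two oppositely labeled copies
        # of every augmented unlabeled sample.
        X = np.vstack([augment_source(Xs),
                       augment_target(Xt),
                       augment_unlabeled(Xu),    # copy labeled +1
                       augment_unlabeled(Xu)])   # copy labeled -1
        y = np.concatenate([ys, yt, np.ones(len(Xu)), -np.ones(len(Xu))])
        return LinearSVC().fit(X, y)

Balancing the hinge losses of the two copies is what drives the prediction on each unlabeled sample toward 0.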
4 Generalization Bounds
In this section, we present Rademacher complexity based generalization bounds for EA and EA++. First, we define
hypothesis classes for EA and EA++ using an alternate formulation. Second, we present a theorem (Theorem 4.1)
which relates empirical and expected error for the general case and hence applies to both the source and target domains.
Third, we prove Theorem 4.2 which relates the expected target error to the expected source error. Fourth, we present
Theorem 4.3 which combines Theorem 4.1 and Theorem 4.2 so as to relate the expected target error to empirical
errors in source and target (which is the main goal of the generalization bounds presented in this paper). Finally, all
that remains is to bound the Rademacher complexity of the various hypothesis classes.
4.1 Define Hypothesis Classes for EA and EA++
Our goal now is to define the hypothesis classes for EA and EA++ so as to make the theoretical analysis feasible.
Both EA and EA++ train hypotheses in the augmented space X̆ ⊂ R^{3d}. The augmented hypothesis h̆ is trained
using data from both domains, and the three d-dimensional sub-hypotheses (g_c, g_s, g_t) are treated in a different
manner for source and target data. We use an alternate formulation of the hypothesis classes and work in the original
space X ⊂ R^d. As discussed briefly in Section 2.2, EA can be thought to be simultaneously training two hypotheses
h_s = (g_c + g_s) and h_t = (g_c + g_t) for source and target domains, respectively. We consider the case when the
underlying supervised classifier in the augmented space uses a square L2-norm regularizer of the form ‖h̆‖² (as used in
SVM). This is equivalent to imposing the regularizer (‖g_c‖² + ‖g_s‖² + ‖g_t‖²) = (‖g_c‖² + ‖h_s − g_c‖² + ‖h_t − g_c‖²).
Differentiating this regularizer w.r.t. g_c gives g_c = (h_s + h_t)/3 at the minimum, and the regularizer reduces to
(1/3)(‖h_s‖² + ‖h_t‖² + ‖h_s − h_t‖²). Thus, EA can be thought to be minimizing the sum of the empirical source error on
h_s, the empirical target error on h_t, and this regularizer. The cost function Q_EA(h1, h2) can now be written as:

    Q_EA(h1, h2) = α ε̂_s(h1) + (1 − α) ε̂_t(h2) + λ1‖h1‖² + λ2‖h2‖² + λ‖h1 − h2‖²,   and   (h_s, h_t) = argmin_{h1,h2} Q_EA.   (4.1)
The EA algorithm minimizes this cost function jointly over h1 and h2 to obtain h_s and h_t. The EA++ algorithm
uses target unlabeled data, and encourages h_s and h_t to agree on unlabeled samples (Eq. 3.1). This can be thought of
as having an additional regularizer of the form Σ_{i∈U_t}(h_s(x_i) − h_t(x_i))² in the cost function. The cost function for
EA++ (denoted as Q_++(h1, h2)) can then be written as:

    Q_++(h1, h2) = α ε̂_s(h1) + (1 − α) ε̂_t(h2) + λ1‖h1‖² + λ2‖h2‖² + λ‖h1 − h2‖² + λ_u Σ_{i∈U_t}(h1(x_i) − h2(x_i))².   (4.2)
Both EA and EA++ give equal weights to the source and target empirical errors, so α turns out to be 0.5. We use
hyperparameters λ1, λ2, λ, and λ_u in the cost functions to make them more general. However, as explained earlier,
EA implicitly sets all these hyperparameters (λ1, λ2, λ) to the same value (which will be 0.5(1/3) = 1/6 in our case,
since the weights in the entire cost function are multiplied by α = 0.5). The hyperparameter for unlabeled data (λ_u)
is 0.5 in EA++. We assume that the loss L(y, h·x) is bounded by 1 for the zero hypothesis h = 0. This is true for
many popular loss functions, including the square loss and hinge loss, when y ∈ {−1, +1}. One possible way [13] of
defining the hypothesis classes is to substitute the trivial hypotheses h1 = h2 = 0 in both cost functions, which makes
all regularizers and co-regularizers equal to zero and thus bounds the cost functions Q_EA and Q_++. This gives us
Q_EA(0, 0) ≤ 1 and Q_++(0, 0) ≤ 1, since ε̂_s(0), ε̂_t(0) ≤ 1. Without loss of generality, we also assume that the final
source and target hypotheses can only reduce the cost function as compared to the zero hypotheses. Hence, the final
hypothesis pair (h_s, h_t) that minimizes the cost functions is contained in the following paired hypothesis classes for
EA and EA++:
    H := {(h1, h2) : λ1‖h1‖² + λ2‖h2‖² + λ‖h1 − h2‖² ≤ 1}
    H_++ := {(h1, h2) : λ1‖h1‖² + λ2‖h2‖² + λ‖h1 − h2‖² + λ_u Σ_{i∈U_t}(h1(x_i) − h2(x_i))² ≤ 1}   (4.3)
The source hypothesis class for EA is the set of all h1 such that the pair (h1 , h2 ) is in H. Similarly, the target hypothesis
class for EA is the set of all h2 such that the pair (h1 , h2 ) is in H. Consequently, the source and target hypothesis
classes for EA can be defined as:
    J_EA^s := {h1 : X ↦ R, (h1, h2) ∈ H}   and   J_EA^t := {h2 : X ↦ R, (h1, h2) ∈ H}   (4.4)
Similarly, the source and target hypothesis classes for EA++ are defined as:
    J_++^s := {h1 : X ↦ R, (h1, h2) ∈ H_++}   and   J_++^t := {h2 : X ↦ R, (h1, h2) ∈ H_++}   (4.5)
Furthermore, we assume that our hypothesis class is comprised of real-valued functions over an RKHS with reproducing kernel k(·, ·), k : X × X ↦ R. Let us define the kernel matrix and partition it corresponding to source labeled,
target labeled and target unlabeled data as shown below:

    K = ( A_{s×s}  C_{s×t}  D_{s×u}
          C′_{t×s} B_{t×t}  E_{t×u}
          D′_{u×s} E′_{u×t} F_{u×u} ),   (4.6)

where 's', 't' and 'u' indicate terms corresponding to source labeled, target labeled and target unlabeled data, respectively.
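A small sketch (ours) of this partitioning, for a Gram matrix whose rows are ordered as source labeled, target labeled, target unlabeled:

    def partition_kernel(K, ls, lt):
        A = K[:ls, :ls]              # source x source
        B = K[ls:ls+lt, ls:ls+lt]    # target labeled x target labeled
        F = K[ls+lt:, ls+lt:]        # target unlabeled x target unlabeled
        C = K[:ls, ls:ls+lt]         # source x target labeled
        D = K[:ls, ls+lt:]           # source x target unlabeled
        E = K[ls:ls+lt, ls+lt:]      # target labeled x target unlabeled
        return A, B, C, D, E, F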
4.2 Relate empirical and expected error (for both source and target)
Having defined the hypothesis classes, we now proceed to obtain generalization bounds for EA and EA++. We have
the following standard generalization bound based on the Rademacher complexity of a hypothesis class [13].
Theorem 4.1. Suppose the uniform Lipschitz condition holds for L : Y² ↦ [0, 1], i.e., |L(ŷ1, y) − L(ŷ2, y)| ≤ M|ŷ1 − ŷ2|, where y, ŷ1, ŷ2 ∈ Y and ŷ1 ≠ ŷ2. Then for any δ ∈ (0, 1) and for m samples
(X1, Y1), (X2, Y2), ..., (Xm, Ym) drawn i.i.d. from distribution D, we have with probability at least (1 − δ) over
random draws of samples,

    ε(f) ≤ ε̂(f) + 2M R̂_m(F) + (1/√m)(2 + 3√(ln(2/δ)/2)),

where f ∈ F is the class of functions mapping X ↦ Y, and R̂_m(F) is the empirical Rademacher complexity of F,
defined as R̂_m(F) := E_σ[sup_{f∈F} |(2/m) Σ_{i=1}^m σ_i f(x_i)|].
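The empirical Rademacher complexity in Theorem 4.1 can be approximated numerically. The sketch below (ours) estimates the expectation over σ by Monte Carlo and approximates the supremum over F by a finite sample of hypotheses:

    import numpy as np

    def empirical_rademacher(preds, n_draws=1000, seed=0):
        # preds: (n_hypotheses, m) array holding f(x_i) for sampled hypotheses f
        rng = np.random.default_rng(seed)
        n_h, m = preds.shape
        total = 0.0
        for _ in range(n_draws):
            sigma = rng.choice([-1.0, 1.0], size=m)     # Rademacher variables
            total += np.abs(preds @ sigma).max() * 2.0 / m
        return total / n_draws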
If we can bound the complexity of the hypothesis classes J_EA^s and J_EA^t, we will have a uniform convergence bound on
the difference of expected and empirical errors (|ε_t(h) − ε̂_t(h)| and |ε_s(h) − ε̂_s(h)|) using Theorem 4.1. However, in the
domain adaptation setting, we are also interested in bounds that relate the expected target error to the total empirical error
on source and target samples. The following sections aim to achieve this goal.
4.3 Relate source expected error and target expected error
The following theorem provides a bound on the difference of the expected target error and the expected source error. The
bound is in terms of ν_s := ε_s(f_s, f_t), β_s := ε_s(h_t*, f_t) and β_t := ε_t(h_t*, f_t), where f_s and f_t are the source and target
labeling functions, and h_t* is the optimal target hypothesis in the target hypothesis class. It also uses the d_{HΔH}(D_s, D_t)-distance [14], which is defined as sup_{h1,h2∈H} 2|ε_s(h1, h2) − ε_t(h1, h2)|. The d_{HΔH}-distance measures the distance
between two distributions using a hypothesis-class-specific distance measure. If the two domains are close to each
other, ν_s and d_{HΔH}(D_s, D_t) are expected to be small. On the contrary, if the domains are far apart, these terms will
be big and the use of extra source samples may not help in learning a better target hypothesis. These two terms also
represent the notion of adaptability in our case.
Theorem 4.2. Suppose the loss function is M-Lipschitz as defined in Theorem 4.1, and obeys the triangle inequality. For
any two source and target hypotheses h_s, h_t (which belong to different hypothesis classes), we have

    ε_t(h_t, f_t) − ε_s(h_s, f_s) ≤ M‖h_t − h_s‖ E_s[√k(x, x)] + (1/2) d_{H_tΔH_t}(D_s, D_t) + ν_s + β_s + β_t,

where H_t is the target hypothesis class, and k(·, ·) is the reproducing kernel for the RKHS. ν_s, β_s, and β_t are defined
as above.
Proof. Please see Appendix A in the supplement.
4.4 Relate target expected error with source and target empirical errors
EA and EA++ learn source and target hypotheses jointly. So the empirical error in one domain is expected to have
its effect on the generalization error in the other domain. In this section, we aim to bound the target expected error in
terms of source and target empirical errors. The following theorem achieves this goal.
Theorem 4.3. Under the assumptions and definitions used in Theorems 4.1 and 4.2, with probability at least
1 − δ we have

    ε_t(h_t, f_t) ≤ (1/2)(ε̂_s(h_s, f_s) + ε̂_t(h_t, f_t)) + (1/2)(2M R̂_m(H_s) + 2M R̂_m(H_t)) + (1/2)(1/√l_s + 1/√l_t)(2 + 3√(ln(2/δ)/2))
                  + (1/2) M‖h_t − h_s‖ E_s[√k(x, x)] + (1/4) d_{H_tΔH_t}(D_s, D_t) + (1/2)(ν_s + β_s + β_t)

for any h_s and h_t. H_s and H_t are the source hypothesis class and the target hypothesis class, respectively.
Proof. We first use Theorem 4.1 to bound (ε_t(h_t) − ε̂_t(h_t)) and (ε_s(h_s) − ε̂_s(h_s)). The above theorem directly follows
by combining these two bounds and Theorem 4.2.
This bound provides a better understanding of how the target expected error is governed by both source and target
empirical errors and by the hypothesis class complexities. This behavior is expected since both EA and EA++ learn source
and target hypotheses jointly. We also note that the bound in Theorem 4.3 depends on ‖h_s − h_t‖, which might
give the impression that the best possible thing to do is to make the source and target hypotheses equal. However, due
to the joint learning of source and target hypotheses (by optimizing the cost function of Eq. 4.1), making the source and
target hypotheses close will increase the source empirical error, thus loosening the bound of Theorem 4.3. Noticing
that ‖h_s − h_t‖² ≤ λ^{−1} for both EA and EA++, the bound can be made independent of ‖h_s − h_t‖, although with a
sacrifice on the tightness. We note that Theorem 4.1 can also be used to bound the target generalization error of EA
and EA++ in terms of only target empirical error. However, if the number of labeled target samples is extremely
low, this bound can be loose due to the inverse dependency on the number of target samples. Theorem 4.3 bounds the target
expected error using the averages of empirical errors, Rademacher complexities, and sample dependent terms. If the
domains are reasonably close and the number of labeled source samples is much higher than target samples, this can
provide a tighter bound compared to Theorem 4.1.
Finally, we need the Rademacher complexities of source and target hypothesis classes (for both EA and EA++) to be
able to use Theorem 4.3, which are provided in the next sections.
4.5 Bound the Complexity of EA and EA++ Hypothesis Classes
The following theorems bound the Rademacher complexity of the target hypothesis classes for EA and EA++.
4.5.1 EasyAdapt (EA)
Theorem 4.4. For the hypothesis class J_EA^t defined in Eq. 4.4 we have

    (1/4) · 2C_EA^t/(√2 l_t) ≤ R̂_m(J_EA^t) ≤ 2C_EA^t/l_t,

where R̂_m(J_EA^t) = E_σ[sup_{h2∈J_EA^t} |(2/l_t) Σ_i σ_i h2(x_i)|], (C_EA^t)² = [λ2 + (1/λ1 + 1/λ)^{−1}]^{−1} tr(B), and B is the kernel sub-matrix defined as in Eq. 4.6.
Proof. Please see Appendix B in the supplement.
The complexity of the target class decreases with an increase in the values of the hyperparameters. It decreases more rapidly
with a change in λ2 compared to λ and λ1, which is also expected since λ2 is the hyperparameter directly influencing
the target hypothesis. The kernel block sub-matrix corresponding to source samples does not appear in the bound.
This result in conjunction with Theorem 4.1 gives a bound on the target generalization error.
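Taking the transcription above at face value, the upper bound of Theorem 4.4 is cheap to evaluate; a sketch (ours, in NumPy):

    import numpy as np

    def ea_target_bound(B, lam1, lam2, lam, lt):
        # (C_EA^t)^2 = tr(B) / (lam2 + (1/lam1 + 1/lam)^{-1})
        c_sq = np.trace(B) / (lam2 + 1.0 / (1.0 / lam1 + 1.0 / lam))
        return 2.0 * np.sqrt(c_sq) / lt   # upper bound on R_hat_m(J_EA^t)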
To be able to use the bound of Theorem 4.3, we need the Rademacher complexity of the source hypothesis class.
Due to the symmetry of the paired hypothesis class (Eq. 4.3) in h1 and h2 up to scalar parameters, the complexity of the
source hypothesis class can be similarly bounded by (1/4) · 2C_EA^s/(√2 l_s) ≤ R̂_m(J_EA^s) ≤ 2C_EA^s/l_s, where
(C_EA^s)² = [λ1 + (1/λ2 + 1/λ)^{−1}]^{−1} tr(A), and A is the kernel block sub-matrix corresponding to source samples.
4.5.2 EasyAdapt++ (EA++)
Theorem 4.5. For the hypothesis class J_++^t defined in Eq. 4.5 we have

    (1/4) · 2C_++^t/(√2 l_t) ≤ R̂_m(J_++^t) ≤ 2C_++^t/l_t,

where R̂_m(J_++^t) = E_σ[sup_{h2∈J_++^t} |(2/l_t) Σ_i σ_i h2(x_i)|] and

    (C_++^t)² = [λ2 + (1/λ1 + 1/λ)^{−1}]^{−1} tr(B) − (λ_u λ1²/(λλ1 + λλ2 + λ1λ2)²) tr(E(I + kF)^{−1}E′),   where k = λ_u(λ1 + λ2)/(λλ1 + λλ2 + λ1λ2).
Proof. Please see Appendix C in the Supplement.
The second term in (C_++^t)² is always positive since the trace of a positive definite matrix is positive. So, the unlabeled
data results in a reduction of complexity over the labeled-data case (Theorem 4.4). The trace term in the reduction
can also be written as Σ_i ‖E_i‖²_{(I+kF)^{−1}}, where E_i is the i-th column of matrix E and ‖·‖²_Z is the norm induced by a
positive definite matrix Z. Since E_i is the vector of inner products of the i-th target sample with all unlabeled
samples, this means that the reduction in complexity is proportional to the similarity between target unlabeled samples
and target labeled samples. This result in conjunction with Theorem 4.1 gives a bound on the target generalization
error in terms of the target empirical error.
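Under the same caveat (the constants here follow our transcription of Theorem 4.5 and should be checked against the original), the bound with its unlabeled-data reduction can be evaluated as follows; a linear solve avoids forming (I + kF)^{-1} explicitly:

    import numpy as np

    def eapp_target_bound(B, E, F, lam1, lam2, lam, lam_u, lt):
        denom = lam * lam1 + lam * lam2 + lam1 * lam2
        k = lam_u * (lam1 + lam2) / denom
        base = np.trace(B) / (lam2 + 1.0 / (1.0 / lam1 + 1.0 / lam))
        # tr(E (I + kF)^{-1} E') computed via a solve
        M = np.linalg.solve(np.eye(F.shape[0]) + k * F, E.T)
        reduction = lam_u * lam1**2 / denom**2 * np.trace(E @ M)
        return 2.0 * np.sqrt(max(base - reduction, 0.0)) / lt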
To be able to use the bound of Theorem 4.3, we need the Rademacher complexity of the source hypothesis class too.
Again, as in the case of EA, using the symmetry of the paired hypothesis class H_++ (Eq. 4.3) in h1 and h2 up to scalar
parameters, the complexity of the source hypothesis class can be similarly bounded by (1/4) · 2C_++^s/(√2 l_s) ≤ R̂_m(J_++^s) ≤ 2C_++^s/l_s,
where (C_++^s)² = [λ1 + (1/λ2 + 1/λ)^{−1}]^{−1} tr(A) − (λ_u λ2²/(λλ1 + λλ2 + λ1λ2)²) tr(D(I + kF)^{−1}D′), and k is defined similarly
as in Theorem 4.5. The trace term can again be interpreted as before, which implies that the reduction in source class
complexity is proportional to the similarity between source labeled samples and target unlabeled samples.
5 Experiments
We follow experimental setups similar to [1] but report our empirical results for the task of sentiment classification
using the SENTIMENT data provided by [15]. The task of sentiment classification is a binary classification task which
corresponds to classifying a review as positive or negative for user reviews of eight product types (apparel, books,
DVD, electronics, kitchen, music, video, and other) collected from amazon.com. We quantify the domain divergences
in terms of the A-distance [16] which is computed [17] from finite samples of source and target domain using the
proxy A-distance [16]. For our experiments, we consider the following domain-pairs: (a) DVD→BOOKS (proxy
A-distance = 0.7616) and (b) KITCHEN→APPAREL (proxy A-distance = 0.0459). As in [1], we use an averaged
perceptron classifier from the Megam framework (implementation due to [18]) for all the aforementioned tasks. The
training sample size varies from 1k to 16k. In all cases, the amount of unlabeled target data is equal to the total amount
of labeled source and target data.
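The proxy A-distance can be computed by training a classifier to separate source samples from target samples and rescaling its held-out error. The sketch below (ours) follows the common recipe d_A = 2(1 − 2ε); the exact computation in [17] may differ in details:

    import numpy as np
    from sklearn.svm import LinearSVC

    def proxy_a_distance(Xs, Xt, seed=0):
        X = np.vstack([Xs, Xt])
        y = np.concatenate([np.zeros(len(Xs)), np.ones(len(Xt))])
        idx = np.random.default_rng(seed).permutation(len(X))
        half = len(X) // 2
        clf = LinearSVC().fit(X[idx[:half]], y[idx[:half]])
        err = 1.0 - clf.score(X[idx[half:]], y[idx[half:]])   # domain classifier error
        return 2.0 * (1.0 - 2.0 * err)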
We compare the empirical performance of EA++ with a few other baselines, namely, (a) SourceOnly (classifier
trained on source labeled samples), (b) TargetOnly-Full (classifier trained on the same number of target labeled
samples as the number of source labeled samples in SourceOnly), (c) TargetOnly (classifier trained on a small
amount of target labeled samples, roughly one-tenth of the amount of source labeled samples in SourceOnly), (d)
All (classifier trained on the combined labeled samples of SourceOnly and TargetOnly), (e) EA (classifier trained
in the augmented feature space on the same input training set as All), and (f) EA++ (classifier trained in the augmented feature
space on the same input training set as EA plus an equal amount of unlabeled target data). All these approaches were
tested on the entire amount of available target test data.
Figure 2 presents the learning curves for (a) SourceOnly, (b) TargetOnly-Full, (c) TargetOnly, (d) All,
(e) EA, and (f) EA++ (EA with unlabeled data). The x-axis represents the number of training samples on which the
Figure 2: Test accuracy of SourceOnly, TargetOnly-Full, TargetOnly, All, EA, and EA++ (with unlabeled
data) for (a) DVD→BOOKS (proxy A-distance = 0.7616), (b) KITCHEN→APPAREL (proxy A-distance = 0.0459).
predictor has been trained. At this point, we note that the number of training samples varies depending on the particular approach being used. For SourceOnly, TargetOnly-Full and TargetOnly, it is just the corresponding
number of labeled source or target samples, respectively. For All and EA, it is the summation of labeled source and
target samples. For EA++, the x-value plotted denotes the amount of unlabeled target data used (in addition to an
equal amount of source+target labeled data, as in All or EA). We plot this number for EA++ just to compare its
improvement over EA when using an additional (and equal) amount of unlabeled target data. This accounts for the
different x values plotted for the different curves. In all cases, the y-axis denotes the error rate.
As can be seen, in both cases EA++ outperforms EasyAdapt. For DVD→BOOKS, the domains are far apart,
as denoted by a high proxy A-distance. Hence, TargetOnly-Full achieves the best performance and EA++ almost
catches up for large amounts of training data. For different numbers of sample points, EA++ gives relative improvements in the range of 4.36%–9.14% compared to EA. The domains KITCHEN and APPAREL can be considered
to be reasonably close due to their low domain divergence. Hence, this domain pair is more amenable to domain adaptation, as demonstrated by the fact that the other approaches (SourceOnly, TargetOnly, All) perform better
than, or at least as well as, TargetOnly-Full. However, as earlier, EA++ once again outperforms all these approaches,
including TargetOnly-Full. Due to the closeness of the two domains, the additional unlabeled data in EA++ helps
it outperform TargetOnly-Full. At this point, we also note that EA performs poorly in some cases, which
corroborates prior experimental results [1]. For this dataset, EA++ yields relative improvements in the range of
14.08%–39.29% over EA for the different numbers of sample points experimented with. Similar trends were observed
for other tasks and datasets (refer to Figure 3 of [2]).
6 Conclusions
We proposed a semi-supervised extension to an existing domain adaptation technique (EA). Our approach, EA++,
leverages unlabeled data to improve the performance of EA. With this extension, EA++ applies to both fully supervised
and semi-supervised domain adaptation settings. We have formulated EA and EA++ in terms of co-regularization, an
idea that originated in the context of multiview learning [13, 19]. Our proposed formulation also bears resemblance
to existing work [20] in semi-supervised (SSL) literature which has been studied extensively in [21, 22, 23]. The
difference being, while in SSL one would try to make the two views (on unlabeled data) agree, in domain adaptation
the aim is to make the two hypotheses in source and target agree. Using our formulation, we have presented theoretical
analysis of the superior performance of EA++ as compared to EA. Our empirical results further confirm the theoretical
findings. EA++ can also be extended to the multiple source settings. If we have k sources and a single target domain
then we can introduce a co-regularizer for each source-target pair. Due to space constraints, we defer details to a full
version.
References
[1] Hal Daumé III. Frustratingly easy domain adaptation. In ACL'07, pages 256–263, Prague, Czech Republic, June 2007.
[2] Hal Daumé III, Abhishek Kumar, and Avishek Saha. Frustratingly easy semi-supervised domain adaptation. In ACL 2010
Workshop on Domain Adaptation for Natural Language Processing (DANLP), pages 53–59, Uppsala, Sweden, July 2010.
[3] Theodoros Evgeniou and Massimiliano Pontil. Regularized multi-task learning. In KDD'04, pages 109–117, Seattle, WA,
USA, August 2004.
[4] Mark Dredze, Alex Kulesza, and Koby Crammer. Multi-domain learning by confidence-weighted parameter combination.
Machine Learning, 79(1-2):123–149, 2010.
[5] Andrew Arnold and William W. Cohen. Intra-document structural frequency features for semi-supervised domain adaptation.
In CIKM'08, pages 1291–1300, Napa Valley, California, USA, October 2008.
[6] John Blitzer, Ryan McDonald, and Fernando Pereira. Domain adaptation with structural correspondence learning. In
EMNLP'06, pages 120–128, Sydney, Australia, July 2006.
[7] Gokhan Tur. Co-adaptation: Adaptive co-training for semi-supervised learning. In ICASSP'09, pages 3721–3724, Taipei,
Taiwan, April 2009.
[8] Wenyuan Dai, Gui-Rong Xue, Qiang Yang, and Yong Yu. Transferring Naive Bayes classifiers for text classification. In
AAAI'07, pages 540–545, Vancouver, B.C., July 2007.
[9] Dikan Xing, Wenyuan Dai, Gui-Rong Xue, and Yong Yu. Bridged refinement for transfer learning. In PKDD'07, pages
324–335, Warsaw, Poland, September 2007.
[10] Lixin Duan, Ivor W. Tsang, Dong Xu, and Tat-Seng Chua. Domain adaptation from multiple sources via auxiliary classifiers.
In ICML'09, pages 289–296, Montreal, Quebec, June 2009.
[11] Ming-Wei Chang, Michael Connor, and Dan Roth. The necessity of combining adaptation methods. In EMNLP'10, pages
767–777, Cambridge, MA, October 2010.
[12] Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. A co-regularization approach to semi-supervised learning with multiple
views. In ICML Workshop on Learning with Multiple Views, pages 824–831, Bonn, Germany, August 2005.
[13] D. S. Rosenberg and P. L. Bartlett. The Rademacher complexity of co-regularized kernel classes. In AISTATS'07, pages
396–403, San Juan, Puerto Rico, March 2007.
[14] John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman. Learning bounds for domain adaptation.
In NIPS'07, pages 129–136, Vancouver, B.C., December 2007.
[15] John Blitzer, Mark Dredze, and Fernando Pereira. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for
sentiment classification. In ACL'07, pages 440–447, Prague, Czech Republic, June 2007.
[16] Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. Analysis of representations for domain adaptation. In
NIPS'06, pages 137–144, Vancouver, B.C., December 2006.
[17] Piyush Rai, Avishek Saha, Hal Daumé III, and Suresh Venkatasubramanian. Domain adaptation meets active learning. In
NAACL 2010 Workshop on Active Learning for NLP (ALNLP), pages 27–32, Los Angeles, USA, June 2010.
[18] Hal Daumé III. Notes on CG and LM-BFGS optimization of logistic regression. August 2004.
[19] Vikas Sindhwani and David S. Rosenberg. An RKHS for multi-view learning and manifold co-regularization. In ICML'08,
pages 976–983, Helsinki, Finland, June 2008.
[20] Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In COLT'98, pages 92–100, New
York, NY, USA, July 1998. ACM.
[21] Maria-Florina Balcan and Avrim Blum. A PAC-style model for learning from labeled and unlabeled data. In COLT'05, pages
111–126, Bertinoro, Italy, June 2005.
[22] Maria-Florina Balcan and Avrim Blum. A discriminative model for semi-supervised learning. J. ACM, 57(3), 2010.
[23] Karthik Sridharan and Sham M. Kakade. An information theoretic framework for multi-view learning. In COLT'08, pages
403–414, Helsinki, Finland, June 2008.
3,324 | 401 | Can neural networks do better than the
Vapnik-Chervonenkis bounds?
David Cohn
Dept. of Compo Sci. & Eng.
University of Washington
Seattle, WA 98195
Gerald Tesauro
IBM Watson Research Center
P.O. Box 704
Yorktown Heights, NY 10598
Abstract
We describe a series of careful numerical experiments which measure the
average generalization capability of neural networks trained on a variety of
simple functions. These experiments are designed to test whether average
generalization performance can surpass the worst-case bounds obtained
from formal learning theory using the Vapnik-Chervonenkis dimension
(Blumer et al., 1989). We indeed find that, in some cases, the average
generalization is significantly better than the VC bound: the approach to
perfect performance is exponential in the number of examples m, rather
than the 11m result of the bound. In other cases, we do find the 11m
behavior of the VC bound, and in these cases, the numerical prefactor is
closely related to prefactor contained in the bound .
1
INTRODUCTION
Probably the most important issue in the study of supervised learning procedures is
the issue of generalization, i.e., how well the learning system can perform on inputs
not seen during training. Significant progress in the understanding of generalization
was made in the last few years using a concept known as the Vapnik-Chervonenkis
dimension, or VC-dimension. The VC-dimension provides a basis for a number of
powerful theorems which establish worst-case bounds on the ability of arbitrary
learning systems to generalize (Blumer et al., 1989; Haussler et al., 1988). These
theorems state that under certain broad conditions, the generalization error ε of
a learning system with VC-dimension D trained on m random examples of an
arbitrary function will with high confidence be no worse than a bound roughly of
order D/m. The basic requirements for the theorems to hold are that the training
and testing examples are generated from the same probability distribution, and that
the learning system is able to correctly classify the training examples.
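One standard statement of such a bound, paraphrased up to constants (this particular form is ours, not a quotation of the cited theorems), is that with probability at least 1 − δ,

    ε ≤ O( (D log(m/D) + log(1/δ)) / m ).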
Unfortunately, since these theorems do not calculate the expected generalization
error but instead only bound it, the question is left open whether expected error
might lie significantly below the bound. Empirical results of (Ahmad and Tesauro,
1988) indicate that in at least one case, average error was in fact significantly below
the VC bound: the error decreased exponentially with the number of examples,
t "'" exp( -m/mo}, rather than the l/m, result of the bound. Also, recent statistical
learning theories (Tishby et al., 1989; Schwartz et al., 1990), which provide an
analytic means of calculating expected performance, indicate that an exponential
approach to perfect performance could be obtained if the spectrum of possible
network generalizations has a "gap" near perfect performance.
We have addressed the issue of whether average performance can surpass worst-case performance through numerical experiments which measure the average generalization of simple neural networks trained on a variety of simple functions. Our
experiments extend the work of (Ahmad and Tesauro, 1988). They test both the
relevance of the worst-case VC bounds to average generalization performance, and
the predictions of exponential behavior due to a gap in the generalization spectrum.
2
EXPERIMENTAL METHODOLOGY
Two pairs of N-dimensional classification tasks were examined in our experiments:
two linearly separable functions ("majority" and "real-valued threshold"), and
two higher-order functions ("majority-XOR" and "threshold-XOR"). Majority is
a Boolean predicate in which the output is 1 if and only if more than half of the
inputs are 1. The real-valued threshold function is a natural extension of majority to the continuous space [0,1]^N: the output is 1 if and only if the sum of
the N real-valued inputs is greater than N/2. The majority-XOR function is a
Boolean function where the output is 1 if and only if the N'th input disagrees
with the majority computed by the first N − 1 inputs. This is a natural extension of majority which retains many of its symmetry properties, e.g., the positive
and negative examples are equally numerous and somewhat uniformly distributed.
Similarly, threshold-XOR is a natural extension of the real-valued threshold function
which maps [0,1]^{N−1} × {0,1} ↦ {0,1}. Here, the output is 1 if and only if the
N'th input, which is binary, disagrees with the threshold function computed by the
first N − 1 real-valued inputs. Networks trained on these tasks used sigmoidal units
and had standard feed-forward fully-connected structures with at most a single hidden layer. The training algorithm was standard back-propagation with momentum
(Rumelhart et al., 1986).
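For reference, the four target functions are simple to generate; the sketch below (ours, in Python/NumPy, with N = 50 as in the experiments of Section 3) makes the definitions concrete:

    import numpy as np
    rng = np.random.default_rng(0)

    def majority(n, N=50):
        X = rng.integers(0, 2, size=(n, N))
        return X, (X.sum(axis=1) > N / 2).astype(int)

    def real_threshold(n, N=50):
        X = rng.random(size=(n, N))
        return X, (X.sum(axis=1) > N / 2).astype(int)

    def majority_xor(n, N=50):
        X = rng.integers(0, 2, size=(n, N))
        maj = (X[:, :-1].sum(axis=1) > (N - 1) / 2).astype(int)
        return X, (X[:, -1] != maj).astype(int)

    def threshold_xor(n, N=50):
        X = rng.random(size=(n, N))
        X[:, -1] = rng.integers(0, 2, size=n)        # the N'th input is binary
        thr = (X[:, :-1].sum(axis=1) > (N - 1) / 2).astype(int)
        return X, (X[:, -1].astype(int) != thr).astype(int)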
A simulator run consisted of training a randomly initialized network on a training
set of m examples of the target function, chosen uniformly from the input space.
Networks were trained until all examples were classified within a specified margin
of the correct classification. Runs that failed to converge within a cutoff time of
50,000 epochs were discarded. The generalization error of the resulting network
was then estimated by testing on a set of 2048 novel examples independently drawn
from the same distribution. The average generalization error for a given value of
m was typically computed by averaging the results of 10-40 simulator runs, each
with a different set of training patterns, test patterns, and random initial weights.
A wide range of values of m was examined in this way in each experiment.
2.1
SOURCES OF ERROR
Our experiments were carefully controlled for a number of potential sources of error.
Random errors due to the particular choice of random training patterns, test patterns, and initial weights were reduced to low levels by performing a large number
of runs and varying each of these in each run.
\Ve have also looked for systematic errors due to the particular values of learning rate and momentum constants, initial random weight scale, frequency of weight
changes, training threshold, and training cutoff time. \Vithin wide ranges of para.meter values, we find no significant dependence of the generalization performance on
the particular choice of any of these parameters except k, the frequency of weight
changes. (However, the parameter values can affect the rate of convergence or
probability of convergence on the training set.) Variations in k appear to alter the
numerical coefficients of the learning curve, but not the overall functional form.
Another potential concern is the possibility of overtraining: even though the training
set error should decrease monotonically with training time, the test set error might
reach a minimum and then increase with further training. We have monitored
hundreds of simulations of both the linearly separable and higher-order tasks, and
find no significant overtraining in either case.
Other aspects of the experimental protocol which could affect measured results
include order of pattern presentation, size of test set, testing threshold, and choice
of input representation. We find that presenting the patterns in a random order
as opposed to a fixed order improves the probability of convergence, but does not
alter the average generalization of runs that do converge. Changing the criterion by
which a test pattern is judged correct alters the numerical prefactor of the learning
curve but not the functional form. Using test sets of 4096 patterns instead of
2048 patterns has no significant effect on measured generalization values. Finally,
convergence is faster with a [-1,1] coding scheme than with a [0,1] scheme, and
generalization is improved, but only by numerical constants.
2.2
ANALYSIS OF DATA
To determine the functional dependence of measured generalization error e on the
number of examples m, we apply the standard curve-fitting technique of performing
linear regression on the appropriately transformed data. Thus we can look for an
exponential law e = A e^(-m/m0) by plotting log(e) vs. m and observing whether the
transformed data lies on a straight line. We also look for a polynomial law of the
form e = B/(m + a) by plotting 1/e vs. m. We have not attempted to fit to a more
general polynomial law because this is less reliable, and because theory predicts a
1/m law.
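As a concrete illustration, the two fits reduce to ordinary least squares on transformed coordinates. The sketch below (our own code, not the authors') fits both models to a set of (m, e) measurements and returns the linear correlation coefficient r^2 for each.

import numpy as np

def fit_models(m, err):
    """Fit e = A*exp(-m/m0) via log(e) vs. m, and e = B/(m+a) via 1/e vs. m.
    Returns (r2_exponential, r2_polynomial)."""
    m = np.asarray(m, dtype=float)
    err = np.asarray(err, dtype=float)

    def r2(x, y):
        # Squared linear correlation coefficient of a least-squares line fit.
        return np.corrcoef(x, y)[0, 1] ** 2

    r2_exp = r2(m, np.log(err))   # straight line here supports an exponential law
    r2_poly = r2(m, 1.0 / err)    # straight line here supports a 1/m law
    return r2_exp, r2_poly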
By plotting each experimental curve in both forms, log(e) vs. m and 1/e vs. m, we
can determine which model provides a better fit to the data. This can be done both
visually and more quantitatively by computing the linear correlation coefficient r^2
in a linear least-squares fit. To the extent that one of the curves has a higher value
of r^2 than the other one, we can say that it provides a better model of the data
than the other functional form.
We have also developed the following technique to assess absolute goodness-of-fit. We generate a set of artificial data points by adding noise equivalent to the
error bars on the original data points to the best-fit curve obtained from the linear
regression. Regression on the artificial data set yields a value of r^2, and repeating
this process many times gives a distribution of r^2 values which should approximate
the distribution expected with the amount of noise in our data. By comparing the
value r^2 from our original data to this generated distribution, we can estimate the
probability that our functional model would produce data like that we observed.
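This goodness-of-fit test is essentially a parametric bootstrap. A minimal sketch of the procedure, under the assumption (ours, not stated explicitly above) that each error bar gives one standard deviation of Gaussian noise at that point:

import numpy as np

def gof_probability(m, fit_curve, err_bars, r2_observed, transform, n_trials=1000):
    """Fraction of synthetic data sets whose r^2 falls below the observed r^2.
    fit_curve: best-fit generalization error at each m; err_bars: per-point noise scale;
    transform: e.g. np.log for the exponential model, (lambda e: 1/e) for the 1/m model."""
    rng = np.random.default_rng(0)
    count = 0
    for _ in range(n_trials):
        fake = fit_curve + rng.normal(0.0, err_bars)     # one artificial data set
        r2 = np.corrcoef(m, transform(fake))[0, 1] ** 2  # regression on transformed data
        count += (r2 < r2_observed)
    return count / n_trials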
3 EXPERIMENTS ON LINEARLY-SEPARABLE FUNCTIONS
Networks with 50 inputs and no hidden units were trained on majority and real-valued threshold functions, with training set sizes ranging from m = 40 to m = 500
in increments of 20 patterns. Twenty networks were trained for each value of m. A
total of 3.8% of the binary majority and 7.7% of the real-valued threshold simulation
runs failed to converge and were discarded.
The data obtained from the binary majority and real-valued threshold problems
was tested for fit to the exponential and polynomial functional models, as shown in
Figure 1. The binary majority data had a correlation coefficient of r^2 = 0.982 in
the exponential fit; this was better than 40% of the "artificial" data sets described
previously. However, the polynomial fit only gave a value of r^2 = 0.966, which
was better than only 6% of the artificial data sets. We conclude that the binary
majority data is consistent with an exponential law and not with a 1/m law.
The real-valued threshold data, however, behaved in the opposite manner. The
exponential fit gave a value of r^2 = 0.943, which was better than only 14% of the
artificial data sets. However, the polynomial fit gave a value of r^2 = 0.996, which
was better than 40% of the artificial data sets. We conclude that the real-valued
threshold data closely approximates a 1/m law and was not likely to have been
generated by an exponential law.
4 EXPERIMENTS ON HIGHER-ORDER FUNCTIONS
For the majority-XOR and threshold-XOR problems, we used N = 26 input units:
25 for the "majority" (or threshold) and a single "XOR" unit. In theory, these
problems can be solved with only two hidden units, but in practice, at least three
hidden units were needed for reliable convergence. Training set sizes ranging from
m = 40 to m = 1000 in increments of 20 were studied for both tasks. At each
value of m, 40 simulations were performed. Of the 1960 simulations, 1702 of the
binary and 1840 of the real-valued runs converged. No runs in either case achieved
a perfect score on the test data.
With both sets of runs, there was a visible change in the shape of the generalization
curve when the training set size reached 200 samples. We are interested primarily
[Figure 1 plots: generalization error vs. training set size for 50-input binary majority and 50-input real-valued threshold, each shown on log(e) and 1/e axes.]
Figure 1: Observed generalization curves for binary majority and real-valued threshold, and their fit to the exponential and polynomial models. Error bars denote 95%
confidence intervals for the mean.
in the asymptotic behavior of these curves, so we restricted our analysis to sample
sizes 200 and above. As with the single-layer problems, we measured goodness of
fit to appropriately linearized forms of the exponential and polynomial curves in
question. Results are plotted in Figure 2.
It appears that the generalization curve of the threshold-XOR problem is not likely
to have been generated by an exponential, but is a plausible 1/m polynomial. The
correlation coefficient in the exponential fit is only r^2 = 0.959 (better than only
10% of the artificial data sets), but in the polynomial fit is r^2 = 0.997 (better than
32% of the artificial data sets).
The binary majority-XOR data, however, appears both visually and from the relative r^2 values to fit the exponential model better than the polynomial model. In the
exponential fit, r^2 = 0.994, while in the polynomial fit, r^2 = 0.940. However, we are
somewhat cautious because the artificial data test is inconclusive. The exponential
fit is better than 40% of artificial data sets, but the polynomial fit is better than
60% of artificial data sets. Also, there appears to be a small component of the curve
that is slower than a pure exponential.
5 COMPARISON TO THEORY
Figure 3 plots our data for both the first-order and higher-order tasks compared to
the theoretical error bounds of (Blumer et al., 1989) and (Haussler et al., 1988). In
the higher-order case we have used the total number of weights as an estimate of the
VC-dimension, following (Baum and Haussler, 1989). (Even with this low estimate,
the bound of (Blumer et al., 1989) lies off the scale.) All of our experimental
curves fall below both bounds, and in each case the binary task does asymptotically
better than the corresponding real-valued task. One should note that the bound in
[Figure 2 plots: generalization error vs. training set size for 26-input majority-XOR and 26-input threshold-XOR, each shown on log(e) and 1/e axes.]
Figure 2: Generalization curves for 26-3-1 nets trained on majority-XOR and
threshold-XOR, and their fit to the exponential and polynomial models.
(Haussler et al., 1988) fits the real-valued data to within a small numerical constant.
However, strictly speaking it may not apply to our experiments because it is for
Bayes-optimal learning algorithms, and we do not know whether back-propagation
is Bayes-optimal.
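For orientation, the two bound curves can be sketched numerically. The forms below are commonly quoted versions of the cited results; the exact statements and constants plotted in Figure 3 may differ, so treat this strictly as an illustration under our assumptions.

import numpy as np

def haussler_error_bound(d, m):
    # A d/m-style expected-error bound in the spirit of Haussler et al.;
    # the constant 2 is our assumption, not taken from the figure.
    return 2.0 * d / m

def blumer_error_bound(d, m, delta=0.05):
    # Smallest eps whose sufficient sample size, in one standard form of the
    # Blumer et al. (1989) result, is at most m:
    #   m >= max((4/eps) * log2(2/delta), (8*d/eps) * log2(13/eps))
    eps = np.logspace(-6, 0, 10000)
    need = np.maximum((4 / eps) * np.log2(2 / delta),
                      (8 * d / eps) * np.log2(13 / eps))
    feasible = eps[need <= m]
    return feasible.min() if feasible.size else 1.0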
6 CONCLUSIONS
We have seen that two problems using strict binary inputs (majority and majority-XOR) exhibited distinctly exponential generalization with increasing training set
size. This indicates that there exists a class of problems that is asymptotically
much easier to learn than others of the same VC-dimension. This is not only of
theoretical interest, but it also has potential bearing on what kinds of large-scale
applications might be tractable with network learning methods. On the other hand,
merely by making the inputs real instead of binary, we found average error curves
lying close to the theoretical bounds. This indicates that the worst-case bounds
may be more relevant to expected performance than has been previously realized.
It is interesting that the statistical theories of (Tishby et al., 1989; Schwartz et
al., 1990) predict the two classes of behavior seen in our experiments. Our future
research will focus on whether or not there is a "gap" as suggested by these theories.
Our preliminary findings for majority suggest that there is in fact no gap, except
possibly an "inductive gap" in which the learning process for some reason tends
to avoid the near-perfect solutions. If such an inductive gap does not exist, then
either the theory does not apply to back-propagation, or it must have some other
mechanism to generate the exponential behavior.
[Figure 3 plots: generalization error vs. training set size, comparing (a) the Blumer et al. and Haussler et al. bounds against the real-valued threshold and binary majority curves, and (b) the Haussler et al. bound against the threshold-XOR and majority-XOR curves.]
Figure 3: (a) The real-valued threshold problem performs roughly within a constant
factor of the upper bounds predicted in (Blumer et al., 1989) and (Haussler et al.,
1988), while the binary majority problem performs asymptotically better. (b) The
threshold-XOR performs roughly within a constant factor of the predicted bound,
while majority-XOR performs asymptotically better.
References
S. Ahmad and G. Tesauro. (1988) Scaling and generalization in neural networks: a
case study. In D. S. Touretzky et al., eds., Proceedings of the 1988 Connectionist
Models Summer School, 3-10, Morgan Kaufmann.
E. B. Baum and D. Haussler. (1989) What size net gives valid generalization?
Neural Computation 1(1):151-160.
A. Blumer, A. Ehrenfeucht, D. Haussler, and M. Warmuth. (1989) Learnability
and the Vapnik-Chervonenkis dimension. JACM 36(4):929-965.
D. Haussler, N. Littlestone, and M. Warmuth. (1990) Predicting {0,1}-Functions
on Randomly Drawn Points. Tech Report UCSC-CRL-90-54, Univ. of California
at Santa Cruz, CA.
D. E. Rumelhart, G. E. Hinton and R. J. Williams. (1986) Learning internal representations by error propagation. In Parallel Distributed Processing, 1:318-362, MIT
Press.
D. B. Schwartz, V. K. Samalam, S. A. Solla and J. S. Denker. (1990) Exhaustive
learning. Neural Computation 2:374-385.
N. Tishby, E. Levin and S. A. Solla. (1989) Consistent inference of probabilities in
layered networks: Predictions and generalizations. In IJCNN Proceedings, 2:403-409, IEEE.
Learning Efficient Markov Networks
Vibhav Gogate William Austin Webb Pedro Domingos
Department of Computer Science & Engineering
University of Washington
Seattle, WA 98195. USA
{vgogate,webb,pedrod}@cs.washington.edu
Abstract
We present an algorithm for learning high-treewidth Markov networks where inference is still tractable. This is made possible by exploiting context-specific independence and determinism in the domain. The class of models our algorithm can
learn has the same desirable properties as thin junction trees: polynomial inference,
closed-form weight learning, etc., but is much broader. Our algorithm searches for
a feature that divides the state space into subspaces where the remaining variables
decompose into independent subsets (conditioned on the feature and its negation)
and recurses on each subspace/subset of variables until no useful new features can
be found. We provide probabilistic performance guarantees for our algorithm under the assumption that the maximum feature length is bounded by a constant k
(the treewidth can be much larger) and dependences are of bounded strength. We
also propose a greedy version of the algorithm that, while forgoing these guarantees, is much more efficient. Experiments on a variety of domains show that our
approach outperforms many state-of-the-art Markov network structure learners.
1 Introduction
Markov networks (also known as Markov random fields, etc.) are an attractive class of joint probability models because of their generality and flexibility. However, this generality comes at a cost.
Inference in Markov networks is intractable [25], and approximate inference schemes can be unreliable, and often require much hand-crafting. Weight learning has no closed-form solution, and
requires convex optimization. Computing the gradient for optimization in turn requires inference.
Structure learning, the problem of finding the features of the Markov network, is also intractable
[15], and has weight learning and inference as subroutines.
Intractable inference and weight optimization can be avoided if we restrict ourselves to decomposable
Markov networks [22]. A decomposable model can be expressed as a product of distributions over
the cliques in the graph divided by the product of the distributions of their intersections. An arbitrary
Markov network can be converted into a decomposable one by triangulation (adding edges until every
cycle of length four or more has at least one chord). The resulting structure is called a junction tree.
Goldman [13] proposed a method for learning Markov networks without numeric optimization based
on this idea. Unfortunately, the triangulated network can be exponentially larger than the original one,
limiting the applicability of this method. More recently, a series of papers have proposed methods
for directly learning junction trees of bounded treewidth ([2, 21, 8] etc.). Unfortunately, since the
complexity of inference (and typically of learning) is exponential in the treewidth, only models of
very low treewidth (typically 2 or 3) are feasible in practice, and thin junction trees have not found
wide applicability.
Fortunately, low treewidth is an overly strong condition. Models can have high treewidth and still
allow tractable inference and closed-form weight learning from a reasonable number of samples, by
exploiting context-specific independence [6] and determinism [7]. Both of these result in clique distributions that can be compactly expressed even if the cliques are large. In this paper we propose a
learning algorithm based on this observation. Inference algorithms that exploit context-specific independence and determinism [7, 26, 11] have a common structure: they search for partial assignments
to variables that decompose the remaining variables into independent subsets, and recurse on these
smaller problems until trivial ones are obtained. Our algorithm uses a similar strategy, but at learning
time: it recursively attempts to find features (i.e., partial variable assignments) that decompose the
problem into smaller (nearly) independent subproblems, and stops when the data does not warrant
further decomposition.
Decomposable models can be expressed as both Markov networks and Bayesian networks, and state-of-the-art Bayesian network learners extensively exploit context-specific independence [9]. However, they typically still learn intractable models. Lowd and Domingos [18] learned tractable high-treewidth Bayesian networks by penalizing inference complexity along with model complexity in
a standard Bayesian network learner. Our approach can learn exponentially more compact models
by exploiting the additional flexibility of Markov networks, where features can overlap in arbitrary
ways. It can greatly speed up learning relative to standard Markov network learners because it avoids
weight optimization and inference, while Lowd and Domingos' algorithm is much slower than standard Bayesian network learning (where, given complete data, weight optimization and inference are
already unnecessary). Perhaps most significantly, it is also more fundamental in that it is based on
identifying what makes inference tractable and directly exploiting it, potentially leading to a much
better accuracy/inference cost trade-off. As a result, our approach has formal guarantees, which
Lowd and Domingos' algorithm lacks.
We provide both theoretical guarantees and empirical evidence for our approach. First, we provide
probabilistic performance guarantees for our algorithm by making certain assumptions about the
underlying distribution. These results rely on exhaustive search over features up to length k. (The
treewidth of the resulting model can still be as large as the number of variables.) We then propose
greedy heuristics for more efficient learning, and show empirically that the Markov networks learned
in this way are more accurate than thin junction trees as well as networks learned using the algorithm
of Della Pietra et al. [12] and L1 regularization [16, 24], while allowing much faster inference (which
in practice translates into more accurate query answers).
2 Background: Junction Trees and Feature Graphs
We denote sets by capital letters and members of a set by small letters. A double capital letter denotes
a set of subsets. We assume that all random variables have binary domains {0,1} (or {false,true}).
We make this assumption for simplicity of exposition; our analysis extends trivially to multi-valued
variables.
We begin with some necessary definitions. An atomic feature or literal is an assignment of a value to
a variable. x denotes the assignment x = 1 while ¬x denotes x = 0 (note that the distinction between
an atomic feature x and the variable which is also denoted by x is usually clear from context). A
feature, denoted by F, defined over a subset of variables V(F) is formed by conjoining atomic
features or literals, e.g., x1 ∧ ¬x2 is a feature formed by conjoining two atomic features x1 and ¬x2.
Given an assignment, denoted by V̄(F), to all variables of F, F is said to be satisfied or assigned the
value 1 iff for all literals l ∈ F, it also holds that l ∈ V̄(F). A feature that is not satisfied is said to
be assigned the value 0. Often, given a feature F, we will abuse notation and write V̄(F) as F.
A Markov network or a log-linear model is defined as a set of pairs (Fi , wi ) where Fi is a feature
and wi is its weight. It represents the following joint probability distribution:
P(V) = (1/Z) exp( Σ_i w_i · F_i(V_{V(F_i)}) )    (1)

where V is a truth-assignment to all variables V = ∪_i V(F_i), F_i(V_{V(F_i)}) = 1 if V_{V(F_i)} satisfies
F_i, and 0 otherwise, and Z is the normalization constant, often called the partition function.
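As a quick illustration of Eq. (1), the following sketch (our own, with features represented as dicts mapping variable names to required values) computes the unnormalized probability of an assignment; Z would be obtained by summing this quantity over all 2^|V| assignments, which is exactly the intractability the paper sets out to avoid.

import math
from itertools import product

def feature_value(feature, assignment):
    # feature: dict var -> required value; returns 1 iff the assignment satisfies it.
    return int(all(assignment[v] == val for v, val in feature.items()))

def unnormalized_prob(features_weights, assignment):
    # features_weights: list of (feature, weight) pairs, as in a log-linear model.
    return math.exp(sum(w * feature_value(f, assignment)
                        for f, w in features_weights))

def partition_function(features_weights, variables):
    # Brute-force Z: exponential in |variables|; feasible only for tiny models.
    return sum(unnormalized_prob(features_weights, dict(zip(variables, vals)))
               for vals in product([0, 1], repeat=len(variables)))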
Next, we define junction trees. Let C = {C1, ..., Cm} be a collection of subsets of V such that:
(a) ∪_{i=1}^{m} Ci = V and (b) for each feature Fj, there exists a Ci ∈ C such that all variables of Fj are
contained in Ci. Each Ci is referred to as a clique.
[Figure 1 content: (a) a feature tree with root F-node x1 ∧ x2; under its 0-branch, child F-nodes x3 ∧ x4 and x5 ∧ x6; under its 1-branch, child F-nodes x3 ∧ x5 and x4 ∧ x6. (b) the corresponding Markov network features and weights:
¬(x1 ∧ x2) ∧ ¬(x3 ∧ x4): w1;  ¬(x1 ∧ x2) ∧ (x3 ∧ x4): w2;
¬(x1 ∧ x2) ∧ ¬(x5 ∧ x6): w3;  ¬(x1 ∧ x2) ∧ (x5 ∧ x6): w4;
(x1 ∧ x2) ∧ ¬(x3 ∧ x5): w5;   (x1 ∧ x2) ∧ (x3 ∧ x5): w6;
(x1 ∧ x2) ∧ ¬(x4 ∧ x6): w7;   (x1 ∧ x2) ∧ (x4 ∧ x6): w8.
(c) a junction tree with cliques {x1,x2,x3,x4,x5} and {x1,x2,x4,x5,x6} joined by the separator {x1,x2,x4,x5}.]
Figure 1: Figure showing (a) a feature tree, (b) the Markov network corresponding to the leaf features of (a)
and (c) the (optimal) junction tree for the Markov network in (b). A leaf feature is formed by conjoining the
feature assignments along the path from the leaf to the root. For example, the feature corresponding to the
rightmost leaf node is: (x1 ∧ x2) ∧ (x4 ∧ x6). For the feature tree, ovals denote F-nodes and rectangles denote
A-nodes. For the junction tree, ovals denote cliques and rectangles denote separators. Notice that each F-node
in the feature tree has a feature of size bounded by 2 while the maximum clique in the junction tree is of size 5.
Moreover notice that the A-node corresponding to (x1 ∧ x2) = 0 induces a different variable decomposition as
compared with the A-node corresponding to (x1 ∧ x2) = 1.
DEFINITION 1. A tree T = (C, E) is a junction tree iff it satisfies the running intersection property,
i.e., ∀Ci, Cj, Ck ∈ C, i ≠ j ≠ k, such that Ck lies on the unique simple path between Ci and Cj,
x ∈ Ci ∩ Cj ⇒ x ∈ Ck. The treewidth of T, denoted by w, is the size of the largest clique in C minus
one. The set Sij ⊆ Ci ∩ Cj is referred to as the separator corresponding to the edge (i - j) ∈ E.
The space complexity of representing a junction tree is O(Σ_{i=1}^{m} 2^{|Ci|}) ≤ O(n · 2^{w+1}).
Our goal is to exploit context-specific and deterministic dependencies that are not explicitly represented in junction trees. Representations that do this include arithmetic circuits [10] and AND/OR
graphs [11]. We will use a more convenient form for our purposes, which we call feature graphs. Inference in feature graphs is linear in the size of the graph. For readers familiar with AND/OR graphs
[11], a feature tree (or graph) is simply an AND/OR tree (or graph) with OR nodes corresponding to
features and AND nodes corresponding to feature assignments.
DEFINITION 2. A feature tree denoted by ST is a rooted tree that consists of alternating levels of
feature nodes or F-nodes and feature assignment nodes or A-nodes. Each F-node F is labeled by
a feature F and has two child A-nodes labeled by 0 and 1, corresponding to the true and the false
assignments of F respectively. Each A-node A has k ≥ 0 child F-nodes that satisfy the following
requirement. Let {F_{A,1}, ..., F_{A,k}} be the set of child F-nodes of A and let D(F_{A,i}) be the union
of all variables involved in the features associated with F_{A,i} and all its descendants; then ∀i, j ∈
{1, ..., k}, i ≠ j, D(F_{A,i}) ∩ D(F_{A,j}) = ∅.
Semantically, each F-node represents conditioning while each A-node represents partitioning of the
variables into conditionally-independent subsets. The space complexity of representing a feature tree
is the number of its A-nodes. A feature graph denoted by SG is formed by merging identical subtrees of a feature tree ST . It is easy to show that a feature graph generalizes a junction tree and in fact
any model that can be represented using a junction tree having treewidth k can also be represented
by a feature graph that uses only O(n · 2^k) space [11]. In some cases, a feature graph can be
exponentially smaller than a junction tree because it can capture context-specific independence [6].
A feature tree can be easily converted to a Markov network. The corresponding Markov network has
one feature for each leaf node, formed by conjoining all feature assignments from the root to the leaf.
The following example demonstrates the relationship between a feature tree, a Markov network and
a junction tree.
EXAMPLE 1. Figure 1(a) shows a feature tree. Figure 1(b) shows the Markov network corresponding
to the leaf features of the feature tree given in Figure 1(a). Figure 1(c) shows the junction tree
for the Markov network given in 1(b). Notice that because the feature tree uses context-specific
independence, all the F-nodes in the feature tree have a feature of size bounded by 2 while the
maximum clique size of the junction tree is 5. The junction tree given in Figure 1(c) requires 2^5 · 2 =
64 potential values while the feature tree given in Figure 1(a) requires only 10 A-nodes.
In this paper, we will present structure learning algorithms to learn feature trees only. We can do this
without loss of generality, because a feature graph can be constructed by caching information and
merging identical nodes, while learning (constructing) a feature tree.
The distribution represented by a feature tree ST can be defined procedurally as follows (for more
details see [11]). We assume that each leaf A-node A_l is associated with a weight w(A_l). For each
A-node A and each F-node F, we associate a value denoted by v(A) and v(F) respectively. We
compute these values recursively as follows from the leaves to the root. The value of all A-nodes
is initialized to 1 while the value of all F-nodes is initialized to 0. The value of the leaf A-node A_l
is w(A_l) · #(M(A_l)) where #(M(A_l)) is the number of (full) variable assignments that satisfy the
constraint M(A_l) formed by conjoining the feature-assignments from the root to A_l. The value of
an internal F-node is the sum of the values of its child A-nodes. The value of an internal A-node
A_p that has k children is the product of the values of its child F-nodes divided by [#(M(A_p))]^{k-1}
(the division takes care of double counting). Let v(F_r) be the value of the root node, computed as
described above. Let V be an assignment to all variables V of the feature tree; then:

P(V) = v^V(F_r) / v(F_r)

where v^V(F_r) is the value of the root node of ST computed as above in which each leaf A-node is
initialized instead to w(A_l) if V satisfies the constraint formed by conjoining the feature-assignments
from the root to A_l and 0 otherwise.
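A compact sketch of this bottom-up evaluation (our own rendering; the node classes, fields and consistent_with helper are hypothetical). Passing assignment=None computes v(F_r), the normalizer; passing a full assignment computes v^V(F_r). In the single-assignment case we skip the double-counting division, which is vacuous there since the counts never enter the leaf values.

def eval_fnode(fnode, assignment):
    # Value of an F-node: sum over its two A-node children (F = 0 and F = 1).
    return sum(eval_anode(a, assignment) for a in fnode.children)

def eval_anode(anode, assignment):
    # anode.count = #(M(A)): number of full assignments satisfying the
    # conjunction of feature-assignments from the root down to this A-node.
    if assignment is not None:
        if not anode.consistent_with(assignment):
            return 0.0
        if not anode.children:                  # leaf A-node
            return anode.weight
        value = 1.0
        for f in anode.children:                # independent sub-problems
            value *= eval_fnode(f, assignment)
        return value
    # assignment is None: compute the total weight v(F_r)
    if not anode.children:
        return anode.weight * anode.count
    value = 1.0
    for f in anode.children:
        value *= eval_fnode(f, None)
    return value / anode.count ** (len(anode.children) - 1)

def probability(root_fnode, assignment):
    # P(V) = v^V(F_r) / v(F_r), as in the equation above.
    return eval_fnode(root_fnode, assignment) / eval_fnode(root_fnode, None)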
3 Learning Efficient Structure
Algorithm 1: LMIP: Low Mutual Information Partitioning
Input: A variable set V, sample data D, mutual information subroutine I, a feature assignment F, threshold δ,
max set size q.
Output: A set of subsets of V
QF = {Q1, ..., Q|V|}, where Qi = {xi}  // QF is a set of singletons
if the subset of D that satisfies F is too small then
  return QF
else
  for A ⊆ V, |A| ≤ q do
    if min_{X⊂A} I(X, A\X | F) > δ then
      // find the min using Queyranne's algorithm [23] applied to the
      // subset of D satisfying F
      merge all Qi ∈ QF s.t. Qi ∩ A ≠ ∅
  return QF
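A Python sketch of LMIP under our reading of the pseudocode (the mutual-information routine and Queyranne minimizer are assumed to be supplied; satisfies, queyranne_min and MIN_SAMPLES are hypothetical helpers, not from the paper):

from itertools import combinations

MIN_SAMPLES = 50  # "too small" cutoff; an assumed value

def lmip(variables, data, mi, feature_assignment, delta, q):
    """Partition `variables` into (approximately) conditionally independent blocks.
    mi(X, Y, data) estimates I(X; Y | feature_assignment) from the data subset."""
    subset = [row for row in data if satisfies(row, feature_assignment)]
    partition = [{v} for v in variables]          # start with singletons
    if len(subset) < MIN_SAMPLES:                 # too little data: stay fully split
        return partition
    for size in range(2, q + 1):
        for A in combinations(variables, size):
            # queyranne_min returns min over splits X of A of I(X; A\X | F)
            if queyranne_min(set(A), subset, mi) > delta:
                # A is internally dependent: merge every block touching A
                touching = [B for B in partition if B & set(A)]
                merged = set().union(*touching)
                partition = [B for B in partition if not (B & set(A))] + [merged]
    return partition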
We propose a feature-based structure learning algorithm that searches for a feature that divides the
configuration space into subspaces. We will assume that the selected feature or its negation divides
the (remaining) variables into conditionally independent partitions (we don't require this assumption
to be always satisfied, as we explain in the section on greedy heuristics and implementation details).
In practice, the notion of conditional independence is too strong. Therefore, as in previous work
[21, 8], we instead use conditional mutual information, denoted by I, to partition the set of variables.
For this we use the LMIP subroutine (see Algorithm 1), a variant of Chechetka and Guestrin's [8]
LTCI algorithm that outputs a partitioning of V. The runtime guarantees of LMIP follow from those
of LTCI and correctness guarantees follow in an analogous fashion. In general, estimating mutual
information between sets of random variables has time and sample complexity exponential in the
number of variables considered. However, we can be more efficient as we show below. We start with
a required definition.
DEFINITION 3. Given a feature assignment F, a distribution P(V) is (j, ε, F)-coverable if there
exists a set of cliques C such that for every Ci ∈ C, |Ci| ≤ j and I(Ci, V \ Ci | F) ≤ ε. Similarly,
given a feature F, a distribution P(V) is (j, ε, F)-coverable if it is both (j, ε, F = 0)-coverable and
(j, ε, F = 1)-coverable.
LEMMA 1. Let A ⊆ V. Suppose there exists a distribution on V that is (j, ε, F)-coverable and
∀X ⊆ V where |X| ≤ j, it holds that I(X ∩ A, X ∩ (V \ A) | F) ≤ δ. Then, I(A, V \ A | F) ≤
|V|(2ε + δ).
Lemma 1 immediately leads to the following lemma:
LEMMA 2. Let P(V) be a distribution that is (j, ε, F)-coverable. Then LMIP, for q ≥ j, returns a
partitioning of V into disjoint subsets {Q1, ..., Qm} such that ∀i, I(Qi, V \ Qi | F) ≤ |V|(2ε + (j - 1)δ).
We summarize the time and space complexity of LMIP in the following lemma.
LEMMA 3. The time and space complexity of LMIP is O(n^q · n · J_q^{MI}) where J_q^{MI} is the time
complexity of estimating the mutual information between two disjoint sets which have combined
cardinality q.
Note that our actual algorithm will use a subroutine that estimates mutual information from data,
and the time complexity of this routine will be described in the section on sample complexity and
probabilistic performance guarantees.
Algorithm 2: LEM: Learning Efficient Markov Networks
Input: Variable set V, sample data S, mutual information subroutine I, feature length k, set size parameter q,
threshold δ, an A-node A.
Output: A feature tree M
for each feature F of length k constructible for V do
  QF=1 = LMIP(V, S, I, F = 1, δ, q);
  QF=0 = LMIP(V, S, I, F = 0, δ, q)
G = argmax_F (Score(QF=0) + Score(QF=1))  // G is a feature
if |QG=0| = 1 and |QG=1| = 1 then
  Create a feature tree corresponding to all possible assignments to the atomic features. Add this feature tree
  as a child of A;
  return
Create an F-node G with G as its feature, and add it as a child of A;
Create two child A-nodes AG,0 and AG,1 for G;
for i ∈ {0, 1} do
  if |QG=i| > 1 then
    for each component (subset of V) C ∈ QG=i do
      SC = Project_C({X ∈ S : X satisfies G = i})  // SC is the set of
        instantiations of V in S that satisfy G = i, restricted to the
        variables in C
      LEM(C, SC, I, k, q, δ, AG,i)  // Recursion
  else
    Create a feature tree corresponding to all possible assignments to the atomic features. Add this feature
    tree as a child of AG,i.
Next, we present our structure learning algorithm called LEM (see Algorithm 2) which utilizes the
LMIP subroutine to learn feature trees from data. The algorithm has probabilistic performance guarantees if we make some assumptions on the type of the distribution. We present these guarantees in
the next subsection. Algorithm 2 operates as follows. First, it runs the LMIP subroutine on all possible features of length k constructible from V . Recall that given a feature assignment F , the LMIP
sub-routine partitions the variables into (approximately) conditionally independent components. It
then selects a feature G having the highest score. Intuitively, to reduce the inference time and the size
of the model, we should try to balance the trade-off between increasing the number of partitions and
maintaining partition size uniformity (namely, we would want the partition sizes to be almost equal).
The following score function achieves this objective. Let Q = {Q1 , . . . , Qm } be a m-partition of V ,
then the score of Q is given by: Score(Q) = Pm 12|Qi | , where the denominator bounds worst-case
i=1
inference complexity.
After selecting a feature G, the algorithm creates a F-node corresponding to G and two child A-nodes
corresponding to the true and the false assignments of G. Then, corresponding to each element of
QG=1 , it recursively creates a child node for G = 1 (and similarly for G = 0 using QG=0 ). An
interesting special case is when either |QG=1 | = 1 or |QG=0 | = 1 or when both conditions hold.
In this case, no partitioning of V exists for either or both the value assignments of G and therefore
we return a feature tree which has 2|V | leaf A-nodes corresponding to all possible instantiations of
the remaining variables. In practice, because of the exponential dependence on |V |, we would want
5
this condition to hold only when a few variables remain. To obtain guarantees, however, we need
stronger conditions to be satisfied. We describe these guarantees next.
3.1 Theoretical Guarantees
To derive performance guarantees and to guarantee polynomial complexity, we make some fundamental assumptions about the data and the distribution P(V) that we are trying to learn. Intuitively,
if there exists a feature F such that the distribution P(V) at each recursive call to LEM is (j, ε, F)-coverable, then the LMIP sub-routine is guaranteed to return at least a two-way partitioning of V.
Assume that P(V) is such that at each recursive call to LEM, there exists a unique F (such that the
distribution at the recursive call is (j, ε, F)-coverable). Then, LEM is guaranteed to find this unique
feature tree. However, the trouble is that at each step of the recursion, there may exist m > 1 candidate features that satisfy this property. Therefore, we want this coverability requirement to hold
not only recursively but also for each candidate feature (at each recursive call). The following two
definitions and Theorem 1 capture this intuition.
DEFINITION 4. Given a constant δ > 0, we say that a distribution P(V) satisfies the (j, ε, m, G)
assumption if |V| ≤ j or if the following property is satisfied. For every feature F and each assignment F̄ of F such that |V(F)| ≤ m, P(V) is (j, ε, F̄)-coverable, and for any partitioning S1, ..., Sz
of V with z ≥ 2 such that for each i, I(Si, V \ Si | F̄ ∧ G) ≤ |V|(2ε + δ), the distributions
P(S1), ..., P(Sz) each satisfy the (j, ε, m, G ∧ F̄) assumption.
DEFINITION 5. We say that a sequence of pairs (F̄_n, S_n), (F̄_{n-1}, S_{n-1}), ..., (F̄_0, S_0 = V) satisfies the nested context independence condition for (ε, w) if ∀i, Si ⊆ S_{i-1} and the distribution on V conditioned on the satisfaction of G_{i-1} = (F̄_{i-1} ∧ F̄_{i-2} ∧ ... ∧ F̄_0) is such that
I(Si, S_{i-1} \ Si | G_{i-1}) ≤ |S_{i-1}|(2ε + w).
THEOREM 1. Given a distribution P(V) that satisfies the (j, ε, m, true) assumption and a perfect
mutual information oracle I, LEM(V, S, I, k, j + 1, δ) returns a feature tree ST such that each leaf
feature of ST satisfies the nested context independence condition for (ε, j · δ).
3.1.1 Sample Complexity and Probabilistic Performance Guarantees
The foregoing analysis relies on a perfect, deterministic mutual information subroutine I. In reality, all we have is sample data and probabilistic mutual information subroutines. As the following
theorem shows, we can get estimates of I(A, B | F) with accuracy ±Δ and probability 1 - γ with a
number of samples and running time polynomial in 1/Δ and log(1/γ).
LEMMA 4. (Hoffgen [14]) The entropy of a probability distribution over 2k + 2 discrete variables with domain size R can be estimated with accuracy Δ with probability at least 1 - γ using
F(k, R, Δ, γ) = O((R^{4k+4}/Δ²) log²(R^{2k+2}/Δ²) log(R^{2k+2}/γ)) samples and the same amount of time.
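For intuition about how fast this bound grows, it translates directly into code (the hidden constant of the O(·) is omitted; symbol names follow the lemma as reconstructed above):

import math

def hoffgen_samples(k, R, acc, gamma):
    # Sample bound of Lemma 4, up to the hidden constant:
    # (R^(4k+4)/acc^2) * log^2(R^(2k+2)/acc^2) * log(R^(2k+2)/gamma)
    big = R ** (2 * k + 2) / acc ** 2
    return (R ** (4 * k + 4) / acc ** 2) * math.log(big) ** 2 \
           * math.log(R ** (2 * k + 2) / gamma)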
To ensure that our algorithm doesn't run out of data somewhere in the recursion, we have to
strengthen our assumptions, as we define below.
DEFINITION 6. If P(V) satisfies the (j, ε, m, true) assumption and a set of sample data H drawn
from the distribution is such that for any G_{i-1} = F̄_{i-1} ∧ ... ∧ F̄_0, neither F_i = 0 nor F_i = 1 holds in
less than some constant fraction c of the subset of H that satisfies G_{i-1}, then we say that H satisfies
the c-strengthened (j, ε, m, true) assumption.
THEOREM 2 (Probabilistic performance guarantees). Let P(V) be a distribution that satisfies the (j, ε, m, true) assumption and let H be the training data, which satisfies the
c-strengthened (j, ε, m, true) assumption, from which we draw S samples of size
T = (1/c)^D · F((j - 1)/2, |V|, Δ, γ/(n^{m+j+2}(j + 1)³)), where D is the worst-case length of any leaf feature returned
by the algorithm. Given a mutual information subroutine Ĩ implied by Lemma 4, LEM(V, S, Ĩ, m,
j + 1, δ + Δ) returns a feature tree, the leaves of which satisfy the nested context independence
condition for (ε, j · (δ + Δ)), with probability 1 - γ.
4 Greedy Heuristics and Implementation Details
When implemented naively, Algorithm 2 may be computationally infeasible. The most expensive
step in LEM is the LMIP sub-routine, which is called O(n^k) times at each A-node of the feature
graph. Given a max set size of q, LMIP requires running Queyranne's algorithm [23] (complexity
O(q³)) to minimize min_{X⊂A} I(X, V \ X | F) over every |A| ≤ q. Thus, its overall time complexity
is O(n^q · q³). Also, our theoretical analysis assumes access to a mutual information oracle which is
not available in practice, and one has to compute I(X, V \ X | F) from data. In our implementation,
we used Moore and Lee's AD-trees [19] to pre-compute and cache the sufficient statistics (counts)
in advance, so that at each step I(X, V \ X | F) can be computed efficiently. A second improvement
that we considered is due to Chechetka and Guestrin [8]. It is based on the observation that if A is a
subset of a connected component Q ∈ QF, then we don't need to compute min_{X⊂A} I(X, V \ X | F),
because merging all Qi ∈ QF s.t. Qi ∩ A ≠ ∅ would not change QF. In spite of these improvements,
our algorithm is not practical for q > 3 and k > 3. Note however, that low values of q and k are not
entirely problematic for our approach, because we may still be able to induce large treewidth models
by taking advantage of context-specific independence, as depicted in Figure 1.
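A sketch of the conditional mutual information estimate used in place of the oracle; in our implementation the joint counts would come from the cached AD-tree, but here they are tabulated directly from the data subset for clarity (satisfies is the same hypothetical helper as in the LMIP sketch):

import math
from collections import Counter

def cond_mutual_info(data, X, Y, feature_assignment):
    """Empirical I(X; Y | F) over the rows of `data` that satisfy F.
    X, Y are disjoint tuples of variable indices."""
    rows = [r for r in data if satisfies(r, feature_assignment)]
    n = len(rows)
    pxy = Counter((tuple(r[i] for i in X), tuple(r[i] for i in Y)) for r in rows)
    px = Counter(tuple(r[i] for i in X) for r in rows)
    py = Counter(tuple(r[i] for i in Y) for r in rows)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())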
To further improve the performance of our algorithm, we fix q to 3 and use a greedy heuristic to
construct the features. The greedy heuristic is able to split on arbitrarily long features by only calling
LMIP k · n times instead of O(n^k) times, but does not have any guarantees. It starts with a set
of atomic features (i.e., just the variables in the domain), runs LMIP on each, and selects the (best)
feature with the highest score. Then, it creates candidate features by conjoining this best feature from
the previous step with each atomic feature, runs LMIP on each, and then selects a best feature for the
next iteration. It repeats this process until the feature length equals k or the score does not improve. This heuristic is
loosely based on the greedy approach of Della Pietra et al. [12]. We also use a balance heuristic to
reduce the size of the model learned, which imposes a form of regularization constraint and biases
our search towards sparser models, in order to avoid over-fitting. Here, given a set of features with
similar scores, we select a feature F such that the difference between the scores of F = 0 and F = 1
is the smallest. The intuition behind this heuristic is that by maintaining balance we reduce the height
of the feature graph and thus its size. Finally, in our implementation, we do not return all possible
instantiations of the variables when a feature assignment yields only one partition, unless the number
of remaining variables is smaller than 5. This is because even though a feature may not partition the
set of variables, it may still partition the data, thereby reducing complexity.
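The greedy feature construction can be summarized as follows (a sketch under our reading of the heuristic, omitting the balance tie-breaker; lmip and score are the routines sketched earlier):

def grow_feature(variables, data, mi, delta, q, max_len):
    """Greedily grow a feature up to max_len literals, keeping the best scorer."""
    candidates = [(v, val) for v in variables for val in (0, 1)]  # atomic features
    best, best_score = None, float("-inf")
    current = []
    for _ in range(max_len):
        scored = []
        for lit in candidates:
            if lit in current:
                continue
            f = current + [lit]
            s = (score(lmip(variables, data, mi, (f, 1), delta, q)) +
                 score(lmip(variables, data, mi, (f, 0), delta, q)))
            scored.append((s, f))
        s, f = max(scored, key=lambda t: t[0])
        if s <= best_score:
            break                     # no improvement: stop growing
        best, best_score, current = f, s, f
    return best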
5 Experimental Evaluation
We evaluated LEM on one synthetic data set and four real world ones. Figure 2(f) lists the five data
sets and the number of atomic features in each. The synthetic domain consists of samples from the
Alarm Bayesian network [3]. From the UCI machine learning repository [5], we used the Adult and
MSNBC anonymous Web data domains. Temperature and Traffic are sensor network data sets and
were used in Checketka and Guestrin [8].
We compared LEM to the standard Markov network structure learning algorithm of Della Pietra
et al.[12] (henceforth, called the DL scheme), the L1 approach of Ravikumar et al. [24] and the
lazy thin-junction tree algorithm (LPACJT) of Chechetka and Guestrin [8]. We used the following
parameters for LEM: q = 3 and δ = 0.05. We found that the results were insensitive to the
value of δ used. We suggest using any reasonably small value ≤ 0.1. The LPACJT implementation
available from the authors requires entropies (computed from the data) as input. We were unable to
compute the entropies in the required format because they use proprietary software that we did not
have access to, and therefore we use the results provided by the authors for the temperature, traffic
and alarm domains. We were unable to run LPACJT on the other two domains. We altered the DL
algorithm to only evaluate candidate features that match at least one example. This simple extension
vastly reduces the number of candidate features and greatly improves the algorithm's efficiency. For
implementing DL, we use pseudo-likelihood [4] as a scoring function and optimized it via the limited-memory BFGS algorithm [17]. For implementing L1, we used the OWL-QN software package of
Andrew and Gao [1]. The neighborhood structures for L1 can be merged in two ways (logical-OR or
logical-AND of the structures); we tried both and used the best one for plotting the results. For the
regularization, we tried penalty = {1, 2, 5, 10, 20, 25, 50, 100, 200, 500, 1000} and used a tuning
set to pick the one that gave the best results. We used a time-bound of 24 hrs for each algorithm.
For each domain, we evaluated the algorithms on training set sizes varying from 100 to 10000. We
performed a five-fold train-test split. For the sensor networks, traffic and alarm domains, we use
the test set sizes provided in Chechetka and Guestrin [8]. For the MSNBC and Adult domains, we
selected a test set consisting of 58265 and 7327 examples respectively. We evaluate the performance
[Figure 2 panels (a)-(e): test-set log-likelihood vs. training set size (100 to 10000) for DL, L1, LEM and, where available, LPACJT on (a) Alarm, (b) Traffic, (c) Temperature, (d) MSNBC and (e) Adult.]
(f) Data set characteristics and timing results (run-time in minutes for a training set of size 10000):

Data set    #Features   DL     L1   LEM
Alarm       148         60     14   91
Traffic     128         1440   2    691
Temp.       216         1440   21   927
MSNBC       17          1440   1    31
Adult       125         22     19   48
Figure 2: Figures (a)-(e) showing average log-likelihood as a function of the training data size for LEM, DL,
L1 and LPACJT. Figure (f) reports the run-time in minutes for LEM, DL and L1 for a training set of size 10000.
based on average-log-likelihood of the test data, given the learned model. The log-likelihood of the
test data was computed exactly for the models output by LPACJT and LEM, because inference is
tractable in these models. The size of the feature graphs learned by LEM ranged from O(n²) to
O(n³), comparable to those generated by LPACJT. Exact inference on the learned feature graphs
was a matter of milliseconds. For the Markov networks output by DL and L1, we compute the
log-likelihood approximately using loopy belief propagation [20].
Figure 2 summarizes the results for the five domains. LEM significantly outperforms L1 on all the
domains except the Alarm dataset. It is better than the greedy DL scheme on three out of the five
domains while it is always better than LPACJT. Figure 2(f) shows the timing results for LEM, DL
and L1. L1 is substantially faster than DL and LEM. DL is the slowest scheme.
6 Conclusions
We have presented an algorithm for learning a class of high-treewidth Markov networks that admit
tractable inference and closed-form parameter learning. This class is much richer than thin junction
trees because it exploits context-specific independence and determinism. We showed that our algorithm has probabilistic performance guarantees under the recursive assumption that the distribution at
each node in the (rooted) feature graph (which is defined only over a decreasing subset of variables as
we move further away from the root), is itself representable by a polynomial-sized feature graph and
in which the maximum feature-size at each node is bounded by k. We believe that our new theoretical
insights further the understanding of structure learning in Markov networks, especially those having
high treewidth. In addition to the theoretical guarantees, we showed that our algorithm has good
performance in practice, usually having higher test-set likelihood than other competing approaches.
Although learning may be slow, inference always has quick and predictable runtime, which is linear
in the size of the feature graph. Intuitively, our method seems likely to perform well on large sparsely
dependent datasets.
Acknowledgements
This research was partly funded by ARO grant W911NF-08-1-0242, AFRL contract FA8750-09-C-0181, DARPA contracts FA8750-05-2-0283, FA8750-07-D-0185, HR0011-06-C-0025, HR0011-07-C-0060 and NBCH-D030010, NSF grants IIS-0534881 and IIS-0803481, and ONR grant N00014-08-1-0670. The views and conclusions contained in this document are those of the authors and should
not be interpreted as necessarily representing the official policies, either expressed or implied, of
ARO, DARPA, NSF, ONR, or the United States Government.
References
[1] G. Andrew and J. Gao. Scalable training of L1-regularized log-linear models. In Proceedings of the
Twenty-Fourth International Conference (ICML), pages 33-40, 2007.
[2] F. R. Bach and M. I. Jordan. Thin junction trees. In Advances in Neural Information Processing Systems,
pages 569-576, 2001.
[3] I. Beinlich, J. Suermondt, M. Chavez, and G. Cooper. The alarm monitoring system: A case study with
two probabilistic inference techniques for belief networks. In European Conference on AI in Medicine,
1988.
[4] J. Besag. Statistical analysis of non-lattice data. The Statistician, 24:179-195, 1975.
[5] C. Blake and C. J. Merz. UCI repository of machine learning databases. Machine-readable data repository,
Department of Information and Computer Science, University of California at Irvine, Irvine, CA, 2000.
http://www.ics.uci.edu/~mlearn/MLRepository.html.
[6] C. Boutilier. Context-specific independence in Bayesian networks. In Proceedings of the Twelfth Annual
Conference on Uncertainty in Artificial Intelligence (UAI), pages 115-123, 1996.
[7] M. Chavira and A. Darwiche. On probabilistic inference by weighted model counting. Artificial Intelligence, 172(6-7):772-799, April 2008.
[8] A. Chechetka and C. Guestrin. Efficient principled learning of thin junction trees. In Advances in Neural
Information Processing Systems (NIPS), December 2007.
[9] D. M. Chickering, D. Geiger, and D. Heckerman. Learning Bayesian networks: Search methods and experimental results. In Proceedings of the Fifth International Workshop on Artificial Intelligence and Statistics
(AISTATS), pages 112-128, 1995.
[10] A. Darwiche. A differential approach to inference in Bayesian networks. Journal of the ACM, 50(3):280-305, 2003.
[11] R. Dechter and R. Mateescu. AND/OR search spaces for graphical models. Artificial Intelligence, 171(2-3):73-106, 2007.
[12] S. Della Pietra, V. Della Pietra, and J. Lafferty. Inducing features of random fields. IEEE Transactions on
Pattern Analysis and Machine Intelligence, 19:380-392, 1997.
[13] S. Goldman. Efficient methods for calculating maximum entropy distributions. Master's thesis, Massachusetts Institute of Technology, 1987.
[14] K. Höffgen. Learning and robust learning of product distributions. In Proceedings of the Sixth Annual
ACM Conference on Computational Learning Theory (COLT), pages 77-83, 1993.
[15] D. R. Karger and N. Srebro. Learning Markov networks: maximum bounded tree-width graphs. In
Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages
392-401, 2001.
[16] S. Lee, V. Ganapathi, and D. Koller. Efficient structure learning of Markov networks using L1-regularization. In Proceedings of the Twentieth Annual Conference on Neural Information Processing
Systems (NIPS), pages 817-824, 2006.
[17] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical
Programming, 45(3):503-528, 1989.
[18] D. Lowd and P. Domingos. Learning arithmetic circuits. In Proceedings of the Twenty-Fourth Conference
in Uncertainty in Artificial Intelligence, pages 383-392, 2008.
[19] A. W. Moore and M. S. Lee. Cached sufficient statistics for efficient machine learning with large datasets.
Journal of Artificial Intelligence Research, 8:67-91, 1997.
[20] K. P. Murphy, Y. Weiss, and M. I. Jordan. Loopy belief propagation for approximate inference: An
empirical study. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI),
pages 467-475, 1999.
[21] M. Narasimhan and J. Bilmes. PAC-learning bounded tree-width graphical models. In Proceedings of the
Twentieth Conference in Uncertainty in Artificial Intelligence (UAI), pages 410-417, 2004.
[22] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA, 1988.
[23] M. Queyranne. Minimizing symmetric submodular functions. Mathematical Programming, 82(1):3-12,
1998.
[24] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using L1-regularized logistic regression. Annals of Statistics, 38(3):1287-1319, 2010.
[25] D. Roth. On the hardness of approximate reasoning. Artificial Intelligence, 82:273-302, 1996.
[26] T. Sang, P. Beame, and H. Kautz. Performing Bayesian inference by weighted model counting. In Proceedings of The Twentieth National Conference on Artificial Intelligence (AAAI), pages 475-482, 2005.
9
3,326 | 4,011 | Approximate Inference by Compilation to
Arithmetic Circuits
Daniel Lowd
Department of Computer and Information Science
University of Oregon
Eugene, OR 97403-1202
[email protected]
Pedro Domingos
Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195-2350
[email protected]
Abstract
Arithmetic circuits (ACs) exploit context-specific independence and determinism
to allow exact inference even in networks with high treewidth. In this paper, we
introduce the first ever approximate inference methods using ACs, for domains
where exact inference remains intractable. We propose and evaluate a variety of
techniques based on exact compilation, forward sampling, AC structure learning,
Markov network parameter learning, variational inference, and Gibbs sampling.
In experiments on eight challenging real-world domains, we find that the methods
based on sampling and learning work best: one such method (AC2-F) is faster
and usually more accurate than loopy belief propagation, mean field, and Gibbs
sampling; another (AC2-G) has a running time similar to Gibbs sampling but is
consistently more accurate than all baselines.
1 Introduction
Compilation to arithmetic circuits (ACs) [1] is one of the most effective methods for exact inference
in Bayesian networks. An AC represents a probability distribution as a directed acyclic graph of
addition and multiplication nodes, with real-valued parameters and indicator variables at the leaves.
This representation allows for linear-time exact inference in the size of the circuit. Compared to
a junction tree, an AC can be exponentially smaller by omitting unnecessary computations, or by
performing repeated subcomputations only once and referencing them multiple times. Given an
AC, we can efficiently condition on evidence or marginalize variables to yield a simpler AC for the
conditional or marginal distribution, respectively. We can also compute all marginals in parallel by
differentiating the circuit. These many attractive properties make ACs an interesting and important
representation, especially when answering many queries on the same domain. However, as with
junction trees, compiling a BN to an equivalent AC yields an exponentially-sized AC in the worst
case, preventing their application to many domains of interest.
In this paper, we introduce approximate compilation methods, allowing us to construct effective ACs
for previously intractable domains. For selecting circuit structure, we compare exact compilation of
a simplified network to learning it from samples. Structure selection is done once per domain, so the
cost is amortized over all future queries. For selecting circuit parameters, we compare variational
inference to maximum likelihood learning from samples. We find that learning from samples works
best for both structure and parameters, achieving the highest accuracy on eight challenging, real-world domains. Compared to loopy belief propagation, mean field, and Gibbs sampling, our AC2-F
method, which selects parameters once per domain, is faster and usually more accurate. Our AC2-G
method, which optimizes parameters at query time, achieves higher accuracy on every domain with
a running time similar to Gibbs sampling.
The remainder of this paper is organized as follows. In Section 2, we provide background on
Bayesian networks and arithmetic circuits. In Section 3, we present our methods and discuss related work. We evaluate the methods empirically in Section 4 and conclude in Section 5.
2 Background
2.1 Bayesian networks
Bayesian networks (BNs) exploit conditional independence to compactly represent a probability
distribution over a set of variables, $\{X_1, \ldots, X_n\}$. A BN consists of a directed, acyclic graph with
a node for each variable, and a set of conditional probability distributions (CPDs) describing the
probability of each variable, $X_i$, given its parents in the graph, denoted $\pi_i$ [2]. The full probability
distribution is the product of the CPDs: $P(X) = \prod_{i=1}^{n} P(X_i \mid \pi_i)$.
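To make the factorization concrete, here is a minimal sketch (ours, not the paper's; the data structures are assumptions) of evaluating the joint log-probability of a full assignment under tabular CPDs:

```python
import math

# Hedged sketch: log P(x) = sum_i log P(x_i | parents_i(x)) for a BN with
# tabular CPDs. parents[i] lists the indices of X_i's parents; cpds[i]
# maps a tuple of parent values to a dict {value: probability}.
def log_joint(x, parents, cpds):
    total = 0.0
    for i, xi in enumerate(x):
        pa = tuple(x[j] for j in parents[i])
        total += math.log(cpds[i][pa][xi])
    return total
```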
Each variable in a BN is conditionally independent of its non-descendants given its parents. Depending on how the CPDs are parametrized, there may be additional independencies. For discrete
domains, the simplest form of CPD is a conditional probability table, but this requires space exponential in the number of parents of the variable. A more scalable approach is to use decision trees as
CPDs, taking advantage of context-specific independencies [3, 4, 5]. In a decision tree CPD for variable Xi , each interior node is labeled with one of the parent variables, and each of its outgoing edges
is labeled with a value of that variable. Each leaf node is a multinomial representing the marginal
distribution of Xi conditioned on the parent values specified by its ancestor nodes and edges in the
tree.
Bayesian networks can be represented as log-linear models:
$$\log P(X = x) = -\log Z + \sum_i w_i f_i(x) \qquad (1)$$
where each fi is a feature, each wi is a real-valued weight, and Z is the partition function. In BNs, Z
is 1, since the conditional distributions ensure global normalization. After conditioning on evidence,
the resulting distribution may no longer be a BN, but it can still be represented as a log linear model.
The goal of inference in Bayesian networks and other graphical models is to answer arbitrary
marginal and conditional queries (i.e., to compute the marginal distribution of a set of query variables, possibly conditioned on the values of a set of evidence variables). Popular methods include
variational inference, Gibbs sampling, and loopy belief propagation.
In variational inference, the goal is to select a tractable distribution Q that is as close as possible to
the original, intractable distribution P. Minimizing the KL divergence from P to Q (KL(P ∥ Q)) is
generally intractable, so the "reverse" KL divergence is typically used instead:
$$\mathrm{KL}(Q \,\|\, P) = \sum_x Q(x) \log \frac{Q(x)}{P(x)} = -H_Q(x) - \sum_i w_i E_Q[f_i] + \log Z_P \qquad (2)$$
where HQ (x) is the entropy of Q, EQ is an expectation computed over the probability distribution Q,
ZP is the partition function of P , and wi and fi are the weights and features of P (see Equation 1).
This quantity can be minimized by fixed-point iteration or by using a gradient-based numerical
optimization method. What makes the reverse KL divergence more tractable to optimize is that the
expectations are done over Q instead of P . This minimization also yields bounds on the log partition
function, or the probability of evidence in a BN. Specifically, because KL(Q ∥ P) is non-negative,
$\log Z_P \ge H_Q(x) + \sum_i w_i E_Q[f_i]$.
The most commonly applied variational method is mean field, in which Q is chosen from the set
of fully factorized distributions. Generalized or structured mean field operates on a set of clusters
(possibly overlapping), or junction tree formed from a subset of the edges [6, 7, 8]. Selecting the
best tractable substructure is a difficult problem. One approach is to greedily delete arcs until the
junction tree is tractable [6]. Alternately, Xing et al. [7] use weighted graph cuts to select clusters
for structured mean field.
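Under the mean-field assumption, the quantities in Equation 2 have closed forms: for a fully factorized Q over binary variables and conjunctive features, $E_Q[f_i]$ is a product of marginals. A sketch under exactly those assumptions (ours, for illustration only):

```python
import math

# Reverse-KL objective up to the constant log Z_P, i.e. -H_Q - sum_i w_i E_Q[f_i].
# q[v] = Q(X_v = 1); each feature is a dict {variable index: required value}.
def mean_field_objective(q, features, weights):
    entropy = -sum(p * math.log(p) + (1 - p) * math.log(1 - p)
                   for p in q if 0.0 < p < 1.0)  # terms with p in {0,1} contribute 0
    expected = 0.0
    for f, w in zip(features, weights):
        e = 1.0
        for var, val in f.items():
            e *= q[var] if val == 1 else 1.0 - q[var]
        expected += w * e
    return -entropy - expected
```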
2.2 Arithmetic circuits
The probability distribution represented by a Bayesian network can be equivalently represented by a
multilinear function known as the network polynomial [1]: $P(X_1 = x_1, \ldots, X_n = x_n) = \sum_{x} \prod_{i=1}^{n} I(X_i = x_i)\, P(X_i = x_i \mid \pi_i)$, where the sum ranges over all possible instantiations of
the variables, $I(\cdot)$ is the indicator function (1 if the argument is true, 0 otherwise), and the $P(X_i \mid \pi_i)$
are the parameters of the BN. The probability of any partial instantiation of the variables can now be
computed simply by setting to 1 all the indicators consistent with the instantiation, and to 0 all others.
This allows arbitrary marginal and conditional queries to be answered in time linear in the size of
the polynomial. Furthermore, differentiating the network with respect to its weight parameters (wi )
yields the probabilities of the corresponding features (fi ).
The size of the network polynomial is exponential in the number of variables, but it can be more
compactly represented using an arithmetic circuit (AC). An AC is a rooted, directed acyclic graph
whose leaves are numeric constants or variables, and whose interior nodes are addition and multiplication operations. The value of the function for an input tuple is computed by setting the variable
leaves to the corresponding values and computing the value of each node from the values of its children, starting at the leaves. In the case of the network polynomial, the leaves are the indicators and
network parameters. The AC avoids the redundancy present in the network polynomial, and can be
exponentially more compact.
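A minimal sketch of bottom-up AC evaluation (the node representation is hypothetical, and a practical implementation would memoize shared sub-circuits instead of recursing naively):

```python
# Leaves are either real-valued parameters or indicator variables I(X = v).
# Setting every indicator consistent with a partial instantiation to 1.0 and
# the rest to 0.0 makes the root value equal the probability of that evidence.
def eval_ac(node, indicator_value):
    if node.kind == 'param':
        return node.value
    if node.kind == 'indicator':
        return indicator_value[(node.var, node.val)]  # 1.0 or 0.0
    child_values = [eval_ac(c, indicator_value) for c in node.children]
    if node.kind == '+':
        return sum(child_values)
    assert node.kind == '*'
    result = 1.0
    for v in child_values:
        result *= v
    return result
```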
Every junction tree has a corresponding AC, with an addition node for every instantiation of a separator, a multiplication node for every instantiation of a clique, and a summation node as the root.
Thus one way to compile a BN into an AC is via a junction tree. However, when the network contains context-specific independences, a much more compact circuit can be obtained. Darwiche [1]
describes one way to do this, by encoding the network into a special logical form, factoring the
logical form, and extracting the corresponding AC.
Other exact inference methods include variable elimination with algebraic decision diagrams (which
can also be done with ACs [9]), AND/OR graphs [10], bucket elimination [11], and more.
3 Approximate Compilation of Arithmetic Circuits
In this section, we describe AC2 (Approximate Compilation of Arithmetic Circuits), an approach
for constructing an AC to approximate a given BN. AC2 does this in two stages: structure search
and parameter optimization. The structure search is done in advance, once per network, while the
parameters may be selected at query time, conditioned on evidence. This amortizes the cost of
the structure search over all future queries. The parameter optimization allows us to fine-tune the
circuit to specific pieces of evidence. Just as in variational inference methods such as mean field,
we optimize the parameters of a tractable distribution to best approximate an intractable one. Note
that, if the BN could be compiled exactly, this step would be unnecessary, since the conditional
distribution would always be optimal.
3.1 Structure search
We considered two methods for generating circuit structures. The first is to prune the BN structure
and then compile the simplified BN exactly. The second is to approximate the BN distribution with
a set of samples and learn a circuit from this pseudo-empirical data.
3.1.1 Pruning and compiling
Pruning and compiling a BN is somewhat analogous to edge deletion methods (e.g., [6]), except
that instead of removing entire edges and building the full junction tree, we introduce context-specific independencies and build an arithmetic circuit that can exploit them. This finer-grained
simplification offers the potential of much richer models or smaller circuits. However, it also offers
more challenging search problems that must be approximated heuristically.
We explored several techniques for greedily simplifying a network into a tractable AC by pruning
splits from its decision-tree CPDs. Ideally, we would like to have bounds on the error of our simplified model, relative to the original. This can be accomplished by bounding the ratio of each log con3
ditional probability distribution, so that the approximated log probability of every instance is within
a constant factor of the truth, as done by the Multiplicative Approximation Scheme (MAS) [12].
However, we found that the bounds for our networks were very large, with ratios in the hundreds or
thousands. This occurs because our networks have probabilities close to 0 and 1 (with logs close to
negative infinity and zero), and because the bounds focus on the worst case.
Therefore, we chose to focus instead on the average case by attempting to minimize the KL divergence between the original model and the simplified approximation: $\mathrm{KL}(P \,\|\, Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)}$, where P is the original network and Q is the simplified approximate network, in which each of P's
conditional probability distributions has been simplified. We choose to optimize the KL divergence here because the reverse KL is prone to fitting only a single mode, and we want to avoid excluding any significant parts of the distribution before seeing
evidence. Since Q's structure is a subset of P's, we can decompose the KL divergence as follows:
$$\mathrm{KL}(P \,\|\, Q) = \sum_i \sum_{\pi_i} P(\pi_i) \sum_{x_i} P(x_i \mid \pi_i) \log \frac{P(x_i \mid \pi_i)}{Q(x_i \mid \pi_i)} \qquad (3)$$
where the summation is over all states of $X_i$'s parents, $\pi_i$. In other words, the KL divergence
can be computed by adding the expected divergence of each local factor, where the expectation is
computed according to the global probability distribution. For the case of BNs with tree CPDs (as
described in Section 2.1), this means that knowing the distribution of the parent variables allows us
to compute the change in KL divergence from pruning a tree CPD.
Unfortunately, computing the distribution of each variable's parents is intractable and must be approximated in some way. We tried two different methods for computing these distributions: estimating the joint parent probabilities from a large number of samples (one million in our experiments)
("P-Samp"), and forming the product of the parent marginals estimated using mean field ("P-MF").
Given a method for computing the parent marginals, we remove the splits that least increase the
KL divergence. We implement this by starting from a fully pruned network and greedily adding the
splits that most decrease KL divergence. After every 10 splits, we check the number of edges by
compiling the candidate network to an AC using the C2D compiler. 1 We stop when the number of
edges exceeds our prespecified bound.
3.1.2 Learning from samples
The second approach we tried is learning a circuit from a set of generated samples. The samples
themselves are generated using forward sampling, in which each variable in the BN is sampled in
topological order according to its conditional distribution given its parents. The circuit learning
method we chose is the LearnAC algorithm by Lowd and Domingos [13], which greedily learns
an AC representing a BN with decision tree CPDs by trading off log likelihood and circuit size.
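Before turning to our modification of LearnAC, here is a sketch of the forward-sampling step just described (ours; the CPD representation is an assumption):

```python
import random

# Draw one sample from a BN by sampling each variable in topological order
# from its CPD given the already-sampled parent values.
def forward_sample(order, parents, cpds):
    x = {}
    for i in order:
        pa = tuple(x[j] for j in parents[i])
        dist = cpds[i][pa]                      # dict: value -> probability
        values = list(dist)
        x[i] = random.choices(values, weights=[dist[v] for v in values])[0]
    return x
```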
We made one modification to the LearnAC (LAC) algorithm in order to learn circuits with a
fixed number of edges. Instead of using a fixed edge penalty, we start with an edge penalty of 100
and halve it every time we run out of candidate splits with non-negative scores. The effect of this
modified procedure is to conservatively select splits that add few edges to the circuit at first, and
become increasingly liberal until the edge limit is reached. Tuning the initial edge penalty can lead
to slightly better performance at the cost of additional training time. We also explored using the BN
structure to guide the AC structure search (for example, by excluding splits that would violate the
partial order of the original BN), but these restrictions offered no significant advantage in accuracy.
Many modifications to this procedure are possible. Larger edge budgets or different heuristics could
yield more accurate circuits. With additional engineering, the LearnAC algorithm could be adapted
to dynamically request only as many samples as necessary to be confident in its choices. For example, Hulten and Domingos [14] have developed methods that scale learning algorithms to datasets
of arbitrary size; the same approach could be used here, except in a "pull" setting where the data is
generated on-demand. Spending a long time finding the most accurate circuit may be worthwhile,
since the cost is amortized over all queries.
We are not the first to propose sampling as a method for converting intractable models into tractable
ones. Wang et al. [15] used a similar procedure for learning a latent tree model to approximate a
1. Available at http://reasoning.cs.ucla.edu/c2d/.
BN. They found that the learned models had faster or more accurate inference on a wide range of
standard BNs (where exact inference is somewhat tractable). In a semi-supervised setting, Liang et
al. [16] trained a conditional random field (CRF) from a small amount of labeled training data, used
the CRF to label additional examples, and learned independent logistic regression models from this
expanded dataset.
3.2 Parameter optimization
In this section, we describe three methods for selecting AC parameters: forward sampling, variational optimization, and Gibbs sampling.
3.2.1 Forward sampling
In AC2 -F, we use forward sampling to generate a set of samples from the original BN (one million
in our experiments) and maximum likelihood estimation to estimate the AC parameters from those
samples. This can be done in closed form because, before conditioning on evidence, the AC structure
also represents a BN. AC2 -F selects these parameters once per domain, before conditioning on any
evidence. This makes it very fast at query time.
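The closed-form step amounts to counting: each decision-tree CPD leaf holds a multinomial estimated from the samples that reach it. A sketch (the add-one smoothing is our choice; the paper does not specify its smoothing):

```python
from collections import Counter

# Maximum-likelihood estimate of the multinomial at one CPD leaf.
# leaf_test(x) is True iff sample x reaches this leaf of X_var's tree CPD.
def estimate_leaf_params(samples, var, leaf_test, values):
    counts = Counter(x[var] for x in samples if leaf_test(x))
    total = sum(counts.values()) + len(values)   # add-one (Laplace) smoothing
    return {v: (counts[v] + 1) / total for v in values}
```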
AC2-F can be viewed as approximately minimizing the KL divergence KL(P ∥ Q) between the
BN distribution P and the AC distribution Q. For conditional queries $P(Y \mid X = x_{ev})$, we are more
interested in the divergence of the conditional distributions, $\mathrm{KL}(P(\cdot \mid x_{ev}) \,\|\, Q(\cdot \mid x_{ev}))$. The following
theorem bounds the conditional KL divergence as a function of the unconditional KL divergence:
Theorem 1. For discrete probability distributions P and Q, and evidence $x_{ev}$,
$$\mathrm{KL}\big(P(\cdot \mid x_{ev}) \,\|\, Q(\cdot \mid x_{ev})\big) \le \frac{1}{P(x_{ev})}\, \mathrm{KL}(P \,\|\, Q)$$
(See the supplementary materials for the proof.) From this theorem, we expect AC2 -F to work
better when evidence is likely (i.e., P (xev ) is not too small). For rare evidence, the conditional KL
divergence could be much larger than the unconditional KL divergence.
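The proof appears in the supplementary material; the same bound also follows from a short chain-rule argument (our sketch, not necessarily the authors' derivation). Writing E for the evidence variables and using the chain rule of KL divergence,

```latex
\begin{align*}
\mathrm{KL}(P \,\|\, Q)
  &= \mathrm{KL}\big(P(E) \,\|\, Q(E)\big)
     + \sum_{e} P(e)\, \mathrm{KL}\big(P(\cdot \mid e) \,\|\, Q(\cdot \mid e)\big) \\
  &\ge P(x_{ev})\, \mathrm{KL}\big(P(\cdot \mid x_{ev}) \,\|\, Q(\cdot \mid x_{ev})\big),
\end{align*}
```

since every term in the decomposition is non-negative; dividing both sides by $P(x_{ev})$ gives the bound.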
3.2.2 Variational optimization
Since AC2 -F selects parameters based on the unconditioned BN, it may do poorly when conditioning
on rare evidence. An alternative is to choose AC parameters that (locally) minimize the reverse KL
divergence to the BN conditioned on evidence. Let P and Q be log-linear models, i.e.:
$$\log P(x) = -\log Z_P + \sum_i w_i f_i(x), \qquad \log Q(x) = -\log Z_Q + \sum_j v_j g_j(x)$$
The reverse KL divergence and its gradient can now be written as follows:
$$\mathrm{KL}(Q \,\|\, P) = \sum_j v_j E_Q(g_j) - \sum_i w_i E_Q(f_i) + \log \frac{Z_P}{Z_Q} \qquad (4)$$
$$\frac{\partial}{\partial v_j} \mathrm{KL}(Q \,\|\, P) = \sum_k v_k \big(E_Q(g_k g_j) - Q(g_k)\,Q(g_j)\big) - \sum_i w_i \big(E_Q(f_i g_j) - Q(f_i)\,Q(g_j)\big) \qquad (5)$$
where $E_Q(g_k g_j)$ is the expected value of $g_k(x) \cdot g_j(x)$ according to Q. In our application, P is
the BN conditioned on evidence and Q is the AC. Since inference in Q (the AC) is tractable, the
gradient can be computed exactly.
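Given the expectations, the gradient in Equation 5 is a direct computation; a sketch (ours; the expectation tables are assumed to have been filled in by differentiating the circuit):

```python
# grad[j] = sum_k v_k (E_Q[g_k g_j] - Q(g_k) Q(g_j))
#         - sum_i w_i (E_Q[f_i g_j] - Q(f_i) Q(g_j))
def reverse_kl_gradient(v, w, E_gg, E_fg, E_g, E_f):
    grad = []
    for j in range(len(v)):
        gj = sum(v[k] * (E_gg[k][j] - E_g[k] * E_g[j]) for k in range(len(v)))
        gj -= sum(w[i] * (E_fg[i][j] - E_f[i] * E_g[j]) for i in range(len(w)))
        grad.append(gj)
    return grad
```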
We can optimize this using any numerical optimization method, such as gradient descent. Due
to local optima, the results may depend on the optimization procedure and its initialization. In
experiments, we used the limited memory BFGS algorithm (L-BFGS) [17], initialized with AC2-F.
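Any off-the-shelf L-BFGS routine can consume the objective of Equation 4 and the gradient of Equation 5; shown with SciPy purely for illustration (the paper does not name the software it used):

```python
import numpy as np
from scipy.optimize import minimize

# objective(v) returns KL(Q || P) up to a constant; gradient(v) returns Eq. 5.
# v_init holds the AC2-F maximum-likelihood estimates.
def fit_ac_parameters(objective, gradient, v_init):
    result = minimize(objective, np.asarray(v_init, dtype=float),
                      jac=gradient, method='L-BFGS-B')
    return result.x
```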
We now discuss how to compute the gradient efficiently in a circuit with e edges. By setting leaf
values and evaluating the circuit as described by Darwiche [1], we can compute the probability of
any conjunctive feature Q(fi ) (or Q(gk )) in O(e) operations. If we differentiate the circuit after
conditioning on a feature fi (or gk ), we can obtain the probabilities of the conjunctions Q(fi gj ) (or
Q(gk gj )) for all gj in O(e) time. Therefore, if there are n features in P , and m features in Q, then
the total complexity of computing the derivative is O((n + m)e). Since there are typically fewer
features in Q than P , this simplifies to O(ne).
These methods are applicable to any tractable structure represented as an AC, including low-treewidth models, mixture models, latent tree models, etc. We refer to this method as AC2-V.
3.2.3 Gibbs sampling
While optimizing the reverse KL is a popular choice for approximate inference, there are certain
risks. Even if KL(Q ∥ P) is small, Q may assign very small or zero probabilities to important modes
of P. Furthermore, we are only guaranteed to find a local optimum, which may be much worse
than the global optimum. The "regular" KL divergence does not suffer these disadvantages, but is
impractical to compute since it involves expectations according to P:
$$\mathrm{KL}(P \,\|\, Q) = \sum_i w_i E_P(f_i) - \sum_j v_j E_P(g_j) + \log(Z_Q / Z_P) \qquad (6)$$
$$\frac{\partial}{\partial v_j} \mathrm{KL}(P \,\|\, Q) = E_Q(g_j) - E_P(g_j) \qquad (7)$$
Therefore, minimizing KL(P ∥ Q) by gradient descent or L-BFGS requires computing the conditional probability of each AC feature according to the BN, $E_P(g_j)$. Note that these only need to be
computed once, since they are unaffected by the AC feature weights, vj . We chose to approximate
these expectations using Gibbs sampling, but an alternate inference method (e.g., importance sampling) could be substituted. The probabilities of the AC features according to the AC, EQ (gj ), can
be computed in parallel by differentiating the circuit, requiring time O(e).2 This is typically orders
of magnitude faster than the variational approach described above, since each optimization step runs
in O(e) instead of O(ne), where n is the number of BN features. We refer to this method as AC2-G.
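A sketch of the resulting update direction (ours; function and argument names are placeholders): the BN-side expectations are estimated once from Gibbs samples, while the AC-side expectations are recomputed from the circuit at every optimization step.

```python
# Gradient of KL(P || Q) w.r.t. the AC weights (Eq. 7): E_Q(g_j) - E_P(g_j).
# gibbs_samples are draws from P(. | evidence); features are 0/1 functions g_j;
# eval_feature_expectations returns E_Q(g_j) via circuit differentiation.
def kl_pq_gradient(ac_params, gibbs_samples, features, eval_feature_expectations):
    ep = [sum(g(x) for x in gibbs_samples) / len(gibbs_samples)
          for g in features]                       # Monte Carlo E_P(g_j), fixed
    eq = eval_feature_expectations(ac_params)      # E_Q(g_j), recomputed
    return [e_q - e_p for e_q, e_p in zip(eq, ep)]
```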
4 Experiments
In this section, we compare the proposed methods experimentally and demonstrate that approximate
compilation is an accurate and efficient technique for inference in intractable networks.
4.1 Datasets
We wanted to evaluate our methods on challenging, realistic networks where exact inference is intractable, even for the most sophisticated arithmetic circuit-based techniques. This ruled out most
traditional benchmarks, for which ACs can already perform exact inference [9]. We generated intractable networks by learning them from eight real-world datasets using the WinMine Toolkit [18].
The WinMine Toolkit learns BNs with tree-structured CPDs, leading to complex models with high
tree-width. In theory, this additional structure can be exploited by existing arithmetic circuit techniques, but in practice, compilation techniques ran out of memory on all eight networks. See Davis
and Domingos [19] and our supplementary material for more details on the datasets and the networks
learned from them, respectively.
4.2 Structure selection
In our first set of experiments, we compared the structure selection algorithms from Section 3.1
according to their ability to approximate the original models. Since computing the KL divergence
directly is intractable, we approximated it using random samples $x^{(i)}$:
$$D(P \,\|\, Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)} = E_P\big[\log(P(x)/Q(x))\big] \approx \frac{1}{m} \sum_i \log\big(P(x^{(i)})/Q(x^{(i)})\big) \qquad (8)$$
where m is the number of samples (10,000 in our experiments). These samples were distinct from
the training data, and the same set of samples was used to evaluate each algorithm.
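The estimator in Equation 8 is simple to state in code (a sketch, under the assumption that both models expose log-probability functions):

```python
# Monte Carlo estimate of KL(P || Q) from samples drawn from P.
def estimate_kl(samples, log_p, log_q):
    return sum(log_p(x) - log_q(x) for x in samples) / len(samples)
```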
For LearnAC, we trained circuits with a limit of 100,000 edges. All circuits were learned using
100,000 samples, and then the parameters were set using AC2 -F with 1 million samples.3 Training
time ranged from 17 minutes (KDD Cup) to 8 hours (EachMovie). As an additional baseline, we also
learned tree-structured BNs from the same 1 million samples using the Chow-Liu algorithm [20].
Results are in Table 1. The learned arithmetic circuit (LAC) achieves the best performance on all
datasets, often by a wide margin. We also observe that, of the pruning methods, samples (P-Samp)
work better than mean field marginals (P-MF). Chow-Liu trees (C-L) typically perform somewhere
between P-MF and P-Samp. For the rest of this paper, we focus on structures selected by LearnAC.
2. To support optimization methods that perform line search (including L-BFGS), we can similarly approximate KL(P ∥ Q); log Z_Q can also be computed in O(e) time.
3. With 1 million samples, we ran into memory limitations that a more careful implementation might avoid.
Table 1: KL divergence of different structure selection algorithms.

Dataset   | P-MF  | P-Samp | C-L   | LAC
KDD Cup   | 2.44  | 0.10   | 0.23  | 0.07
Plants    | 8.41  | 2.29   | 4.48  | 1.27
Audio     | 4.99  | 3.31   | 4.47  | 2.12
Jester    | 5.14  | 3.55   | 5.08  | 2.82
Netflix   | 3.83  | 3.06   | 4.14  | 2.24
MSWeb     | 1.78  | 0.52   | 0.70  | 0.38
Book      | 4.90  | 2.43   | 2.84  | 1.89
EachMovie | 29.66 | 17.61  | 17.11 | 11.12

Table 2: Mean time for answering a single conditional query, in seconds.

Dataset   | AC2-F | AC2-V | AC2-G | BP    | MF    | Gibbs
KDD Cup   | 0.022 | 3803  | 11.2  | 0.050 | 0.025 | 2.5
Plants    | 0.022 | 2741  | 11.2  | 0.081 | 0.073 | 2.8
Audio     | 0.023 | 4184  | 14.4  | 0.063 | 0.048 | 3.4
Jester    | 0.019 | 3448  | 13.8  | 0.054 | 0.057 | 3.3
Netflix   | 0.021 | 3050  | 12.3  | 0.057 | 0.053 | 3.3
MSWeb     | 0.022 | 2831  | 12.2  | 0.277 | 0.046 | 4.3
Book      | 0.020 | 5190  | 16.1  | 0.864 | 0.059 | 6.6
EachMovie | 0.022 | 10204 | 28.6  | 1.441 | 0.342 | 11.0

[Figure 1: eight panels (KDD Cup, Plants, Audio, Jester, Netflix, MSWeb, Book, EachMovie), each plotting log probability (y axis) against the fraction of evidence variables, 10%-50% (x axis), for AC2-F, AC2-V, AC2-G, BP, MF, and Gibbs.]
Figure 1: Average conditional log-likelihood of the query variables (y axis), divided by the number of query variables (x axis). Higher is better. Gibbs often performs too badly to appear in the frame.

4.3 Conditional probabilities
Using structures selected by LearnAC, we compared the accuracy of AC2-F, AC2-V, and AC2-G
to mean field (MF), loopy belief propagation (BP), and Gibbs sampling (Gibbs) on conditional
probability queries. We ran MF and BP to convergence. For Gibbs sampling, we ran 10 chains, each
with 1000 burn-in iterations and 10,000 sampling iterations. All methods exploited CPD structure
whenever possible (e.g., in the computation of BP messages). All code will be publicly released.
Since most of these queries are intractable to compute exactly, we cannot determine the true probabilities directly. Instead, we generated 100 random samples from each network, selected a random
subset of the variables to use as evidence (10%-50% of the total variables), and measured the log
conditional probability of the non-evidence variables according to each inference method. Different
queries used different evidence variables. This approximates the KL divergence between the true
and inferred conditional distributions up to a constant. We reduced the variance of this approximation by selecting additional queries for each evidence configuration. Specifically, we generated
100,000 samples and kept the ones compatible with the evidence, up to 10,000 per configuration.
For some evidence, none of the 100,000 samples were compatible, leaving just the original query.
Full results are in Figure 1. Table 2 contains the average inference time for each method.
Overall, AC2 -F does very well against BP and even better against MF and Gibbs, especially with
lesser amounts of evidence. Its somewhat worse performance at greater amounts of evidence is
consistent with Theorem 1. AC2 -F is also the fastest of the inference methods, making it a very
good choice for speedy inference with small to moderate amounts of evidence.
AC2 -V obtains higher accuracy than AC2 -F at higher levels of evidence, but is often less accurate at
lesser amounts of evidence. This can be attributed to different optimization and evaluation metrics:
reducing KL(Q ∥ P) may sometimes lead to increased KL(P ∥ Q). On EachMovie, AC2-V does
particularly poorly, getting stuck in a worse local optimum than the much simpler MF. AC2-V is
also the slowest method, by far.
AC2 -G is the most accurate method overall. It dominates BP, MF, and Gibbs on all datasets. With
the same number of samples, AC2 -G takes 2-4 times longer than Gibbs. This additional running time
is partly due to the parameter optimization step and partly due to the fact that AC2 -G is computing
many expectations in parallel, and therefore has more bookkeeping per sample. If we increase the
number of samples in Gibbs by a factor of 10 (not shown), then Gibbs wins on KDD at 40 and 50%
and Plants at 50% evidence, but is also significantly slower than AC2 -G. Compared to the other AC
methods, AC2 -G wins everywhere except for KDD at 10-40% evidence and Netflix at 10% evidence.
If we increase the number of samples in AC2 -G by a factor of 10 (not shown), then it beats AC2 -F
and AC2 -V on every dataset. The running time of AC2 -G is split approximately evenly between
computing sufficient statistics and optimizing parameters with L-BFGS.
Gibbs sampling did poorly in almost all of the scenarios, which can be attributed to the fact that
it is unable to accurately estimate the probabilities of very infrequent events. Most conjunctions
of dozens or hundreds of variables are very improbable, even if conditioned on a large amount
of evidence. If a certain configuration is never seen, then its probability is estimated to be very
low (non-zero due to smoothing). MF and BP did not have this problem, since they represent the
conditional distribution as a product of marginals, each of which can be estimated reasonably well.
In follow-up experiments, we found that using Gibbs sampling to compute the marginals yielded
slightly better accuracy than BP, but much slower. AC2 -G can be seen as a generalization of using
Gibbs sampling to compute marginals, just as AC2 -V generalizes MF.
5 Conclusion
Arithmetic circuits are an attractive alternative to junction trees due to their ability to exploit determinism and context-specific independence. However, even with ACs, exact inference remains
intractable for many networks of interest. In this paper, we introduced the first approximate compilation methods, allowing us to apply ACs to any BN. Our most efficient method, AC2 -F, is faster than
traditional approximate inference methods and more accurate most of the time. Our most accurate
method, AC2 -G, is more accurate than the baselines on every domain.
One of the key lessons is that combining sampling and learning is a good strategy for accurate
approximate inference. Sampling generates a coarse approximation of the desired distribution which
is subsequently smoothed by learning. For structure selection, an AC learning method applied to
samples was more effective than exact compilation of a simplified network. For parameter selection,
maximum likelihood estimation applied to Gibbs samples was both faster and more effective than
variational inference in ACs.
For future work, we hope to extend our methods to Markov networks, in which generating samples
is a difficult inference problem in itself. Similar methods could be used to select AC structures tuned
to particular queries, since a BN conditioned on evidence can be represented as a Markov network.
This could lead to more accurate results, especially in cases with a lot of evidence, but the cost would
no longer be amortized over all future queries. Comparisons with more sophisticated baselines are
another important item for future work.
Acknowledgements
The authors wish to thank Christopher Meek and Jesse Davis for helpful comments. This research
was partly funded by ARO grant W911NF-08-1-0242, AFRL contract FA8750-09-C-0181, DARPA
contracts FA8750-05-2-0283, FA8750-07-D-0185, HR0011-06-C-0025, HR0011-07-C-0060 and
NBCH-D030010, NSF grants IIS-0534881 and IIS-0803481, and ONR grant N00014-08-1-0670.
The views and conclusions contained in this document are those of the authors and should not be
interpreted as necessarily representing the official policies, either expressed or implied, of ARO,
DARPA, NSF, ONR, or the United States Government.
References
[1] A. Darwiche. A differential approach to inference in Bayesian networks. Journal of the ACM, 50(3):280–305, 2003.
[2] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA, 1988.
[3] C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller. Context-specific independence in Bayesian networks. In Proc. of the 12th Conference on Uncertainty in Artificial Intelligence, pages 115–123, Portland, OR, 1996. Morgan Kaufmann.
[4] N. Friedman and M. Goldszmidt. Learning Bayesian networks with local structure. In Proc. of the 12th Conference on Uncertainty in Artificial Intelligence, pages 252–262, Portland, OR, 1996. Morgan Kaufmann.
[5] D. Chickering, D. Heckerman, and C. Meek. A Bayesian approach to learning Bayesian networks with local structure. In Proc. of the 13th Conference on Uncertainty in Artificial Intelligence, pages 80–89, Providence, RI, 1997. Morgan Kaufmann.
[6] A. Choi and A. Darwiche. A variational approach for approximating Bayesian networks by edge deletion. In Proc. of the 22nd Conference on Uncertainty in Artificial Intelligence (UAI-06), Arlington, Virginia, 2006. AUAI Press.
[7] E. P. Xing, M. I. Jordan, and S. Russell. Graph partition strategies for generalized mean field inference. In Proc. of the 20th Conference on Uncertainty in Artificial Intelligence, pages 602–610, Banff, Canada, 2004.
[8] D. Geiger, C. Meek, and Y. Wexler. A variational inference procedure allowing internal structure for overlapping clusters and deterministic constraints. Journal of Artificial Intelligence Research, 27:1–23, 2006.
[9] M. Chavira and A. Darwiche. Compiling Bayesian networks using variable elimination. In Proc. of the 20th International Joint Conference on Artificial Intelligence (IJCAI), pages 2443–2449, 2007.
[10] R. Dechter and R. Mateescu. AND/OR search spaces for graphical models. Artificial Intelligence, 171:73–106, 2007.
[11] R. Dechter. Bucket elimination: a unifying framework for reasoning. Artificial Intelligence, 113:41–85, 1999.
[12] Y. Wexler and C. Meek. MAS: a multiplicative approximation scheme for probabilistic inference. In Advances in Neural Information Processing Systems 22, Cambridge, MA, 2008. MIT Press.
[13] D. Lowd and P. Domingos. Learning arithmetic circuits. In Proc. of the 24th Conference on Uncertainty in Artificial Intelligence, Helsinki, Finland, 2008. AUAI Press.
[14] G. Hulten and P. Domingos. Mining complex models from arbitrarily large databases in constant time. In Proc. of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 525–531, Edmonton, Canada, 2002. ACM Press.
[15] Y. Wang, N. L. Zhang, and T. Chen. Latent tree models and approximate inference in Bayesian networks. Journal of Artificial Intelligence Research, 32:879–900, 2008.
[16] P. Liang, H. Daumé III, and D. Klein. Structure compilation: trading structure for features. In Proc. of the 25th International Conference on Machine Learning, pages 592–599, Helsinki, Finland, 2008. ACM.
[17] D. C. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45(3):503–528, 1989.
[18] D. M. Chickering. The WinMine toolkit. Technical Report MSR-TR-2002-103, Microsoft, Redmond, WA, 2002.
[19] J. Davis and P. Domingos. Bottom-up learning of Markov network structure. In Proc. of the 27th International Conference on Machine Learning, Haifa, Israel, 2010. ACM Press.
[20] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14:462–467, 1968.
3,327 | 4,012 | Using body-anchored priors for identifying actions in
single images
Leonid Karlinsky
Michael Dinerstein
Shimon Ullman
Department of Computer Science
Weizmann Institute of Science
Rehovot 76100, Israel
{leonid.karlinsky, michael.dinerstein, shimon.ullman} @weizmann.ac.il
Abstract
This paper presents an approach to the visual recognition of human actions using
only single images as input. The task is easy for humans but difficult for current
approaches to object recognition, because instances of different actions may be
similar in terms of body pose, and often require detailed examination of relations
between participating objects and body parts in order to be recognized. The proposed approach applies a two-stage interpretation procedure to each training and
test image. The first stage produces accurate detection of the relevant body parts
of the actor, forming a prior for the local evidence needed to be considered for
identifying the action. The second stage extracts features that are anchored to the
detected body parts, and uses these features and their feature-to-part relations in
order to recognize the action. The body anchored priors we propose apply to a
large range of human actions. These priors allow focusing on the relevant regions
and relations, thereby significantly simplifying the learning process and increasing
recognition performance.
1 Introduction
This paper deals with the problem of recognizing transitive actions in single images. A transitive
action is often described by a transitive verb and involves a number of components, or thematic
roles [1], including an actor, a tool, and in some cases a recipient of the action. Simple examples
are drinking from a glass, talking on the phone, eating from a plate with a spoon, or brushing teeth.
Transitive actions are characterized visually by the posture of the actor, the tool she/he is holding,
the type of grasping, and the presence of the action recipient. In many cases, such actions can be
readily identified by human observers from only a single image (see figure 1a). We will consider
below the problem of static action recognition (SAR for short) from a single image, without using
motion information that is exploited by approaches dealing with dynamic action recognition in video
sequences, such as [2]. The problem is of interest first, because in a short observation interval, the
use of motion information for identifying an action (e.g. talking on the phone) may be limited.
Second, as a natural human capacity, it is of interest for both cognitive and brain studies. Several
studies [3, 4, 5, 6] have shown evidence for the presence of SAR related mechanisms in both the
ventral and dorsal areas of the visual cortex, and computational modeling of SAR may shed new
light on these mechanisms. Unlike the more common task of detecting individual objects such as
faces and cars, SAR depends on detecting object configurations. Different actions may involve
the same type of objects (eg. person, phone) but appearing in different configurations (answering,
dialing), sometimes differing in subtle details, making their identification difficult compared with
individual object recognition.
Only a few approaches to date have dealt with the SAR problem. [7] studied the recognition of
sports actions using the pose of the actor. [8] used scene interpretation in terms of objects and
[Figure 1 graphics. Panel (b) legend, listing the action classes: drinking, drinking no cup, eating with spoon, phone talking, phone with bottle, scratching, singing with mike, smoking, teeth brushing, toasting, waving, wearing glasses.]
Figure 1: (a) Examples of similar transitive actions identifiable by humans from single images
(brushing teeth, talking on a cell phone and wearing glasses). (b) Illustration of a single run of the
proposed two-stage approach. In the first stage the parts are detected in the face-hand-elbow order. In the second stage we apply both action learning and action recognition using the configuration
of the detected parts and the features anchored to the hand region; the bar graph on the right shows
relative log-posterior estimates for the different actions.
their relative configuration to distinguish between different sporting events such as badminton and
sailing. [9] recognized static intransitive actions such as walking and jumping based on a human
body pose represented by a variant of the HOG descriptor. [10] discriminated between playing and
not playing musical instruments using a star-like model. The most detailed static schemes to date
[11, 12] recognized static transitive sports actions, such as the tennis forehand and the volleyball
smash. [11] used a full body mask, bag of features for describing scene context, and the detection of
the objects relevant for the action, such as bats, balls, etc., while [12] learned joint models of body
pose and objects specific to each action. [11] used GrabCut [13] to extract the body mask, and both
[11] and [12] performed fully supervised training for the a priori known relevant objects and scenes.
In this paper we consider the task of differentiating between similar types of transitive actions, such
as smoking a cigarette, drinking from a cup, eating from a cup with a spoon, talking on the phone,
etc., given only a single image as input. The similarity between the body poses in such actions
creates a difficulty for approaches that rely on pose analysis [7, 9, 11]. The relevant differences
between similar actions in terms of the actor body configuration can be at a fine level of detail.
Therefore, one cannot rely on a fixed pre-determined number of configuration types [7, 9, 11]; rather,
one needs to be able to make as fine discriminations as required by the task. Objects participating in
different actions may be very small, occupying only a few pixels in a low resolution image (brush,
phone, Fig. 1a). In addition, these objects may be unknown a priori, such as in the natural case when
the learning is weakly supervised, i.e. we know only the action label of the training images, while
the participating objects are not annotated and cannot be independently learned as in [8, 11]. Finally,
the background scene, used by [8, 11] to recognize sports actions and events, is uninformative for
many transitive actions of interest, and cannot be directly utilized.
Since SAR is a version of an object recognition problem, a natural question to ask is whether it
can be solved by directly applying state-of-the-art techniques of object recognition. As shown in
the results section 3, the problem is significantly more difficult for current methods compared with
more standard object recognition applications. The proposed method identifies more accurately the
features and geometric relationships that contribute to correct recognition in this domain, leading to
better recognition. It is further shown that integrating standard object recognition approaches into
the proposed framework significantly improves their results in the SAR domain.
The main contribution of this paper is an approach, employing the so-called body anchored strategy explained below, for recognizing and distinguishing between similar transitive actions in single
images. In both the learning and the test settings, the approach applies a two-stage interpretation to
each (training or test) image. The first stage produces accurate detection and localization of body
parts, and the second then extracts and uses features from locations anchored to body parts. In the
implementation of the first stage, the face is detected first, and its detection is extended to accurately
localize the elbow and the hand of the actor. In the second stage, the relative part locations and the
hand region are analyzed for action related learning and recognition. During training, this allows
the automatic discovery and construction of implicit non-parametric models for different important
aspects of the actions, such as accurate relative part locations, relevant objects, types of grasping,
and part-object configurations. During testing, this allows the approach to focus on image regions,
[Figure 2 graphics: (a) example images with the computed masks and detected parts; (b) the plate-notation graphical model, with nodes including A, B_m, F_m, and f_nm.]
Figure 2: (a) Examples of the computed binary masks (cyan) for searching for elbow location given
the detected hand and face marked by a red-green star and magenta rectangle respectively. The
yellow square marks the detected elbow; (b) Graphical representation (in plate notation) of the
proposed probabilistic model for action recognition (see section 2.2 for details).
features and relations that contain all of the relevant information for recognizing the action. As a
result, we eliminate the need to have a priori models for the objects relevant for the action that were
used in [11, 12]. Focusing in a body-anchored manner on the relevant information not only increases
efficiency, but also considerably enhances recognition results. The approach is illustrated in fig. 1b.
The rest of the paper is organized as follows. Section 2 describes the proposed approach and its
implementation details. Section 3 describes the experimental validation. Summary and discussion
are provided in section 4.
2 Method
As outlined above, the approach proceeds in two main stages. The first stage is body interpretation,
which is by itself a sequential process. First, the person is detected by detecting her/his face. Next,
the face detection is extended to detect the hands and elbows of the person. This is achieved in a
non-parametric manner by following chains of features connecting the face to the part of interest
(hand, elbow), by an extension of [14]. In the second stage, features gathered from the hand region
and the relative locations of the hand, face and elbow, are used to model and recognize the static
action of interest. The first stage of the process, dealing with the face, hand and elbow detection, is
described in section 2.1. The static action modeling and recognition is described in section 2.2 and
additional implementation details are provided in section 2.3.
2.1 Body parts detection
Body parts detection in static images is a challenging problem, which has recently been addressed
by several studies [14, 15, 16, 17, 18]. The most difficult parts to detect are the most flexible parts
of the body - the lower arms and the hands. This is due to large pose and appearance variability
and the small size typical to these parts. In our approach, we have adopted an extension of the
non-parametric method for the detection of parts of deformable objects recently proposed by [14].
This method can operate in two modes. The first mode is used for the independent detection of
sufficiently large and rigid objects and object parts, such as the face. The second mode allows
propagating from some of the parts, which are independently detected, to additional parts, which
are more difficult to detect independently, such as hands and elbows. The method extends the so-called star model by allowing features to vote for the detection target either directly or indirectly,
via features participating in feature-chains going towards the target. In the independent detection
mode, these feature chains may start anywhere in the image, whereas in the propagation mode
these chains must originate from already detected parts. The method employs a non-parametric
generative probabilistic model, which can efficiently learn to detect any part from a collection of
training sequences with marked target (e.g., hand) and source (e.g., face) parts (or only the target
parts in the independent detection mode). The details of this model are described in [14]. In our
approach, the face is detected in the independent detection mode of [14], and the hand and the elbow
are detected by chains-propagation from the face detection (treated as the source part). The method
is trained using a collection of short video sequences, each having the face, the hand and the elbow
marked by three points. The code for the method of [14] was extended to allow restricted detection
of dependent parts, such as hand and elbow. In some cases, the elbow is more difficult to detect than the hand, as it has less structure. For each (training or test) image $I_n$, we therefore constrain the elbow detection by a binary mask of possible elbow locations gathered from training images with a sufficiently similar hand-face offset (within 0.25 face width) to the one detected on $I_n$. Figure 2a shows some examples of the detected faces, hands and elbows together with the elbow masks derived from the detected face-hand offset.
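The mask construction just described can be sketched as follows. This is a minimal illustration under our own assumptions (a pixel-grid mask, Euclidean offset comparison, and elbow positions stored relative to the detected hand); all names are hypothetical except the 0.25-face-width threshold, which is from the text:

```python
import numpy as np

def elbow_mask(hand_face_offset, face_width, train_offsets,
               train_elbow_locs, shape, thresh=0.25):
    """Binary mask of plausible elbow locations for the current image,
    pooled from training images whose hand-face offset is within
    `thresh` face widths of the offset detected here.
    train_elbow_locs: elbow positions relative to the detected hand (pixels)."""
    mask = np.zeros(shape, dtype=bool)
    # training images with a sufficiently similar hand-face configuration
    similar = np.linalg.norm(train_offsets - hand_face_offset, axis=1) \
              <= thresh * face_width
    for r, c in np.round(train_elbow_locs[similar]).astype(int):
        if 0 <= r < shape[0] and 0 <= c < shape[1]:
            mask[r, c] = True              # mark this candidate elbow location
    return mask
```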
2.2 Modeling and recognition of static actions
Given an image $I_n$ (training or test), we first introduce the following notation (lower index refers to the image, upper indices to parts). Denote the instance of the action contained in $I_n$ by $a_n$ (known for training and unknown for test images). Denote the detected locations of the face by $x_n^f$, the hand by $x_n^h$, and the elbow by $x_n^e$. Also denote the width of the detected face by $s_n$. Throughout the paper, we will express all size and distance parameters in $s_n$ units, in order to eliminate the dependence on the scale of the person performing the action. For many transitive actions most of the discriminating information about the action resides in regions around specific body parts [19]. Here we focus on hand regions for hand-related actions, but for other actions their respective parts and part regions can be learned and used. We represent the information from the hand region by a set of rectangular patch features extracted from this region. All features are taken from a circular region with a radius $0.75 \cdot s_n$ around the hand location $x_n^h$. From this region we extract $s_n \times s_n$ pixel rectangular patch features centered at all Canny edge points, sub-sampled with a $0.2 \cdot s_n$ pixel grid. Denote the set of patch features extracted from image $I_n$ by $f_n^m = \left[\mathrm{SIFT}_n^m,\ \tfrac{1}{s_n}\left(x_n^m - x_n^f\right)\right]$, where $\mathrm{SIFT}_n^m$ is the SIFT descriptor [20] of the $m$-th feature, $x_n^m$ is its image location, $\tfrac{1}{s_n}\left(x_n^m - x_n^f\right)$ is the offset (in $s_n$ units) between the feature and the face, and square brackets denote a row vector. The index $m$ enumerates the features in arbitrary order for each image. Denote by $k_n$ the number of patch features extracted from image $I_n$.
The probabilistic generative model explaining all the gathered data is defined as follows. The observed variables of the model are: the face-hand offset $O^{FH} = o_n^{fh} \triangleq x_n^h - x_n^f$, the hand-elbow offset $O^{HE} = o_n^{he} \triangleq x_n^e - x_n^h$, and the patch features $\{F^m = f_n^m\}$. The unobserved variables of the model are the action label variable $A$, and the set of binary variables $\{B^m\}$, one for each extracted patch feature. The meaning of $B^m = 1$ is that the $m$-th patch feature was generated by the action $A$, while the meaning of $B^m = 0$ is that the $m$-th patch feature was generated independently of $A$. Throughout the paper we will use a shorthand form of variable assignments, e.g., $P(o_n^{fh}, o_n^{he})$ instead of $P(O^{FH} = o_n^{fh}, O^{HE} = o_n^{he})$. We define the joint distribution of the model that generates the data for image $I_n$ as:
$$P\left(A, \{B^m\}, o_n^{fh}, o_n^{he}, \{f_n^m\}\right) = P(A) \cdot P\left(o_n^{fh}, o_n^{he}\right) \cdot \prod_{m=1}^{k_n} P(B^m) \cdot P\left(f_n^m \mid A, o_n^{fh}, o_n^{he}, B^m\right) \tag{1}$$
Here $P(A)$ is a prior action distribution, which we take to be uniform, and:
$$P\left(f_n^m \mid A, o_n^{fh}, o_n^{he}, B^m\right) = \begin{cases} P\left(f_n^m \mid A, o_n^{fh}, o_n^{he}\right) & \text{if } B^m = 1 \\ P\left(f_n^m \mid o_n^{fh}, o_n^{he}\right) & \text{otherwise} \end{cases} \tag{2}$$
Here $P(B^m = 1) = \alpha$ is the prior probability for the $m$-th feature to be generated from the action, and we assume it maintains the following relation: $P(B^m = 1) = \alpha \ll (1 - \alpha) = P(B^m = 0)$, reflecting the fact that most patch features are not related to the action. Figure 2b shows the graphical representation of the proposed model.
As shown in the Appendix A, in order to find the action label assignment to $A$ that maximizes the posterior of the proposed probabilistic generative model, it is sufficient to compute:
$$\arg\max_A \log P\left(A \mid o_n^{fh}, o_n^{he}, \{f_n^m\}\right) = \arg\max_A \sum_{m=1}^{k_n} \frac{P\left(f_n^m, A, o_n^{fh}, o_n^{he}\right)}{P\left(f_n^m, o_n^{fh}, o_n^{he}\right)} \tag{3}$$
As can be seen from eq. 3, and as shown in the Appendix A, the inference is independent of the exact value of $\alpha$ (as long as $\alpha \ll (1 - \alpha)$). In section 2.3 we explain how to empirically estimate the probabilities $P\left(f_n^m, A, o_n^{fh}, o_n^{he}\right)$ and $P\left(f_n^m, o_n^{fh}, o_n^{he}\right)$ that are necessary to compute (3).
[Figure 3 image panels, one per action: 1: drinking; 2: drinking no cup; 3: eating with spoon; 4: phone talking; 5: phone talking with bottle; 6: scratching head; 7: singing with mike; 8: smoking; 9: teeth brushing; 10: toasting; 11: waving; 12: wearing glasses]
Figure 3: Examples of similar static transitive action recognition on our 12-actions / 10-people ("12/10") dataset. On all examples, the detected face, hand and elbow are shown by cyan circle,
(?12/10?) dataset. On all examples, the detected face, hand and elbow are shown by cyan circle,
red-green star and yellow square, respectively. At the right hand side of each image, the bar graph
shows the estimated log-posterior of the action variable A. Each example shows a zoomed-in ROI
of the action. Additional examples are provided in supplementary material.
2.3 Model probabilities
The model probabilities are estimated from the training data using Kernel Density Estimation (KDE) [21]. Assume we are given a set of samples $\{Y_1, \ldots, Y_R\}$ from some distribution of interest. Given a new sample $Y$ from the same distribution, a symmetric Gaussian KDE estimate $P(Y)$ for the probability of $Y$ can be approximated as
$$P(Y) \approx \frac{1}{R} \sum_{Y_r \in NN(Y)} \exp\left(-0.5 \cdot \|Y - Y_r\|^2 / \sigma^2\right)$$
where $NN(Y)$ is the set of nearest neighbors of $Y$ within the given set of samples. When the number of samples $R$ is large, brute-force search for the $NN(Y)$ set becomes infeasible. Therefore, we use Approximate Nearest Neighbor (ANN) search (using the implementation of [22]) to compute the KDE.
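A minimal sketch of this estimate (brute-force neighbor search standing in for the ANN library of [22]; the function and variable names are ours):

```python
import numpy as np

def kde_estimate(Y, samples, K=25, sigma=0.2):
    """Gaussian KDE restricted to the K nearest samples:
    P(Y) ~ (1/R) * sum_{Yr in NN(Y)} exp(-0.5 ||Y - Yr||^2 / sigma^2)."""
    d2 = np.sum((samples - Y) ** 2, axis=1)   # squared distances to all Yr
    nn = np.argsort(d2)[:K]                   # brute-force K nearest neighbors
    return np.exp(-0.5 * d2[nn] / sigma**2).sum() / len(samples)
```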
To compute $P\left(f_n^m, A = a, o_n^{fh}, o_n^{he}\right)$ for the $m$-th patch feature $f_n^m = \left[\mathrm{SIFT}_n^m,\ \tfrac{1}{s_n}\left(x_n^m - x_n^f\right)\right]$ in test image $I_n$, we search for the nearest neighbors of the row vector $\left[f_n^m,\ \tfrac{1}{s_n} o_n^{fh},\ \tfrac{1}{s_n} o_n^{he}\right]$ in a set of row vectors $\left\{\left[f_t^r,\ \tfrac{1}{s_t} o_t^{fh},\ \tfrac{1}{s_t} o_t^{he}\right] \;\middle|\; \text{all } f_t^r \text{ in training images } I_t \text{ s.t. } a_t = a\right\}$ using an ANN query. Recall that $s_n$ was defined as the width of the detected face in image $I_n$, and hence $1/s_n$ is the scale factor that we use for the offsets in the query. The query returns a set of $K$ nearest neighbors, and the Gaussian KDE with $\sigma = 0.2$ is applied to this set to compute the estimated probability $P\left(f_n^m, A = a, o_n^{fh}, o_n^{he}\right)$. In our experiments we found that it is sufficient to use $K = 25$. The $P\left(f_n^m, o_n^{fh}, o_n^{he}\right)$ is computed as: $P\left(f_n^m, o_n^{fh}, o_n^{he}\right) = \sum_a P\left(f_n^m, A = a, o_n^{fh}, o_n^{he}\right)$.
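Putting the pieces together, the per-action scores of eq. (3) can be sketched as below. The per-action sample matrices and the division guard are our own assumptions, and `kde_estimate` is the helper sketched above:

```python
import numpy as np

def action_scores(query_rows, class_rows, K=25, sigma=0.2):
    """query_rows: one row [f, o_fh/s_n, o_he/s_n] per patch feature of the
    test image; class_rows[a]: the analogous rows gathered from training
    images of action a. Returns the per-action sums of eq. (3)."""
    actions = sorted(class_rows)
    # joint[a, m] ~ P(f_m, A=a, o) via KDE over the K nearest neighbors
    joint = np.array([[kde_estimate(q, class_rows[a], K, sigma)
                       for q in query_rows] for a in actions])
    marginal = joint.sum(axis=0)              # P(f_m, o) = sum_a joint[a, m]
    ratios = joint / np.maximum(marginal, 1e-300)
    return dict(zip(actions, ratios.sum(axis=1)))
```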
3 Results
To test our approach, we have applied it to two static transitive action recognition datasets. The first dataset, denoted the "12/10" dataset, was created by us and contained 12 similar transitive actions performed by 10 different people, appearing against different natural backgrounds. The second dataset was compiled by [11] for dynamic transitive action recognition. It contains 9 different people performing 6 general transitive actions. Although originally designed and used by Gupta et al in [11] for dynamic action recognition, we transformed it into a static action recognition dataset by assigning action labels to frames actually containing the actions and treating each such frame as a separate static instance of the action. Since successive frames are not independent, the experiments conducted on both datasets were all performed in a person-leave-one-out manner, meaning that during the training we completely excluded all the frames of the tested person. Section 3.1 provides more details on the relevant parts (face, hand, and elbow) detection in our experiments, complementing section 2.1. Sections 3.2 and 3.3 describe the "12/10" and the Gupta et al datasets in more detail
[Figure 4a axis legend: 1: drinking; 2: drinking no cup; 3: eating with spoon; 4: phone; 5: phone with bottle; 6: scratching; 7: singing with mike; 8: smoking; 9: teeth brushing; 10: toasting; 11: waving; 12: wearing glasses; 13: no action]
Figure 4: (a) Average static action confusion matrix obtained by leave-one-out cross validation of the proposed method on the "12/10" dataset; (b) Some interesting failures (red boxes); on the right of each failure there is a successfully recognized instance of an action with which the method was confused. The meaning of the bar-graph is as in figure 3. Additional failure examples are provided in the supplementary material.
together with the respective static action recognition experiments performed on them. All experiments were performed on grayscale versions of the images. Figures 3 and 6a illustrate the two tested datasets, also showing examples of successfully recognized static transitive actions, and figure 4b shows some interesting failures.
3.1 Part detection details
Our approach is based on prior part detection and its performance is bounded from above by the part
detection performance. The detection rates of state-of-the-art methods for localizing body parts in a
general setting are currently a significant limiting factor. For example, [14], which we use here, obtains
an average of 66% correct hand detection (comparing favorably to other state-of-the-art methods)
in the general setting experiments, when both the person and the background are unseen during
part detector training. However, as shown in [14], an average 85% part detection performance can be achieved in more restricted settings. One such setting (denoted self-trained) is when an independent short part detection training period of several seconds is allowed for each test person, as, e.g., in human-computer interaction applications. Another setting (denoted environment-trained) is
when the environment in which people perform the action is fixed, e.g. in applications where we can
train part detectors on some people, and then apply them to new unseen people, but appearing in the
same environment. As demonstrated in the methods comparison experiment in section 3.2, it appears
that part detection is an essential component of solving SAR. Current performance in automatic body
parts detection is well below human performance, but the area is now a focus of active research
which is likely to reduce this current performance gap. In our experiments we adopted the more constrained (but still useful) part detection settings described above: the self-trained setting for the "12/10" dataset (each person appearing in a different environment) and the environment-trained setting for the Gupta et al. dataset (all people appearing in the same general environment).
In the "12/10" dataset experiments, the part detection models for the face, hand and elbow described in section 2.1 were trained using 10 additional short movies, one for each person, in which the
actors randomly moved their hands. On these 10 movies, face, hand and elbow locations were
manually marked. The learned models were then applied to detect their respective parts on the 120
movie sequences of our dataset. The hand detection performance was 94.1% (manually evaluated
on a subset of frames). Qualitative examples are provided in the supplementary material. The part
detection for the Gupta et al. dataset was performed in a person-leave-one-out manner. For each
person the parts (face, hand) were detected using models trained on other people. The mean hand
detection performance was 88% (manually evaluated). Since most people in the dataset wear very
dark clothing, in many cases the elbow is invisible and therefore it was not used in this experiment
(it is straightforward to remove it from the model by assigning a fixed value to the hand-elbow offset
in both training and test).
3.2 The "12/10" dataset experiments
The "12/10" dataset consists of 120 videos of 10 people performing 12 similar transitive actions, namely drinking, talking on the phone, scratching, toasting, waving, brushing teeth, smoking, wearing glasses, eating with a spoon, singing to a microphone, and also drinking without a cup and making a phone call with a bottle.

[Figure 5a panels: per-action ROC plots for drinking; drinking without cup; eating with spoon; phone talking; phone talking with bottle; scratching; singing with mike; smoking; teeth brushing; toasting; waving; wearing glasses]

Figure 5b table (average recognition accuracy):

Method / experiment        | Full person bounding box (no anchoring) | Hand anchored region
SAR method (section 2.2)   | 24.6 ± 12.5%                            | 58.5 ± 9.1%
[23]                       | 8.8 ± 1%                                | 37.7 ± 2.7%
BoW SVM                    | 16.6 ± 6.6%                             | 45.7 ± 9.1%

Figure 5: (a) ROC based comparison with the state-of-the-art method of object detection [23] applied to recognize static actions. For each action, the blue line is the average ROC of [23], and the magenta line is the average ROC of the proposed method. (b) Comparing state-of-the-art object recognition methods on the SAR task with and without "body anchoring".
All people except one were filmed against different backgrounds.
All backgrounds were natural indoor / outdoor scenes containing both clutter and people passing
by. The drinking and toasting actions were performed with 2-4 different tools, and phone talking
was performed with mobile and regular phones. Overall, the dataset contains 44,522 frames. Not
all frames contain actions of interest (e.g. in drinking there are frames where the person reaches to
/ puts down a cup). The ground-truth action labels were manually assigned to the relevant frames.
Each of the resulting 23,277 relevant action frames was considered a separate instance of an action.
The remaining frames were labeled "no-action". The average recognition accuracy was 59.3 ± 8.6% for the 13 actions (including no-action) and 58.5 ± 9.1% for the 12 main actions (excluding no-action).
Figure 4a shows the confusion matrix for 13 actions averaged over the 10 test people.
As mentioned in the introduction, one of the important questions we wish to answer is the need for
the detection of the fine details of the person, such as the accurate hand, and elbow locations, in
order to recognize the action. To test this issue, we have compared the results of three approaches:
deformable parts model [23], Bag-of-Words (BoW) SVM ([24]), and our approach described in
section 2.2, in two settings. In the first setting the methods were trained to distinguish between the
actions based on a bounding box of the entire person (i.e. without focusing on the fine details such
as provided by the hand and elbow detection). In the second, body anchored setting, the methods
were applied to the hand anchored regions (small regions around the detected hand as described in
section 2.2). The method of [23] is one of the state-of-the-art object recognition schemes, achieving
top scores on recent PASCAL-VOC competitions [25], and BoW SVM is a popular method in
the literature also obtaining state-of-the art results for some datasets. Figure 5b shows the results
obtained by the three methods in the two settings. Figure 5a provides ROC-based comparison of the
results of our full approach with the ones obtained by [23]. The obtained results strongly suggest that
body anchoring is a powerful prior for the task of distinguishing between similar transitive actions.
3.3 Gupta et al dataset experiments
This dataset was compiled by [11]. It consists of 46 movie sequences of 9 different people performing 6 distinct transitive actions: drinking, spraying, answering the phone, making a call, pouring from a pitcher and lighting a flashlight. In each movie, we manually assigned action labels to all the frames actually containing the action, labeling the remainder of the frames "no-action". Since the distinction between "making a call" and "answering phone" was in the presence or absence of the "dialing" action in the respective video, we re-labeled the frames of these actions into "phone talking" and "dialing". The action recognition performance was measured using person-leave-one-out cross-validation, in the same manner as for our dataset. The average accuracy over the 7 static actions (including no-action) was 82 ± 11.5%, and was 86 ± 14.4% excluding no-action. The average 7-action confusion matrix is shown in figure 6b. The presented results are for static action recognition, and hence are not directly comparable with the results obtained on this dataset for the dynamic action recognition by [11], who obtained 93.34% recognition (out of the 46 video sequences and not frames) using both the temporal information (part tracks, etc.) and a priori models for the participating objects (cup, pitcher, flashlight, spray bottle and phone).
[Figure 6 legend: 1: dialing; 2: drinking; 3: flashlight; 4: phone talking; 5: pouring; 6: spraying; 7: no action]
Figure 6: (a) Some successfully identified action examples from the dataset of [11]; (b) mean static action confusion matrix for leave-one-out cross validation experiments on the Gupta et al. dataset.
4 Discussion
We have presented a method for recognizing transitive actions from single images. This task is
performed naturally and efficiently by humans, but performance by current recognition methods is
severely limited. The proposed method can successfully handle both similar transitive actions (the "12/10" dataset) and general transitive actions (the Gupta et al dataset). The method uses priors
that focus on body part anchored features and relations. It has been shown that most common verbs
are associated with specific body parts [19]; the actions considered here were all hand-related in
this sense. The detection of hands and elbows therefore provided useful priors in terms of regions
and properties likely to contribute to the SAR task in this setting. The proposed approach can be
generalized to deal with other actions by detecting all the body parts associated with common verbs,
automatically detecting the relevant parts for each specific action during training, and finally applying the body anchored SAR model described in section 2.2. The comparisons show that without
using the body anchored priors there is a highly significant drop in SAR performance even when
employing state-of-the-art methods for object recognition. The main reasons for this drop are the
fine details and local nature of the relevant evidence for distinguishing between actions, the huge
number of possible locations, and detailed features that need to be searched if body-anchored priors
are not used. Directions for future studies therefore include a more complete and accurate body parts
detection and their use in providing useful priors for static action recognition and interpretation.
A Log-posterior derivation
Here we derive the equivalent form of the log-posterior (eq. 3) of the proposed probabilistic action recognition model defined in eq. 1. In (4), the symbol $\cong$ means equivalent in terms of maximizing over the values of the action variable $A$.
$$
\begin{aligned}
\log P\left(A \mid o_n^{fh}, o_n^{he}, \{f_n^m\}\right)
&\cong \log \textstyle\sum_{\{B^m\}} P\left(A, \{B^m\}, o_n^{fh}, o_n^{he}, \{f_n^m\}\right) \\
&= \log \left[ P(A) \cdot P\left(o_n^{fh}, o_n^{he}\right) \cdot \prod_{m=1}^{k_n} \left[ \sum_{B^m=0}^{1} P(B^m) \cdot P\left(f_n^m \mid A, o_n^{fh}, o_n^{he}, B^m\right) \right] \right] \\
&\cong \sum_{m=1}^{k_n} \log \left[ \sum_{B^m=0}^{1} P(B^m) \cdot P\left(f_n^m \mid A, o_n^{fh}, o_n^{he}, B^m\right) \right] \\
&= \sum_{m=1}^{k_n} \log \left[ \alpha \cdot P\left(f_n^m \mid A, o_n^{fh}, o_n^{he}\right) + (1-\alpha) \cdot P\left(f_n^m \mid o_n^{fh}, o_n^{he}\right) \right] \\
&\cong \sum_{m=1}^{k_n} \log \left[ 1 + \frac{P\left(f_n^m \mid A, o_n^{fh}, o_n^{he}\right)}{\beta \cdot P\left(f_n^m \mid o_n^{fh}, o_n^{he}\right)} \right] + \sum_{m=1}^{k_n} \log \left[ (1-\alpha) \cdot P\left(f_n^m \mid o_n^{fh}, o_n^{he}\right) \right] \\
&\cong \sum_{m=1}^{k_n} \log \left[ 1 + \frac{P\left(f_n^m \mid A, o_n^{fh}, o_n^{he}\right)}{\beta \cdot P\left(f_n^m \mid o_n^{fh}, o_n^{he}\right)} \right]
\overset{(*)}{\cong} \sum_{m=1}^{k_n} \frac{P\left(f_n^m \mid A, o_n^{fh}, o_n^{he}\right)}{\beta \cdot P\left(f_n^m \mid o_n^{fh}, o_n^{he}\right)}
\cong \sum_{m=1}^{k_n} \frac{P\left(f_n^m, A, o_n^{fh}, o_n^{he}\right)}{P\left(f_n^m, o_n^{fh}, o_n^{he}\right)}
\end{aligned} \tag{4}
$$
In eq. 4, $\beta = (1-\alpha)/\alpha$, the term $\sum_{m=1}^{k_n} \log\left[(1-\alpha) \cdot P\left(f_n^m \mid o_n^{fh}, o_n^{he}\right)\right]$ is independent of the action (constant for a given image $I_n$) and thus can be dropped, and $(*)$ follows from $\log(1+z) \approx z$ for $z \ll 1$ and from $\beta$ being large due to our assumption that $\alpha \ll (1-\alpha)$.
References
[1] Jackendoff, R.: Semantic interpretation in generative grammar. The MIT Press (1972)
[2] Laptev, I., Marszalek, M., Schmid, C., Rozenfeld, B.: Learning realistic human actions from
movies. In: CVPR. (2008) 1-8
[3] Iacoboni, M., Mazziotta, J.C.: Mirror neuron system: basic findings and clinical applications.
Annals of Neurology (2007)
[4] Kim, J., Biederman, I.: Where do objects become scenes? Journal of Vision (2009)
[5] Helbig, H., Graf, M., Kiefer, M.: The role of action representations in visual object recognition.
Experimental Brain Research (2006)
[6] Sakata, H., Taira, M., Kusunoki, M., Murata, A., Tanaka, Y., Tsutsui, K.: Neural coding of
3d features of objects for hand action in the parietal cortex of the monkey. Philos Trans R Soc
Lond B Biol Sci. (1998)
[7] Wang, Y., Jiang, H., Drew, M.S., Li, Z.N., Mori, G.: Unsupervised discovery of action classes. In: CVPR. (2006)
[8] Li, L., Fei-Fei, L.: What, where and who? classifying events by scene and object recognition.
In: ICCV. (2007) 1-8
[9] Thurau, C., Hlavac, V.: Pose primitive based human action recognition in videos or still
images. In: CVPR. (2008) 1-8
[10] Yao, B., Fei-Fei, L.: Grouplet: A structured image representation for recognizing human and
object interactions. CVPR (2010)
[11] Gupta, A., Kembhavi, A., Davis, L.: Observing human-object interactions: Using spatial and
functional compatibility for recognition. PAMI (2009)
[12] Yao, B., Fei-Fei, L.: Modeling mutual context of object and human pose in human-object
interaction activities. CVPR (2010)
[13] Blake, A., Rother, C., Brown, M., Perez, P., Torr, P.: Interactive image segmentation using an
adaptive gmmrf model. ECCV (2004)
[14] Karlinsky, L., Dinerstein, M., Harari, D., Ullman, S.: The chains model for detecting parts by
their context. CVPR (2010)
[15] Ferrari, V., Marin, M., Zisserman, A.: Progressive search space reduction for human pose
estimation. CVPR (2008)
[16] Andriluka, M., Roth, S., Schiele, B.: Pictorial structures revisited: People detection and
articulated pose estimation. CVPR (2009)
[17] Felzenszwalb, P., Huttenlocher, D.: Pictorial structures for object recognition. IJCV 61 (2005) 55-79
[18] Ramanan, D., Forsyth, D.A., Barnard, K.: Building models of animals from video. PAMI
(2006)
[19] Maouene, J., Hidaka, S., Smith, L.B.: Body parts and early-learned verbs. Cognitive Science
(2008)
[20] Lowe, D.: Distinctive image features from scale-invariant keypoints. IJCV (2004)
[21] Duda, R., Hart, P.: Pattern classification and scene analysis. Wiley (1973)
[22] Mount, D., Arya, S.: Ann: A library for approximate nearest neighbor searching. CGC 2nd
Annual Workshop on Comp. Geometry (1997)
[23] Felzenszwalb, P., McAllester, D., Ramanan, D.: A discriminatively trained, multiscale, deformable part model. CVPR (2008) 1-8
[24] Zhang, J., Marszalek, M., Lazebnik, S., Schmid, C.: Local features and kernels for classification of texture and object categories: A comprehensive study. IJCV (2007)
[25] Everingham, M., Van Gool, L., Williams, C., Winn, J., Zisserman, A.: The pascal visual object classes challenge 2007 results. http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2007
(2007)
Policy gradients in linearly-solvable MDPs
Emanuel Todorov
Applied Mathematics and Computer Science & Engineering
University of Washington
[email protected]
Abstract
We present policy gradient results within the framework of linearly-solvable
MDPs. For the first time, compatible function approximators and natural policy gradients are obtained by estimating the cost-to-go function, rather than the
(much larger) state-action advantage function as is necessary in traditional MDPs.
We also develop the first compatible function approximators and natural policy
gradients for continuous-time stochastic systems.
1 Introduction
Policy gradient methods [18] in Reinforcement Learning have gained popularity, due to the guaranteed improvement in control performance over iterations (which is often lacking in approximate
policy or value iteration) as well as the discovery of more efficient gradient estimation methods.
In particular it has been shown that one can replace the true advantage function with a compatible
function approximator without affecting the gradient [8, 14], and that a natural policy gradient (with
respect to Fisher information) can be computed [2, 5, 11].
The goal of this paper is to apply policy gradient ideas to the linearly-solvable MDPs (or LMDPs) we have recently developed [15, 16], as well as to a class of continuous stochastic systems with similar properties [4, 7, 16]. This framework has already produced a number of unique results, such as linear Bellman equations, general estimation-control dualities, compositionality of optimal control laws, and path-integral methods for optimal control. The present results with regard to policy gradients are also unique, as summarized in the Abstract. While the contribution is mainly theoretical and scaling to large problems is left for future work, we provide simulations demonstrating rapid convergence. The paper is organized in two sections, treating discrete and continuous problems.
2 Discrete problems
Since a number of papers on LMDPs have already been published, we will not repeat the general
development and motivation here, but instead only summarize the background needed for the present
paper. We will then develop the new results regarding policy gradients.
2.1 Background on LMDPs
An LMDP is defined by a state cost $q(x)$ over a (discrete for now) state space $\mathcal{X}$, and a transition probability density $p(x'|x)$ corresponding to the notion of passive dynamics. In this paper we focus on infinite-horizon average-cost problems where $p(x'|x)$ is assumed to be ergodic, i.e. it has a unique stationary density. The admissible "actions" are all transition probability densities $u(x'|x)$ which are ergodic and satisfy $u(x'|x) = 0$ whenever $p(x'|x) = 0$. The cost function is
$$\ell(x, u(\cdot|x)) = q(x) + \mathrm{KL}\left(u(\cdot|x)\, \|\, p(\cdot|x)\right) \tag{1}$$
Thus the controller is free to modify the default/passive dynamics in any way it wishes, but incurs a control cost related to the amount of modification.
The average cost $c$ and differential cost-to-go $v(x)$ for given $u(x'|x)$ satisfy the Bellman equation
$$c + v(x) = q(x) + \sum_{x'} u(x'|x) \left[\log\frac{u(x'|x)}{p(x'|x)} + v(x')\right] \tag{2}$$
where $v(x)$ is defined up to a constant. The optimal $c^*$ and $v^*(x)$ can be shown to satisfy
$$c^* + v^*(x) = q(x) - \log \sum_{x'} p(x'|x) \exp\left(-v^*(x')\right) \tag{3}$$
and the optimal $u^*(x'|x)$ can be found in closed form given $v^*(x)$:
$$u^*(x'|x) = \frac{p(x'|x) \exp\left(-v^*(x')\right)}{\sum_{\bar{x}} p(\bar{x}|x) \exp\left(-v^*(\bar{x})\right)} \tag{4}$$
Exponentiating equation (3) makes it linear in $\exp(-v^*(x))$, although this will not be used here.
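For a small discrete LMDP these equations can be solved directly by exploiting exactly that linearity; a sketch (the random problem instance is purely illustrative):

```python
import numpy as np

def solve_lmdp(P, q, iters=2000):
    """P[x, x']: ergodic passive dynamics, q: state costs.
    Exponentiating (3) gives exp(-c*) z = diag(exp(-q)) P z, z = exp(-v*)."""
    M = np.diag(np.exp(-q)) @ P
    z = np.ones(len(q))
    for _ in range(iters):                # power iteration for the
        z = M @ z                         # principal eigenvector
        z /= z.sum()                      # v* is defined up to a constant
    c_star = -np.log((M @ z)[0] / z[0])   # principal eigenvalue is exp(-c*)
    U = P * z                             # optimal policy, eq. (4)
    U /= U.sum(axis=1, keepdims=True)
    return c_star, -np.log(z), U

rng = np.random.default_rng(0)
P = rng.random((5, 5)); P /= P.sum(axis=1, keepdims=True)
c, v, U = solve_lmdp(P, rng.random(5))
```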
2.2 Policy gradient for a general parameterization
Consider a parameterization $u(x'|x; w)$ which is valid in the sense that it satisfies the above conditions and $\nabla_w u \triangleq \partial u / \partial w$ exists for all $w \in \mathbb{R}^k$. Let $\mu(x; w)$ be the corresponding stationary density. We will also need the pair-wise density $\mu(x, x'; w) = \mu(x; w)\, u(x'|x; w)$. To avoid notational clutter we will suppress the dependence on $w$ in most of the paper; keep in mind that all quantities that depend on $u$ are functions of $w$.
Our objective here is to compute $\nabla_w c$. This is done by differentiating the Bellman equation (2) and following the template from [14]. The result (see Supplement) is given by
Theorem 1. The LMDP policy gradient for any valid parameterization is
$$\nabla_w c = \sum_x \mu(x) \sum_{x'} \nabla_w u(x'|x) \left[\log\frac{u(x'|x)}{p(x'|x)} + v(x')\right] \tag{5}$$
Let us now compare (5) to the policy gradient in traditional MDPs [14], which is
$$\nabla_w c = \sum_x \mu(x) \sum_a \nabla_w \pi(a|x)\, Q(x, a) \tag{6}$$
Here $\pi(a|x)$ is a stochastic policy over actions (parameterized by $w$) and $Q(x, a)$ is the corresponding state-action cost-to-go. The general form of (5) and (6) is similar, however the term $\log(u/p) + v$ in (5) cannot be interpreted as a $Q$-function. Indeed it is not clear what a $Q$-function means in the LMDP setting. On the other hand, while in traditional MDPs one has to estimate $Q$ (or rather the advantage function) in order to compute the policy gradient, it will turn out that in LMDPs it is sufficient to estimate $v$.
2.3 A suitable policy parameterization
The relation (4) between the optimal policy $u^*$ and the optimal cost-to-go $v^*$ suggests parameterizing $u$ as a $p$-weighted Gibbs distribution. Since linear function approximators have proven very successful, we will use an energy function (for the Gibbs distribution) which is linear in $w$:
$$u(x'|x; w) \triangleq \frac{p(x'|x) \exp\left(-w^T f(x')\right)}{\sum_{\bar{x}} p(\bar{x}|x) \exp\left(-w^T f(\bar{x})\right)} \tag{7}$$
Here $f(x) \in \mathbb{R}^k$ is a vector of features. One can verify that (7) is a valid parameterization. We will also need the $u$-expectation operator
$$E[h](x) \triangleq \sum_{x'} u(x'|x)\, h(x') \tag{8}$$
defined for both scalar and vector functions over $\mathcal{X}$.
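In matrix form, for a discrete problem, the parameterization (7) and the operator (8) are one-liners; a sketch with our own variable names:

```python
import numpy as np

def policy(P, F, w):
    """u(x'|x; w) of eq. (7); P[x, x'] passive dynamics, F[x, :] = f(x)."""
    U = P * np.exp(-F @ w)                 # p(x'|x) exp(-w^T f(x'))
    return U / U.sum(axis=1, keepdims=True)

def expect(U, H):
    """E[h](x) of eq. (8); works for scalar h (a vector) or vector h (rows of H)."""
    return U @ H
```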
Theorem 2. The LMDP policy gradient for parameterization (7) is
$$\nabla_w c = \sum_{x,x'} \mu(x, x') \left(E[f](x) - f(x')\right) \left(v(x') - w^T f(x')\right) \tag{9}$$
As expected from (4), we see that the energy function $w^T f(x)$ and the cost-to-go $v(x)$ are related. Indeed if they are equal the gradient vanishes (the converse is not true).
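For small problems, (9) can be evaluated exactly; a sketch (computing $\mu$ by power iteration on the left is our implementation choice):

```python
import numpy as np

def stationary(U, iters=2000):
    """Stationary density of the controlled chain U."""
    mu = np.ones(U.shape[0]) / U.shape[0]
    for _ in range(iters):
        mu = mu @ U                        # left eigenvector of U
    return mu / mu.sum()

def policy_gradient(U, F, v, w):
    """Eq. (9): sum_{x,x'} mu(x,x') (E[f](x) - f(x')) (v(x') - w^T f(x'))."""
    mu, Ef = stationary(U), U @ F
    grad = np.zeros_like(w, dtype=float)
    for x in range(U.shape[0]):
        for xp in range(U.shape[0]):
            grad += mu[x] * U[x, xp] * (Ef[x] - F[xp]) * (v[xp] - F[xp] @ w)
    return grad
```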
2.4 Compatible cost-to-go function approximation
One of the more remarkable aspects of policy gradient results [8, 14] in traditional MDPs is that,
when the true ? function is replaced with a compatible approximation satisfying certain conditions,
the gradient remains unchanged. Key to obtaining such results is making sure that the approximation
error is orthogonal to the remaining terms in the expression for the policy gradient. Our goal in this
section is to construct a compatible function approximator for LMDPs. The procedure is somewhat
elaborate and unusual, so we provide the derivation before stating the result in Theorem 3 below.
Given the form of (9), it makes sense to approximate $v(x)$ as a linear combination of the same features $f(x)$ used to represent the energy function: $\hat{v}(x; r) \triangleq r^T f(x)$. Let us also define the approximation error $e_r(x) \triangleq v(x) - \hat{v}(x; r)$. If the policy gradient $\nabla_w c$ is to remain unchanged when $v$ is replaced with $\hat{v}$ in (9), the following quantity must be zero:
$$d(r) \triangleq \sum_{x,x'} \mu(x, x') \left(E[f](x) - f(x')\right) e_r(x') \tag{10}$$
Expanding (10) and using the stationarity of $\mu$, we can simplify $d$ as
$$d(r) = \sum_x \mu(x) \left(E[f](x)\, E[e_r](x) - f(x)\, e_r(x)\right) \tag{11}$$
One can also incorporate an $x$-dependent baseline in (9), such as $v(x)$ which is often used in traditional MDPs. However the baseline vanishes after the simplification, and the result is again (11).
Now we encounter a complication. Suppose we were to fit $\hat{v}$ to $v$ in a least-squares sense, i.e. minimize the squared error weighted by $\mu$. Denote the resulting weight vector $r_{LS}$:
$$r_{LS} \triangleq \arg\min_r \sum_x \mu(x) \left(v(x) - r^T f(x)\right)^2 \tag{12}$$
This is arguably the best fit one can hope for. The error $e_r$ is now orthogonal to the features $f$, thus for $r = r_{LS}$ the second term in (11) vanishes, but the first term does not. Indeed we have verified numerically (on randomly-generated LMDPs) that $d(r_{LS}) \neq 0$.
If the best fit is not good enough, what are we to do? Recall that we do not actually need a good fit, but rather a vector $r$ such that $d(r) = 0$. Since $d(r)$ and $r$ are linearly related and have the same dimensionality, we can directly solve this equation for $r$. Replacing $e_r(x)$ with $v(x) - r^T f(x)$ and using the fact that $E$ is a linear operator, we have $d(r) = Mr - k$ where
$$M \triangleq \sum_x \mu(x) \left(f(x)\, f(x)^T - E[f](x)\, E[f](x)^T\right)$$
$$k \triangleq \sum_x \mu(x) \left(f(x)\, v(x) - E[f](x)\, E[v](x)\right) \tag{13}$$
We are not done yet because $k$ still depends on $v$. The goal now is to approximate $v$ in such a way that $k$ remains unchanged. To this end we use (2) and express $E[v]$ in terms of $v$:
$$c + v(x) - \ell(x) = E[v](x) \tag{14}$$
Here $\ell(x)$ is shortcut notation for $\ell(x, u(\cdot|x; w))$. Thus the vector $k$ becomes
$$k = \sum_x \mu(x) \left(g(x)\, v(x) + E[f](x) \left(\ell(x) - c\right)\right) \tag{15}$$
where the policy-specific auxiliary features $g(x)$ are related to the original features $f(x)$ as
$$g(x) \triangleq f(x) - E[f](x) \tag{16}$$
The second term in (15) does not depend on $v$; it only depends on $c = \sum_x \mu(x)\, \ell(x)$. The first term in (15) involves the projection of $v$ on the auxiliary features $g$. This projection can be computed by defining the auxiliary function approximator $\tilde{v}(x; s) \triangleq s^T g(x)$ and fitting it to $v$ in a least-squares sense, as in (12) but using $g(x)$ rather than $f(x)$. The approximation error is now orthogonal to the auxiliary features $g(x)$, and so replacing $v(x)$ with $\tilde{v}(x; s)$ in (15) does not affect $k$. Thus we have
Theorem 3. The following procedure yields the exact LMDP policy gradient:
1. fit $\tilde{v}(x; s)$ to $v(x)$ in a least squares sense, and also compute $c$
2. compute $M$ from (13), and $k$ from (15) by replacing $v(x)$ with $\tilde{v}(x; s)$
3. "fit" $\hat{v}(x; r)$ by solving $Mr = k$
4. the policy gradient is
$$\nabla_w c = \sum_{x,x'} \mu(x, x') \left(f(x') - E[f](x)\right) f(x')^T (w - r) \tag{17}$$
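In exact form (small discrete problems, with $v$ and $\ell$ available from policy evaluation) the procedure reduces to linear algebra; a sketch using the `stationary` helper from above; in the sampled setting, steps 1-2 would instead use LSTD-style fits:

```python
import numpy as np

def compatible_gradient(U, F, ell, v, w):
    """Theorem 3; ell(x) is the cost (1) under the current policy."""
    mu = stationary(U)
    Ef = U @ F                                  # E[f](x)
    G = F - Ef                                  # auxiliary features g(x), eq. (16)
    c = mu @ ell                                # average cost
    # step 1: least-squares fit of v~(x; s) = s^T g(x) to v, weighted by mu
    s = np.linalg.lstsq((mu[:, None] * G).T @ G,
                        (mu[:, None] * G).T @ v, rcond=None)[0]
    v_tilde = G @ s
    # step 2: M from (13); k from (15) with v replaced by v~
    M = (mu[:, None] * F).T @ F - (mu[:, None] * Ef).T @ Ef
    k = (mu * v_tilde) @ G + (mu * (ell - c)) @ Ef
    r = np.linalg.solve(M, k)                   # step 3: "fit" by solving M r = k
    return M @ (w - r)                          # step 4: this equals eq. (17)
```

The last line uses the stationarity of $\mu$, under which the sum in (17) collapses to $M(w - r)$.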
This is the first policy gradient result with compatible function approximation over the state space rather than the state-action space. The computations involve averaging over $\mu$, which in practice will be done through sampling (see below). The requirement that $v - \tilde{v}$ be orthogonal to $g$ is somewhat restrictive, however an equivalent requirement arises in traditional MDPs [14].
2.5 Natural policy gradient
When the parameter space has a natural metric $G(w)$, optimization algorithms tend to work better if the gradient of the objective function is pre-multiplied by $G(w)^{-1}$. This yields the so-called natural gradient [1]. In the context of policy gradient methods [5, 11] where $w$ parameterizes a probability density, the natural metric is given by Fisher information (which depends on $x$ because $w$ parameterizes the conditional density). Averaging over $\mu$ yields the metric
$$G(w) \triangleq \sum_{x,x'} \mu(x, x')\, \nabla_w \log u(x'|x)\, \nabla_w \log u(x'|x)^T \tag{18}$$
We then have the following result (see Supplement):
Theorem 4. With the vector $r$ computed as in Theorem 3, the LMDP natural policy gradient is
$$G(w)^{-1} \nabla_w c = w - r \tag{19}$$
Let us compare this result to the natural gradient in traditional MDPs [11], which is
$$G(w)^{-1} \nabla_w c = r \tag{20}$$
In traditional MDPs one maximizes reward while in LMDPs one minimizes cost, thus the sign difference. Recall that in traditional MDPs the policy $\pi$ is parameterized using features over the state-action space while in LMDPs we only need features over the state space. Thus the vectors $w, r$ will usually have lower dimensionality in (19) compared to (20).
will usually have lower dimensionality in (19) compared to (20).
Another difference is that in LMDPs the (regular as well as natural) policy gradient vanishes when
w = r, which is a sensible fixed-point condition. In traditional MDPs the policy gradient vanishes
when r = 0, which is peculiar because it corresponds to the advantage function approximation
being identically 0. The true advantage function is of course different, but if the policy becomes
deterministic and only one action is sampled per state, the resulting data can be fit with r = 0. Thus
any deterministic policy is a local maximum in traditional MDPs. At these local maxima the policy
gradient theorem cannot actually be applied because it requires a stochastic policy. When the policy
becomes near-deterministic, the number of samples needed to obtain accurate estimates increases
because of the lack of exploration [6]. These issues do not seem to arise in LMDPs.
2.6 A Gauss-Newton method for approximating the optimal cost-to-go
Instead of using policy gradient, we can solve (3) for the optimal $v^*$ directly. One option is approximate policy iteration, which in our context takes on a simple form. Given the policy parameters $w^{(i)}$ at iteration $i$, approximate the cost-to-go function and obtain the feature weights $r^{(i)}$, and then set $w^{(i+1)} = r^{(i)}$. This is equivalent to the above natural gradient method with step size 1, using a biased approximator instead of the compatible approximator given by Theorem 3.
The other option is approximate value iteration, which is a fixed-point method for solving (3) while replacing $v^*(x)$ with $w^T f(x)$. We can actually do better than value iteration here. Since (3) has already been optimized over the controls and is differentiable, we can apply an efficient Gauss-Newton method. Up to an additive constant $c$, the Bellman error from (3) is
$$e(x, w) \triangleq w^T f(x) - q(x) + \log \sum_{x'} p(x'|x) \exp\left(-w^T f(x')\right) \tag{21}$$
Interestingly, the gradient of this Bellman error coincides with our auxiliary features $g$:
$$\nabla_w e(x, w) = f(x) - \frac{\sum_{x'} p(x'|x) \exp\left(-w^T f(x')\right) f(x')}{\sum_{\bar{x}} p(\bar{x}|x) \exp\left(-w^T f(\bar{x})\right)} = f(x) - E[f](x) = g(x) \tag{22}$$
where $E$ and $g$ are the same as in (16, 8). We now linearize: $e(x, w + \delta w) \approx e(x, w) + \delta w^T g(x)$, and proceed to minimize (with respect to $c$ and $\delta w$) the quantity
$$\sum_x \mu(x) \left(c + e(x, w) + \delta w^T g(x)\right)^2 \tag{23}$$
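Each iteration is then a weighted linear least-squares problem in $(c, \delta w)$; a sketch in exact form, with $\mu$ supplied by the caller and the damped update discussed below:

```python
import numpy as np

def gauss_newton_step(P, F, q, w, mu):
    """One step of the Gauss-Newton method on the Bellman error (21)."""
    Z = P * np.exp(-F @ w)                    # p(x'|x) exp(-w^T f(x'))
    U = Z / Z.sum(axis=1, keepdims=True)      # current policy, eq. (7)
    e = F @ w - q + np.log(Z.sum(axis=1))     # Bellman error, eq. (21)
    G = F - U @ F                             # its gradient g(x), eq. (22)
    # minimize sum_x mu(x) (c + e(x,w) + dw^T g(x))^2 over [c, dw]
    A = np.hstack([np.ones((len(q), 1)), G]) * np.sqrt(mu)[:, None]
    sol = np.linalg.lstsq(A, -e * np.sqrt(mu), rcond=None)[0]
    return w + 0.5 * sol[1:]                  # damped update w <- w + dw/2
```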
Figure 1: (A) Learning curves for a random LMDP. "resid" is the Gauss-Newton method. The
sampling versions use 400 samples per evaluation: 20 trajectories with 20 steps each, starting from
the stationary distribution. (B) Cost-to-go functions for the metronome LMDP. The numbers show
the average costs obtained. There are 2601 discrete states and 25 features (Gaussians). Convergence
was observed in about 10 evaluations (of the objective and the gradient) for both algorithms, exact
and sampling versions. The sampling version of the Gauss-Newton method worked well with 400
samples per evaluation; the natural gradient needed around 2500 samples.
Normally the density $\mu(x)$ would be fixed, however we have found empirically that the resulting algorithm yields better policies if we set $\mu(x)$ to the policy-specific stationary density $\mu(x; w)$ at each iteration. It is not clear how to guarantee convergence of this algorithm given that the objective function itself is changing over iterations, but in practice we observed that simple damping is sufficient to make it convergent (e.g. $w \leftarrow w + \delta w / 2$).
It is notable that minimization of (23) is closely related to policy evaluation via Bellman residual minimization. More precisely, using (14, 16) it is easy to see that TD(0) applied to our problem would seek to minimize
$$\sum_x \mu(x; w) \left(c + \bar{e}(x, w) + r^T g(x)\right)^2 \tag{24}$$
The similarity becomes even more apparent if we write $\bar{e}(x, w)$ more explicitly as
$$\bar{e}(x, w) = w^T E[f](x) - q(x) + \log \sum_{x'} p(x'|x) \exp\left(-w^T f(x')\right) \tag{25}$$
Thus the only difference from (21) is that one expression has the term $w^T f(x)$ at the place where the other expression has the term $w^T E[f](x)$. Note that the Gauss-Newton method proposed here would be expected to have second-order convergence, even though the amount of computation/sampling per iteration is the same as in a policy gradient method.
2.7 Numerical experiments
We compared the natural policy gradient and the Gauss-Newton method, both in exact form and with sampling, on two classes of LMDPs: randomly generated, and a discretization of a continuous "metronome" problem taken from [17]. Fitting the auxiliary approximator $\tilde{v}(x; s)$ was done using the LSTD($\lambda$) algorithm [3]. Note that Theorem 3 guarantees compatibility only for $\lambda = 1$, however lower values of $\lambda$ reduce variance and still provide good descent directions in practice (as one would expect). We ended up using $\lambda = 0.2$ after some experimentation. The natural gradient was used with the BFGS minimizer "minFunc" [12].
Figure 1A shows typical learning curves on a random LMDP with 100 states, 20 random features, and random passive dynamics with 50% sparsity. In this case the algorithms had very similar performance. On other examples we observed one or the other algorithm being slightly faster or producing better minima, but overall they were comparable. The average cost of the policies found by the Gauss-Newton method occasionally increased towards the end of the iteration.
Figure 1B compares the optimal cost-to-go $v^*$, the least-squares fit to the known $v^*$ using our features (which were a 5-by-5 grid of Gaussians), and the solution of the policy gradient method initialized with $w = 0$. Note that the latter has lower cost compared to the least-squares fit. In this case both algorithms converged in about 10 iterations, although the Gauss-Newton method needed about 5 times fewer samples in order to achieve similar performance to the exact version.
3 Continuous problems
Unlike the discrete case where we focused exclusively on LMDPs, here we begin with a very general
problem formulation and present interesting new results. These results are then specialized to a
narrower class of problems which are continuous (in space and time) but nevertheless have similar
properties to LMDPs.
3.1 Policy gradient for general controlled diffusions
Consider the controlled Ito diffusion
$$dx = b(x, u)\, dt + \sigma(x)\, d\omega \tag{26}$$
where $\omega(t)$ is a standard multidimensional Brownian motion process, and $u$ is now a traditional control vector. Let $\ell(x, u)$ be a cost function. As before we focus on infinite-horizon average-cost optimal control problems. Given a policy $u = \pi(x)$, the average cost $c$ and differential cost-to-go $v(x)$ satisfy the Hamilton-Jacobi-Bellman (HJB) equation
$$c = \ell(x, \pi(x)) + \mathcal{L}[v](x) \tag{27}$$
where $\mathcal{L}$ is the following 2nd-order linear differential operator:
$$\mathcal{L}[v](x) \triangleq b(x, \pi(x))^T \nabla_x v(x) + \tfrac{1}{2}\, \mathrm{trace}\left(\sigma(x)\, \sigma(x)^T\, \nabla_{xx} v(x)\right) \tag{28}$$
It can be shown [10] that $\mathcal{L}$ coincides with the infinitesimal generator of (26), i.e. it computes the expected directional derivative of $v$ along trajectories generated by (26). We will need
Lemma 1. Let $\mathcal{L}$ be the infinitesimal generator of an Ito diffusion which has a stationary density $\mu$, and let $h$ be a twice-differentiable function. Then
$$\int \mu(x)\, \mathcal{L}[h](x)\, dx = 0 \tag{29}$$
Proof: The adjoint $\mathcal{L}^*$ of the infinitesimal generator $\mathcal{L}$ is known to be the Fokker-Planck operator, which computes the time-evolution of a density under the diffusion [10]. Since $\mu$ is the stationary density, $\mathcal{L}^*[\mu](x) = 0$ for all $x$, and so $\langle \mathcal{L}^*[\mu],\, h \rangle = 0$. Since $\mathcal{L}$ and $\mathcal{L}^*$ are adjoint, $\langle \mathcal{L}^*[\mu],\, h \rangle = \langle \mu,\, \mathcal{L}[h] \rangle$. Thus $\langle \mu,\, \mathcal{L}[h] \rangle = 0$.
This lemma seems important-yet-obvious, so we would not be surprised if it was already known, but we have not seen it in the literature. Note that many diffusions lack stationary densities. For example the density of Brownian motion initialized at the origin is a zero-mean Gaussian whose covariance grows linearly with time, thus there is no stationary density. If however the diffusion is controlled
and the policy tends to keep the state within some region, then a stationary density would normally
exist. The existence of a stationary density may actually be a sensible definition of stability for
stochastic systems (although this point will not be pursued in the present paper).
Now consider any policy parameterization $u = \pi(x; w)$ such that (for the current value of $w$) the diffusion (26) has a stationary density $\mu$ and $\nabla_w c$ exists. Differentiating (27), and using the shortcut notation $b(x)$ in place of $b(x, \pi(x; w))$ and similarly for $\ell(x)$, we have
$$\nabla_w c = \nabla_w \ell(x) + \nabla_w b(x)^T \nabla_x v(x) + \mathcal{L}[\nabla_w v](x) \tag{30}$$
Here $\mathcal{L}[\nabla_w v]$ is meant component-wise. If we now average over $\mu$, the last term will vanish due to Lemma 1. This is essential for a policy gradient procedure which seeks to avoid finite differencing; indeed $\nabla_w v$ could not be estimated while sampling from a single policy. Thus we have
Theorem 5. The policy gradient of the controlled diffusion (26) is
$$\nabla_w c = \int \mu(x) \left(\nabla_w \ell(x) + \nabla_w b(x)^T \nabla_x v(x)\right) dx \tag{31}$$
Unlike most other results in stochastic optimal control, equation (31) does not involve the Hessian $\nabla_{xx} v$, although we can obtain a $\nabla_{xx} v$-dependent term here if we allow $\sigma$ to depend on $u$. We now illustrate Theorem 5 on a linear-quadratic-Gaussian (LQG) control problem.
Example (LQG). Consider dynamics $dx = u\, dt + d\omega$ and cost $\ell(x, u) = x^2 + u^2$. Let $u = -gx$ be the parameterized policy with $g \geq 0$. The differential cost-to-go is known to be of the form $v(x) = a x^2$. Substituting in the HJB equation and matching powers of $x$ yields $c = a = \frac{g^2 + 1}{2g}$, and so the policy gradient can be computed directly as $\nabla_g c = 1 - \frac{g^2 + 1}{2 g^2}$. The stationary density $\mu(x)$ is a zero-mean Gaussian with variance $\sigma^2 = \frac{1}{2g}$. One can now verify that the gradient given by Theorem 5 is identical to the $\nabla_g c$ computed above.
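This is easy to check numerically; a short sketch comparing the analytic gradient with a Monte Carlo estimate of the Theorem-5 integral:

```python
import numpy as np

g = 1.7                                   # any g > 0
a = (g**2 + 1) / (2 * g)                  # v(x) = a x^2 and c = a
analytic = 1 - (g**2 + 1) / (2 * g**2)    # d/dg of (g^2 + 1) / (2g)

x = np.random.default_rng(0).normal(0, np.sqrt(1 / (2 * g)), 10**6)
u = -g * x                                # policy; du/dg = -x = db/dg
integrand = 2 * u * (-x) + (-x) * (2 * a * x)   # grad_g l + grad_g b * v_x
print(analytic, integrand.mean())         # the two values agree
```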
Another interesting aspect of Theorem 5 is that it is a natural generalization of classic results from finite-horizon deterministic optimal control [13], even though it cannot be derived from those results. Suppose we have an open-loop control trajectory $u(t),\ 0 \leq t \leq T$; the resulting state trajectory (starting from a given $x_0$) is $x(t)$, and the corresponding co-state trajectory (obtained by integrating Pontryagin's ODE backwards in time) is $\lambda(t)$. It is known that the gradient of the total cost $J$ w.r.t. $u$ is $\nabla_u \ell + \nabla_u b^T \lambda$. Now suppose $u(t)$ is parameterized by some vector $w$. Then
$$\nabla_w J = \int_0^T \nabla_w u(t)^T\, \nabla_{u(t)} J\, dt = \int_0^T \left(\nabla_w \ell(x(t), u(t)) + \nabla_w b(x(t), u(t))^T \lambda(t)\right) dt \tag{32}$$
The co-state $\lambda(t)$ is known to be equal to the gradient $\nabla_x v(x, t)$ of the cost-to-go function for the (closed-loop) deterministic problem. Thus (31) and (32) are very similar. Of course in finite-horizon settings there is no stationary density, and instead the integral in (32) is over the trajectory. An RL method for estimating $\nabla_w J$ in deterministic problems was developed in [9].
Theorem 5 suggests a simple procedure for estimating the policy gradient via sampling: fit a function approximator $\hat{v}$ to $v$, and use $\nabla_x \hat{v}$ in (31). Alternatively, a compatible approximation scheme can be obtained by fitting $\nabla_x \hat{v}$ to $\nabla_x v$ in a least-squares sense, using a linear approximator with features $\nabla_w b(x)$. This however is not practical because learning targets for $\nabla_x v$ are difficult to obtain. Ideally we would construct a compatible approximation scheme which involves fitting $\hat{v}$ rather than $\nabla_x \hat{v}$. It is not clear how to do that for general diffusions, but it can be done for a restricted problem class as shown next.
3.2 Natural gradient and compatible approximation for linearly-solvable diffusions

We now focus on a more restricted family of stochastic optimal control problems which arise in
many situations (e.g. most mechanical systems can be described in this form):

$$dx = \left(a(x) + B(x)\,u\right) dt + \sigma(x)\, d\omega \qquad (33)$$
$$\ell(x, u) = q(x) + \tfrac{1}{2}\, u^T R(x)\, u$$

Such problems have been studied extensively [13]. The optimal control law u* and the optimal
differential cost-to-go v*(x) are known to be related as u* = −R⁻¹ Bᵀ ∂_x v*. As in the discrete
case we use this relation to motivate the choice of policy parameterization and cost-to-go function
approximator. Choosing some features f(x), we define v̂(x; r) ≜ rᵀ f(x) as before, and

$$\pi(x; w) \triangleq -R(x)^{-1} B(x)^T\, \partial_x\!\left(w^T f(x)\right) \qquad (34)$$
It is convenient to also define the matrix F(x) ≜ ∂_x f(x), so that ∂_x v̂(x; r) = F(x) r. We can
now substitute these definitions in the general result (31), replace v with the approximation v̂, and,
skipping the algebra, obtain the corresponding approximation to the policy gradient:
$$\tilde\partial_w c = \int \mu(x)\, F(x)^T B(x) R(x)^{-1} B(x)^T F(x)\,(w - r)\, dx \qquad (35)$$
Before addressing the issue of compatibility (i.e. whether ∂̃_w c = ∂_w c), we seek a natural gradient
version of (35). To this end we need to interpret FᵀBR⁻¹BᵀF as Fisher information for the (infinitesimal) transition probability density of our parameterized diffusion. We do this by discretizing
the time axis with time step h, and then dividing by h. The h-step explicit Euler discretization of the
stochastic dynamics (33) is given by the Gaussian

$$\pi_h(x' \mid x, w) = \mathcal{N}\!\left(x + h\,a(x) - h\,B(x) R(x)^{-1} B(x)^T F(x)\, w;\;\; h\,\sigma(x)\sigma(x)^T\right) \qquad (36)$$
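As a small concrete sketch (ours, not code from the paper), one step of the Euler transition (36) can be sampled directly; the callables a, B, R, sigma, F and the scalar test case below are illustrative assumptions.

```python
import numpy as np

def euler_step(x, w, h, a, B, R, sigma, F, rng):
    """One sample from the Gaussian transition (36):
    x' ~ N(x + h a(x) - h B(x) R(x)^{-1} B(x)^T F(x) w,  h sigma(x) sigma(x)^T)."""
    drift = a(x) - B(x) @ np.linalg.solve(R(x), B(x).T @ (F(x) @ w))
    cov = h * sigma(x) @ sigma(x).T
    return rng.multivariate_normal(x + h * drift, cov)

# Scalar example in the spirit of the LQG problem: a=0, B=R=sigma=1,
# feature f(x) = x^2 so that F(x) = d/dx f(x) = 2x.
rng = np.random.default_rng(0)
step = euler_step(np.array([0.5]), w=np.array([1.2]), h=0.01,
                  a=lambda x: np.zeros(1), B=lambda x: np.eye(1),
                  R=lambda x: np.eye(1), sigma=lambda x: np.eye(1),
                  F=lambda x: 2.0 * x.reshape(1, 1), rng=rng)
print(step)
```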
Suppressing the dependence on x, Fisher information becomes

$$\frac{1}{h}\int \pi_h\; \partial_w \log \pi_h\; \partial_w \log \pi_h^T \; dx' \;=\; F^T B R^{-1} B^T \left(\sigma\sigma^T\right)^{-1} B R^{-1} B^T F \qquad (37)$$
Comparing to (35) we see that a natural gradient result is obtained when

$$\sigma(x)\sigma(x)^T = B(x) R(x)^{-1} B(x)^T \qquad (38)$$

Assuming (38) is satisfied, and defining G(w) as the average of Fisher information over μ(x),

$$G(w)^{-1}\, \tilde\partial_w c = w - r \qquad (39)$$
Condition (38) is rather interesting. Elsewhere we have shown [16] that the same condition is needed
to make problem (33) linearly-solvable. More precisely, the exponentiated HJB equation for the
optimal v* in problem (33, 38) is linear in exp(−v*). We have also shown [16] that the continuous
problem (33, 38) is the limit (when h → 0) of continuous-state discrete-time LMDPs constructed
via Euler discretization as above. The compatible function approximation scheme from Theorem
3 can then be applied to these LMDPs. Recall (8). Since L is the infinitesimal generator, for any
twice-differentiable function g we have
$$\mathcal{G}[g](x) = g(x) + h\,L[g](x) + O(h^2) \qquad (40)$$

Substituting in (13), dividing by h and taking the limit h → 0, the matrix M and vector k become

$$M = \int \mu(x)\left(-L[f](x)\, f(x)^T - f(x)\, L[f](x)^T\right) dx \qquad (41)$$
$$k = \int \mu(x)\left(-L[f](x)\,\ell(x) + f(x)\left(\ell(x) - c\right)\right) dx$$
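The expansion (40) is easy to check numerically. The sketch below (our addition) compares the one-step expectation of the Euler chain against g(x) + h L[g](x) for a scalar diffusion, where the generator is L[g] = b g′ + ½ σ² g″; the particular b, g, and step size are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of (40): for small h, the one-step expectation operator of
# the Euler chain satisfies E[g(x')] = g(x) + h L[g](x) + O(h^2).
# Scalar test case (ours): b(x) = -x, sigma = 1, g(x) = sin(x).

b = lambda x: -x
g, gp, gpp = np.sin, np.cos, (lambda x: -np.sin(x))

x, h, sigma = 0.7, 1e-3, 1.0
rng = np.random.default_rng(0)
xp = x + h * b(x) + np.sqrt(h) * sigma * rng.normal(size=4_000_000)

lhs = g(xp).mean()                                        # E[g(x')] under one Euler step
rhs = g(x) + h * (b(x) * gp(x) + 0.5 * sigma**2 * gpp(x))  # g(x) + h L[g](x)
print(lhs, rhs)  # agree up to O(h^2) plus Monte Carlo error
```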
Compatibility is therefore achieved when the approximation error in ℓ is orthogonal to L[f]. Thus
the auxiliary function approximator is now g̃(x; s) ≜ sᵀ L[f](x), and we have
Theorem 6. The following procedure yields the exact policy gradient for problem (33, 38):

1. fit g̃(x; s) to ℓ(x) in a least-squares sense, and also compute c
2. compute M and k from (41), replacing ℓ(x) with g̃(x; s)
3. "fit" v̂(x; r) by solving M r = k
4. the policy gradient is (35), and the natural policy gradient is (39)
This is the first policy gradient result with compatible function approximation for continuous stochastic systems. It is very similar to the corresponding results in the discrete case (Theorems 3, 4)
except it involves the differential operator L rather than the integral operator 𝒢.
4 Summary
Here we developed compatible function approximators and natural policy gradients which only require estimation of the cost-to-go function. This was possible due to the unique properties of the
LMDP framework. The resulting approximation scheme is unusual, using policy-specific auxiliary
features derived from the primary features. In continuous time we also obtained a new policy gradient result for control problems that are not linearly-solvable, and showed that it generalizes results
from deterministic optimal control. We also derived a somewhat heuristic but nevertheless promising Gauss-Newton method for solving for the optimal cost-to-go directly; it appears to be a hybrid
between value iteration and policy gradient.
One might wonder why we need policy gradients here given that the (exponentiated) Bellman equation is linear, and approximating its solution using features is faster than any other procedure in
Reinforcement Learning and Approximate Dynamic Programming. The answer is that minimizing
Bellman error does not always give the best policy, as illustrated in Figure 1B. Indeed a combined
approach may be optimal: solve the linear Bellman equation approximately [17], and then use the
solution to initialize the policy gradient method. This idea will be explored in future work.
Our new methods require a model, as do all RL methods that rely on state values rather than state-action values. We do not see this as a shortcoming because, despite all the effort that has gone
problems. Our methods involve model-based sampling which combines the best of both worlds:
computational speed, and grounding in reality (assuming we have a good model of reality).
Acknowledgements.
This work was supported by the US National Science Foundation. Thanks to Guillaume Lajoie and
Jan Peters for helpful discussions.
References
[1] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10:251–276,
1998.
[2] J. Bagnell and J. Schneider. Covariant policy search. In International Joint Conference on
Artificial Intelligence, 2003.
[3] J. Boyan. Least-squares temporal difference learning. In International Conference on Machine
Learning, 1999.
[4] W. Fleming and S. Mitter. Optimal control and nonlinear filtering for nondegenerate diffusion
processes. Stochastics, 8:226–261, 1982.
[5] S. Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems,
2002.
[6] S. Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University
College London, 2003.
[7] H. Kappen. Linear theory for control of nonlinear stochastic systems. Physical Review Letters,
95, 2005.
[8] V. Konda and J. Tsitsiklis. Actor-critic algorithms. SIAM Journal on Control and Optimization,
pages 1008–1014, 2001.
[9] R. Munos. Policy gradient in continuous time. The Journal of Machine Learning Research,
7:771–791, 2006.
[10] B. Oksendal. Stochastic Differential Equations (4th Ed). Springer-Verlag, Berlin, 1995.
[11] J. Peters and S. Schaal. Natural actor-critic. Neurocomputing, 71:1180–1190, 2008.
[12] M. Schmidt. minfunc. online material, 2005.
[13] R. Stengel. Optimal Control and Estimation. Dover, New York, 1994.
[14] R. Sutton, D. Mcallester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement
learning with function approximation. In Advances in Neural Information Processing Systems,
2000.
[15] E. Todorov. Linearly-solvable Markov decision problems. Advances in Neural Information
Processing Systems, 2006.
[16] E. Todorov. Efficient computation of optimal actions. PNAS, 106:11478–11483, 2009.
[17] E. Todorov. Eigen-function approximation methods for linearly-solvable optimal control problems. IEEE ADPRL, 2009.
[18] R. Williams. Simple statistical gradient following algorithms for connectionist reinforcement
learning. Machine Learning, pages 229–256, 1992.
| 4013 |
3,329 | 4,014 | Agnostic Active Learning Without Constraints
Alina Beygelzimer
IBM Research
Hawthorne, NY
[email protected]
Daniel Hsu
Rutgers University &
University of Pennsylvania
[email protected]
John Langford
Yahoo! Research
New York, NY
[email protected]
Tong Zhang
Rutgers University
Piscataway, NJ
[email protected]
Abstract
We present and analyze an agnostic active learning algorithm that works without
keeping a version space. This is unlike all previous approaches where a restricted
set of candidate hypotheses is maintained throughout learning, and only hypotheses from this set are ever returned. By avoiding this version space approach, our
algorithm sheds the computational burden and brittleness associated with maintaining version spaces, yet still allows for substantial improvements over supervised learning for classification.
1 Introduction
In active learning, a learner is given access to unlabeled data and is allowed to adaptively choose
which ones to label. This learning model is motivated by applications in which the cost of labeling
data is high relative to that of collecting the unlabeled data itself. Therefore, the hope is that the
active learner only needs to query the labels of a small number of the unlabeled data, and otherwise
perform as well as a fully supervised learner. In this work, we are interested in agnostic active
learning algorithms for binary classification that are provably consistent, i.e. that converge to an
optimal hypothesis in a given hypothesis class.
One technique that has proved theoretically profitable is to maintain a candidate set of hypotheses
(sometimes called a version space), and to query the label of a point only if there is disagreement
within this set about how to label the point. The criteria for membership in this candidate set needs
to be carefully defined so that an optimal hypothesis is always included, but otherwise this set can be
quickly whittled down as more labels are queried. This technique is perhaps most readily understood
in the noise-free setting [1, 2], and it can be extended to noisy settings by using empirical confidence
bounds [3, 4, 5, 6, 7].
The version space approach unfortunately has its share of significant drawbacks. The first is computational intractability: maintaining a version space and guaranteeing that only hypotheses from
this set are returned is difficult for linear predictors and appears intractable for interesting nonlinear
predictors such as neural nets and decision trees [1]. Another drawback of the approach is its brittleness: a single mishap (due to, say, modeling failures or computational approximations) might cause
the learner to exclude the best hypothesis from the version space forever; this is an ungraceful failure mode that is not easy to correct. A third drawback is related to sample re-usability: if (labeled)
data is collected using a version space-based active learning algorithm, and we later decide to use
a different algorithm or hypothesis class, then the earlier data may not be freely re-used because its
collection process is inherently biased.
Here, we develop a new strategy addressing all of the above problems given an oracle that returns an
empirical risk minimizing (ERM) hypothesis. As this oracle matches our abstraction of many supervised learning algorithms, we believe active learning algorithms built in this way are immediately
and widely applicable.
Our approach instantiates the importance weighted active learning framework of [5] using a rejection
threshold similar to the algorithm of [4] which only accesses hypotheses via a supervised learning
oracle. However, the oracle we require is simpler and avoids strict adherence to a candidate set
of hypotheses. Moreover, our algorithm creates an importance weighted sample that allows for
unbiased risk estimation, even for hypotheses from a class different from the one employed by the
active learner. This is in sharp contrast to many previous algorithms (e.g., [1, 3, 8, 4, 6, 7]) that create
heavily biased data sets. We prove that our algorithm is always consistent and has an improved label
complexity over passive learning in cases previously studied in the literature. We also describe a
practical instantiation of our algorithm and report on some experimental results.
1.1 Related Work
As already mentioned, our work is closely related to the previous works of [4] and [5], both of
which in turn draw heavily on the work of [1] and [3]. The algorithm from [4] extends the selective
sampling method of [1] to the agnostic setting using generalization bounds in a manner similar
to that first suggested in [3]. It accesses hypotheses only through a special ERM oracle that can
enforce an arbitrary number of example-based constraints; these constraints define a version space,
and the algorithm only ever returns hypotheses from this space, which can be undesirable as we
previously argued. Other previous algorithms with comparable performance guarantees also require
similar example-based constraints (e.g., [3, 5, 6, 7]). Our algorithm differs from these in that (i) it
never restricts its attention to a version space when selecting a hypothesis to return, and (ii) it only
requires an ERM oracle that enforces at most one example-based constraint, and this constraint is
only used for selective sampling. Our label complexity bounds are comparable to those proved in [5]
(though somewhat worse than those in [3, 4, 6, 7]).
The use of importance weights to correct for sampling bias is a standard technique for many machine
learning problems (e.g., [9, 10, 11]) including active learning [12, 13, 5]. Our algorithm is based
on the importance weighted active learning (IWAL) framework introduced by [5]. In that work, a
rejection threshold procedure called loss-weighting is rigorously analyzed and shown to yield improved label complexity bounds in certain cases. Loss-weighting is more general than our technique
in that it extends beyond zero-one loss to a certain subclass of loss functions such as logistic loss. On
the other hand, the loss-weighting rejection threshold requires optimizing over a restricted version
space, which is computationally undesirable. Moreover, the label complexity bound given in [5]
only applies to hypotheses selected from this version space, and not when selected from the entire
hypothesis class (as the general IWAL framework suggests). We avoid these deficiencies using a
new rejection threshold procedure and a more subtle martingale analysis.
Many of the previously mentioned algorithms are analyzed in the agnostic learning model, where
no assumption is made about the noise distribution (see also [14]). In this setting, the label complexity of active learning algorithms cannot generally improve over supervised learners by more
than a constant factor [15, 5]. However, under a parameterization of the noise distribution related to
Tsybakov's low-noise condition [16], active learning algorithms have been shown to have improved
label complexity bounds over what is achievable in the purely agnostic setting [17, 8, 18, 6, 7]. We
also consider this parameterization to obtain a tighter label complexity analysis.
2 Preliminaries
2.1 Learning Model
Let D be a distribution over X × Y where X is the input space and Y = {±1} are the labels. Let
(X, Y) ∈ X × Y be a pair of random variables with joint distribution D. An active learner receives
a sequence (X₁, Y₁), (X₂, Y₂), ... of i.i.d. copies of (X, Y), with the label Yᵢ hidden unless it is
explicitly queried. We use the shorthand a_{1:k} to denote a sequence (a₁, a₂, ..., a_k) (so k = 0
corresponds to the empty sequence).
Let H be a set of hypotheses mapping from X to Y. For simplicity, we assume H is finite but does
not completely agree on any single x ∈ X (i.e., ∀x ∈ X, ∃h, h′ ∈ H such that h(x) ≠ h′(x)). This
keeps the focus on the relevant aspects of active learning that differ from passive learning. The error
of a hypothesis h : X → Y is err(h) := Pr(h(X) ≠ Y). Let h* := arg min{err(h) : h ∈ H} be
a hypothesis of minimum error in H. The goal of the active learner is to return a hypothesis h ∈ H
with error err(h) not much more than err(h*), using as few label queries as possible.
2.2 Importance Weighted Active Learning
In the importance weighted active learning (IWAL) framework of [5], an active learner looks at
the unlabeled data X₁, X₂, ... one at a time. After each new point Xᵢ, the learner determines a
probability Pᵢ ∈ [0, 1]. Then a coin with bias Pᵢ is flipped, and the label Yᵢ is queried if and only if
the coin comes up heads. The query probability Pᵢ can depend on all previous unlabeled examples
X_{1:i−1}, any previously queried labels, any past coin flips, and the current unlabeled point Xᵢ.
Formally, an IWAL algorithm specifies a rejection threshold function p : (X × Y × {0, 1})* × X →
[0, 1] for determining these query probabilities. Let Qᵢ ∈ {0, 1} be a random variable conditionally
independent of the current label Yᵢ,

$$Q_i \perp Y_i \mid X_{1:i},\, Y_{1:i-1},\, Q_{1:i-1}$$

and with conditional expectation

$$E[Q_i \mid Z_{1:i-1}, X_i] = P_i := p(Z_{1:i-1}, X_i),$$

where Z_j := (X_j, Y_j, Q_j). That is, Qᵢ indicates if the label Yᵢ is queried (the outcome of
the coin toss). Although the notation does not explicitly suggest this, the query probability
Pᵢ = p(Z_{1:i−1}, Xᵢ) is allowed to explicitly depend on a label Y_j (j < i) if and only if it has
been queried (Q_j = 1).
2.3 Importance Weighted Estimators
We first review some standard facts about the importance weighting technique. For a function f :
X × Y → ℝ, define the importance weighted estimator of E[f(X, Y)] from Z_{1:n} ∈ (X × Y ×
{0, 1})ⁿ to be

$$\hat f(Z_{1:n}) := \frac{1}{n}\sum_{i=1}^n \frac{Q_i}{P_i} \cdot f(X_i, Y_i).$$

Note that this quantity depends on a label Yᵢ only if it has been queried (i.e., only if Qᵢ = 1; it also
depends on Xᵢ only if Qᵢ = 1). Our rejection threshold will be based on a specialization of this
estimator, specifically the importance weighted empirical error of a hypothesis h

$$\mathrm{err}(h, Z_{1:n}) := \frac{1}{n}\sum_{i=1}^n \frac{Q_i}{P_i} \cdot \mathbf{1}[h(X_i) \neq Y_i].$$

In the notation of Algorithm 1, this is equivalent to

$$\mathrm{err}(h, S_n) := \frac{1}{n} \sum_{(X_i, Y_i, 1/P_i) \in S_n} (1/P_i)\cdot \mathbf{1}[h(X_i) \neq Y_i] \qquad (1)$$

where S_n ⊆ X × Y × ℝ is the importance weighted sample collected by the algorithm.

A basic property of these estimators is unbiasedness:

$$E[\hat f(Z_{1:n})] = \frac{1}{n}\sum_{i=1}^n E\!\left[E\!\left[\frac{Q_i}{P_i}\, f(X_i, Y_i) \,\Big|\, X_{1:i}, Y_{1:i}, Q_{1:i-1}\right]\right] = \frac{1}{n}\sum_{i=1}^n E\!\left[\frac{P_i}{P_i}\, f(X_i, Y_i)\right] = E[f(X, Y)].$$

So, for example, the importance weighted empirical error of a hypothesis h is an unbiased estimator of its true
error err(h). This holds for any choice of the rejection threshold that guarantees Pᵢ > 0.
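A quick simulation (our addition, not from the paper) illustrates this unbiasedness: even with query probabilities that vary across points, the importance weighted error estimate concentrates around the true error. The data-generating details and the query rule below are illustrative assumptions.

```python
import numpy as np

# Demo: (1/n) sum_i (Q_i/P_i) 1[h(X_i) != Y_i] is unbiased for err(h)
# as long as every P_i > 0. Here h is a fixed threshold classifier and
# labels follow a different threshold with 10% label noise.
rng = np.random.default_rng(0)
h = lambda x: np.where(x > 0.5, 1, -1)

def true_err(n=2_000_000):
    x = rng.uniform(size=n)
    clean = np.where(x > 0.3, 1, -1)
    flip = np.where(rng.uniform(size=n) < 0.9, 1, -1)  # 10% label noise
    return np.mean(h(x) != clean * flip)

def iw_estimate(n=5000):
    est = 0.0
    for _ in range(n):
        x = rng.uniform()
        y = (1 if x > 0.3 else -1) * (1 if rng.uniform() < 0.9 else -1)
        p = 0.1 + 0.9 * abs(x - 0.5)   # any measurable rule with P_i > 0 works
        if rng.uniform() < p:           # Q_i = 1: the label is queried
            est += (h(x) != y) / p
    return est / n

print(true_err())                                        # about 0.26 here
print(np.mean([iw_estimate() for _ in range(50)]))        # close to true error
```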
3 A Deviation Bound for Importance Weighted Estimators
As mentioned before, the rejection threshold used by our algorithm is based on importance weighted
error estimates err(h, Z1:n ). Even though these estimates are unbiased, they are only reliable when
the variance is not too large. To get a handle on this, we need a deviation bound for importance
weighted estimators. This is complicated by two factors that rule out straightforward applications
of some standard bounds:
1. The importance weighted samples (Xᵢ, Yᵢ, 1/Pᵢ) (or equivalently, the Zᵢ = (Xᵢ, Yᵢ, Qᵢ))
are not i.i.d. This is because the query probability Pᵢ (and thus the importance weight 1/Pᵢ)
generally depends on Z_{1:i−1} and Xᵢ.
2. The effective range and variance of each term in the estimator are, themselves, random
variables.
To address these issues, we develop a deviation bound using a martingale technique from [19].
Let f : X × Y → [−1, 1] be a bounded function. Consider any rejection threshold function p :
(X × Y × {0, 1})* × X → (0, 1] for which P_n = p(Z_{1:n−1}, X_n) is bounded below by some positive
quantity (which may depend on n). Equivalently, the query probabilities P_n should have inverses
1/P_n bounded above by some deterministic quantity r_max (which, again, may depend on n). The
a priori upper bound r_max on 1/P_n can be pessimistic, as the dependence on r_max in the final
deviation bound will be very mild; it enters in as log log r_max. Our goal is to prove a bound on
|f̂(Z_{1:n}) − E[f(X, Y)]| that holds with high probability over the joint distribution of Z_{1:n}.
To start, we establish bounds on the range and variance of each term Wᵢ := (Qᵢ/Pᵢ) · f(Xᵢ, Yᵢ) in
the estimator, conditioned on (X_{1:i}, Y_{1:i}, Q_{1:i−1}). Let Eᵢ[·] denote E[· | X_{1:i}, Y_{1:i}, Q_{1:i−1}]. Note
that Eᵢ[Wᵢ] = (Eᵢ[Qᵢ]/Pᵢ) · f(Xᵢ, Yᵢ) = f(Xᵢ, Yᵢ), so if Eᵢ[Wᵢ] = 0, then Wᵢ = 0. Therefore,
the (conditional) range and variance are non-zero only if Eᵢ[Wᵢ] ≠ 0. For the range, we have
|Wᵢ| = (Qᵢ/Pᵢ) · |f(Xᵢ, Yᵢ)| ≤ 1/Pᵢ, and for the variance, Eᵢ[(Wᵢ − Eᵢ[Wᵢ])²] ≤ (Eᵢ[Qᵢ²]/Pᵢ²) ·
f(Xᵢ, Yᵢ)² ≤ 1/Pᵢ. These range and variance bounds indicate the form of the deviations we can
expect, similar to that of other classical deviation bounds.
Theorem 1. Pick any t ≥ 0 and n ≥ 1. Assume 1 ≤ 1/Pᵢ ≤ r_max for all 1 ≤ i ≤ n, and let
R_n := 1/min({Pᵢ : 1 ≤ i ≤ n ∧ f(Xᵢ, Yᵢ) ≠ 0} ∪ {1}). With probability at least 1 − 2(3 +
log₂ r_max) e^{−t/2},

$$\left|\frac{1}{n}\sum_{i=1}^n \frac{Q_i}{P_i}\, f(X_i, Y_i) - E[f(X, Y)]\right| \;\le\; \sqrt{\frac{2 R_n t}{n}} + \sqrt{\frac{2t}{n}} + \frac{R_n t}{3n}.$$
We defer all proofs to the appendices.
4 Algorithm
First, we state a deviation bound for the importance weighted error of hypotheses in a finite hypothesis class H that holds for all n ≥ 1. It is a simple consequence of Theorem 1 and union bounds;
the form of the bound motivates certain algorithmic choices to be described below.
Lemma 1. Pick any δ ∈ (0, 1). For all n ≥ 1, let

$$\epsilon_n := \frac{16 \log\!\left(2(3 + n \log_2 n)\, n(n+1)\, |H| / \delta\right)}{n} = O\!\left(\frac{\log(n|H|/\delta)}{n}\right). \qquad (3)$$
Let (Z₁, Z₂, ...) ∈ (X × Y × {0, 1})^∞ be the sequence of random variables specified in Section 2.2
using a rejection threshold p : (X × Y × {0, 1})* × X → [0, 1] that satisfies p(z_{1:n}, x) ≥ 1/nⁿ for
all (z_{1:n}, x) ∈ (X × Y × {0, 1})ⁿ × X and all n ≥ 1.
The following holds with probability at least 1 − δ. For all n ≥ 1 and all h ∈ H,

$$\left|\left(\mathrm{err}(h, Z_{1:n}) - \mathrm{err}(h^*, Z_{1:n})\right) - \left(\mathrm{err}(h) - \mathrm{err}(h^*)\right)\right| \;\le\; \sqrt{\frac{\epsilon_n}{P_{\min,n}(h)}} + \frac{\epsilon_n}{P_{\min,n}(h)} \qquad (4)$$

where $P_{\min,n}(h) = \min\left(\{P_i : 1 \le i \le n \wedge h(X_i) \neq h^*(X_i)\} \cup \{1\}\right)$.
We let C₀ = O(log(|H|/δ)) ≥ 2 be a quantity such that ε_n (as defined in Eq. (3)) is bounded as
ε_n ≤ C₀ · log(n + 1)/n. The following absolute constants are used in the description of the rejection
Algorithm 1
Notes: see Eq. (1) for the definition of err (importance weighted error), and Section 4 for the
definitions of C₀, c₁, and c₂.
Initialize: S₀ := ∅.
For k = 1, 2, ..., n:
1. Obtain unlabeled data point X_k.
2. Let
h_k := arg min{err(h, S_{k−1}) : h ∈ H}, and
h′_k := arg min{err(h, S_{k−1}) : h ∈ H ∧ h(X_k) ≠ h_k(X_k)}.
Let G_k := err(h′_k, S_{k−1}) − err(h_k, S_{k−1}), and

$$P_k := \begin{cases} 1 & \text{if } G_k \le \sqrt{\frac{C_0 \log k}{k-1}} + \frac{C_0 \log k}{k-1} \\ s & \text{otherwise} \end{cases} \;=\; \min\left\{1,\; O\!\left(\frac{1}{G_k^2} + \frac{1}{G_k}\right)\cdot\frac{C_0 \log k}{k-1}\right\}$$

where s ∈ (0, 1) is the positive solution to the equation

$$G_k = \left(\frac{c_1}{\sqrt{s}} - c_1 + 1\right)\sqrt{\frac{C_0 \log k}{k-1}} + \left(\frac{c_2}{s} - c_2 + 1\right)\frac{C_0 \log k}{k-1}. \qquad (2)$$

3. Toss a biased coin with Pr(heads) = P_k.
If heads, then query Y_k, and let S_k := S_{k−1} ∪ {(X_k, Y_k, 1/P_k)}.
Else, let S_k := S_{k−1}.
Return: h_{n+1} := arg min{err(h, S_n) : h ∈ H}.

Figure 1: Algorithm for importance weighted active learning with an error minimization oracle.
threshold and the subsequent analysis: c₁ := 5 + 2√2, c₂ := 5, c₃ := ((c₁ + 2)/(c₁ − 2))²,
c₄ := (c₁ + c₃)², c₅ := c₂ + c₃.
Our proposed algorithm is shown in Figure 1. The rejection threshold (Step 2) is based on the
deviation bound from Lemma 1. First, the importance weighted error minimizing hypothesis h_k and
the "alternative" hypothesis h′_k are found. Note that both optimizations are over the entire hypothesis
class H (with h′_k only being required to disagree with h_k on x_k); this is a key aspect where our
algorithm differs from previous approaches. The difference G_k in the importance weighted errors of
the two hypotheses is then computed. If G_k ≤ √((C₀ log k)/(k − 1)) + (C₀ log k)/(k − 1), then
the query probability P_k is set to 1. Otherwise, P_k is set to the positive solution s of the quadratic
equation in Eq. (2). The functional form of P_k is roughly min{1, (1/G_k² + 1/G_k) · (C₀ log k)/(k −
1)}. It can be checked that P_k ∈ (0, 1] and that P_k is non-increasing with G_k. It is also useful to note
that (log k)/(k − 1) is monotonically decreasing with k ≥ 1 (we use the convention log(1)/0 = ∞).
In order to apply Lemma 1 with our rejection threshold, we need to establish the (very crude) bound
P_k ≥ 1/kᵏ for all k.
Lemma 2. The rejection threshold of Algorithm 1 satisfies p(z_{1:n−1}, x) ≥ 1/nⁿ for all n ≥ 1 and
all (z_{1:n−1}, x) ∈ (X × Y × {0, 1})^{n−1} × X.
Note that this is a worst-case bound; our analysis shows that the probabilities Pk are more like
1/poly(k) in the typical case.
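To make the procedure concrete, here is a minimal Python sketch (ours, not the authors' implementation) of Algorithm 1 for a small finite hypothesis class: the two ERM steps are brute-force scans over H, and the positive solution of Eq. (2) is found by bisection, which is an implementation choice. The constants follow the experimental settings later in the paper, and the threshold-classifier example at the end is purely illustrative.

```python
import numpy as np

C0, c1, c2 = 8.0, 1.0, 1.0   # treated as tuning constants, as in Section 6

def iw_err(h, S, n):
    # importance weighted error, Eq. (1); S holds (x, y, 1/P_i) triples
    return sum(w * (h(x) != y) for x, y, w in S) / max(n, 1)

def solve_query_prob(G, T):
    # Solve Eq. (2) for s in (0,1):
    #   G = (c1/sqrt(s) - c1 + 1) sqrt(T) + (c2/s - c2 + 1) T,
    # whose right-hand side decreases in s, so bisection finds the root.
    f = lambda s: (c1 / np.sqrt(s) - c1 + 1) * np.sqrt(T) + (c2 / s - c2 + 1) * T - G
    lo, hi = 1e-12, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

def iwal_erm(H, stream, rng):
    """H: finite list of hypotheses (callables x -> {-1,+1});
    stream: iterable of (x, y); the label y is used only when queried."""
    S, k = [], 0
    for k, (x, y) in enumerate(stream, start=1):
        if k == 1:
            P = 1.0   # log(1)/0 = infinity by convention, so the first case fires
        else:
            errs = [iw_err(h, S, k - 1) for h in H]
            hk = H[int(np.argmin(errs))]
            alt = [e for h, e in zip(H, errs) if h(x) != hk(x)]
            T = C0 * np.log(k) / (k - 1)
            if not alt:                # cannot happen if H never fully agrees
                P = 0.0
            else:
                G = min(alt) - min(errs)
                P = 1.0 if G <= np.sqrt(T) + T else solve_query_prob(G, T)
        if P > 0 and rng.uniform() < P:   # toss the biased coin
            S.append((x, y, 1.0 / P))
    return min(H, key=lambda h: iw_err(h, S, k))

# tiny example: threshold classifiers on noisy 1-D data
rng = np.random.default_rng(0)
H = [(lambda t: (lambda x: 1 if x > t else -1))(t) for t in np.linspace(0, 1, 21)]
stream = [(x, (1 if x > 0.3 else -1) * (1 if rng.uniform() < 0.9 else -1))
          for x in rng.uniform(size=2000)]
h_final = iwal_erm(H, stream, rng)
```

With a finite H this runs as-is; the per-round linear scans over H merely mimic the ERM oracle abstraction and would be replaced by an actual learner in practice.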
5 Analysis
5.1 Correctness
We first prove a consistency guarantee for Algorithm 1 that bounds the generalization error of the
importance weighted empirical error minimizer. The proof actually establishes a lower bound on
the query probabilities Pᵢ ≥ 1/2 for Xᵢ such that h_n(Xᵢ) ≠ h*(Xᵢ). This offers an intuitive
characterization of the weighting landscape induced by the importance weights 1/Pi .
Theorem 2. The following holds with probability at least 1 − δ. For any n ≥ 1,

$$0 \le \mathrm{err}(h_n) - \mathrm{err}(h^*) \le \mathrm{err}(h_n, Z_{1:n-1}) - \mathrm{err}(h^*, Z_{1:n-1}) + \sqrt{\frac{2 C_0 \log n}{n-1}} + \frac{2 C_0 \log n}{n-1}.$$

This implies, for all n ≥ 1,

$$\mathrm{err}(h_n) \le \mathrm{err}(h^*) + \sqrt{\frac{2 C_0 \log n}{n-1}} + \frac{2 C_0 \log n}{n-1}.$$
Therefore, the final hypothesis returned by Algorithm 1 after seeing n unlabeled data has roughly
the same error bound as a hypothesis returned by a standard passive learner with n labeled data. A
variant of this result under certain noise conditions is given in the appendix.
5.2 Label Complexity Analysis
We now bound the number of labels requested by Algorithm 1 after n iterations. The following
lemma bounds the probability of querying the label Yn ; this is subsequently used to establish the
final bound on the expected number of labels queried. The key to the proof is in relating empirical
error differences and their deviations to the probability of querying a label. This is mediated through
the disagreement coefficient, a quantity first used by [14] for analyzing the label complexity of the
A² algorithm of [3]. The disagreement coefficient θ := θ(h*, H, D) is defined as

$$\theta(h^*, H, D) := \sup\left\{\frac{\Pr(X \in \mathrm{DIS}(h^*, r))}{r} : r > 0\right\}$$

where

$$\mathrm{DIS}(h^*, r) := \{x \in X : \exists h' \in H \text{ such that } \Pr(h'(X) \neq h^*(X)) \le r \text{ and } h'(x) \neq h^*(x)\}$$

(the disagreement region around h* at radius r). This quantity is bounded for many learning problems studied in the literature; see [14, 6, 20, 21] for more discussion. Note that the supremum can
instead be taken over r > ε if the target excess error is ε, which allows for a more detailed analysis.
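As a toy illustration (ours, not from the paper), the disagreement coefficient has a closed form for one-dimensional threshold classifiers under a uniform marginal; the snippet below tabulates the ratio Pr(X ∈ DIS(h*, r))/r, which equals 2 for this class.

```python
import numpy as np

# Threshold classifiers h_t(x) = sign(x - t) with X ~ Uniform[0,1] and
# h* = h_{0.5}. Here Pr(h_t != h*) = |t - 0.5|, so DIS(h*, r) is the
# interval (0.5 - r, 0.5 + r), and the disagreement coefficient is 2.
for r in [0.4, 0.2, 0.1, 0.05]:
    mass = min(0.5 + r, 1.0) - max(0.5 - r, 0.0)  # Pr(X in DIS(h*, r))
    print(r, mass / r)                             # each ratio equals 2
```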
Lemma 3. Assume the bounds from Eq. (4) hold for all h ∈ H and n ≥ 1. For any n ≥ 1,

$$E[Q_n] \le \theta \cdot 2\,\mathrm{err}(h^*) + O\!\left(\theta\cdot\sqrt{\frac{C_0 \log n}{n-1}} + \theta\cdot\frac{C_0 \log^2 n}{n-1}\right).$$
Theorem 3. With probability at least 1 − δ, the expected number of labels queried by Algorithm 1
after n iterations is at most

$$1 + \theta\cdot 2\,\mathrm{err}(h^*)\cdot(n-1) + O\!\left(\theta\cdot\sqrt{C_0\, n \log n} + \theta\cdot C_0 \log^3 n\right).$$
The bound is dominated by a linear term scaled by err(h*), plus a sublinear term. The linear term
err(h*) · n is unavoidable in the worst case, as evident from label complexity lower bounds [15, 5].
When err(h*) is negligible (e.g., the data is separable) and θ is bounded (as is the case for many
problems studied in the literature [14]), then the bound represents a polynomial label complexity improvement over supervised learning, similar to that achieved by the version space algorithm
from [5].
5.3 Analysis under Low Noise Conditions
Some recent work on active learning has focused on improved label complexity under certain noise
conditions [17, 8, 18, 6, 7]. Specifically, it is assumed that there exist constants κ > 0 and 0 < α ≤ 1
such that

$$\Pr(h(X) \neq h^*(X)) \le \kappa\cdot\left(\mathrm{err}(h) - \mathrm{err}(h^*)\right)^{\alpha} \qquad (5)$$

for all h ∈ H. This is related to Tsybakov's low noise condition [16]. Essentially, this condition
requires that low error hypotheses not be too far from the optimal hypothesis h* under the disagreement metric Pr(h*(X) ≠ h(X)). Under this condition, Lemma 3 can be improved, which in turn
yields the following theorem.
Theorem 4. Assume that for some value of κ > 0 and 0 < α ≤ 1, the condition in Eq. (5) holds
for all h ∈ H. There is a constant c_α > 0 depending only on α such that the following holds. With
probability at least 1 − δ, the expected number of labels queried by Algorithm 1 after n iterations is
at most

$$\theta\cdot\kappa\cdot c_\alpha\cdot (C_0 \log n)^{\alpha/2}\cdot n^{1-\alpha/2}.$$
Note that the bound is sublinear in n for all 0 < α ≤ 1, which implies label complexity improvements whenever θ is bounded (an improved analogue of Theorem 2 under these conditions can be
established using similar techniques). The previous algorithms of [6, 7] obtain even better rates
under these noise conditions using specialized data dependent generalization bounds, but these algorithms also required optimizations over restricted version spaces, even for the bound computation.
6 Experiments
Although agnostic learning is typically intractable in the worst case, empirical risk minimization can
serve as a useful abstraction for many practical supervised learning algorithms in non-worst case
scenarios. With this in mind, we conducted a preliminary experimental evaluation of Algorithm 1,
implemented using a popular algorithm for learning decision trees in place of the required ERM
oracle. Specifically, we use the J48 algorithm from Weka v3.6.2 (with default parameters) to select
the hypothesis h_k in each round k; to produce the "alternative" hypothesis h′_k, we just modify
the decision tree hk by changing the label of the node used for predicting on xk . Both of these
procedures are clearly heuristic, but they are similar in spirit to the required optimizations. We
set C₀ = 8 and c₁ = c₂ = 1; these can be regarded as tuning parameters, with C₀ controlling
the aggressiveness of the rejection threshold. We did not perform parameter tuning with active
learning although the importance weighting approach developed here could potentially be used for
that. Rather, the goal of these experiments is to assess the compatibility of Algorithm 1 with an
existing, practical supervised learning procedure.
6.1 Data Sets
We constructed two binary classification tasks using MNIST and KDDCUP99 data sets. For MNIST,
we randomly chose 4000 training 3s and 5s for training (using the 3s as the positive class), and used
all of the 1902 testing 3s and 5s for testing. For KDDCUP99, we randomly chose 5000 examples
for training, and another 5000 for testing. In both cases, we reduced the dimension of the data to 25
using PCA.
To demonstrate the versatility of our algorithm, we also conducted a multi-class classification experiment using the entire MNIST data set (all ten digits, so 60000 training data and 10000 testing data).
This required modifying how h′_k is selected: we force h′_k(x_k) ≠ h_k(x_k) by changing the label of
the prediction node for x_k to the next best label. We used PCA to reduce the dimension to 40.
6.2 Results
We examined the test error as a function of (i) the number of unlabeled data seen, and (ii) the number
of labels queried. We compared the performance of the active learner described above to a passive
learner (one that queries every label, so (i) and (ii) are the same) using J48 with default parameters.
In all three cases, the test errors as a function of the number of unlabeled data were roughly the same
for both the active and passive learners. This agrees with the consistency guarantee from Theorem 2.
We note that this is a basic property not satisfied by many active learning algorithms (this issue is
discussed further in [22]).
In terms of test error as a function of the number of labels queried (Figure 2), the active learner
had minimal improvement over the passive learner on the binary MNIST task, but a substantial
improvement over the passive learner on the KDDCUP99 task (even at small numbers of label
queries). For the multi-class MNIST task, the active learner had a moderate improvement over the
passive learner. Note that KDDCUP99 is far less noisy (more separable) than MNIST 3s vs 5s task,
so the results are in line with the label complexity behavior suggested by Theorem 3, which states
that the label complexity improvement may scale with the error of the optimal hypothesis. Also,
[Figure 2 appears here: four panels plotting test error (vertical axis) against the number of labels
queried (horizontal axis) for the passive and active learners, on MNIST 3s vs 5s, KDDCUP99,
KDDCUP99 (close-up), and MNIST multi-class (close-up).]

Figure 2: Test errors as a function of the number of labels queried.
the results from MNIST tasks suggest that the active learner may require an initial random sampling
phase during which it is equivalent to the passive learner, and the advantage manifests itself after
this phase. This again is consistent with the analysis (also see [14]), as the disagreement coefficient
can be large at initial scales, yet much smaller as the number of (unlabeled) data increases and the
scale becomes finer.
7 Conclusion
This paper provides a new active learning algorithm based on error minimization oracles, a departure from the version space approach adopted by previous works. The algorithm we introduce here
motivates computationally tractable and effective methods for active learning with many classifier
training algorithms. The overall algorithmic template applies to any training algorithm that (i) operates by approximate error minimization and (ii) for which the cost of switching a class prediction
(as measured by example errors) can be estimated. Furthermore, although these properties might
only hold in an approximate or heuristic sense, the created active learning algorithm will be "safe"
in the sense that it will eventually converge to the same solution as a passive supervised learning
algorithm. Consequently, we believe this approach can be widely used to reduce the cost of labeling
in situations where labeling is expensive.
Recent theoretical work on active learning has focused on improving rates of convergence. However,
in some applications, it may be desirable to improve performance at much smaller sample sizes, perhaps even at the cost of improved rates as long as consistency is ensured. Importance sampling and
weighting techniques like those analyzed in this work may be useful for developing more aggressive
strategies with such properties.
Acknowledgments
This work was completed while DH was at Yahoo! Research and UC San Diego.
References
[1] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning,
15(2):201–221, 1994.
[2] S. Dasgupta. Coarse sample complexity bounds for active learning. In Advances in Neural Information
Processing Systems 18, 2005.
[3] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In Twenty-Third International
Conference on Machine Learning, 2006.
[4] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In Advances in
Neural Information Processing Systems 20, 2007.
[5] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance weighted active learning. In Twenty-Sixth
International Conference on Machine Learning, 2009.
[6] S. Hanneke. Adaptive rates of convergence in active learning. In Twenty-Second Annual Conference on
Learning Theory, 2009.
[7] V. Koltchinskii. Rademacher complexities and bounding the excess risk in active learning. Manuscript,
2009.
[8] M.-F. Balcan, A. Broder, and T. Zhang. Margin based active learning. In Twentieth Annual Conference
on Learning Theory, 2007.
[9] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[10] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem.
SIAM Journal of Computing, 32:48–77, 2002.
[11] M. Sugiyama, M. Krauledat, and K.-R. Müller. Covariate shift adaptation by importance weighted cross
validation. Journal of Machine Learning Research, 8:985–1005, 2007.
[12] M. Sugiyama. Active learning for misspecified models. In Advances in Neural Information Processing
Systems 18, 2005.
[13] F. Bach. Active learning for misspecified generalized linear models. In Advances in Neural Information
Processing Systems 19, 2006.
[14] S. Hanneke. A bound on the label complexity of agnostic active learning. In Twenty-Fourth International
Conference on Machine Learning, 2007.
[15] M. Kääriäinen. Active learning in the non-realizable case. In Seventeenth International Conference on
Algorithmic Learning Theory, 2006.
[16] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Annals of Statistics, 32(1):135–166, 2004.
[17] R. Castro and R. Nowak. Upper and lower bounds for active learning. In Allerton Conference on Communication, Control and Computing, 2006.
[18] R. Castro and R. Nowak. Minimax bounds for active learning. In Twentieth Annual Conference on
Learning Theory, 2007.
[19] T. Zhang. Data dependent concentration bounds for sequential prediction algorithms. In Eighteenth
Annual Conference on Learning Theory, 2005.
[20] E. Friedman. Active learning for smooth problems. In Twenty-Second Annual Conference on Learning
Theory, 2009.
[21] L. Wang. Sufficient conditions for agnostic active learnable. In Advances in Neural Information Processing Systems 22, 2009.
[22] S. Dasgupta and D. Hsu. Hierarchical sampling for active learning. In Twenty-Fifth International Conference on Machine Learning, 2008.
| 4014 |
3,330 | 4,015 | Multi-Stage Dantzig Selector
Ji Liu, Peter Wonka, Jieping Ye
Arizona State University
{ji.liu,peter.wonka,jieping.ye}@asu.edu
Abstract
We consider the following sparse signal recovery (or feature selection) problem:
given a design matrix X ∈ ℝ^{n×m} (m ≫ n) and a noisy observation vector
y ∈ ℝⁿ satisfying y = Xβ* + ε where ε is the noise vector following a Gaussian
distribution N(0, σ²I), how to recover the signal (or parameter vector) β* when
the signal is sparse?

The Dantzig selector has been proposed for sparse signal recovery with strong
theoretical guarantees. In this paper, we propose a multi-stage Dantzig selector
method, which iteratively refines the target signal β*. We show that if X obeys a
certain condition, then with a large probability the difference between the solution
β̂ estimated by the proposed method and the true solution β* measured in terms
of the ℓ_p norm (p ≥ 1) is bounded as

$$\|\hat\beta - \beta^*\|_p \le \left(C(s - N)^{1/p}\sqrt{\log m} + \Delta\right)\sigma,$$

where C is a constant, s is the number of nonzero entries in β*, Δ is independent
of m and is much smaller than the first term, and N is the number of entries of
β* larger than a certain value in the order of O(σ√(log m)). The proposed method
improves the estimation bound of the standard Dantzig selector approximately
from Cs^{1/p}√(log m)σ to C(s − N)^{1/p}√(log m)σ where the value N depends on the
number of large entries in β*. When N = s, the proposed algorithm achieves the
oracle solution with a high probability. In addition, with a large probability, the
proposed method can select the same number of correct features under a milder
condition than the Dantzig selector.
1 Introduction
The sparse signal recovery problem has been studied in many areas including machine learning
[18, 19, 22], signal processing [8, 14, 17], and mathematics/statistics [2, 5, 7, 10, 11, 12, 13, 20].
In the sparse signal recovery problem, one is mainly interested in the signal recovery accuracy, i.e.,
the distance between the estimation β̂ and the original signal or the true solution β*. If the design
matrix X is considered as a feature matrix, i.e., each column is a feature vector, and the observation
y as a target object vector, then the sparse signal recovery problem is equivalent to feature selection
(or model selection). In feature selection, one is concerned with the feature selection accuracy. Typically,
a group of features corresponding to the coefficient values in β̂ larger than a threshold form the
supporting feature set. The difference between this set and the true supporting set (i.e., the set of
features corresponding to nonzero coefficients in the original signal) measures the feature selection
accuracy.
Two well-known algorithms for learning sparse signals include LASSO [15] and Dantzig selector [7]:

$$\text{LASSO:}\quad \min_{\beta}\; \tfrac{1}{2}\|X\beta - y\|_2^2 + \lambda\|\beta\|_1 \qquad (1)$$

$$\text{Dantzig selector:}\quad \min_{\beta}\; \|\beta\|_1 \quad \text{s.t.}\quad \|X^T(X\beta - y)\|_\infty \le \lambda \qquad (2)$$
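Since (2) is a linear program after splitting β into positive and negative parts, it can be solved with an off-the-shelf LP solver. The sketch below (our addition, not the authors' code) uses scipy's linprog; the choice λ = σ√(2 log m) follows the usual scaling in the Dantzig selector literature, and the synthetic data are an illustrative assumption.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    """Solve (2) as an LP via the split beta = b_pos - b_neg, b_pos, b_neg >= 0."""
    n, m = X.shape
    A, b = X.T @ X, X.T @ y
    c = np.ones(2 * m)                           # objective = ||beta||_1
    # |A beta - b| <= lam, written as two stacks of inequalities
    A_ub = np.vstack([np.hstack([A, -A]), np.hstack([-A, A])])
    b_ub = np.concatenate([b + lam, -b + lam])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * m), method="highs")
    return res.x[:m] - res.x[m:]

# small synthetic check
rng = np.random.default_rng(0)
n, m, s, sigma = 50, 200, 5, 0.1
X = rng.normal(size=(n, m)) / np.sqrt(n)   # columns roughly unit norm
beta_star = np.zeros(m); beta_star[:s] = 1.0
y = X @ beta_star + sigma * rng.normal(size=n)
beta_hat = dantzig_selector(X, y, lam=sigma * np.sqrt(2 * np.log(m)))
print(np.linalg.norm(beta_hat - beta_star))
```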
Strong theoretical results concerning LASSO and Dantzig selector have been established in the
literature [4, 5, 7, 17, 20, 22].
1.1 Contributions
In this paper, we propose a multi-stage procedure based on the Dantzig selector, which estimates
the supporting feature set F₀ and the signal β̂ iteratively. The intuition behind the proposed multi-stage method is that feature selection and signal recovery are tightly correlated and they can benefit
from each other: a more accurate estimation of the supporting features can lead to a better signal
recovery, and a more accurate signal recovery can help identify a better set of supporting features.
In the proposed method, the supporting set F₀ starts from an empty set and its size increases by one
after each iteration. At each iteration, we employ the basic framework of the Dantzig selector and the
information about the current supporting feature set F₀ to estimate the new signal β̂. In addition, we
select the supporting feature candidates in F₀ among all features in the data at each iteration, thus
allowing incorrect features to be removed from the previous supporting feature set.
The main contributions of this paper lie in the theoretical analysis of the proposed method. Specifically, we show: 1) the proposed method can improve the estimation bound of the standard Dantzig
selector approximately from Cs^{1/p}√(log m)σ to C(s − N)^{1/p}√(log m)σ, where the value N depends
on the number of large entries in β*; 2) when N = s, the proposed algorithm can achieve the oracle
solution with a high probability; 3) with a high probability, the proposed method can select the same
number of correct features under a milder condition than the standard Dantzig selector method. The
numerical experiments validate these theoretical results.
1.2 Related Work
Sparse signal recovery without the observation noise was studied in [6]. It has been shown that
under certain irrepresentable conditions, the 0-support of the LASSO solution is consistent with the
true solution. It was shown that when the absolute value of each element in the true solution is large
enough, a weaker condition (coherence property) can guarantee the feature selection accuracy [5].
The prediction bound of LASSO, i.e., ‖X(β̂ − β*)‖₂, was also presented. A comprehensive analysis for LASSO, including the recovery accuracy in an arbitrary ℓ_p norm (p ≥ 1), was presented
in [20]. In [7], the Dantzig selector was proposed for sparse signal recovery and a bound of recovery
accuracy with the same order as LASSO was presented. An approximate equivalence between the
LASSO estimator and the Dantzig selector was shown in [1]. In [11], the ℓ_∞ convergence rate was
studied simultaneously for LASSO and Dantzig estimators in a high-dimensional linear regression
model under a mutual coherence assumption. In [9], conditions on the design matrix X under which
the LASSO and Dantzig selector coefficient estimates are identical for certain tuning parameters
were provided.
Many heuristic methods have been proposed in the past, including greedy least squares regression [16, 8, 19, 21, 3], two-stage LASSO [20], multiple thresholding procedures [23], and the adaptive LASSO [24]. They have been shown to outperform the standard convex methods in many practical applications. It was shown [16] that under an irrepresentable condition, the solution of the greedy least squares regression algorithm (also named OMP or forward greedy algorithm) guarantees feature selection consistency in the noiseless case. The results in [16] were extended to the noisy case [19]. Very recently, the results were further improved in [21] by considering arbitrary loss functions (not necessarily quadratic). In [3], the consistency of OMP was shown under mutual incoherence conditions. A multiple thresholding procedure was proposed to refine the solution of LASSO or the Dantzig selector [23]. An adaptive forward-backward greedy algorithm was proposed [18], and it was shown that under the restricted isometry condition, feature selection consistency is achieved if the minimal nonzero entry in the true solution is larger than $O(\sigma\sqrt{\log m})$. The adaptive LASSO was proposed to adaptively tune the weight value for the $\ell_1$ penalty, and it was shown to enjoy the oracle properties [24].
1.3
Definitions, Notations, and Basic Assumptions
We use $X \in \mathbb{R}^{n\times m}$ to denote the design matrix and focus on the case $m \gg n$, i.e., the signal dimension is much larger than the observation dimension. The correlation matrix $A$ is defined as $A = X^T X$ with respect to the design matrix. The noise vector $\varepsilon$ follows the multivariate normal distribution $\varepsilon \sim N(0, \sigma^2 I)$. The observation vector $y \in \mathbb{R}^n$ satisfies $y = X\beta^* + \varepsilon$, where $\beta^*$ denotes the original signal (or true solution). $\hat\beta$ is used to denote the solution of the proposed algorithm. The $\alpha$-supporting set ($\alpha \ge 0$) for a vector $\beta$ is defined as
$$\mathrm{supp}_\alpha(\beta) = \{j : |\beta_j| > \alpha\}.$$
The "supporting" set of a vector refers to the 0-supporting set. $F$ denotes the supporting set of the original signal $\beta^*$. For any index set $S$, $|S|$ denotes the size of the set and $\bar S$ denotes the complement of $S$ in $\{1, 2, 3, ..., m\}$. In this paper, $s$ is used to denote the size of the supporting set $F$, i.e., $s = |F|$. We use $\beta_S$ to denote the subvector of $\beta$ consisting of the entries of $\beta$ in the index set $S$.
The $\ell_p$ norm of a vector $v$ is computed by $\|v\|_p = \left(\sum_i |v_i|^p\right)^{1/p}$, where $v_i$ denotes the $i$-th entry of $v$. The oracle solution $\bar\beta$ is defined as $\bar\beta_F = (X_F^T X_F)^{-1} X_F^T y$ and $\bar\beta_{\bar F} = 0$. We employ the following notation to measure some properties of a PSD matrix $M \in \mathbb{R}^{K\times K}$ [20]:
$$\mu^{(p)}_{M,k} = \inf_{u\in\mathbb{R}^k,\,|I|=k} \frac{\|M_{I,I}u\|_p}{\|u\|_p}, \qquad \rho^{(p)}_{M,k} = \sup_{u\in\mathbb{R}^k,\,|I|=k} \frac{\|M_{I,I}u\|_p}{\|u\|_p},   (3)$$
$$\theta^{(p)}_{M,k,l} = \sup_{u\in\mathbb{R}^l,\,|I|=k,\,|J|=l,\,I\cap J=\emptyset} \frac{\|M_{I,J}u\|_p}{\|u\|_p},   (4)$$
where $p \in [1, \infty]$, $I$ and $J$ are disjoint subsets of $\{1, 2, ..., K\}$, and $M_{I,J} \in \mathbb{R}^{|I|\times|J|}$ is a submatrix of $M$ with rows from the index set $I$ and columns from the index set $J$. Additionally, we use the following notation to denote two probabilities:
$$\eta_1' = \varepsilon_1 \left(\pi \log((m-s)/\varepsilon_1)\right)^{-1/2}, \qquad \eta_2' = \varepsilon_2 \left(\pi \log(s/\varepsilon_2)\right)^{-1/2},   (5)$$
where $\varepsilon_1$ and $\varepsilon_2$ are two factors between 0 and 1. In this paper, if we say "large", "larger" or "the largest", we mean that the absolute value is large, larger or the largest. For simpler notation in the computation of sets, we sometimes use "$S_1 + S_2$" to indicate the union of two sets $S_1$ and $S_2$, and use "$S_1 - S_2$" to indicate the removal of the intersection of $S_1$ and $S_2$ from the first set $S_1$. In this paper, the following assumption is always admitted.
Assumption 1. We assume that $s = |\mathrm{supp}_0(\beta^*)| < n$; the number of variables is much larger than the number of observations (i.e., $m \gg n$); each column vector is normalized as $X_i^T X_i = 1$, where $X_i$ indicates the $i$-th column (or feature) of $X$; and the noise vector $\varepsilon$ follows the Gaussian distribution $N(0, \sigma^2 I)$.
In the literature, it is often assumed that $X_i^T X_i = n$, which is essentially identical to our assumption. However, this may lead to a slight difference of a factor $\sqrt{n}$ in some conclusions. We have automatically transformed conclusions from related work according to our assumption when citing them in our paper.
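As an illustration of the quantities in (3)-(4), the following sketch (added here for concreteness; the function name and the brute-force enumeration are our own choices, not part of the analysis) evaluates $\mu^{(2)}_{M,k}$, $\rho^{(2)}_{M,k}$ and $\theta^{(2)}_{M,k,l}$ for a small PSD matrix, using the fact that for $p = 2$ the infimum and supremum over $u$ are the extreme singular values of the corresponding submatrices. The enumeration is exponential in $K$ and is intended only for toy examples.

import numpy as np
from itertools import combinations

def mu_rho_theta_p2(M, k, l):
    # For p = 2: inf/sup of ||M_{I,I} u||_2 / ||u||_2 over u are the smallest/largest
    # singular values of the k x k submatrix M_{I,I}; theta is the largest top singular
    # value of M_{I,J} over disjoint index sets I (|I| = k) and J (|J| = l).
    K = M.shape[0]
    subsets_k = [list(I) for I in combinations(range(K), k)]
    mu = min(np.linalg.svd(M[np.ix_(I, I)], compute_uv=False)[-1] for I in subsets_k)
    rho = max(np.linalg.svd(M[np.ix_(I, I)], compute_uv=False)[0] for I in subsets_k)
    theta = 0.0
    for I in subsets_k:
        rest = [j for j in range(K) if j not in I]
        for J in combinations(rest, l):
            theta = max(theta, np.linalg.svd(M[np.ix_(I, list(J))], compute_uv=False)[0])
    return mu, rho, theta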
1.4
Organization
The rest of the paper is organized as follows. We present our multi-stage algorithm in Section 2. The main theoretical results are summarized in Section 3; all proofs can be found in the supplementary material. The numerical simulation is reported in Section 4. Finally, we conclude the paper in Section 5.
2
The Multi-Stage Dantzig Selector Algorithm
In this section, we introduce the multi-stage Dantzig selector algorithm. In the proposed method, we update the support set $F_0$ and the estimation $\hat\beta$ iteratively; the supporting set $F_0$ starts from an empty set and its size increases by one after each iteration. At each iteration, we employ the basic framework of the Dantzig selector and the information about the current supporting set $F_0$ to estimate the new signal $\hat\beta$ by solving the following linear program:
$$\min \|\beta_{\bar F_0}\|_1 \quad \text{s.t.} \quad \|X_{\bar F_0}^T(X\beta - y)\|_\infty \le \lambda, \quad \|X_{F_0}^T(X\beta - y)\|_\infty = 0.   (6)$$
Since the features in $F_0$ are considered as the supporting candidates, it is natural to enforce them to be orthogonal to the residual vector $X\beta - y$; i.e., one should make full use of them in reconstructing the observation $y$. This is the rationale behind the constraint $\|X_{F_0}^T(X\beta - y)\|_\infty = 0$. Another advantage is that, when all correct features are chosen, the proposed algorithm can be shown to converge to the oracle solution. The detailed procedure is formally described in Algorithm 1 below. Clearly, when $F_0^{(0)} = \emptyset$ and $N = 0$, the proposed method is identical to the standard Dantzig selector.
Algorithm 1 Multi-Stage Dantzig Selector
Require: $F_0^{(0)}$, $\lambda$, $N$, $X$, $y$
Ensure: $\hat\beta^{(N)}$, $F_0^{(N)}$
1: for $i = 0, 1, \dots, N$ do
2:   Obtain $\hat\beta^{(i)}$ by solving problem (6) with $F_0 = F_0^{(i)}$.
3:   Form $F_0^{(i+1)}$ as the index set of the $i+1$ largest elements of $\hat\beta^{(i)}$.
4: end for
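A minimal sketch of Algorithm 1 follows, assuming the cvxpy package to solve the linear program (6); the function and variable names (multistage_dantzig, lam) are our own, and with N = 0 the loop reduces to the standard Dantzig selector.

import numpy as np
import cvxpy as cp

def multistage_dantzig(X, y, lam, N):
    # Iteration i solves problem (6) with the current candidate support F0,
    # then keeps the (i + 1) largest-magnitude entries of the estimate.
    m = X.shape[1]
    F0 = np.array([], dtype=int)
    beta_hat = None
    for i in range(N + 1):
        Fbar = np.setdiff1d(np.arange(m), F0)     # complement of F0
        beta = cp.Variable(m)
        r = X @ beta - y                          # residual vector
        cons = [cp.norm(X[:, Fbar].T @ r, 'inf') <= lam]
        if F0.size > 0:
            cons.append(X[:, F0].T @ r == 0)      # exact orthogonality on F0
        cp.Problem(cp.Minimize(cp.norm(beta[Fbar], 1)), cons).solve()
        beta_hat = np.asarray(beta.value).ravel()
        F0 = np.argsort(-np.abs(beta_hat))[: i + 1]
    return beta_hat, F0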
3
Main Results
3.1
Motivation
To motivate the proposed multi-stage algorithm, we first consider a simple case where some knowledge about the supporting features is known in advance. In the standard Dantzig selector, we assume $F_0 = \emptyset$. If we assume that the features belonging to a set $F_0$ are known to be supporting features, i.e., $F_0 \subseteq F$, we have the following result:
Theorem 1. Assume that Assumption 1 holds. Take $F_0 \subseteq F$ and $\lambda = \sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}}$ in the optimization problem (6). If there exists some $l$ such that
$$\mu^{(p)}_{A,s+l} - \theta^{(p)}_{A,s+l,l}\left(\frac{|\bar F_0 \cap F|}{l}\right)^{1-1/p} > 0$$
holds, then with a probability larger than $1 - \eta_1'$, the $\ell_p$-norm ($1 \le p \le \infty$) of the difference between $\hat\beta$, the solution of problem (6), and the oracle solution $\bar\beta$ is bounded as
$$\|\hat\beta - \bar\beta\|_p \le \frac{\left(|\bar F_0 \cap F| + l\,2^p\right)^{1/p}\left(1 + \left(\frac{|\bar F_0 \cap F|}{l}\right)^{p-1}\right)^{1/p}}{\mu^{(p)}_{A,s+l} - \theta^{(p)}_{A,s+l,l}\left(\frac{|\bar F_0 \cap F|}{l}\right)^{1-1/p}}\,\sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}},   (7)$$
and with a probability larger than $1 - \eta_1' - \eta_2'$, the $\ell_p$-norm ($1 \le p \le \infty$) of the difference between $\hat\beta$ and the true solution $\beta^*$ is bounded as
$$\|\hat\beta - \beta^*\|_p \le \frac{\left(|\bar F_0 \cap F| + l\,2^p\right)^{1/p}\left(1 + \left(\frac{|\bar F_0 \cap F|}{l}\right)^{p-1}\right)^{1/p}}{\mu^{(p)}_{A,s+l} - \theta^{(p)}_{A,s+l,l}\left(\frac{|\bar F_0 \cap F|}{l}\right)^{1-1/p}}\,\sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}} + \frac{s^{1/p}}{\mu^{(p)}_{(X_F^T X_F)^{1/2},s}}\,\sigma\sqrt{2\log(s/\varepsilon_2)}.   (8)$$
It is clear that both bounds (for any $1 \le p \le \infty$) are monotonically increasing with respect to the value of $|\bar F_0 \cap F|$. In other words, the larger $F_0$ is, the lower these bounds are. This coincides with our motivation that more knowledge about the supporting features can lead to a better signal estimation. Most related works directly estimate the bound of $\|\hat\beta - \beta^*\|_p$. Since $\beta^*$ may not be a feasible solution of problem (6), it is not easy to directly estimate the distance between $\hat\beta$ and $\beta^*$. Consider the bound in inequality (8), which consists of two terms. Since $m \ge n \gg s$, we have $\sqrt{2\log((m-s)/\varepsilon_1)} \ge \sqrt{2\log(s/\varepsilon_2)}$ if $\varepsilon_1 \le \varepsilon_2$. When $p = 2$, the following holds:
$$\mu^{(2)}_{A,s+l} - \theta^{(2)}_{A,s+l,l}\left(\frac{|\bar F_0 \cap F|}{l}\right)^{1/2} \le \mu^{(2)}_{(X_F^T X_F)^{1/2},s}$$
since
$$\mu^{(2)}_{A,s+l} \le \mu^{(2)}_{A,s} \le \mu^{(2)}_{X_F^T X_F,s} \le \mu^{(2)}_{(X_F^T X_F)^{1/2},s}.$$
From the analysis in the next section, we can see that the first term is the upper bound of the distance from the optimizer to the oracle solution, $\|\hat\beta - \bar\beta\|_p$, and the second term is the upper bound of the distance from the oracle solution to the true solution, $\|\bar\beta - \beta^*\|_p$. Thus, the first term might be much larger than the second term.
3.2
Comparison with Dantzig Selector
We first compare our estimation bound with the one in [7] for p = 2. For convenience of comparison,
we rewrite the theorem in [7] equivalently as:
Theorem 2. Suppose $\beta^* \in \mathbb{R}^m$ is any $s$-sparse vector of parameters obeying $\delta_{2s} + \theta^{(2)}_{A,s,2s} < 1$. Setting $\lambda_p = \sigma\sqrt{2\log(m/\varepsilon)}$ ($0 < \varepsilon \le 1$), with a probability at least $1 - \varepsilon(\pi\log m)^{-1/2}$, the solution of the standard Dantzig selector $\hat\beta_D$ obeys
$$\|\hat\beta_D - \beta^*\|_2 \le \frac{4}{1 - \delta_{2s} - \theta^{(2)}_{A,s,2s}}\, s^{1/2}\,\sigma\sqrt{2\log(m/\varepsilon)},   (9)$$
where $\delta_{2s} = \max(\rho^{(2)}_{A,2s} - 1,\; 1 - \mu^{(2)}_{A,2s})$.
Theorem 1 also implies a bound estimation result for the Dantzig selector by letting $F_0 = \emptyset$ and $p = 2$. Specifically, we set $F_0 = \emptyset$, $N = 0$, and $\lambda = \sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}}$ in the multi-stage method, and set $p = 2$, $l = s$, $\varepsilon_1 = \frac{m-s}{m}\varepsilon$, and $\varepsilon_2 = \frac{s}{m}\varepsilon$ for convenient comparison with Theorem 2. It follows that with probability larger than $1 - \varepsilon(\pi\log m)^{-1/2}$, the following bound holds:
$$\|\hat\beta - \beta^*\|_2 \le \left(\frac{\sqrt{10}}{\mu^{(2)}_{A,2s} - \theta^{(2)}_{A,2s,s}} + \frac{1}{\mu^{(2)}_{(X_F^T X_F)^{1/2},s}}\right) s^{1/2}\,\sigma\sqrt{2\log(m/\varepsilon)}.   (10)$$
It is easy to verify that
$$1 - \delta_{2s} - \theta^{(2)}_{A,s,2s} \le \mu^{(2)}_{A,2s} - \theta^{(2)}_{A,2s,s} \le \mu^{(2)}_{A,2s} \le \mu^{(2)}_{X_F^T X_F,s} = \left(\mu^{(2)}_{(X_F^T X_F)^{1/2},s}\right)^2 \le \mu^{(2)}_{(X_F^T X_F)^{1/2},s} \le 1.$$
Thus, the bound in (10) is comparable to the one in (9). In the following, we compare the performance bound of the proposed multi-stage method ($N > 0$) with the one in (10).
3.3
Feature Selection
The estimation bounds in Theorem 1 assume that a set $F_0$ is given. In this section, we show how the supporting set can be estimated. Similar to previous work [5, 19], $|\beta^*_j|$ for $j \in F$ is required to be larger than a threshold value. As is clear from the proof, the threshold value mainly depends on the value of $\|\hat\beta - \beta^*\|_\infty$. We essentially employ the result with $p = \infty$ in Theorem 1 to estimate the threshold value. In the following, we first consider the simple case when $N = 0$. We have shown in the last section that the estimation bound in this case is similar to the one for the Dantzig selector.
Theorem 3. Under Assumption 1, if there exists an index set $J$ such that $|\beta^*_j| > \alpha_0$ for any $j \in J$, and there exists a nonempty set
$$\Omega = \left\{ l \;\middle|\; \mu^{(\infty)}_{A,s+l} - \theta^{(\infty)}_{A,s+l,l}\,\frac{s}{l} > 0 \right\}$$
where
$$\alpha_0 = 4\min_{l\in\Omega}\left\{\frac{\max\!\left(1, \frac{s}{l}\right)}{\mu^{(\infty)}_{A,s+l} - \theta^{(\infty)}_{A,s+l,l}\frac{s}{l}}\right\}\sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}} + \frac{1}{\mu^{(\infty)}_{(X_F^T X_F)^{1/2},s}}\,\sigma\sqrt{2\log(s/\varepsilon_2)},$$
then taking $F_0 = \emptyset$, $N = 0$, and $\lambda = \sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}}$ in problem (6) (equivalent to the standard Dantzig selector), the largest $|J|$ elements of $\hat\beta_{std}$ (i.e., $\hat\beta^{(0)}$) belong to $F$ with probability larger than $1 - \eta_1' - \eta_2'$.
The theorem above indicates that, under the given condition, if $\min_{j\in J}|\beta^*_j| > O(\sigma\sqrt{\log m})$ (assuming that there exists $l \ge s$ such that $\mu^{(\infty)}_{A,s+l} - \theta^{(\infty)}_{A,s+l,l}\frac{s}{l} > 0$), then with high probability the $|J|$ features selected by the Dantzig selector belong to the true supporting set. In particular, if $|J| = s$, then the consistency of feature selection is achieved. The result above is comparable to the ones for other feature selection algorithms, including LASSO [5, 22], greedy least squares regression [16, 8, 19], two-stage LASSO [20], and the adaptive forward-backward greedy algorithm [18]. In all these algorithms, the condition $\min_{j\in F}|\beta^*_j| \ge C\sigma\sqrt{\log m}$ is required, since the noise level is $O(\sigma\sqrt{\log m})$ [18]. Because $C$ is always a coefficient in terms of the covariance matrix $XX^T$ (or the feature matrix $X$), it is typically treated as a constant term; see the literature listed above.
Next, we show that the condition $|\beta^*_j| > \alpha_0$ in Theorem 3 can be relaxed by the proposed multi-stage procedure with $N > 0$, as summarized in the following theorem:
Theorem 4. Under Assumption 1, if there exists a nonempty set
$$\Omega = \left\{ l \;\middle|\; \mu^{(\infty)}_{A,s+l} - \theta^{(\infty)}_{A,s+l,l}\,\frac{s}{l} > 0 \right\}$$
and there exists a set $J$ such that $|\mathrm{supp}_{\alpha_i}(\beta^*_J)| > i$ holds for all $i \in \{0, 1, ..., |J|-1\}$, where
$$\alpha_i = 4\min_{l\in\Omega}\left\{\frac{\max\!\left(1, \frac{s-i}{l}\right)}{\mu^{(\infty)}_{A,s+l} - \theta^{(\infty)}_{A,s+l,l}\frac{s-i}{l}}\right\}\sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}} + \frac{1}{\mu^{(\infty)}_{(X_F^T X_F)^{1/2},s}}\,\sigma\sqrt{2\log(s/\varepsilon_2)},$$
then taking $F_0^{(0)} = \emptyset$, $\lambda = \sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}}$ and $N = |J| - 1$ in Algorithm 1, the solution after $N$ iterations satisfies $F_0^{(N)} \subseteq F$ (i.e., $|J|$ correct features are selected) with probability larger than $1 - \eta_1' - \eta_2'$.
Assume that one aims to select $N$ correct features by the standard Dantzig selector and by the multi-stage method. These two theorems show that the standard Dantzig selector requires that at least $N$ of the $|\beta^*_j|$'s with $j \in F$ are larger than the threshold value $\alpha_0$, while the proposed multi-stage method requires that at least $i$ of the $|\beta^*_j|$'s are larger than the threshold value $\alpha_{i-1}$, for $i = 1, \cdots, N$. Since $\{\alpha_j\}$ is a strictly decreasing sequence satisfying, for some $l \in \Omega$,
$$\alpha_{i-1} - \alpha_i > \frac{4\,\theta^{(\infty)}_{A,s+l,l}}{l\left(\mu^{(\infty)}_{A,s+l} - \theta^{(\infty)}_{A,s+l,l}\frac{s-i}{l}\right)^2}\,\sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}},$$
the proposed multi-stage method requires a strictly weaker condition for selecting $N$ correct features than the standard Dantzig selector.
3.4
Signal Recovery
In this section, we derive the estimation bound of the proposed multi-stage method by combining results from Theorems 1, 3, and 4.
Theorem 5. Under Assumption 1, if there exists $l$ such that
$$\mu^{(\infty)}_{A,s+l} - \theta^{(\infty)}_{A,s+l,l}\frac{s}{l} > 0 \quad\text{and}\quad \mu^{(p)}_{A,2s} - \theta^{(p)}_{A,2s,s} > 0,$$
and there exists a set $J$ such that $|\mathrm{supp}_{\alpha_i}(\beta^*_J)| > i$ holds for all $i \in \{0, 1, ..., |J|-1\}$, where the $\alpha_i$'s are defined in Theorem 4, then
(1) taking $F_0 = \emptyset$, $N = 0$ and $\lambda = \sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}}$ in Algorithm 1, with probability larger than $1 - \eta_1' - \eta_2'$, the solution of the Dantzig selector $\hat\beta_D$ (i.e., $\hat\beta^{(0)}$) obeys
$$\|\hat\beta_D - \beta^*\|_p \le \frac{(2^{p+1} + 2)^{1/p}\, s^{1/p}}{\mu^{(p)}_{A,2s} - \theta^{(p)}_{A,2s,s}}\,\sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}} + \frac{s^{1/p}}{\mu^{(p)}_{(X_F^T X_F)^{1/2},s}}\,\sigma\sqrt{2\log(s/\varepsilon_2)};   (11)$$
(2) taking $F_0 = \emptyset$, $N = |J|$ and $\lambda = \sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}}$ in Algorithm 1, with probability larger than $1 - \eta_1' - \eta_2'$, the solution of the multi-stage method $\hat\beta_{mul}$ (i.e., $\hat\beta^{(N)}$) obeys
$$\|\hat\beta_{mul} - \beta^*\|_p \le \frac{(2^{p+1} + 2)^{1/p}\, (s-N)^{1/p}}{\mu^{(p)}_{A,2s-N} - \theta^{(p)}_{A,2s-N,s-N}}\,\sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}} + \frac{s^{1/p}}{\mu^{(p)}_{(X_F^T X_F)^{1/2},s}}\,\sigma\sqrt{2\log(s/\varepsilon_2)}.   (12)$$
Similar to the analysis in Theorem 1, the first term (i.e., the distance from $\hat\beta$ to the oracle solution $\bar\beta$) dominates the estimated bounds. Thus, the multi-stage method approximately improves the bound of the standard Dantzig selector from $Cs^{1/p}\sigma\sqrt{\log m}$ to $C(s-N)^{1/p}\sigma\sqrt{\log m}$. When $p = 2$, our estimation has the same order as the greedy least squares regression algorithm [19] and the adaptive forward-backward greedy algorithm [18].
3.5
The Oracle Solution
The oracle solution is the minimum-variance unbiased estimator of the true solution given the noisy
observation. We show in the following theorem that the proposed method can obtain the oracle
solution with high probability under certain conditions:
Theorem 6. Under Assumption 1, if there exists $l$ such that $\mu^{(\infty)}_{A,s+l} - \theta^{(\infty)}_{A,s+l,l}\frac{s-i}{l} > 0$ for all $i \in \{0, 1, ..., s-1\}$, and the supporting set $F$ of $\beta^*$ satisfies $|\mathrm{supp}_{\alpha_i}(\beta^*_F)| > i$ for all $i \in \{0, 1, ..., s-1\}$, where the $\alpha_i$'s are defined in Theorem 4, then taking $F_0 = \emptyset$, $N = s$ and $\lambda = \sigma\sqrt{2\log\frac{m-s}{\varepsilon_1}}$ in Algorithm 1, the oracle solution is achieved, i.e., $F_0^{(N)} = F$ and $\hat\beta^{(N)} = \bar\beta$, with probability larger than $1 - \eta_1' - \eta_2'$.
The theorem above shows that when the nonzero elements of the true coefficient vector $\beta^*$ are large enough, the oracle solution can be achieved with high probability.
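Since the oracle solution has the closed form $\bar\beta_F = (X_F^T X_F)^{-1} X_F^T y$ and $\bar\beta_{\bar F} = 0$, it can be computed directly once the support is known; the sketch below (function name is ours) is one way to do so.

import numpy as np

def oracle_solution(X, y, F):
    # Least-squares fit restricted to the known support F; zero elsewhere.
    beta = np.zeros(X.shape[1])
    XF = X[:, F]
    beta[F] = np.linalg.solve(XF.T @ XF, XF.T @ y)
    return beta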
4 Simulation Study
We have performed simulation studies to verify our theoretical analysis. Our comparison includes
two aspects: signal recovery accuracy and feature selection accuracy. The signal recovery accuracy
is measured by the relative signal error: $SRA = \|\hat\beta - \beta^*\|_2 / \|\beta^*\|_2$, where $\hat\beta$ is the solution of a specific algorithm. The feature selection accuracy is measured by the percentage of correct features selected: $FSA = |\hat F \cap F| / |F|$, where $\hat F$ is the estimated feature candidate set.
We generate an $n \times m$ random matrix $X$. Each element of $X$ follows an independent standard Gaussian distribution $N(0, 1)$. We then normalize the length of the columns of $X$ to be 1. The $s$-sparse original signal $\beta^*$ is generated with $s$ nonzero elements independently and uniformly distributed in $[-10, 10]$.
[Figure 1 here. Eight panels: top row, SRA vs. N; bottom row, FSA vs. N. Columns correspond to (n=50, m=200, s=15, σ=0.001), (n=50, m=200, s=15, σ=0.1), (n=50, m=500, s=10, σ=0.001), and (n=50, m=500, s=10, σ=0.1). Each panel compares the standard Dantzig selector, the oracle solution, and the multi-stage method.]
Figure 1: Numerical simulation. We compare the solutions of the standard Dantzig selector method (N = 0), the proposed method for different values of N, and the oracle solution. The SRA and FSA comparisons are reported in the top row and the bottom row, respectively. The starting point of each curve records the SRA (or FSA) value of the standard Dantzig selector method; the ending point records the value of the oracle solution; the middle part of each curve records the results of the proposed method for different values of N.
We form $y$ as $y = X\beta^* + \varepsilon$, where the noise vector $\varepsilon$ is generated by the Gaussian distribution $N(0, \sigma^2 I)$. For a fair comparison, we choose the same $\lambda = \sigma\sqrt{2\log m}$ in both algorithms. The following experiments are repeated 20 times and we report their average performance.
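The data generation and the two metrics described above can be sketched as follows (a hypothetical reproduction of the setup; function names and the random seed are our own choices).

import numpy as np

def make_data(n, m, s, sigma, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, m))
    X /= np.linalg.norm(X, axis=0)              # unit-length columns
    beta_star = np.zeros(m)
    F = rng.choice(m, size=s, replace=False)    # true support
    beta_star[F] = rng.uniform(-10, 10, size=s)
    y = X @ beta_star + sigma * rng.standard_normal(n)
    return X, y, beta_star, F

def sra(beta_hat, beta_star):
    # relative signal error
    return np.linalg.norm(beta_hat - beta_star) / np.linalg.norm(beta_star)

def fsa(F_hat, F):
    # fraction of true supporting features recovered
    return len(set(F_hat) & set(F)) / len(F)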
We run the proposed algorithm with $F_0^{(0)} = \emptyset$ and output the $\hat\beta^{(N)}$'s. Note that the solution of the standard Dantzig selector algorithm is equivalent to $\hat\beta^{(0)}$ with $N = 0$. We report the SRA curve of $\hat\beta^{(N)}$ with respect to $N$ in the top row of Figure 1. Based on $\hat\beta^{(N)}$, we compute the supporting set $\hat F(N)$ as the indexes of the $N$ largest entries in $\hat\beta^{(N)}$. Note that the supporting set we compute here is different from the supporting set $F_0^{(N)}$, which only contains the $N$ largest feature indexes. The bottom row of Figure 1 shows the FSA curve with respect to $N$. We can observe from Figure 1 that: 1) the multi-stage method obtains a solution with a smaller distance to the original signal than the standard Dantzig selector method; 2) the multi-stage method selects a larger percentage of correct features than the standard Dantzig selector method; 3) the multi-stage method can achieve the oracle solution. Overall, the recovery accuracy curve increases with an increasing value of $N$, and the feature selection accuracy curve decreases with an increasing value of $N$.
5 Conclusion
In this paper, we propose a multi-stage Dantzig selector method which iteratively selects the supporting features and recovers the original signal. The proposed method makes use of the information
of supporting features to estimate the signal and simultaneously makes use of the information of the
estimated signal to select the supporting features. Our theoretical analysis shows that the proposed
method improves upon the standard Dantzig selector in both signal recovery and supporting feature
selection. The final numerical simulation validates our theoretical analysis.
Since the multi-stage procedure can improve the Dantzig selector, one natural question is whether
the analysis can be extended to other related techniques such as LASSO. The two-stage LASSO
has been shown to outperform the standard LASSO. We plan to extend our analysis to the multi-stage LASSO in the future. In addition, we plan to improve the proposed algorithm by adopting stopping
rules similar to the ones recently proposed in [3, 19, 21].
Acknowledgments
This work was supported by NSF IIS-0612069, IIS-0812551, CCF-0811790, IIS-0953662, and
NGA HM1582-08-1-0016.
References
[1] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37:1705-1732, 2009.
[2] F. Bunea, A. Tsybakov, and M. Wegkamp. Sparsity oracle inequalities for the Lasso. Electronic Journal of Statistics, 2007.
[3] T. Cai and L. Wang. Orthogonal matching pursuit for sparse signal recovery. Technical report, 2010.
[4] T. Cai, G. Xu, and J. Zhang. On recovery of sparse signals via l1 minimization. IEEE Transactions on Information Theory, 55(7):3388-3397, 2009.
[5] E. J. Candes and Y. Plan. Near-ideal model selection by l1 minimization. Annals of Statistics, 37:2145-2177, 2009.
[6] E. J. Candes and T. Tao. Decoding by linear programming. IEEE Transactions on Information Theory, 51(12):4203-4215, 2005.
[7] E. J. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 35:2313, 2007.
[8] D. L. Donoho, M. Elad, and V. N. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Transactions on Information Theory, pages 6-18, 2006.
[9] G. M. James, P. Radchenko, and J. Lv. DASSO: connections between the Dantzig selector and Lasso. Journal of the Royal Statistical Society, Series B, 71(1):127-142, 2009.
[10] V. Koltchinskii and M. Yuan. Sparse recovery in large ensembles of kernel machines. COLT, pages 229-238, 2008.
[11] K. Lounici. Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators. Electronic Journal of Statistics, 2:90-102, 2008.
[12] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436-1462, 2006.
[13] P. Ravikumar, G. Raskutti, M. J. Wainwright, and B. Yu. Model selection in Gaussian graphical models: High-dimensional consistency of l1-regularized MLE. NIPS, pages 1329-1336, 2008.
[14] J. Romberg. The Dantzig selector and generalized thresholding. CISS, pages 22-25, 2008.
[15] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
[16] J. A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Transactions on Information Theory, 50:2231-2242, 2004.
[17] M. J. Wainwright. Sharp thresholds for noisy and high-dimensional recovery of sparsity using l1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, pages 2183-2202, 2009.
[18] T. Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. NIPS, pages 1921-1928, 2008.
[19] T. Zhang. On the consistency of feature selection using greedy least squares regression. Journal of Machine Learning Research, 10:555-568, 2009.
[20] T. Zhang. Some sharp performance bounds for least squares regression with l1 regularization. Annals of Statistics, 37:2109, 2009.
[21] T. Zhang. Sparse recovery with orthogonal matching pursuit under RIP. arXiv:1005.2249, 2010.
[22] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541-2563, 2006.
[23] S. Zhou. Thresholding procedures for high-dimensional variable selection and statistical estimation. NIPS, pages 2304-2312, 2009.
[24] H. Zou. The adaptive Lasso and its oracle properties. Journal of the American Statistical Association, 101:1418-1429, 2006.
Individualized ROI Optimization via
Maximization of Group-wise Consistency of
Structural and Functional Profiles
1,2* Kaiming Li, 1 Lei Guo, 3 Carlos Faraco, 2 Dajiang Zhu, 2 Fan Deng, 1 Tuo Zhang, 1 Xi Jiang, 1 Degang Zhang, 1 Hanbo Chen, 1 Xintao Hu, 3 Steve Miller, 2 Tianming Liu
1 School of Automation, Northwestern Polytechnical University, China; 2 Department of Computer Science, the University of Georgia, USA; 3 Department of Psychology, the University of Georgia, USA; *Email: [email protected]
Abstract
Functional segregation and integration are fundamental characteristics of the
human brain. Studying the connectivity among segregated regions and the
dynamics of integrated brain networks has drawn increasing interest. A very
controversial, yet fundamental issue in these studies is how to determine the
best functional brain regions or ROIs (regions of interest) for individuals.
Essentially, the computed connectivity patterns and dynamics of brain
networks are very sensitive to the locations, sizes, and shapes of the ROIs.
This paper presents a novel methodology to optimize the locations of an
individual's ROIs in the working memory system. Our strategy is to formulate
the individual ROI optimization as a group variance minimization problem,
in which group-wise functional and structural connectivity patterns, and
anatomic profiles are defined as optimization constraints. The optimization
problem is solved via the simulated annealing approach. Our experimental
results show that the optimized ROIs have significantly improved
consistency in structural and functional profiles across subjects, and have
more reasonable localizations and more consistent morphological and
anatomic profiles.
1
Introduction
The human brain's function is segregated into distinct regions and integrated via axonal fibers
[1]. Studying the connectivity among these regions and modeling their dynamics and
interactions has drawn increasing interest and effort from the brain imaging and neuroscience
communities [2-6]. For example, recently, the Human Connectome Project [7] and the 1000
Functional Connectomes Project [8] have embarked on elucidating large-scale connectivity patterns in the human brain. For traditional connectivity analysis, a variety of models including DCM (dynamic causal modeling), GCM (Granger causality modeling) and MVA (multivariate autoregressive modeling) have been proposed [6, 9-10] to model the interactions of the
ROIs. A fundamental issue in these studies is how to accurately identify the ROIs, which are
the structural substrates for measuring connectivity. Currently, this is still an open, urgent, yet
challenging problem in many brain imaging applications. From our perspective, the major
challenges come from uncertainties in ROI boundary definition, the tremendous variability
across individuals, and high nonlinearities within and around ROIs.
Current approaches for identifying brain ROIs can be broadly classified into four categories.
The first is manual labeling by experts using their domain knowledge. The second is a
data-driven clustering of ROIs from the brain image itself. For instance, the ReHo (regional
homogeneity) algorithm [11] has been used to identify regional homogeneous regions as ROIs.
The third is to predefine ROIs in a template brain, and warp them back to the individual space
using image registration [12]. Lastly, ROIs can be defined from the activated regions observed
during a task-based fMRI paradigm. While fruitful results have been achieved using these
approaches, there are various limitations. For instance, manual labeling is difficult to
implement for large datasets and may be vulnerable to inter-subject and intra-subject variation;
it is difficult to build correspondence across subjects using data-driven clustering methods;
warping template ROIs back to individual space is subject to the accuracy of warping
techniques and the anatomical variability across subjects.
Even identifying ROIs using task-based fMRI paradigms, which is regarded as the standard
approach for ROI identification, is still an open question. It was reported in [13] that many imaging-related variables, including scanner vendor, RF coil characteristics (phase array vs. volume coil), k-space acquisition trajectory, reconstruction algorithms, susceptibility-induced signal dropout, as well as field strength differences, contribute to variations in ROI identification. Other researchers reported that spatial smoothing, a common preprocessing technique in fMRI analysis to enhance SNR, may introduce artificial localization shifts (up to 12.1mm for Gaussian kernel volumetric smoothing) [14] or generate overly smoothed
activation maps that may obscure important details [15]. For example, as shown in Fig.1a, the
local maximum of the ROI was shifted by 4mm due to the spatial smoothing process.
Additionally, its structural profile (Fig.1b) was significantly altered. Furthermore,
group-based activation maps may show different patterns from an individual's activation map;
Fig.1c depicts such differences. The top panel is the group activation map from a working
memory study, while the bottom panel is the activation map of one subject in the study. As we
can see from the highlighted boxes, the subject has less activated regions than the group
analysis result. In conclusion, standard analysis of task-based fMRI paradigm data is
inadequate to accurately localize ROIs for each individual.
Fig.1. (a): Local activation map maxima (marked by the cross) shift of one ROI due to spatial
volumetric smoothing. The top one was detected using unsmoothed data while the bottom one
used smoothed data (FWHM: 6.875mm). (b): The corresponding fibers for the ROIs in (a). The
ROIs are presented using a sphere (radius: 5mm). (c): Activation map differences between the
group (top) and one subject (bottom). The highlighted boxes show two of the missing activated
ROIs found from the group analysis.
Without accurate and reliable individualized ROIs, the validity of brain connectivity analysis,
and computational modeling of dynamics and interactions among brain networks, would be
questionable. In response to this fundamental issue, this paper presents a novel computational
methodology to optimize the locations of an individual's ROIs initialized from task-based
fMRI. We use the ROIs identified in a block-based working memory paradigm as a test bed
application to develop and evaluate our methodology. The optimization of ROI locations was
formulated as an energy minimization problem, with the goal of jointly maximizing the
group-wise consistency of functional and structural connectivity patterns and anatomic
profiles. The optimization problem is solved via the well-established simulated annealing
approach. Our experimental results show that the optimized ROIs achieved our optimization
objectives and demonstrated promising results.
2
Materials and Methods
2.1
Data acquisition and preprocessing
Twenty-five university students were recruited to participate in this study. Each participant performed an fMRI modified version of the OSPAN task (3 block types: OSPAN, Arithmetic, and Baseline) while fMRI data was acquired. DTI scans were also acquired for each participant. fMRI and DTI scans were acquired on a 3T GE Signa scanner. Acquisition parameters were as follows. fMRI: 64x64 matrix, 4mm slice thickness, 220mm FOV, 30 slices, TR=1.5s, TE=25ms, ASSET=2; DTI: 128x128 matrix, 2mm slice thickness, 256mm FOV, 60 slices, TR=15100ms, TE=variable, ASSET=2, 3 B0 images, 30 optimized gradient directions, b-value=1000. Each participant's fMRI data was analyzed using FSL. The individual activation map reflecting the OSPAN (OSPAN > Baseline) contrast was used. In total, we identified the 16 highest activated ROIs, including left and right insula, left and right medial frontal gyrus, left and right precentral gyrus, left and right paracingulate gyrus, left and right dorsolateral prefrontal cortex, left and right inferior parietal lobule, left occipital pole, right frontal pole, right lateral occipital gyrus, and left and right precuneus. Fig.2 shows the 16 ROIs mapped onto a WM (white matter)/GM (gray matter) cortical surface. For some individuals, there may be missing ROIs on their activation maps. Under such conditions, we adapted the group activation map as a guide to find these ROIs using linear registration.
Fig.2. Working memory ROIs mapped on a WM/GM surface.
DTI pre-processing consisted of skull removal, motion correction, and eddy current correction.
After the pre-processing, fiber tracking was performed using MEDINRIA (FA threshold: 0.2;
minimum fiber length: 20). Fibers were extended along their tangent directions to reach into
the gray matter when necessary. Brain tissue segmentation was conducted on DTI data by the
method in [16] and the cortical surface was reconstructed from the tissue maps using the
marching cubes algorithm. The cortical surface was parcellated into anatomical regions using
the HAMMER tool [17]. DTI space was used as the standard space from which to generate the
GM (gray matter) segmentation and from which to report the ROI locations on the cortical
surface. Since the fMRI and DTI sequences are both EPI (echo planar imaging) sequences,
their distortions tend to be similar and the misalignment between DTI and fMRI images is
much less than that between T1 and fMRI images [18]. Co-registration between DTI and fMRI
data was performed using FSL FLIRT [12]. The activated ROIs and tracked fibers were then
mapped onto the cortical surface for joint modeling.
2.2
Joint modeling of anatomical, structural and functional profiles
Despite the high degree of variability across subjects, there are several aspects of regularity on
which we base the proposed solution. Firstly, across subjects, the functional ROIs should have
similar anatomical locations, e.g., similar locations in the atlas space. Secondly, these ROIs
should have similar structural connectivity profiles across subjects. In other words, fibers
penetrating the same functional ROIs should have at least similar target regions across subjects.
Lastly, individual networks identified by task-based paradigms, like the working memory
network we adapted as a test bed in this paper, should have similar functional connectivity
pattern across subjects. The neuroscience bases of the above premises include: 1) structural
and functional brain connectivity are closely related [19], and cortical gyrification and
axongenesis processes are closely coupled [20]; Hence, it is reasonable to put these three types
of information in a joint modeling framework. 2)
Extensive studies have already demonstrated the
existence of a common structural and functional
architecture of the human brain [21, 22], and it
makes sense to assume that the working memory
network has similar structural and functional
connectivity patterns across individuals.
Based on these premises, we proposed to
optimize the locations of individual functional
ROIs by jointly modeling anatomic profiles,
structural connectivity patterns, and functional
connectivity patterns, as illustrated in Fig.3 (ROI optimization scheme). The goal was to minimize the group-wise variance (or maximize group-wise consistency) of these jointly modeled profiles. Mathematically, we modeled the group-wise variance as an energy $E$ as follows. An ROI from fMRI analysis was mapped onto the surface and is represented by a center vertex and its neighborhood. Suppose $\tilde S_{ij}$ is the ROI region $j$ on the cortical surface of subject $i$ identified in Section 2.1; we find a corresponding surface ROI region $S_{ij}$ so that the energy $E$ (containing energy from $n$ subjects, each with $m$ ROIs) is minimized:
$$E = \lambda E_a + (1-\lambda)\left(\frac{E_c - M_{E_c}}{\sigma_{E_c}} + \frac{E_f - M_{E_f}}{\sigma_{E_f}}\right)   (1)$$
where $E_a$ is the anatomical constraint energy; $E_c$ is the structural connectivity constraint energy, with $M_{E_c}$ and $\sigma_{E_c}$ the mean and standard deviation of $E_c$ over the search space; $E_f$ is the functional connectivity constraint energy, with $M_{E_f}$ and $\sigma_{E_f}$ the mean and standard deviation of $E_f$, respectively; and $\lambda$ is a weighting parameter between 0 and 1. If not specified, $n$ is the number of subjects and $m$ is the number of ROIs in this paper. The details of these energy terms are provided in the following sections.
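As a sketch, the joint energy of Eq. (1) for one candidate configuration could be assembled as below, assuming the normalization statistics M and σ of the structural and functional terms have been precomputed over the search space (the function name is ours).

def total_energy(Ea, Ec, Ef, M_Ec, s_Ec, M_Ef, s_Ef, lam=0.5):
    # Eq. (1): anatomical term plus z-scored structural and functional terms,
    # weighted by lam in [0, 1].
    return lam * Ea + (1.0 - lam) * ((Ec - M_Ec) / s_Ec + (Ef - M_Ef) / s_Ef)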
2.2.1
Anatomical constraint energy
Anatomical constraint energy $E_a$ is defined to ensure that the optimized ROIs have similar anatomical locations in the atlas space (Fig.4 shows an example of the ROIs of 15 randomly selected subjects in the atlas space). We model the locations of all ROIs in the atlas space using a Gaussian model (mean $M_{X_j}$ and standard deviation $\sigma_{X_j}$ for ROI $j$). The model parameters were estimated using the initial locations obtained from Section 2.1. Let $X_{ij}$ be the center coordinate of region $S_{ij}$ in the atlas space; then $E_a$ is expressed as
$$E_a = \begin{cases} 1 & (d_{ij} \le 1) \\ e^{\,d_{ij}-1} & (d_{ij} > 1) \end{cases}   (2)$$
where
$$d_{ij} = \max\left\{\frac{\|X_{ij} - M_{X_j}\|}{3\sigma_{X_j}},\ 1\right\}, \quad 1 \le i \le n,\ 1 \le j \le m.   (3)$$
Fig.4. ROI distributions in the atlas space.
Under the above definition, if any $X_{ij}$ is within the range of $3\sigma_{X_j}$ from the distribution model center $M_{X_j}$, the anatomical constraint energy will always be one; if not, there will be an exponential increase of the energy, which penalizes the possible involvement of outliers. In other words, this energy factor ensures that the optimized ROIs will not deviate significantly from the original ROIs.
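A minimal sketch of the term in Eqs. (2)-(3) for one ROI of one subject follows (our own illustration; the names are assumptions).

import math
import numpy as np

def anatomical_energy(X_ij, M_Xj, sigma_Xj):
    # Unit energy within 3*sigma of the group center; exponential penalty outside.
    d = max(np.linalg.norm(np.asarray(X_ij) - np.asarray(M_Xj)) / (3.0 * sigma_Xj), 1.0)
    return 1.0 if d <= 1.0 else math.exp(d - 1.0)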
2.2.2 Structural connectivity constraint energy
Structural connectivity constraint energy $E_c$ is defined to ensure that the group has similar structural connectivity profiles for each functional ROI, since similar functional regions should have similar structural connectivity patterns [19]:
$$E_c = \sum_{i=1}^{n}\sum_{j=1}^{m} (C_{ij} - M_{C_j})\,\mathrm{Cov}_c^{-1}\,(C_{ij} - M_{C_j})^T   (4)$$
where $C_{ij}$ is the connectivity pattern vector for ROI $j$ of subject $i$, $M_{C_j}$ is the group mean for ROI $j$, and $\mathrm{Cov}_c^{-1}$ is the inverse of the covariance matrix.
The connectivity pattern vector $C_{ij}$ is a fiber target region distribution histogram. To obtain this histogram, we first parcellate all the cortical surfaces into nine regions (as shown in Fig.5a: four lobes for each hemisphere, plus the subcortical region) using the HAMMER algorithm [17]. A finer parcellation is available but was not used, due to its relatively lower accuracy, which might render the histogram too sensitive to the parcellation result. Then, we extract the fibers penetrating region $S_{ij}$ and calculate the distribution of the fibers' target cortical regions. Fig.5 illustrates the idea.
Fig.5. Structural connectivity pattern descriptor. (a): Cortical surface parcellation using
HAMMER [17]; (b): Joint visualization of the cortical surface, two ROIs (blue and green
spheres), and fibers penetrating the ROIs (in red and yellow, respectively); (c): Corresponding
target region distribution histogram of ROIs in Fig.5b. There are nine bins corresponding to the
nine cortical regions. Each bin contains the number of fibers that penetrate the ROI and are
connected to the corresponding cortical region. Fiber numbers are normalized across subjects.
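The target-region histogram and one term of Eq. (4) might be computed as in the sketch below, assuming each fiber penetrating the ROI has been labeled with the index (0-8) of its target cortical region (the names and the labeling convention are our assumptions).

import numpy as np

def target_histogram(target_labels, n_regions=9):
    # Fiber target-region distribution; normalization across subjects is done elsewhere.
    return np.bincount(target_labels, minlength=n_regions).astype(float)

def structural_energy(C_ij, M_Cj, Cov_inv):
    # One (i, j) term of Eq. (4): (C - M) Cov^{-1} (C - M)^T.
    d = C_ij - M_Cj
    return float(d @ Cov_inv @ d)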
2.2.3
Functional connectivity constraint energy
Functional connectivity constraint energy $E_f$ is defined to ensure that each individual has similar functional connectivity patterns for the working memory system, assuming the human brain has similar functional architecture across individuals [21]:
$$E_f = \sum_{i=1}^{n} \|F_i - M_F\|   (5)$$
Here, $F_i$ is the functional connectivity matrix for subject $i$, and $M_F$ is the group mean over the dataset. The connectivity between each pair of ROIs is defined using the Pearson correlation. The matrix distance used here is the Frobenius norm.
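One subject's contribution to Eq. (5) can be sketched as follows, assuming ts holds the (timepoints x m) fMRI signals of the m ROIs (the names are ours).

import numpy as np

def functional_energy(ts, M_F):
    F_i = np.corrcoef(ts, rowvar=False)       # Pearson correlation between ROI pairs
    return np.linalg.norm(F_i - M_F, 'fro')   # Frobenius distance to the group mean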
2.3
Energy minimization solution
The minimization of the energy defined in Section 2.2 is a combinatorial optimization problem. Traditional optimization methods may not fit this problem, since there are two noticeable characteristics in this application. First, we do not know how the energy changes with the varying locations of ROIs; therefore, techniques like Newton's method cannot be used. Second, the structure of the search space is not smooth, which may lead to multiple local minima during optimization. To address this problem, we adopt the simulated annealing (SA) algorithm [23] for the energy minimization. The idea of the SA algorithm is based on a random walk through the space toward lower energies. In these random walks, the probability of taking a step is determined by the Boltzmann distribution:
$$p = e^{-(E_{i+1} - E_i)/(KT)}   (6)$$
if $E_{i+1} > E_i$, and $p = 1$ when $E_{i+1} \le E_i$. Here, $E_i$ and $E_{i+1}$ are the system energies at solution configurations $i$ and $i+1$, respectively; $K$ is the Boltzmann constant; and $T$ is the system temperature. In other words, a step is taken whenever a lower energy is found; a step to a higher energy is also taken with probability $p$, which helps avoid local minima in the search space.
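The acceptance rule of Eq. (6) is a standard Metropolis step; a minimal sketch (the names are ours):

import math
import random

def accept(E_curr, E_prop, T, K=1.0):
    # Always accept a lower energy; accept a higher one with Boltzmann probability.
    if E_prop <= E_curr:
        return True
    return random.random() < math.exp(-(E_prop - E_curr) / (K * T))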
3
Results
Compared to structural and functional connectivity patterns, anatomical profiles are more easily affected by variability across individuals. Therefore, the anatomical constraint energy is designed to constrain only those ROIs that are obviously far from reasonable locations. The reasonable range was statistically modeled by the localizations of ROIs warped into the atlas space in Section 2.2.1. Our focus in this paper is the structural and functional profiles.
3.1
Optimization using anatomical and structural connectivity profiles
In this section, we use only anatomical and structural connectivity profiles to optimize the
locations of ROIs. The goal is to check whether the structural constraint energy Ec works as
expected. Fig.6 shows the fibers penetrating the right precuneus for eight subjects before (top
panel) and after optimization (bottom panel). The ROI is highlighted in a red sphere for each
subject. As we can see from the figure (please refer to the highlighted yellow arrows), after
optimization, the third and sixth subjects have significantly improved consistency with the rest
of the group than before optimization, which proves the validity of the energy function Eq.(4).
Fig.6. Comparison of structural profiles before and after optimization. Each column shows the
corresponding before-optimization (top) and after-optimization (bottom) fibers of one subject.
The ROI (right precuneus) is presented by the red sphere.
3.2
Optimization using anatomical and functional connectivity profiles
In this section, we optimize the locations of ROIs using anatomical and functional profiles,
aiming to validate the definition of functional connectivity constraint energy E f . If this energy
constraint worked well, the functional connectivity variance of the working memory system
across subjects would decrease. Fig.7 shows the comparison of the standard deviation of functional connectivity before (left) and after (right) optimization. As we can see, the variance is significantly reduced after optimization. This demonstrates the effectiveness of the defined functional connectivity constraint energy.
Fig.7. Comparison of the standard deviation of functional connectivity before and after the optimization. Lower values mean more consistent connectivity patterns across subjects.
3.3
Consistency between optimization of functional profiles and
structural profiles
Fig.8. Optimization consistency between functional and structural profiles. Top: Functional
profile energy drop along with structural profile optimization; Bottom: Structural profile
energy drop along with functional profile optimization. Each experiment was repeated 15
times with random initial ROI locations that met the anatomical constraint.
The relationship between structure and function has been extensively studied [24], and it is
widely believed that they are closely related. In this section, we study the relationship between
functional profiles and structural profiles by looking at how the energy for one of them changes
while the energy of the other decreases. The optimization processes in Section 3.1 and 3.2 were
repeated 15 times respectively with random initial ROI locations that met the anatomical
constraint. As shown in Fig.8, in general, the functional profile energies and structural profile
energies are closely related in such a way that the functional profile energies tend to decrease
along with the structural profile optimization process, while the structural profile energies also
tend to decrease as the functional profile is optimized. This positively correlated decrease of
functional profile energy and structural profile energy not only proves the close relationship
between functional and structural profiles, but also demonstrates the consistency between
functional and structural optimization, laying down the foundation of the joint optimization, whose results are detailed in the following section.
3.4
Optimization using anatomical, structural and functional connectivity profiles
In this section, we used all the constraints in Eq. (1) to optimize the individual locations of all
ROIs in the working memory system. Ten runs of the optimization were performed using
random initial ROI locations that met the anatomical constraint. The weighting parameter λ equaled 0.5 for all these runs. Starting and ending temperatures for the simulated annealing algorithm were 8 and 0.05, respectively; the Boltzmann constant K = 1. As we can see from Fig.9, most runs
started to converge at step 24, and the convergence energy is quite close for all runs. This
indicates that the simulated annealing algorithm provides a valid solution to our problem.
By visual inspection, most of the ROIs move to more reasonable and consistent locations after
the joint optimization. As an example, Fig.10 depicts the location movements of the ROI in
Fig. 6 for eight subjects. As we can see, the ROIs for these subjects share a similar anatomical
landmark, which appears to be the tip of the upper bank of the parieto-occipital sulcus. If the
initial ROI was not at this landmark, it moved to the landmark after the optimization, which
was the case for subjects 1, 4 and 7. The structural profiles of these ROIs are very similar to
Fig.6. The results in Fig. 10 indicate the significant improvement of ROI locations achieved by
the joint optimization procedure.
Fig.9. Convergence performance of the simulated annealing . Each run has 28 temperature
conditions.
Fig.10. The movement of right precuneus before (in red sphere) and after (in green sphere)
optimization for eight subjects. The "C"-shaped red dashed curve for each subject depicts a similar anatomical landmark across these subjects. The yellow arrows for subjects 1, 4 and 7 visualize the movement direction after optimization.
4
Conclusion
This paper presented a novel computational approach to optimize the locations of ROIs
identified from task-based fMRI. The group-wise consistency of functional and structural
connectivity patterns, and anatomical locations are jointly modeled and formulated in an
energy function, which is minimized by the simulated annealing optimization algorithm.
Experimental results demonstrate the optimized ROIs have more reasonable localizations, and
have significantly improved the consistency of structural and functional connectivity profiles
and morphological and anatomic profiles across subjects. Our future work includes extending
this framework to optimize other parameters of ROIs such as size and shape, and applying and
evaluating this methodology to the optimization of ROIs identified in other brain systems such
as the visual, auditory, language, attention, and emotion networks.
5
References
1. Friston, K., Modalities, modes, and models in functional neuroimaging. Science, vol. 326, no. 5951, 399-403 (2009).
2. Bharat B. Biswal, Toward discovery science of human brain function, PNAS, March 9, 2010, vol. 107, no. 10, 4734-4739.
3. Sporns O, Tononi G, Kötter R, The human connectome: A structural description of the human brain. PLoS Comput Biol. 2005 Sep; 1(4): e42.
4. Van Dijk KR, Hedden T, Venkataraman A, Evans KC, Lazar SW, Buckner RL, Intrinsic functional connectivity as a tool for human connectomics: theory, properties, and optimization. J Neurophysiol. 2010 Jan; 103(1): 297-321.
5. Hagmann P, et al., MR connectomics: Principles and challenges. J Neurosci Methods. 2010 Jan 22.
6. Friston K. J., et al., Dynamic causal modeling, NeuroImage, 19, 1273-1302, 2003.
7. http://www.humanconnectomeproject.org/
8. http://www.nitrc.org/projects/fcon_1000/
9. Goebel, R., et al., Investigating directed cortical interactions in time-resolved fMRI data using vector autoregressive modeling and Granger causality mapping. Magnetic Resonance Imaging, Volume 21, Issue 10, December 2003, Pages 1251-1261.
10. Harrison L, et al., Multivariate autoregressive modeling of fMRI time series, NeuroImage, Volume 19, Issue 4, August 2003, Pages 1477-1491.
11. Zang, Y., et al., "Regional homogeneity approach to fMRI data analysis," NeuroImage, 22(1): 394-400, 2004.
12. Jenkinson, M., Bannister, P., Brady, M., Smith, S., 2002. Improved optimization for the robust and accurate linear registration and motion correction of brain images. NeuroImage 17, 825-841.
13. Friedman, L., and Glover, G.H. (2006). Report on a Multicenter fMRI Quality Assurance Protocol. Journal of Magnetic Resonance in Imaging, 23(6): 827-839.
14. H.J. Jo, J.M. Lee, J.H. Kim, C.H. Choi, B.M. Gu, D.H. Kang, et al., Artificial shifting of fMRI activation localized by volume- and surface-based analyses, NeuroImage 40(3) (2008), pp. 1077-1089.
15. W. Ou, W.M. Wells III, and P. Golland. Combining Spatial Priors and Anatomical Information for fMRI Detection. Medical Image Analysis, 14(3): 318-331, 2010.
16. Tianming Liu, Hai Li, Kelvin Wong, Ashley Tarokh, Lei Guo, Stephen Wong, Brain Tissue Segmentation Based on DTI Data, NeuroImage, 38(1): 114-23, 2007.
17. Shen, D., et al., 2002. HAMMER: hierarchical attribute matching mechanism for elastic registration. IEEE Trans Med Imaging 21(11), 1421-39.
18. Li K, et al., Cortical surface based identification of brain networks using high spatial resolution resting state fMRI data, International Symposium on Biomedical Imaging (ISBI) 2010. DOI: 10.1109/ISBI.2010.5490089.
19. Passingham RE, et al., The anatomical basis of functional localization in the cortex. Nat Rev Neurosci. 3(8): 606-16. 2002.
20. Van Essen, D.: A tension-based theory of morphogenesis and compact wiring in the central nervous system. Nature 385: 313-318 (1997).
21. M.D. Fox and M.E. Raichle, "Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging", Nat Rev Neurosci 8: 700-711, 2007.
22. Van Dijk KR, Hedden T, Venkataraman A, Evans KC, Lazar SW, Buckner RL, Intrinsic functional connectivity as a tool for human connectomics: theory, properties, and optimization. J Neurophysiol. 2010 Jan; 103(1): 297-321.
23. V. Granville, et al., Simulated annealing: A proof of convergence. IEEE Transactions on PAMI 16(6): 652-656. June 1994.
24. Honey CJ, Sporns O, Cammoun L, Gigandet X, Thiran JP, Meuli R, Hagmann P. Predicting human resting-state functional connectivity from structural connectivity. PNAS, 106(6): 2035-40. 2009.
3,332 | 4,017 | New Adaptive Algorithms for Online Classification
Koby Crammer
Department of Electrical Enginering
The Technion
Haifa, 32000 Israel
[email protected]
Francesco Orabona
DSI
Universit`a degli Studi di Milano
Milano, 20135 Italy
[email protected]
Abstract
We propose a general framework for online learning for classification problems
with time-varying potential functions in the adversarial setting. This framework
allows one to design and prove relative mistake bounds for any generic loss function.
The mistake bounds can be specialized for the hinge loss, allowing us to recover and
improve the bounds of known online classification algorithms. By optimizing the
general bound we derive a new online classification algorithm, called NAROW,
that hybridly uses adaptive and fixed second order information. We analyze the
properties of the algorithm and illustrate its performance using a synthetic dataset.
1 Introduction
Linear discriminative online algorithms have been shown to perform very well on binary and multiclass labeling problems [10, 6, 14, 3]. These algorithms work in rounds, where at each round a
new instance is given and the algorithm makes a prediction. After the true class of the instance is
revealed, the learning algorithm updates its internal hypothesis. Often, such an update takes place
only on rounds where the online algorithm makes a prediction mistake or when the confidence in
the prediction is not sufficient. The aim of the classifier is to minimize the cumulative loss it suffers
due to its prediction, such as the total number of mistakes.
Until a few years ago, most of these algorithms used only first-order information of the input features. Recently [1, 8, 4, 12, 5, 9], researchers proposed to improve online learning algorithms by incorporating second order information. Specifically, the Second-Order Perceptron (SOP)
proposed by Cesa-Bianchi et al. [1] builds on the famous Perceptron algorithm with an additional
data-dependent time-varying "whitening" step. Confidence weighted learning (CW) [8, 4] and the
adaptive regularization of weights algorithm (AROW) [5] are motivated from an alternative view:
maintaining confidence in the weights of the linear models maintained by the algorithm. Both CW
and AROW use the input data to modify the weights as well as the confidence in them. CW and
AROW are motivated by the specific properties of natural-language-processing (NLP) data and
indeed were shown to perform very well in practice, and on NLP problems in particular. However,
the theoretical foundations of this empirical success were not known, especially when using only
the diagonal elements of the second order information matrix. Filling this gap is one contribution of
this paper.
In this paper we extend and generalize the framework for deriving algorithms and analyzing them
through a potential function [2]. Our framework contains as special cases the Second-Order Perceptron and a (variant of) AROW, while it can also be used to derive new algorithms based on other
loss functions.
For carefully designed algorithms, it is possible to bound the cumulative loss on any sequence of
samples, even adversarially chosen [2]. In particular, many of the recent analyses are based on the
online convex optimization framework, that focuses on minimizing the sum of convex functions.
Two common view-points for online convex optimization are of regularization [15] or primal-dual
progress [16, 17, 13]. Recently new bounds have been proposed for time-varying regularizations
in [18, 9], focusing on the general case of regression problems. The proof technique derived from
our framework extends the work of Kakade et al. [13] to support time varying potential functions.
We also show how the use of widely used classification losses, such as the hinge loss, allows us to derive
new powerful mistake bounds superior to existing bounds. Moreover the framework introduced
supports the design of aggressive algorithms, i.e. algorithms that update their hypothesis not only
when they make a prediction mistake.
Finally, current second order algorithms suffer from a common problem. All these algorithms maintain the cumulative second-moment of the input features, and its inverse, qualitatively speaking, is
used as a learning rate. Thus, if there is a single feature with large second-moment in the prefix of the
input sequence, its effective learning rate would drop to a relatively low value, and the learning algorithm will take more time to update its value. When the instances are ordered such that the value of
this feature seems to be correlated with the target label, such algorithms will set the value of the weight
corresponding to this feature to a wrong value and will decrease its associated learning rate to a low
value. This combination makes it hard to recover from the wrong value set to the weight associated
with this feature. Our final contribution is a new algorithm that adapts the way the second order
information is used. We call this algorithm Narrow Adaptive Regularization Of Weights (NAROW).
Intuitively, it interpolates its update rule from adaptive-second-order information to fixed-second-order information, to obtain a narrower decrease of the learning rate for commonly appearing features.
We derive a bound for this algorithm and illustrate its properties using synthetic data simulations.
2 Online Learning for Classification
We work in the online binary classification scenario where learning algorithms work in rounds.
At each round $t$, an instance $x_t \in \mathbb{R}^d$ is presented to the algorithm, which then predicts a label
$\hat{y}_t \in \{-1, +1\}$. Then, the correct label $y_t$ is revealed, and the algorithm may modify its hypothesis.
The aim of the online learning algorithm is to make as few mistakes as possible (on any sequence
of samples/labels $\{(x_t, y_t)\}_{t=1}^{T}$). In this paper we focus on linear prediction functions of the form
$\hat{y}_t = \mathrm{sign}(w_t^\top x_t)$.
We strive to design online learning algorithms for which it is possible to prove a relative mistakes
bound or a loss bound. A typical analysis bounds the cumulative loss the algorithm suffers,
$\sum_{t=1}^{T} \ell(w_t, x_t, y_t)$, with the cumulative loss of any classifier $u$ plus an additional penalty called
the regret, $R(u) + \sum_{t=1}^{T} \ell(u, x_t, y_t)$. Given that we focus on classification, we are more interested
in a relative mistakes bound, where we bound the number of mistakes of the learner with $R(u) + \sum_{t=1}^{T} \ell(u, x_t, y_t)$. Since the classifier $u$ is arbitrary, we can choose, in particular, the best classifier
that can be found in hindsight given all the samples. Often $R(\cdot)$ depends on a function measuring
the complexity of $u$ and the number of samples $T$, and $\ell$ is a non-negative loss function. Usually $\ell$
is chosen to be a convex upper bound of the 0/1 loss. We will also denote $\ell_t(u) = \ell(u, x_t, y_t)$.
In the following we denote by $\mathcal{M}$ the set of round indexes for which the algorithm performed a
mistake. We assume that the algorithm always updates its rule in such events. Similarly, we denote
by $\mathcal{U}$ the set of the margin error rounds, that is, rounds in which the algorithm updates its hypothesis
and the prediction is correct, but the loss $\ell_t(w_t)$ is different from zero. Their cardinalities will be
indicated with $M$ and $U$ respectively. Formally, $\mathcal{M} = \{t : \mathrm{sign}(w_t^\top x_t) \neq y_t \ \&\ w_t \neq w_{t+1}\}$,
and $\mathcal{U} = \{t : \mathrm{sign}(w_t^\top x_t) = y_t \ \&\ w_t \neq w_{t+1}\}$. An algorithm that updates its hypothesis only on
mistake rounds is called conservative (e.g. [3]). Following previous naming conventions [3], we call
aggressive an algorithm that updates its rule on rounds for which the loss $\ell_t(w_t)$ is different from
zero, even if its prediction was correct.
We now define a few basic concepts from convex analysis that will be used in the paper. Given a
convex function $f : X \to \mathbb{R}$, its sub-gradient $\partial f(v)$ at $v$ satisfies: $\forall u \in X$, $f(u) - f(v) \ge (u - v)^\top \partial f(v)$. The Fenchel conjugate of $f$, $f^* : S \to \mathbb{R}$, is defined by $f^*(u) = \sup_{v \in S} \big( v^\top u - f(v) \big)$.
A differentiable function $f : X \to \mathbb{R}$ is $\beta$-strongly convex w.r.t. a norm $\|\cdot\|$ if for any $u, v \in S$ and
$\lambda \in (0, 1)$, $f(\lambda u + (1-\lambda) v) \le \lambda f(u) + (1-\lambda) f(v) - \frac{\beta}{2} \lambda (1-\lambda) \|u - v\|^2$. Strong convexity
turns out to be a key property in the design of online learning algorithms.
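As a concrete instance of these definitions (our worked example, not from the paper), consider the quadratic function that will reappear in Section 4.2:
```latex
% f(x) = \tfrac{1}{2} x^\top A x with A \succ 0 is 1-strongly convex
% w.r.t. the norm \|x\|_f = \sqrt{x^\top A x}; its conjugate and dual norm are
f^*(\theta) = \tfrac{1}{2}\,\theta^\top A^{-1}\theta, \qquad
\|x\|_{f^*}^2 = x^\top A^{-1} x, \qquad
\nabla f^*(\theta) = A^{-1}\theta .
% Setting A = I recovers f(x) = \tfrac{1}{2}\|x\|_2^2, which is self-dual.
```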
3 General Algorithm and Analysis
We now introduce a general framework to design online learning algorithms, together with a general lemma
which serves as a tool to prove their relative regret bounds. Our algorithm builds on previous
algorithms for online convex programming, with one significant difference. Instead of using a fixed
link function as first order algorithms do, we allow a sequence of link functions $f_t(\cdot)$, one for each time
$t$. In a nutshell, the algorithm maintains a weight vector $\theta_t$. Given a new example it uses the current
link function $f_t$ to compute a prediction weight vector $w_t$. After the target label is received it sets
the new weight $\theta_{t+1}$ to be the sum of $\theta_t$ and minus the gradient of the loss at $w_t$. The algorithm is
summarized in Fig. 1.
The following lemma is a generalization of Corollary 7 in [13] and Corollary 3 in [9], for online
learning. All the proofs can be found in the Appendix.
Lemma 1. Let $f_t$, $t = 1, \ldots, T$ be $\beta_t$-strongly convex functions with respect to the norms
$\|\cdot\|_{f_1}, \ldots, \|\cdot\|_{f_T}$ over a set $S$ and let $\|\cdot\|_{f_i^*}$ be the respective dual norms. Let $f_0(0) = 0$,
and $x_1, \ldots, x_T$ be an arbitrary sequence of vectors in $\mathbb{R}^d$. Assume that the algorithm in Fig. 1 is run
on this sequence with the functions $f_i$. Then, for any $u \in S$, and any $\lambda > 0$ we have
$$\frac{1}{\lambda}\sum_{t=1}^{T} \eta_t z_t^\top \big( w_t - \lambda u \big) \le \frac{f_T(\lambda u)}{\lambda} + \sum_{t=1}^{T}\left( \frac{\eta_t^2 \|z_t\|_{f_t^*}^2}{2\lambda \beta_t} + \frac{1}{\lambda}\big( f_t^*(\theta_t) - f_{t-1}^*(\theta_t) \big) \right).$$
This Lemma can appear difficult to interpret, but we now show that it is straightforward to use
the lemma to recover known bounds of different online learning algorithms. In particular we
can state the following Corollary that holds for any convex loss $\ell$ that upper bounds the 0/1 loss.

Figure 1: Prediction algorithm.
1: Input: A series of strongly convex functions $f_1, \ldots, f_T$.
2: Initialize: $\theta_1 = 0$
3: for $t = 1, 2, \ldots, T$ do
4:   Receive $x_t$
5:   Set $w_t = \nabla f_t^*(\theta_t)$
6:   Predict $\hat{y}_t = \mathrm{sign}(w_t^\top x_t)$
7:   Receive $y_t$
8:   if $\ell_t(w_t) > 0$ then
9:     $z_t = \partial \ell_t(w_t)$
10:    $\theta_{t+1} = \theta_t - \eta_t z_t$
11:  else
12:    $\theta_{t+1} = \theta_t$
13:  end if
14: end for

Corollary 1. Define $B = \sum_{t=1}^{T} \big( f_t^*(\theta_t) - f_{t-1}^*(\theta_t) \big)$. Under
the hypothesis of Lemma 1, if $\ell$ is convex and it upper bounds
the 0/1 loss, and $\eta_t = \eta$, then for any $u \in S$ the algorithm
in Fig. 1 has the following bound on the maximum number of
mistakes $M$,
$$M \le \sum_{t=1}^{T} \ell_t(u) + \frac{B}{\eta} + \frac{f_T(u)}{\eta} + \eta \sum_{t=1}^{T} \frac{\|z_t\|_{f_t^*}^2}{2\beta_t}. \qquad (1)$$
Moreover, if $f_t(x) \le f_{t+1}(x)$, $\forall x \in S$, $t = 0, \ldots, T-1$, then $B \le 0$.

A similar bound has been recently presented in [9] as a regret bound. Yet, there are two differences. First, our analysis
bounds the number of mistakes, a more natural quantity in the classification setting, rather than a general loss function.
Second, we retain the additional term $B$, which may be negative and thus can provide a better bound. Moreover, to
choose the optimal tuning of $\eta$ we would need to know quantities that are unknown to the learner. We could
use adaptive regularization methods, such as the ones proposed in [16, 18], but in this way we would lose
the possibility of proving mistake bounds for second order algorithms, like the ones in [1, 5]. In the
next Section we show how to obtain bounds with an automatic tuning, using an additional assumption on the loss function.
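To make the template concrete, the following Python sketch (ours; the function name and signature are illustrative, not from the paper) instantiates Fig. 1 with the hinge loss. With the identity link, i.e. $f_t(x) = \frac{1}{2}\|x\|^2$ so that $\nabla f_t^*(\theta) = \theta$, the conservative variant of the loop reduces to the classical Perceptron.
```python
import numpy as np

def run_prediction_algorithm(X, Y, grad_f_star, eta=1.0):
    # Sketch of the algorithm in Fig. 1 with the hinge loss.
    # grad_f_star(theta, t) returns w_t = grad f_t^*(theta_t).
    theta = np.zeros(X.shape[1])
    mistakes = 0
    for t, (x, y) in enumerate(zip(X, Y)):
        w = grad_f_star(theta, t)            # step 5: link function
        mistakes += np.sign(w @ x) != y      # step 6: predict, count errors
        if 1.0 - y * (w @ x) > 0.0:          # hinge loss is positive: update
            z = -y * x                       # a subgradient of the loss at w
            theta = theta - eta * z          # steps 9-10
    return theta, mistakes

# Identity link: theta, m = run_prediction_algorithm(X, Y, lambda th, t: th)
```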
3.1 Better bounds for linear losses
The hinge loss, $\ell(u, x_t, y_t) = \max(1 - y_t u^\top x_t, 0)$, is a very popular evaluation metric in classification. It has been used, for example, in Support Vector Machines [7] as well as in many online
learning algorithms [3]. It has also been extended to the multiclass case [3]. Often mistake bounds
are expressed in terms of the hinge loss. One reason is that it is a tighter upper bound of the 0/1 loss
compared to other losses, such as the squared hinge loss. However, this loss is particularly interesting for
us, because it allows an automatic tuning of the bound in (1). In particular it is easy to verify that it
satisfies the following condition
$$\ell(u, x_t, y_t) \ge 1 + u^\top \partial \ell_t(w_t), \qquad \forall u \in S,\ \forall w_t : \ell_t(w_t) > 0. \qquad (2)$$
Thanks to this condition we can state the following Corollary for any loss satisfying (2).

Corollary 2. Under the hypothesis of Lemma 1, if $f_T(\lambda u) \le \lambda^2 f_T(u)$, and $\ell$ satisfies (2), then for
any $u \in S$, and any $\lambda > 0$ we have
$$\sum_{t \in \mathcal{M} \cup \mathcal{U}} \eta_t \le L + \lambda f_T(u) + \frac{1}{\lambda}\left( B + \sum_{t \in \mathcal{M} \cup \mathcal{U}} \left( \frac{\eta_t^2}{2\beta_t}\|z_t\|_{f_t^*}^2 - \eta_t w_t^\top z_t \right) \right),$$
where $L = \sum_{t \in \mathcal{M} \cup \mathcal{U}} \eta_t \ell_t(u)$, and $B = \sum_{t=1}^{T} \big( f_t^*(\theta_t) - f_{t-1}^*(\theta_t) \big)$. In particular, choosing the
optimal $\lambda$, we obtain
$$\sum_{t \in \mathcal{M} \cup \mathcal{U}} \eta_t \le L + \sqrt{2 f_T(u)} \sqrt{2B + \sum_{t \in \mathcal{M} \cup \mathcal{U}} \left( \frac{\eta_t^2}{\beta_t}\|z_t\|_{f_t^*}^2 - 2 \eta_t w_t^\top z_t \right)}. \qquad (3)$$
The intuition and motivation behind this Corollary is that a classification algorithm should be independent of the particular scaling of the hyperplane. In other words, $w_t$ and $\lambda w_t$ (with $\lambda > 0$) make
exactly the same predictions, because only the sign of the prediction matters. Exactly this independence from a scale factor allows us to improve the mistake bound (1) to the bound of (3). Hence, when
(2) holds, the update of the algorithm becomes somewhat independent of the scale factor, and we
have the better bound. Finally, note that when the hinge loss is used, the vector $\theta_t$ is updated as in
an aggressive version of the Perceptron algorithm, with a possibly variable learning rate.
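For completeness, the "optimal $\lambda$" step used here and repeatedly below is the elementary minimization of $\lambda a + b/\lambda$; the following short derivation is our addition:
```latex
% Minimizing g(\lambda) = \lambda a + b/\lambda over \lambda > 0 (with a, b > 0):
% g'(\lambda) = a - b/\lambda^2 = 0  \implies  \lambda^* = \sqrt{b/a},
% \quad g(\lambda^*) = 2\sqrt{ab} = \sqrt{2a}\,\sqrt{2b}.
% With a = f_T(u) and
% b = B + \sum_t \big(\eta_t^2 \|z_t\|_{f_t^*}^2/(2\beta_t) - \eta_t w_t^\top z_t\big),
% this turns the first bound of Corollary 2 into the square-root bound (3).
```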
4 New Bounds for Existing Algorithms

We now show the versatility of our framework, proving better bounds for some known first order
and second order algorithms.

4.1 An Aggressive p-norm Algorithm
We can use the algorithm in Fig. 1 to obtain an aggressive version of the p-norm algorithm [11]. Set
$f_t(u) = \frac{1}{2(q-1)}\|u\|_q^2$, which is 1-strongly convex w.r.t. the norm $\|\cdot\|_q$. The dual norm of $\|\cdot\|_q$ is
$\|\cdot\|_p$, where $1/p + 1/q = 1$. Moreover set $\eta_t = 1$ in mistake rounds, so using the second
bound of Corollary 2, and defining $R$ such that $\|x_t\|_p^2 \le R^2$, we have
$$M \le L + \sqrt{\frac{\|u\|_q^2}{q-1}} \sqrt{\sum_{t \in \mathcal{M}\cup\mathcal{U}} \big( \eta_t^2 \|x_t\|_p^2 + 2\eta_t y_t w_t^\top x_t \big)} - \sum_{t\in\mathcal{U}} \eta_t$$
$$\le L + \sqrt{\frac{\|u\|_q^2}{q-1}} \sqrt{M R^2 + \sum_{t \in \mathcal{U}} \big( \eta_t^2 \|x_t\|_p^2 + 2\eta_t y_t w_t^\top x_t \big)} - \sum_{t\in\mathcal{U}} \eta_t.$$
Solving for $M$ we have
$$M \le L + \frac{\|u\|_q^2 R^2}{2(q-1)} + \frac{\|u\|_q R}{\sqrt{q-1}} \sqrt{\frac{\|u\|_q^2 R^2}{4(q-1)} + L + D} - \sum_{t\in\mathcal{U}} \eta_t, \qquad (4)$$
where $L = \sum_{t\in\mathcal{M}\cup\mathcal{U}} \eta_t \ell_t(u)$, and $D = \sum_{t\in\mathcal{U}} \left( \frac{\eta_t^2\|x_t\|_p^2 + 2\eta_t y_t w_t^\top x_t}{R^2} - \eta_t \right)$. We still have the
freedom to set $\eta_t$ in margin error rounds. If we set $\eta_t = 0$, the algorithm of Fig. 1 becomes the
p-norm algorithm and we recover its best known bound [11]. However, if $0 \le \eta_t \le \min\left( \frac{R^2 - 2 y_t w_t^\top x_t}{\|x_t\|_p^2}, 1 \right)$,
we have that $D$ is negative, and $L \le \sum_{t\in\mathcal{M}\cup\mathcal{U}} \ell_t(u)$. Hence the aggressive updates give us a better
bound, thanks to the last term that is subtracted in the bound.

In the particular case of $p = q = 2$ we recover the Perceptron algorithm. In particular, the minimum
of $D$, under the constraint $\eta_t \le 1$, can be found setting $\eta_t = \min\left( \frac{R^2/2 - y_t w_t^\top x_t}{\|x_t\|^2}, 1 \right)$. If $R$ is equal
to $\sqrt{2}$, we recover the PA-I update rule, when $C = 1$. However, note that the mistake bound in (4) is
better than the one proved for PA-I in [3] and the ones in [16]. Hence the bound (4) provides the first
theoretical justification for the good performance of PA-I, and it can be seen as general evidence
supporting aggressive updates versus conservative ones.
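The closed-form rate for the $p = q = 2$ case can be written down directly; the following sketch is ours and the function name is illustrative:
```python
import numpy as np

def aggressive_eta(w, x, y, R2=2.0):
    # Learning rate choice minimizing D for p = q = 2 (sketch of the
    # discussion above; with R2 = 2 it matches the PA-I rate for C = 1).
    margin = y * (w @ x)
    if margin <= 0.0:
        return 1.0                          # mistake round: eta_t = 1
    if 1.0 - margin <= 0.0:
        return 0.0                          # zero hinge loss: no update
    return float(np.clip((R2 / 2.0 - margin) / (x @ x), 0.0, 1.0))
```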
4.2 Second Order Algorithms
We now show how to derive, in a simple way, the bound of the SOP [1] and the one of AROW [5]. Set $f_t(x) = \frac{1}{2} x^\top A_t x$,
where $A_t = A_{t-1} + \frac{x_t x_t^\top}{r}$, $r > 0$ and $A_0 = I$. The functions
$f_t$ are 1-strongly convex w.r.t. the norms $\|x\|_{f_t}^2 = x^\top A_t x$.
The dual functions of $f_t(x)$, $f_t^*(x)$, are equal to $\frac{1}{2} x^\top A_t^{-1} x$,
while $\|x\|_{f_t^*}^2$ is $x^\top A_t^{-1} x$. Denote by $\chi_t = x_t^\top A_{t-1}^{-1} x_t$ and
$m_t = y_t x_t^\top A_{t-1}^{-1} \theta_t$. With these definitions it is easy to see
that the conservative version of the algorithm corresponds directly to SOP. The aggressive version corresponds to AROW,
with a minor difference. In fact, the prediction of the algorithm in Fig. 1 specialized to this case is $y_t w_t^\top x_t = m_t \frac{r}{r + \chi_t}$;
on the other hand, AROW predicts with $m_t$. The sign of the
predictions is the same, but here the aggressive version is updating when $m_t \frac{r}{r + \chi_t} \le 1$, while AROW updates if $m_t \le 1$.

[Figure 2 appears here: word counts vs. word rank on two sentiment datasets.]
Figure 2: NLP Data: the number of words vs. the word rank on two sentiment data sets.

To derive the bound, observe that using the Woodbury matrix identity we have
$$f_t^*(\theta_t) - f_{t-1}^*(\theta_t) = -\frac{\big(x_t^\top A_{t-1}^{-1} \theta_t\big)^2}{2\big(r + x_t^\top A_{t-1}^{-1} x_t\big)} = -\frac{m_t^2}{2(r+\chi_t)}.$$
Using the second bound in Corollary 2, and setting $\eta_t = 1$, we have
$$M + U \le L + \sqrt{u^\top A_T u} \sqrt{\sum_{t\in\mathcal{M}\cup\mathcal{U}} \left( x_t^\top A_t^{-1} x_t + 2 y_t w_t^\top x_t - \frac{m_t^2}{r+\chi_t} \right)}$$
$$\le L + \sqrt{\|u\|^2 + \frac{1}{r}\sum_{t\in\mathcal{M}\cup\mathcal{U}} (u^\top x_t)^2} \sqrt{r \log\big(\det(A_T)\big) + \sum_{t\in\mathcal{M}\cup\mathcal{U}} \left( 2 y_t w_t^\top x_t - \frac{m_t^2}{r+\chi_t} \right)}$$
$$= L + \sqrt{r\|u\|^2 + \sum_{t\in\mathcal{M}\cup\mathcal{U}} (u^\top x_t)^2} \sqrt{\log\big(\det(A_T)\big) + \sum_{t\in\mathcal{M}\cup\mathcal{U}} \frac{m_t(2r - m_t)}{r(r+\chi_t)}}.$$

This bound recovers the SOP's one in the conservative case, and slightly improves the one of AROW
in the aggressive case. It would be possible to improve the AROW bound even more, setting $\eta_t$ to a
value different from 1 in margin error rounds. We leave the details for a longer version of this paper.
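As an illustration (ours, not the authors' reference implementation), one round of this second order algorithm can be implemented with the Sherman-Morrison identity, assuming $A_t^{-1}$ is maintained explicitly:
```python
import numpy as np

def second_order_step(theta, A_inv, x, y, r=1.0, aggressive=True):
    # One round of the second order algorithm of this section (a sketch).
    # Maintaining A_t^{-1} via Sherman-Morrison costs O(d^2) per round,
    # instead of a full matrix inversion.
    chi = x @ A_inv @ x                   # chi_t = x_t^T A_{t-1}^{-1} x_t
    m = y * (x @ A_inv @ theta)           # m_t = y_t x_t^T A_{t-1}^{-1} theta_t
    margin = m * r / (r + chi)            # y_t w_t^T x_t in this framework
    if margin <= (1.0 if aggressive else 0.0):
        theta = theta + y * x             # theta update with eta_t = 1
        Ax = A_inv @ x                    # A_t = A_{t-1} + x x^T / r, hence:
        A_inv = A_inv - np.outer(Ax, Ax) / (r + chi)
    return theta, A_inv
```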
4.3 Diagonal updates for AROW
Both CW and AROW have an efficient version that uses diagonal matrices instead of full ones. In this
case the complexity of the algorithm becomes linear in the dimension. Here we prove a mistake bound
for the diagonal version of AROW, using Corollary 2. We denote $D_t = \mathrm{diag}\{A_t\}$, where $A_t$ is
defined as in SOP and AROW, and $f_t(x) = \frac{1}{2} x^\top D_t x$. Setting $\eta_t = 1$, and using the second bound
in Corollary 2 and Lemma 12 in [9], we have¹
$$M + U \le \sum_{t\in\mathcal{M}\cup\mathcal{U}} \ell_t(u) + \sqrt{u^\top D_T u} \sqrt{r \sum_{i=1}^{d} \log\left( \frac{\sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2}{r} + 1 \right) + 2U}$$
$$= \sum_{t\in\mathcal{M}\cup\mathcal{U}} \ell_t(u) + \sqrt{\|u\|^2 + \frac{1}{r}\sum_{i=1}^{d} u_i^2 \sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2} \sqrt{r \sum_{i=1}^{d} \log\left( \frac{\sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2}{r} + 1 \right) + 2U}.$$
¹We did not optimize the constant multiplying $U$ in the bound.

The presence of a mistake bound allows us to theoretically analyze the cases where this algorithm
could be advantageous with respect to a simple Perceptron. In particular, for NLP data the features are
binary and it is often the case that most of the features are zero most of the time. On the other hand,
these "rare" features are usually the most informative ones (e.g. [8]). Fig. 2 shows the number of
times each feature (word) appears in two sentiment datasets vs. the word rank. Clearly there are few
very frequent words and many rare words. These exact properties were used to originally derive the
CW algorithm. Our analysis justifies this derivation. Concretely, the above considerations lead us
to think that the optimal hyperplane $u$ will be such that
$$\sum_{i=1}^{d} u_i^2 \sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2 \approx \sum_{i\in I} u_i^2 \sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2 \le \sum_{i\in I} u_i^2\, s \le s \|u\|^2,$$
where $I$ is the set of the informative and rare features and $s$ is the maximum number of times these
features appear in the sequence. In general, each time that $\sum_{i=1}^{d} u_i^2 \sum_{t\in\mathcal{M}\cup\mathcal{U}} x_{t,i}^2 \le s\|u\|^2$ with
$s$ small enough, it is possible to show that, with an optimal tuning of $r$, this bound is better than the
Perceptron's one. In particular, using a proof similar to the one in [1], in the conservative version of
this algorithm, it is enough to have $s < \frac{M R^2}{2d}$, and to set $r = \frac{s M R^2}{M R^2 - 2sd}$.
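A minimal sketch of one round of the diagonal variant follows (ours; the names are illustrative):
```python
import numpy as np

def diag_arow_step(theta, d_diag, x, y, r=1.0):
    # One round of the diagonal variant (a sketch): only diag{A_t} is kept,
    # so memory and time per round are O(d). Rarely-active features retain
    # a large effective learning rate 1 / D_{t,ii}, matching the NLP
    # intuition above.
    w = theta / d_diag                  # w_t = D_t^{-1} theta_t
    if y * (w @ x) <= 1.0:              # mistake or margin error round
        theta = theta + y * x
        d_diag = d_diag + x * x / r     # diag{A_t} = diag{A_{t-1}} + x_t^2/r
    return theta, d_diag
```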
5 A New Adaptive Second Order Algorithm
We now introduce a new algorithm with an update rule that interpolates from adaptive-second-order information to fixed-second-order information. We start from the first bound in Corollary 2. We set
$f_t(x) = \frac{1}{2} x^\top A_t x$, where $A_t = A_{t-1} + \frac{x_t x_t^\top}{r_t}$, and $A_0 = I$. This is similar to the regularization used
in AROW and SOP, but here we have $r_t > 0$ changing over time. Again, denote $\chi_t = x_t^\top A_{t-1}^{-1} x_t$,
and set $\eta_t = 1$. With these choices, we obtain the bound
$$M + U \le \sum_{t\in\mathcal{M}\cup\mathcal{U}} \ell_t(u) + \frac{\lambda \|u\|^2}{2} + \sum_{t\in\mathcal{M}\cup\mathcal{U}} \left( \frac{\lambda (u^\top x_t)^2}{2 r_t} + \frac{\chi_t r_t}{2\lambda(r_t+\chi_t)} - \frac{m_t(2 r_t - m_t)}{2\lambda(r_t+\chi_t)} \right),$$
that holds for any $\lambda > 0$ and any choice of $r_t > 0$. We would like to choose $r_t$ at each step to
minimize the bound, in particular to have a small value of the sum $\frac{\lambda (u^\top x_t)^2}{r_t} + \frac{\chi_t r_t}{\lambda(r_t+\chi_t)}$. Although
we do not know the values of $(u^\top x_t)^2$ and $\lambda$, we can still have a good trade-off setting $r_t = \frac{\chi_t}{b \chi_t - 1}$
when $\chi_t \ge \frac{1}{b}$ and $r_t = +\infty$ otherwise. Here $b$ is a parameter. With this choice we have that
$\frac{\chi_t r_t}{r_t + \chi_t} = \frac{1}{b}$, and $\frac{(u^\top x_t)^2}{r_t} = \frac{b \chi_t (u^\top x_t)^2}{r_t + \chi_t}$, when $\chi_t \ge \frac{1}{b}$. Hence we have
$$M + U - \sum_{t\in\mathcal{M}\cup\mathcal{U}} \ell_t(u) \le \frac{\lambda\|u\|^2}{2} + \sum_{t : b\chi_t > 1} \frac{\lambda b \chi_t (u^\top x_t)^2}{2(r_t+\chi_t)} + \frac{1}{2\lambda b} \sum_{t : b\chi_t > 1} 1 + \frac{1}{2\lambda}\sum_{t : b\chi_t \le 1} \chi_t - \sum_{t\in\mathcal{M}\cup\mathcal{U}} \frac{m_t(2r_t - m_t)}{2\lambda(r_t+\chi_t)}$$
$$\le \frac{\lambda\|u\|^2}{2} + \lambda b \|u\|^2 R^2 \sum_{t : b\chi_t > 1} \frac{\chi_t}{2(r_t+\chi_t)} + \frac{1}{2\lambda}\sum_{t\in\mathcal{M}\cup\mathcal{U}} \min\left(\frac{1}{b}, \chi_t\right) - \sum_{t\in\mathcal{M}\cup\mathcal{U}} \frac{m_t(2r_t - m_t)}{2\lambda(r_t+\chi_t)}$$
$$\le \frac{\lambda\|u\|^2}{2} + \frac{\lambda}{2}\, b R^2 \|u\|^2 \log\det(A_T) + \frac{1}{2\lambda}\sum_{t\in\mathcal{M}\cup\mathcal{U}} \min\left(\frac{1}{b}, \chi_t\right) - \sum_{t\in\mathcal{M}\cup\mathcal{U}} \frac{m_t(2r_t - m_t)}{2\lambda(r_t+\chi_t)},$$
where in the last inequality we used an extension of Lemma 4 in [5] to varying values of $r_t$. Tuning
$\lambda$ we have
$$M + U \le \sum_{t\in\mathcal{M}\cup\mathcal{U}} \ell_t(u) + \|u\| R \sqrt{\frac{1}{b R^2} + \log\det(A_T)} \sqrt{\sum_{t\in\mathcal{M}\cup\mathcal{U}} \min(1, b\chi_t) - \sum_{t\in\mathcal{M}\cup\mathcal{U}} \frac{b\, m_t(2 r_t - m_t)}{r_t+\chi_t}}.$$
This algorithm interpolates between a second order algorithm with adaptive second order information, like AROW, and one with fixed second order information. Even the bound is in between
these two worlds. In particular, the matrix $A_t$ is updated only if $\chi_t \ge \frac{1}{b}$, preventing its eigenvalues
from growing too much, as in AROW/SOP. We thus call this algorithm NAROW, since it is a new
adaptive algorithm which narrows the range of possible eigenvalues of the matrix $A_t$. We illustrate
its properties empirically in the next section.
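The following sketch (ours; helper names are illustrative) makes the interpolation explicit:
```python
import numpy as np

def narow_step(theta, A_inv, x, y, b=1.0):
    # One round of NAROW (a sketch). The only change w.r.t. the AROW-style
    # step is the choice of r_t: adaptive when b * chi_t > 1, frozen otherwise.
    chi = x @ A_inv @ x
    m = y * (x @ A_inv @ theta)
    if b * chi > 1.0:
        r = chi / (b * chi - 1.0)       # keeps chi_t r_t / (r_t + chi_t) = 1/b
        margin = m * r / (r + chi)
        update_A = True
    else:
        r = np.inf                      # r_t = +infinity: A_t = A_{t-1}
        margin = m
        update_A = False
    if margin <= 1.0:
        theta = theta + y * x
        if update_A:                    # Sherman-Morrison update of A_t^{-1}
            Ax = A_inv @ x
            A_inv = A_inv - np.outer(Ax, Ax) / (r + chi)
    return theta, A_inv
```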
[Figure 3 appears here: three rows of four panels. The top row shows the four training-set orderings in the first two input coordinates (color-coded from blue to red); the middle and bottom rows plot the cumulative number of mistakes against the number of examples for PA, AROW, NAROW and AdaGrad, without and with label noise respectively.]
Figure 3: Top: four sequences used for training; the colors represent the ordering in each sequence, from blue
through yellow to red. Middle: cumulative number of mistakes of the four algorithms on data with no label noise.
Bottom: results when training using data with 10% label noise.
6 Experiments
We illustrate the characteristics of our algorithm NAROW using synthetic data generated in a
manner similar to previous work [4]. We repeat its properties for completeness. We generated 5,000
points in $\mathbb{R}^{20}$ where the first two coordinates were drawn from a 45° rotated Gaussian distribution
with standard deviations 1 and 10. The remaining 18 coordinates were drawn from independent
Gaussian distributions $\mathcal{N}(0, 8.5)$. Each point's label depended on the first two coordinates using
a separator parallel to the long axis of the ellipsoid, yielding a linearly separable set. Finally, we
ordered the training set in four different ways: from easy examples to hard examples (measured by
the signed distance to the separating hyperplane), from hard examples to easy examples, ordered by
the signed value of the first feature, and by the signed value of the third (noisy) feature, that is, by
$x_i \cdot y$ for $i = 1$ and $i = 3$, respectively. An illustration of these orderings appears in the top row of
Fig. 3; the colors code the ordering of points from blue via yellow to red (last points). We evaluated
four algorithms: version I of the passive-aggressive (PA-I) algorithm [3], AROW [5], AdaGrad [9]
and NAROW. All algorithms, except AdaGrad, have one parameter to be tuned, while AdaGrad has
two. These parameters were chosen on a single random set, and the plots summarizes the results
averaged over 100 repetitions.
The second row of Fig. 3 summarizes the cumulative number of mistakes averaged over 100 repetitions and the third row shows the cumulative number of mistakes where 10% artificial label noise
was used. (Mistakes are counted using the unnoisy labels.)
Focusing on the left plot, we observe that all the second order algorithms outperform the single
first order algorithm - PA-I. All algorithms make few mistakes when receiving the first half of the
data - the easy examples. Then all algorithms start to make more mistakes - PA-I the most, then
AdaGrad and closely following NAROW, and AROW the least. In other words, AROW was able to
converge faster to the target separating hyperplane just using ?easy? examples which are far from
the separating hyperplane, then NAROW and AdaGrad, with PA-I being the worst in this aspect.
The second plot from the left, showing the results for ordering the examples from hard to easy. All
algorithms follow a general trend of making mistakes in a linear rate and then stop making mistakes
when the data is easy and there are many possible classifiers that can predict correctly. Clearly,
7
AROW and NAROW stop making mistakes first, then AdaGrad and PA-I last. A similar trend can
be found in the noisy dataset, with each algorithm making relatively more mistakes.
The third and fourth columns tell a similar story, although the plots in the third column summarize
results when the instances are ordered using the first feature (which is informative with the second)
and the plots in the fourth column summarize when the instances are ordered using the third uninformative feature. In both cases, all algorithms do not make many mistakes in the beginning, then at
some point, close to the middle of the input sequence, they start making many mistakes for a while,
and then they converge. In terms of total performance: PA-I makes more mistakes, then AdaGrad,
AROW and NAROW. However, NAROW starts to make many mistakes before the other algorithms
and takes more "examples" to converge before it stops making mistakes. This phenomenon is further
shown in the bottom plots where label noise is injected.
We hypothesize that this relation is due to the fact that NAROW does not let the eigenvalues of the
matrix A to grow unbounded. Since its inverse is proportional to the effective learning rate, it means
that it does not allow the learning rate to drop too low as opposed to AROW and even to some extent
AdaGrad.
7 Conclusion
We presented a framework for online convex classification, specializing it for particular losses, such as the
hinge loss. This general tool allows one to design theoretically motivated online classification algorithms
and to prove their relative mistake bounds. In particular it supports the analysis of aggressive algorithms, i.e. algorithms that update their hypothesis not only
when they make a prediction mistake. Our framework also provides a previously missing bound for AROW
with diagonal matrices. We have shown its utility by proving better bounds for known online algorithms, and by proposing a new algorithm, called
NAROW. This is a hybrid between adaptive second order algorithms, like AROW and SOP, and a
static second order one. We have validated it using synthetic datasets, showing its robustness to
malicious orderings of the samples and comparing it with other state-of-the-art algorithms. Future work will
focus on exploring the new possibilities offered by our framework and on testing NAROW on real
world data.
Acknowledgments We thank Nicolò Cesa-Bianchi for his helpful comments. Francesco Orabona was
sponsored by the PASCAL2 NoE under EC grant no. 216886. Koby Crammer is a Horev Fellow, supported by
the Taub Foundations. This work was also supported by the German-Israeli Foundation grant GIF-2209-1912.
A Appendix
Proof of Lemma 1. Define by $f_t^*$ the Fenchel dual of $f_t$, and $\Delta_t = f_t^*(\theta_{t+1}) - f_{t-1}^*(\theta_t)$. We have
$\sum_{t=1}^{T} \Delta_t = f_T^*(\theta_{T+1}) - f_0^*(\theta_1) = f_T^*(\theta_{T+1})$. Moreover we have that
$$\Delta_t = f_t^*(\theta_{t+1}) - f_t^*(\theta_t) + f_t^*(\theta_t) - f_{t-1}^*(\theta_t) \le f_t^*(\theta_t) - f_{t-1}^*(\theta_t) - \eta_t z_t^\top \nabla f_t^*(\theta_t) + \frac{\eta_t^2}{2\beta_t}\|z_t\|_{f_t^*}^2,$$
where we used Theorem 6 in [13]. Moreover, using the Fenchel-Young inequality, we have
$\frac{1}{\lambda}\sum_{t=1}^{T} \Delta_t = \frac{1}{\lambda} f_T^*(\theta_{T+1}) \ge u^\top \theta_{T+1} - \frac{1}{\lambda} f_T(\lambda u) = -\sum_{t=1}^{T} \eta_t u^\top z_t - \frac{1}{\lambda} f_T(\lambda u)$. Hence, putting it all together, we have
$$-\sum_{t=1}^{T} \eta_t u^\top z_t - \frac{1}{\lambda} f_T(\lambda u) \le \frac{1}{\lambda}\sum_{t=1}^{T} \Delta_t \le \frac{1}{\lambda}\sum_{t=1}^{T}\left( f_t^*(\theta_t) - f_{t-1}^*(\theta_t) - \eta_t w_t^\top z_t + \frac{\eta_t^2}{2\beta_t}\|z_t\|_{f_t^*}^2 \right),$$
where we used the definition of $w_t$ in Algorithm 1.
Proof of Corollary 1. By convexity, $\ell(w_t, x_t, y_t) - \ell(u, x_t, y_t) \le z_t^\top (w_t - u)$, so setting $\lambda = 1$
in Lemma 1 we have the stated bound. The additional statement on $B$ is proved using Lemma 12
in [16]: using it, we have that $f_t(x) \le f_{t+1}(x)$ implies that $f_t^*(x) \ge f_{t+1}^*(x)$, so we have that $B \le 0$.
Proof of Corollary 2. Lemma 1, the condition on the loss (2), and the hypothesis on $f_T$ give us
$$\sum_{t=1}^{T} \eta_t \big(1 - \ell_t(u)\big) \le -\sum_{t=1}^{T} \eta_t u^\top z_t \le \lambda f_T(u) + \frac{1}{\lambda}\left( \sum_{t=1}^{T} \frac{\eta_t^2 \|z_t\|_{f_t^*}^2}{2\beta_t} + B - \sum_{t=1}^{T} \eta_t z_t^\top w_t \right).$$
Note that $\lambda$ is free, so choosing its optimal value we get the second bound.
References
[1] N. Cesa-Bianchi, A. Conconi, and C. Gentile. A second-order Perceptron algorithm. SIAM Journal on Computing, 34(3):640-668, 2005.
[2] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[3] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551-585, 2006.
[4] K. Crammer, M. Dredze, and F. Pereira. Exact convex confidence-weighted learning. Advances in Neural Information Processing Systems, 22, 2008.
[5] K. Crammer, A. Kulesza, and M. Dredze. Adaptive regularization of weight vectors. Advances in Neural Information Processing Systems, 23, 2009.
[6] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951-991, 2003.
[7] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, 2000.
[8] M. Dredze, K. Crammer, and F. Pereira. Online confidence-weighted learning. Proceedings of the 25th International Conference on Machine Learning, 2008.
[9] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Technical Report 2010-24, UC Berkeley Electrical Engineering and Computer Science, 2010. Available at http://cs.berkeley.edu/~jduchi/projects/DuchiHaSi10.pdf.
[10] Y. Freund and R. E. Schapire. Large margin classification using the Perceptron algorithm. Machine Learning, pages 277-296, 1999.
[11] C. Gentile. The robustness of the p-norm algorithms. Machine Learning, 53(3):265-299, 2003.
[12] E. Hazan and S. Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. In Proc. of the 21st Conference on Learning Theory, 2008.
[13] S. Kakade, S. Shalev-Shwartz, and A. Tewari. On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization. Technical report, TTI, 2009. http://www.cs.huji.ac.il/~shais/papers/KakadeShalevTewari09.pdf.
[14] J. Kivinen, A. Smola, and R. Williamson. Online learning with kernels. IEEE Trans. on Signal Processing, 52(8):2165-2176, 2004.
[15] A. Rakhlin and A. Tewari. Lecture notes on online learning. Technical report, 2008. Available at http://www-stat.wharton.upenn.edu/~rakhlin/papers/online_learning.pdf.
[16] S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, The Hebrew University, 2007.
[17] S. Shalev-Shwartz and Y. Singer. A primal-dual perspective of online learning algorithms. Machine Learning Journal, 2007.
[18] L. Xiao. Dual averaging method for regularized stochastic learning and online optimization. In Advances in Neural Information Processing Systems 22, pages 2116-2124, 2009.
3,333 | 4,018 | An Approximate Inference Approach to Temporal
Optimization in Optimal Control
Konrad C. Rawlik
School of Informatics
University of Edinburgh
Edinburgh, UK
Marc Toussaint
TU Berlin
Berlin, Germany
Sethu Vijayakumar
School of Informatics
University of Edinburgh
Edinburgh, UK
Abstract
Algorithms based on iterative local approximations present a practical approach
to optimal control in robotic systems. However, they generally require the temporal parameters (e.g. the movement duration or the time point of reaching
an intermediate goal) to be specified a priori. Here, we present a methodology
that is capable of jointly optimizing the temporal parameters in addition to the
control command profiles. The presented approach is based on a Bayesian canonical time formulation of the optimal control problem, with the temporal mapping
from canonical to real time parametrised by an additional control variable. An approximate EM algorithm is derived that efficiently optimizes both the movement
duration and control commands, offering, for the first time, a practical approach to
tackling generic via-point problems in a systematic way under the optimal control
framework. The proposed approach, which is applicable to plants with non-linear
dynamics as well as arbitrary state dependent and quadratic control costs, is evaluated on realistic simulations of a redundant robotic plant.
1 Introduction
Control of sensorimotor systems, artificial or biological, is inherently both a spatial and temporal
process. Not only do we have to specify where the plant has to move to but also when it reaches
that position. In some control schemes, the temporal component is implicit; for example, with a
PID controller, movement duration results from the application of the feedback loop, while in other
cases it is explicit, like for example in finite or receding horizon optimal control approaches where
the time horizon is set explicitly as a parameter of the problem [8, 13].
Although control based on an optimality criterion is certainly attractive, practical approaches for
stochastic systems are currently limited to the finite horizon [9, 16] or first exit time formulation [14,
17]. The former does not optimize temporal aspects of the movement, i.e., duration or the time when
costs for specific sub goals of the problem are incurred, assuming them as given a priori. However,
how should one choose these temporal parameters? This question is non-trivial and important even
when considering a simple reaching problem. The solution generally employed in practice is to use
an a priori fixed duration, chosen experimentally. This can result in not reaching the goal, having to
use an unrealistic range of control commands, or excessive (wasteful) durations for short-distance tasks.
The alternative first exit time formulation, on the other hand, either assumes specific exit states in the
cost function and computes the shortest duration trajectory which fulfils the task or assumes a time
stationary task cost function and computes the control which minimizes the joint cost of movement
duration and task cost [17, 1, 14]. This formalism is thus only directly applicable to tasks which do
not require sequential achievement of multiple goals. Although this limitation could be overcome
by chaining together individual time optimal single goal controllers, such a sequential approach has
several drawbacks. First, if we are interested in placing a cost on overall movement duration, we are
restricted to linear costs if we wish to remain time optimal. A second more important flaw is that
future goals should influence our control even before we have achieved the previous goal, a problem
which we highlight during our comparative simulation studies.
A wide variety of successful approaches to address stochastic optimal control problems have been
described in the literature [6, 2, 7]. The approach we present here builds on a class of approximate
stochastic optimal control methods which have been successfully used in the domain of robotic manipulators and in particular, the iLQG [9] algorithm used by [10], and the Approximate Inference
Control (AICO) algorithm [16]. These approaches, as alluded to earlier, are finite horizon formulations and consequently require the temporal structure of the problem to be fixed a priori. This
requirement is a direct consequence of a fixed length discretization of the continuous problem and
the structure of the temporally non-stationary cost function used, which binds incurrence of goal
related costs to specific (discretised) time points. The fundamental idea proposed here is to reformulate the problem in canonical time and alternately optimize the temporal and spatial trajectories.
We implement this general approach in the context of the approximate inference formulation of
AICO, leading to an Expectation Maximisation (EM) algorithm where the E-Step reduces to the
standard inference control problem. It is worth noting that due to the similarities between AICO,
iLQG and other algorithms, e.g., DDP [6], the same principle and approach should be applicable
more generally. The proposed approach provides an extension of the time scaling approach [12, 3]
by considering the scaling for a complete controlled system, rather than a single trajectory. It also extends previous applications of Expectation Maximisation algorithms to system
identification of dynamical systems, e.g. [4, 5], which did not consider the temporal aspects.
2 Preliminaries
Let us consider a process with state $x \in \mathbb{R}^{D_x}$ and controls $u \in \mathbb{R}^{D_u}$ which is of the form
$$dx = \big(F(x) + Bu\big)\,dt + d\xi, \qquad \langle d\xi\, d\xi^\top \rangle = Q, \qquad (1)$$
with non-linear state dependent dynamics $F$, control matrix $B$ and Brownian motion $\xi$, and define
a cost of the form
$$L\big(x(\cdot), u(\cdot)\big) = \int_0^{T} \Big( C(x(t), t) + u(t)^\top H\, u(t) \Big)\, dt, \qquad (2)$$
with arbitrary state dependent cost $C$ and quadratic control cost. Note in particular that $T$, the
trajectory length, is assumed to be known. The closed loop stochastic optimal control problem is to
find the policy $\pi : x(t) \mapsto u(t)$ given by
$$\pi^* = \operatorname*{argmin}_{\pi}\ \mathbb{E}_{x,u|\pi,x(0)} \big\{ L(x(\cdot), u(\cdot)) \big\}. \qquad (3)$$
In practice, the continuous time problem is discretized into a fixed number of $K$ steps of length $\Delta_t$,
leading to the discrete problem with dynamics
$$P(x_{k+1} \,|\, x_k, u_k) = \mathcal{N}\big( x_{k+1} \,|\, x_k + (F(x_k) + B u_k)\Delta_t,\ Q\Delta_t \big), \qquad (4)$$
where we use $\mathcal{N}(\cdot\,|\,a, A)$ to denote a Gaussian distribution with mean $a$ and covariance $A$, and cost
$$L(x_{1:K}, u_{1:K}) = C_K(x_K) + \sum_{k=0}^{K-1} \Big( \Delta_t\, C_k(x_k) + u_k^\top (H\Delta_t)\, u_k \Big). \qquad (5)$$
Note that here we used the Euler Forward Method as the discretization scheme, which will prove
advantageous if a linear cost on the movement duration is chosen, leading to closed form solutions
for certain optimization problems. However, in other cases, alternative discretisation methods could
be used and may indeed be preferable.
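As an illustration of the discretisation in (4), one can forward-simulate the process as follows (our sketch; the function names are assumptions, not from the paper):
```python
import numpy as np

def simulate_discrete_dynamics(x0, controls, F, B, Q, dt, rng):
    # Euler-discretised dynamics of Eq. (4): each step draws
    # x_{k+1} ~ N(x_k + (F(x_k) + B u_k) dt, Q dt).
    x, traj = np.asarray(x0, dtype=float), []
    for u in controls:
        mean = x + (F(x) + B @ u) * dt
        x = rng.multivariate_normal(mean, Q * dt)
        traj.append(x)
    return np.array(traj)

# e.g. traj = simulate_discrete_dynamics(x0, us, F, B, Q, 0.01,
#                                        np.random.default_rng(0))
```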
2.1 Approximate Inference Control
control problems formalised in Section 2. With the probabilistic trajectory model in (4) as a prior,
an auxiliary (binary) dynamic random task variable rk , with the associated likelihood
P(rk = 1|xk , uk ) = exp ?(?t Ck (xk ) + u?k (H?t )uk ) ,
(6)
2
u0
u1
u2
x0
x1
x2
r0
r1
r2
...
?0
?1
?2
u0
u1
u2
xK
x0
x1
x2
rK
r0
r1
r2
(a)
...
xK
rK
(b)
Figure 1: The graphical models for (a) standard inference control and (b) the AICO-T model
with canonical time. Circle and square nodes indicate continous and discreet variables respectively.
Shaded nodes are observed.
is introduced, i.e., we interpret the cost as a negative log likelihood of task fulfilment. Inference
control consists of computing the posterior conditioned on the observation $r_{0:K} = 1$ within the
resulting model (illustrated as a graphical model in Fig. 1(a)), and from it obtaining the maximum
a posteriori (MAP) controls. For cases where the process and cost are linear and quadratic in $u$
respectively, the controls can be marginalised in closed form and one is left with the problem of
computing the posterior
$$P(x_{0:K} \,|\, r_{0:K} = 1) = \prod_k \mathcal{N}\big( x_{k+1} \,|\, x_k + F(x_k)\Delta_t,\ W\Delta_t \big)\, \exp\big( -\Delta_t\, C_k(x_k) \big), \qquad (7)$$
with $W := Q + B H^{-1} B^\top$.
As this posterior is in general not tractable, the AICO [16] algorithm computes a Gaussian approximation to the true posterior using an approximate message passing approach similar in nature to EP
(details are given in supplementary material). The algorithm has been shown to have competitive
performance when compared to iLQG [16].
3 Temporal Optimization for Optimal Control
Often the state dependent cost term $C(x, t)$ in (2) can be split into a set of costs which are incurred
only at specific times, also referred to as goals, and others which are independent of time, that is
$$C(x, t) = J(x) + \sum_{n=1}^{N} \delta_{t = \hat{t}_n}\, V_n(x). \qquad (8)$$
Classically, the $\hat{t}_n$ refer to real time and are fixed. For instance, in a reaching movement, a cost
that is a function of the distance to the target is generally incurred only at the final time $T$, while collision costs
are independent of time and incurred throughout the movement. In order to allow the time point at
which the goals are achieved to be influenced by the optimization, we will re-frame the goal driven
part of the problem in a canonical time and, in addition to optimizing the controls, also optimize the
mapping from canonical to real time.
Specifically, we introduce into the problem defined by (1) & (2) the canonical time variable $\tau$ with
the associated mapping
$$\tau = \beta(t) = \int_0^{t} \frac{1}{\alpha(s)}\, ds, \qquad \alpha(\cdot) > 0, \qquad (9)$$
with $\alpha$ as an additional control. We also reformulate the cost in terms of the time $\tau$ as¹
$$L\big(x(\cdot), u(\cdot), \alpha(\cdot)\big) = \sum_{n=1}^{N} V_n\big( x(\beta^{-1}(\hat{\tau}_n)) \big) + \int_0^{\hat{\tau}_N} \mathcal{T}\big(\alpha(s)\big)\, ds + \int_0^{\beta^{-1}(\hat{\tau}_N)} \Big( J(x(t)) + u(t)^\top H\, u(t) \Big)\, dt, \qquad (10)$$
¹Note that as $\beta$ is strictly monotonic and increasing, the inverse function $\beta^{-1}$ exists.
with $\mathcal{T}$ an additional cost term over the controls $\alpha$, and the $\hat{\tau}_{1:N} \in \mathbb{R}$ assumed as given. Based on the
last assumption, we are still required to choose the time points at which individual goals are achieved
and how long the movement lasts; however, this is now done in terms of the canonical time, and since,
by controlling $\alpha$, we can change the real time point at which the cost is incurred, the exact choices
for $\hat{\tau}_{1:N}$ are relatively unimportant. The real time behaviour is mainly specified by the additional
cost term $\mathcal{T}$ over the new controls $\alpha$ which we have introduced. Note that in the special case where
$\mathcal{T}$ is linear, we have $\int_0^{\hat{\tau}_N} \mathcal{T}(\alpha(s))\, ds = \mathcal{T}(T)$, i.e., $\mathcal{T}$ is equivalent to a cost on the total movement
duration. Although here we will stick to the linear case, the proposed approach is also applicable
to non-linear duration costs. We briefly note the similarity of the formulation to the canonical time
formulation of [11] used in an imitation learning setting.

We now discretize the augmented system in canonical time with a fixed number of steps $K$. Making
the arbitrary choice of a step length of 1 in $\tau$ induces, by (9), a sequence of steps in $t$ with length²
$\delta_k = \alpha_k$. Using this time step sequence and (4) we can now obtain a discrete process in terms of
the canonical time with an explicit dependence on $\alpha_{0:K-1}$. Discretization of the cost in (10) gives
$$L(x_{1:K}, u_{1:K}, \alpha_{0:K-1}) = \sum_{n=1}^{N} V_n(x_{\hat{k}_n}) + \sum_{k=0}^{K-1} \Big( \mathcal{T}(\alpha_k) + J(x_k)\,\alpha_k + u_k^\top H \alpha_k\, u_k \Big), \qquad (11)$$
for some given $\hat{k}_{1:N}$. We now have a new formulation of the optimal control problem that is no longer
of the form of equations (4) & (5); e.g., (11) is no longer quadratic in the controls, as $\alpha$ is a control.
Proceeding as for standard inference control and treating the cost (11) as a negative log likelihood of
an auxiliary binary dynamic random variable, we obtain the inference problem illustrated by the
Bayesian network in Figure 1(b). With controls $u$ marginalised, our aim is now to find the posterior
$P(x_{0:K}, \alpha_{0:K-1} \,|\, r_{0:K} = 1)$. Unfortunately, this problem is intractable even for the simplest case, e.g.
LQG with a linear duration cost. However, observing that for given $\alpha_k$'s the problem reduces to the
standard case of Section 2.1 suggests restricting ourselves to finding the MAP estimate for $\alpha_{0:K-1}$ and
the associated posterior $P(x_{0:K} \,|\, \alpha_{0:K-1}^{\mathrm{MAP}}, r_{0:K} = 1)$ using an EM algorithm. The solution is obtained
by iterating the E- & M-Steps (see below) until the $\alpha$'s have converged; we call this algorithm AICO-T to reflect the temporal aspect of the optimization.
²Under the assumption of constant $\alpha(\cdot)$ during each step.
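A small numerical illustration of the canonical-to-real time mapping (our sketch, not from the paper):
```python
import numpy as np

def real_time_grid(alphas):
    # Map K unit steps in canonical time tau to real time. With alpha(.)
    # piecewise constant, a step of length 1 in tau corresponds to a
    # real-time step delta_k = alpha_k, so the total movement duration is
    # T = sum_k alpha_k (cf. the linear duration cost above).
    return np.concatenate([[0.0], np.cumsum(alphas)])  # t_0, t_1, ..., t_K

# e.g. real_time_grid([0.02, 0.02, 0.05]) -> [0., 0.02, 0.04, 0.09]
```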
3.1 E-Step
trajectories, given the current parameter values, i.e. the ?i ?s.
i
q i (x0:K ) = P(x0:K |r0:K = 1, ?0:K?1
).
(12)
However, as will be shown below, we actually only require the expectations $\langle x_k x_k^\top \rangle$ and $\langle x_k x_{k+1}^\top \rangle$
during the M-Step. As these are in general not tractable, we compute a Gaussian approximation to
the posterior, following an approximate message passing approach with linear and quadratic approximations to the dynamics and cost respectively [16] (for details, refer to the supplementary material).
3.2 M-Step
In the M-Step, we solve
$$\alpha_{0:K-1}^{i+1} = \operatorname*{argmax}_{\alpha_{0:K-1}}\ Q(\alpha_{0:K-1} \,|\, \alpha_{0:K-1}^{i}), \qquad (13)$$
with
$$Q(\alpha_{0:K-1} \,|\, \alpha_{0:K-1}^{i}) = \big\langle \log P(x_{0:K}, r_{0:K} = 1 \,|\, \alpha_{0:K-1}) \big\rangle = \sum_{k=0}^{K-1} \big\langle \log P(x_{k+1} \,|\, x_k, \alpha_k) \big\rangle - \sum_{k=1}^{K-1} \Big( \mathcal{T}(\alpha_k) + \alpha_k \langle J(x_k) \rangle \Big) + \text{constant}, \qquad (14)$$
where $\langle\cdot\rangle$ denotes the expectation with respect to the distribution calculated in the E-Step, i.e., the
posterior $q^i(x_{0:K})$ over trajectories given the previous parameter values. The required expectations,
$\langle J(x_k) \rangle$ and
$$\big\langle \log P(x_{k+1} \,|\, x_k, \alpha_k) \big\rangle = -\frac{D_x}{2} \log |\tilde{W}_k| - \frac{1}{2} \Big\langle \big( x_{k+1} - \tilde{F}(x_k) \big)^\top \tilde{W}_k^{-1} \big( x_{k+1} - \tilde{F}(x_k) \big) \Big\rangle, \qquad (15)$$
with $\tilde{F}(x_k) = x_k + F(x_k)\,\alpha_k$ and $\tilde{W}_k = \alpha_k W$, are in general not tractable. Therefore, we take
approximations
$$F(x_k) \approx a_k + A_k x_k \qquad \text{and} \qquad J(x_k) \approx \frac{1}{2} x_k^\top J_k x_k - j_k^\top x_k, \qquad (16)$$
choosing the mean of $q^i(x_k)$ as the point of approximation, consistent with the equivalent approximations made in the E-Step. Under these approximations, it can be shown that, up to additive terms
independent of $\alpha$,
$$Q(\alpha_{0:K-1} \,|\, \alpha_{0:K-1}^{i}) = -\sum_{k=0}^{K-1} \Big[ \frac{D_x}{2}\log|\tilde{W}_k| + \mathcal{T}(\alpha_k) + \frac{1}{2}\operatorname{Tr}\big( \tilde{W}_k^{-1} \langle x_{k+1} x_{k+1}^\top \rangle \big) - \operatorname{Tr}\big( \tilde{A}_k^\top \tilde{W}_k^{-1} \langle x_{k+1} x_k^\top \rangle \big) + \frac{1}{2}\operatorname{Tr}\big( \tilde{A}_k^\top \tilde{W}_k^{-1} \tilde{A}_k \langle x_k x_k^\top \rangle \big) - \tilde{a}_k^\top \tilde{W}_k^{-1} \langle x_{k+1} \rangle + \tilde{a}_k^\top \tilde{W}_k^{-1} \tilde{A}_k \langle x_k \rangle + \frac{1}{2} \tilde{a}_k^\top \tilde{W}_k^{-1} \tilde{a}_k + \frac{\alpha_k}{2} \Big( \operatorname{Tr}\big( J_k \langle x_k x_k^\top \rangle \big) - 2 j_k^\top \langle x_k \rangle \Big) \Big],$$
with $\tilde{a}_k = \alpha_k a_k$ and $\tilde{A}_k = I + \alpha_k A_k$, and taking partial derivatives leads to
$$\frac{\partial Q}{\partial \alpha_k} = \frac{\alpha_k^{-2}}{2} \operatorname{Tr}\Big( W^{-1} \big( \langle x_{k+1} x_{k+1}^\top \rangle - 2 \langle x_{k+1} x_k^\top \rangle + \langle x_k x_k^\top \rangle \big) \Big) - \frac{D_x^2}{2}\,\alpha_k^{-1} - \frac{1}{2} \Big( \operatorname{Tr}\big( A_k W^{-1} A_k^\top \langle x_k x_k^\top \rangle \big) + 2 \frac{d\mathcal{T}}{d\alpha}\Big|_{\alpha_k} + a_k^\top W^{-1} a_k + 2 a_k^\top W^{-1} A_k \langle x_k \rangle + \operatorname{Tr}\big( J_k \langle x_k x_k^\top \rangle \big) - 2 j_k^\top \langle x_k \rangle \Big). \qquad (17)$$
In the general case, we can now use gradient ascent to improve the $\alpha$'s. However, in the specific
case where $\mathcal{T}$ is a linear function of $\alpha$, we note that $0 = \frac{\partial Q}{\partial \alpha_k}$ is a quadratic in $\alpha_k^{-1}$ and the unique
extremum under the constraint $\alpha_k > 0$ can be found analytically.
3.3 Practical Remarks
The performance of the algorithm can be greatly enhanced by using the result of the previous E-Step as the initialisation for the next one. As this is likely to be near the optimum with the new temporal trajectory, AICO converges within only a few iterations. Additionally, in practice it is often sufficient to restrict the δ_k's between goals to be constant, which is easily achieved as Q is a sum over the δ's.
The proposed algorithm leads to a variation of the discretization step length, which can be a problem. For one, the approximation error increases with the step length, which may lead to wrong results. On the other hand, the algorithm may lead to control frequencies which are not achievable in practice. In general, a fixed control signal frequency may be prescribed by the hardware system. In practice, the δ's can be kept in a prescribed range by adjusting the number of discretization steps K after an M-Step, as sketched below.
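A simple way to implement this adjustment follows; the rule is a heuristic of ours (keep the total duration fixed and re-initialise the schedule uniformly), not a procedure specified in the paper.

```python
import numpy as np

def rediscretize(deltas, delta_min, delta_max):
    """Adjust the number of steps K so durations stay in a valid range.

    Keeps T = sum(deltas) fixed and picks a new K whose average step
    length lies in [delta_min, delta_max]; subsequent EM iterations
    then refine the uniform initialisation.
    """
    T = float(np.sum(deltas))
    K = len(deltas)
    if np.min(deltas) < delta_min:
        K = max(1, int(np.floor(T / delta_min)))   # fewer, longer steps
    elif np.max(deltas) > delta_max:
        K = int(np.ceil(T / delta_max))            # more, shorter steps
    return np.full(K, T / K)
```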
Finally, although we have chosen to express the time cost in terms of a function of the δ's, it may often be desirable to consider a cost directly over the total duration T. Noting that T = Σ_k δ_k, all that is required is to replace dT/dδ in (17) with ∂T(Σ_k δ_k)/∂δ_k.
4 Experiments
The proposed algorithm was evaluated in simulation. As a basic plant, we used a kinematic simulation of a 2 degrees of freedom (DOF) planar arm, consisting of two links of equal length. The state of the plant is given by x = (q, q̇), with q ∈ R² the joint angles and q̇ ∈ R² the associated angular velocities.
[Figure 2: three panels plotting movement duration (a) and reaching cost (b, c) against task space movement distance; legends AICO-T(α = α₀), AICO-T(α = 2α₀), AICO-T(α = 0.5α₀) in (a, b), and AICO (T = 0.07), AICO (T = 0.24), AICO (T = 0.41), AICO-T(α = α₀) in (c).]
Figure 2: Temporal scaling behaviour using AICO-T. (a & b) Effect of changing the time-cost weight α (effectively the ratio between reaching cost and duration cost) on (a) duration and (b) reaching cost (control + state cost). (c) Comparison of reaching costs (control + error cost) for AICO-T and a fixed duration approach, i.e., AICO.
The controls u ∈ R² are the joint space accelerations. We also added some i.i.d. noise with small diagonal covariance.
For all experiments, we used a quadratic control cost and the state dependent cost term
\[ V(x_k) = \sum_i \mathbb{1}_{[k = \hat k_i]}\, \big(\phi_i(x_k) - y_i^*\big)^\top \Lambda_i \big(\phi_i(x_k) - y_i^*\big), \tag{18} \]
for some given steps k̂_i, where 𝟙 denotes the indicator, each Λ_i is a diagonal weight matrix, and y_i^* is the desired state in task space. For point targets, the task space mapping is φ(x) = (x, y, ẋ, ẏ)^T, i.e., the map from x to the vector of end point positions and velocities in task space coordinates. The time cost was linear, that is, T(δ) = αδ.
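The following sketch evaluates this cost for the 2-DOF arm; the forward kinematics for two unit-length links are our assumption (the paper does not spell them out), and `targets` encodes the (k̂_i, Λ_i, y_i*) triples of (18).

```python
import numpy as np

def phi(x):
    """Task space map: end point position and velocity of the planar arm.

    Illustrative forward kinematics for two unit-length links (assumed,
    not given in the paper). x = (q1, q2, dq1, dq2).
    """
    q1, q2, dq1, dq2 = x
    px = np.cos(q1) + np.cos(q1 + q2)
    py = np.sin(q1) + np.sin(q1 + q2)
    # End point velocity via the Jacobian of (px, py) w.r.t. (q1, q2).
    J = np.array([[-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
                  [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)]])
    vx, vy = J @ np.array([dq1, dq2])
    return np.array([px, py, vx, vy])

def state_cost(x, k, targets):
    """Evaluate (18); targets is a list of (k_hat, Lambda_diag, y_star)."""
    c = 0.0
    for k_hat, lam, y_star in targets:
        if k == k_hat:
            e = phi(x) - y_star
            c += e @ (lam * e)   # diagonal Lambda stored as a vector
    return c
```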
4.1 Variable Distance Reaching Task
In order to evaluate the behaviour of AICO-T we applied it to a reaching task with varying start-target distance. Specifically, for a fixed start point we considered a series of targets lying equally spaced along a line in task space. It should be noted that although the targets are equally spaced in task space and results are shown with respect to movement distance in task space, the distances in joint space scale nonlinearly. The state cost (18) contained a single term incurred at the final discrete step with Λ = 10⁶ · I, and the control costs were given by H = 10⁴ · I. Fig. 2(a & b) shows the movement duration (= Σ_k δ_k) and the standard reaching cost³ for different temporal-cost parameters α (we used α₀ = 2 × 10⁷), demonstrating that AICO-T successfully trades off the movement duration and the standard reaching cost for varying movement distances. In Fig. 2(c), we compare the reaching costs of AICO-T with those obtained with a fixed-duration approach, in this case AICO. Note that although with a fixed, long duration (e.g., AICO with duration T = 0.41) the control and error costs are reduced for short movements, these movements necessarily have up to 4× longer durations than those obtained with AICO-T. For example, for a movement distance of 0.2, application of AICO-T results in an optimised movement duration of 0.07 (cf. Fig. 2(a)), making the fixed-time approach impractical when temporal costs are considered. Choosing a short duration on the other hand (AICO (T = 0.07)) leads to significantly worse costs for long movements. We further emphasise that the fixed durations used in this comparison were chosen post hoc by exploiting the durations suggested by AICO-T; in the absence of this, there would have been no practical way of choosing them apart from experimentation. Furthermore, we would like to highlight that, although the results suggest a simple scaling of duration with movement distance, in cluttered environments and for plants with more complex forward kinematics, an efficient decision on the movement duration cannot be based only on task space distance.
[Footnote 3: n.b. the standard reaching cost is the sum of the control costs and the cost on the endpoint error, without taking duration into account, i.e., (11) without the T(δ) term.]
4.2 Via Point Reaching Tasks
We also evaluated the proposed algorithm in a more complex via point task. The task requires the end-effector to reach a target, having passed at some point through a given second target, the via point.
[Figure 3: panels showing (a) end point trajectories in task space, (b) joint angle trajectories over time (Angle Joint 1 and Angle Joint 2, in rad), and (c) bar plots of movement duration and reaching cost for the near and far via points.]
Figure 3: Comparison of AICO-T (solid) to the common modelling approach, using AICO (dashed), with fixed times on a via point task. (a) End point task space trajectories for two different via points (circles) obtained for a fixed start point (triangle). (b) The corresponding joint space trajectories. (c) Movement durations and reaching costs (control + error costs) from 10 random start points. The proportion of the movement duration spent before the via point is shown in light gray (mean in the AICO-T case).
This task is of interest as it can be seen as an abstraction of a diverse range of complex sequential tasks that require one to achieve a series of sub-tasks in order to reach a final goal. This task has also seen some interest in the literature on modelling of human movement using the optimal control framework, e.g., [15]. Here the common approach is to choose the time point at which one passes the via point such as to divide the movement duration in the same ratio as the distances between the start point, via point and end target. This requires, on the one hand, prior knowledge of these movement distances and, on the other, makes the implicit assumption that the two movements are in some sense independent.
In a first experiment, we demonstrate the ability of our approach to solve such sequential problems, adjusting movement durations between sub-goals in a principled manner, and show that it improves upon the standard modelling approach. Specifically, we apply AICO-T to the two via point problems illustrated in Fig. 3(a) with randomised start states⁴. For comparison, we follow the standard modelling approach and apply AICO to compute the controller. We again choose the movement duration for the standard case post hoc to coincide with the mean movement duration obtained with AICO-T for each of the individual via point tasks. Each task is expressed using a cost function consisting of two point target cost terms. Specifically, (18) takes the form
\[ V(x_k) = \mathbb{1}_{[k = K/2]}\, \big(\phi(x_k) - y_v^*\big)^\top \Lambda_v \big(\phi(x_k) - y_v^*\big) + \mathbb{1}_{[k = K]}\, \big(\phi(x_k) - y_e^*\big)^\top \Lambda_e \big(\phi(x_k) - y_e^*\big), \tag{19} \]
with K the number of discrete steps and diagonal matrices Λ_v = diag(λ_pos, λ_pos, 0, 0) and Λ_e = diag(λ_pos, λ_pos, λ_vel, λ_vel), where λ_pos = 10⁵ and λ_vel = 10⁷, and vectors y_v^* = (·, ·, 0, 0)^T and y_e^* = (·, ·, 0, 0)^T the desired states for the individual via point and target, respectively.
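A minimal construction of this cost setup, reusing the state_cost() sketch from above; the via point and target coordinates below are hypothetical placeholders, and only λ_pos, λ_vel and the structure of Λ_v, Λ_e come from the text.

```python
import numpy as np

lam_pos, lam_vel = 1e5, 1e7
Lambda_v = np.array([lam_pos, lam_pos, 0.0, 0.0])         # via point: position only
Lambda_e = np.array([lam_pos, lam_pos, lam_vel, lam_vel]) # target: stop there
vx, vy, tx, ty = 0.0, 0.5, -0.4, 0.3  # hypothetical task-space coordinates

K = 100  # number of discrete steps
targets = [(K // 2, Lambda_v, np.array([vx, vy, 0.0, 0.0])),
           (K,      Lambda_e, np.array([tx, ty, 0.0, 0.0]))]
# `targets` plugs directly into state_cost() from the earlier sketch.
```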
Note that the cost function does not penalise velocity at the via point but encourages stopping at the target. While admittedly the choice of incurring the via point cost at the middle of the movement (K/2) is likely to be a sub-optimal choice for the standard approach, one has to consider that in more complex task spaces the relative ratio of movement distances may not be easily accessible, and one may have to resort to the most intuitive choice for the uninformed case, as we have done here. Note that although for AICO-T this cost is incurred at the same discrete step, we allow δ before and after the via point to differ, but constrain them to be constant throughout each part of the movement, hence allowing the cost to be incurred at an arbitrary point in real time. We sampled the initial position of each joint independently from a Gaussian distribution with a variance of 3°. In Fig. 3(a&b), we show maximum a posteriori (MAP) trajectories in task space and joint space for controllers computed for the mean initial state. Interestingly, although the end point trajectory for the near via point produced by AICO-T may look less optimal than that produced by the standard AICO algorithm, closer examination of the joint space trajectories reveals that our approach results in more efficient actuation trajectories. In Fig. 3(c), we illustrate the resulting average movement durations and costs of the mean trajectories. As can be seen, AICO-T results in the expected passing times for the two via points, i.e., early vs. late in the movement for the near and far via point, respectively. This directly leads to a lower incurred cost compared to un-optimised movement durations.
[Footnote 4: For the sake of clarity, Fig. 3(a&b) show MAP trajectories of controllers computed for the mean start state.]
[Figure 4: panels showing (a) task space trajectories, (b) joint angle trajectories over time (Angle Joint 1 and Angle Joint 2, in rad), and (c) bar plots of movement duration and reaching cost for joint vs. sequential optimisation.]
Figure 4: Joint (solid) vs. sequential (dashed) optimisation using AICO-T for a sequential (via point) task. (a) Task space trajectories for a fixed start point (triangle). Via point and target are indicated by the circle and square, respectively. (b) The corresponding joint space trajectories. (c) The movement durations and reaching costs (control + error cost) for 10 random start points. The mean proportion of the movement duration spent before the via point is shown in light gray.
In order to highlight the shortcomings of sequential time optimal control, we next compare planning a complete movement over sequential goals to planning a sequence of individual movements. Specifically, using AICO-T, we compare planning the whole via point movement (joint planner) to planning a movement from the start to the via point, followed by a second movement from the end point of the first movement (n.b. not from the via point) to the end target (sequential planner). The joint planner used the same cost function as in the previous experiment. For the sequential planner, each of the two sub-trajectories had half the number of discrete time steps of the joint planner, and the cost functions were given by appropriately splitting (19), i.e.,
\[ V^1(x_k) = \mathbb{1}_{[k = K/2]}\, \big(\phi(x_k) - y_v^*\big)^\top \Lambda_v \big(\phi(x_k) - y_v^*\big) \qquad\text{and}\qquad V^2(x_k) = \mathbb{1}_{[k = K/2]}\, \big(\phi(x_k) - y_e^*\big)^\top \Lambda_e \big(\phi(x_k) - y_e^*\big), \]
with Λ_v, Λ_e, y_v^*, y_e^* as for (19). The start states were sampled according to the distribution used in the last experiment, and in Fig. 4(a&b) we plot the MAP trajectories for the mean start state, in task as well as joint space. The results illustrate that sequential planning leads to sub-optimal results, as it does not take future goals into consideration. This leads directly to a higher cost (c.f. Fig. 4(c)), calculated from trials with randomised start states. One should however note that this effect would be less pronounced if the cost required stopping at the via point, as it is the velocity away from the end target which is the main problem for the sequential planner.
5 Conclusion
The contribution of this paper is a novel method for jointly optimizing a movement trajectory and its time evolution (temporal scale and duration) in the stochastic optimal control framework. As a special case, this solves the problem of an unknown goal horizon and the problem of trajectory optimization through via points when the timing of intermediate constraints is unknown and subject to optimization. Both cases are of high relevance in practical robotic applications, where pre-specifying a goal horizon by hand is common practice but typically lacks justification.
The method was derived in the form of an Expectation-Maximization algorithm, where the E-Step addresses the stochastic optimal control problem reformulated as an inference problem and the M-Step re-adapts the time evolution of the trajectory. In principle, the proposed framework can be applied to extend any algorithm that, directly or indirectly, provides us with an approximate trajectory posterior in each iteration. AICO [16] does so directly in terms of a Gaussian approximation; similarly, the local LQG solution implicit in iLQG [9] can, with little extra computational cost, be used to compute a Gaussian posterior over trajectories. For algorithms like DDP [6], which do not lead to an LQG approximation, we can employ the Laplace method to obtain Gaussian posteriors or adjust the M-Step for the non-Gaussian posterior. We demonstrated the algorithm on a standard reaching task with and without via points. In particular, in the via point case, it becomes obvious that fixed horizon methods and sequenced first exit time methods cannot find equally efficient motions as the proposed method.
References
[1] David Barber and Tom Furmston. Solving deterministic policy (PO)MDPs using expectation-maximisation and antifreeze. In European Conference on Machine Learning (LEMIR workshop), 2009.
[2] Marc Peter Deisenroth, Carl Edward Rasmussen, and Jan Peters. Gaussian process dynamic programming. Neurocomputing, 72(7-9):1508-1524, 2009.
[3] Yu-Yi Fu, Chia-Ju Wu, Kuo-Lan Su, and Chia-Nan Ko. A time-scaling method for near-time-optimal control of an omni-directional robot along specified paths. Artificial Life and Robotics, 13(1):350-354, 2008.
[4] Z. Ghahramani and G. Hinton. Parameter estimation for linear dynamical systems. Technical Report CRG-TR-96-2, University of Toronto, 1996.
[5] Z. Ghahramani and S. Roweis. Learning nonlinear dynamical systems using an EM algorithm. In Advances in Neural Information Processing Systems, volume 11, 1999.
[6] D. Jacobson and D. Mayne. Differential Dynamic Programming. Elsevier, 1970.
[7] Hilbert J. Kappen. A linear theory for control of non-linear stochastic systems. Physical Review Letters, 95(20):200201, 2005.
[8] Donald E. Kirk. Optimal Control Theory - An Introduction. Prentice-Hall, 1970.
[9] Weiwei Li and Emanuel Todorov. An iterative optimal control and estimation design for nonlinear stochastic systems. In Proc. of the 45th IEEE Conference on Decision and Control, 2006.
[10] Djordje Mitrovic, Sho Nagashima, Stefan Klanke, Takamitsu Matsubara, and Sethu Vijayakumar. Optimal feedback control for anthropomorphic manipulators. In Proc. IEEE International Conference on Robotics and Automation (ICRA 2010), 2010.
[11] Peter Pastor, Heiko Hoffmann, Tamim Asfour, and Stefan Schaal. Learning and generalization of motor skills by learning from demonstration. In Proc. IEEE International Conference on Robotics and Automation (ICRA 2010), 2010.
[12] Gideon Sahar and John M. Hollerbach. Planning of minimum-time trajectories for robot arms. The International Journal of Robotics Research, 5(3):90-100, 1986.
[13] Robert F. Stengel. Optimal Control and Estimation. Dover Publications, 1986.
[14] Emanuel Todorov. Compositionality of optimal control laws. In Advances in Neural Information Processing Systems, volume 22, 2009.
[15] Emanuel Todorov and Michael Jordan. Optimal feedback control as a theory of motor coordination. Nature Neuroscience, 5(11):1226-1235, 2002.
[16] Marc Toussaint. Robot trajectory optimization using approximate inference. In Proc. of the 26th International Conference on Machine Learning (ICML 2009), 2009.
[17] Marc Toussaint and Amos Storkey. Probabilistic inference for solving discrete and continuous state Markov Decision Processes. In Proc. of the 23rd Int. Conf. on Machine Learning (ICML 2006), pages 945-952, 2006.
Two-layer Generalization Analysis for Ranking Using
Rademacher Average
Wei Chen*
Chinese Academy of Sciences
[email protected]
Tie-Yan Liu
Microsoft Research Asia
[email protected]
Zhiming Ma
Chinese Academy of Sciences
[email protected]
Abstract
This paper is concerned with the generalization analysis on learning to rank for
information retrieval (IR). In IR, data are hierarchically organized, i.e., consisting
of queries and documents. Previous generalization analysis for ranking, however,
has not fully considered this structure, and cannot explain how the simultaneous
change of query number and document number in the training data will affect the
performance of the learned ranking model. In this paper, we propose performing
generalization analysis under the assumption of two-layer sampling, i.e., the i.i.d.
sampling of queries and the conditional i.i.d sampling of documents per query.
Such a sampling can better describe the generation mechanism of real data, and
the corresponding generalization analysis can better explain the real behaviors of
learning to rank algorithms. However, it is challenging to perform such analysis, because the documents associated with different queries are not identically
distributed, and the documents associated with the same query become no longer
independent after represented by features extracted from query-document matching. To tackle the challenge, we decompose the expected risk according to the two
layers, and make use of the new concept of two-layer Rademacher average. The
generalization bounds we obtained are quite intuitive and are in accordance with
previous empirical studies on the performances of ranking algorithms.
1 Introduction
Learning to rank has recently gained much attention in machine learning, due to its wide applications in real problems such as information retrieval (IR). When applied to IR, learning to rank is a
process as follows [16]. First, a set of queries, their associated documents, and the corresponding
relevance judgments are given. Each document is represented by a set of features, measuring the
matching between document and query. Widely-used features include the frequency of query terms
in the document and the query likelihood given by the language model of the document. A ranking function, which combines the features to predict the relevance of a document to the query, is
learned by minimizing a loss function defined on the training data. Then for a new query, the ranking function is used to rank its associated documents according to their predicted relevance. Many
learning to rank algorithms have been proposed, among which the pairwise ranking algorithms such
as Ranking SVMs [12, 13], RankBoost [11], and RankNet [5] have been widely applied.
To understand existing ranking algorithms, and to guide the development of new ones, people have
studied the learning theory for ranking, in particular, the generalization ability of ranking methods.
Generalization ability is usually represented by a bound of the deviation between the expected and
empirical risks for an arbitrary ranking function in the hypothesis space. People have investigated the
generalization bounds under different assumptions. First, with the assumption that documents are
i.i.d., the generalization bounds of RankBoost [11], stable pairwise ranking algorithms like Ranking
*The work was performed when the first author was an intern at Microsoft Research Asia.
SVMs [2], and algorithms minimizing the pairwise 0-1 loss [1, 9] were studied. We call these generalization bounds "document-level generalization bounds", which converge to zero when the number of
documents in the training set approaches infinity. Second, with the assumption that queries are i.i.d.,
the generalization bounds of stable pairwise ranking algorithms like Ranking SVMs and IR-SVM
[6] and listwise algorithms were obtained in [15] and [14]. We call these generalization bounds
"query-level generalization bounds". When analyzing the query-level generalization bounds, the
documents associated with each query are usually regarded as a deterministic set [10, 14], and no
random sampling of documents is assumed. As a result, query-level generalization bounds converge
to zero only when the number of queries approaches infinity, no matter how many documents are
associated with them.
While the existing generalization bounds can explain the behaviors of some ranking algorithms, they
also have their limitations. (1) The assumption that documents are i.i.d. makes the document-level
generalization bounds not directly applicable to ranking in IR. This is because it has been widely accepted that the documents associated with different queries do not follow the same distribution [17]
and the documents for the same query are no longer independent once represented by document-query matching features. (2) It is not reasonable for query-level generalization bounds to assume
that one can obtain the document set associated with each query in a deterministic manner. Usually
there are many random factors that affect the collection of documents. For example, in the labeling
process of TREC, the ranking results submitted by all TREC participants were put together and then
a proportion of them were selected and presented to human annotators for labeling. In this process,
the number of participants, the ranking result given by each participant, the overlap between different ranking results, the labeling budget, and the selection methodology can all influence which
documents and how many documents are labeled for each query. As a result, it is more reasonable
to assume a random sampling process for the generation of labeled documents per query.
To address the limitations of previous work, we propose a novel theoretical framework for ranking,
in which a two-layer sampling strategy is assumed. In the first layer, queries are i.i.d. sampled
from the query space according to a fixed but unknown probability distribution. In the second layer,
for each query, documents are i.i.d. sampled from the document space according to a fixed but
unknown conditional probability distribution determined by the query (i.e., documents associated
with different queries do not have the identical distribution). Then, a set of features are extracted
for each document with respect to the query. Note that the feature representations of the documents
with the same query, as random variables, are not independent any longer. But they are conditionally
independent if the query is given. As can be seen, this new sampling strategy removes improper
assumptions in previous work, and can more accurately describe the data generation process in IR.
Based on the framework, we have performed two-layer generalization analysis for pairwise ranking algorithms. However, the task is non-trivial mainly because the two-layer sampling does not
correspond to a typical empirical process: the documents for different queries are not identically
distributed while the documents for the same query are not independent. Thus, the empirical process techniques, widely used in previous work on generalization analysis, are not sufficient. To
tackle the challenge, we carefully decompose the expected risk according to the query and document layers, and employ a new concept called two-layer Rademacher average. The new concept
accurately describes the complexity in the two-layer setting, and its reduced versions can be used to
derive meaningful bounds for query layer error and document layer error respectively.
According to the generalization bounds we obtained, we have the following findings: (i) Both more
queries and more documents per query can enhance the generalization ability of ranking methods;
(ii) Only if both the number of training queries and that of documents per query simultaneously
approach infinity, can the generalization bound converge to zero; (iii) Given a fixed size of training
data, there exists an optimal tradeoff between the number of queries and the number of documents
per query. These findings are quite intuitive and can well explain empirical observations [19].
2 Related Work
2.1 Pairwise Learning to Rank
Pairwise ranking is one of the major approaches to learning to rank, and has been widely adopted in
real applications [5, 11, 12, 13]. The process of pairwise ranking can be described as follows.
Assume there are n queries {q₁, q₂, ..., q_n} in the training data. Each query q_i is associated with m_i documents {d₁^i, ..., d_{m_i}^i} and their judgments {y₁^i, ..., y_{m_i}^i}, where y_j^i ∈ Y. Each document d_j^i is represented by a set of features x_j^i = ψ(d_j^i, q_i) ∈ X, measuring the matching between document d_j^i and query q_i. Widely-used features include the frequency of query terms in the document and the query likelihood given by the language model of the document. For ease of reference, we use z = (x, y) ∈ X × Y = Z to denote document d, since it encodes all the information of d in the learning process. Then the training set can be denoted as S = {S₁, ..., S_n}, where S_i := {z_j^i ∈ Z}_{j=1,...,m_i} is the document sample for query q_i. For a ranking function f : X → R, the pairwise 0-1 loss l_{0-1} and the pairwise surrogate loss l_φ are defined as below:
\[ l_{0\text{-}1}(z, z'; f) = I_{\{(y - y')(f(x) - f(x')) < 0\}}, \qquad l_\phi(z, z'; f) = \phi\big( \mathrm{sgn}(y - y') \cdot (f(x) - f(x')) \big), \tag{1} \]
where I_{·} is the indicator function and z, z' are two documents associated with the same query. When the function φ takes different forms, we get the surrogate loss functions of different algorithms. For example, for Ranking SVMs, RankBoost, and RankNet, φ is the hinge, exponential, and logistic function respectively.
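For concreteness, a small Python sketch of these losses; the argument convention for φ (applied to sgn(y − y')·(f(x) − f(x'))) follows our reading of the garbled equation (1), with hinge, exponential, and logistic written in their standard margin-loss forms.

```python
import numpy as np

def pairwise_01_loss(y, y2, s, s2):
    """Pairwise 0-1 loss l_{0-1}(z, z'; f) with s = f(x), s2 = f(x')."""
    return float((y - y2) * (s - s2) < 0)

# Surrogate losses phi applied to t = sgn(y - y') * (f(x) - f(x')).
surrogates = {
    "hinge":    lambda t: max(0.0, 1.0 - t),      # Ranking SVM
    "exp":      lambda t: np.exp(-t),             # RankBoost
    "logistic": lambda t: np.log1p(np.exp(-t)),   # RankNet
}

def pairwise_surrogate_loss(y, y2, s, s2, phi="hinge"):
    t = np.sign(y - y2) * (s - s2)
    return surrogates[phi](t)
```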
2.2 Document-level Generalization Analysis
In document-level generalization analysis, it is assumed that the documents are i.i.d. sampled from the document space Z according to P(z). Then the expected risk of pairwise ranking algorithms can be defined as below:
\[ R_D^l(f) = \int_{Z^2} l(z, z'; f)\, dP^2(z, z'), \]
where P²(z, z') is the product probability of P(z) on the product space Z².
The document-level generalization bound usually takes the following form: with probability at least 1 − δ,
\[ R_D^l(f) \le \frac{1}{m(m-1)} \sum_{j \ne k} l(z_j, z_k; f) + \epsilon(\delta, \mathcal{F}, m), \quad \forall f \in \mathcal{F}, \]
where ε(δ, F, m) → 0 as the document number m → ∞.
As representative work, the generalization bounds for the pairwise 0-1 loss were derived in [1, 9], and the generalization bounds for RankBoost and Ranking SVMs were obtained in [11] and [2], respectively.
As aforementioned, the assumption that documents are i.i.d. makes the document-level generalization bounds not directly applicable to ranking in IR. Even if the assumption held, the document-level generalization bounds still could not be used to explain existing pairwise ranking algorithms. Actually, according to the document-level generalization bound, what we can obtain is: with probability at least 1 − δ,
\[ R_D^l(f) \le \frac{\sum_{(i,j) \ne (i',k)} l(z_j^i, z_k^{i'}; f)}{\big(\sum_i m_i\big)\big(\sum_i m_i - 1\big)} + \epsilon\Big(\delta, \mathcal{F}, \sum_i m_i\Big), \quad \forall f \in \mathcal{F}. \]
The empirical risk here is the average of the pairwise losses over all document pairs. This is clearly not the real empirical risk of ranking in IR, where documents associated with different queries cannot be compared with each other, and pairs are constructed only from documents associated with the same query.
2.3 Query-level Generalization Analysis
In existing query-level generalization analysis [14], it is assumed that each query q_i, represented by a deterministic document set S_i with the same number of documents (i.e., m_i ≡ m), is i.i.d. sampled from the space Z^m. Then the expected risk can be defined as follows:
\[ R_Q^l(f) = \int_{Z^m} \frac{1}{m(m-1)} \sum_{j \ne k} l(z_j, z_k; f)\, dP(z_1, \dots, z_m). \]
The query-level generalization bound usually takes the following form: with probability at least 1 − δ,
\[ R_Q^l(f) \le \frac{1}{n} \sum_{i=1}^n \frac{1}{m(m-1)} \sum_{j \ne k} l(z_j^i, z_k^i; f) + \epsilon(\delta, \mathcal{F}, n), \quad \forall f \in \mathcal{F}, \]
where ε(δ, F, n) → 0 as the query number n → ∞.
As representative work, the query-level generalization bounds for stable pairwise ranking algorithms such as Ranking SVMs and IR-SVM, and for listwise ranking algorithms, were derived in [15]¹ and [14]. As mentioned in the introduction, the assumption that each query is associated with a deterministic set of documents is not reasonable. In fact, many random factors can influence what kinds of documents, and how many documents, are labeled for each query. Due to this inappropriate assumption, the query-level generalization bounds are sometimes not intuitive. For example, when more labeled documents are added to the training set, the generalization bounds of stable pairwise ranking algorithms derived in [15] do not change, and the generalization bounds of some of the listwise ranking algorithms derived in [14] get even looser.
3 Two-Layer Generalization Analysis
In this section, we introduce the concepts of two-layer data sampling and two-layer generalization
ability for ranking. These concepts can help describe the data generation process and explain the
behaviors of learning to rank algorithms more accurately than previous work.
3.1 Two-Layer Sampling in IR
When applying learning to rank techniques to IR, a training set is needed. The creation of such a training set is usually as follows. First, queries are randomly sampled from the query logs of search engines. Then for each query, documents that are potentially relevant to the query are sampled (e.g., using the strategy in TREC [8]) from the entire document repository and presented to human annotators. Human annotators make relevance judgments on these documents, according to the matching between them and the query. Mathematically, we can represent the above process in the following manner. First, queries Q = {q₁, ..., q_n} are i.i.d. sampled from the query space Q according to a distribution P(q). Second, for each query q_i, its associated documents and their relevance judgments {(d₁^i, y₁^i), ..., (d_{m_i}^i, y_{m_i}^i)} are i.i.d. sampled from the document space D according to a conditional distribution P(d | q_i), where m_i is the number of sampled documents. Each document d_j^i is then represented by a set of matching features, i.e., x_j^i = ψ(d_j^i, q_i), where ψ is a feature extractor. Following the notation in Section 2.1, we use z_j^i = (x_j^i, y_j^i) to represent document d_j^i and its label, and denote the training data for query q_i as S_i = {z_j^i}_{j=1,...,m_i}. Note that although {d_j^i}_{j=1,...,m_i} are i.i.d. samples, the random variables {z_j^i}_{j=1,...,m_i} are no longer independent, because they share the same query q_i. Only if q_i is given can we regard them as independent of each other.
We call the above data generation process two-layer sampling, and denote the training data generated in this way as (Q, S), where Q is the query sample and S = {S_i}_{i=1,...,n} is the document sample. The two-layer sampling process can be illustrated using Figure 1.
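The following toy simulation illustrates the two layers; the Gaussian query model and the labeling rule are our own illustrative assumptions, chosen only so that documents from different queries follow different distributions.

```python
import numpy as np

def sample_two_layer(n, m, d=5, seed=0):
    """Toy two-layer sampling: queries i.i.d., documents i.i.d. given query."""
    rng = np.random.default_rng(seed)
    data = []
    for _ in range(n):                       # first layer: q_i ~ P(q)
        q = rng.normal(size=d)               # latent query representation
        X = q + rng.normal(size=(m, d))      # second layer: x ~ P(x | q)
        y = (X @ q > 0).astype(int)          # judgments depend on the query
        data.append((X, y))
    return data
```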
[Figure 1: diagram of two-layer sampling: queries q₁, ..., q_n are drawn from (Q, P); for each q_i, document-judgment pairs (d_j^i, y_j^i) are drawn from P(·|q_i) and mapped to z_j^i = (ψ(q_i, d_j^i), y_j^i).]
Figure 1: Two-layer sampling.
Note that two-layer sampling differs significantly from the sampling strategies used in previous generalization analysis. (i) As compared to the sampling in document-level generalization analysis, two-layer sampling introduces the sampling of queries, and documents associated with different queries are sampled according to different conditional distributions.
[Footnote 1: In [15], although a sampling strategy similar to two-layer sampling is mentioned, the generalization analysis does not consider the independent sampling at the document layer. As a result, the bound obtained there is a query-level generalization bound, not a two-layer generalization bound.]
[Figure 2: diagram of (n, m)-sampling: distributions P₁, ..., P_n are drawn from (P, P); for each P_i, elements z₁^i, ..., z_m^i are drawn i.i.d. from P_i.]
Figure 2: (n, m)-sampling.
(ii) As compared to the sampling in query-level generalization analysis, two-layer sampling considers the sampling of documents for each query.
To some extent, the aforementioned two-layer sampling is related to directly sampling from the product space of query and document, and to the (n, m)-sampling proposed in [4]. However, as shown below, there are also significant differences. Firstly, it is clear that directly sampling from the product space of query and document does not describe the real data generation process. Furthermore, even if we sample a large number of documents in this way, it is not guaranteed that we will have a sufficient number of documents for each single query. Secondly, comparing Figure 1 with Figure 2 (which illustrates (n, m)-sampling), we can easily find: (i) in (n, m)-sampling, tasks (corresponding to queries) have the same number (i.e., m) of elements (corresponding to documents), whereas in two-layer sampling, queries can be associated with different numbers of documents; (ii) in (n, m)-sampling, all the elements are i.i.d., whereas in two-layer sampling, documents (once represented by matching features) associated with the same query are not independent of each other.
Two-Layer Generalization Ability
With the probabilistic assumption of two-layer sampling, we define the expected risk for pairwise
ranking as follows,
Z Z
Rl (f ) =
l(z, z 0 ; f )dP (z, z 0 |q)dP (q),
Q
(2)
Z2
where P (z, z 0 |q) is the product probability of P (z|q) on the product space Z 2 .
Definition 1. We say that an ERM learning process with loss l in hypothesis space F has two-layer generalization ability if, with probability at least 1 − δ,
\[ R^l(f) \le \hat R^l_{n; m_1, \dots, m_n}(f; S) + \epsilon(\delta, \mathcal{F}, n, m_1, \dots, m_n), \quad \forall f \in \mathcal{F}, \]
where \( \hat R^l_{n; m_1,\dots,m_n}(f; S) = \frac{1}{n} \sum_{i=1}^n \frac{1}{m_i(m_i - 1)} \sum_{j \ne k} l(z_j^i, z_k^i; f) \), and ε(δ, F, n, m₁, ..., m_n) → 0 iff the query number n and the document numbers per query m_i simultaneously approach infinity.
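A direct implementation of this empirical risk, assuming the data layout produced by the sampling sketch above and a pairwise loss such as pairwise_01_loss from the Section 2.1 sketch:

```python
import numpy as np

def two_layer_empirical_risk(data, f, loss):
    """Two-layer empirical risk of Definition 1.

    data: list of (X_i, y_i) per query; f: scoring function X -> scores;
    loss(y, y2, s, s2): a pairwise loss, e.g. pairwise_01_loss.
    """
    total = 0.0
    for X, y in data:
        s = f(X)
        m = len(y)
        pair_sum = sum(loss(y[j], y[k], s[j], s[k])
                       for j in range(m) for k in range(m) if j != k)
        total += pair_sum / (m * (m - 1))
    return total / len(data)
```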
In the next section, we will show our theoretical results on the two-layer generalization abilities of
typical pairwise ranking algorithms.
4 Main Theoretical Result
In this section, we show our results on the two-layer generalization ability of ERM learning with pairwise ranking losses (either the pairwise 0-1 loss or pairwise surrogate losses). As a prerequisite, we recall the concept of conventional Rademacher averages (RA) [3].
Definition 2. For a sample {x₁, ..., x_m}, the RA of l ∘ F is defined as
\[ R_m(l \circ \mathcal{F}) = E_\sigma \Big[ \sup_{f \in \mathcal{F}} \frac{2}{m} \sum_{j=1}^m \sigma_j\, l(x_j; f) \Big], \]
where σ₁, ..., σ_m are independent Rademacher random variables, independent of the data sample.
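For a finite pool of candidate functions standing in for F, the RA can be estimated by Monte Carlo over the Rademacher variables; this is a generic estimator, not a construction from the paper.

```python
import numpy as np

def empirical_rademacher(L, n_draws=200, seed=0):
    """Monte Carlo estimate of the RA in Definition 2.

    L: array of shape (num_functions, m) with L[f, j] = l(x_j; f) for a
    finite pool of candidate functions standing in for F.
    """
    rng = np.random.default_rng(seed)
    m = L.shape[1]
    vals = []
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=m)
        vals.append(np.max((2.0 / m) * (L @ sigma)))  # sup over the pool
    return float(np.mean(vals))
```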
With the above definitions, we have the following theorem, which describes when and how the
two-layer generalization bounds of pairwise ranking algorithms converge to zero.
Theorem 1. Suppose l is the loss function for pairwise ranking. Assume 1) l ∘ F is bounded by M, and 2) E[R_m(l ∘ F)] ≤ D(l ∘ F, m). Then with probability at least 1 − δ, for all f ∈ F,
\[ R^l(f) \le \hat R^l_{n; m_1, \dots, m_n}(f) + D(l \circ \mathcal{F}, n) + \sqrt{\frac{2M^2 \log(4/\delta)}{n}} + \frac{1}{n} \sum_{i=1}^n D\big(l \circ \mathcal{F}, \lfloor m_i/2 \rfloor\big) + \sqrt{\sum_{i=1}^n \frac{2M^2 \log(4/\delta)}{m_i n^2}}. \]
Remark: The condition on the existence of upper bounds for E[R_m(l ∘ F)] is satisfied in many situations. For example, for a ranking function class F with VC(F̃) = V, where F̃ = {f̃(x, x') = f(x) − f(x'); f ∈ F} and VC(·) denotes the VC dimension, and with |f(x)| ≤ B, it has been proved that D(l_{0-1} ∘ F, m) = c₁√(V/m) and D(l_φ ∘ F, m) = c₂ B φ'(B) √(V/m) in [3, 9], where c₁ and c₂ are both constants.
4.1 Proof of Theorem 1
Note that the proof of Theorem 1 is non-trivial, because documents generated by two-layer sampling are neither independent nor identically distributed, as aforementioned. As a result, two-layer sampling does not correspond to a typical empirical process, and the classical proving techniques of statistical learning are not sufficient for the proof. To tackle the challenge, we decompose the two-layer expected risk as follows:
\[ R^l(f) = \hat R^l_{n; m_1,\dots,m_n}(f) + \big( R^l(f) - \bar R^l_n(f) \big) + \big( \bar R^l_n(f) - \hat R^l_{n; m_1,\dots,m_n}(f) \big), \]
where \( \bar R^l_n(f) = \frac{1}{n}\sum_{i=1}^n \int_{Z^2} l(z, z'; f)\, dP(z, z' \mid q_i) \). We call \( R^l(f) - \bar R^l_n(f) \) the query-layer error and \( \bar R^l_n(f) - \hat R^l_{n; m_1,\dots,m_n}(f) \) the document-layer error. Then, inspired by conventional RA [3], we propose a concept called two-layer RA to describe the complexity of the sample (Q, S).
Definition 3. For a two-layer sample (Q, S), the two-layer RA of l ∘ F is defined as
\[ R_{n; m_1, \dots, m_n}\big(l \circ \mathcal{F}(Q, S)\big) = E_\sigma \Big[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \frac{2}{\lfloor m_i/2 \rfloor} \sum_{j=1}^{\lfloor m_i/2 \rfloor} \sigma_j^i\, l\big(z_j^i, z_{\lfloor m_i/2 \rfloor + j}^i; f\big) \Big], \]
where the σ_j^i are independent Rademacher random variables, independent of the data sample. If (Q, S) = {q_i; z_i, z_i'}_{i=1,...,n}, we call its expected two-layer RA, i.e., E_{Q,S}[R_{n;2,...,2}(l ∘ F(Q, S))], the document-layer reduced two-layer RA. If (q, S) = {q; z₁, ..., z_m}, we call its conditional expected two-layer RA, i.e., E_{S|q}[R_{1;m}(l ∘ F(q, S))], the query-layer reduced two-layer RA.
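The two-layer RA can be estimated the same way, now drawing one Rademacher vector per query and pairing z_j^i with z_{⌊m_i/2⌋+j}^i as in the definition; again a generic Monte Carlo sketch with a finite function pool standing in for F.

```python
import numpy as np

def two_layer_rademacher(per_query_L, n_draws=200, seed=0):
    """Monte Carlo estimate of the two-layer RA in Definition 3.

    per_query_L: list over queries of arrays of shape (num_functions, h_i)
    whose [f, j] entry is l(z_j^i, z_{h_i + j}^i; f) with h_i = floor(m_i/2),
    i.e. losses on the disjoint document pairs used in the definition.
    """
    rng = np.random.default_rng(seed)
    n = len(per_query_L)
    vals = []
    for _ in range(n_draws):
        acc = np.zeros(per_query_L[0].shape[0])
        for L in per_query_L:
            h = L.shape[1]
            sigma = rng.choice([-1.0, 1.0], size=h)
            acc += (2.0 / h) * (L @ sigma)
        vals.append(np.max(acc / n))   # sup over the function pool
    return float(np.mean(vals))
```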
Based on the concept of two-layer RA, we can derive meaningful bounds for the two-layer expected risk. In Section 4.1.1, we prove the query-layer error bound using the document-layer reduced two-layer RA; in Section 4.1.2, we prove the document-layer error bound using the query-layer reduced two-layer RA. Combining the two bounds, we prove Theorem 1 in Section 4.1.3.
4.1.1 Query-Layer Error Bounds
As for the query-layer error bound, we have the following theorem.
Theorem 2. Assume l ∘ F is bounded by M. Then with probability at least 1 − δ,
\[ R^l(f) - \bar R^l_n(f) \le E_{Q,S}\big[ R_{n;2,\dots,2}(l \circ \mathcal{F}(Q, S)) \big] + \sqrt{\frac{2M^2 \log(2/\delta)}{n}}, \quad \forall f \in \mathcal{F}. \]
Proof. We define a function L_f as follows: \( L_f(q) = \int_{Z^2} l(z, z'; f)\, dP^2(z \mid q) \). Since q₁, ..., q_n are i.i.d. sampled, L_f(q₁), ..., L_f(q_n) are also i.i.d. Denote \( G_1(Q) = \sup_{f \in \mathcal{F}} \big( R^l(f) - \bar R^l_n(f) \big) \). Since l ∘ F is bounded by M, by McDiarmid's inequality we have \( G_1(Q) \le E[G_1(Q)] + \sqrt{2M^2 \log(2/\delta)/n} \). By introducing a ghost query sample \( \tilde Q = \{\tilde q_1, \dots, \tilde q_n\} \), we have
\[ E[G_1(Q)] = E_Q\Big[ \sup_{f\in\mathcal{F}} \frac{1}{n}\sum_{i=1}^n L_f(q_i) - \int L_f(q)\, dP(q) \Big] \le E_{Q,\tilde Q}\Big[ \sup_{f\in\mathcal{F}} \frac{1}{n}\sum_{i=1}^n \big( L_f(q_i) - L_f(\tilde q_i) \big) \Big]. \tag{3} \]
Further assume that there are virtual document samples {z_i, z_i'}_{i=1,...,n} and {z̃_i, z̃_i'}_{i=1,...,n} for the query samples Q and Q̃; then \( L_f(q_i) = E_{z_i, z_i' \mid q_i}[l(z_i, z_i'; f)] \) and \( L_f(\tilde q_i) = E_{\tilde z_i, \tilde z_i' \mid \tilde q_i}[l(\tilde z_i, \tilde z_i'; f)] \). Substituting L_f(q_i) and L_f(q̃_i) into inequality (3), we obtain the following result:
\[ E[G_1(Q)] = E_{Q,\tilde Q}\Big[ \sup_{f\in\mathcal{F}} \frac{1}{n}\sum_{i=1}^n E_{z_i,z_i',\tilde z_i,\tilde z_i' \mid q_i,\tilde q_i}\big[ l(z_i, z_i'; f) - l(\tilde z_i, \tilde z_i'; f) \big] \Big] \le E\Big[ \sup_{f\in\mathcal{F}} \frac{1}{n}\sum_{i=1}^n \big( l(z_i, z_i'; f) - l(\tilde z_i, \tilde z_i'; f) \big) \Big] = E_{Q,S}\, E_\sigma\Big[ \sup_{f\in\mathcal{F}} \frac{2}{n}\sum_{i=1}^n \sigma_i\, l(z_i, z_i'; f) \Big]. \]
According to the definition of the document-layer reduced two-layer RA, Theorem 2 is proved.
4.1.2 Document-Layer Error Bound
In order to obtain the bound for the document-layer error, we use the fact that the documents are independent once the query sample is given. For any given query sample, we then obtain the following theorem by a concentration inequality and symmetrization.
Theorem 3. Denote \( G(S) := \sup_{f \in \mathcal{F}} \big( \bar R^l_n(f) - \hat R^l_{n; m_1,\dots,m_n}(f) \big) \) and assume l ∘ F is bounded by M. Then we have
\[ P\Big\{ G(S) \le \frac{1}{n} \sum_{i=1}^n E_{S_i \mid q_i}\big[ R_{1;m_i}(l \circ \mathcal{F}(q_i, S_i)) \big] + \sqrt{\sum_{i=1}^n \frac{2M^2 \log(2/\delta)}{m_i n^2}} \;\Big|\; Q \Big\} \ge 1 - \delta. \tag{4} \]
Proof. First, we prove the bounded difference property² of G(S). Given the query sample Q, all the documents in the document sample become independent. Denote by S' the document sample obtained by replacing document \( z_{j_0}^{i_0} \) in S with a new document \( \tilde z_{j_0}^{i_0} \). It is clear that
\[ \sup_{S,S'} \big( G(S) - G(S') \big) \le \sup_{S,S'} \sup_{f \in \mathcal{F}} \big( \hat R^l_{n;m_1,\dots,m_n}(f; S) - \hat R^l_{n;m_1,\dots,m_n}(f; S') \big) \le \sup_{S,S'} \sup_{f \in \mathcal{F}} \frac{ \sum_{k \ne j_0} \big( l(z_{j_0}^{i_0}, z_k^{i_0}; f) - l(\tilde z_{j_0}^{i_0}, z_k^{i_0}; f) \big) }{ n\, m_{i_0}(m_{i_0} - 1) } \le \frac{2M}{m_{i_0} n}. \]
Then by McDiarmid's inequality, with probability at least 1 − δ, we have
\[ G(S) \le E_{S \mid Q}[G(S)] + \sqrt{ \sum_{i=1}^n \frac{2M^2 \log(2/\delta)}{m_i n^2} }. \tag{5} \]
Second, inspired by [9], we introduce permutations to convert the non-sum-of-i.i.d. pairwise loss into a sum-of-i.i.d. form. Let \( \mathcal{S}_{m_i} \) be the symmetric group of degree m_i, and let π_i ∈ \( \mathcal{S}_{m_i} \) (i = 1, ..., n) permute the m_i documents associated with q_i. Since the documents associated with the same query follow the identical distribution, we have
\[ \frac{1}{n} \sum_{i=1}^n \frac{1}{m_i(m_i-1)} \sum_{j \ne k} l(z_j^i, z_k^i; f) \;\overset{d}{=}\; \frac{1}{n} \sum_{i=1}^n \frac{1}{m_i!} \sum_{\pi_i} \frac{1}{\lfloor m_i/2 \rfloor} \sum_{j=1}^{\lfloor m_i/2 \rfloor} l\big( z_{\pi_i(j)}^i, z_{\pi_i(\lfloor m_i/2 \rfloor + j)}^i; f \big), \tag{6} \]
where \( \overset{d}{=} \) denotes identity in distribution. Define a function G̃(S_i) on each S_i as follows:
\[ \tilde G(S_i) = \sup_{f \in \mathcal{F}} \Big( \frac{1}{\lfloor m_i/2 \rfloor} \sum_{j=1}^{\lfloor m_i/2 \rfloor} l\big( z_j^i, z_{\lfloor m_i/2 \rfloor + j}^i; f \big) - E_{z, z' \mid q_i}\, l(z, z'; f) \Big). \]
We can see that G̃(S_i) does not contain any document pairs that share a common document. Using Eqn. (6), we can decompose E_{S|Q}[G(S)] into the sum of the E_{S_i|q_i}[G̃(S_i)] as below:
\[ E_{S \mid Q}[G(S)] \le \frac{1}{n} \sum_{i=1}^n \frac{1}{m_i!} \sum_{\pi_i} E_{S_i \mid q_i}\Big[ \sup_{f \in \mathcal{F}} \Big( \frac{1}{\lfloor m_i/2 \rfloor} \sum_{j=1}^{\lfloor m_i/2 \rfloor} l\big( z_{\pi_i(j)}^i, z_{\pi_i(\lfloor m_i/2 \rfloor + j)}^i; f \big) - \int_{Z^2} l(z, z'; f)\, dP(z, z' \mid q_i) \Big) \Big] = \frac{1}{n} \sum_{i=1}^n E_{S_i \mid q_i}\big[ \tilde G(S_i) \big]. \tag{7} \]
Third, we bound \( E_{S_i \mid q_i}[\tilde G(S_i)] \) by use of symmetrization. We introduce a ghost document sample \( \tilde S_i = \{\tilde z_j^i\}_{j=1,\dots,m_i} \) that is independent of S_i and identically distributed, and let \( \sigma_1^i, \dots, \sigma_{\lfloor m_i/2 \rfloor}^i \) be independent Rademacher random variables, independent of S_i and \( \tilde S_i \). Then,
\[ E_{S_i \mid q_i}\big[ \tilde G(S_i) \big] \le E_{S_i, \tilde S_i \mid q_i}\Big[ \sup_{f \in \mathcal{F}} \frac{1}{\lfloor m_i/2 \rfloor} \sum_{j=1}^{\lfloor m_i/2 \rfloor} \Big( l\big( z_j^i, z_{\lfloor m_i/2 \rfloor + j}^i; f \big) - l\big( \tilde z_j^i, \tilde z_{\lfloor m_i/2 \rfloor + j}^i; f \big) \Big) \Big] = E_{S_i, \sigma^i \mid q_i}\Big[ \sup_{f \in \mathcal{F}} \frac{2}{\lfloor m_i/2 \rfloor} \sum_{j=1}^{\lfloor m_i/2 \rfloor} \sigma_j^i\, l\big( z_j^i, z_{\lfloor m_i/2 \rfloor + j}^i; f \big) \Big] = E_{S_i \mid q_i}\big[ R_{1;m_i}(l \circ \mathcal{F}(q_i, S_i)) \big]. \tag{8} \]
Jointly considering (5), (7), and (8), we prove the theorem.
[Footnote 2: We say a function has the bounded difference property if its value can change only by a bounded amount when a single variable is changed.]
4.1.3 Combining the Bounds
Considering Theorem 3 and taking expectation over the query sample Q, we obtain that with probability at least 1 − δ,
\[ \bar R^l_n(f) - \hat R^l_{n; m_1, \dots, m_n}(f) \le \frac{1}{n} \sum_{i=1}^n E_{S_i \mid q_i}\big[ R_{1;m_i}(l \circ \mathcal{F}(q_i, S_i)) \big] + \sqrt{ \sum_{i=1}^n \frac{2M^2 \log(2/\delta)}{m_i n^2} }, \quad \forall f \in \mathcal{F}. \]
Furthermore, if conventional RA has an upper bound D(l ∘ F, ·) for an arbitrary sample distribution, then the document-layer reduced two-layer RA can be upper bounded by D(l ∘ F, n) and the query-layer reduced two-layer RA can be bounded by D(l ∘ F, ⌊m_i/2⌋).
Combining the document-layer error bound and the query-layer error bound presented in the previous subsections, and considering the above discussions, we eventually prove Theorem 1.
4.2
Discussions
According to Theorem 1, we have the following discussions.
(1) The increasing number of either queries or documents per query in the training data will enhance
the two-layer generalization ability. This conclusion seems more intuitive and reasonable than that
obtained in [15].
(2) Only if n → ∞ and m_i → ∞ simultaneously does the two-layer generalization bound uniformly
converge. That is, if the number of documents for some query is finite, there will always exist
document-layer error no matter how many queries have been used for training; if the number of
queries is finite, then there will always exist query-layer error, no matter how many documents per
query have been used for training.
(3) If we only have a limited budget to label C documents in total, then according to Theorem 1 there is an optimal trade-off between the number of training queries and the number of training documents per query. This is consistent with previous empirical findings in [19]. Actually, one can attain the optimal trade-off by solving the following optimization problem:
\[ \min_{n,\, m_1, \dots, m_n} \;\; D(l \circ \mathcal{F}, n) + \sqrt{ \sum_{i=1}^n \frac{2M^2 \log(2/\delta)}{m_i n^2} } + \frac{1}{n} \sum_{i=1}^n D\big(l \circ \mathcal{F}, \lfloor m_i/2 \rfloor\big) \qquad \text{s.t.} \;\; \sum_{i=1}^n m_i = C. \]
This optimization problem is easy to solve. For example, if the ranking function class F satisfies VC(F̃) = V, then for the pairwise 0-1 loss we have
\[ n^* = \frac{ c_1 \sqrt{V} + \sqrt{2\log(4/\delta)} }{ c_1 \sqrt{2V} } \cdot \sqrt{C}, \qquad m_i^* \approx \frac{C}{n^*}, \]
where c₁ is a constant.
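The rule is easy to evaluate numerically; the sketch below implements the closed form as reconstructed above, so the absolute numbers are indicative only (c₁ is the unspecified constant from the Remark after Theorem 1).

```python
import numpy as np

def optimal_query_document_tradeoff(C, V, delta, c1=1.0):
    """Split a labeling budget C between queries (n*) and documents (m*).

    Uses the closed form for the pairwise 0-1 loss with
    D(l o F, m) = c1 * sqrt(V / m); c1 is unknown, so treat the
    output as indicative rather than prescriptive.
    """
    n_star = (c1 * np.sqrt(V) + np.sqrt(2.0 * np.log(4.0 / delta))) \
             / (c1 * np.sqrt(2.0 * V)) * np.sqrt(C)
    n_star = max(1, int(round(n_star)))
    m_star = max(2, int(round(C / n_star)))
    return n_star, m_star

# Example: C = 10000 labeled documents, V = 50, delta = 0.05.
print(optimal_query_document_tradeoff(10000, 50, 0.05))
```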
From this result we have the following discussions. (i) n* decreases with the increasing capacity of the function class. That is, we should label fewer queries and more documents per query when the hypothesis space is larger. (ii) For a fixed hypothesis space, n* increases with the confidence level 1 − δ. That is, we should label more queries if we want the bound to hold with larger probability.
The above findings can be used to explain the behavior of existing pairwise ranking algorithms, and can be used to guide the construction of training sets for learning to rank.
5 Conclusions and Discussions
In this paper, we have proposed conducting two-layer generalization analysis for ranking, and have proved a two-layer generalization bound for ERM learning with pairwise losses. The theoretical results we have obtained can better explain experimental observations in learning to rank than previous results, and can provide general guidelines on the trade-off between deep labeling and shallow labeling in the construction of training data.
For future work, we plan to (i) extend our analysis to listwise loss functions in ranking, such as ListNet [7] and ListMLE [18]; and (ii) introduce noise conditions in order to obtain faster convergence rates.
References
[1] S. Agarwal, T. Graepel, R. Herbrich, S. Har-Peled, and D. Roth. Generalization bounds for the area under the ROC curve. Journal of Machine Learning Research, 6:393-425, 2005.
[2] S. Agarwal and P. Niyogi. Generalization bounds for ranking algorithms via algorithmic stability. Journal of Machine Learning Research, 10:441-474, 2009.
[3] P. L. Bartlett, S. Mendelson, and M. Long. Rademacher and Gaussian complexities: risk bounds and structural results. Journal of Machine Learning Research, 3:463-482, 2002.
[4] J. Baxter. Learning internal representations. In Proceedings of the Eighth International Conference on Computational Learning Theory, pages 311-320. ACM Press, 1995.
[5] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 89-96, 2005.
[6] Y. Cao, J. Xu, T. Y. Liu, H. Li, Y. Huang, and H. W. Hon. Adapting ranking SVM to document retrieval. In SIGIR '06: Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 186-193. ACM Press, 2006.
[7] Z. Cao, T. Qin, T. Y. Liu, M. F. Tsai, and H. Li. Learning to rank: from pairwise approach to listwise approach. In ICML '07: Proceedings of the 24th International Conference on Machine Learning, pages 129-136, 2007.
[8] C. L. Clarke, N. Craswell, and I. Soboroff. Overview of the TREC 2009 web track. Technical report, no date.
[9] S. Clémençon, G. Lugosi, and N. Vayatis. Ranking and scoring using empirical risk minimization. In COLT '05: Proceedings of the 18th Annual Conference on Learning Theory, pages 1-15, 2005.
[10] D. Cossock and T. Zhang. Subset ranking using regression. In COLT '06: Proceedings of the 19th Annual Conference on Learning Theory, pages 605-619, 2006.
[11] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933-969, 2003.
[12] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers, pages 115-132, Cambridge, MA, 1999. MIT Press.
[13] T. Joachims. Optimizing search engines using clickthrough data. In KDD '02: Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 133-142, 2002.
[14] Y. Y. Lan, T. Y. Liu, Z. M. Ma, and H. Li. Generalization analysis of listwise learning-to-rank algorithms. In ICML '09: Proceedings of the 26th International Conference on Machine Learning, pages 577-584, 2009.
[15] Y. Y. Lan, T. Y. Liu, T. Qin, Z. M. Ma, and H. Li. Query-level stability and generalization in learning to rank. In ICML '08: Proceedings of the 25th International Conference on Machine Learning, pages 512-519, 2008.
[16] T. Y. Liu. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval, 3:225-331, 2009.
[17] J. R. Wen, J. Y. Nie, and H. J. Zhang. Clustering user queries of a search engine. In WWW '01: Proceedings of the 10th International Conference on World Wide Web, pages 162-168, New York, NY, USA, 2001. ACM.
[18] F. Xia, T.-Y. Liu, J. Wang, W. Zhang, and H. Li. Listwise approach to learning to rank: theory and algorithm. In ICML '08: Proceedings of the 25th International Conference on Machine Learning, pages 1192-1199. Omnipress, 2008.
[19] E. Yilmaz and S. Robertson. Deep versus shallow judgments in learning to rank. In SIGIR '09: Proceedings of the 32nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 662-663, 2009.
Learning Time-varying Concepts
Anthony Kuh
Dept. of Electrical Eng.
U. of Hawaii at Manoa
Honolulu, HI 96822
[email protected]
Thomas Petsche
Siemens Corp. Research
755 College Road East
Princeton, NJ 08540
petsche@learning.siemens.com
Ronald L. Rivest
Lab. for Computer Sci.
MIT
Cambridge, MA 02139
[email protected]
Abstract
This work extends computational learning theory to situations in which concepts
vary over time, e.g., system identification of a time-varying plant. We have
extended formal definitions of concepts and learning to provide a framework
in which an algorithm can track a concept as it evolves over time. Given
this framework and focusing on memory-based algorithms, we have derived
some PAC-style sample complexity results that determine, for example, when
tracking is feasible. We have also used a similar framework and focused on
incremental tracking algorithms for which we have derived some bounds on
the mistake or error rates for some specific concept classes.
1 INTRODUCTION
The goal of our ongoing research is to extend computational learning theory to include
concepts that can change or evolve over time. For example, face recognition is complicated by the fact that a person's face changes slowly with age and more quickly with changes in make-up, hairstyle, or facial hair. Speech recognition is complicated by the fact that a speaker's voice may change over time due to fatigue, illness, stress, or background noise (Galletti and Abbott, 1989).
Time-varying systems often appear in adaptive control or signal processing applications.
For example, adaptive equalizers adjust the receiver and transmitter to compensate for
changes in the noise on a transmission channel (Lucky et al., 1968). The kinematics of
a robot arm can change when it picks up a heavy load or when the motors and drive
train responses change due to wear. The output of a sensor may drift over time as the
components age or as the temperature changes.
Computational learning theory as introduced by Valiant (1984) can make some useful
statements about whether a given class of concepts can be learned and provide probabilistic bounds on the number of examples needed to learn a concept. Haussler et al. (1987) and Littlestone (1989) have also shown that it is possible to bound the number of
mistakes that a learner will make. However, while these analyses allow the concept to be
chosen arbitrarily, that concept must remain fixed for all time. Littlestone and Warmuth
(1989) considered concepts that may drift, but in the context of a different accuracy
measure than we use. Our research seeks to explore further modifications to existing theory to allow the analysis of performance when learning time-varying concepts.
In the following, we describe two approaches we are exploring. Section 3 describes an extension of the PAC-model to include time-varying concepts and shows how this new model applies to algorithms that base their hypotheses on a set of stored examples. Section 4 describes how we can bound the mistake rate of an algorithm that updates its estimate based on the most recent example. In Section 2 we define some notation and terminology that is used in the remainder of the paper.
2 NOTATION & TERMINOLOGY
For a dichotomy that labels each instance as a positive or negative example of a concept, we can formally describe the model as follows. Each instance x_j is drawn randomly, according to an arbitrary fixed probability distribution, from an instance space X. The concept c to be learned is drawn randomly, according to an arbitrary fixed probability distribution, from a concept class C. Associated with each instance is a label a_j = c(x_j) such that a_j = 1 if x_j is a positive example and a_j = 0 otherwise. The learner is presented with a sequence of examples (each example is a pair (x_j, a_j)) chosen randomly from X. The learner must form an estimate, ĉ, of c based on these examples.
In the time-varying case, we assume that there is an adversary who can change c over time, so we change notation slightly. The instance x_t is presented at time t. The concept c_t is active at time t if the adversary is using c_t to label instances at that time. The sequence of t active concepts, c_t = {c_1, ..., c_t}, is called a concept sequence of length t. The algorithm's task is to form an estimate ĉ_t of the actual concept sequence c_t, i.e., at each time t, the tracker must use the sequence of randomly chosen examples to form an estimate ĉ_t of c_t. A set of length t concept sequences is denoted by C(t) and we call a set of infinite length concept sequences a concept sequence space and denote it by C.
Since the adversary, if allowed to make arbitrary changes, can easily make the tracker's task impossible, it is usually restricted such that only small or infrequent changes are allowed. In other words, each C(t) is a small subset of C^t.
We consider two different types of "tracking" (learning) algorithms,
memory-based (or batch) and incremental (or on-line). We analyze the sample complexity
of batch algorithms and the mistake (or error) rate of incremental algorithms.
In the usual case where concepts are time-invariant, batch learning algorithms operate
in two distinct phases. During the first phase, the algorithm collects a set of training
examples. Given this set, it then computes a hypothesis. In the second phase, this
hypothesis is used to classify all future instances. The hypothesis is never again updated.
In Section 3 we consider memory-based algoritms derived from batch algorithms.
Learning Time-varying Concepts
When concepts are time-invariant, an on-line learning algorithm is one which constantly
modifies its hypothesis. On each iteration, the learner (1) receives an instance; (2) predicts
a label based on the current hypothesis; (3) receives the correct label; and (4) uses the
correct label to update the hypothesis. In Section 4, we consider incremental algorithms
based on on-line algorithms.
When studying learnability, it is helpful to define the Vapnik-Chervonenkis (VC) dimension (Vapnik and Chervonenkis, 1971) of a concept class: VCdim(C) is the cardinality
of the largest set such that every possible labeling scheme is achieved by some concept
in C. Blumer et al. (1989) showed that a concept class is learnable if and only if the
VC-dimension is finite and derived an upper bound (that depends on the VC-dimension) for the number of examples needed to PAC-learn a learnable concept class.
3 MEMORY-BASED TRACKING
In this section, we will consider memory-based trackers which base their current hypothesis on a stored set of examples. We build on the definition of PAC-learning to define
what it means to PAC-track a concept sequence. Our main result here is a lower bound
on the maximum rate of change that can be PAC-tracked by a memory-based learner.
A memory-based tracker consists of (a) a function W(ε, δ); and (b) an algorithm A that produces the current hypothesis, ĉ_t, using the most recent W(ε, δ) examples. The memory-based tracker thus maintains a sliding window on the examples that includes the most recent W(ε, δ) examples. We do not require that A run in polynomial time.
Following the work of Valiant (1984) we say that an algorithm A PAC-tracks a concept sequence space C' ⊆ C if, for any c ∈ C', any distribution D on X, any ε, δ > 0, and access to examples randomly selected from X according to D and labeled at time t by concept c_t; for all t sufficiently large, with t' chosen uniformly at random between 1 and t, it is true that

Pr(d(ĉ_t', c_t') ≤ ε) ≥ 1 − δ.

The probability includes any randomization algorithm A may use as well as the random selection of t' and the random selection of examples according to the distribution D, and where d(c, c') = D({x : c(x) ≠ c'(x)}) is the probability that c and c' disagree on a randomly chosen example.
Learnability results often focus on learners that see only positive examples. For many
concept classes this is sufficient, but for others negative examples are also necessary.
Natarajan (1987) showed that a concept class that is PAC-learnable can be learned using
only positive examples if the class is closed under intersection.
With this in mind, let's focus on a memory-based tracker that modifies its estimate
using only positive examples. Since PAC-tracking requires that A be able to PAC-learn
individual concepts, it must be true that A can PAC-track a sequence of concepts only if
the concept class is closed under intersection. However, this is not sufficient.
Observation 1. Assume C is closed under intersection. If positive examples are drawn from c_1 ∈ C prior to time t_0, and from c_2 ∈ C, c_1 ⊆ c_2, after time t_0, then there exists an estimate of c_2 that is consistent with all examples drawn from c_1.
The proof of this is straightforward once we realize that if c_1 ⊆ c_2, then all positive examples drawn prior to time t_0 from c_1 are consistent with c_2. The problem is therefore equivalent to first choosing a set of examples from a subset of c_2 and then choosing more examples from all of c_2 - it skews that probability distribution, but any estimate of c_2 will include all examples drawn from c_1.
Consider the set of closed intervals on [0, 1], C = {[a, b] | 0 ≤ a, b ≤ 1}. Assume that, for some d > b, c_t = c_1 = [a, b] for all t ≤ t_0 and c_t = c_2 = [a, d] for all t > t_0. All the examples drawn prior to t_0, {x_t : t < t_0}, are consistent with c_2 and it would be nice to use these examples to help estimate c_2. How much can these examples help?

Theorem 1. Assume C is closed under intersection and VCdim(C) is finite; C_2 ⊆ C; and A has PAC-learned c_1 ∈ C at time t_0. Then, for some d such that VCdim(C_2) ≤ d ≤ VCdim(C), the maximum number of examples drawn after time t_0 required so that A can PAC-learn c_2 ∈ C is upper bounded by m(ε, δ) = max((4/ε) log(2/δ), (8d/ε) log(13/ε)).
In other words, if there is no prior information about c_2, then the number of examples required depends on VCdim(C). However, the examples drawn from c_1 can be used to shrink the concept space towards c_2. For example, when c_1 = [a, b] and c_2 = [a, c], in the limit where c_2 → c_1 the problem of learning c_2 reduces to learning a one-sided interval, which has VC-dimension 1 versus 2 for the two-sided interval. Since it is unlikely that c_2 = c_1, it will usually be the case that d > VCdim(C_2).
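To make the scaling concrete, the bound above is easy to evaluate numerically. A minimal sketch, assuming the Blumer et al. (1989) form of the bound (the helper name is illustrative):

```python
import math

def m_bound(eps: float, delta: float, d: int) -> float:
    """Sample-size upper bound for PAC-learning a class of VC-dimension d
    to accuracy eps with confidence 1 - delta (Blumer et al., 1989)."""
    return max((4.0 / eps) * math.log(2.0 / delta),
               (8.0 * d / eps) * math.log(13.0 / eps))

# Shrinking the effective dimension d (e.g., from 2 for two-sided intervals
# to 1 for one-sided intervals) shrinks the required sample size:
for d in (1, 2):
    print(d, round(m_bound(eps=0.1, delta=0.05, d=d)))
```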
In order to PAC-track c, most of the time A must have m(ε, δ) examples consistent with the current concept. This implies that W(ε, δ) must be at least m(ε, δ). Further, since the concepts are changing, the consistent examples will be the most recent. Using a sliding window of size m(ε, δ), the tracker will have an estimate that is based on examples that are consistent with the active concept after collecting no more than m(ε, δ) examples after a change.
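As an illustration of the sliding-window idea, a memory-based tracker for the interval class above can be sketched in a few lines. This is a minimal sketch under one simple choice of learner (the smallest interval covering the positive examples in the window), not the algorithm analyzed here:

```python
import random
from collections import deque

def make_interval_tracker(window_size):
    """Memory-based tracker for concepts of the form [a, b] on [0, 1]:
    keep the most recent window_size examples and return the smallest
    interval covering the positive ones."""
    window = deque(maxlen=window_size)

    def update(x, label):
        window.append((x, label))
        positives = [xi for xi, li in window if li == 1]
        if not positives:
            return None                        # no positive evidence yet
        return (min(positives), max(positives))

    return update

update = make_interval_tracker(window_size=50)
hypothesis = None
for t in range(200):
    c = (0.2, 0.6) if t < 100 else (0.2, 0.9)  # concept changes at t = 100
    x = random.random()
    hypothesis = update(x, int(c[0] <= x <= c[1]))
print(hypothesis)  # close to (0.2, 0.9) once the window refills
```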
In much of our analysis of memory-based trackers, we have focused on a concept sequence space C_λ, which is the set of all concept sequences such that, on average, each concept is active for at least 1/λ time steps before a change occurs. That is, if N(c, t) is the number of changes in the first t time steps of c, C_λ = {c : lim sup_{t→∞} N(c, t)/t < λ}. The question then is, for what values of λ does there exist a PAC-tracker?
Theorem 2. Let A be a memory-based tracker with W(ε, δ) = m(ε, δ/2) which draws instances labeled according to some concept sequence c ∈ C_λ with each c_t ∈ C and VCdim(C) < ∞. For any ε > 0 and δ > 0, A can PAC-track C_λ if λ < 1/(2m(ε, δ/2)).
This theorem provides a lower bound on the maximum rate of change that can be tracked by a batch tracker. Theorem 1 implies that a memory-based tracker can use examples from a previous concept to help estimate the active concept. The proof of Theorem 2 assumes that some of the most recent m(ε, δ) examples are not consistent with c_t until m(ε, δ) examples from the active concept have been gathered. An algorithm that removes inconsistent examples more intelligently, e.g., by using conflicts between examples or information about allowable changes, will be able to track concept sequence spaces that change more rapidly.
4 INCREMENTAL TRACKING
Incremental tracking is similar to the on-line learning case, but now we assume that there is an adversary who can change the concept such that c_{t+1} ≠ c_t. At each iteration:
1. the adversary chooses the active concept c_t;
2. the tracker is given an unlabeled instance, x_t;
3. the tracker predicts a label using the current hypothesis: â_t = ĉ_{t-1}(x_t);
4. the tracker is given the correct label a_t;
5. the tracker forms a new hypothesis: ĉ_t = A(ĉ_{t-1}, (x_t, a_t)).
We have defined a number of different types of trackers and adversaries: A prudent tracker predicts that â_t = 1 if and only if ĉ_{t-1}(x_t) = 1. A conservative tracker changes its hypothesis only if â_t ≠ a_t. A benign adversary changes the concept in a way that is independent of the tracker's hypothesis while a malicious adversary uses information about the tracker and its hypothesis to choose a c_{t+1} to cause an increase in the error rate. The most malicious adversary chooses c_{t+1} to cause the largest possible increase in error rate on average.
We distinguish between the error of the hypothesis formed in step 5 above and a mistake made in step 3 above. The instantaneous error rate of a hypothesis is e_t = d(c_t, ĉ_t). It is the probability that another randomly chosen instance labeled according to c_t will be misclassified by the updated hypothesis. A mistake is a mislabeled instance, and we define a mistake indicator function M_t = 1 if c_t(x_t) ≠ ĉ_{t-1}(x_t).

We define the average error rate ē_t = (1/t) Σ_{τ=1}^{t} e_τ and the asymptotic error rate is e = lim inf_{t→∞} ē_t. The average mistake rate is the average value of the mistake indicator function, μ̄_t = (1/t) Σ_{τ=1}^{t} M_τ, and the asymptotic mistake rate is μ = lim inf_{t→∞} μ̄_t.
We are modeling the incremental tracking problems as a Markov process. Each state of the Markov process is labeled by a triple (c, ĉ, a), and corresponds to an iteration in which c is the active concept, ĉ is the active hypothesis, and a is the set of changes the adversary is allowed to make given c. We are still in the process of analyzing a general model, so the following presents one of the special cases we have examined.
Let X be the set of all points on the unit circle. We use polar coordinates so that, since the radius is fixed, we can label each point by an angle θ; thus X = [0, 2π). Note that X is periodic. The concept class C is the set of all arcs of fixed length π radians, i.e., all semicircles that lie on the unit circle. Each c ∈ C can be written as c = [π(2θ − 1) mod 2π, 2πθ), where θ ∈ [0, 1). We assume that the instances are chosen uniformly from the circle.
The adversary may change the concept by rotating it around the circle; however, the maximum rotation is bounded such that, given c_t, c_{t+1} must satisfy d(c_{t+1}, c_t) ≤ γ. For the uniform case, this is equivalent to restricting θ_{t+1} = θ_t ± β mod 1, where 0 ≤ β ≤ γ/2.
The tracker is required to be conservative, but since we are satisfied to lower bound the error rate, we assume that every time the tracker makes a mistake, it is told the correct concept. Thus, ĉ_t = ĉ_{t-1} if no mistake is made, but ĉ_t = c_t whenever a mistake is made.
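The mistake rate of this game is straightforward to estimate by direct simulation. A minimal sketch, under one concrete reading of the setup (angles measured in units of the full circle, semicircles of width 1/2):

```python
import random

def empirical_mistake_rate(gamma, steps=500_000):
    """Conservative tracker vs. the most malicious adversary on the
    semicircle class; returns the fraction of time steps with a mistake."""
    theta, theta_hat = 0.0, 0.0
    direction = random.choice((-1, 1))
    mistakes = 0
    for _ in range(steps):
        theta = (theta + direction * gamma / 2.0) % 1.0  # adversary rotates
        x = random.random()
        true_label = (x - theta) % 1.0 < 0.5       # true semicircle
        predicted = (x - theta_hat) % 1.0 < 0.5    # tracker's semicircle
        if true_label != predicted:
            mistakes += 1
            theta_hat = theta                      # told the correct concept
            direction = random.choice((-1, 1))     # adversary flips a coin
    return mistakes / steps

gamma = 0.01
print(empirical_mistake_rate(gamma))   # approx sqrt(2 * gamma / pi) ~ 0.08
```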
The worst case or most malicious adversary for a conservative tracker always tries to maximize the tracker's error rate. Therefore, whenever the tracker deduces c_t (i.e., whenever the tracker makes a mistake), the adversary picks a direction by flipping a fair coin. The adversary then rotates the concept in that direction as far as possible on each iteration. Then we can define a random direction function s_t and write
s_t = +1 with probability 1/2, if ĉ_{t-1} ≠ c_{t-1};
      -1 with probability 1/2, if ĉ_{t-1} ≠ c_{t-1};
      s_{t-1}, if ĉ_{t-1} = c_{t-1}.
Then the adversary chooses the new concept to be θ_t = θ_{t-1} + s_t γ/2.

Since the adversary always rotates the concept by γ/2, there are 2/γ distinct concepts that can occur. However, when θ(t + 1/γ) = θ(t) + 1/2 mod 1, the semicircles do not overlap and therefore, after at most 1/γ changes, a mistake will be made with probability one. Because at most 1/γ consecutive changes can be made before the mistake rate returns to zero, because the probability of a mistake depends only on θ_t − θ̂_t, and because of inherent symmetries, this system can be modeled by a Markov chain with k = 1/γ states.
Each state s_i corresponds to the case |θ_t − θ̂_t| = iγ mod 1. The probability of a transition from state s_i to state s_{i+1} is P(s_{i+1}|s_i) = 1 − (i + 1)γ. The probability of a transition from state s_i to state s_0 is P(s_0|s_i) = (i + 1)γ. All other transition probabilities are zero. This Markov chain is homogeneous, irreducible, aperiodic, and finite, so it has an invariant distribution. By solving the balance equations, for γ sufficiently small, we find that

P(s_i) ≈ P(s_0) exp(−i²γ/2),  i = 0, ..., k − 1.     (1)
Since we assume that γ is small, the probability that no mistake will occur for each of k − 1 consecutive time steps after a mistake, P(s_{k-1}), is very small and we can say that the probability of a mistake is approximately P(s_0). Therefore, from equation (1), for small γ it follows that μ_malicious ≈ √(2γ/π).
If we drop the assumption that the adversary is malicious, and instead assume the adversary chooses the direction randomly at each iteration, then a similar sort of analysis yields that μ_benign = O(γ^{2/3}).
Since the foregoing analysis assumes a conservative tracker that chooses the best hypothesis every time it makes a mistake, it implies that for this concept sequence space and any conservative tracker, the mistake rate is Ω(γ^{1/2}) against a malicious adversary and Ω(γ^{2/3}) against a benign adversary. For either adversary, it can be shown that e = μ − γ.
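The invariant distribution of the chain can also be computed directly from the balance relation π_i = π_0 ∏_{j=1}^{i}(1 − jγ), which makes the √(2γ/π) scaling easy to check; a minimal sketch:

```python
import math

def chain_mistake_rate(gamma):
    """P(s_0) of the k-state chain with P(s_{i+1}|s_i) = 1 - (i+1)*gamma and
    P(s_0|s_i) = (i+1)*gamma; for small gamma this approximates the
    asymptotic mistake rate."""
    k = round(1.0 / gamma)
    w, total = 1.0, 1.0
    for i in range(1, k):
        w *= 1.0 - i * gamma    # stationary weight of state s_i
        total += w
    return 1.0 / total

for gamma in (0.01, 0.001):
    print(gamma, chain_mistake_rate(gamma), math.sqrt(2 * gamma / math.pi))
```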
5 CONCLUSIONS AND FURTHER RESEARCH
We can draw a number of interesting conclusions from the work we have done so far. First, tracking sequences of concepts is possible when the individual concepts are learnable and change occurs "slowly" enough. Theorem 2 gives a weak upper bound on the rate of concept changes that is sufficient to ensure that tracking is possible.
Theorem 1 implies that there can be some trade-off between the size (VC-dimension)
of the changes and the rate of change. Thus, if the size of the changes is restricted,
Theorems 1 and 2 together imply that the maximum rate of change can be faster than for
the general case. It is significant that a simple tracker that maintains a sliding window
on the most recent set of examples can PAC-track the new concept after a change as
quickly as a static learner can if it starts from scratch. This suggests it may be possible
to subsume detection so that it is implicit in the operation of the tracker. One obviously
open problem is to determine d in Theorem 1, i.e., what is the appropriate dimension to
apply to the concept changes?
The analysis of the mistake and error rates presented in Section 4 is for a special case
with VC-dimension 1, but even so, it is interesting that the mistake and error rates are
significantly worse than the rate of change. Preliminary analysis of other concept classes
suggests that this continues to be true for higher VC-dimensions. We are continuing
work to extend this analysis to other concept classes, including classes with higher VC-dimension; non-conservative learners; and other restrictions on concept changes.
Acknowledgments
Anthony Kuh gratefully acknowledges the support of the National Science Foundation
through grant EET-8857711 and Siemens Corporate Research. Ronald L. Rivest gratefully acknowledges support from NSF grant CCR-8914428, ARO grant N00014-89-J-1988,
and a grant from the Siemens Corporation.
References
Blumer, A., Ehrenfeucht, A., Haussler, D., and Warmuth, M. (1989). Learnability and the
Vapnik-Chervonenkis dimension. Journal of the Association for Computing Machinery,
36(4):929-965.
Galletti, I. and Abbott, M. (1989). Development of an advanced airborne speech recognizer for direct voice input. Speech Technology, pages 60-63.
Haussler, D., Littlestone, N., and Warmuth, M. K. (1987). Expected mistake bounds for
on-line learning algorithms. (Unpublished).
Littlestone, N. (1989). Mistake bounds and logarithmic linear-threshold learning algorithms. Technical Report UCSC-CRL-89-11, Univ. of California at Santa Cruz.
Littlestone, N. and Warmuth, M. K. (1989). The weighted majority algorithm. In Proceedings of the IEEE FOCS Conference, pages 256-261. IEEE. (Extended abstract only.)
Lucky, R. W., Salz, J., and Weldon, E. J. (1968). Principles of Data Communications.
McGraw-Hill, New York.
Natarajan, B. K. (1987). On learning boolean functions. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, pages 296-304.
Valiant, L. (1984). A theory of the learnable. Communications of the ACM, 27:1134-1142.
Vapnik, V. N. and Chervonenkis, A. Y. (1971). On the uniform convergence of relative
frequencies of events to their probabilities. Theory of Probability and its Applications,
16:264-280.
| 402 |@word polynomial:1 open:1 seek:1 eng:2 pick:2 fonn:1 chervonenkis:4 existing:1 current:5 com:1 si:5 must:7 written:1 cruz:1 ronald:2 realize:1 benign:2 motor:1 remove:1 drop:1 update:2 selected:1 warmuth:4 provides:1 c2:18 direct:1 ucsc:1 symposium:1 focs:1 consists:1 expected:1 actual:1 window:3 cardinality:1 rivest:6 notation:3 bounded:2 insure:1 what:3 corporation:1 nj:1 every:3 collecting:1 control:1 unit:2 grant:4 appear:1 supc:1 positive:7 before:2 mistake:26 limit:1 analyzing:1 approximately:1 examined:1 collect:1 suggests:2 co:2 acknowledgment:1 lucky:2 honolulu:1 semicircle:2 significantly:1 word:2 road:1 unlabeled:1 selection:2 context:1 impossible:1 restriction:1 equivalent:2 modifies:2 straightforward:1 focused:2 haussler:3 coordinate:1 updated:2 infrequent:1 homogeneous:1 us:2 hypothesis:19 recognition:2 natarajan:2 continues:1 predicts:3 labeled:4 electrical:1 worst:1 sol:1 trade:1 complexity:2 solving:1 learner:8 mislabeled:1 easily:1 algoritms:1 train:1 univ:1 distinct:2 describe:2 dichotomy:1 labeling:1 choosing:2 foregoing:1 say:2 nineteenth:1 otherwise:1 skews:1 obviously:1 sequence:17 intelligently:1 aro:1 remainder:1 deduces:1 rapidly:1 convergence:1 transmission:1 produce:1 incremental:7 help:3 vcdimension:1 implies:4 direction:4 radius:1 aperiodic:1 correct:4 vc:7 vcdim:7 require:1 preliminary:1 randomization:1 memorybased:1 exploring:1 extension:1 tracker:31 considered:1 sufficiently:2 around:1 vary:1 consecutive:2 recognizer:1 polar:1 label:9 largest:2 weighted:1 mit:2 sensor:1 always:2 cr:1 varying:9 derived:4 focus:2 transmitter:1 helpful:1 unlikely:1 bt:2 weldon:1 misclassified:1 denoted:1 prudent:1 development:1 special:2 once:1 never:1 f3:2 future:1 others:1 report:1 inherent:1 irreducible:1 randomly:8 wee:1 national:1 individual:2 phase:3 detection:1 adjust:1 chain:2 necessary:1 facial:1 unifonnly:1 machinery:1 continuing:1 littlestone:5 circle:4 rotating:1 instance:13 classify:1 modeling:1 boolean:1 cover:1 subset:2 uniform:2 learnability:3 stored:2 periodic:1 chooses:5 person:1 st:3 probabilistic:1 told:1 off:1 together:1 quickly:2 again:1 satisfied:1 choose:1 slowly:2 hawaii:2 worse:1 style:1 return:1 includes:2 satisfy:1 depends:3 try:1 lab:1 closed:5 analyze:1 start:1 sort:1 maintains:2 complicated:2 formed:1 accuracy:1 who:2 gathered:1 yield:1 inft:2 identification:1 weak:1 drive:1 whenever:2 definition:2 against:2 frequency:1 associated:1 proof:2 static:1 radian:1 noooi4:1 lim:3 focusing:1 higher:2 response:1 done:1 shrink:1 implicit:1 until:1 receives:2 manoa:1 aj:4 concept:72 true:3 ehrenfeucht:1 during:1 speaker:1 fatigue:1 stress:1 allowable:1 hill:1 temperature:1 instantaneous:1 fi:3 rotation:1 mt:1 rl:1 tracked:2 jl:1 extend:2 association:1 illness:1 significant:1 cambridge:1 gratefully:2 wear:1 robot:1 access:1 base:2 recent:6 showed:2 corp:1 arbitrarily:1 determine:2 maximize:1 signal:1 sliding:3 corporate:1 reduces:1 hairstyle:1 technical:1 faster:1 compensate:1 hair:1 iteration:5 achieved:1 background:1 interval:3 airborne:1 malicious:5 ot:1 operate:1 inconsistent:1 mod:4 call:1 enough:1 xj:4 whether:1 speech:3 york:1 cause:2 useful:1 santa:1 cit:1 exist:1 lsi:1 nsf:1 track:8 rb:1 ccr:1 write:1 terminology:2 threshold:1 drawn:9 changing:1 abbott:2 equalizer:1 run:1 angle:1 extends:1 draw:2 bound:11 hi:1 ct:45 distinguish:1 annual:1 occur:2 according:6 remain:1 describes:1 slightly:1 evolves:1 modification:1 wherever:1 restricted:2 invariant:3 pr:1 sided:2 equation:2 kinematics:1 needed:1 mind:1 studying:1 operation:1 apply:1 
appropriate:1 petsche:5 batch:5 voice:2 coin:1 thomas:1 assumes:2 include:3 xc:1 build:1 question:1 occurs:2 flipping:1 usual:1 rotates:2 sci:1 majority:1 length:4 modeled:1 balance:1 statement:1 negative:2 upper:3 disagree:1 observation:1 markov:4 arc:1 finite:3 situation:1 extended:2 subsume:1 communication:2 arbitrary:3 drift:2 introduced:1 pair:1 required:3 unpublished:1 conflict:1 california:1 learned:4 able:2 adversary:21 usually:2 max:1 memory:11 including:1 overlap:1 event:1 indicator:2 advanced:1 arm:1 scheme:1 technology:1 imply:1 acknowledges:2 prior:4 nice:1 evolve:1 asymptotic:2 relative:1 plant:1 interesting:2 versus:1 age:2 triple:1 foundation:1 sufficient:3 consistent:7 principle:1 heavy:1 formal:1 allow:2 face:2 dimension:9 transition:3 computes:1 made:5 adaptive:2 far:2 mcgraw:1 eet:1 kuh:6 active:9 receiver:1 lcs:1 sk:1 channel:1 learn:4 symmetry:1 cl:2 anthony:2 main:1 noise:2 allowed:3 fair:1 lie:1 theorem:9 load:1 specific:1 xt:7 galletti:2 pac:16 learnable:5 r2:1 stl:1 exists:1 ih:2 vapnik:4 restricting:1 valiant:3 ci:11 intersection:4 lt:2 logarithmic:1 explore:1 tracking:10 applies:1 corresponds:2 constantly:1 acm:2 ma:1 goal:1 blumer:2 towards:1 crl:1 feasible:1 change:36 infinite:1 uniformly:1 conservative:6 called:1 siemens:4 east:1 formally:1 college:1 support:2 ongoing:1 dept:1 princeton:1 scratch:1 |
Over-complete representations on recurrent neural
networks can support persistent percepts
Dmitri B. Chklovskii
Janelia Farm Research Campus
Howard Hughes Medical Institute
Ashburn, VA 20147
[email protected]
Shaul Druckmann
Janelia Farm Research Campus
Howard Hughes Medical Institute
Ashburn, VA 20147
[email protected]
Abstract
A striking aspect of cortical neural networks is the divergence of a relatively small
number of input channels from the peripheral sensory apparatus into a large number of cortical neurons, an over-complete representation strategy. Cortical neurons
are then connected by a sparse network of lateral synapses. Here we propose that
such architecture may increase the persistence of the representation of an incoming stimulus, or a percept. We demonstrate that for a family of networks in which
the receptive field of each neuron is re-expressed by its outgoing connections, a
represented percept can remain constant despite changing activity. We term this
choice of connectivity REceptive FIeld REcombination (REFIRE) networks. The
sparse REFIRE network may serve as a high-dimensional integrator and a biologically plausible model of the local cortical circuit.
1 Introduction
Two salient features of cortical networks are the numerous recurrent lateral connections within a
cortical area and the high ratio of cortical cells to sensory input channels. In their seminal study
[1], Olshausen and Field argued that such architecture may subserve sparse over-complete representations, which maximize representation accuracy while minimizing the metabolic cost of spiking.
In this framework, lateral connections between neurons with correlated receptive fields mediate explaining away of the sensory input features [2]. With the exception of an Ising-like generative model
for the lateral connections [3] and a mutual information maximization approach [4], most theoretical
work on lateral connections did not focus on the representation over-completeness [5] and references
therein.
Here, we propose that over-complete representations on recurrently connected networks offer a solution to a long-standing puzzle in neuroscience, that of maintaining a stable sensory percept in the
absence of time-invariant persistent activity (rate of action potential discharge). In order for sensory
percepts to guide actions, their duration must extend to behavioral time scales, hundreds of milliseconds or seconds if not more. However, many cortical neurons exhibit time-varying activity even
during working memory tasks [6, 7] and references therein. If each neuron codes for orthogonal
directions in stimulus space, any change in the activity of neurons would cause a distortion in the
network representation, implying that a percept cannot be maintained.
We point out that, in an over-complete representation, network activity can change without any
change in the percept, allowing persistent percepts to be maintained in face of variable neuronal
activity. This results from the fact that the activity space has a higher dimensionality than that of
the stimulus space. When the activity changes in a direction nulled by the projection onto stimulus
space, the percept remains invariant.
What lateral connectivity can support persistent percepts, even in the face of changing neuronal
activity? We derive the condition on lateral connection weights for networks to maintain persistent
percepts, thus defining a family of REceptive FIeld REcombination networks. Furthermore, we
propose that minimizing synaptic volume cost favors sparse REFIRE networks, whose properties are
remarkably similar to those of the cortex. Such REFIRE networks act as high-dimensional integrators
of sensory input.
2 Model
We consider n sensory neurons, their activity marked by s ∈ R^n, which project to a layer of m cortical neurons, where m > n. The activity of the m neurons, marked by a ∈ R^m, at any given time represents a percept of a certain stimulus. The represented percept s is a linear superposition of feature vectors, stacked as columns of matrix D, weighted by the neuronal activity a:
s = Da.
(1)
For instance, s could represent the intensity level of pixels in a patch of the visual field and the
columns of D a dictionary chosen to represent the patches, e.g. a set of Gabor filters [8]. Since
m > n, the columns of dictionary D cannot be orthogonal and hence define a frame rather than a
basis [9].
2.1 Frames
A frame is a generalization of the idea of a basis to linearly dependent elements [9]. The mapping
between the activity space R^m and the sensory space R^n is accomplished by the synthesis operator, D. The adjoint operator D^T is called the analysis operator and their composition the frame operator DD^T. As a consequence of the columns of D being a frame, a given vector in the space of percepts can
be represented non-uniquely, i.e. with different coefficients expressed by neuronal activity a. The
general form of coefficients is given by:
a = D^T (DD^T)^{-1} s + a_⊥,     (2)
where a_⊥ belongs to the null-space of D, i.e. Da_⊥ = 0.
One choice of coefficients, called frame coefficients, corresponds to a_⊥ = 0 and minimizes
l2 norm. Alternatively one can choose a set of coefficients minimizing the l1 norm. These can be
computed by Matching Pursuit [10], Basis Pursuit [11] or LASSO [12], or by the dynamics of a
neural network with feedforward and lateral connections [13]. In summary, the neural activity is an
over-complete representation of the sensory percepts, the m columns of D acting as a frame for the
space of sensory percepts.
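The degeneracy in Equation (2) is easy to see numerically. A minimal sketch with a random frame (a toy example; the dictionary and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 10                      # over-complete: 10 frame vectors in R^4
D = rng.standard_normal((n, m))
s = rng.standard_normal(n)        # a percept

a_frame = D.T @ np.linalg.solve(D @ D.T, s)    # frame coefficients, eq. (2)

# Any component in the null-space of D leaves the percept unchanged:
_, _, Vt = np.linalg.svd(D)
a_null = Vt[n:].T @ rng.standard_normal(m - n) # lies in null(D)

print(np.allclose(D @ a_frame, s))             # True
print(np.allclose(D @ (a_frame + a_null), s))  # True: different activity,
                                               # same percept
```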
2.2 Persistent percepts and lateral connectivity
Now, we derive a necessary and sufficient condition on the lateral connections L such that for every
a the percept represented by Equation (1) persists. We focus on the dynamics of a following a
transient presentation of the sensory stimulus. The dynamics of a network with lateral connectivity
matrix L is given by:
ȧ = −a + La,
(3)
where time is measured in units of the neuronal membrane time constant. Requiring time-invariant
persistent activity amounts to ȧ = 0, or
a = La.
(4)
However, this is not necessary if we require only the percept represented by the network to be fixed.
Instead,
ṡ = Dȧ = D(−a + La) = 0
(5)
Thus, setting the derivative of s to zero is tantamount to
Da = DLa.
(6)
If we require persistent percepts for any a, then:
D = DL
(7)
Equation (7) has a trivial solution L = I, which corresponds to a network with no actual lateral
connections and only autapses. We do not consider this solution further for two reasons. First,
autapses are extremely rare among cortical neurons [14]. Second, recurrent networks better support
persistency than autapses [15, 16].
The intuition behind the derivation of Equation (7) is as follows: as the activity of each neuron
changes due to the first term in the rhs of Equation (5), its contribution to the percept may change. To
compensate for this change without necessarily keeping the activity fixed, we require that the other
neurons adjust their activity according to Equation (6).
The condition imposed by Equation (7) on the synaptic weights can be understood as follows. For
each neuron j, the sum of its post-synaptic partners' receptive fields, weighted by the synaptic efficacy from neuron j to the other neurons, equals the receptive field of neuron j. Thus, the other neurons
get excited by exactly the amount that it would take for them to replace the lost contribution to the
percept. Equation (7) and its non-trivial solutions that maintain persistent percepts are the main
results of the present study. We term non-trivial solutions of Equation (7) REceptive FIeld REexpression, or REFIRE networks due to the intuition underlying their definition.
Some patterns of activity satisfying Equation (4) will remain time-invariant themselves. These
correspond to patterns spanned by the right eigenvectors of L with an eigenvalue of one. Note that
in order to satisfy Equation (7) a right eigenvector v of L must have either an eigenvalue of one or
be in the null-space of D.
There are infinitely many solutions satisfying Equation (7), since there are m·n equations and m·m variables in L. A general solution is given by:
L = D^T (DD^T)^{-1} D + L_⊥,     (8)
where L_⊥ indicates a component in L corresponding to the null-space of D, i.e. DL_⊥ = 0. We shall use these degrees of freedom to require a zero diagonal for L, thus avoiding autapses.
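One member of the family in Equation (8) can be built and checked directly. A minimal sketch; the diagonal-cancelling choice of L_⊥ below is one convenient option (it exists whenever no diagonal entry of D^T(DD^T)^{-1}D equals one, which holds generically for m > n), not the sparse choice constructed in Section 2.4:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 10
D = rng.standard_normal((n, m))

L0 = D.T @ np.linalg.solve(D @ D.T, D)    # D^T (D D^T)^{-1} D, so D L0 = D

# Columns of (I - L0) lie in null(D), so subtracting (I - L0) C for any C
# keeps D L = D; a diagonal C cancels the autapses:
c = np.diag(L0) / (1.0 - np.diag(L0))
L = L0 - (np.eye(m) - L0) @ np.diag(c)

print(np.allclose(D @ L, D))              # persistence condition, eq. (7)
print(np.allclose(np.diag(L), 0.0))       # no autapses
```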
Figure 1: Schematic network diagram and Mercedes-Benz example. Left: Network diagram. Middle: Directions of vectors in the MB example. Right: visualization of L
2.3 An example: the Mercedes-Benz frame
In order to present a more intuitive view of the concept of persistent percepts we consider the Mercedes-Benz frame [17]. This simple frame spans the R² plane with three frame elements: [0 1], [−√3/2 −1/2], [√3/2 −1/2]. In this case, the frame operator DD^T has a particularly simple form, being proportional to the identity matrix, indicating that the frame is tight. The
first term in the general form of L (Equation (8)) has a non-zero diagonal, which can be removed by adding L_⊥, a matrix with all its entries equal to one (times a scalar). Thus, L is:

        (  0  −1  −1 )
    L = ( −1   0  −1 )
        ( −1  −1   0 )
This seems a rather unlikely candidate matrix to support persistent percepts. However, consider starting out with the vector a_0 = [1 0 0], representing the point [0 1] on the plane; after convergence of the dynamics we have a = [2/3 −1/3 −1/3]. This new activity vector represents exactly the same point on the plane: Da = [0 1]. Thus, the percept, the point on the plane, remained constant despite changing neuronal activity. Note that some patterns of activity will remain strictly persistent themselves. These correspond to vectors which are a linear combination of the right eigenvectors of L with an eigenvalue of one. In this case, these eigenvectors are: v_1 = [−1 1 0], v_2 = [1/2 1/2 −1].
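This example can be reproduced by integrating Equation (3) directly; a minimal sketch:

```python
import numpy as np

D = np.array([[0.0, -np.sqrt(3) / 2, np.sqrt(3) / 2],
              [1.0, -0.5,           -0.5]])     # Mercedes-Benz frame
L = np.array([[ 0.0, -1.0, -1.0],
              [-1.0,  0.0, -1.0],
              [-1.0, -1.0,  0.0]])

a = np.array([1.0, 0.0, 0.0])   # initial activity, percept Da = [0, 1]
dt = 0.01
for _ in range(5000):           # Euler integration of a' = -a + L a
    a = a + dt * (-a + L @ a)

print(np.round(a, 3))           # [ 0.667 -0.333 -0.333]
print(np.round(D @ a, 6))       # [0. 1.]: the percept never moved
```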
2.4 The sparse REFIRE network
Which members of the family of REFIRE networks obeying equation (7) are most likely to model
cortical networks? In the cortex, the connectivity is sparse and the synaptic weights are distributed
exponentially [18, 19]. These measurements are consistent with minimizing cost proportional to
synaptic weight, such as for example their volume. Motivated by these observations, we choose
each column of L as a sparse representation of each individual dictionary element by every other
element. Define D_j = [d_1, d_2, ..., d_{j-1}, d_{j+1}, ..., d_m]. We shall denote the sparse approximation coefficients by α. Therefore:

    α_j* = argmin_{α_j ∈ R^{m-1}} ||d_j − D_j α_j||_2² + λ||α_j||_1     (9)

These are vectors in R^{m-1}; we now need to insert a zero in the position of the dictionary element that was extracted for each of these vectors. Denote by α̃_j the vector obtained by inserting a zero before the jth location of α_j, resulting in a vector in R^m. The connectivity of our model network is given by L = [α̃_1, α̃_2, ..., α̃_m] in R^{m×m}.
We call this form of L the sparse REFIRE network. Similar networks were previously constructed
on the raw data (or image patches) [20, 21], while sparse REFIRE networks reflect the relationship among dictionary elements. Previously, the dependencies between dictionary elements were
captured by tree-graphs [22, 23].
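For a given dictionary, the construction in Equation (9) amounts to one Lasso problem per column. A minimal sketch using scikit-learn in place of the SPAMS toolbox used in Section 3 (the two solve the same objective up to a scaling of λ, handled in the comment):

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_refire(D, lam):
    """Column j of L sparsely re-expresses dictionary element d_j by the
    other elements (eq. 9), with a zero inserted on the diagonal."""
    n, m = D.shape
    L = np.zeros((m, m))
    for j in range(m):
        others = np.delete(D, j, axis=1)
        # sklearn minimizes (1/2n)||y - Xw||^2 + alpha||w||_1, so
        # alpha = lam / (2n) matches ||d_j - D_j a||^2 + lam ||a||_1.
        fit = Lasso(alpha=lam / (2 * n), fit_intercept=False,
                    max_iter=10_000).fit(others, D[:, j])
        L[:, j] = np.insert(fit.coef_, j, 0.0)    # zero autapse
    return L

rng = np.random.default_rng(2)
D = rng.standard_normal((8, 32))
D /= np.linalg.norm(D, axis=0)                    # unit-norm elements
L = sparse_refire(D, lam=1e-3)
print(np.linalg.norm(D - D @ L) / np.linalg.norm(D))  # small mismatch
print((np.abs(L) > 1e-6).mean())                      # connection fraction
```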
3 Results
In this section, we apply our model to the primary visual cortex by modeling the receptive fields
following the approach of [1]. We study the properties of the resulting sparse REFIRE network and
compare them with experimentally established properties of cortical networks.
3.1 Constructing the sparse REFIRE network for visual cortex
We learn the sparse REFIRE network from a standard set of natural images [8]. We extract patches
of size 13x13 pixels. We use a set of 100,000 such patches distributed evenly across different natural
images to learn the model. Whitening was performed through PCA, after the DC component of each
patch was removed. The dimensionality was reduced from 169 to 84 dimensions. We learn a four
times over-complete dictionary, via the SPAMS online sparse approximation toolbox [24]. Figure 2
left shows the forward weights (columns of D) learned. As expected, the filters obtained are edge
detectors differing in scale, spatial location and orientation.
The sparse REFIRE network was then learned from the dictionary using the same toolbox. Parameter λ in equation (9) governs the tradeoff between sparsity and reconstruction fidelity, figure 2 right. We verified that the results presented in this study do not qualitatively change over a wide range of λ and chose the value of λ where the average probability of connection was 9%, in agreement with the experimental number of approximately 10%. For this choice the relative reconstruction mismatch was approximately 10^-3. The distribution of synaptic weights in the network, Figure 3 left, shows a
strong bias to zero valued connections and a heavier than gaussian tail as does the cortical data [25].
For an enlarged view of the network see Figure 7. From here on we consider that particular choice
when we refer to the sparse REFIRE network.
Remarkably, the real part of all eigenvalues is less than or equal to one, Figure 3 right, indicating
stability of network dynamics. Although equation (7) guarantees that n eigenvalues are equal to
one, it does not rule out the existence of eigenvalues with greater real part. We speculate that the
absence of such eigenvalues in the spectrum is due to the l1 term in equation (9), the minimization
of which could be viewed as a shrinkage of Gershgorin circles. We find that the connectivity learned
Figure 2: The sparse REFIRE network. Left: the patches corresponding to columns of D sorted by variance. Right: summed l1-norm of all columns of L (left y-axis, red), the reconstruction mismatch |D − DL|/|D| (right y-axis, blue) as a function of λ. Dashed line indicates the value of λ chosen for the sparse REFIRE network.
was asymmetric with substantial imaginary components in the eigenvalues, see Figure 3 right. In
general, the sparse REFIRE network is unlikely to be symmetric because the connection weights
between a pair of neurons are not decided based solely on the identity of the neurons in the pair but
are dependent on other connections of the same pre-synaptic neuron.
Figure 3: Properties of lateral connections. Left: distribution of lateral connectivity weights. Inset
shows a survival plot with logarithmic y-axis and same axes limits. Right: scatter plot of eigenvalues
of the lateral connectivity matrix. Note that there are many eigenvalues at real value one, imaginary
value zero. Histogram shown below plot
Numerical simulations of the dynamics of a recurrent network with connectivity matrix L confirm
that the percept remains stable during the network dynamics. We chose an image patch at random
and simulated the network dynamics. As can be seen in Figure 4 left, despite significant changes
in the activity of the neurons, the percept encoded by the network remained stable (PSNR between the original image and the image after dynamics lasting 100 neuronal time constants: 45.5 dB). The dynamics of the network desparsified the representation (Figure 4 right). Averaged across multiple patches,
the value of each coefficient in the sparse representation was 0.0704, while after the network dynamics this increased to 0.0752, though still below the value obtained for the frame coefficients
representation which was 0.0814.
3.2 Computational advantages of the sparse REFIRE network
In this section, we consider possible computational advantages for the de-coupling between the
sensory percept and its representation by neuronal activity. Specifically, we address a shortcoming
of the sparse representation, its lack of robustness [13]. Namely, the fact that stimuli that differ
only to a small degree might end up being represented with very different coefficients. Intuitively
speaking, this may occur when two (or more) dictionary elements compete for the same role in the
Figure 4: Evolution of neuronal activity in time. Left: activity of a subset of neurons over time. Top
shows the original percept (framed in black) and plotted left to right patches taken from consecutive
points in the dynamics. Right: scatter of the coefficients before and after 400 neuronal time constants
of the dynamics.
sparse representation. To arrive at a sparse approximation of the stimuli either one of the dictionary
elements could potentially be used, but due to the high cost of non-sparseness both of them together
are not likely to be chosen in a given representation. Thus, small changes in the image, as might
arise due to various noise sources, might cause one of the coefficients to be preferred over the other
in an essentially random fashion, potentially resulting in very different coefficient values for highly
similar images.
The dynamics of the sparse REFIRE network improve the robustness of the coefficient values in the
face of noise. In order to model this effect we extract a single patch and corrupt it repeatedly with i.i.d. 5% Gaussian noise. Figure 5 left shows two patches with similar orientation. Figure 5 middle shows
the values of these two coefficients for the sparse approximation taken across the different noise
repetitions. As can be clearly seen, only one or the other of the two coefficients is used, exemplifying
the competition described above. The resulting flickering in the coefficients exemplifies this lack of
robustness. Note that the true lack of robustness arises due to multicollinear relations between the
different dictionary elements. Here we restrict ourselves to two in the interests of clarity. Figure
5 right shows these coefficient values plotted one against the other in red along with the values of
the two coefficients following the model dynamics in blue. In the latter case, the coefficient values
between different repetitions remain fairly constant and the flickering representation as in Figure 5
middle is abolished.
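The competition can be reproduced in a toy setting with two nearly collinear atoms; a minimal sketch (an illustrative construction, not the image-patch experiment itself):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
d1 = np.array([1.0, 0.0])
d2 = np.array([np.cos(0.1), np.sin(0.1)])  # nearly collinear with d1
D = np.column_stack([d1, d2])
s = d1 + d2                                # stimulus between the two atoms

winners = []
for _ in range(100):
    noisy = s + 0.05 * rng.standard_normal(2)
    coef = Lasso(alpha=0.05, fit_intercept=False).fit(D, noisy).coef_
    winners.append(int(np.argmax(np.abs(coef))))

# The dominant coefficient flickers between the two atoms across trials:
print(np.bincount(winners, minlength=2))
```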
We further examined the utility of a more stable representation by training a Naive Bayes classifier to
discriminate between noisy versions of two patches. We corrupt the two patches with i.i.d noise and
train the classifier on 75% of the data while reserving the remaining data for testing generalization.
We train one of classifier on the sparse representation and the other on the representation following
the dynamics of the sparse REFIRE network. We find that the generalization of the classifier learned
following the dynamics was indeed higher, providing 92% accuracy, while the sparse coefficient
trained classifier scored 83% accuracy.
We then demonstrate the computational advantages of the sparse REFIRE network in a more realistic
scenario, encoding a set of patches extracted from an image by shifting the patch one pixel at a
time. Such a shift can be caused by fixational drift or slow self-movement. Figure 5 right top
shows a subset of the patches extracted in this fashion. For each of the patches we calculate the
sparse approximation coefficients and then determine the dot product between the representation
of consecutive patches. We then take the same coefficients, evolve them through the dynamics of
the sparse REFIRE network and compute the dot product between these new coefficients.
Figure 5 right bottom shows the normalized dot product, the value of the dot product between the
coefficients of two consecutive patches after the sparse REFIRE network dynamics, divided by the
same dot product between the original coefficients. As can be seen, for nearly all cases the ratio is
higher than one, indicating a smoother transition between the coefficients of the consecutive patches.
Figure 5: Sparse REFIRE network dynamics enhances the robustness of representation. Left: the
patches corresponding to two columns of D with similar tuning. Followed by the coefficient of
each of the patch in the representation of the different noisy image instantiations and a scatter plot
of the coefficient values before recurrent dynamics (red) and following (blue) recurrent dynamics.
Right: an example of the patches in the sliding frame (top) and the normalized dot product between
consecutive patches.
Figure 6: Dictionary clustering. Clusters of patches obtained by a three-way sparse REFIRE network
partitioning by normalized cut. Note the mainly horizontal orientation of the first set of patches and
the vertical orientation of the second.
The sparse REFIRE network encodes useful information regarding the relation between the different
dictionary elements. This can be probed by partitioning performed on the graph [20]. Figure 6 shows
the components of a normalized cut performed on the sparse REFIRE network. The left group
shows clear bias towards horizontal orientation tuning, the middle towards vertical. Thus, subspaces
can be learned directly from partitioning on the sparse REFIRE network, offering a complementary
approach to learning structured models directly from the data [26, 27].
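A similar partitioning can be sketched with off-the-shelf spectral clustering on a symmetrized affinity built from connection magnitudes (an illustrative choice of affinity; the figure itself uses normalized cut):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def partition_refire(L, n_clusters=3):
    """Cluster dictionary elements using |L| as a graph affinity."""
    A = np.abs(L)
    A = 0.5 * (A + A.T)        # symmetrize: L itself is asymmetric
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed",
                              random_state=0).fit_predict(A)

# e.g., on the L returned by sparse_refire above:
# labels = partition_refire(L, n_clusters=3)
```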
Finally, the sparse REFIRE network serves as an integrator of the sensory input. The eigenspace of the unit eigenvalue is a multi-dimensional generalization of the line attractor used to model persistent
activity [16]. However, unlike the persistent activity theory, which focuses on dynamics along the
line attractor, we emphasize the transient dynamics approaching the unitary eigenspace.
4 Discussion
This study makes a number of novel contributions. First, we propose and demonstrate that in an
over-complete representation certain types of network connectivity allow the percept, i.e. the stimulus represented by the network activity, to remain fixed in time despite changing neuronal activity.
Second, we propose the sparse REFIRE network as a biologically plausible model for cortical lateral
connections that enables such persistent percepts. Third, we point out that the ability to manipulate
activity without affecting the accuracy of representation can be exploited in order to achieve computational goals. As an example, we show that the sparse REFIRE network dynamics, though causing
the representation to be less sparse, alleviates the problem of representation non-robustness.
Although this study focused on sensory representation in the visual cortex, the framework can be
extended to other sensory modalities, motor cortex and, perhaps, even higher cognitive areas such
as prefrontal cortex or hippocampus.
Figure 7: sparse REFIRE network structure. Nodes are shown by a patch corresponding to its
feature vector. Arrows indicate connections, blue excitatory, red inhibitory. Plot organized to put
strongly connected nodes close in space. Only strongest connections shown in the interests of clarity.
Inset: Left: histogram of connectivity fraction by difference in feature orientation; red non-zero
connections, gray all connections. Right: zoomed in view.
The sparse REFIRE network model bears an important relation to the family of sparse subspace
models, which have been suggested to improve the robustness of sparse representations [26, 27]. We
have shown that subspaces can be learned directly from the graph by standard graph partitioning
algorithms. The optimal way to leverage the information embodied in the sparse REFIRE network
to learn subspace-like models is a subject of ongoing work with promising results as is the study of
different matrices L that allow persistent percepts.
Acknowledgments
We would like to thank Anatoli Grinshpan, Tao Hu, Alexei Koulakov, Bruno Olshausen and Lav
Varshney for fruitful discussions and Frank Midgley for assistance with preparing figure 7.
References
[1] B. A. Olshausen and D. J. Field, "Emergence of simple-cell receptive field properties by learning a sparse code for natural images," Nature, vol. 381, pp. 607-9, Jun 1996.
[2] M. Rehn and F. Sommer, "A network that uses few active neurones to code visual input predicts the diverse shapes of cortical receptive fields," Journal of Computational Neuroscience, vol. 22, pp. 135-146, 2007. 10.1007/s10827-006-0003-9.
[3] P. J. Garrigues and B. A. Olshausen, "Learning horizontal connections in a sparse coding model of natural images," Advances in Neural Information Processing Systems, vol. 20, pp. 505-512, 2008.
[4] O. Shriki, H. Sompolinsky, and D. D. Lee, "An information maximization approach to overcomplete and recurrent representations," Advances in Neural Information Processing Systems, vol. 12, pp. 87-93, 2000.
[5] D. B. Chklovskii and A. A. Koulakov, "Maps in the brain: What can we learn from them?," Annual Review of Neuroscience, vol. 27, no. 1, pp. 369-392, 2004.
[6] G. Major and D. Tank, "Persistent neural activity: prevalence and mechanisms," Current Opinion in Neurobiology, vol. 14, no. 6, pp. 675-684, 2004.
[7] M. Goldman, "Memory without feedback in a neural network," Neuron, vol. 61, no. 4, pp. 621-634, 2009.
[8] A. Hyvarinen, J. Hurri, and P. O. Hoyer, Natural Image Statistics: A Probabilistic Approach to Early Computational Vision. Springer Publishing Company, Incorporated, 2009.
[9] O. Christensen, An Introduction to Frames and Riesz Bases. Birkhäuser, 2003.
[10] S. Mallat and Z. Zhang, "Matching pursuits with time-frequency dictionaries," Signal Processing, IEEE Transactions on, vol. 41, pp. 3397-3415, Dec 1993.
[11] S. Chen, D. Donoho, and M. Saunders, "Atomic decomposition by basis pursuit," SIAM Review, vol. 43, no. 1, pp. 129-159, 2001.
[12] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society (Series B), vol. 58, pp. 267-288, 1996.
[13] C. J. Rozell, D. H. Johnson, R. G. Baraniuk, and B. A. Olshausen, "Sparse coding via thresholding and local competition in neural circuits," Neural Comput, vol. 20, pp. 2526-63, 2008.
[14] V. Braitenberg and A. Schüz, Cortex: Statistics and Geometry of Neuronal Connectivity. Berlin, Germany: Springer, 1998. ISBN: 3-540-63816-4.
[15] S. Cannon, D. Robinson, and S. Shamma, "A proposed neural network for the integrator of the oculomotor system," Biological Cybernetics, vol. 49, no. 2, pp. 127-136, 1983.
[16] H. Seung, "How the brain keeps the eyes still," Proceedings of the National Academy of Sciences, vol. 93, no. 23, p. 13339, 1996.
[17] J. Kovačević and A. Chebira, "An introduction to frames," Found. Trends Signal Process., vol. 2, no. 1, pp. 1-94, 2008.
[18] Y. Mishchenko, T. Hu, J. Spacek, J. Mendenhall, K. M. Harris, and D. B. Chklovskii, "Ultrastructural analysis of hippocampal neuropil from the connectomics perspective," Neuron, vol. 67, no. 6, pp. 1009-1020, 2010.
[19] L. R. Varshney, P. J. Sjöström, and D. B. Chklovskii, "Optimal information storage in noisy synapses under resource constraints," Neuron, vol. 52, no. 3, pp. 409-423, 2006.
[20] B. Cheng, J. Yang, S. Yan, Y. Fu, and T. Huang, "Learning with L1-Graph for Image Analysis," IEEE Transactions on Image Processing, p. 1, 2010.
[21] E. Elhamifar and R. Vidal, "Sparse subspace clustering," in CVPR, pp. 2790-2797, 2009.
[22] R. Jenatton, J. Mairal, G. Obozinski, and F. Bach, "Proximal Methods for Sparse Hierarchical Dictionary Learning," Proc. ICML, 2010.
[23] D. Zoran and Y. Weiss, "The 'Tree-Dependent Components' of Natural Images are Edge Filters," Advances in Neural Information Processing Systems, 2009.
[24] J. Mairal, F. Bach, J. Ponce, and G. Sapiro, "Online learning for matrix factorization and sparse coding," Journal of Machine Learning Research, vol. 11, pp. 19-60, 2010.
[25] S. Song, P. J. Sjöström, M. Reigl, S. Nelson, and D. B. Chklovskii, "Highly nonrandom features of synaptic connectivity in local cortical circuits," PLoS Biol, vol. 3, p. e68, Mar 2005.
[26] G. Yu, G. Sapiro, and S. Mallat, "Image modeling and enhancement via structured sparse model selection," 2010.
[27] K. Kavukcuoglu, M. Ranzato, R. Fergus, and Y. LeCun, "Learning invariant features through topographic filter maps," in Proc. International Conference on Computer Vision and Pattern Recognition (CVPR'09), IEEE, 2009.
learn:5 nature:1 neuropil:1 necessarily:1 constructing:1 da:5 did:1 main:1 linearly:1 rh:1 arrow:1 noise:5 arise:1 mediate:1 scored:1 mishchenko:1 complementary:1 neuronal:12 enlarged:1 fashion:2 slow:1 position:1 obeying:1 comput:1 candidate:1 third:1 remained:2 inset:2 recurrently:1 r2:1 survival:2 dl:3 adding:1 elhamifar:1 sparseness:1 chen:1 logarithmic:1 likely:2 infinitely:1 visual:5 expressed:2 lav:1 scalar:1 springer:2 corresponds:2 extracted:3 harris:1 obozinski:1 marked:2 presentation:1 identity:2 viewed:1 sorted:1 towards:2 goal:1 flickering:2 replace:1 absence:2 donoho:1 change:10 experimentally:1 specifically:1 birkhauser:1 acting:1 called:2 discriminate:1 experimental:1 la:3 exception:1 indicating:3 nx1:1 support:4 latter:1 arises:1 avoiding:1 ongoing:1 outgoing:1 d1:1 biol:1 correlated:1 |
3,337 | 4,021 | Generalized roof duality and bisubmodular functions
Vladimir Kolmogorov
Department of Computer Science
University College London, UK
[email protected]
Abstract
Consider a convex relaxation f̂ of a pseudo-boolean function f. We say that the relaxation is totally half-integral if f̂(x) is a polyhedral function with half-integral extreme points x, and this property is preserved after adding an arbitrary combination of constraints of the form x_i = x_j, x_i = 1 − x_j, and x_i = γ where γ ∈ {0, 1, 1/2} is a constant. A well-known example is the roof duality relaxation for quadratic pseudo-boolean functions f. We argue that total half-integrality is a natural requirement for generalizations of roof duality to arbitrary pseudo-boolean functions.
Our contributions are as follows. First, we provide a complete characterization of totally half-integral relaxations f̂ by establishing a one-to-one correspondence with bisubmodular functions. Second, we give a new characterization of bisubmodular functions. Finally, we show some relationships between general totally half-integral relaxations and relaxations based on the roof duality.
1 Introduction
Let V be a set of |V| = n nodes and B ⊂ K_{1/2} ⊂ K be the following sets:
    B = {0, 1}^V        K_{1/2} = {0, 1/2, 1}^V        K = [0, 1]^V
A function f : B → R is called pseudo-boolean. In this paper we consider convex relaxations f̂ : K → R of f which we call totally half-integral:
Definition 1. (a) Function f̂ : P → R where P ⊆ K is called half-integral if it is a convex polyhedral function such that all extreme points of the epigraph {(x, z) | x ∈ P, z ≥ f̂(x)} have the form (x, f̂(x)) where x ∈ K_{1/2}. (b) Function f̂ : K → R is called totally half-integral if restrictions f̂ : P → R are half-integral for all subsets P ⊆ K obtained from K by adding an arbitrary combination of constraints of the form x_i = x_j, x_i = x̄_j, and x_i = γ for points x ∈ K. Here i, j denote nodes in V, γ denotes a constant in {0, 1, 1/2}, and z̄ ≡ 1 − z.
A well-known example of a totally half-integral relaxation is the roof duality relaxation for quadratic pseudo-boolean functions f(x) = Σ_i c_i x_i + Σ_{(i,j)} c_{ij} x_i x_j studied by Hammer, Hansen and Simeone [13]. It is known to possess the persistency property: for any half-integral minimizer x̂ ∈ arg min f̂(x) there exists a minimizer x ∈ arg min f(x) such that x_i = x̂_i for all nodes i with integral component x̂_i. This property is quite important in practice as it allows to reduce the size of the minimization problem when x̂ ≠ (1/2, ..., 1/2). The set of nodes with a guaranteed optimal solution can sometimes be increased further using the PROBE technique [6], which also relies on persistency.
The goal of this paper is to generalize the roof duality approach to arbitrary pseudo-boolean functions. The total half-integrality is a very natural requirement of such generalizations, as discussed
later in this section. As we prove, total half-integrality implies persistency.
We provide a complete characterization of totally half-integral relaxations. Namely, we prove in section 2 that if f̂ : K → R is totally half-integral then its restriction to K_{1/2} is a bisubmodular function, and conversely any bisubmodular function can be extended to a totally half-integral relaxation.
Definition 2. Function f : K_{1/2} → R is called bisubmodular if
    f(x ⊓ y) + f(x ⊔ y) ≤ f(x) + f(y)    ∀x, y ∈ K_{1/2}    (1)
where binary operators ⊓, ⊔ : K_{1/2} × K_{1/2} → K_{1/2} are defined component-wise as follows:
where binary operators u, t : K1/2 ? K1/2 ? K1/2 are defined component-wise as follows:
u
0
0
0
1
2
1
2
1
2
1
1
2
1
2
1
2
1
2
1
t
0
1
2
1
1
2
1
2
0
0
0
1
2
1
2
0
1
2
1
1
1
1
2
1
1
(2)
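To make the tables concrete, here is a small brute-force checker. This is our own illustrative sketch, not code from the paper: it hard-codes the componentwise rules of (2) (equal labels are kept by both operations, 1/2 absorbs under ⊓, 1/2 is neutral under ⊔, and 0 with 1 gives 1/2) and tests inequality (1) exhaustively for small n.

from itertools import product

def meet(a, b):
    # x ⊓ y per table (2): equal labels are kept; any disagreement yields 1/2
    return a if a == b else 0.5

def join(a, b):
    # x ⊔ y per table (2): 1/2 is neutral, 0 ⊔ 1 = 1/2
    if a == b:
        return a
    if {a, b} == {0, 1}:
        return 0.5
    return a if b == 0.5 else b

def is_bisubmodular(f, n, tol=1e-9):
    # Exhaustively verify inequality (1) over K_{1/2} = {0, 1/2, 1}^n.
    labels = (0, 0.5, 1)
    for x in product(labels, repeat=n):
        for y in product(labels, repeat=n):
            lo = tuple(meet(a, b) for a, b in zip(x, y))
            hi = tuple(join(a, b) for a, b in zip(x, y))
            if f(lo) + f(hi) > f(x) + f(y) + tol:
                return False
    return True

# Linear functions attain (1) with equality, since a ⊓ b + a ⊔ b = a + b
# holds for every pair of labels; this makes them a convenient sanity check.
print(is_bisubmodular(lambda x: 3 * x[0] - 2 * x[1], 2))   # True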
As our second contribution, we give a new characterization of bisubmodular functions (section 3).
Using this characterization, we then prove several results showing links with the roof duality relaxation (section 4).
1.1 Applications
This work has been motivated by computer vision applications. A fundamental task in vision is
to infer pixel properties from observed data. These properties can be the type of object to which
the pixel belongs, distance to the camera, pixel intensity before being corrupted by noise, etc. The
popular MAP-MRF approach casts the inference task as an energy minimization problem with an objective function of the form f(x) = Σ_C f_C(x), where C ⊆ V are subsets of neighboring pixels of small cardinality (|C| = 1, 2, 3, ...) and terms f_C(x) depend only on labels of pixels in C.
For some vision applications the roof duality approach [13] has shown a good performance [30,
32, 23, 24, 33, 1, 16, 17].1 Functions with higher-order terms are steadily gaining popularity in
computer vision [31, 33, 1, 16, 17]; it is generally accepted that they correspond to better image
models. Therefore, studying generalizations of roof duality to arbitrary pseudo-boolean functions
is an important task. In such generalizations the total half-integrality property is essential. Indeed,
in practice, the relaxation f̂ is obtained as the sum of relaxations f̂_C constructed for each term independently. Some of these terms can be c|x_i − x_j| and c|x_i + x_j − 1|. If c is sufficiently large, then applying the roof duality relaxation to these terms would yield constraints x_i = x_j and x̄_i = x_j present in the definition of total half-integrality. Constraints x_i = γ ∈ {0, 1, 1/2} can also be simulated via the roof duality, e.g. x_i = x_j, x̄_i = x_j for the same pair of nodes i, j implies x_i = x_j = 1/2.
1.2 Related work
Half-integrality There is a vast literature on using half-integral relaxations for various combinatorial optimization problems. In many cases these relaxations lead to 2-approximation algorithms.
Below we list a few representative papers.
The earliest work recognizing half-integrality of polytopes with certain pairwise constraints was
perhaps by Balinksi [3], while the persistency property goes back to Nemhauser and Trotter [28]
who considered the vertex cover problem. Hammer, Hansen and Simeone [13] established that these
properties hold for the roof duality relaxation for quadratic pseudo-boolean functions. Their work
was generalized to arbitrary pseudo-boolean functions by Lu and Williams [25]. (The relaxation
in [25] relied on converting function f to a multinomial representation; see section 4 for more
details.) Hochbaum [14, 15] gave a class of integer problems with half-integral relaxations. Very
recently, Iwata and Nagano [18] formulated a half-integral relaxation for the problem of minimizing
a submodular function f(x) under constraints of the form x_i + x_j ≥ 1.
¹ In many vision problems variables x_i are not binary. However, such problems are often reduced to
a sequence of binary minimization problems using iterative move-making algorithms, e.g. using expansion
moves [9] or fusion moves [23, 24, 33, 17].
In computer vision, several researchers considered the following scheme: given a function f(x) = Σ_C f_C(x), convert terms f_C(x) to quadratic pseudo-boolean functions by introducing auxiliary binary variables, and then apply the roof duality relaxation to the latter. Woodford et al. [33] used this technique for the stereo reconstruction problem, while Ali et al. [1] and Ishikawa [16] explored different conversions to quadratic functions.
To the best of our knowledge, all examples of totally half-integral relaxations proposed so far belong
to the class of submodular relaxations, which is defined in section 4. They form a subclass of more
general bisubmodular relaxations.
Bisubmodularity Bisubmodular functions were introduced by Chandrasekaran and Kabadi as rank functions of (poly-)pseudomatroids [10, 19]. Independently, Bouchet [7] introduced the concept of Δ-matroids which is equivalent to pseudomatroids. Bisubmodular functions and their generalizations have also been considered by Qi [29], Nakamura [27], Bouchet and Cunningham [8] and Fujishige [11]. The notion of the Lovász extension of a bisubmodular function introduced by Qi [29] will be of particular importance for our work (see next section).
It has been shown that some submodular minimization algorithms can be generalized to bisubmodular functions. Qi [29] showed the applicability of the ellipsoid method. A weakly polynomial combinatorial algorithm for minimizing bisubmodular functions was given by Fujishige and Iwata [12],
and a strongly polynomial version was given by McCormick and Fujishige [26].
Recently, we introduced strongly and weakly tree-submodular functions [22] that generalize bisubmodular functions.
2 Total half-integrality and bisubmodularity
The first result of this paper is the following theorem.
Theorem 3. If f̂ : K → R is a totally half-integral relaxation then its restriction to K_{1/2} is bisubmodular. Conversely, if function f : K_{1/2} → R is bisubmodular then it has a unique totally half-integral extension f̂ : K → R.
This section is devoted to the proof of theorem 3. Denote L = [−1, 1]^V, L_{1/2} = {−1, 0, 1}^V. It will be convenient to work with functions ĥ : L → R and h : L_{1/2} → R obtained from f̂ and f via a linear change of coordinates x_i ↦ 2x_i − 1. Under this change totally half-integral relaxations are transformed to totally integral relaxations:
Definition 4. Let ĥ : L → R be a function of n variables. (a) ĥ is called integral if it is a convex polyhedral function such that all extreme points of the epigraph {(x, z) | x ∈ L, z ≥ ĥ(x)} have the form (x, ĥ(x)) where x ∈ L_{1/2}. (b) ĥ is called totally integral if it is integral and for an arbitrary ordering of nodes the following functions of n − 1 variables (if n > 1) are totally integral:
    ĥ′(x_1, ..., x_{n−1}) = ĥ(x_1, ..., x_{n−1}, x_{n−1})
    ĥ′(x_1, ..., x_{n−1}) = ĥ(x_1, ..., x_{n−1}, −x_{n−1})
    ĥ′(x_1, ..., x_{n−1}) = ĥ(x_1, ..., x_{n−1}, γ)    for any constant γ ∈ {−1, 0, 1}
The definition of a bisubmodular function is adapted as follows: function h : L_{1/2} → R is bisubmodular if inequality (1) holds for all x, y ∈ L_{1/2} where operations ⊓, ⊔ are defined by tables (2) after replacements 0 ↦ −1, 1/2 ↦ 0, 1 ↦ 1. To prove theorem 3, it suffices to establish a link between totally integral relaxations ĥ : L → R and bisubmodular functions h : L_{1/2} → R. We can assume without loss of generality that ĥ(0) = h(0) = 0, since adding a constant to the functions does not affect the theorem.
A pair ω = (π, σ) where π : V → {1, ..., n} is a permutation of V and σ ∈ {−1, 1}^V will be called a signed ordering. Let us rename nodes in V so that π(i) = i. To each signed ordering ω we associate labelings x^0, x^1, ..., x^n ∈ L_{1/2} as follows:
    x^0 = (0, 0, ..., 0)    x^1 = (σ_1, 0, ..., 0)    ...    x^n = (σ_1, σ_2, ..., σ_n)    (3)
where nodes are ordered according to π.
Consider function h : L_{1/2} → R with h(0) = 0. Its Lovász extension ĥ : R^V → R is defined in the following way [29]. Given a vector x ∈ R^V, select a signed ordering ω = (π, σ) as follows: (i) choose π so that values |x_i|, i ∈ V are non-increasing, and rename nodes accordingly so that |x_1| ≥ ... ≥ |x_n|; (ii) if x_i ≠ 0 set σ_i = sign(x_i), otherwise choose σ_i ∈ {−1, 1} arbitrarily. It is not difficult to check that
    x = Σ_{i=1}^n λ_i x^i    (4a)
where labelings x^i are defined in (3) (with respect to the selected signed ordering) and λ_i = |x_i| − |x_{i+1}| for i = 1, ..., n − 1, λ_n = |x_n|. The value of the Lovász extension is now defined as
    ĥ(x) = Σ_{i=1}^n λ_i h(x^i)    (4b)
Theorem 5 ([29]). Function h is bisubmodular if and only if its Lovász extension ĥ is convex on L.²
Let L_ω be the set of vectors in L for which signed ordering ω = (π, σ) can be selected. Clearly, L_ω = {x ∈ L | |x_1| ≥ ... ≥ |x_n|, x_i σ_i ≥ 0 ∀i ∈ V}. It is easy to check that L_ω is the convex hull of the n + 1 points (3). Equations (4) imply that ĥ is linear on L_ω and coincides with h in each corner x^0, ..., x^n.
Lemma 6. Suppose function ĥ : L → R is totally integral. Then ĥ is linear on the simplex L_ω for each signed ordering ω = (π, σ).
Proof. We use induction on n = |V|. For n = 1 the claim is straightforward; suppose that n ≥ 2. Consider signed ordering ω = (π, σ). We need to prove that ĥ is linear on the boundary ∂L_ω; this will imply that ĥ is linear on L_ω, since otherwise ĥ would have an extreme point in the interior L_ω \ ∂L_ω which cannot be integral.
Let X = {x^0, ..., x^n} be the set of extreme points of L_ω defined by (3). The boundary ∂L_ω is the union of n + 1 facets L_ω^0, ..., L_ω^n where L_ω^i is the convex hull of points in X \ {x^i}. Let us prove that ĥ is linear on L_ω^0. All points x ∈ X \ {x^0} satisfy x_1 = σ_1, therefore L_ω^0 = {x ∈ L_ω | x_1 = σ_1}. Consider the function of n − 1 variables ĥ′(x_2, ..., x_n) = ĥ(σ_1, x_2, ..., x_n), and let L′_ω^0 be the projection of L_ω^0 to R^{V\{1}}. By the induction hypothesis ĥ′ is linear on L′_ω^0, and thus ĥ is linear on L_ω^0.
The fact that ĥ is linear on other facets can be proved in a similar way. Note that for i = 2, ..., n − 1 there holds L_ω^i = {x ∈ L_ω | x_i = σ_{i−1} σ_i x_{i−1}}, and for i = n we have L_ω^n = {x ∈ L_ω | x_n = 0}.
Corollary 7. Suppose function ĥ : L → R with ĥ(0) = 0 is totally integral. Let h be the restriction of ĥ to L_{1/2} and h̃ be the Lovász extension of h. Then ĥ and h̃ coincide on L.
Theorem 5 and corollary 7 imply the first part of theorem 3. The second part will follow from
Lemma 8. If h : L_{1/2} → R with h(0) = 0 is bisubmodular then its Lovász extension ĥ : L → R is totally integral.
² Note, Qi formulates this result slightly differently: ĥ is assumed to be convex on R^V rather than on L. However, it is easy to see that convexity of ĥ on L implies convexity of ĥ on R^V. Indeed, it can be checked that ĥ is positively homogeneous, i.e. ĥ(γx) = γ ĥ(x) for any γ ≥ 0, x ∈ R^V. Therefore, for any x, y ∈ R^V and α, β ≥ 0 with α + β = 1 there holds
    ĥ(αx + βy) = (1/ε) ĥ(εαx + εβy) ≤ (α/ε) ĥ(εx) + (β/ε) ĥ(εy) = α ĥ(x) + β ĥ(y)
where the inequality in the middle follows from convexity of ĥ on L, assuming that ε is a sufficiently small constant.
Proof. We use induction on n = |V|. For n = 1 the claim is straightforward; suppose that n ≥ 2. By theorem 5, ĥ is convex on L. Function ĥ is integral since it is linear on each simplex L_ω and vertices of L_ω belong to L_{1/2}. It remains to show that the functions ĥ′ considered in definition 4 are totally integral. Consider the following functions h′ : {−1, 0, 1}^{V\{n}} → R:
    h′(x_1, ..., x_{n−1}) = h(x_1, ..., x_{n−1}, x_{n−1})
    h′(x_1, ..., x_{n−1}) = h(x_1, ..., x_{n−1}, −x_{n−1})
    h′(x_1, ..., x_{n−1}) = h(x_1, ..., x_{n−1}, γ),    γ ∈ {−1, 0, 1}
It can be checked that these functions are bisubmodular, and their Lovász extensions coincide with the respective functions ĥ′ used in definition 4. The claim now follows from the induction hypothesis.
3 A new characterization of bisubmodularity
In this section we give an alternative definition of bisubmodularity; it will be helpful later for describing a relationship to the roof duality. As is often done for bisubmodular functions, we will encode each half-integral value x_i ∈ {0, 1, 1/2} via two binary variables (u_i, u_{i′}) according to the following rules:
    0 ↦ (0, 1)        1 ↦ (1, 0)        1/2 ↦ (0, 0)
Thus, labelings in K_{1/2} will be represented via labelings in the set
    X° = {u ∈ {0, 1}^V̄ | (u_i, u_{i′}) ≠ (1, 1) ∀i ∈ V}
where V̄ = {i, i′ | i ∈ V} is a set with 2n nodes. The node i′ for i ∈ V is called the "mate" of i; intuitively, variable u_{i′} corresponds to the complement of u_i. We define (i′)′ = i for i ∈ V. Labelings in X° will be denoted either by a single letter, e.g. u or v, or by a pair of letters, e.g. (x, y). In the latter case we assume that the two components correspond to labelings of V and V̄ \ V, respectively, and the order of variables in both components match. Using this convention, the one-to-one mapping X° → K_{1/2} can be written as (x, y) ↦ (x + y)/2. Accordingly, instead of
function f : K_{1/2} → R we will work with the function g : X° → R defined by
    g(x, y) = f((x + y)/2)    (5)
Note that the set of integer labelings B ⊂ K_{1/2} corresponds to the set X• = {u ∈ X° | (u_i, u_{i′}) ≠ (0, 0)}, so function g : X° → R can be viewed as a discrete relaxation of function g : X• → R.
Definition 9. Function f : X° → R is called bisubmodular if
    f(u ⊓ v) + f(u ⊔ v) ≤ f(u) + f(v)    ∀u, v ∈ X°    (6)
where u ⊓ v = u ∧ v, u ⊔ v = REDUCE(u ∨ v) and REDUCE(w) is the labeling obtained from w by changing labels (w_i, w_{i′}) from (1, 1) to (0, 0) for all i ∈ V.
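Under this encoding the operations become plain bitwise AND and OR followed by the reduction step. A minimal sketch (ours; the pairing of each node i with its mate i′ is passed explicitly):

def meet_bin(u, v):
    # u ⊓ v = componentwise AND
    return tuple(a & b for a, b in zip(u, v))

def reduce_labeling(w, pairs):
    # REDUCE: reset every pair (w_i, w_i') = (1, 1) to (0, 0)
    w = list(w)
    for i, ip in pairs:
        if w[i] == 1 and w[ip] == 1:
            w[i] = w[ip] = 0
    return tuple(w)

def join_bin(u, v, pairs):
    # u ⊔ v = REDUCE(u OR v)
    return reduce_labeling(tuple(a | b for a, b in zip(u, v)), pairs)

# One node with its mate: 0 encodes as (0,1), 1 as (1,0); then 0 ⊔ 1 = 1/2 = (0,0),
# matching the half-integral tables (2).
pairs = [(0, 1)]
print(join_bin((0, 1), (1, 0), pairs))   # (0, 0)
print(meet_bin((0, 1), (1, 0)))          # (0, 0)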
To describe a new characterization, we need to introduce some additional notation. We denote X = {0, 1}^V̄ to be the set of all binary labelings of V̄. For a labeling u ∈ X, define the labeling ū by ū_i = 1 − u_{i′}. Labels (u_i, u_{i′}) are then transformed according to the rules
    (0, 1) ↦ (0, 1)    (1, 0) ↦ (1, 0)    (0, 0) ↦ (1, 1)    (1, 1) ↦ (0, 0)    (7)
Equivalently, this mapping can be written as (x, y)‾ = (ȳ, x̄). Note that applying the bar twice gives back u, and (u ∧ v)‾ = ū ∨ v̄, (u ∨ v)‾ = ū ∧ v̄ for u, v ∈ X. Next, we define sets
    X° = {u ∈ X | u ≤ ū} = {u ∈ X | (u_i, u_{i′}) ≠ (1, 1) ∀i ∈ V}
    X+ = {u ∈ X | u ≥ ū} = {u ∈ X | (u_i, u_{i′}) ≠ (0, 0) ∀i ∈ V}
    X• = {u ∈ X | u = ū} = {u ∈ X | (u_i, u_{i′}) ∈ {(0, 1), (1, 0)} ∀i ∈ V} = X° ∩ X+
    X∨ = X° ∪ X+
Clearly, u ∈ X° if and only if ū ∈ X+. Also, any function g : X° → R can be uniquely extended to a function g : X∨ → R so that the following condition holds:
    g(ū) = g(u)    ∀u ∈ X°    (8)
Proposition 10. Let g : X∨ → R be a function satisfying (8). The following conditions are equivalent:
(a) g is bisubmodular, i.e. it satisfies (6).
(b) g satisfies the following inequalities:
    g(u ∧ v) + g(u ∨ v) ≤ g(u) + g(v)    if u, v, u ∧ v, u ∨ v ∈ X∨    (9)
(c) g satisfies those inequalities in (6) for which u = w ∨ e^i, v = w ∨ e^j where w = u ∧ v and i, j are distinct nodes in V̄ with w_i = w_j = 0. Here e^k for node k ∈ V̄ denotes the labeling in X with e^k_k = 1 and e^k_{k′} = 0 for k′ ∈ V̄ \ {k}.
(d) g satisfies those inequalities in (9) for which u = w ∨ e^i, v = w ∨ e^j where w = u ∧ v and i, j are distinct nodes in V̄ with w_i = w_j = 0.
A proof is given in [20]. Note that an equivalent of characterization (c) was given by Ando et al. [2]; we state it here for completeness.
Remark 1 In order to compare characterizations (b, d) to existing characterizations (a, c), we need to analyze the sets of inequalities in (b, d) modulo eq. (8), i.e. after replacing terms g(w), w ∈ X+ with g(w̄). It can be seen that the inequalities in (a) are neither a subset nor a superset of those in (b)³, so (b) is a new characterization. It is also possible to show that from this point of view (c) and (d) are equivalent.
4 Submodular relaxations and roof duality
Consider a submodular function g : X → R satisfying the following "symmetry" condition:
    g(ū) = g(u)    ∀u ∈ X    (10)
We call such a function g a submodular relaxation of the function f(x) = g(x, x̄). Clearly, it satisfies the conditions of proposition 10, so g is also a bisubmodular relaxation of f. Furthermore, minimizing g is equivalent to minimizing its restriction g : X° → R; indeed, if u ∈ X is a minimizer of g then so are ū and u ∧ ū ∈ X°.
In this section we will do the following: (i) prove that any pseudo-boolean function f : B → R has a submodular relaxation g : X → R; (ii) show that the roof duality relaxation for quadratic pseudo-boolean functions is a submodular relaxation, and it dominates all other bisubmodular relaxations; (iii) show that for non-quadratic pseudo-boolean functions bisubmodular relaxations can be tighter than submodular ones; (iv) prove that, similar to the roof duality relaxation, bisubmodular relaxations possess the persistency property.
Review of roof duality Consider a quadratic pseudo-boolean function f : B → R:
    f(x) = Σ_{i∈V} f_i(x_i) + Σ_{(i,j)∈E} f_{ij}(x_i, x_j)    (11)
where (V, E) is an undirected graph and x_i ∈ {0, 1} for i ∈ V are binary variables. Hammer, Hansen and Simeone [13] formulated several linear programming relaxations of this function and
³ Denote u = (1 0 1 0 / 0 0 0 0) and v = (0 1 0 0 / 0 0 1 0), where the top and bottom rows correspond to the labelings of V and V̄ \ V respectively, with |V| = 4. Plugging pair (u, v) into (6) gives the following inequality:
    g(00 00 00 00) + g(10 10 00 00) ≤ g(10 00 10 00) + g(00 10 01 00)
This inequality is a part of (a), but it is not present in (b): pairs (u, v) and (ū, v̄) do not satisfy the RHS of (9), while pairs (u, v̄) and (ū, v) give a different inequality:
    g(10 00 00 00) + g(00 10 00 00) ≤ g(10 00 10 00) + g(00 10 01 00)
where we used condition (8). Conversely, the second inequality is a part of (b) but it is not present in (a).
showed their equivalence. One of these formulations was called a roof dual. An efficient maxflow-based method for solving the roof duality relaxation was given by Hammer, Boros and Sun [5, 4].
We will rely on this algorithmic description of the roof duality approach [4]. The method's idea can be summarized as follows. Each variable x_i is replaced with two binary variables u_i and u_{i′} corresponding to x_i and 1 − x_i respectively. The new set of nodes is V̄ = {i, i′ | i ∈ V}. Next, function f is transformed to a function g : X → R by replacing each term according to the following rules:
    f_i(x_i) ↦ (1/2) [f_i(u_i) + f_i(1 − u_{i′})]    (12a)
    f_{ij}(x_i, x_j) ↦ (1/2) [f_{ij}(u_i, u_j) + f_{ij}(1 − u_{i′}, 1 − u_{j′})]    if f_{ij}(·, ·) is submodular    (12b)
    f_{ij}(x_i, x_j) ↦ (1/2) [f_{ij}(u_i, 1 − u_{j′}) + f_{ij}(1 − u_{i′}, u_j)]    if f_{ij}(·, ·) is not submodular    (12c)
The result g is a submodular quadratic pseudo-boolean function, so it can be minimized via a maxflow algorithm. If u ∈ X is a minimizer of g then the roof duality relaxation has a minimizer x̂ with x̂_i = (1/2)(u_i + 1 − u_{i′}) [4].
It is easy to check that g(u) = g(ū) for all u ∈ X, therefore g is a submodular relaxation. Also, f and g are equivalent when u_{i′} = 1 − u_i for all i ∈ V, i.e.
    g(x, x̄) = f(x)    ∀x ∈ B    (13)
Invariance to variable flipping Suppose that g is a (bi-)submodular relaxation of a function f : B → R. Let i be a fixed node in V, and consider function f′(x) obtained from f(x) by the change of coordinates x_i ↦ x̄_i and function g′(u) obtained from g(u) by swapping variables u_i and u_{i′}. It is easy to check that g′ is a (bi-)submodular relaxation of f′. Furthermore, if f is a quadratic pseudo-boolean function and g is its submodular relaxation constructed by the roof duality approach, then applying the roof duality approach to f′ yields function g′. We will sometimes use such a "flipping" operation for reducing the number of considered cases.
Conversion to roof duality Let us now consider a non-quadratic pseudo-boolean function f : B → R. Several papers [33, 1, 16] proposed the following scheme: (1) Convert f to a quadratic pseudo-boolean function f̃ by introducing k auxiliary binary variables so that f(x) = min_{α∈{0,1}^k} f̃(x, α) for all labelings x ∈ B. (2) Construct a submodular relaxation g̃(x, α, y, β) of f̃ by applying the roof duality relaxation to f̃; then
    g̃(x, α, y, β) = g̃(ȳ, β̄, x̄, ᾱ),    g̃(x, α, x̄, ᾱ) = f̃(x, α)    ∀x, y ∈ B, α, β ∈ {0, 1}^k
(3) Obtain function g by minimizing out the auxiliary variables: g(x, y) = min_{α,β∈{0,1}^k} g̃(x, α, y, β).
One can check that g(x, y) = g(ȳ, x̄), so g is a submodular relaxation⁴. In general, however, it may not be a relaxation of function f, i.e. (13) may not hold; we are only guaranteed to have g(x, x̄) ≤ f(x) for all labelings x ∈ B.
Existence of submodular relaxations It is easy to check that if f : B → R is submodular then the function g(x, y) = (1/2)[f(x) + f(ȳ)] is a submodular relaxation of f.⁵ Thus, monomials of the form c ∏_{i∈A} x_i where c ≤ 0 and A ⊆ V have submodular relaxations. Using the "flipping" operation x_i ↦ x̄_i, we conclude that submodular relaxations also exist for monomials of the form
⁴ It is well-known that minimizing variables out preserves submodularity. Indeed, suppose that h(x) = min_α ĥ(x, α) where ĥ is a submodular function. Then h is also submodular since
    h(x) + h(y) = ĥ(x, α) + ĥ(y, β) ≥ ĥ(x ∧ y, α ∧ β) + ĥ(x ∨ y, α ∨ β) ≥ h(x ∧ y) + h(x ∨ y)
where α and β achieve the respective minima for x and y.
⁵ In fact, it dominates all other bisubmodular relaxations g̃ : X° → R of f. Indeed, consider a labeling (x, y) ∈ X°. It can be checked that (x, y) = u ⊓ v = u ⊔ v where u = (x, x̄) and v = (ȳ, y), therefore g̃(x, y) ≤ (1/2)[g̃(u) + g̃(v)] = (1/2)[f(x) + f(ȳ)] = g(x, y).
c ∏_{i∈A} x_i ∏_{i∈B} x̄_i where c ≤ 0 and A, B are disjoint subsets of V. It is known that any pseudo-boolean function f can be represented as a sum of such monomials (see e.g. [4]; we need to represent −f as a posiform and take its negative). This implies that any pseudo-boolean function f has a submodular relaxation.
Note that this argument is due to Lu and Williams [25] who converted function f to a sum of monomials of the form c ∏_{i∈A} x_i and c x_k ∏_{i∈A} x̄_i, c ≤ 0, k ∉ A. It is possible to show that the relaxation proposed in [25] is equivalent to the submodular relaxation constructed by the scheme above (we omit the derivation).
Submodular vs. bisubmodular relaxations An important question is whether bisubmodular relaxations are more "powerful" compared to submodular ones. The next theorem gives a class of functions for which the answer is negative; its proof is given in [20].
Theorem 11. Let g be the submodular relaxation of a quadratic pseudo-boolean function f defined by (12), and assume that the set E does not have parallel edges. Then g dominates any other bisubmodular relaxation g̃ of f, i.e. g(u) ≥ g̃(u) for all u ∈ X°.
For non-quadratic pseudo-boolean functions, however, the situation can be different. In [20] we give an example of a function f of n = 4 variables which has a tight bisubmodular relaxation g (i.e. g has a minimizer in X•), but all submodular relaxations are not tight.
Persistency Finally, we show that bisubmodular functions possess the autarky property, which implies persistency.
Proposition 12. Let f : K_{1/2} → R be a bisubmodular function and x ∈ K_{1/2} be its minimizer.
[Autarky] Let y be a labeling in B. Consider the labeling z = (y ⊔ x) ⊔ x. Then z ∈ B and f(z) ≤ f(y).
[Persistency] Function f : B → R has a minimizer x* ∈ B such that x*_i = x_i for nodes i ∈ V with integral x_i.
Proof. It can be checked that z_i = y_i if x_i = 1/2 and z_i = x_i if x_i ∈ {0, 1}. Thus, z ∈ B. For any w ∈ K_{1/2} there holds f(w ⊔ x) ≤ f(w) + [f(x) − f(w ⊓ x)] ≤ f(w). This implies that f((y ⊔ x) ⊔ x) ≤ f(y). Applying the autarky property to a labeling y ∈ arg min{f(x) | x ∈ B} yields persistency.
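The splicing operation in the autarky statement has a simple closed form, which the following two-line sketch (ours) makes explicit: the ⊔-composition keeps the integral coordinates of the minimizer x and copies y elsewhere.

def splice(y, x):
    # z = (y ⊔ x) ⊔ x componentwise: z_i = x_i where x_i is integral, else y_i
    return tuple(xi if xi in (0, 1) else yi for yi, xi in zip(y, x))

Persistency then amounts to re-optimizing only over the nodes labelled 1/2, since by autarky splicing any integer minimizer of f onto x cannot increase the function value.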
5 Conclusions and future work
We showed that bisubmodular functions can be viewed as a natural generalization of the roof duality approach to higher-order cliques. As mentioned in the introduction, this work has been motivated by computer vision applications that use functions of the form f(x) = Σ_C f_C(x). An important open question is how to construct bisubmodular relaxations f̂_C for individual terms. For terms of low order, e.g. with |C| = 3, this potentially could be done by solving a small linear program.
Another important question is how to minimize such functions. Algorithms in [12, 26] are unlikely
to be practical for most vision problems, which typically have tens of thousands of variables. However, in our case we need to minimize a bisubmodular function which has a special structure: it
is represented as a sum of low-order bisubmodular terms. We recently showed [21] that a sum of
low-order submodular terms can be optimized more efficiently using maxflow-like techniques. We
conjecture that similar techniques can be developed for bisubmodular functions as well.
References
[1] Asem M. Ali, Aly A. Farag, and Georgy L. Gimel'Farb. Optimizing binary MRFs with higher order cliques. In ECCV, 2008.
[2] Kazutoshi Ando, Satoru Fujishige, and Takeshi Naitoh. A characterization of bisubmodular functions.
Discrete Mathematics, 148:299?303, 1996.
[3] M. L. Balinski. Integer programming: Methods, uses, computation. Management Science, 12(3):253?
313, 1965.
[4] E. Boros and P. L. Hammer. Pseudo-boolean optimization. Discrete Applied Mathematics, 123(1-3):155-225, November 2002.
[5] E. Boros, P. L. Hammer, and X. Sun. Network flows and minimization of quadratic pseudo-Boolean
functions. Technical Report RRR 17-1991, RUTCOR, May 1991.
[6] E. Boros, P. L. Hammer, and G. Tavares. Preprocessing of unconstrained quadratic binary optimization.
Technical Report RRR 10-2006, RUTCOR, 2006.
[7] A. Bouchet. Greedy algorithm and symmetric matroids. Math. Programming, 38:147?159, 1987.
[8] A. Bouchet and W. H. Cunningham. Delta-matroids, jump systems and bisubmodular polyhedra. SIAM
J. Discrete Math., 8:17?32, 1995.
[9] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. PAMI, 23(11),
November 2001.
[10] R. Chandrasekaran and Santosh N. Kabadi. Pseudomatroids. Discrete Math., 71:205?217, 1988.
[11] S Fujishige. Submodular Functions and Optimization. North-Holland, 1991.
[12] Satoru Fujishige and Satoru Iwata. Bisubmodular function minimization. SIAM J. Discrete Math.,
19(4):1065?1073, 2006.
[13] P. L. Hammer, P. Hansen, and B. Simeone. Roof duality, complementation and persistency in quadratic
0-1 optimization. Mathematical Programming, 28:121?155, 1984.
[14] D. Hochbaum. Instant recognition of half integrality and 2-approximations. In 3rd International Workshop
on Approximation Algorithms for Combinatorial Optimization, 1998.
[15] D. Hochbaum. Solving integer programs over monotone inequalities in three variables: A framework for
half integrality and good approximations. European Journal of Operational Research, 140(2):291?321,
2002.
[16] H. Ishikawa. Higher-order clique reduction in binary graph cut. In CVPR, 2009.
[17] H. Ishikawa. Higher-order gradient descent by fusion-move graph cut. In ICCV, 2009.
[18] Satoru Iwata and Kiyohito Nagano. Submodular function minimization under covering constraints. In
FOCS, October 2009.
[19] Santosh N. Kabadi and R. Chandrasekaran. On totally dual integral systems. Discrete Appl. Math.,
26:87?104, 1990.
[20] V. Kolmogorov.
Generalized roof duality and bisubmodular functions.
Technical Report
arXiv:1005.2305v2, September 2010.
[21] V. Kolmogorov. Minimizing a sum of submodular functions. Technical Report arXiv:1006.1990v1, June
2010.
[22] V. Kolmogorov. Submodularity on a tree: Unifying L♮-convex and bisubmodular functions. Technical Report arXiv:1007.1229v2, July 2010.
[23] Victor Lempitsky, Carsten Rother, and Andrew Blake. LogCut - efficient graph cut optimization for
Markov random fields. In ICCV, 2007.
[24] Victor Lempitsky, Carsten Rother, Stefan Roth, and Andrew Blake. Fusion moves for Markov random
field optimization. PAMI, July 2009.
[25] S. H. Lu and A. C. Williams. Roof duality for polynomial 0-1 optimization. Math. Programming,
37(3):357?360, 1987.
[26] S. Thomas McCormick and Satoru Fujishige. Strongly polynomial and fully combinatorial algorithms for
bisubmodular function minimization. Math. Program., Ser. A, 122:87?120, 2010.
[27] M. Nakamura. A characterization of greedy sets: universal polymatroids (I). In Scientific Papers of the
College of Arts and Sciences, volume 38(2), pages 155?167. The University of Tokyo, 1998.
[28] G. L. Nemhauser and L. E. Trotter. Vertex packings: Structural properties and algorithms. Mathematical
Programming, 8:232?248, 1975.
[29] Liqun Qi. Directed submodularity, ditroids and directed submodular flows. Mathematical Programming,
42:579?599, 1988.
[30] A. Raj, G. Singh, and R. Zabih. MRF's for MRI's: Bayesian reconstruction of MR images via graph cuts.
In CVPR, 2006.
[31] Stefan Roth and Michael J. Black. Fields of experts. IJCV, 82(2):205?229, 2009.
[32] C. Rother, V. Kolmogorov, V. Lempitsky, and M. Szummer. Optimizing binary MRFs via extended roof
duality. In CVPR, June 2007.
[33] O. Woodford, P. Torr, I. Reid, and A. Fitzgibbon. Global stereo reconstruction under second order smoothness priors. In CVPR, 2008.
3,338 | 4,022 | Latent Variable Models for Predicting File
Dependencies in Large-Scale Software Development
Diane J. Hu1 , Laurens van der Maaten1,2 , Youngmin Cho1 , Lawrence K. Saul1 , Sorin Lerner1
1
Dept. of Computer Science & Engineering, University of California, San Diego
2
Pattern Recognition & Bioinformatics Lab, Delft University of Technology
{dhu,lvdmaaten,yoc002,saul,lerner}@cs.ucsd.edu
Abstract
When software developers modify one or more files in a large code base, they
must also identify and update other related files. Many file dependencies can be
detected by mining the development history of the code base: in essence, groups
of related files are revealed by the logs of previous workflows. From data of this
form, we show how to detect dependent files by solving a problem in binary matrix
completion. We explore different latent variable models (LVMs) for this problem,
including Bernoulli mixture models, exponential family PCA, restricted Boltzmann machines, and fully Bayesian approaches. We evaluate these models on the
development histories of three large, open-source software systems: Mozilla Firefox, Eclipse Subversive, and Gimp. In all of these applications, we find that LVMs
improve the performance of related file prediction over current leading methods.
1 Introduction
As software systems grow in size and complexity, they become more difficult to develop and maintain. Nowadays, it is not uncommon for a code base to contain source files in multiple programming
languages, text documents with meta information, XML documents for web interfaces, and even
platform-dependent versions of the same application. This complexity creates many challenges because no single developer can be an expert in all things.
One such challenge arises whenever a developer wishes to update one or more files in the code
base. Often, seemingly localized changes will require many parts of the code base to be updated.
Unfortunately, these dependencies can be difficult to detect. Let S denote a set of starter files that
the developer wishes to modify, and let R denote the set of relevant files that require updating after
modifying S. In a large system, where the developer cannot possibly be familiar with the entire code
base, automated tools that can recommend files in R given starter files in S are extremely useful.
A number of automated tools now make recommendations of this sort by mining the development
history of the code base [1, 2]. Work in this area has been facilitated by code versioning systems,
such as CVS or Subversion, which record the development histories of large software projects. In
these histories, transactions denote sets of files that have been jointly modified?that is, whose
changes have been submitted to the code base within a short time interval. Statistical analyses of
past transactions can reveal which files depend on each other and need to be modified together.
In this paper, we explore the use of latent variable models (LVMs) for modeling the development
history of large code bases. We consider a number of different models, including Bernoulli mixture
models, exponential family PCA, restricted Boltzmann machines, and fully Bayesian approaches.
In these models, the problem of recommending relevant files can be viewed as a problem in binary
matrix completion. We present experimental results on the development histories of three large
open-source systems: Mozilla Firefox, Eclipse Subversive, and Gimp. In all of these applications,
we find that LVMs outperform the current leading method for mining development histories.
2 Related work
Two broad classes of methods are used for identifying file dependencies in large code bases; one
analyzes the semantic content of the code base while the other analyzes its development history.
2.1 Impact analysis
The field of impact analysis [3] draws on tools from software engineering in order to identify the
consequences of code modifications. Most approaches in this tradition attempt to identify program
dependencies by inspecting and/or running the program itself. Such dependence-based techniques
include transitive traversal of the call graph as well as static [4, 5, 6] and dynamic [7, 8] slicing
techniques. These methods can identify many dependencies; however, they have trouble on certain difficult cases such as cross-language dependencies (e.g., between a data configuration file and
the code that uses it) and cross-program dependencies (e.g., between the front and back ends of a
compiler). These difficulties have led researchers to explore the methods we consider next.
2.2 Mining of development histories
Data-driven methods identify file dependencies in large software projects by analyzing their development histories. Two of the most widely recognized works in this area are by Ying et al. [1] and
Zimmerman et al. [2]. Both groups use frequent itemset mining (FIM) [9], a general heuristic for
identifying frequent patterns in large databases. The patterns extracted from development histories
are just those sets of files that have been jointly modified at some point in the past; the frequent
patterns are the patterns that have occurred at least τ times. The parameter τ is called the minimum
support threshold. In practice, it is tuned to yield the best possible balance of precision and recall.
Given a database and a minimum support threshold, the resulting set of frequent patterns is uniquely
specified. Much work has been devoted to making FIM as fast and efficient as possible. Ying et
al. [1] uses a FIM algorithm called FP-growth, which extracts frequent patterns by using a tree-like
data structure that is cleverly designed to prune the number of possible patterns to be searched. FPgrowth is used to find all frequent patterns that contain the set of starter files; the joint sets of these
frequent patterns are then returned as recommendations. As a baseline in our experiments we use a
variant of FP-growth called FP-Max [10] which outputs only maximal sets for added efficiency.
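As a rough illustration of this mining style (our own simplification, not the FP-Max implementation used in the experiments), the simplest co-occurrence baseline recommends every file that appears together with all starter files in sufficiently many past transactions:

from collections import Counter

def recommend_fim(transactions, starters, min_support):
    # transactions: iterable of sets of file names; starters: set of file names
    counts = Counter()
    for t in transactions:
        if starters <= t:              # transaction contains every starter file
            counts.update(t - starters)
    return {f for f, c in counts.items() if c >= min_support}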
Zimmerman et al. [2] uses the popular Apriori algorithm [11] (which uses FIM to solve a subtask) to
form association rules from the development history. These rules are of the form x1 ⇒ x2, where x1 and x2 are disjoint sets; they indicate that "if x1 is observed, then based on experience, x2 should
also be observed.? After identifying all rules in which starter files appear on the left hand side, their
tool recommends all files that appear on the right hand side. They also work with content on a finer
granularity, recommending not only relevant files, but also relevant code blocks within files.
Both Ying et al. [1] and Zimmerman et al. [2] evaluate the data-driven approach by its f-measure, as
measured against "ground-truth" recommendations. For Ying et al. [1], these ground-truth recommendations are the files committed for a completed modification task, as recorded in that project's
Bugzilla. For Zimmerman et al. [2], the ground-truth recommendations are the files checked-in
together at some point in the past, as revealed by the development history.
Other researchers have also used the development history to detect file dependencies, but in
markedly different ways. Shirabad et al. [12] formulate the problem as one of binary classification; they label pairs of source files as relevant or non-relevant based on their joint modification
histories. Robillard [13] analyzes the topology of structural dependencies between files at the code-block level. Kagdi et al. [14] improve on the accuracy of existing file recommendation methods
by considering asymmetric file dependencies; this information is also used to return a partial ordering over recommended files. Finally, Sherriff et al. [15] identify clusters of dependent files by
performing singular value decomposition on the development history.
3 Latent variable modeling of development histories
We examine four latent variable models of file dependence in software systems. All these models represent the development history as a large N × D binary matrix, where non-zero elements in the same row indicate files that were checked-in together or jointly modified at some point in time.
To detect dependent files, we infer the values of missing elements in this matrix from the values of
known elements. The inferences are made from the probability distributions defined by each model.
We use the following notation for all models:
1. The file list F = (f1 , . . . , fD ) is an ordered collection of all files referenced in a static
version of the development history.
2. A transaction is a set of files that were modified together, according to the development history. We represent each transaction by a D-dimensional binary vector x = (x1 , . . . , xD ),
where xi = 1 if fi is a member of the transaction, and xi = 0 otherwise.
3. A development history D is a set of N transaction vectors {x1 , x2 , . . . , xN }. We assume
them to be independently and identically sampled from some underlying joint distribution.
4. A starter set is a set of s starter files S = (fi1 , . . . , fis ) that the developer wishes to modify.
5. A recommendation set is a set of recommended files R = (fj1 , . . . , fjr ) that we label as
relevant to the starter set S.
3.1 Bernoulli mixture model
The simplest model that we explore is a Bernoulli mixture model (BMM). Figure 1(a) shows the BMM's graphical model in plate notation. In training, the observed variables are the D binary elements x_i ∈ {0, 1} of each transaction vector. The hidden variable is a multinomial label z ∈ {1, 2, ..., k} that can be viewed as assigning each transaction vector to one of k clusters.
The joint distribution of the BMM is given by:
    p(x, z | π, μ) = p(z | π) ∏_{i=1}^D p(x_i | z, μ) = π_z ∏_{i=1}^D μ_{iz}^{x_i} (1 − μ_{iz})^{1−x_i}.    (1)
As implied by the graph in Fig. 1(a), we model the different elements of x as conditionally independent given the label z. Here, the parameter π_z = p(z | π) denotes the prior probability of the latent variable z, while the parameter μ_{iz} = p(x_i = 1 | z, μ) denotes the conditional mean of the observed variable x_i. We use the EM algorithm to estimate parameters that maximize the likelihood p(D | π, μ) = ∏_n p(x^n | π, μ) of the transactions in the development history.
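For concreteness, a compact EM implementation for eq. (1) might look as follows. This is our own sketch (initialization, iteration count, and smoothing constant are assumptions; the paper states only that EM is used):

import numpy as np

def em_bmm(X, k, iters=50, eps=1e-8, seed=0):
    # X: N x D binary matrix of transactions; returns mixture weights pi and means mu.
    rng = np.random.default_rng(seed)
    N, D = X.shape
    pi = np.full(k, 1.0 / k)
    mu = rng.uniform(0.25, 0.75, size=(k, D))
    for _ in range(iters):
        # E-step: responsibilities r_nz proportional to pi_z prod_i mu^x (1-mu)^(1-x)
        logr = (X @ np.log(mu + eps).T
                + (1 - X) @ np.log(1 - mu + eps).T
                + np.log(pi + eps))
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate pi and mu from the soft assignments
        Nz = r.sum(axis=0) + eps
        pi = Nz / N
        mu = (r.T @ X) / Nz[:, None]
    return pi, mu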
When a software developer wishes to modify a set of starter files, she can query a trained BMM to identify a set of relevant files. Let s = {x_{i1}, ..., x_{is}} denote the elements of the transaction vector indicating the files in the starter set S. Let r denote the D − s remaining elements of the transaction vector indicating files that may or may not be relevant. In BMMs, we infer which files are relevant by computing the posterior probability p(r | s = 1, π, μ). Using Bayes rule and conditional independence, this posterior probability is given (up to a constant factor) by:
    p(r | s = 1, π, μ) ∝ Σ_{z=1}^k p(r | z, μ) p(s = 1 | z, μ) p(z | π).    (2)
The most likely set of relevant files, according to the model, is given by the completed transaction r* that maximizes the right hand side of eq. (2). Unfortunately, while we can efficiently compute the posterior probability p(r | s = 1) for a particular set of recommended files, it is not straightforward to maximize eq. (2) over all 2^{D−s} possible ways to complete the transaction. As an approximation, we sort the possibly relevant files by their individual posterior probabilities p(x_i = 1 | s = 1) for f_i ∉ S. Then we recommend all files whose posterior probabilities p(x_i = 1 | s = 1) exceed some threshold; we optimize the threshold on a held-out set of training examples.
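The sorting heuristic reduces to a few matrix operations on the trained parameters. A hedged sketch (ours):

import numpy as np

def bmm_recommend(pi, mu, starter_idx, threshold):
    # p(z | s = 1) is proportional to pi_z * prod_{i in S} mu_{iz}  (cf. eq. (2))
    w = pi * np.prod(mu[:, starter_idx], axis=1)
    w /= w.sum()
    post = w @ mu                      # marginals p(x_i = 1 | s = 1) for every file
    starters = set(starter_idx)
    return [i for i in np.argsort(-post)
            if i not in starters and post[i] >= threshold]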
3.2 Bayesian Bernoulli mixture model
We also explore a Bayesian treatment of the BMM. In a Bayesian Bernoulli mixture (BBM), instead of learning point estimates of the parameters {π, μ}, we introduce a prior distribution p(π, μ) and make predictions by averaging over the posterior distribution p(π, μ | D). The generative model for the BBM is shown graphically in Figure 1(b).
[Figure 1 omitted in this extraction: plate diagrams for panels (a)-(d).]
Figure 1: Graphical model of the Bernoulli mixture model (BMM), the Bayesian Bernoulli mixture (BBM), the restricted Boltzmann machine (RBM), and logistic PCA.
In our BBMs, the mixture weight parameters are drawn from a Dirichlet prior¹:
    p(π | α) = Dirichlet(π | α/k, ..., α/k),    (3)
where k indicates (as before) the number of mixture components and α is a hyperparameter of the Dirichlet prior, the so-called concentration parameter². Likewise, the parameters of the k Bernoulli distributions are drawn from Beta priors:
    p(μ_j | β, γ) = Beta(μ_j | β, γ),    (4)
where μ_j is a D-dimensional vector, and β and γ are hyperparameters of the Beta prior.
As exact inference in BBMs is intractable, we resort to collapsed Gibbs sampling and make predictions by averaging over samples from the posterior. In particular, we integrate out the Bernoulli parameters μ and the cluster distribution parameters π, and we sample the cluster assignment variables z. For Gibbs sampling, we must compute the conditional probability p(z_n = j | z_{−n}, D) that the nth transaction is assigned to cluster j, given the training data D and all other cluster assignments z_{−n}. This probability is given by:
    p(z_n = j | z_{−n}, D) = (N_{−n,j} + α/k)/(N − 1 + α) · ∏_{i=1}^D (β + N_{−n,ij})^{x_{ni}} (γ + N_{−n,j} − N_{−n,ij})^{(1−x_{ni})} / (β + γ + N_{−n,j}),    (5)
where N_{−n,j} counts the number of transactions assigned to cluster j (excluding the nth transaction) and N_{−n,ij} counts the number of times that the ith file belongs to one of these N_{−n,j} transactions.
After each full Gibbs sweep, we obtain a sample z^(t) (and corresponding counts N_j^(t) of the number of points assigned to cluster j), which can be used to infer the Bernoulli parameters μ_j^(t). We use T of these samples to estimate the probability that a file x_i needs to be changed given the files in the starter set S. In particular, averaging predictions over the T Gibbs samples, we estimate:
    p(x_i = 1 | s = 1) ≈ (1/T) Σ_{t=1}^T [ Σ_{j=1}^k N_j^(t) p(x_i = 1 | μ_j^(t)) p(s = 1 | μ_j^(t)) ] / [ Σ_{j=1}^k N_j^(t) p(s = 1 | μ_j^(t)) ],  with  μ_j^(t) = (1/N_j^(t)) Σ_{n: z_n^(t) = j} x^n    (6)
3.3 Restricted Boltzmann Machines
A restricted Boltzmann machine (RBM) is a Markov random field (MRF) whose nodes are (typically) binary random variables [17]. The graphical model of an RBM is a fully connected bipartite graph with D observed variables x_i in one layer and k latent variables y_j in the other; see Fig. 1(c).
¹ In preliminary experiments, we also investigated an infinite mixture of Bernoulli distributions that replaces the Dirichlet prior by a Dirichlet process [16]. However, we did not find the infinite mixture model to outperform its finite counterpart, so we do not discuss it further.
² For simplicity, we assume a symmetric Dirichlet prior, i.e. we assume ∀j : α_j = α/k.
Due to the bipartite structure, the latent variables are conditionally independent given the observed
variables (and vice versa). For the RBMs in this paper, we model the joint distribution as:
$$p(x, y) = \frac{1}{Z} \exp\big( x^\top W y + b^\top x + c^\top y \big), \qquad (7)$$

where W stores the weight matrix between layers, b and c store (respectively) the biases on observed and hidden nodes, and Z is a normalization factor that depends on the model's parameters. The product form of RBMs can model much sharper distributions over the observed variables than mixture models [17], making them an interesting alternative to consider for our application.
RBMs are trained by maximum likelihood estimation. Exact inference in RBMs is intractable due to the exponential sum in the normalization factor Z. However, the conditional distributions required for Gibbs sampling have a particularly simple form:

$$p(x_i = 1 \mid y) = \sigma\Big( \sum_j W_{ij}\, y_j + b_i \Big), \qquad (8)$$

$$p(y_j = 1 \mid x) = \sigma\Big( \sum_i W_{ij}\, x_i + c_j \Big), \qquad (9)$$

where σ(z) = [1 + e^{−z}]^{−1} is the sigmoid function. The obtained Gibbs samples can be used to approximate the gradient of the likelihood function with respect to the model parameters; see [17, 18] for further discussion of sampling strategies³.
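For concreteness, a minimal CD-1 update consistent with equations (7)-(9) might look as follows. This is a generic textbook sketch with our own variable names, not the authors' training code (which anneals from CD-1 to CD-9).

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def cd1_step(X_batch, W, b, c, lr, rng):
    """One CD-1 update for the RBM of eqs. (7)-(9).
    X_batch: (n, D) binary; W: (D, k); b: (D,) visible biases; c: (k,) hidden biases."""
    # Positive phase: sample hidden units from p(y | x), eq. (9).
    p_h = sigmoid(X_batch @ W + c)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    # Negative phase: reconstruct visibles from eq. (8), then re-infer hiddens.
    p_v = sigmoid(h @ W.T + b)
    v = (rng.random(p_v.shape) < p_v).astype(float)
    p_h2 = sigmoid(v @ W + c)
    # Approximate likelihood gradient (positive minus negative statistics).
    n = X_batch.shape[0]
    W += lr * (X_batch.T @ p_h - v.T @ p_h2) / n
    b += lr * (X_batch - v).mean(axis=0)
    c += lr * (p_h - p_h2).mean(axis=0)
```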
To determine whether a file f_i is relevant given starter files in S, we can either (i) clamp the observed variables representing starter files and perform Gibbs sampling on the rest, or (ii) compute the posterior over the remaining files using a fast, factorized approximation [19]. In preliminary experiments, we found the latter to work best. Hence, we recommend files by computing

$$p(x_i = 1 \mid s = 1) \;\propto\; \exp(b_i) \prod_{\ell=1}^{k} \Bigg[ 1 + \exp\Big( \sum_{j : f_j \in S} x_j W_{j\ell} + W_{i\ell} + c_\ell \Big) \Bigg], \qquad (10)$$

then thresholding these probabilities on some value determined on held-out examples.
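The factorized score of equation (10) can be evaluated in closed form; a log-space sketch (with hypothetical names) is:

```python
import numpy as np

def rbm_scores(starter_idx, W, b, c):
    """Unnormalised log-scores proportional to p(x_i = 1 | s = 1), eq. (10).
    Computed in log-space for numerical stability."""
    D, k = W.shape
    base = W[starter_idx].sum(axis=0) + c      # starter contribution + hidden bias
    log_scores = b.copy()                      # exp(b_i) factor
    for i in range(D):
        # sum over hidden units of log(1 + exp(base_l + W_il))
        log_scores[i] += np.logaddexp(0.0, base + W[i]).sum()
    return log_scores  # threshold on held-out data, as in the text
```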
3.4 Logistic PCA
Logistic PCA is a method for dimensionality reduction of binary data; see Fig. 1(d) for its graphical model. Logistic PCA belongs to a family of algorithms known as exponential family PCA; these algorithms generalize PCA to data modeled by non-Gaussian distributions of the exponential family [20, 21, 22]. To use logistic PCA, we stack the N transaction vectors x_n ∈ {0, 1}^D of the development history into a N×D binary matrix X. Then, modeling each element of this matrix as a Bernoulli random variable, we attempt to find a low-rank factorization of the N×D real-valued matrix Θ whose elements are the log-odds parameters of these random variables.
The low-rank factorization in logistic PCA is computed by maximizing the log-likelihood of the observed data X. In terms of the log-odds matrix Θ, this log-likelihood is given by:

$$L_X(\Theta) = \sum_{nd} \Big[ X_{nd} \log \sigma(\Theta_{nd}) + (1 - X_{nd}) \log \sigma(-\Theta_{nd}) \Big]. \qquad (11)$$

We obtain a low dimensional representation of the data by factoring the log-odds matrix Θ ∈ ℝ^{N×D} as the product of two smaller matrices U ∈ ℝ^{N×L} and V ∈ ℝ^{L×D}. Specifically, we have:

$$\Theta_{nd} = \sum_{\ell} U_{n\ell}\, V_{\ell d}. \qquad (12)$$

Note that the reduced rank L ≪ D plays a role analogous to the number of clusters k in BMMs.
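As a check on the notation, the log-likelihood (11) under the factorization (12) can be computed in a few lines. This numerically stable sketch is ours, not the alternating-least-squares code used in the paper.

```python
import numpy as np

def logistic_pca_loglik(X, U, V):
    """Log-likelihood L_X(Theta), eq. (11), with Theta = U @ V (eq. (12)).
    Uses log sigma(z) = -log(1 + exp(-z)) via logaddexp for stability."""
    Theta = U @ V  # (N, D) log-odds matrix
    # X * log sigmoid(Theta) + (1 - X) * log sigmoid(-Theta)
    return -(np.logaddexp(0.0, -Theta) * X + np.logaddexp(0.0, Theta) * (1 - X)).sum()
```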
After obtaining a low-rank factorization of the log-odds matrix Θ = UV, we can use it to recommend relevant files from starter files S = {f_{i_1}, f_{i_2}, . . . , f_{i_s}}. To recommend relevant files, we compute the vector u that optimizes the regularized log-loss:

$$L_S(u) = \sum_{j=1}^{s} \log \sigma(u \cdot v_{i_j}) - \frac{\lambda}{2}\, \|u\|^2, \qquad (13)$$
³ We use the approach in [17] known as contrastive divergence with m Gibbs sweeps (CD-m).
            Mozilla Firefox           Eclipse Subversive       Gimp
            (March 2007 - Nov 2007)   (Dec 2006 - May 2010)    (Nov 2007 - May 2010)
Support     Train   Test    Files     Train   Test    Files    Train   Test    Files
10          9,579   2,666   1,264     372     114     61       5,359   3,608   1,376
15          9,015   2,266   778       316     92      38       5,084   3,436   899
20          8,497   1,991   546       282     79      30       4,729   3,208   600
25          8,021   1,771   411       233     59      25       4,469   3,012   447

Table 1: Dataset statistics, showing the time period from which transactions were extracted, and the number of transactions and unique files in the training and test sets (for a single starter file).
where in the first term, v_{i_j} denotes the i_j-th column of the matrix V, and in the second term, λ is a regularization parameter. The vector u obtained in this way is the low dimensional representation of the transaction with starter files in S. To determine whether file f_i is relevant, we compute the probability p(x_i = 1 | u, V) = σ(u · v_i) and recommend the file if this probability exceeds some threshold. (We tune the threshold on held-out transactions from the development history.)
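A gradient-ascent sketch of this query-time step follows; the variable names and the optimizer are our own, as the paper does not specify how u is fitted.

```python
import numpy as np

def recommend_lpca(starter_cols, V, lam=0.1, lr=0.1, n_steps=200):
    """Fit the low-dimensional code u for a query by maximising the
    regularised objective of eq. (13), then score every file."""
    L, D = V.shape
    Vs = V[:, starter_cols]  # columns of V for the starter files
    u = np.zeros(L)
    for _ in range(n_steps):
        z = u @ Vs
        # gradient of sum_j log sigma(u . v_ij) - (lam/2) ||u||^2
        grad = Vs @ (1.0 - 1.0 / (1.0 + np.exp(-z))) - lam * u
        u += lr * grad
    return 1.0 / (1.0 + np.exp(-(u @ V)))  # p(x_i = 1 | u, V) for all files
```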
4 Experiments
We evaluated our models on three datasets⁴ constructed from check-in records of Mozilla Firefox, Eclipse Subversive, and Gimp. These open-source projects use software configuration management (SCM) tools which provide logs that allow us to extract binary vectors indicating which files were changed during a transaction. Our experimental setup and results are described below.
4.1 Experimental setup
We preprocess the raw data obtained from SCM's check-in records in two steps. First, following Ying et al. [1], we eliminate all transactions consisting of more than 100 files (as these usually do not correspond to meaningful changes). Second, we simulate the minimum support threshold (see Section 2.2) by removing all files in the code base that occur very infrequently. This pruning allows us to make a fair comparison with latent variable models (LVMs).

After pre-processing, the dataset is chronologically ordered; the first two-thirds is used as training data, and the last one-third as testing data. For each transaction in the test set, we formed a "query" and "label" set by randomly picking a set of changed files as starter files. The remaining files that were changed in the transaction form the label set, which is the set of files our models must predict. Following [1], we only include transactions for which the label set is non-empty in the train data. Table 1 shows the number of transactions for training and test set, as well as the total number of unique files that appear in these transactions.
We trained the LVMs as follows. The Bernoulli mixture models (BMMs) were trained by 100 or fewer iterations of the EM algorithm. For the Bayesian mixtures (BBMs), we ran 30 separate Markov chains and made predictions after 30 full Gibbs sweeps⁵. The RBMs were trained for 300 iterations of contrastive divergence (CD), starting with CD-1 and gradually increasing the number of Gibbs sweeps to CD-9 [17]. The parameters U and V of logistic PCA were learned using an alternating least squares procedure [21] that converges to a local maximum of the log-likelihood. We initialized the matrices U and V from an SVD of the matrix X.
The parameters of the LVMs (i.e., the number of hidden components in the BMM and RBM, as well as the number of dimensions and the regularization parameter λ in logistic PCA) were selected based on the performance on a small held-out validation set. The hyperparameters of the Bayesian Bernoulli mixtures were set based on prior knowledge from the domain: the Beta-prior parameters β and γ were set to 0.005 and 0.95, respectively, to reflect our prior knowledge that most files are not changed in a transaction. The concentration parameter α was set to 50 to reflect our prior knowledge that file dependencies typically form a large number of small clusters.
⁴ These binary datasets are publicly available at http://cseweb.ucsd.edu/~dhu/research/msr
⁵ In preliminary experiments, we found 30 Gibbs sweeps to be sufficient for the Markov chain to mix.
                 Mozilla Firefox            Eclipse Subversive         Gimp
                 Start=1      Start=3       Start=1      Start=3       Start=1      Start=3
Model  Support   f     CPR    f     CPR     f     CPR    f     CPR     f     CPR    f     CPR
FIM    10        0.106 0.136  0.112 0.195   0.133 0.382  0.234 0.516   0.020 0.116  0.016 0.176
       15        0.129 0.144  0.127 0.194   0.141 0.461  0.319 0.632   0.014 0.091  0.016 0.159
       20        0.115 0.137  0.106 0.186   0.177 0.550  0.364 0.672   0.007 0.066  0.013 0.129
       25        0.124 0.135  0.110 0.195   0.227 0.616  0.360 0.637   0.006 0.057  0.010 0.095
BMM    10        0.160 0.189  0.106 0.158   0.222 0.433  0.206 0.479   0.129 0.177  0.084 0.152
       15        0.160 0.202  0.110 0.141   0.181 0.486  0.350 0.489   0.134 0.205  0.085 0.143
       20        0.172 0.204  0.120 0.147   0.196 0.530  0.403 0.514   0.127 0.207  0.085 0.154
       25        0.177 0.218  0.130 0.160   0.251 0.566  0.382 0.482   0.117 0.212  0.010 0.131
BBM    10        0.196 0.325  0.180 0.376   0.257 0.547  0.278 0.700   0.114 0.174  0.104 0.177
       15        0.192 0.340  0.180 0.376   0.202 0.607  0.374 0.769   0.114 0.200  0.107 0.183
       20        0.206 0.355  0.191 0.417   0.223 0.655  0.413 0.791   0.114 0.205  0.108 0.187
       25        0.197 0.360  0.175 0.391   0.262 0.694  0.418 0.756   0.110 0.206  0.103 0.179
RBM    10        0.157 0.230  0.069 0.307   0.170 0.233  0.090 0.405   0.074 0.137  0.028 0.194
       15        0.156 0.246  0.063 0.310   0.157 0.238  0.138 0.423   0.080 0.148  0.024 0.205
       20        0.169 0.260  0.058 0.324   0.174 0.307  0.178 0.531   0.074 0.156  0.027 0.242
       25        0.172 0.269  0.088 0.340   0.200 0.426  0.259 0.524   0.062 0.143  0.025 0.230
LPCA   10        0.200 0.249  0.169 0.300   0.124 0.415  0.230 0.609   0.123 0.187  0.148 0.263
       15        0.182 0.254  0.157 0.295   0.138 0.452  0.281 0.615   0.124 0.200  0.145 0.288
       20        0.182 0.265  0.156 0.308   0.212 0.517  0.325 0.667   0.115 0.222  0.135 0.300
       25        0.174 0.277  0.162 0.325   0.247 0.605  0.344 0.625   0.100 0.205  0.131 0.230

Table 2: Performance of FIM and LVMs on three datasets for queries with 1 or 3 starter files. For each dataset and number of starter files, the first column presents the f-measure and the second column presents the correct prediction ratio.
4.2 Results
Our experiments evaluated the performance of each LVM, as well as a highly efficient implementation of FIM called FP-Max [10]. Several experiments were run on different values of starter files (abbreviated "Start") and minimum support thresholds (abbreviated "Support"). Table 2 shows the comparison of each model in terms of the f-measure (the harmonic mean of the precision and recall) and the "correct prediction ratio," or CPR (the fraction of files we predict correctly, assuming that the number of files to be predicted is given). The latter measure reflects how well our models identify relevant files for a particular starter file, without the added complication of thresholding. Experiments that achieve the highest result for each of the two measures are boldfaced.
From our results, we see that most LVMs outperform the popular FIM approach. In particular, the BBMs outperform all other approaches on two of the three datasets, with a high of CPR = 79% in Eclipse Subversive. This means that an average of 79% of all dependent files are detected as relevant by the BBM. We also observe that the f-measure generally decreases with the addition of starter files: since the average size of transactions is relatively small (around four files for Firefox), adding starter files must make predictions less obvious in the case that the total number of relevant files is not given to us. Increasing support, on the other hand, seems to effectively remove noise caused by infrequent files. Finally, we see that recommendations are most accurate on Eclipse Subversive, the smallest dataset. We believe this is because a smaller test set does not require a model to predict as far into the future as a larger one. Thus, our results suggest that an online learning algorithm may further increase accuracy.
5 Discussion
The use of LVMs has significant advantages over traditional approaches to impact analysis (see
Section 2), namely its ability to find dependent files written in different languages. To show this, we
present the three clusters with the highest weights, as discovered by a BMM in the Firefox data, in
Table 3. The table reveals that the clusters correspond to interpretable structure in the code that span
multiple data formats and languages. The first cluster deals with the JIT compiler for JavaScript,
while the second and third deal with the CSS style sheet manager and web browser properties. The
dependencies in the last two clusters would have been missed by conventional impact analysis.
Cluster 1:
  js/src/jscntxt.h
  js/src/jstracer.cpp
  js/src/nanojit/Assembler.cpp
  js/src/jsregexp.cpp
  js/src/jsapi.cpp
  js/src/jsarray.cpp
  js/src/jsfun.cpp
  js/src/jsinterp.cpp
  js/src/jsnum.cpp
  js/src/jsobj.cpp

Cluster 2:
  view/src/nsViewManager.cpp
  layout/generic/nsHTMLReflowState.cpp
  layout/reftests/bugs/reftest.list
  layout/style/nsCSSRuleProcessor.cpp
  layout/style/nsCSSStyleSheet.cpp
  layout/style/nsCSSParser.cpp
  layout/base/crashtests/crashtests.list
  layout/base/nsBidiPresUtils.cpp
  layout/base/nsPresShell.cpp
  content/xbl/src/nsBindingManager.cpp

Cluster 3:
  browser/base/content/browser-context.inc
  browser/base/content/browser.js
  browser/base/content/pageinfo/pageInfo.xul
  browser/locales/en-US/chrome/browser/browser.dtd
  toolkit/mozapps/update/src/nsUpdateService.js.in
  toolkit/mozapps/update/src/updater/updater.cpp
  modules/plugin/base/src/nsNPAPIPluginInstance.h
  modules/plugin/base/src/nsPluginHost.cpp
  browser/locales/en-US/chrome/browser/browser.properties
  view/src/nsViewManager.cpp
Table 3: Three of the clusters from Firefox, identified by the BMM. We show the clusters with
the largest mixing proportion. Within each cluster, the 10 files with highest membership probabilities are shown; note how these files span multiple data formats and program languages, revealing
dependencies that would escape the notice of traditional methods.
LVMs also have important advantages over FIM. Given a set S of starter files, FIM simply looks at
co-occurrence data; it recommends a set of files R for which the number of transactions that contain
both R and S is frequent. By contrast, LVMs can exploit higher-order information by discovering
the underlying structure of the data. Our results suggest that the ability to leverage such structure
leads to better predictions. Admittedly, in terms of computation, LVMs have a larger one-time
training cost than the FIM, as we must first train the model or generate and store the Gibbs samples.
However, for a single query, the time required to compute recommendations is comparable to that
of the FP-Max algorithm we used for FIM.
The results from the previous section also revealed significant differences between the LVMs we
considered. In the majority of our experiments, mixture models (with many mixture components)
appear to outperform RBMs and logistic PCA. This result suggests that our dataset consists of a large
number of transactions with a number of small, highly interrelated files. Modeling such data with a
product of experts such as an RBM is difficult as each individual expert has the ability to "veto" a
prediction. We tried to resolve this problem by using a sparsity prior on the states of the hidden units
y to make the RBMs behave more like a mixture model [23], but in preliminary experiments, we
did not find this to improve the performance. Another interesting observation is that the Bayesian
treatment of the Bernoulli mixture model generally leads to better predictions than a maximum
likelihood approach, as it is less susceptible to overfitting. This advantage is particularly useful in
file dependency prediction which requires models with a large number of mixture components to
appropriately model data that consists of many small, distinct clusters while having few training
instances (i.e., transactions).
6
Conclusion
In this paper, we have described a new application of binary matrix completion for predicting file
dependencies in software projects. For this application, we investigated the performance of four
different LVMs and compared our results to those of the widely used FIM. Our results indicate that
LVMs can significantly outperform FIM by exploiting latent, higher-order structure in the data.
Admittedly, our present study is still limited in scope, and it is very likely that our results can be
further improved. For instance, results from the Netflix competition have shown that blending the
predictions from various models often leads to better performance [24]. The raw transactions also
contain additional information that could be harvested to make more accurate predictions. Such
information includes the identity of users who committed transactions to the code base, as well as
the text of actual changes to the source code. It remains a grand challenge to incorporate all the
available information from development histories into a probabilistic model for predicting which
files need to be modified. In future work, we aim to explore discriminative methods for parameter
estimation, as well as online algorithms for tracking non-stationary trends in the code base.
Acknowledgments
LvdM acknowledges support by the Netherlands Organisation for Scientific Research (grant no.
680.50.0908) and by EU-FP7 NoE on Social Signal Processing (SSPNet).
References
[1] A.T.T. Ying, G.C. Murphy, R. Ng, and M.C. Chu-Carroll. Predicting source code changes by mining change history. IEEE Transactions on Software Engineering, 30(9):574-586, 2004.
[2] T. Zimmermann, P. Weißgerber, S. Diehl, and A. Zeller. Mining version histories to guide software changes. Proceedings of the 26th International Conference on Software Engineering, pages 563-572, 2004.
[3] R. Arnold and S. Bohner. Software Change Impact Analysis. IEEE Computer Society, 1996.
[4] M. Weiser. Program slicing. In Proceedings of the 5th International Conference on Software Engineering, pages 439-449, 1981.
[5] S. Horwitz, T. Reps, and D. Binkley. Interprocedural slicing using dependence graphs. ACM Transactions on Programming Languages and Systems, 12(1):26-60, 1990.
[6] F. Tip. A survey of program slicing techniques. Journal of Programming Languages, 3:121-189, 1995.
[7] B. Korel and J. Laski. Dynamic program slicing. Information Processing Letters, 29(3):155-163, 1988.
[8] X. Zhang, R. Gupta, and Y. Zhang. Precise dynamic slicing algorithms. In Proceedings of the 25th International Conference on Software Engineering, pages 319-329, 2003.
[9] B. Goethals. Frequent set mining. In The Data Mining and Knowledge Discovery Handbook, pages 377-397, 2005.
[10] G. Grahne and J. Zhu. Efficiently using prefix-trees in mining frequent itemsets. Proceedings of the 1st ICDM Workshop on Frequent Itemset Mining Implementations, 2003.
[11] M.J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. New algorithms for fast discovery of association rules. 1997.
[12] J. S. Shirabad, T. C. Lethbridge, and S. Matwin. Mining the maintenance history of a legacy software system. Proceedings of the 19th International Conference on Software Maintenance, pages 95-104, 2003.
[13] M. Robillard. Automatic generation of suggestions for program investigation. ACM SIGSOFT International Symposium on Foundations of Software Engineering, 30:11-20, 2005.
[14] H. Kagdi, S. Yusaf, and J.I. Maletic. Mining sequences of changed-files from version histories. Proc. of Int. Workshop on Mining Software Repositories, pages 47-53, 2006.
[15] M. Sherriff, J.M. Lake, and L. Williams. Empirical software change impact analysis using singular value decomposition. International Conference on Software Testing, Verification, and Validation, 2008.
[16] R.M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249-265, 2000.
[17] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771-1800, 2002.
[18] T. Tieleman. Training Restricted Boltzmann Machines using approximations to the likelihood gradient. In Proceedings of the International Conference on Machine Learning, volume 25, pages 1064-1071, 2008.
[19] R.R. Salakhutdinov, A. Mnih, and G.E. Hinton. Restricted Boltzmann Machines for collaborative filtering. In Proceedings of the 24th International Conference on Machine Learning, pages 791-798, 2007.
[20] M. Collins, S. Dasgupta, and R.E. Schapire. A generalization of principal components analysis to the exponential family. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[21] A.I. Schein, L.K. Saul, and L.H. Ungar. A generalized linear model for principal component analysis of binary data. In Proceedings of the 9th International Workshop on Artificial Intelligence and Statistics, 2003.
[22] I. Rish, G. Grabarnik, G. Cecchi, F. Pereira, and G.J. Gordon. Closed-form supervised dimensionality reduction with generalized linear models. In Proceedings of the 25th International Conference on Machine Learning, pages 832-839, 2008.
[23] M.A. Ranzato, Y.L. Boureau, and Y. LeCun. Sparse feature learning for deep belief networks. In Advances in Neural Information Processing Systems, pages 1185-1192, 2008.
[24] R.M. Bell and Y. Koren. Lessons from the Netflix prize challenge. ACM SIGKDD Explorations Newsletter, 9(2):75-79, 2007.
Approximate inference in continuous time
Gaussian-Jump processes
Andreas Ruttor
Fakultät Elektrotechnik und Informatik
Technische Universität Berlin
Berlin, Germany
[email protected]
Manfred Opper
Fakultät Elektrotechnik und Informatik
Technische Universität Berlin
Berlin, Germany
[email protected]
Guido Sanguinetti
School of Informatics
University of Edinburgh
[email protected]
Abstract
We present a novel approach to inference in conditionally Gaussian continuous
time stochastic processes, where the latent process is a Markovian jump process.
We first consider the case of jump-diffusion processes, where the drift of a linear
stochastic differential equation can jump at arbitrary time points. We derive partial
differential equations for exact inference and present a very efficient mean field
approximation. By introducing a novel lower bound on the free energy, we then
generalise our approach to Gaussian processes with arbitrary covariance, such as
the non-Markovian RBF covariance. We present results on both simulated and
real data, showing that the approach is very accurate in capturing latent dynamics
and can be useful in a number of real data modelling tasks.
Introduction
Continuous time stochastic processes are receiving increasing attention within the statistical machine
learning community, as they provide a convenient and physically realistic tool for modelling and
inference in a variety of real world problems. Both continuous state space [1, 2] and discrete state
space [3-5] systems have been considered, with applications ranging from systems biology [6] to
modelling motion capture [7]. Within the machine learning community, Gaussian processes (GPs)
[8] have proved particularly popular, due to their appealing properties which allow to reduce the
infinite dimensional smoothing problem into a finite dimensional regression problem. While GPs
are indubitably a very successful tool in many pattern recognition tasks, their use is restricted to
processes with continuously varying temporal behaviour, which can be a limit in many applications
which exhibit inherently non-stationary or discontinuous behaviour.
In this contribution, we consider the state inference and parameter estimation problems in a wider
class of conditionally Gaussian (or Gaussian-Jump) processes, where the mean evolution of the GP is
determined by the state of a latent (discrete) variable which evolves according to Markovian dynamics. We first consider the special, but important, case where the GP is a Markovian process, i.e. an
Ornstein-Uhlenbeck (OU) process. In this case, exact inference can be derived by using a forwardbackward procedure. This leads to partial differential equations, whose numerical solution can be
computationally expensive; alternatively, a variational approximation leads to an iterative scheme
involving only the numerical solution of ordinary differential equations, and which is extremely
efficient from a computational point of view. We then consider the case of general (non-Markov)
GPs coupled to a Markovian latent variable. Inference in this case is intractable, but, by means of a
Legendre transform, we can derive a lower bound on the exact free energy, which can be optimised
using a saddle point procedure.
1 Conditionally Gaussian Markov Processes
We consider a continuous state stochastic system governed by a linear stochastic differential equation (SDE) with piecewise constant (in time) drift bias which can switch randomly with Markovian
dynamics (see e.g. [9] for a good introduction to stochastic processes). For simplicity, we give the
derivations in the case when there are only two states in the switching process (i.e. it is a random
telegraph process) and the diffusion system is one dimensional; generalisation to more dimensions
or more latent states is straightforward. The system can be written as
$$dx = (A\mu + b - \lambda x)\, dt + \sigma\, dw(t), \qquad \mu(t) \sim \mathrm{TP}(f_\mu), \qquad (1)$$

where w is the Wiener process with variance σ² and μ(t) is a random telegraph process with switching rates f_μ. Our interest in this type of models is twofold: similar models have found applications
in fields like systems biology, where the rapid transitions of regulatory proteins make a switching
latent variable a plausible model [6]. At the same time, at least intuitively, model (1) could be considered as an approximation to more complex non-linear diffusion processes, where diffusion near
local minima of the potential is approximated by linear diffusion.
Let us assume that we observe the process x at a finite number of time points with i.i.d. noise, giving
values
$$y_i \sim \mathcal{N}\big(x(t_i), s^2\big), \qquad i = 1, \ldots, N.$$
For simplicity, we have assumed that the process itself is observed; nothing would change in what
follows if we assumed that the variable y is linearly related to the process (except of course that
we would have more parameters to estimate). The problem we wish to address is the inference of
the joint posterior over both variables x and μ at any time within a certain interval, as well as the
determination of (a subset of) the parameters and hyperparameters involved in equation (1) and in
the observation model.
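To fix ideas, data from model (1) can be simulated with a simple Euler-Maruyama scheme; the sketch below is our own illustration (with a thinned-rate switching step valid for rate·dt ≪ 1, and parameter values chosen only for the example), and also shows how noisy observations would be generated.

```python
import numpy as np

def simulate_gaussian_jump(T, dt, A, b, lam, sigma, f01, f10, rng):
    """Euler-Maruyama simulation of eq. (1):
    dx = (A*mu + b - lam*x) dt + sigma dw,  mu(t) a 0/1 telegraph process."""
    n = int(T / dt)
    x, mu = np.zeros(n), np.zeros(n, dtype=int)
    for t in range(1, n):
        rate = f01 if mu[t - 1] == 0 else f10   # switching rate out of current state
        flip = rng.random() < rate * dt         # valid for rate * dt << 1
        mu[t] = 1 - mu[t - 1] if flip else mu[t - 1]
        drift = A * mu[t] + b - lam * x[t - 1]
        x[t] = x[t - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x, mu

rng = np.random.default_rng(0)
x, mu = simulate_gaussian_jump(T=1000.0, dt=1.0, A=0.03, b=0.01, lam=0.01,
                               sigma=0.1, f01=0.002, f10=0.002, rng=rng)
obs_idx = np.linspace(0, len(x) - 1, 10).astype(int)
y = x[obs_idx] + 0.1 * rng.standard_normal(10)  # i.i.d. Gaussian observations
```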
1.1 Exact state inference
As the system described by equation (1) is a Markovian process, the marginal probability distribution q_μ(x, t) for both state variables μ ∈ {0, 1} and x of the posterior process can be calculated using a smoothing algorithm similar to the one described in [6]. Based on the Markov property one can show that

$$q_\mu(x, t) = \frac{1}{Z}\, p_\mu(x, t)\, \psi_\mu(x, t). \qquad (2)$$

Here p_μ(x, t) denotes the marginal filtering distribution, while ψ_μ(x, t) = p({y_i | t_i > t} | x_t = x, μ_t = μ) is the likelihood of all observations after time t under the condition that the process has state (x, μ) at time t (backward message). The time evolution of the backward message is described by the backward Chapman-Kolmogorov equation for μ ∈ {0, 1} [9]:
$$\frac{\partial \psi_\mu}{\partial t} + (A\mu + b - \lambda x)\,\frac{\partial \psi_\mu}{\partial x} + \frac{\sigma^2}{2}\,\frac{\partial^2 \psi_\mu}{\partial x^2} = f_{1-\mu}\,\big(\psi_\mu(x, t) - \psi_{1-\mu}(x, t)\big). \qquad (3)$$
This PDE must be solved backward in time starting at the last observation y_N using the initial condition

$$\psi_\mu(x, t_N) = p(y_N \mid x(t_N) = x). \qquad (4)$$

The other observations are taken into account by jump conditions

$$\psi_\mu(x, t_j^-) = \psi_\mu(x, t_j^+)\, p(y_j \mid x(t_j) = x), \qquad (5)$$

where ψ_μ(x, t_k^∓) are the values of ψ_μ(x, t) before and after the k-th observation and p(y_j | x(t_j) = x) is given by the noise model.
In order to calculate q_μ(x, t) we need to calculate the filtering distribution p_μ(x, t), too. Its time evolution is given by the forward Chapman-Kolmogorov equation [9]

$$\frac{\partial p_\mu}{\partial t} + \frac{\partial}{\partial x}\Big[(A\mu + b - \lambda x)\, p_\mu(x, t)\Big] - \frac{\sigma^2}{2}\,\frac{\partial^2 p_\mu}{\partial x^2} = f_\mu\, p_{1-\mu}(x, t) - f_{1-\mu}\, p_\mu(x, t). \qquad (6)$$
We can show that the posterior process q_μ(x, t) fulfils a similar PDE by calculating its time derivative and using both (3) and (6). By doing so we find

$$\frac{\partial q_\mu}{\partial t} + \frac{\partial}{\partial x}\Big[\big(A\mu + b - \lambda x + c_\mu(x, t)\big)\, q_\mu(x, t)\Big] - \frac{\sigma^2}{2}\,\frac{\partial^2 q_\mu}{\partial x^2} = g_\mu(x, t)\, q_{1-\mu}(x, t) - g_{1-\mu}(x, t)\, q_\mu(x, t), \qquad (7)$$

where

$$g_\mu(x, t) = f_\mu\, \frac{\psi_\mu(x, t)}{\psi_{1-\mu}(x, t)} \qquad (8)$$

are time and state dependent posterior jump rates, while the drift

$$c_\mu(x, t) = \sigma^2\, \frac{\partial}{\partial x} \log \psi_\mu(x, t) \qquad (9)$$

takes the observations into account. It is clearly visible that (7) is also a forward Chapman-Kolmogorov equation. Consequently, the only differences between prior and posterior process are the jump rates for the telegraph process μ and the drift of the diffusion process x.
1.2 Variational inference
The exact inference approach outlined above gives rise to PDEs which need to be solved numerically in order to estimate the relevant posteriors. For one dimensional GPs this is expensive, but in principle feasible. This work will be deferred to a further publication. Of course, numerical solutions become computationally prohibitive for higher dimensional problems, leading to a need for approximations. We describe here a variational approximation to the joint posterior over the switching process μ(t) and the diffusion process x(t) which gives an upper bound on the true free energy; it is obtained by making a factorised approximation to the probability over paths (x_{0:T}, μ_{0:T}) of the form

$$q(x_{0:T}, \mu_{0:T}) = q_x(x_{0:T})\, q_\mu(\mu_{0:T}), \qquad (10)$$

where q_x is a pure diffusion process (which can be easily shown to be Gaussian) and q_μ is a pure jump process. Considering the KL divergence between the original process (1) and the approximating process, and keeping into account the conditional structure of the model and equation (10), we obtain the following expression for the Kullback-Leibler (KL) divergence between the true and approximating posteriors:

$$\mathrm{KL}[q \| p] = K_0 - \sum_{i=1}^{N} \big\langle \log p(y_i \mid x(t_i)) \big\rangle_{q_x} + \big\langle \mathrm{KL}[q_x \| p(x_{0:T} \mid \mu_{0:T})] \big\rangle_{q_\mu} + \mathrm{KL}[q_\mu \| p(\mu_{0:T})]. \qquad (11)$$
By using the general formula for the KL divergence between two diffusion processes [1], we obtain the following form for the third term in equation (11):

$$\big\langle \mathrm{KL}[q_x \| p(x_{0:T} \mid \mu_{0:T})] \big\rangle_{q_\mu} = \int dt\, \frac{1}{2\sigma^2} \Big\{ [\alpha(t) + \lambda]^2 \big(c^2(t) + m^2(t)\big) + [\beta(t) - b]^2 + 2\,[\alpha(t) + \lambda]\,[\beta(t) - b]\, m(t) + \big(A^2 - 2A\,[\alpha(t) + \lambda]\, m(t) - 2A\,[\beta(t) - b]\big)\, q_1(t) \Big\}. \qquad (12)$$
Here α and β are the gain and bias (coefficients of the linear term and constant) of the drift of the approximating diffusion process, m and c² are the mean and variance of the approximating process, and q_1(t) is the marginal probability at time t of the switch being on (computed using the approximating jump process). So the KL is the sum of an initial condition part (which can be set to zero) and two other parts involving the KL between a Markovian Gaussian process and a Markovian Gaussian process observed linearly with noise (second and third terms) and the KL between two telegraph processes. The variational E-step iteratively minimises these two parts using recursions of the forward-backward type. Interleaved with this, variational M-steps can be carried out by optimising the variational free energy w.r.t. the parameters; the fixed point equations for this are easily derived and will be omitted here due to space constraints. Evaluation of the Hessian of the free energy w.r.t. the parameters can be used to provide a measure of the uncertainty associated.
1.2.1 Computation of the approximating diffusion process
Minimisation of the second and third term in equation (11) requires finding an approximating Gaussian process. By inspection of equation (12), we see that we are trying to compute the posterior process for a discretely observed Gaussian process with (prior) drift A q_1(t) + b − λx, with the observations being i.i.d. with Gaussian noise. Due to the Markovian nature of the process, its single time marginals can be computed using the continuous time version of the well known forward-backward algorithm [10, 11]. The single time posterior marginal can be decomposed as

$$q(x(t)) = p(x(t) \mid y_1, \ldots, y_N) = \frac{1}{Z}\, \nu(x(t))\, \psi(x(t)), \qquad (13)$$

where ν is the filtered process or forward message, and ψ is the backward message, i.e. the likelihood of future observations conditioned on time t. The recursions are based on the following general ODEs linking mean m̃ and variance c̃² of a general Gaussian diffusion process with system noise σ² to the drift coefficients α̃ and β̃ of the respective SDE, which are a consequence of the Fokker-Planck equation for Gaussian processes
$$\frac{d\tilde{m}}{dt} = \tilde{\alpha}\,\tilde{m} + \tilde{\beta}, \qquad \frac{d\tilde{c}^2}{dt} = 2\tilde{\alpha}\,\tilde{c}^2 + \sigma^2. \qquad (14)$$
The filtered process outside the observations satisfies the forward Fokker-Planck equation of the prior process, so its mean and variance can be propagated using equations (14) with prior drift coefficients α̃ = −λ and β̃ = A q_1 + b. Observations are incorporated via the jump conditions

$$\lim_{t \to t_i^+} \nu(x(t)) \;\propto\; p(y_i \mid x(t_i)) \lim_{t \to t_i^-} \nu(x(t)), \qquad (15)$$

whence the recursions on the mean and variances easily follow. Notice that this is much simpler than (discrete time) Kalman filter recursions as the prior gain is zero in continuous time. Computation of the backward message (smoothing) is analogous; the reader is referred to [10, 11] for further details.
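A discretised version of this forward sweep (Euler integration of (14) plus the Gaussian update implied by (15)) is sketched below. The initial condition, the observation-lookup scheme, and all names are our own choices, not the authors' code.

```python
import numpy as np

def forward_filter(t_grid, q1, obs_t, obs_y, A, b, lam, sigma2, s2):
    """Forward sweep of Sec. 1.2.1: propagate mean/variance via eq. (14)
    (prior coefficients alpha = -lam, beta = A*q1(t) + b) and apply the
    Gaussian jump condition of eq. (15) at each observation."""
    dt = t_grid[1] - t_grid[0]
    m = np.zeros_like(t_grid)
    c2 = np.zeros_like(t_grid)
    c2[0] = sigma2 / (2 * lam)  # e.g. the stationary variance as initial condition
    obs = dict(zip(np.searchsorted(t_grid, obs_t), obs_y))
    for i in range(1, len(t_grid)):
        beta = A * q1[i - 1] + b
        m[i] = m[i - 1] + (-lam * m[i - 1] + beta) * dt
        c2[i] = c2[i - 1] + (-2 * lam * c2[i - 1] + sigma2) * dt
        if i in obs:                      # Gaussian update from eq. (15)
            k = c2[i] / (c2[i] + s2)
            m[i] = m[i] + k * (obs[i] - m[i])
            c2[i] = (1 - k) * c2[i]
    return m, c2
```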
1.2.2 Jump process smoothing
Having computed the approximating diffusion process, we now turn to give the updates for the approximating jump process. The KL divergence in equation (11) involves the jump process in two terms: the last term is the KL divergence between the posterior jump process and the prior one, while the third term, which gives the expectation of the KL between the two diffusion processes under the posterior jump, also contains terms involving the jump posterior. The KL divergence between two telegraph processes was calculated in [4]; considering the jump terms coming from equation (12), and adding a Lagrange multiplier to take into account the Master equation fulfilled by the telegraph process, we end up with the following Lagrangian:

$$L[q_1, g_+, g_-, \psi] = \mathrm{KL}[q_\mu \| p_{\mathrm{prior}}] + \int dt\, \frac{1}{2\sigma^2} \big(A^2 - 2A\,[\alpha + \lambda]\, m - 2A\,[\beta - b]\big)\, q_1(t) + \int dt\, \psi(t) \Big[ \frac{dq_1}{dt} + (g_- + g_+)\, q_1 - g_+ \Big]. \qquad (16)$$

Notice we use q_1(t) = q_μ(μ(t) = 1) to lighten the notation. Functional derivatives w.r.t. the posterior rates g_± allow to eliminate them in favour of the Lagrange multipliers; inserting this into the functional derivatives w.r.t. the marginals q_1(t) gives ODEs involving the Lagrange multiplier and the prior rates only (as well as terms from the diffusion process), which can be solved backward in time from the condition ψ(T) = 0. This allows to update the rates and then the posterior marginals can be found in a forward propagation, in a manner similar to [4].
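The forward propagation at the end of this scheme amounts to integrating the master equation for q_1(t) under the current posterior rates; a minimal sketch (with our notation g_+ for the 0→1 rate and g_- for the 1→0 rate) is:

```python
import numpy as np

def propagate_switch_marginal(t_grid, g_plus, g_minus, q1_0=0.5):
    """Euler integration of the master equation constrained in eq. (16):
    dq1/dt = g_plus (1 - q1) - g_minus q1, with time-varying posterior rates."""
    dt = t_grid[1] - t_grid[0]
    q1 = np.empty_like(t_grid)
    q1[0] = q1_0
    for i in range(1, len(t_grid)):
        dq = g_plus[i - 1] * (1 - q1[i - 1]) - g_minus[i - 1] * q1[i - 1]
        q1[i] = q1[i - 1] + dt * dq
    return q1
```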
2 Conditionally Gaussian Processes: general case
In this section, we would like to generalise our model to processes of the form

$$dx = (-\lambda x + A\mu + b)\, dt + df(t), \qquad (17)$$

where the white noise driving process σ dw(t) in (1) is replaced by an arbitrary GP df(t)¹. The application of our variational approximation (11) requires the KL divergence KL[q_x ‖ p(x_{0:T} | μ_{0:T})] between a GP q_x and a GP with a shifted mean function p(x_{0:T} | μ_{0:T}). Assuming the same covariance this could in principle be computed using the Radon-Nikodym derivative between the two measures. Our preliminary results (based on the Cameron-Martin formula for GPs [12]) indicate that even in simple cases (like Ornstein-Uhlenbeck noise) the measures are not absolutely continuous and the KL divergence is infinite. Hence, we have resorted to a different variational approach, which is based on a lower bound to the free energy.

We use the fact that, conditioned on the path of the switching process μ_{0:T}, the prior of x(t) is a GP with a covariance kernel K(t, t′) and can be marginalised out exactly. The kernel K can be easily computed from the kernel of the driving noise process f(t) [2]. In the previous case of white noise, K is given by the (nonstationary) Ornstein-Uhlenbeck kernel

$$K_{\mathrm{OU}}(t, t') = \frac{\sigma^2}{2\lambda} \Big\{ e^{-\lambda |t - t'|} - e^{-\lambda (t + t')} \Big\}.$$

The mean function of the conditioned GP is obtained by solving the linear ODE (17) without noise, i.e. with f = 0. This yields

$$\mathbb{E}_{\mathrm{GP}}[x(t) \mid \mu_{0:T}] = \int_0^t e^{-\lambda (t - s)}\, \big(A\mu(s) + b\big)\, ds. \qquad (18)$$
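Equation (18) is just the solution of a linear ODE, so for a discretised path of μ the conditional mean can be accumulated forward in time; a sketch under that discretisation (our own code):

```python
import numpy as np

def conditional_mean(t_grid, mu_path, A, b, lam):
    """Discretised eq. (18): x_bar(t) = int_0^t exp(-lam (t - s)) (A mu(s) + b) ds,
    computed by Euler-integrating d(x_bar)/dt = -lam x_bar + A mu + b."""
    dt = t_grid[1] - t_grid[0]
    xbar = np.zeros_like(t_grid)
    for i in range(1, len(t_grid)):
        xbar[i] = xbar[i - 1] + dt * (-lam * xbar[i - 1] + A * mu_path[i - 1] + b)
    return xbar
```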
Marginalising out the conditional GP, the negative log marginal probability of observations (free energy) F = −ln p(D) is represented as

$$F = -\ln \mathbb{E}_\mu\big[p(D \mid \mu_{0:T})\big] = \kappa - \ln \mathbb{E}_\mu \exp\Big\{ -\tfrac{1}{2}\, (y - \bar{x})^\top (K + s^2 I)^{-1} (y - \bar{x}) \Big\}. \qquad (19)$$

Here E_μ denotes expectation over the prior switching process p_μ, y is the vector of observations, and x̄ = E_GP[(x(t_1), . . . , x(t_N))^⊤ | μ_{0:T}] is the vector of conditional means at observation times t_i. K is the kernel matrix and κ = ½ ln(|2π(K + s²I)|). This intractable free energy contains a functional in
the exponent which is bilinear in the switching process μ. In the spirit of other variational transformations [13, 14], this can be linearised through a Legendre transform (or convex duality). Applying

$$\tfrac{1}{2}\, z^\top A^{-1} z = \max_{\Lambda} \Big\{ \Lambda^\top z - \tfrac{1}{2}\, \Lambda^\top A \Lambda \Big\}$$

to the vector z = (y − x̄) and the matrix A = (K + s²I), and exchanging the max operation with the expectation over μ, leads to the lower bound
$$F \;\geq\; \kappa + \max_{\Lambda} \Big\{ -\tfrac{1}{2}\, \Lambda^\top (K + s^2 I)\, \Lambda - \ln \mathbb{E}_\mu \exp\big( -\Lambda^\top (y - \bar{x}) \big) \Big\}. \qquad (20)$$
A similar upper bound, which is however harder to evaluate computationally, will be presented elsewhere. It can be shown that the lower bound (20) neglects the variance of the E_μ[x̄] process (intuitively, the two point expectations in (19) are dropped). The second term in the bracket looks like the free energy for a jump process model having a (pseudo) log likelihood of the data given by −Λᵀ(y − x̄). This auxiliary free energy can again be rewritten in terms of the "standard variational" representation

$$-\ln \mathbb{E}_\mu \exp\big( -\Lambda^\top (y - \bar{x}) \big) = \min_{q} \Big\{ \mathrm{KL}[q \| p_{\mathrm{prior}}] + \Lambda^\top \big(y - \mathbb{E}_q[\bar{x}]\big) \Big\}, \qquad (21)$$
q
where in the second line we have introduced an arbitrary process q over the switching variable and
used standard variational manipulations. Inserting (18) into the last term in (21), we see that this KL
minimisation is of the same structure as the one in equation (16) with a linear functional of q in the
(pseudo) likelihood term. Therefore the minimiser q is an inhomogeneous Markov jump process,
and we can use a backward and forward sweep to compute marginals q1 (t) exactly for a fixed ?!
These marginals are used to compute the gradient of the lower bound (K + ? 2 I)? + (y ? Eq [x? ])
and we iterate between gradient ascent steps and recomputations of Eq [x? ]. Since the minimax
problem defined by (20) and (21) is concave in ? and convex in q the solution must be unique. Upon
convergence, we use the switching process marginals q1 for prediction. Statistics of the smoothed
x process can then be computed by summing the conditional GP statistics (obtained by exact GP
regression) and the x? statistics, which can be computed using the same methods as in [6].
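Schematically, the resulting saddle-point procedure alternates gradient steps on Λ with jump-process sweeps. In the sketch below the inner minimisation (21) is hidden behind a user-supplied callable, and the fixed step size is our simplification.

```python
import numpy as np

def saddle_point_bound(y, K, s2, expected_xbar, lr=0.01, n_steps=500):
    """Gradient ascent on Lambda for the lower bound (20). `expected_xbar`
    is a callable returning E_q[x_bar] at the observation times for the
    current Lambda (obtained from a forward-backward sweep of the jump
    process, as described in the text)."""
    Lam = np.zeros_like(y)
    M = K + s2 * np.eye(len(y))
    for _ in range(n_steps):
        xbar = expected_xbar(Lam)       # recompute E_q[x_bar] given Lambda
        grad = -M @ Lam + (y - xbar)    # gradient of the bound w.r.t. Lambda
        Lam += lr * grad
    return Lam
```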
¹ In case of a process with smooth sample paths, we can write df(t) = g(t) dt with an "ordinary" GP g.
Figure 1: Results on synthetic data. Variational Markovian Gaussian-Jump process on the left, approximate RBF Gaussian-Jump process on the right. Top row: inferred posterior jump means (solid line) and true jump profile (dotted black). Bottom row: inferred posterior mean x (solid) with confidence intervals (dotted red); data points are shown as red crosses, and the true sample profile is shown as black dots. Notice that the less confident jump prediction for the RBF process gives a much higher uncertainty in the x prediction (see text). The x axis units are the simulation time steps.
3 Results

3.1 Synthetic data
To evaluate the performance and identifiability of our model, we experimented first with a simple
one-dimensional synthetic data set generated using a jump profile with only two jumps. A sample
from the resulting conditional Gaussian process was then obtained by simulating the SDE using
the Euler-Maruyama method, and ten identically spaced points were then taken from the sample
path and corrupted with Gaussian noise. Inference was then carried out using two procedures: a
Markovian Gaussian-Jump process as described in Section 1, using the variational algorithm, and
an "RBF" Gaussian-Jump process with slowly varying covariance, as described in Section 2. The parameters s², σ² and f_μ were kept fixed, while the A, b and λ hyperparameters were optimised using type II ML.
The inference results are shown in Figure 1: the left column gives the results of the variational
smoothing, while the right column gives the results obtained by fitting a RBF Gaussian-Jump process. The top row shows the inferred posterior mean of the discrete state distribution, while the
bottom row gives the conditionally Gaussian posterior. We notice that both approaches provide a
good smoothing of the GP and the jump process, although the second jump is inferred as being
slightly later than in the true path. Notice that the uncertainties associated with the RBF process are
much higher than in the Markovian one, and are dominated by the uncertainty in the posterior mean
caused by the uncertainty in the jump process, which is less confident than in the Markovian case
(top right figure). This is probably due to the fact that the lower bound (20) ignores the contributions
of the variance of the x? term in the free energy, which is due to the variance of the jump process, and hence removes the penalty for having intermediate jump posteriors. A similar behaviour
was already noted in a related context in [14]. In terms of computational efficiency, the variational
Markovian algorithm converged in approximately 0.1 seconds on a standard laptop, while the RBF
process took approximately two minutes. As a baseline, we used a standard discrete time Switching
Figure 2: Results on double well diffusion. Left: inferred posterior switch mean; right: smoothed data, with confidence intervals. The x axis units are the simulation time steps.
Kalman Filter in the implementation of [15], but did not manage to obtain good results. It is not
clear whether the problem resided in the short time series or in our application of the model.
Estimation of the parameters using the variational upper bound also gave very accurate results, with A = 3.1 ± 0.3 × 10⁻² (true value 3 × 10⁻²), b = 1.0 ± 2 × 10⁻² (true value 1 × 10⁻²) and λ = 1.1 ± 0.1 × 10⁻² (true value 1 × 10⁻²). It is interesting to note that, if the system noise parameter σ² was set at a higher value, then the A parameter was always driven to zero, leading to a
decoupling of the Gaussian and jump processes. In fact, it can be shown that the true free energy has
always a local minimum for A = 0: heuristically, the GP is always a sufficiently flexible model to fit
the data on its own. However, for small levels of system noise, the evidence of the data is such that
the more complex model involving a jump process is favoured, giving a type of automated Occam
razor, which is one of the main attractions of Bayesian modelling.
3.2 Diffusion in a double-well potential
To illustrate the properties of the Gaussian-jump process as an approximator for non-linear stochastic models, we considered the benchmark problem of smoothing data generated from a SDE with
double-well potential drift and constant diffusion coefficient. Since the process we wish to approximate is a diffusion process, we use the variational upper bound method, which gave good results
in the synthetic experiments. The data we use is the same as the one used in [1], where a nonstationary Gaussian approximation to the non-linear SDE was proposed by means of a variational
approximation. The results are shown in Figure 2: as is evident the method both captures accurately
the transition time, and provides an excellent smoothing (very similar to the one reported in [1]);
these results were obtained in 0.07 seconds, while the Gaussian process approximation of [1] involves gradient descent in a high dimensional space and takes approximately three to four orders of
magnitude longer. Naturally, our method cannot be used in this case to estimate the parameters of
the true (double well) prior drift, as it only models the linear behaviour near the bottom of each well;
however, for smoothing purposes it provides a very accurate and efficient alternative method.
3.3 Regulation of competence in B. subtilis
Regulation of gene expression at the transcriptional level provides an important application, as well
as motivation for the class of models we have been considering. Transcription rates are modulated
by the action of transcription factors (TFs), DNA binding proteins which can be activated fast in
response to environmental signals. The activation state of a TF is a notoriously difficult quantity
to measure experimentally; this has motivated a significant effort within the machine learning and
systems biology community to provide models to infer TF activities from more easily measurable
gene expression levels [2, 16, 17]. In this section, we apply our model to single cell fluorescence
measurements of protein concentrations; the intrinsic stochasticity inherent in single cell data would
make conditionally deterministic models such as [2, 6] an inappropriate tool, while our variational
SDE model should be able to better capture the inherent fluctuations.
The data we use was obtained in [18] during a study of the genetic regulation of competence in
B. subtilis: briefly, bacteria under food shortage can either enter a dormant stage (spore) or can
Figure 3: Results on competence circuit. Left: inferred posterior switch mean (ComK activity profile); right: smoothed ComS data, with confidence intervals. The y axis units in the right hand panel are arbitrary fluorescence units.
continue to replicate their DNA without dividing (competence). Competence is essentially a bet that
the food shortage will be short-lived: in that case, the competent cell can immediately divide into
many daughter cells, giving an evolutionary advantage. The molecular mechanisms underpinning
competence are quite complex, but the essential behaviour can be captured by a simple system
involving only two components: the competence regulator ComK and the auxiliary protein ComS,
which is controlled by ComK with a switch-like behaviour (Hill coefficient 5). In [18], ComK
activity was indirectly estimated using a gene reporter system (using the ComG promoter). Here,
we leave ComK as a latent switching variable, and use our model to smooth the ComS data. The
results are shown in Figure 3, showing a clear switch behaviour for ComK activity (as expected, and
in agreement with the high Hill coefficient), and a good smoothing of the ComS data. Analysis of the
optimal parameters is also instructive: while the A and b parameters are not so informative due to the
fact that fluorescence measurements are reported in arbitrary units, the ComS decay rate is estimated
as 0.32 ? 0.06h?1 , corresponding to a half life of approximately 3 hours, which is clearly plausible
from the data. It should be pointed out that, in the simulations in the supplementary material of [18],
a nominal value of 0.0014 s?1 was used, corresponding to a half life of only 20 minutes! While
the purpose of that simulation was to recreate the qualitative behaviour of the system, rather than to
estimate its parameters, the use of such an implausible parameter value illustrates all too well the
need for appropriate data-driven tools in modelling complex systems.
4 Discussion
In this contribution we proposed a novel inference methodology for continuous time conditionally
Gaussian processes. As well as being interesting in its own right as a method for inference in
jump-diffusion processes (to our knowledge the first to be proposed), these models find a powerful
motivation due to their relevance to fields such as systems biology, as well as plausible approximations to non-linear diffusion processes. We presented both a method based on a variational upper
bound in the case of Markovian processes, and a more general lower bound which holds also for
non-Markovian Gaussian processes.
A natural question from the machine learning point of view is what are the advantages of continuous
time over discrete time approaches. As well as providing a conceptually more correct description of
the system, continuous time approaches have at least two significant advantages in our view: a computational advantage in the availability of more stable solvers (such as Runge-Kutta methods), and
a communication advantage, as they are more immediately understandable to the large community
of modellers which use differential equations but may not be familiar with statistical methods.
There are several possible extension to the work we presented: a relatively simple task would be
an extension to a factorial design such as the one proposed for conditionally deterministic systems
in [14]. A theoretical task of interest would be a thorough investigation of the relationship between
the upper and lower bounds we presented. This is possible, at least for Markovian GPs, but will be
presented in other work.
References
[1] Cedric Archambeau, Dan Cornford, Manfred Opper, and John Shawe-Taylor. Gaussian process
approximations of stochastic differential equations. Journal of Machine Learning Research
Workshop and Conference Proceedings, 1(1):1?16, 2007.
[2] Neil D. Lawrence, Guido Sanguinetti, and Magnus Rattray. Modelling transcriptional regulation using Gaussian processes. In Advances in Neural Information Processing Systems 19,
2006.
[3] Uri Nodelman, Christian R. Shelton, and Daphne Koller. Continuous time Bayesian networks.
In Proceedings of the Eighteenth conference on Uncertainty in Artificial Intelligence (UAI),
2002.
[4] Manfred Opper and Guido Sanguinetti. Variational inference for Markov jump processes. In
Advances in Neural Information Processing Systems 20, 2007.
[5] Ido Cohn, Tal El-Hay, Nir Friedman, and Raz Kupferman. Mean field variational approximation for continuous-time Bayesian networks. In Proceedings of the twenty-fifth conference
on Uncertainty in Artificial Intelligence (UAI), 2009.
[6] Guido Sanguinetti, Andreas Ruttor, Manfred Opper, and Cedric Archambeau. Switching regulatory models of cellular stress response. Bioinformatics, 25(10):1280–1286, 2009.
[7] Mauricio Alvarez, David Luengo, and Neil D. Lawrence. Latent force models. In Proceedings
of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS),
2009.
[8] Carl E. Rasmussen and Christopher K.I. Williams. Gaussian Processes for Machine Learning.
MIT press, 2005.
[9] C. W. Gardiner. Handbook of Stochastic Methods. Springer, Berlin, second edition, 1996.
[10] Andreas Ruttor and Manfred Opper. Efficient statistical inference for stochastic reaction processes. Phys. Rev. Lett., 103(23), 2009.
[11] Cedric Archambeau and Manfred Opper. Approximate inference for continuous-time Markov
processes. In David Barber, Taylan Cemgil, and Silvia Chiappa, editors, Inference and Learning in Dynamic Models. Cambridge University Press, 2010.
[12] M. A. Lifshits. Gaussian Random Functions. Kluwer, Dordrecht, second edition, 1995.
[13] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37:183–233, 1999.
[14] Manfred Opper and Guido Sanguinetti. Learning combinatorial transcriptional dynamics from
gene expression data. Bioinformatics, 26(13):1623–1629, 2010.
[15] David Barber. Expectation correction for smoothing in switching linear Gaussian state space
models. Journal of Machine Learning Research, 7:2515–2540, 2006.
[16] James C. Liao, Riccardo Boscolo, Young-Lyeol Yang, Linh My Tran, Chiara Sabatti, and
Vwani P. Roychowdhury. Network component analysis: Reconstruction of regulatory signals
in biological systems. Proceedings of the National Academy of Sciences USA, 100(26):15522–15527, 2003.
[17] Martino Barenco, Daniela Tomescu, David Brewer, Robin Callard, Jaroslav Stark, and Michael
Hubank. Ranked prediction of p53 targets using hidden variable dynamical modelling. Genome
Biology, 7(3), 2006.
[18] Gürol M. Süel, Jordi Garcia-Ojalvo, Louisa M. Liberman, and Michael B. Elowitz. An excitable gene regulatory circuit induces transient cellular differentiation. Nature, 440:545–550,
2006.
Variable margin losses for classifier design
Nuno Vasconcelos
Statistical Visual Computing Laboratory,
University of California, San Diego
La Jolla, CA 92039
[email protected]
Hamed Masnadi-Shirazi
Statistical Visual Computing Laboratory,
University of California, San Diego
La Jolla, CA 92039
[email protected]
Abstract
The problem of controlling the margin of a classifier is studied. A detailed analytical study is presented on how properties of the classification risk, such as
its optimal link and minimum risk functions, are related to the shape of the loss,
and its margin enforcing properties. It is shown that for a class of risks, denoted
canonical risks, asymptotic Bayes consistency is compatible with simple analytical relationships between these functions. These enable a precise characterization
of the loss for a popular class of link functions. It is shown that, when the risk is
in canonical form and the link is inverse sigmoidal, the margin properties of the
loss are determined by a single parameter. Novel families of Bayes consistent loss
functions, of variable margin, are derived. These families are then used to design
boosting style algorithms with explicit control of the classification margin. The
new algorithms generalize well established approaches, such as LogitBoost. Experimental results show that the proposed variable margin losses outperform the
fixed margin counterparts used by existing algorithms. Finally, it is shown that
best performance can be achieved by cross-validating the margin parameter.
1 Introduction
Optimal classifiers minimize the expected value of a loss function, or risk. Losses commonly used
in machine learning are upper-bounds on the zero-one classification loss of classical Bayes decision
theory. When the resulting classifier converges asymptotically to the Bayes decision rule, as training
samples increase, the loss is said to be Bayes consistent. Examples of such losses include the hinge
loss, used in SVM design, the exponential loss, used by boosting algorithms such as AdaBoost,
or the logistic loss, used in both classical logistic regression and more recent methods, such as
LogitBoost. Unlike the zero-one loss, these losses assign a penalty to examples correctly classified
but close to the boundary. This guarantees a classification margin, and improved generalization
when learning from finite datasets [1]. Although the connections between large-margin classification
and classical decision theory have been known since [2], the set of Bayes consistent large-margin
losses has remained small. Most recently, the design of such losses has been studied in [3]. By
establishing connections to the classical literature in probability elicitation [4], this work introduced
a generic framework for the derivation of Bayes consistent losses. The main idea is that there are
three quantities that matter in risk minimization: the loss function $\phi$, a corresponding optimal link
function $f^*_\phi$, which maps posterior class probabilities to classifier predictions, and a minimum risk
$C^*_\phi$, associated with the optimal link.
While the standard approach to classifier design is to define a loss $\phi$, and then optimize it to obtain
$f^*_\phi$ and $C^*_\phi$, [3] showed that there is an alternative: to specify $f^*_\phi$ and $C^*_\phi$, and analytically derive the
loss $\phi$. The advantage is that this makes it possible to manipulate the properties of the loss, while
guaranteeing that it is Bayes consistent. The practical relevance of this approach is illustrated in [3],
where a Bayes consistent robust loss is derived, for application in problems involving outliers. This
loss is then used to design a robust boosting algorithm, denoted SavageBoost. SavageBoost has been,
more recently, shown to outperform most other boosting algorithms in computer vision problems,
where outliers are prevalent [5]. The main limitation of the framework of [3] is that it is not totally
constructive. It turns out that many pairs $(C^*_\phi, f^*_\phi)$ are compatible with any Bayes consistent loss $\phi$.
Furthermore, while there is a closed form relationship between $\phi$ and $(C^*_\phi, f^*_\phi)$, this relationship is
far from simple. This makes it difficult to understand how the properties of the loss are influenced
by the properties of either $C^*_\phi$ or $f^*_\phi$. In practice, the design has to resort to trial and error, by 1)
testing combinations of the latter and, 2) verifying whether the loss has the desired properties. This
is feasible when the goal is to enforce a broad loss property, e.g. that a robust loss should be bounded
for negative margins [3], but impractical when the goal is to exercise a finer degree of control.
In this work, we consider one such problem: how to control the size of the margin enforced by the
loss. We start by showing that, while many pairs $(C^*_\phi, f^*_\phi)$ are compatible with a given $\phi$, one of these
pairs establishes a very tight connection between the optimal link and the minimum risk: that $f^*_\phi$ is
the derivative of $C^*_\phi$. We refer to the risk function associated with such a pair as a canonical risk,
and show that it leads to an equally tight connection between the pair $(C^*_\phi, f^*_\phi)$ and the loss $\phi$. For
a canonical risk, all three functions can be obtained from each other with one-to-one mappings of
trivial analytical tractability. This enables a detailed analytical study of how $C^*_\phi$ or $f^*_\phi$ affect $\phi$. We
consider the case where the inverse of $f^*_\phi$ is a sigmoidal function, i.e. $f^*_\phi$ is inverse-sigmoidal, and
show that this strongly constrains the loss. Namely, the latter becomes 1) convex, 2) monotonically
decreasing, 3) linear for large negative margins, and 4) constant for large positive margins. This
implies that, for a canonical risk, the choice of a particular link in the inverse-sigmoidal family
only impacts the behavior of $\phi$ around the origin, i.e. the size of the margin enforced by the loss.
This quantity is then shown to depend only on the slope of the sigmoidal inverse-link at the origin.
Since this property can be controlled by a single parameter, the latter becomes a margin-tuning
parameter, i.e. a parameter that determines the margin of the optimal classifier. This is exploited to
design parametric families of loss functions that allow explicit control of the classification margin.
These losses are applied to the design of novel boosting algorithms of tunable margin. Finally,
it is shown that the requirements of 1) a canonical risk, and 2) an inverse-sigmoidal link are not
unduly restrictive for classifier design. In fact, approaches like logistic regression or LogitBoost
are special cases of the proposed framework. A number of experiments are conducted to study the
effect of margin-control on the classification accuracy. It is shown that the proposed variable-margin
losses outperform the fixed-margin counterparts used by existing algorithms. Finally, it is shown that
cross-validation of the margin parameter leads to classifiers with the best performance on all datasets
tested.
2 Loss functions for classification
We start by briefly reviewing the theory of Bayes consistent classifier design. See [2, 6, 7, 3] for
further details. A classifier $h$ maps a feature vector $x \in \mathcal{X}$ to a class label $y \in \{-1, 1\}$. This
mapping can be written as $h(x) = \text{sign}[p(x)]$ for some function $p : \mathcal{X} \to \mathbb{R}$, which is denoted
as the classifier predictor. Feature vectors and class labels are drawn from probability distributions
$P_X(x)$ and $P_Y(y)$ respectively. Given a non-negative loss function $L(x, y)$, the classifier is optimal
if it minimizes the risk $R(f) = E_{X,Y}[L(h(x), y)]$. This is equivalent to minimizing the conditional
risk $E_{Y|X}[L(h(x), y)|X = x]$ for all $x \in \mathcal{X}$. It is useful to express $p(x)$ as a composition of
two functions, $p(x) = f(\eta(x))$, where $\eta(x) = P_{Y|X}(1|x)$, and $f : [0, 1] \to \mathbb{R}$ is a link function.
Classifiers are frequently designed to be optimal with respect to the zero-one loss
$$L_{0/1}(f, y) = \frac{1 - \text{sign}(yf)}{2} = \begin{cases} 0, & \text{if } y = \text{sign}(f); \\ 1, & \text{if } y \neq \text{sign}(f), \end{cases} \quad (1)$$
where we omit the dependence on x for notational simplicity. The associated conditional risk is
$$C_{0/1}(\eta, f) = \eta\,\frac{1 - \text{sign}(f)}{2} + (1 - \eta)\,\frac{1 + \text{sign}(f)}{2} = \begin{cases} 1 - \eta, & \text{if } f \geq 0; \\ \eta, & \text{if } f < 0. \end{cases} \quad (2)$$
The risk is minimized if
$$f(x) > 0 \text{ if } \eta(x) > \tfrac{1}{2}, \qquad f(x) = 0 \text{ if } \eta(x) = \tfrac{1}{2}, \qquad f(x) < 0 \text{ if } \eta(x) < \tfrac{1}{2}. \quad (3)$$
Table 1: Loss $\phi$, optimal link $f^*_\phi(\eta)$, optimal inverse link $[f^*_\phi]^{-1}(v)$, and minimum conditional risk $C^*_\phi(\eta)$
for popular learning algorithms.

Algorithm           | $\phi(v)$          | $f^*_\phi(\eta)$                     | $[f^*_\phi]^{-1}(v)$      | $C^*_\phi(\eta)$
SVM                 | $\max(1-v, 0)$     | $\text{sign}(2\eta-1)$               | NA                        | $1 - |2\eta - 1|$
Boosting            | $\exp(-v)$         | $\frac{1}{2}\log\frac{\eta}{1-\eta}$ | $\frac{e^{2v}}{1+e^{2v}}$ | $2\sqrt{\eta(1-\eta)}$
Logistic Regression | $\log(1 + e^{-v})$ | $\log\frac{\eta}{1-\eta}$            | $\frac{e^{v}}{1+e^{v}}$   | $-\eta\log\eta - (1-\eta)\log(1-\eta)$
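The entries of Table 1 can be verified numerically. The following is a minimal sketch (not from the paper; all function names are ours) that transcribes the table into Python and checks that evaluating the conditional risk of Equation (5) at the optimal link reproduces the minimum-risk column:

```python
import numpy as np

losses = {
    "svm":      lambda v: np.maximum(1.0 - v, 0.0),
    "boosting": lambda v: np.exp(-v),
    "logistic": lambda v: np.log1p(np.exp(-v)),
}
links = {  # f*(eta); the SVM link is not invertible, hence the NA entry
    "svm":      lambda eta: np.sign(2 * eta - 1),
    "boosting": lambda eta: 0.5 * np.log(eta / (1.0 - eta)),
    "logistic": lambda eta: np.log(eta / (1.0 - eta)),
}
min_risks = {  # C*(eta) = C(eta, f*(eta))
    "svm":      lambda eta: 1.0 - np.abs(2 * eta - 1),
    "boosting": lambda eta: 2.0 * np.sqrt(eta * (1.0 - eta)),
    "logistic": lambda eta: -eta * np.log(eta) - (1 - eta) * np.log(1 - eta),
}

eta = 0.3
for name in losses:
    f = links[name](eta)
    risk = eta * losses[name](f) + (1 - eta) * losses[name](-f)
    assert np.isclose(risk, min_risks[name](eta))
```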
Examples of optimal link functions include $f^* = 2\eta - 1$ and $f^* = \log\frac{\eta}{1-\eta}$. The associated optimal
classifier $h^* = \text{sign}[f^*]$ is the well known Bayes decision rule (BDR), and the associated minimum
conditional (zero-one) risk is
$$C^*_{0/1}(\eta) = \eta\left(\tfrac{1}{2} - \tfrac{1}{2}\,\text{sign}(2\eta-1)\right) + (1-\eta)\left(\tfrac{1}{2} + \tfrac{1}{2}\,\text{sign}(2\eta-1)\right). \quad (4)$$
A loss which is minimized by the BDR is Bayes consistent. A number of Bayes consistent alternatives to the 0-1 loss are commonly used. These include the exponential loss of boosting, the log
loss of logistic regression, and the hinge loss of SVMs. They have the form $L_\phi(f, y) = \phi(yf)$, for
different functions $\phi$. These functions assign a non-zero penalty to small positive $yf$, encouraging
the creation of a margin, a property not shared by the 0-1 loss. The resulting large-margin classifiers
have better generalization than those produced by the latter [1]. The associated conditional risk
$$C_\phi(\eta, f) = \eta\,\phi(f) + (1 - \eta)\,\phi(-f) \quad (5)$$
is minimized by the link
$$f^*_\phi(\eta) = \arg\min_f C_\phi(\eta, f) \quad (6)$$
leading to the minimum conditional risk function $C^*_\phi(\eta) = C_\phi(\eta, f^*_\phi)$. Table 1 lists the loss, optimal
link, and minimum risk of some of the most popular classifier design methods.
Conditional risk minimization is closely related to classical probability elicitation in statistics [4].
Here, the goal is to find the probability estimator $\hat\eta$ that maximizes the expected reward
$$I(\eta, \hat\eta) = \eta\,I_1(\hat\eta) + (1 - \eta)\,I_{-1}(\hat\eta), \quad (7)$$
where $I_1(\hat\eta)$ is the reward for prediction $\hat\eta$ when event $y = 1$ holds and $I_{-1}(\hat\eta)$ the corresponding
reward when $y = -1$. The functions $I_1(\cdot)$, $I_{-1}(\cdot)$ should be such that the expected reward is
maximal when $\hat\eta = \eta$, i.e.
$$I(\eta, \hat\eta) \leq I(\eta, \eta) = J(\eta), \quad \forall \hat\eta \quad (8)$$
with equality if and only if $\hat\eta = \eta$. The conditions under which this holds are as follows.

Theorem 1. [4] Let $I(\eta, \hat\eta)$ and $J(\eta)$ be as defined in (7) and (8). Then 1) $J(\eta)$ is convex and
2) (8) holds if and only if
$$I_1(\eta) = J(\eta) + (1 - \eta)J'(\eta) \quad (9)$$
$$I_{-1}(\eta) = J(\eta) - \eta J'(\eta). \quad (10)$$
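Theorem 1 is easy to exercise numerically. The sketch below (an illustration of ours, not the authors' code) builds $I_1$ and $I_{-1}$ from a convex $J$ via Equations (9)-(10) and verifies the elicitation property (8), here for the negative entropy $J(\eta) = \eta\log\eta + (1-\eta)\log(1-\eta)$:

```python
import numpy as np

J  = lambda e: e * np.log(e) + (1 - e) * np.log(1 - e)
dJ = lambda e: np.log(e / (1 - e))               # J'(eta)
I1  = lambda e: J(e) + (1 - e) * dJ(e)           # Equation (9)
Im1 = lambda e: J(e) - e * dJ(e)                 # Equation (10)

grid = np.linspace(0.01, 0.99, 99)
for eta in grid:
    I = eta * I1(grid) + (1 - eta) * Im1(grid)   # I(eta, eta_hat) over all eta_hat
    assert np.all(I <= J(eta) + 1e-12)           # bounded by J(eta), Equation (8)
    assert np.isclose(eta * I1(eta) + (1 - eta) * Im1(eta), J(eta))  # tight at eta_hat = eta
```

For this particular $J$, Equations (9)-(10) reduce to $I_1(\eta) = \log\eta$ and $I_{-1}(\eta) = \log(1-\eta)$, so (8) becomes the familiar cross-entropy inequality.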
Hence, starting from any convex $J(\eta)$, it is possible to derive $I_1(\cdot)$, $I_{-1}(\cdot)$ so that (8) holds. This
enables the following connection to risk minimization.

Theorem 2. [3] Let $J(\eta)$ be as defined in (8) and $f$ a continuous function. If the following properties hold

1. $J(\eta) = J(1 - \eta)$,
2. $f$ is invertible with symmetry
$$f^{-1}(-v) = 1 - f^{-1}(v), \quad (11)$$

then the functions $I_1(\cdot)$ and $I_{-1}(\cdot)$ derived with (9) and (10) satisfy the following equalities
$$I_1(\eta) = -\phi(f(\eta)) \quad (12)$$
$$I_{-1}(\eta) = -\phi(-f(\eta)), \quad (13)$$
with
$$\phi(v) = -J[f^{-1}(v)] - (1 - f^{-1}(v))\,J'[f^{-1}(v)]. \quad (14)$$

Under the conditions of the theorem, $I(\eta, \hat\eta) = -C_\phi(\eta, f)$. This establishes a new path for classifier
design [3]. Rather than specifying a loss $\phi$ and minimizing $C_\phi(\eta, f)$, so as to obtain whatever
optimal link $f^*_\phi$ and minimum expected risk $C^*_\phi(\eta)$ results, it is possible to specify $f^*_\phi$ and $C^*_\phi(\eta)$
and derive, from (14) with $J(\eta) = -C^*_\phi(\eta)$, the underlying loss $\phi$. The main advantage is the ability
to control directly the quantities that matter for classification, namely the predictor and risk of the
optimal classifier. The only conditions are that $C^*_\phi(\eta) = C^*_\phi(1 - \eta)$ and (11) holds for $f^*_\phi$.
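Equation (14) is fully constructive once a $(C^*, f^*)$ pair is chosen. As a quick sanity check (our sketch, with the logistic pair of Table 1), the recovered loss should coincide with $\log(1 + e^{-v})$:

```python
import numpy as np

J     = lambda e: e * np.log(e) + (1 - e) * np.log(1 - e)   # J = -C*
dJ    = lambda e: np.log(e / (1 - e))                       # J'
f_inv = lambda v: 1.0 / (1.0 + np.exp(-v))                  # [f*]^{-1}

def phi_from_pair(v):
    p = f_inv(v)
    return -J(p) - (1.0 - p) * dJ(p)                        # Equation (14)

v = np.linspace(-5, 5, 101)
assert np.allclose(phi_from_pair(v), np.log1p(np.exp(-v)))
```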
3 Canonical risk minimization
In general, given $J(\eta) = -C^*_\phi(\eta)$, there are multiple pairs $(\phi, f^*_\phi)$ that satisfy (14). Hence, specification of either the minimum risk or optimal link does not completely characterize the loss. This
makes it difficult to control some important properties of the latter, such as the margin. In this work,
we consider an important special case, where such control is possible. We start with a lemma that
relates the symmetry conditions, on $J(\eta)$ and $f^*_\phi(\eta)$, of Theorem 2.

Lemma 3. Let $J(\eta)$ be a strictly convex and differentiable function such that $J(\eta) = J(1 - \eta)$.
Then $J'(\eta)$ is invertible and
$$[J']^{-1}(-v) = 1 - [J']^{-1}(v). \quad (15)$$

Hence, under the conditions of Theorem 2, the derivative of $J(\eta)$ has the same symmetry as $f^*_\phi(\eta)$.
Since this symmetry is the only constraint on $f^*_\phi$, the former can be used as the latter. Whenever this
holds, the risk is said to be in canonical form, and $(f^*, J)$ are denoted a canonical pair [6].
Definition 1. Let $J(\eta)$ be as defined in (8), and $C^*_\phi(\eta) = -J(\eta)$ a minimum risk. If the optimal link
associated with $C^*_\phi(\eta)$ is
$$f^*_\phi(\eta) = J'(\eta) \quad (16)$$
the risk $C_\phi(\eta, f)$ is said to be in canonical form. $f^*_\phi(\eta)$ is denoted a canonical link and $\phi(v)$, the
loss given by (14), a canonical loss.
Note that (16) does not hold for all risks. For example, the risk of boosting is derived from the
convex, differentiable, and symmetric $J(\eta) = -2\sqrt{\eta(1-\eta)}$. Since this has derivative
$$J'(\eta) = \frac{2\eta - 1}{\sqrt{\eta(1-\eta)}} \neq \frac{1}{2}\log\frac{\eta}{1-\eta} = f^*_\phi(\eta), \quad (17)$$
the risk is not in canonical form. What follows from (16) is that it is possible to derive a canonical
risk for any maximal reward $J(\eta)$, including that of boosting ($J(\eta) = -2\sqrt{\eta(1-\eta)}$). This is
discussed in detail in Section 5.
While canonical risks can be easily designed by specifying either $J(\eta)$ or $f^*_\phi(\eta)$, and then using (14)
and (16), it is much less clear how to directly specify a loss $\phi(v)$ for which (14) holds with a
canonical pair $(f^*, J)$. The following result solves this problem.

Theorem 4. Let $C_\phi(\eta, f)$ be the canonical risk derived from a convex and symmetric $J(\eta)$. Then
$$\phi'(v) = -[J']^{-1}(-v) = [f^*_\phi]^{-1}(v) - 1. \quad (18)$$
[Figure 1 appears here. Left panel: canonical losses $\phi(v)$ and inverse links $[f^*_\phi]^{-1}(v)$ compatible with an IS optimal link. Right panels: average classification rank versus margin parameter for Canonical Boosting and Canonical Log on the UCI data.]
Figure 1: Left: canonical losses compatible with an IS optimal link. Right: Average classification rank as a
function of margin parameter, on the UCI data.
This theorem has various interesting consequences. First, it establishes an easy-to-verify necessary
condition for the canonical form. For example, logistic regression has $[f^*_\phi]^{-1}(v) = \frac{1}{1+e^{-v}}$ and
$\phi'(v) = -\frac{1}{1+e^{v}} = [f^*_\phi]^{-1}(v) - 1$, while boosting has $[f^*_\phi]^{-1}(v) = \frac{1}{1+e^{-2v}}$ and $\phi'(v) = -e^{-v} \neq
[f^*_\phi]^{-1}(v) - 1$. This (plus the symmetry of $J$ and $f^*_\phi$) shows that the former is in canonical form
but the latter is not. Second, it makes it clear that, up to additive constants, the three components
($\phi$, $C^*_\phi$, and $f^*_\phi$) of a canonical risk are related by one-to-one relationships. Hence, it is possible to
control the properties of the three components of the risk by manipulating a single function (which
can be any of the three). Finally, it enables a very detailed characterization of the losses compatible
with most optimal links of Table 1.
4 Inverse-sigmoidal links
Inspection of Table 1 suggests that the classifiers produced by boosting, logistic regression, and variants have sigmoidal inverse links $[f^*_\phi]^{-1}$. Due to this, we refer to the links $f^*_\phi$ as inverse-sigmoidal
(IS). When this is the case, (18) provides a very detailed characterization of the loss $\phi$. In particular,
letting $f^{(n)}$ be the $n$th order derivative of $f$, it can be trivially shown that the following hold
$$\lim_{v \to -\infty} [f^*_\phi]^{-1}(v) = 0 \;\Rightarrow\; \lim_{v \to -\infty} \phi^{(1)}(v) = -1 \quad (19)$$
$$\lim_{v \to \infty} [f^*_\phi]^{-1}(v) = 1 \;\Rightarrow\; \lim_{v \to \infty} \phi^{(1)}(v) = 0 \quad (20)$$
$$\lim_{v \to \pm\infty} ([f^*_\phi]^{-1})^{(n)}(v) = 0,\; n \geq 1 \;\Rightarrow\; \lim_{v \to \pm\infty} \phi^{(n+1)}(v) = 0,\; n \geq 1 \quad (21)$$
$$[f^*_\phi]^{-1}(v) \in (0, 1) \;\Rightarrow\; \phi(v) \text{ monotonically decreasing} \quad (22)$$
$$[f^*_\phi]^{-1}(v) \text{ monotonically increasing} \;\Rightarrow\; \phi(v) \text{ convex} \quad (23)$$
$$[f^*_\phi]^{-1}(0) = .5 \;\Rightarrow\; \phi^{(1)}(0) = -.5. \quad (24)$$
It follows that, as illustrated in Figure 1, the loss $\phi(v)$ is convex, monotonically decreasing, linear
(with slope $-1$) for large negative $v$, constant for large positive $v$, and has slope $-.5$ at the origin.
The set of losses compatible with an IS link is, thus, strongly constrained. The only degrees of
freedom are in the behavior of the function around the origin. This is not surprising, since the only
degrees of freedom of the sigmoid itself are in its behavior within this region.
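Properties (19)-(24) can be illustrated numerically; the following small check (ours, for the logistic inverse link) relies on $\phi'(v) = [f^*_\phi]^{-1}(v) - 1$ from Theorem 4:

```python
import numpy as np

f_inv = lambda v: 1.0 / (1.0 + np.exp(-v))
dphi  = lambda v: f_inv(v) - 1.0        # phi'(v), Theorem 4

assert np.isclose(dphi(-30.0), -1.0)    # slope -> -1 on the far left, (19)
assert np.isclose(dphi(30.0), 0.0)      # slope -> 0 on the far right, (20)
assert np.isclose(dphi(0.0), -0.5)      # Equation (24)
v = np.linspace(-10, 10, 201)
assert np.all(np.diff(dphi(v)) > 0)     # phi' increasing, hence phi convex, (23)
```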
[Figure 2 appears here: inverse link $[f^*_\phi]^{-1}(v)$ (left) and loss $\phi(v)$ (right), each plotted for $a \in \{0.4, 1, 10\}$, for the canonical logistic (top) and canonical boosting (bottom) families.]
Figure 2: canonical link (left) and loss (right) for various values of a. (Top) logistic, (bottom) boosting.
What is interesting is that these are the degrees of freedom that control the margin characteristics
of the loss $\phi$. Hence, by controlling the behavior of the IS link around the origin, it is possible to
control the margin of the optimal classifier. In particular, the margin is a decreasing function of the
curvature of the loss at the origin, $\phi^{(2)}(0)$. Since, from (18), $\phi^{(2)}(0) = ([f^*_\phi]^{-1})^{(1)}(0)$, the margin
can be controlled by varying the slope of $[f^*_\phi]^{-1}$ at the origin.
5 Variable margin loss functions
The results above enable the derivation of families of canonical losses with controllable margin. In
Section 3, we have seen that the boosting loss is not canonical, but there is a canonical loss for the
minimum risk of boosting. We consider a parametric extension of this risk,
$$J(\eta; a) = \frac{-2}{a}\sqrt{\eta(1-\eta)}, \quad a > 0. \quad (25)$$
From (16), the canonical optimal link is
$$f^*_\phi(\eta; a) = \frac{2\eta - 1}{a\sqrt{\eta(1-\eta)}} \quad (26)$$
and it can be shown that
$$[f^*_\phi]^{-1}(v; a) = \frac{1}{2} + \frac{av}{2\sqrt{4 + (av)^2}} \quad (27)$$
is an IS link, i.e. satisfies (19)-(24). Using (18), the corresponding canonical loss is
$$\phi(v; a) = \frac{1}{2a}\left(\sqrt{4 + (av)^2} - av\right). \quad (28)$$
Because it shares the minimum risk of boosting, we refer to this loss as the canonical boosting loss.
It is plotted in Figure 2, along with the inverse link, for various values of a.
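A minimal implementation of the canonical boosting loss follows (the function names are ours). It checks numerically that the curvature at the origin is $\phi''(0; a) = a/4$, so the margin grows as a decreases, and that Theorem 4 holds for this family:

```python
import numpy as np

def cb_inv_link(v, a):          # Equation (27)
    return 0.5 + a * v / (2.0 * np.sqrt(4.0 + (a * v) ** 2))

def cb_loss(v, a):              # Equation (28)
    return (np.sqrt(4.0 + (a * v) ** 2) - a * v) / (2.0 * a)

for a in (0.4, 1.0, 10.0):
    h = 1e-4
    curv = (cb_loss(h, a) - 2 * cb_loss(0.0, a) + cb_loss(-h, a)) / h**2
    assert np.isclose(curv, a / 4.0, rtol=1e-3)            # phi''(0; a) = a/4
    dphi = (cb_loss(h, a) - cb_loss(-h, a)) / (2 * h)
    assert np.isclose(dphi, cb_inv_link(0.0, a) - 1.0, atol=1e-6)  # Theorem 4 at v = 0
```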
Table 2: Margin parameter value a of rank 1 for each of the ten UCI datasets.

UCI dataset#    | #1  | #2  | #3  | #4  | #5  | #6 | #7  | #8  | #9  | #10
Canonical Log   | 0.4 | 0.5 | 0.6 | 0.3 | 0.1 | 2  | 0.5 | 0.1 | 0.2 | 0.2
Canonical Boost | 0.9 | 6   | 2   | 2   | 0.4 | 3  | 0.2 | 4   | 0.2 | 0.9
Note that the inverse link is indeed sigmoidal, and that the margin is determined by a. Since $\phi^{(2)}(0; a) = \frac{a}{4}$, the margin
increases with decreasing a.

It is also possible to derive variable margin extensions of existing canonical losses. For example,
consider the parametric extension of the minimum risk of logistic regression
$$J(\eta; a) = \frac{1}{a}\eta\log(\eta) + \frac{1}{a}(1-\eta)\log(1-\eta). \quad (29)$$
From (16),
$$f^*_\phi(\eta; a) = \frac{1}{a}\log\frac{\eta}{1-\eta}, \qquad [f^*_\phi]^{-1}(v; a) = \frac{e^{av}}{1 + e^{av}}. \quad (30)$$
This is again a sigmoidal inverse link and, from (18),
$$\phi(v; a) = \frac{1}{a}\left[\log(1 + e^{av}) - av\right]. \quad (31)$$
We denote this loss the canonical logistic loss. It is plotted in Figure 2, along with the corresponding
inverse link for various a. Since $\phi^{(2)}(0; a) = \frac{a}{4}$, the margin again increases with decreasing a.
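A sketch of the canonical logistic loss (again with our naming); for a = 1 it reduces to the familiar logistic loss $\log(1 + e^{-v})$ of LogitBoost, and its curvature at the origin is again a/4:

```python
import numpy as np

def cl_loss(v, a):
    return (np.log1p(np.exp(a * v)) - a * v) / a     # Equation (31)

v = np.linspace(-5, 5, 101)
assert np.allclose(cl_loss(v, 1.0), np.log1p(np.exp(-v)))   # a = 1: LogitBoost loss

h = 1e-4
for a in (0.2, 1.0, 5.0):
    curv = (cl_loss(h, a) - 2 * cl_loss(0, a) + cl_loss(-h, a)) / h**2
    assert np.isclose(curv, a / 4.0, rtol=1e-3)      # smaller a, larger margin
```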
Note that, in (28) and (31), margin control is not achieved by simply rescaling the domain of the loss
function, e.g. just replacing $\log(1 + e^{-v})$ by $\log(1 + e^{-av})$ in the case of logistic regression. This
would have no impact on classification accuracy, since it would just amount to a change of scale of the
original feature space. While this type of re-scaling occurs in both families of loss functions above
(which are both functions of av), it is localized around the origin, and only influences the margin
properties of the loss. As can be seen in Figure 2, all loss functions are identical away from the
origin. Hence, varying a is conceptually similar to varying the bandwidth of an SVM kernel. This
suggests that the margin parameter a could be cross-validated to achieve best performance.
6 Experiments
A number of easily reproducible experiments were conducted to study the effect of variable margin losses on the accuracy of the resulting classifiers. Ten binary UCI data sets were considered:
(#1)sonar, (#2)breast cancer prognostic, (#3)breast cancer diagnostic, (#4)original Wisconsin breast
cancer, (#5)Cleveland heart disease, (#6)tic-tac-toe, (#7)echo-cardiogram, (#8)Haberman's survival,
(#9)Pima-diabetes and (#10)liver disorder. The data was split into five folds, four used for training and one for testing. This produced five training-test pairs per dataset. The GradientBoost
algorithm [8], with histogram-based weak learners, was then used to design boosted classifiers
which minimize the canonical logistic and boosting losses, for various margin parameters. GradientBoost was adopted because it can be easily combined with the different losses, guaranteeing
that, other than the loss, every aspect of classifier design is constant. This makes the comparison as fair as possible. 50 boosting iterations were applied to each training set, for 19 values of
$a \in \{0.1, 0.2, \ldots, 0.9, 1, 2, \ldots, 10\}$. The classification accuracy was then computed per dataset, by
averaging over its five train/test pairs.
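The per-iteration update of GradientBoost with a pluggable margin loss can be sketched as follows. This is our schematic reconstruction, not the authors' code; the paper used histogram-based weak learners, for which a shallow regression tree is a stand-in, and the learning rate is a hypothetical choice. Each round fits the weak learner to the negative functional gradient $-y\,\phi'(yF(x))$ of the empirical risk and adds it to the additive model F:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor  # stand-in weak learner

def dphi_cl(v, a):                       # derivative of Equation (31)
    return np.exp(a * v) / (1.0 + np.exp(a * v)) - 1.0

def gradient_boost(X, y, a=0.2, rounds=50, lr=0.1):
    """y is an array with entries in {-1, +1}."""
    F = np.zeros(len(y))                 # additive model F(x) on the training set
    learners = []
    for _ in range(rounds):
        residual = -y * dphi_cl(y * F, a)    # negative functional gradient
        tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
        learners.append(tree)
        F += lr * tree.predict(X)
    return learners
```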
Since existing algorithms can be seen as derived from special cases of the proposed losses, with a =
1, it is natural to inquire whether other values of the margin parameter will achieve best performance.
To explore this question we show, in Figure 1, the average rank of the classifier designed with each
loss and margin parameter a. To produce the plot, a classifier was trained on each dataset, for all
19 values of a. The results were then ranked, with rank 1 (19) being assigned to the a parameter of
smallest (largest) error. The ranks achieved with each a were then averaged over the ten datasets, as
suggested in [9]. For the canonical logistic loss, the best values of a are in the range $0.2 \leq a \leq 0.3$.
Note that the average rank for this range (between 5 and 6) is better than that (close to 7) achieved
with the logistic loss of LogitBoost [2] (a = 1). In fact, as can be seen from Table 2, the canonical
Table 3: Classification error for each loss function and UCI dataset.

UCI dataset#           | #1   | #2   | #3  | #4   | #5   | #6   | #7  | #8   | #9   | #10
Canonical Log          | 11.2 | 11.4 | 8   | 5.6  | 12.4 | 11.8 | 7   | 18.8 | 38.2 | 27
LogitBoost (a = 1)     | 11.6 | 12.4 | 10  | 6.6  | 13.4 | 48.6 | 6.8 | 21.2 | 39.6 | 28.4
Canonical Boost        | 12.6 | 11.6 | 21  | 18.6 | 17.6 | 7.2  | 6   | 21.8 | 37.6 | 28.6
Canonical Boost, a = 1 | 13.2 | 12.4 | 21  | 18.6 | 18.6 | 50.8 | 7.2 | 21.2 | 39.4 | 28.2
AdaBoost               | 11.4 | 11.4 | 9.4 | 6.4  | 14   | 28   | 6.6 | 21.8 | 41.2 | 28.2
Table 4: Classification error for each loss function and UCI dataset.

UCI dataset#             | #1   | #2   | #3   | #4   | #5   | #6   | #7  | #8   | #9   | #10
Canonical Log, a = 0.2   | 13.2 | 15   | 8.4  | 5    | 11.2 | 56.2 | 6.8 | 24   | 39.8 | 25.8
Canonical Boost, a = 0.2 | 12.6 | 14.8 | 17.2 | 18.6 | 12   | 56.8 | 6.8 | 23.2 | 38.4 | 26.4
LogitBoost (a = 1)       | 12.4 | 15.4 | 8.6  | 5.6  | 11.4 | 46   | 7.2 | 25   | 40.4 | 26.4
AdaBoost                 | 11.4 | 15.2 | 9.2  | 6    | 11.4 | 21.6 | 7.4 | 23.2 | 42.8 | 26.6
logistic loss with a = 1 did not achieve rank 1 on any dataset, whereas canonical logistic losses with
$0.2 \leq a \leq 0.3$ were top ranked on 3 datasets (and with $0.1 \leq a \leq 0.4$ on 6). For the canonical
boosting loss, there is also a range ($0.8 \leq a \leq 2$) that produces best results. We note that the a
values of the two losses are not directly comparable. This can be seen from Figure 2, where a = 0.4
produces a loss of much larger margin for canonical boosting. Furthermore, the canonical boosting
loss has a heavier tail and approaches zero more slowly than the canonical logistic loss.
Although certain ranges of margin parameters seem to produce best results for both canonical loss
functions, the optimal parameter value is likely to be dataset dependent. This is confirmed by Table 2
which presents the parameter value of rank 1 for each of the ten datasets. Improved performance
should thus be possible by cross-validating the margin parameter a. Table 3 presents the 5-fold
cross validation test error (# of misclassified points) obtained for each UCI dataset and canonical
loss. The table also shows the results of AdaBoost, LogitBoost (canonical logistic, a = 1), and
canonical boosting loss with a = 1. Cross validating the margin results in better performance for
9 out of 10 (8 out 10) datasets for the canonical logistic (boosting) loss, when compared to the
fixed margin (a = 1) counterparts. When compared to the existing algorithms, at least one of the
margin-tunned classifiers is better than both Logit and AdaBoost for each dataset.
Under certain experimental conditions, cross validation might not be possible or computationally
feasible. Even in this case, it may be better to use a value of a other than the standard a = 1. Table 4
presents results for the case where the margin parameter is fixed at a = 0.2 for both canonical loss
functions. In this case, canonical logistic and canonical boosting outperform both LogitBoost and
AdaBoost in 7 and 5 of the ten datasets, respectively. The converse, i.e. LogitBoost and AdaBoost
outperforming both canonical losses only happens in 2 and 3 datasets, respectively.
7 Conclusion
The probability elicitation approach to loss function design, introduced in [3], enables the derivation
of new Bayes consistent loss functions. Yet, because the procedure is not fully constructive, this
requires trial and error. In general, it is difficult to anticipate the properties, and shape, of a loss
function that results from combining a certain minimal risk with a certain link function. In this
work, we have addressed this problem for the class of canonical risks. We have shown that the
associated canonical loss functions lend themselves to analysis, due to a simple connection between
the associated minimum conditional risk and optimal link functions. This analysis was shown to
enable a precise characterization of 1) the relationships between loss, optimal link, and minimum
risk, and 2) the properties of the loss whenever the optimal link is in the family of inverse sigmoid
functions. These properties were then exploited to design parametric families of loss functions
with explicit margin control. Experiments with boosting algorithms derived from these variable
margin losses have shown better performance than those of classical algorithms, such as AdaBoost
or LogitBoost.
References

[1] V. N. Vapnik, Statistical Learning Theory. John Wiley & Sons Inc, 1998.

[2] J. Friedman, T. Hastie, and R. Tibshirani, "Additive logistic regression: A statistical view of boosting," Annals of Statistics, 2000.

[3] H. Masnadi-Shirazi and N. Vasconcelos, "On the design of loss functions for classification: theory, robustness to outliers, and savageboost," in NIPS, 2008, pp. 1049–1056.

[4] L. J. Savage, "The elicitation of personal probabilities and expectations," Journal of the American Statistical Association, vol. 66, pp. 783–801, 1971.

[5] C. Leistner, A. Saffari, P. M. Roth, and H. Bischof, "On robustness of on-line boosting - a competitive study," in IEEE ICCV Workshop on On-line Computer Vision, 2009.

[6] A. Buja, W. Stuetzle, and Y. Shen, "Loss functions for binary class probability estimation and classification: Structure and applications," 2006.

[7] T. Zhang, "Statistical behavior and consistency of classification methods based on convex risk minimization," Annals of Statistics, 2004.

[8] J. H. Friedman, "Greedy function approximation: A gradient boosting machine," The Annals of Statistics, vol. 29, no. 5, pp. 1189–1232, 2001.

[9] J. Demšar, "Statistical comparisons of classifiers over multiple data sets," The Journal of Machine Learning Research, vol. 7, pp. 1–30, 2006.
Throttling Poisson Processes

Uwe Dick, Peter Haider, Thomas Vanck, Michael Brückner, Tobias Scheffer
University of Potsdam
Department of Computer Science
August-Bebel-Strasse 89, 14482 Potsdam, Germany
{uwedick,haider,vanck,mibrueck,scheffer}@cs.uni-potsdam.de
Abstract
We study a setting in which Poisson processes generate sequences of decisionmaking events. The optimization goal is allowed to depend on the rate of decision
outcomes; the rate may depend on a potentially long backlog of events and decisions. We model the problem as a Poisson process with a throttling policy that
enforces a data-dependent rate limit and reduce the learning problem to a convex
optimization problem that can be solved efficiently. This problem setting matches
applications in which damage caused by an attacker grows as a function of the rate
of unsuppressed hostile events. We report on experiments on abuse detection for
an email service.
1 Introduction
This paper studies a family of decision-making problems in which discrete events occur on a continuous time scale. The time intervals between events are governed by a Poisson process. Each event
has to be met by a decision to either suppress or allow it. The optimization criterion is allowed to
depend on the rate of decision outcomes within a time interval; the criterion is not necessarily a sum
of a loss function over individual decisions.
The problems that we study cannot adequately be modeled as Markov or semi-Markov decision
problems because the probability of transitioning from any value of decision rates to any other value
depends on the exact points in time at which each event occurred in the past. Encoding the entire
backlog of time stamps in the state of a Markov process would lead to an unwieldy formalism. The
learning formalism which we explore in this paper models the problem directly as a Poisson process
with a throttling policy that depends on an explicit data-dependent rate limit, which allows us to
refer to a result from queuing theory and derive a convex optimization problem that can be solved
efficiently.
Consider the following two scenarios as motivating applications. In order to stage a successful
denial-of-service attack, an assailant has to post requests at a rate that exceeds the capacity of the
service. A prevention system has to meet each request by a decision to suppress it, or allow it
to be processed by the service provider. Suppressing legitimate requests runs up costs. Passing
few abusive requests to be processed runs up virtually no costs. Only when the rate of passed
abusive requests exceeds a certain capacity, the service becomes unavailable and costs incur. The
following second application scenario will serve as a running example throughout this paper. Any
email service provider has to deal with a certain fraction of accounts that are set up to disseminate
phishing messages and email spam. Serving the occasional spam message causes no harm other
than consuming computational ressources. But if the rate of spam messages that an outbound email
server discharges triggers alerting mechanisms of other providers, then that outbound server will
become blacklisted and the service is disrupted. Naturally, suppressing any legitimate message is a
disruption to the service, too.
Let x denote a sequence of decision events $x_1, \ldots, x_n$; each event is a point $x_i \in \mathcal{X}$ in an instance
space. Sequence t denotes the time stamps $t_i \in \mathbb{R}^+$ of the decision events with $t_i < t_{i+1}$. We define
an episode e by the tuple $e = (x, t, y)$ which includes a label $y \in \{-1, +1\}$. In our application, an
episode corresponds to the sequence of emails sent within an observation interval from a legitimate
($y = -1$) or abusive ($y = +1$) account e. We write $x^i$ and $t^i$ to denote the initial sequence of the
first i elements of x and t, respectively. Note that the length n of the sequences can be different for
different episodes.

Let $\mathcal{A} = \{-1, +1\}$ be a binary decision set, where $+1$ corresponds to suppressing an event and $-1$
corresponds to passing it. The decision model $\pi$ gets to make a decision $\pi(x^i, t^i) \in \mathcal{A}$ at each point
in time $t_i$ at which an event occurs.
The outbound rate $r_\pi(t'|x, t)$ at time $t'$ for episode e and decision model $\pi$ is a crucial concept.
It counts the number of events that were let pass during a time interval of length $\tau$ ending before $t'$.
It is therefore defined as $r_\pi(t'|x, t) = |\{i : \pi(x^i, t^i) = -1 \wedge t_i \in [t' - \tau, t')\}|$. In outbound spam
throttling, $\tau$ corresponds to the time interval that is used by other providers to estimate the incoming
spam rate.
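A direct transcription of this definition (a sketch of ours): the outbound rate is simply the count of passed events whose time stamps fall in the sliding window:

```python
def outbound_rate(decisions, times, t_prime, tau):
    """decisions[i] in {-1, +1}; -1 means event i was passed (let through)."""
    return sum(1 for d, t in zip(decisions, times)
               if d == -1 and t_prime - tau <= t < t_prime)
```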
We define an immediate loss function $\ell : \mathcal{Y} \times \mathcal{A} \to \mathbb{R}^+$ that specifies the immediate loss of deciding
$a \in \mathcal{A}$ for an event with label $y \in \mathcal{Y}$ as
$$\ell(y, a) = \begin{cases} c_+ & y = +1 \wedge a = -1 \\ c_- & y = -1 \wedge a = +1 \\ 0 & \text{otherwise,} \end{cases} \quad (1)$$
where $c_+$ and $c_-$ are positive constants, corresponding to costs of false positive and false negative
decisions. Additionally, the rate-based loss $\rho : \mathcal{Y} \times \mathbb{R}^+ \to \mathbb{R}^+$ is the loss that runs up per unit
of time. We require $\rho$ to be a convex, monotonically increasing function in the outbound rate for
$y = +1$ and to be 0 otherwise. The rate-based loss reflects the risk of the service getting blacklisted
based on the current sending behaviour. This risk grows in the rate of spam messages discharged
and the duration over which a high sending rate of spam messages is maintained.
The total loss of a model $\pi$ for an episode $e = (x, t, y)$ is therefore defined as
$$L(\pi; x, t, y) = \int_{t_1}^{t_n + \tau} \rho(y, r_\pi(t'|x, t))\, dt' + \sum_{i=1}^{n} \ell(y, \pi(x^i, t^i)) \quad (2)$$
The first term penalizes a high rate of unsuppressed events with label +1 (in our example, a high
rate of unsuppressed spam messages), whereas the second term penalizes each decision individually.
For the special case of $\rho = 0$, the optimization criterion resolves to a risk, and the problem becomes
a standard binary classification problem.
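Equation (2) is straightforward to evaluate by hand; the following illustrative computation (ours, with a fixed-grid rectangle rule for the integral and placeholder arguments rho and ell for the rate-based and immediate losses) makes the two terms explicit:

```python
import numpy as np

def episode_loss(x, t, y, policy, rho, ell, tau, grid=1000):
    """policy(x_prefix, t_prefix) returns -1 (pass) or +1 (suppress)."""
    decisions = [policy(x[:i + 1], t[:i + 1]) for i in range(len(t))]
    ts = np.linspace(t[0], t[-1] + tau, grid)
    dt = ts[1] - ts[0]
    integral = 0.0
    for s in ts:  # rate-based term of Equation (2)
        r = sum(1 for d, ti in zip(decisions, t)
                if d == -1 and s - tau <= ti < s)
        integral += rho(y, r) * dt
    immediate = sum(ell(y, d) for d in decisions)   # decision term
    return integral + immediate
```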
An unknown target distribution $p(x, t, y)$ induces the overall optimization goal
$E_{x,t,y}[L(\pi; x, t, y)]$. The learning problem consists in finding $\pi^* = \arg\min_\pi E_{x,t,y}[L(\pi; x, t, y)]$
from a training sample of tuples $D = \{(x^1_{n_1}, t^1_{n_1}, y^1), \ldots, (x^m_{n_m}, t^m_{n_m}, y^m)\}$.
2 Poisson Process Model
We assume the following data generation process for episodes $e = (x, t, y)$ that will allow us to
derive an optimization problem to be solved by the learning procedure. First, a rate parameter $\lambda$,
label y, and the sequence of instances x, are drawn from a joint distribution $p(x, \lambda, y)$. Rate $\lambda$ is the
parameter of a Poisson process $p(t|\lambda)$ which now generates time sequence t. The expected loss of
decision model $\pi$ is taken over all input sequences x, rate parameter $\lambda$, label y, and over all possible
sequences of time stamps t that can be generated according to the Poisson process.
$$E_{x,t,y}[L(\pi; x, t, y)] = \int_x \int_t \int_\lambda \sum_y L(\pi; x, t, y)\, p(t|\lambda)\, p(x, \lambda, y)\, d\lambda\, dt\, dx \quad (3)$$

2.1 Derivation of Empirical Loss
Derivation of Empirical Loss
In deriving the empirical counterpart of the expected loss, we want to exploit our assumption that
time stamps are generated by a Poisson process with unknown but fixed rate parameter. For each
2
input episode (x, t, y), instead of minimizing the expected loss over the single observed sequence of
time stamps, we would therefore like to minimize the expected loss over all sequences of time stamps
generated by a Poisson process with the rate parameter that has most likely generated the observed
sequence of time stamps. Equation 4 introduces the observed time sequence of time stamps t? into
Equation 3 and uses the fact that the rate parameter ? is independent of x and y given t? . Equation
5 rearranges the terms, and Equation 6 writes the central integral as a conditional expected value
of the loss given the rate ?. Finally, Equation 7 approximates the integral over all values of ? by a
single summand with value ?? for each episode.
Ex,t,y [L(?; x, t, y)] =
=
=
?
? ? ? ? ?
t?
x
t
?
x
y
?
t?
x
y
?
t?
x
? ? ? (?
? ? ?
(4)
y
)
)
L(?; x, t, y)p(t|?)dt p(?|t? )d? p(x, t? , y)dxdt?
? ? ? (? (?
t?
L(?; x, t, y)p(t|?)p(?|t? )p(x, t? , y)d?dtdxdt?
(5)
t
)
(Et [L(?; x, t, y) | ?] p(?|t? )d? p(x, t? , y)dxdt?
Et [L(?; x, t, y) | ?? ] p(x, t? , y)dxdt?
(6)
(7)
y
1
We arrive at the regularized risk functional in Equation 8 by replacing $p(x, t', y)$ by $\frac{1}{m}$ for all observations in D and inserting MAP estimate $\hat\lambda_e$ as parameter that generated time stamps $t^e$. The
influence of the convex regularizer $\Omega$ is determined by regularization parameter $\nu > 0$.
$$\hat{E}_{x,t,y}[L(\pi; x, t, y)] = \frac{1}{m} \sum_{e=1}^{m} E_t[L(\pi; x^e, t, y^e)\,|\,\hat\lambda_e] + \nu\,\Omega(\theta) \quad (8)$$
with $\hat\lambda_e = \arg\max_\lambda p(\lambda|t^e)$.
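The MAP estimate $\hat\lambda_e$ depends on the prior over rates, which is not pinned down in this section. One concrete choice (ours, purely for illustration) is a conjugate Gamma prior: with n events observed over a window of length T, the posterior is Gamma$(\alpha + n, \beta + T)$ and the MAP estimate is its mode; for $\alpha = 1$ and $\beta \to 0$ this reduces to the maximum likelihood rate $n/T$:

```python
def map_rate(times, tau, alpha=1.0, beta=1e-6):
    """MAP Poisson rate under an assumed Gamma(alpha, beta) prior."""
    n = len(times)
    T = times[-1] - times[0] + tau           # assumed observation window
    return max(alpha + n - 1.0, 0.0) / (beta + T)
```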
Minimizing this risk functional is the basis of the learning procedure in the next section. As noted
in Section 1, for the special case when the rate-based loss $\rho$ is zero, the problem reduces to a
standard weighted binary classification problem and would be easy to solve with standard learning
algorithms. However, as we will see in Section 4, the $\rho$-dependent loss makes the task of learning
a decision function hard to solve; attributing individual decisions with their "fair share" of the rate
loss, and thus estimating the cost of the decision, is problematic. The Erlang learning model of
Section 3 employs a decision function that allows to factorize the rate loss naturally.
3 Erlang Learning Model
In the following we derive an optimization problem that is based on modeling the policy as a data-dependent rate limit. This allows us to apply a result from queuing theory and approximate the
empirical risk functional of Equation (8) with a convex upper bound. We define decision model $\pi$
in terms of the function $f_\theta(x^i, t^i) = \exp(\theta^{\mathsf{T}} \Phi(x^i, t^i))$ which sets a limit on the admissible rate of
events, where $\Phi$ is some feature mapping of the initial sequence $(x^i, t^i)$ and $\theta$ is a parameter vector.
The throttling model is defined as
$$\pi(x^i, t^i) = \begin{cases} -1 \text{ ("allow")} & \text{if } r_\pi(t_i|x^i, t^i) + 1 \leq f_\theta(x^i, t^i) \\ +1 \text{ ("suppress")} & \text{otherwise.} \end{cases} \quad (9)$$
The decision model blocks event $x_i$ if the number of instances that were sent within $[t_i - \tau, t_i)$, plus
the current instance, would exceed rate limit $f_\theta(x^i, t^i)$. We will now transform the optimization goal
of Equation 8 into an optimization problem that can be solved by standard convex optimization tools.
To this end, we first decompose the expected loss of an input sequence given the rate parameter in
Equation 8 into immediate and rate-dependent loss terms. Note that $t^e$ denotes the observed training
sequence whereas t serves as expectation variable for the expectation $E_t[\cdot\,|\,\hat\lambda_e]$ over all sequences
conditional on the Poisson process rate parameter $\hat\lambda_e$ as in Equation 8.
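The throttling policy of Equation (9) is easy to write out. In the sketch below (ours; the feature map Phi, the weight vector theta, and the bookkeeping of passed events are placeholders), the rate limit is recomputed from the current prefix of the episode at every event:

```python
import numpy as np

def throttle(theta, Phi, x_seq, t_seq, passed_times, tau):
    """Decide on the current (most recent) event; passed_times holds the
    time stamps of events already let through. Returns -1 (allow) or +1."""
    t_i = t_seq[-1]
    rate = sum(1 for s in passed_times if t_i - tau <= s < t_i)
    f = np.exp(theta @ Phi(x_seq, t_seq))    # data-dependent rate limit
    return -1 if rate + 1 <= f else +1
```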
$$E_t[L(\pi; x^e, t, y^e)\,|\,\hat\lambda_e] = E_t\!\left[\int_{t_1}^{t_{n_e}+\tau} \rho(y^e, r_\pi(t'|x^e, t))\, dt' \,\Big|\, \hat\lambda_e\right] + \sum_{i=1}^{n_e} E_t[\ell(y^e, \pi(x^e_i, t_i))\,|\,\hat\lambda_e] \quad (10)$$
$$= E_t\!\left[\int_{t_1}^{t_{n_e}+\tau} \rho(y^e, r_\pi(t'|x^e, t))\, dt' \,\Big|\, \hat\lambda_e\right] + \sum_{i=1}^{n_e} E_t[\mathbb{1}(\pi(x^e_i, t_i) \neq y^e)\,|\,\hat\lambda_e]\; \ell(y^e, -y^e) \quad (11)$$
Equation 10 uses the definition of the loss function in Equation 2. Equation 11 exploits that only
decisions against the correct label, $\pi(x^e_i, t_i) \neq y^e$, incur a positive loss $\ell(y^e, \pi(x^e_i, t_i))$.
We will first derive a convex approximation of the expected rate-based loss
$E_t[\int_{t_1}^{t_{n_e}+\tau} \rho(y^e, r_\pi(t'|x^e, t))\, dt'\,|\,\hat\lambda_e]$ (left side of Equation 11). Our definition of the decision
model allows us to factorize the expected rate-based loss into contributions of individual rate limit
decisions. The convexity will be addressed by Theorem 1.

Since the outbound rate $r_\pi$ increases only at decision points $t_i$, we can upper-bound its value with
the value immediately after the most recent decision in Equation 12. Equation 13 approximates
the actual outbound rate with the rate limit given by $f_\theta(x^e_i, t^e_i)$. This is reasonable because the
outbound rate depends on the policy decisions which are defined in terms of the rate limit. Because
t is generated by a Poisson process, $E_t[t_{i+1} - t_i\,|\,\hat\lambda_e] = \frac{1}{\hat\lambda_e}$ (Equation 14).
$$E_t\!\left[\int_{t_1}^{t_{n_e}+\tau} \rho(y^e, r_\pi(t'|x^e, t))\, dt' \,\Big|\, \hat\lambda_e\right]$$
$$\leq \sum_{i=1}^{n_e - 1} E_t[t_{i+1} - t_i\,|\,\hat\lambda_e]\; \rho(y^e, r_\pi(t_i|x^e, t)) + \tau\, \rho(y^e, r_\pi(t_{n_e}|x^e, t)) \quad (12)$$
$$\approx \sum_{i=1}^{n_e - 1} E_t[t_{i+1} - t_i\,|\,\hat\lambda_e]\; \rho(y^e, f_\theta(x^e_i, t^e_i)) + \tau\, \rho(y^e, f_\theta(x^e_{n_e}, t^e_{n_e})) \quad (13)$$
$$= \sum_{i=1}^{n_e - 1} \frac{1}{\hat\lambda_e}\, \rho(y^e, f_\theta(x^e_i, t^e_i)) + \tau\, \rho(y^e, f_\theta(x^e_{n_e}, t^e_{n_e})) \quad (14)$$
We have thus established a convex approximation of the left side of Equation 11.
We will now derive a closed form approximation of $E_t[\mathbb{1}(\pi(x^e_i, t_i) \neq y^e)\,|\,\hat\lambda_e]$, the second part of
the loss functional in Equation 11. Queuing theory provides a convex approximation: The Erlang-B
formula [5] gives the probability that a queuing process which maintains a constant rate limit of
f within a time interval of $\tau$ will block an event when events are generated by a Poisson process
with given rate parameter $\lambda$. Fortet's formula (Equation 15) generalizes the Erlang-B formula for
non-integer rate limits.
$$B(f, \lambda\tau) = \frac{1}{\int_0^\infty e^{-z}\left(1 + \frac{z}{\lambda\tau}\right)^f dz} \quad (15)$$
The integral can be computed efficiently using a rapidly converging series, cf. [5]. The formula
requires a constant rate limit, so that the process can reach an equilibrium. In our model, the rate
limit $f_\theta(x^i, t^i)$ is a function of the sequences $x^i$ and $t^i$ until instance $x_i$, and Fortet's formula
therefore serves as an approximation.
$$E_t[\mathbb{1}(\pi(x^e_i, t_i) = 1)\,|\,\hat\lambda_e] \approx B(f_\theta(x^e_i, t^e_i), \hat\lambda_e \tau) \quad (16)$$
$$= \left[\int_0^\infty e^{-z}\left(1 + \frac{z}{\hat\lambda_e \tau}\right)^{f_\theta(x^e_i, t^e_i)} dz\right]^{-1} \quad (17)$$
Unfortunately, Equation 17 is not convex in $\theta$. We approximate it with the convex upper bound
$-\log(1 - B(f_\theta(x^e_i, t^e_i), \hat\lambda_e \tau))$ (cf. the dashed green line in the left panel of Figure 2(b) for an
illustration). This is an upper bound, because $-\log p \geq 1 - p$ for $0 \leq p \leq 1$; its convexity
is addressed by Theorem 1. Likewise, $E_t[\mathbb{1}(\pi(x^e_i, t_i) = -1)\,|\,\hat\lambda_e]$ is approximated by upper bound
$-\log(B(f_\theta(x^e_i, t^e_i), \hat\lambda_e \tau))$. We have thus derived a convex upper bound of $E_t[\mathbb{1}(\pi(x^e_i, t_i) \neq y^e)\,|\,\hat\lambda_e]$.
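Fortet's integral can be evaluated with a generic quadrature routine; for integer rate limits it must agree with the classical Erlang-B recursion, which gives a convenient cross-check. The sketch below (ours) uses numerical integration rather than the rapidly converging series mentioned above:

```python
import numpy as np
from scipy.integrate import quad

def fortet_B(f, load):           # load = lambda * tau; Equation (15)
    integral, _ = quad(lambda z: np.exp(-z) * (1.0 + z / load) ** f,
                       0.0, np.inf)
    return 1.0 / integral

def erlang_B(n, load):           # classical recursion, integer rate limit
    B = 1.0
    for k in range(1, n + 1):
        B = load * B / (k + load * B)
    return B

assert np.isclose(fortet_B(5, 3.0), erlang_B(5, 3.0), rtol=1e-6)
```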
Combining the two components of the optimization goal (Equation 11) and adding a convex regularizer Ω(θ) with regularization parameter η > 0 (Equation 8), we arrive at an optimization problem for finding the optimal policy parameters θ.
Optimization Problem 1 (Erlang Learning Model). Over θ, minimize

    R(θ) = (1/m) ∑_{e=1}^m { ∑_{i=1}^{n_e−1} (1/λ̂_e) ℓ(y^e, f_θ(x_i^e, t_i^e)) + τ ℓ(y^e, f_θ(x_{n_e}^e, t_{n_e}^e))
           + ∑_{i=1}^{n_e} −log( 1(y^e = 1) − y^e B(f_θ(x_i^e, t_i^e), λ̂_e τ) ) ℓ(y^e, −y^e) } + η Ω(θ)      (18)
Next we show that minimizing risk functional R amounts to solving a convex optimization problem.
Theorem 1 (Convexity of R). R(θ) is a convex risk functional in θ for any λ̂_e > 0 and τ > 0.
Proof. The convexity of ℓ and Ω follows from their definitions. It remains to be shown that both −log(B(f_θ(·), λ̂_e τ)) and −log(1 − B(f_θ(·), λ̂_e τ)) are convex in θ. Component ℓ(y^e, −y^e) of Equation 18 is independent of θ. It is known that Fortet's formula B(f, λ̂_e τ) is convex, monotonically decreasing, and positive in f for λ̂_e τ > 0 [5]. Furthermore, −log(B(f, λ̂_e τ)) is convex and monotonically increasing. Since f_θ(·) is convex in θ, it follows that −log(B(f_θ(·), λ̂_e τ)) is also convex.
Next, we show that −log(1 − B(f_θ(·), λ̂_e τ)) is convex and monotonically decreasing. From the above it follows that b(f) = 1 − B(f, λ̂_e τ) is monotonically increasing, concave, and positive. Therefore, (d²/df²)(−ln b(f)) = (1/b²(f)) b′(f)² − b″(f)/b(f) ≥ 0, as both summands are positive. Again, it follows that −log(1 − B(f_θ(·), λ̂_e τ)) is convex in θ due to the definition of f_θ.
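The convexity claims can also be spot-checked numerically along f, reusing the fortet_B sketch above; non-negative second differences on a grid indicate convexity up to numerical noise:

    import numpy as np

    fs = np.linspace(0.5, 20.0, 100)
    for g in (lambda f: -np.log(fortet_B(f, 10.0)),           # -log B
              lambda f: -np.log(1.0 - fortet_B(f, 10.0))):    # -log(1 - B)
        vals = np.array([g(f) for f in fs])
        assert np.all(np.diff(vals, 2) >= -1e-7)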
4 Prior Work and Reference Methods
We will now discuss how the problem of minimizing the expected loss, π* = argmin_π E_{x,t,y}[L(π; x, t, y)], from a sample of sequences x of events with labels y and observed rate parameters λ̂ relates to previously studied methods. Sequential decision-making problems are commonly solved by reinforcement learning approaches, which have to attribute the loss of an episode (Equation 2) to individual decisions in order to learn to decide optimally in each state. Thus, a crucial part of defining an appropriate procedure for learning the optimal policy consists in defining an appropriate state-action loss function. Q^π(s, a) estimates the loss of performing action a in state s when following policy π for the rest of the episode.
Several different state-action loss functions for related problems have been investigated in the literature. For example, policy gradient methods such as [4] assign the loss of an episode to individual decisions in proportion to the log-probabilities of the decisions. Other approaches use sampled estimates of the rest of the episode, Q(s_i, a_i) = L(π, s) − L(π, s_i), or the expected loss if a distribution of states of the episode is known [7]. Such general-purpose methods, however, are not the optimal choice for the particular problem instance at hand. Consider the special case τ = 0, where the problem reduces to a sequence of independent binary decisions. Assigning the cumulative loss of the episode to all instances leads to a grave distortion of the optimization criterion.
As reference in our experiments we use a state-action loss function that assigns the immediate loss ℓ(y, a_i) to state s_i only. Decision a_i determines the loss incurred by the policy only for τ time units, in the interval [t_i, t_i + τ). The corresponding rate loss is ∫_{t_i}^{t_i+τ} ℓ(y, r_π(t'|x, t)) dt'. Thus, the loss of deciding a_i = −1 instead of a_i = +1 is the difference in the corresponding π-induced loss. Let x_{−i}, t_{−i} denote the sequences x, t without instance x_i. This leads to a state-action loss function that is the sum of immediate loss and π-induced loss; it serves as our first baseline.

    Q^it_τ(s_i, a) = ℓ(y, a) + 1(a = −1) ∫_{t_i}^{t_i+τ} [ ℓ(y, r_π(t'|x_{−i}, t_{−i}) + 1) − ℓ(y, r_π(t'|x_{−i}, t_{−i})) ] dt'      (19)
By approximating ∫_{t_i}^{t_i+τ} ℓ(y, r_π(t'|x, t)) dt' with τ ℓ(y, r_π(t_i|x, t)), we define a second plausible state-action loss function that, instead of using the observed loss to estimate the loss of an action, approximates it with the loss that would be incurred by the current outbound rate r_π(t_i|x_{−i}, t_{−i}) for τ time units.
    Q^ub_τ(s_i, a) = ℓ(y, a) + 1(a = −1) τ [ ℓ(y, r_π(t_i|x_{−i}, t_{−i}) + 1) − ℓ(y, r_π(t_i|x_{−i}, t_{−i})) ]      (20)
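A literal transcription of Equation 20 (ℓ is overloaded in the paper for the immediate and the rate loss, so the two are passed separately here; all names are ours):

    def q_ub(y, a, r_current, tau, imm_loss, rate_loss):
        # Equation 20: immediate loss plus, if the event is passed (a = -1),
        # the tau-induced increase of the rate loss at the current outbound rate.
        induced = tau * (rate_loss(y, r_current + 1) - rate_loss(y, r_current))
        return imm_loss(y, a) + (induced if a == -1 else 0.0)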
The state variable s has to encode all information a policy needs to decide. Since the loss crucially depends on the outbound rate r_π(t'|x, t), any throttling model must have access to the current outbound rate. The transition between a current and a subsequent rate depends on the time at which the next event occurs, but also on the entire backlog of events, because past events may drop out of the interval τ at any time. In analogy to the information that is available to the Erlang learning model, it is natural to encode states s_i as a vector of features φ(x_i, t_i) (see Section 5 for details) together with the current outbound rate r_π(t_i|x, t). Given a representation of the state and a state-action loss function, different approaches for defining the policy π and optimizing its parameters have been investigated. For our baselines, we use the following two methods.
Policy gradient. Policy gradient methods model a stochastic policy directly as a parameterized decision function. They perform a gradient descent that always converges to a local optimum [8]. The gradient of the expected loss with respect to the parameters is estimated in each iteration k for the distribution over episodes, states, and losses that the current policy π_k induces. However, in order to achieve fast convergence to the optimal policy, one would need to determine the gradient for the distribution over episodes, states, and losses induced by the optimal policy. We implement two policy gradient algorithms for experimentation which differ only in using Q^it and Q^ub, respectively. They are denoted PG_it and PG_ub in the experiments. Both use a logistic regression function as decision function, the two-class equivalent of the Gibbs distribution which is used in the literature.
Iterative Classifier. The second approach is to represent policies as classifiers and to employ methods for supervised classification learning. A variety of papers address this approach [6, 3, 7]. We use an algorithm that is inspired by [1, 2] and is adapted to the problem setting at hand. Blatt and Hero [2] investigate an algorithm that finds non-stationary policies for two-action T-step MDPs by solving a sequence of one-step decisions via a binary classifier. Classifiers π_t for time step t are learned iteratively on the distribution of states generated by the policy (π_0, ..., π_{t−1}). Our derived algorithm iteratively learns a weighted support vector machine (SVM) classifier π_{k+1} in iteration k+1 on the set of instances and losses Q^{π_k}(s, a) that were observed after classifier π_k was used as policy on the training sample. The weight vector of π_k is denoted Θ_k. The weight of misclassification of s is given by Q^{π_k}(s, −y). The SVM weight vector is altered in each iteration as Θ_{k+1} = (1 − α_k)Θ_k + α_k Θ̃, where Θ̃ is the weight vector of the new classifier that was learned on the observed losses. In the experiments, two iterative SVM learners were implemented, denoted It-SVM_it and It-SVM_ub, corresponding to the used state-action losses Q^it and Q^ub, respectively. Note that for the special case τ = 0 the iterative SVM algorithm reduces to a standard SVM algorithm.
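A runnable toy sketch of the weight interpolation on synthetic data; the uniform weights and the mixing schedule α_k = 1/(k + 2) are stand-ins (in the real algorithm, the weights are the sampled losses Q^{π_k}(s, −y) observed under the current policy):

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                  # toy feature vectors phi(x_i, t_i)
    y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))

    theta = np.zeros(X.shape[1])
    for k in range(10):
        w = np.ones(len(y))                        # stand-in for Q^{pi_k}(s, -y)
        clf = LinearSVC(dual=False).fit(X, y, sample_weight=w)
        theta_new = clf.coef_.ravel()
        alpha_k = 1.0 / (k + 2)                    # assumed mixing schedule
        theta = (1 - alpha_k) * theta + alpha_k * theta_new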
All four procedures iteratively estimate the loss of a policy decision on the data via a state-action loss function and learn a new policy π based on this estimated cost of the decisions. Convergence guarantees typically require the Markov assumption; that is, the process is required to possess a stationary transition distribution P(s_{i+1}|s_i, a_i). Since the transition distribution in fact depends on the entire backlog of time stamps and the duration over which state s_i has been maintained, the Markov assumption is violated to some extent in practice. In addition to that, τ-based loss estimates are sampled from a Poisson process. In each iteration, π is learned to minimize sampled and inherently random losses of decisions. Thus, convergence to a robust solution becomes unlikely. In contrast, the Erlang learning model directly minimizes the τ-loss by assigning a rate limit. The rate limit implies an expectation of decisions. In other words, the τ-based loss is minimized without explicitly estimating the loss of any decisions that are implied by the rate limit. The convexity of the risk functional in Optimization Problem 1 guarantees convergence to the global optimum.
5 Application
The goal of our experiments is to study the relative benefits of the Erlang learning model and the
four reference methods over a number of loss functions. The subject of our experimentation is the
problem of suppressing spam and phishing messages sent from abusive accounts registered at a
large email service provider. We sample approximately 1,000,000 emails sent from approximately
[Figure 1 appears here: three panels (c− = 5, c+ = 1; c− = 10, c+ = 1; c− = 20, c+ = 1), each plotting average loss against c_λ for ELM, It-SVM_it, It-SVM_ub, PG_ub, PG_it, and SVM.]

Figure 1: Average loss on test data depending on the influence of the rate loss c_λ for different immediate loss constants c− and c+.
10,000 randomly selected accounts over two days and label them automatically based on information passed by other email service providers via feedback loops (in most cases triggered by "report spam" buttons). Because of this automatic labeling process, the labels contain a certain amount of noise. Feature mapping φ determines a vector of moving average and moving variance estimates of several attributes of the email stream. These attributes measure the frequency of subject changes and sender address changes, and the number of recipients. Other attributes indicate whether the subject line or the sender address has been observed before within a window of time. Additionally, a moving average estimate of the rate λ is used as a feature. Finally, other attributes quantify the size of the message and the score returned by a content-based spam filter employed by the email service.
We implemented the baseline methods that were described in Section 4, namely the iterative SVM methods It-SVM_ub and It-SVM_it and the policy gradient methods PG_ub and PG_it. Additionally, we used a standard support vector machine classifier SVM with weights of misclassification corresponding to the costs defined in Equation 1. The Erlang learning model is denoted ELM in the plots. Linear decision functions were used for all baselines.
In our experiments, we assume a cost that is quadratic in the outbound rate. That is, ℓ(1, r_π(t'|x, t)) = c_λ · r_π(t'|x, t)², with c_λ > 0 determining the influence of the rate loss on the overall loss. The time interval τ was chosen to be 100 seconds. Regularizer Ω(θ) as in Optimization Problem 1 is the commonly used squared l2-norm, Ω(θ) = ‖θ‖₂².
We evaluated our method for different costs of incorrectly classified non-spam emails (c−), incorrectly classified spam emails (c+) (see the definition of ℓ in Equation 1), and rate of outbound spam messages (c_λ). For each setting, we repeated 100 runs; each run used about 50% of the data, chosen at random, as training data and the remaining part as test data. Splits were chosen such that there were equally many spam episodes in training and test sets. We tuned the regularization parameter η for the Erlang learning model as well as the corresponding regularization parameters of the iterative SVM methods and the standard SVM on a separate tuning set that was split randomly from the training data.
5.1 Results
Figure 1 shows the resulting average loss of the Erlang learning model and the reference methods. Each of the three plots shows loss versus the parameter c_λ, which determines the influence of the rate loss on the overall loss. The left plot shows the loss for c− = 5 and c+ = 1, the center plot for (c− = 10, c+ = 1), and the right plot for (c− = 20, c+ = 1).
We can see in Figure 1 that the Erlang learning model outperforms all baseline methods for larger values of c_λ (more influence of the rate-dependent loss on the overall loss) in two of the three settings. For c− = 20 and c+ = 1 (right panel), the performance is comparable to the best baseline method It-SVM_ub; only for the largest shown c_λ = 5 does the ELM outperform this baseline. The iterative classifier It-SVM_ub that uses the approximated state-action loss Q^ub performs uniformly better than It-SVM_it, the iterative SVM method that uses the sampled loss from the previous iteration. It-SVM_it itself surprisingly shows performance very similar to that of the standard SVM method; only for the setting c− = 20 and c+ = 1 in the right panel does this iterative SVM method show superior performance. Both policy gradient methods perform comparably to the Erlang learning model for smaller values of c_λ but deteriorate for larger values.
[Figure 2 appears here. (a) Average loss and standard error for small values of c_λ (c− = 5, c+ = 1), for ELM, It-SVM_it, It-SVM_ub, PG_ub, PG_it, and SVM. (b) Left: Fortet's formula B(exp(θᵀφ), λτ) (Equation 17) and its convex upper bound −log(1 − B(exp(θᵀφ), λτ)) for λτ = 10, plotted against θᵀφ. Right: 1 − B(exp(θᵀφ), λτ) and the respective upper bound −log(B(exp(θᵀφ), λτ)).]
As expected, the iterative SVM and the standard SVM algorithms perform better than the Erlang learning model and the policy gradient models if the influence of the rate-dependent loss is very small. This can best be seen in Figure 2(a). It shows a detail of the results for the setting c− = 5 and c+ = 1, for c_λ ranging only from 0 to 1. This is the expected outcome following the considerations in Section 4. If c_λ is close to 0, the problem approximately reduces to a standard binary classification problem, thus favoring the very good classification performance of support vector machines. However, for larger c_λ the influence of the rate-dependent loss rises and more and more dominates the immediate classification loss ℓ. Consequently, for those cases, which are the important ones in this real-world application, the better rate-loss estimation of the Erlang learning model compared to the baselines leads to better performance.
The average training times for the Erlang learning model and the reference methods are of the same order of magnitude. The SVM algorithm took 14 minutes on average to converge to a solution. The Erlang learning model converged after 44 minutes, and the policy gradient methods took approximately 45 minutes. The training times of the iterative classifier methods were about 60 minutes.
6 Conclusion
We devised a model for sequential decision-making problems in which events are generated by a
Poisson process and the loss may depend on the rate of decision outcomes. Using a throttling policy
that enforces a data-dependent rate-limit, we were able to factor the loss over single events. Applying
a result from queuing theory led us to a closed-form approximation of the immediate event-specific
loss under a rate limit set by a policy. Both parts led to a closed-form convex optimization problem.
Our experiments explored the learning model for the problem of suppressing abuse of an email
service. We observed significant improvements over iterative reinforcement learning baselines. The
model is being employed to this end in the email service provided by web hosting firm STRATO.
It has replaced a procedure of manual deactivation of accounts after inspection triggered by spam
reports.
Acknowledgments
We gratefully acknowledge support from STRATO Rechenzentrum AG and the German Science
Foundation DFG.
References

[1] J.A. Bagnell, S. Kakade, A. Ng, and J. Schneider. Policy search by dynamic programming. Advances in Neural Information Processing Systems, 16, 2004.
[2] D. Blatt and A.O. Hero. From weighted classification to policy search. Advances in Neural Information Processing Systems, 18, 2006.
[3] C. Dimitrakakis and M.G. Lagoudakis. Rollout sampling approximate policy iteration. Machine Learning, 72(3):157–171, 2008.
[4] M. Ghavamzadeh and Y. Engel. Bayesian policy gradient algorithms. Advances in Neural Information Processing Systems, 19, 2007.
[5] D.L. Jagerman, B. Melamed, and W. Willinger. Stochastic modeling of traffic processes. Frontiers in Queueing: Models, Methods and Problems, pages 271–370, 1996.
[6] M.G. Lagoudakis and R. Parr. Reinforcement learning as classification: Leveraging modern classifiers. In Proceedings of the 20th International Conference on Machine Learning, 2003.
[7] J. Langford and B. Zadrozny. Relating reinforcement learning performance to classification performance. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[8] R.S. Sutton, D. McAllester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. Advances in Neural Information Processing Systems, 12, 2000.
Random Conic Pursuit for Semidefinite Programming
Ariel Kleiner
Computer Science Division
University of California
Berkeley, CA 94720
Ali Rahimi
Intel Research Berkeley
Berkeley, CA 94720
[email protected]
Michael I. Jordan
Computer Science Division
University of California
Berkeley, CA 94720
[email protected]
[email protected]
Abstract
We present a novel algorithm, Random Conic Pursuit, that solves semidefinite programs (SDPs) via repeated optimization over randomly selected two-dimensional
subcones of the PSD cone. This scheme is simple, easily implemented, applicable to very general SDPs, scalable, and theoretically interesting. Its advantages
are realized at the expense of an ability to readily compute highly exact solutions,
though useful approximate solutions are easily obtained. This property renders
Random Conic Pursuit of particular interest for machine learning applications, in
which the relevant SDPs are generally based upon random data and so exact minima are often not a priority. Indeed, we present empirical results to this effect for
various SDPs encountered in machine learning; these experiments demonstrate
the potential practical usefulness of Random Conic Pursuit. We also provide a
preliminary analysis that yields insight into the theoretical properties and convergence of the algorithm.
1 Introduction
Many difficult problems have been shown to admit elegant and tractably computable representations
via optimization over the set of positive semidefinite (PSD) matrices. As a result, semidefinite
programs (SDPs) have appeared as the basis for many procedures in machine learning, such as
sparse PCA [8], distance metric learning [24], nonlinear dimensionality reduction [23], multiple
kernel learning [14], multitask learning [19], and matrix completion [2].
While SDPs can be solved in polynomial time, they remain computationally challenging. General-purpose solvers, often based on interior point methods, do exist and readily provide high-accuracy
solutions. However, their memory requirements do not scale well with problem size, and they typically do not allow a fine-grained tradeoff between optimization accuracy and speed, which is often a
desirable tradeoff in machine learning problems that are based on random data. Furthermore, SDPs
in machine learning frequently arise as convex relaxations of problems that are originally computationally intractable, in which case even an exact solution to the SDP yields only an approximate
solution to the original problem, and an approximate SDP solution can once again be quite useful.
Although some SDPs do admit tailored solvers which are fast and scalable (e.g., [17, 3, 7]), deriving and implementing these methods is often challenging, and an easily usable solver that alleviates
these issues has been elusive. This is partly the case because generic first-order methods do not
apply readily to general SDPs.
In this work, we present Random Conic Pursuit, a randomized solver for general SDPs that is simple,
easily implemented, scalable, and of inherent interest due to its novel construction. We consider
general SDPs over R^{d×d} of the form

    min_{X ⪰ 0} f(X)   s.t.   g_j(X) ≤ 0,   j = 1, ..., k,      (1)
where f and the g_j are convex real-valued functions, and ⪰ denotes the ordering induced by the PSD cone. Random Conic Pursuit minimizes the objective function iteratively, repeatedly randomly
sampling a PSD matrix and optimizing over the random two-dimensional subcone given by this
matrix and the current iterate. This construction maintains feasibility while avoiding the computational expense of deterministically finding feasible directions or of projecting into the feasible
set. Furthermore, each iteration is computationally inexpensive, though in exchange we generally
require a relatively large number of iterations. In this regard, Random Conic Pursuit is similar in
spirit to algorithms such as online gradient descent and sequential minimal optimization [20] which
have illustrated that in the machine learning setting, algorithms that take a large number of simple,
inexpensive steps can be surprisingly successful.
The resulting algorithm, despite its simplicity and randomized nature, converges fairly quickly to
useful approximate solutions. Unlike interior point methods, Random Conic Pursuit does not excel
in producing highly exact solutions. However, it is more scalable and provides the ability to trade
off computation for more approximate solutions. In what follows, we present our algorithm in full
detail and demonstrate its empirical behavior and efficacy on various SDPs that arise in machine
learning; we also provide early analytical results that yield insight into its behavior and convergence
properties.
2 Random Conic Pursuit
Random Conic Pursuit (Algorithm 1) solves SDPs of the general form (1) via a sequence of simple two-variable optimizations (2). At each iteration, the algorithm considers the two-dimensional cone spanned by the current iterate, X_t, and a random rank one PSD matrix, Y_t. It selects as its next iterate, X_{t+1}, the point in this cone that minimizes the objective f subject to the constraints g_j(X_{t+1}) ≤ 0 in (1). The distribution of the random matrices is periodically updated based on the current iterate (e.g., to match the current iterate in expectation); these updates yield random matrices that are better matched to the optimum of the SDP at hand.
The two-variable optimization (2) can be solved quickly in general via a two-dimensional bisection search. As a further speedup, for many of the problems that we considered, the two-variable optimization can be altogether short-circuited with a simple check that determines whether the solution X_{t+1} = X_t, with β̂ = 1 and α̂ = 0, is optimal. Additionally, SDPs with a trace constraint tr X = 1 force α tr(Y_t) + β = 1 and therefore require only a one-dimensional optimization.
Two simple guarantees for Random Conic Pursuit are immediate. First, its iterates are feasible for (1) because each iterate is a positive sum of two PSD matrices, and because the constraints g_j of (2) are also those of (1). Second, the objective values decrease monotonically, because α = 0, β = 1 is a feasible solution to (2). We must also note two limitations of Random Conic Pursuit: it does
not admit general equality constraints, and it requires a feasible starting point. Nonetheless, for
many of the SDPs that appear in machine learning, feasible points are easy to identify, and equality
constraints are either absent or fortuitously pose no difficulty.
We can gain further intuition by observing that Random Conic Pursuit's iterates, X_t, are positive weighted sums of random rank one matrices and so lie in the random polyhedral cones

    F_t^x := { ∑_{i=1}^t γ_i x_i x_i' : γ_i ≥ 0 } ⊆ {X : X ⪰ 0}.      (3)
Thus, Random Conic Pursuit optimizes the SDP (1) by greedily optimizing f w.r.t. the g_j constraints within an expanding sequence of random cones {F_t^x}. These cones yield successively better inner approximations of the PSD cone (a basis for which is the set of all rank one matrices) while allowing us to easily ensure that the iterates remain PSD.
In light of this discussion, one might consider approximating the original SDP by sampling a random cone F_n^x in one shot and replacing the constraint X ⪰ 0 in (1) with the simpler linear constraints X ∈ F_n^x. For sufficiently large n, F_n^x would approximate the PSD cone well (see Theorem 2 below), yielding an inner approximation that upper bounds the original SDP; the resulting problem would be easier than the original (e.g., it would become a linear program if the g_j were linear). However, we have found empirically that a very large n is required to obtain good approximations, thus negating any potential performance improvements (e.g., over interior point methods).
Algorithm 1: Random Conic Pursuit [brackets contain a particular, generally effective, sampling scheme]

Input: a problem of the form (1); n ∈ N: number of iterations; X_0: a feasible initial iterate; [κ ∈ (0, 1): numerical stability parameter]
Output: an approximate solution X_n to (1)

    p ← a distribution over R^d   [p ← N(0, Σ) with Σ = (1 − κ)X_0 + κI_d]
    for t ← 1 to n do
        Sample x_t from p and set Y_t ← x_t x_t'
        Set α̂, β̂ to the optimizer of
            min_{α,β ∈ R} f(αY_t + βX_{t−1})
            s.t. g_j(αY_t + βX_{t−1}) ≤ 0,   j = 1 ... k;   α, β ≥ 0      (2)
        Set X_t ← α̂ Y_t + β̂ X_{t−1}
        if α̂ > 0 then update p based on X_t   [p ← N(0, Σ) with Σ = (1 − κ)X_t + κI_d]
    end
    return X_n
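For concreteness, a minimal NumPy sketch of the main loop for the unconstrained case (no g_j); the paper's implementation is in MATLAB, the two-variable solve is replaced by a crude grid-search stand-in, and all names are ours:

    import numpy as np

    def random_conic_pursuit(f, X0, n, kappa=1e-4, rng=None):
        # Sketch of Algorithm 1 for min f(X) s.t. X PSD, with no g_j constraints.
        rng = rng or np.random.default_rng()
        X, d = X0.copy(), X0.shape[0]
        grid = np.linspace(0.0, 2.0, 41)                 # stand-in for solving (2)
        for _ in range(n):
            Sigma = (1 - kappa) * X + kappa * np.eye(d)  # bracketed sampling scheme
            x = rng.multivariate_normal(np.zeros(d), Sigma)
            Y = np.outer(x, x)                           # random rank-one PSD matrix
            # two-variable subproblem: min over alpha, beta >= 0 of f(alpha*Y + beta*X)
            _, alpha, beta = min((f(a * Y + b * X), a, b)
                                 for a in grid for b in grid)
            if alpha > 0:
                X = alpha * Y + beta * X
        return X

For instance, with f(X) = ‖X − A‖²_F for a symmetric A, the iterates move toward the PSD projection of A.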
Random Conic Pursuit successfully resolves this issue by iteratively expanding the random cone F_t^x. As a result, we are able to much more efficiently access large values of n, though we compute a greedy solution within F_n^x rather than a global optimum over the entire cone. This tradeoff is ultimately quite advantageous.
3 Applications and Experiments
We assess the practical convergence and scaling properties of Random Conic Pursuit by applying it
to three different machine learning tasks that rely on SDPs: distance metric learning, sparse PCA,
and maximum variance unfolding. For each, we compare the performance of Random Conic Pursuit
(implemented in MATLAB) to that of a standard and widely used interior point solver, SeDuMi [21]
(via cvx [9]), and to the best available solver which has been customized for each problem.
To evaluate convergence, we first compute a ground-truth solution X* for each problem instance by running the interior point solver with extremely low tolerance. Then, for each algorithm, we plot the normalized objective value errors [f(X_t) − f(X*)]/|f(X*)| of its iterates X_t as a function of the amount of time required to generate each iterate. Additionally, for each problem, we plot
the value of an application-specific metric for each iterate. These metrics provide a measure of
the practical implications of obtaining SDP solutions which are suboptimal to varying degrees. We
evaluate scaling with problem dimensionality by running the various solvers on problems of different
dimensionalities and computing various metrics on the solver runs as described below for each
experiment. Unless otherwise noted, we use the bracketed sampling scheme given in Algorithm 1
with κ = 10⁻⁴ for all runs of Random Conic Pursuit.
3.1 Metric Learning
Given a set of datapoints in R^d and a pairwise similarity relation over them, metric learning extracts a Mahalanobis distance d_A(x, y) = √((x − y)' A (x − y)) under which similar points are nearby and dissimilar points are far apart [24]. Let S be the set of similar pairs of datapoints, and let S̄ be its complement. The metric learning SDP, for A ∈ R^{d×d} and C = ∑_{(i,j)∈S} (x_i − x_j)(x_i − x_j)', is

    min_{A ⪰ 0} tr(CA)   s.t.   ∑_{(i,j)∈S̄} d_A(x_i, x_j) ≥ 1.      (4)
To apply Random Conic Pursuit, X_0 is set to a feasible scaled identity matrix. We solve the two-variable optimization (2) via a double bisection search: at each iteration, β is optimized out with a one-variable bisection search over β given fixed α, yielding a function of α only. This resulting function is itself then optimized using a bisection search over α.
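The nested search can be sketched as follows; a golden-section step stands in for bisection, the feasibility constraint of (4) is omitted for brevity, and unimodality of the one-dimensional slices is assumed:

    import numpy as np

    def argmin_1d(obj, lo, hi, iters=60):
        # golden-section search; assumes obj is unimodal on [lo, hi]
        phi = (np.sqrt(5.0) - 1.0) / 2.0
        a, b = lo, hi
        c, d = b - phi * (b - a), a + phi * (b - a)
        fc, fd = obj(c), obj(d)
        for _ in range(iters):
            if fc < fd:
                b, d, fd = d, c, fc
                c = b - phi * (b - a)
                fc = obj(c)
            else:
                a, c, fc = c, d, fd
                d = a + phi * (b - a)
                fd = obj(d)
        return 0.5 * (a + b)

    def two_var_step(obj, alpha_hi, beta_hi):
        # obj(alpha, beta) evaluates the objective at alpha*Y_t + beta*X_{t-1};
        # beta is optimized out for each fixed alpha, then alpha is searched.
        best_beta = lambda a: argmin_1d(lambda b: obj(a, b), 0.0, beta_hi)
        alpha = argmin_1d(lambda a: obj(a, best_beta(a)), 0.0, alpha_hi)
        return alpha, best_beta(alpha)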
[Figure 1 plots appear here: normalized objective value error (left) and pairwise distance quality Q (right) versus time on the ionosphere data, for Interior Point, Random Pursuit, and Projected Gradient.]

d    alg  f after 2 hrs*      time to Q > 0.99
100  IP   3.7e-9              636.3
100  RCP  2.8e-7, 3.0e-7      142.7, 148.4
100  PG   1.1e-5              42.3
200  RCP  5.1e-8, 6.1e-8      529.1, 714.8
200  PG   1.6e-5              207.7
300  RCP  5.4e-8, 6.5e-8      729.1, 1774.7
300  PG   2.0e-5              1095.8
400  RCP  7.2e-8, 1.0e-8      2128.4, 2227.2
400  PG   2.4e-5              1143.3

Figure 1: Results for metric learning. (plots) Trajectories of objective value error (left) and Q (right) on UCI ionosphere data. (table) Scaling experiments on synthetic data (IP = interior point, RCP = Random Conic Pursuit, PG = projected gradient), with two trials per d for RCP and times in seconds. *For d = 100, the third column shows f after 20 minutes.
As the application-specific metric for this problem, we measure the extent to which the metric learning goal has been achieved: similar datapoints should be near each other, and dissimilar datapoints should be farther away. We adopt the following metric of quality of a solution matrix X, where ν = ∑_i |{j : (i, j) ∈ S}| · |{l : (i, l) ∈ S̄}| and 1[·] is the indicator function:

    Q(X) = (1/ν) ∑_i ∑_{j:(i,j)∈S} ∑_{l:(i,l)∈S̄} 1[d_ij(X) < d_il(X)].
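A direct transcription of Q, mainly to fix notation (we treat every pair (i, l) with l ≠ i and (i, l) ∉ S as dissimilar; that handling of S̄, like all naming, is our assumption):

    import numpy as np

    def quality_Q(X, pts, S):
        # Fraction of (similar j, dissimilar l) pairs that X orders correctly.
        d = lambda i, j: np.sqrt((pts[i] - pts[j]) @ X @ (pts[i] - pts[j]))
        n = len(pts)
        correct = total = 0
        for i in range(n):
            sims = [j for j in range(n) if (i, j) in S]
            diss = [l for l in range(n) if l != i and (i, l) not in S]
            for j in sims:
                for l in diss:
                    total += 1
                    correct += d(i, j) < d(i, l)
        return correct / total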
To examine convergence behavior, we first apply the metric learning SDP to the UCI ionosphere
dataset, which has d = 34 and 351 datapoints with two distinct labels (S contains pairs with identical
labels). We selected this dataset from among those used in [24] because it is among the datasets
which have the largest dimensionality and experience the greatest impact from metric learning in that work's clustering application. Because the interior point solver scales prohibitively badly in the number of datapoints, we subsampled the dataset to yield 4 × 34 = 136 datapoints.
To evaluate scaling, we use synthetic data in order to allow variation of d. To generate a d-dimensional data set, we first generate mixture centers by applying a random rotation to the elements of C1 = {(−1, 1), (−1, −1)} and C2 = {(1, 1), (1, −1)}. We then sample each datapoint x_i ∈ R^d from N(0, I_d) and assign it uniformly at random to one of two clusters. Finally, we set the first two components of x_i to a random element of C_k if x_i was assigned to cluster k ∈ {1, 2}; these two components are perturbed by adding a sample from N(0, 0.25 I_2).
The best known customized solver for the metric learning SDP is a projected gradient algorithm [24],
for which we used code available from the author's website.
Figure 1 shows the results of our experiments. The two trajectory plots, for an ionosphere data
problem instance, show that Random Conic Pursuit converges to a very high-quality solution (with
high Q and negligible objective value error) significantly faster than interior point. Additionally,
our performance is comparable to that of the projected gradient method which has been customized
for this task. The table in Figure 1 illustrates scaling for increasing d. Interior point scales badly
in part because parsing the SDP becomes impracticably slow for d significantly larger than 100.
Nonetheless, Random Conic Pursuit scales well beyond that point, continuing to return solutions
with high Q in reasonable time. On this synthetic data, projected gradient appears to reach high
Q somewhat more quickly, though Random Conic Pursuit consistently yields significantly better
objective values, indicating better-quality solutions.
3.2 Sparse PCA
Sparse PCA seeks to find a sparse unit-length vector that maximizes x'Ax for a given data covariance matrix A. This problem can be relaxed to the following SDP [8], for X, A ∈ R^{d×d}:

    min_{X ⪰ 0} ρ 1'|X|1 − tr(AX)   s.t.   tr(X) = 1,      (5)

where the scalar ρ > 0 controls the solution's sparsity. A subsequent rounding step returns the dominant eigenvector of the SDP's solution, yielding a sparse principal component.
We use the colon cancer dataset [1] that has been used frequently in past studies of sparse PCA and contains 2,000 microarray readings for 62 subjects. The goal is to identify a small number of microarray cells that capture the greatest variance in the dataset. We vary d by subsampling the readings and use ρ = 0.2 (large enough to yield sparse solutions) for all experiments.

[Figure 2 plots appear here: normalized objective value error (left) and top-eigenvector sparsity (right) versus time, for Interior Point, Random Pursuit, and DSPCA.]

d    alg    f after 4 hrs     sparsity after 4 hrs
120  IP     -10.25            0.55
120  RCP    -9.98, -10.02     0.47, 0.45
120  DSPCA  -10.24            0.55
200  IP     failed            failed
200  RCP    -10.30, -10.27    0.51, 0.50
200  DSPCA  -11.07            0.64
300  IP     failed            failed
300  RCP    -9.39, -9.29      0.51, 0.51
300  DSPCA  -11.52            0.69
500  IP     failed            failed
500  RCP    -6.95, -6.54      0.53, 0.50
500  DSPCA  -11.61            0.78

Figure 2: Results for sparse PCA. All solvers quickly yield similar captured variance (not shown here). (plots) Trajectories of objective value error (left) and sparsity (right), for a problem with d = 100. (table) Scaling experiments (IP = interior point, RCP = Random Conic Pursuit), with two trials per d for RCP.
To apply Random Conic Pursuit, we set X_0 = A/tr(A). The trace constraint in (5) implies that tr(X_{t−1}) = 1 and so tr(αY_t + βX_{t−1}) = α tr(Y_t) + β = 1 in (2). Thus, we can simplify the two-variable optimization (2) to a one-variable optimization, which we solve by bisection search.
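A sketch of this reduction, reusing the golden-section helper argmin_1d from the metric-learning sketch above (the bracket on α keeps β nonnegative):

    import numpy as np

    def pca_line_search(f, Y, X):
        # One-variable solve of (2) under tr(alpha*Y + beta*X) = 1:
        # beta = 1 - alpha * tr(Y), so only alpha remains free.
        trY = np.trace(Y)
        obj = lambda a: f(a * Y + (1.0 - a * trY) * X)
        alpha = argmin_1d(obj, 0.0, 1.0 / trY)   # alpha in [0, 1/tr(Y)]
        return alpha, 1.0 - alpha * trY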
The fastest available customized solver for the sparse PCA SDP is an adaptation of Nesterov's smooth optimization procedure [8] (denoted by DSPCA), for which we used a MATLAB implementation with heavy MEX optimizations that is downloadable from the author's web site.
We compute two application-specific metrics which capture the two goals of sparse PCA: high captured variance and high sparsity. Given the top eigenvector u of a solution matrix X, its captured variance is u'Au, and its sparsity is given by (1/d) ∑_j 1[|u_j| < τ]; we take τ = 10⁻³ based on qualitative inspection of the raw microarray data covariance matrix A.
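The rounding step and both metrics in a few lines (names ours):

    import numpy as np

    def pca_metrics(X_sdp, A, tau=1e-3):
        # Dominant eigenvector of the SDP solution, its captured variance u'Au,
        # and its sparsity at threshold tau.
        w, V = np.linalg.eigh(X_sdp)   # eigenvalues in ascending order
        u = V[:, -1]                   # top eigenvector
        return u, u @ A @ u, np.mean(np.abs(u) < tau)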
The results of our experiments are shown in Figure 2. As seen in the two plots, on a problem instance
with d = 100, Random Conic Pursuit quickly achieves an objective value within 4% of optimal and
thereafter continues to converge, albeit more slowly; we also quickly achieve fairly high sparsity
(compared to that of the exact SDP optimum). In contrast, interior point is able to achieve lower
objective value and even higher sparsity within the timeframe shown, but, unlike Random Conic
Pursuit, it does not provide the option of spending less time to achieve a solution which is still
relatively sparse. All of the solvers quickly achieve very similar captured variances, which are not
shown. DSPCA is extremely efficient, requiring much less time than its counterparts to find nearly
exact solutions. However, that procedure is highly customized (via several pages of derivation and an
optimized implementation), whereas Random Conic Pursuit and interior point are general-purpose.
The table in Figure 2 illustrates scaling by reporting achieved objective values and sparsities after the solvers have each run for 4 hours. Interior point fails due to memory requirements for d > 130, whereas Random Conic Pursuit continues to function and provide useful solutions, as seen from the achieved sparsity values, which are much larger than those of the raw data covariance matrix. Again, DSPCA continues to be extremely efficient.
3.3 Maximum Variance Unfolding (MVU)
MVU searches for a kernel matrix that embeds high-dimensional input data into a lower-dimensional manifold [23]. Given m data points and a neighborhood relation i ∼ j between them, it forms their centered and normalized Gram matrix G ∈ R^{m×m} and the squared Euclidean distances d²_ij = G_ii + G_jj − 2G_ij. The desired kernel matrix is the solution of the following SDP, where X ∈ R^{m×m} and the scalar ν > 0 controls the dimensionality of the resulting embedding:

    max_{X ⪰ 0} tr(X) − ν ∑_{i∼j} (X_ii + X_jj − 2X_ij − d²_ij)²   s.t.   1'X1 = 0.      (6)
[Figure 3 plots appear here: objective value versus time for m = 200 (left, Interior Point and Random Pursuit) and m = 800 (right, Random Pursuit only).]

m    alg   f after convergence   seconds to f > 0.99 f̂
40   IP    23.4                  0.4
40   RCP   22.83 (0.03)          0.5 (0.03)
40   GD    23.2                  5.4
200  IP    2972.6                12.4
200  RCP   2921.3 (1.4)          6.6 (0.8)
200  GD    2943.3                965.4
400  IP    12255.6               97.1
400  RCP   12207.96 (36.58)      26.3 (9.8)
800  IP    failed                failed
800  RCP   71231.1 (2185.7)      115.4 (29.2)

Figure 3: Results for MVU. (plots) Trajectories of objective value for m = 200 (left) and m = 800 (right). (table) Scaling experiments showing convergence as a function of m (IP = interior point, RCP = Random Conic Pursuit, GD = gradient descent).

To apply Random Conic Pursuit, we set X_0 = G and use the general sampling formulation in Algorithm 1 by setting p = N(0, Π(∇f(X_t))) in the initialization (i.e., t = 0) and update steps, where Π truncates negative eigenvalues of its argument to zero. This scheme empirically yields improved performance for the MVU problem as compared to the bracketed sampling scheme in Algorithm 1. To handle the equality constraint, each Y_t is first transformed to Ỹ_t = (I − 11'/m) Y_t (I − 11'/m), which preserves PSDness and ensures feasibility. The two-variable optimization (2) proceeds as before on Ỹ_t and becomes a two-variable quadratic program, which can be solved analytically.
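Both transforms used above are a few lines each; a sketch (names ours):

    import numpy as np

    def center_psd(Y):
        # (I - 11'/m) Y (I - 11'/m): enforces 1'Y1 = 0 while preserving PSDness.
        m = Y.shape[0]
        P = np.eye(m) - np.ones((m, m)) / m
        return P @ Y @ P

    def truncate_psd(M):
        # Pi(M): zero out the negative eigenvalues of a symmetric matrix.
        w, V = np.linalg.eigh(M)
        return (V * np.maximum(w, 0.0)) @ V.T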
MVU also admits a gradient descent algorithm, which serves as a straw-man large-scale solver for
the MVU SDP. At each iteration, the step size is picked by a line search, and the spectrum of the
iterate is truncated to maintain PSDness. We use G as the initial iterate.
To generate data, we randomly sample m points from the surface of a synthetic swiss roll [23]; we set ν = 1. To quantify the amount of time it takes a solver to converge, we run it until its objective curve appears qualitatively flat and declare the convergence point to be the earliest iterate whose objective is within 1% of the best objective value seen so far (which we denote by f̂).
Figure 3 illustrates that Random Conic Pursuit's objective values converge quickly, and on problems where the interior point solver achieves the optimum, Random Conic Pursuit nearly achieves that optimum. The interior point solver runs out of memory when m > 400 and also fails on smaller problems if its tolerance parameter is not tuned. Random Conic Pursuit easily runs on larger problems for which interior point fails, and for smaller problems its running time is within a small factor of that of the interior point solver; Random Conic Pursuit typically converges within 1000 iterations. The gradient descent solver is orders of magnitude slower than the other solvers and failed to converge to a meaningful solution for m ≥ 400 even after 2000 iterations (which took 8 hours).
4 Analysis
Analysis of Random Conic Pursuit is complicated by the procedure's use of randomness and its handling of the constraints g_j ≤ 0 explicitly in the sub-problem (2), rather than via penalty functions or projections. Nonetheless, we are able to obtain useful insights by first analyzing a simpler setting having only a PSD constraint. We thus obtain a bound on the rate at which the objective values of Random Conic Pursuit's iterates converge to the SDP's optimal value when the problem has no constraints of the form g_j ≤ 0:
Theorem 1 (Convergence rate of Random Conic Pursuit when f is weakly convex and k = 0). Let f : R^{d×d} → R be a convex differentiable function with L-Lipschitz gradients such that the minimum of the following optimization problem is attained at some X*:

    min_{X ⪰ 0} f(X).      (7)

Let X_1, ..., X_t be the iterates of Algorithm 1 when applied to this problem starting at iterate X_0 (using the bracketed sampling scheme given in the algorithm specification), and suppose ‖X_t − X*‖ is bounded. Then

    E f(X_t) − f(X*) ≤ (1/t) · max(σL, f(X_0) − f(X*)),      (8)

for some constant σ that does not depend on t.
Proof. We prove that equation (8) holds in general for any X*, and thus for the optimizer of f in particular. The convexity of f implies the following linear lower bound on f(X) for any X and Y:

    f(X) ≥ f(Y) + ⟨∇f(Y), X − Y⟩.      (9)

The Lipschitz assumption on the gradient of f implies the following quadratic upper bound on f(X) for any X and Y [18]:

    f(X) ≤ f(Y) + ⟨∇f(Y), X − Y⟩ + (L/2)‖X − Y‖².      (10)

Define the random variable Ỹ_t := φ_t(Y_t) Y_t with φ_t a positive function that ensures E Ỹ_t = X*. It suffices to set φ_t = q(Y)/p̃(Y), where p̃ is the distribution of Y_t and q is any distribution with mean X*. In particular, the choice Ỹ_t := φ_t(x_t) x_t x_t' with φ_t(x) = N(x|0, X*)/N(x|0, Σ_t) satisfies this.

At iteration t, Algorithm 1 produces α_t and β_t so that X_{t+1} := α_t Y_t + β_t X_t minimizes f(X_{t+1}). We will bound the defect f(X_{t+1}) − f(X*) at each iteration by sub-optimally picking ᾱ_t = 1/t, β̄_t = 1 − 1/t, and X̄_{t+1} = β̄_t X_t + ᾱ_t φ_t(Y_t) Y_t = β̄_t X_t + ᾱ_t Ỹ_t. Conditioned on X_t, we have

    E f(X_{t+1}) − f(X*) ≤ E f(β̄_t X_t + ᾱ_t Ỹ_t) − f(X*) = E f(X_t − (1/t)(X_t − Ỹ_t)) − f(X*)      (11)
        ≤ f(X_t) − f(X*) + E ⟨∇f(X_t), (1/t)(Ỹ_t − X_t)⟩ + (L/(2t²)) E‖X_t − Ỹ_t‖²      (12)
        = f(X_t) − f(X*) + (1/t) ⟨∇f(X_t), X* − X_t⟩ + (L/(2t²)) E‖X_t − Ỹ_t‖²      (13)
        ≤ f(X_t) − f(X*) + (1/t)(f(X*) − f(X_t)) + (L/(2t²)) E‖X_t − Ỹ_t‖²      (14)
        = (1 − 1/t)(f(X_t) − f(X*)) + (L/(2t²)) E‖X_t − Ỹ_t‖².      (15)

The first inequality follows by the suboptimality of ᾱ_t and β̄_t, the second by Equation (10), and the third by (9).

Define e_t := E f(X_t) − f(X*). The term E‖Ỹ_t − X_t‖² is bounded above by some absolute constant σ because E‖Ỹ_t − X_t‖² = E‖Ỹ_t − X*‖² + ‖X_t − X*‖². The first term is bounded because it is the variance of Ỹ_t, and the second term is bounded by assumption. Taking expectation over X_t gives the bound e_{t+1} ≤ (1 − 1/t) e_t + Lσ/(2t²), which is solved by e_t = (1/t) max(σL, f(X_0) − f(X*)) [16].
Despite the extremely simple and randomized nature of Random Conic Pursuit, the theorem guarantees that its objective values converge at the rate O(1/t) on an important subclass of SDPs. We omit here some readily available extensions: for example, the probability that a trajectory of iterates violates the above rate can be bounded by noting that the iterates' objective values behave as a finite-difference sub-martingale. Additionally, the theorem and proof could be generalized to hold for a broader class of sampling schemes.
Directly characterizing the convergence of Random Conic Pursuit on problems with constraints appears to be significantly more difficult and seems to require introduction of new quantities depending on the constraint set (e.g., condition number of the constraint set and its overlap with the PSD cone) whose implications for the algorithm are difficult to characterize explicitly with respect to d and the properties of the g_j, X*, and the Y_t sampling distribution. Indeed, it would be useful to better understand the limitations of Random Conic Pursuit. As noted above, the procedure cannot readily accommodate general equality constraints; furthermore, for some constraint sets, sampling only a rank one Y_t at each iteration could conceivably cause the iterates to become trapped at a sub-optimal boundary point (this could be alleviated by sampling higher rank Y_t). A more general analysis is the subject of continuing work, though our experiments confirm empirically that we realize usefully fast convergence of Random Conic Pursuit even when it is applied to a variety of constrained SDPs.
We obtain a different analytical perspective by recalling that Random Conic Pursuit computes a solution within the random polyhedral cone F_n^x, defined in (3) above. The distance between this cone and the optimal matrix X* is closely related to the quality of solutions produced by Random Conic Pursuit. The following theorem characterizes the distance between a sampled cone F_n^x and any fixed X* in the PSD cone:
Theorem 2. Let X* ≻ 0 be a fixed positive definite matrix, and let x_1, ..., x_n ∈ R^d be drawn i.i.d. from N(0, Σ) with Σ ⪰ X*. Then, for any δ > 0, with probability at least 1 − δ,

    min_{X ∈ F_n^x} ‖X − X*‖ ≤ ( (1 + √(2 log(1/δ))) / √n ) · c(Σ, X*),

where c(Σ, X*) is a constant depending only on Σ and X* (through Σ⁻¹ and X*).
See supplementary materials for the proof. As expected, F_n^x provides a progressively better approximation to the PSD cone (with high probability) as n grows. Furthermore, the rate at which this occurs depends on X* and its relationship to Σ; as the latter becomes better matched to the former, smaller values of n are required to achieve an approximation of given quality.
The constant σ in Theorem 1 can hide a dependence on the dimensionality of the problem d, though the proof of Theorem 2 helps to elucidate the dependence of σ on d and X* for the particular case when Σ does not vary over time (the constants in Theorem 2 arise from bounding ‖φ_t(x_t) x_t x_t'‖).
A potential concern regarding both of the above theorems is the possibility of extremely adverse dependence of their constants on the dimensionality d and the properties (e.g., condition number) of X*. However, our empirical results in Section 3 show that Random Conic Pursuit does indeed decrease the objective function usefully quickly on real problems with relatively large d and solution matrices X* which are rank one, a case predicted by the analysis to be among the most difficult.
5 Related Work
Random Conic Pursuit and the analyses above are related to a number of existing optimization and
sampling algorithms.
Our procedure is closely related to feasible direction methods [22], which move along descent directions in the feasible set defined by the constraints at the current iterate. Cutting plane methods [11],
when applied to some SDPs, solve a linear program obtained by replacing the PSD constraint with
a polyhedral constraint. Random Conic Pursuit overcomes the difficulty of finding feasible descent
directions or cutting planes, respectively, by sampling directions randomly and also allowing the
current iterate to be rescaled.
Pursuit-based optimization methods [6, 13] return a solution within the convex hull of an a priori specified convenient set of points M. At each iteration, they refine their solution to a point between the current iterate and a point in M. The main burden in these methods is to select a near-optimal point in M at each iteration. For SDPs having only a trace equality constraint and with M the set of rank one PSD matrices, Hazan [10] shows that such points in M can be found via an eigenvalue computation, thereby obtaining a convergence rate of O(1/t). In contrast, our method selects steps randomly and still obtains a rate of O(1/t) in the unconstrained case.
The Hit-and-Run algorithm for sampling from convex bodies can be combined with simulated annealing to solve SDPs [15]. In this configuration, similarly to Random Conic Pursuit, it conducts a
search along random directions whose distribution is adapted over time.
Finally, whereas Random Conic Pursuit utilizes a randomized polyhedral inner approximation of the PSD cone, the work of Calafiore and Campi [5] yields a randomized outer approximation to the PSD cone obtained by replacing the PSD constraint X ⪰ 0 with a set of sampled linear inequality constraints. It can be shown that for linear SDPs, the dual of the interior LP relaxation is identical to the exterior LP relaxation of the dual of the SDP. Empirically, however, this outer relaxation requires impractically many sampled constraints to ensure that the problem remains bounded and yields a good-quality solution.
6 Conclusion
We have presented Random Conic Pursuit, a simple, easily implemented randomized solver for
general SDPs. Unlike interior point methods, our procedure does not excel at producing highly exact
solutions. However, it is more scalable and provides useful approximate solutions fairly quickly,
characteristics that are often desirable in machine learning applications. This fact is illustrated by
our experiments on three different machine learning tasks based on SDPs; we have also provided a
preliminary analysis yielding further insight into Random Conic Pursuit.
Acknowledgments
We are grateful to Guillaume Obozinski for early discussions that motivated this line of work.
References

[1] U. Alon, N. Barkai, D. A. Notterman, K. Gish, S. Ybarra, D. Mack, and A. J. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. Proc. Natl. Acad. Sci. USA, 96:6745–6750, June 1999.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] S. Burer and R.D.C. Monteiro. Local minima and convergence in low-rank semidefinite programming. Mathematical Programming, 103(3):427–444, 2005.
[4] S. Burer, R.D.C. Monteiro, and Y. Zhang. A computational study of a gradient-based log-barrier algorithm for a class of large-scale SDPs. Mathematical Programming, 95(2):359–379, 2003.
[5] G. Calafiore and M.C. Campi. Uncertain convex programs: randomized solutions and confidence levels. Mathematical Programming, 102(1):25–46, 2005.
[6] K. Clarkson. Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. In Symposium on Discrete Algorithms (SODA), 2008.
[7] A. d'Aspremont. Subsampling algorithms for semidefinite programming. Technical Report 0803.1990, arXiv, 2009.
[8] A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. SIAM Review, 49(3):434–448, 2007.
[9] M. Grant and S. Boyd. CVX: MATLAB software for disciplined convex programming, version 1.21. http://cvxr.com/cvx, May 2010.
[10] E. Hazan. Sparse approximate solutions to semidefinite programs. In Latin American Conference on Theoretical Informatics, pages 306–316, 2008.
[11] C. Helmberg. A cutting plane algorithm for large scale semidefinite relaxations. In Martin Grötschel, editor, The Sharpest Cut, chapter 15. MPS/SIAM Series on Optimization, 2001.
[12] C. Helmberg and F. Rendl. A spectral bundle method for semidefinite programming. SIAM Journal on Optimization, 10(3):673–696, 1999.
[13] L. K. Jones. A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. The Annals of Statistics, 20(1):608–613, March 1992.
[14] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research (JMLR), 5:27–72, December 2004.
[15] L. Lovász and S. Vempala. Fast algorithms for logconcave functions: sampling, rounding, integration and optimization. In Foundations of Computer Science (FOCS), 2006.
[16] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[17] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, May 2005.
[18] Y. Nesterov. Smoothing technique and its applications in semidefinite optimization. Mathematical Programming, 110(2):245–259, July 2007.
[19] G. Obozinski, B. Taskar, and M. I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, pages 1573–1375, 2009.
[20] J. Platt. Using sparseness and analytic QP to speed training of support vector machines. In Advances in Neural Information Processing Systems (NIPS), 1999.
[21] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, special issue on interior point methods, 11-12:625–653, 1999.
[22] W. Sun and Y. Yuan. Optimization Theory and Methods: Nonlinear Programming. Springer Optimization and Its Applications, 2006.
[23] K. Q. Weinberger, F. Sha, Q. Zhu, and L. K. Saul. Graph Laplacian regularization for large-scale semidefinite programming. In Advances in Neural Information Processing Systems (NIPS), 2006.
[24] E. Xing, A. Ng, M. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In Advances in Neural Information Processing Systems (NIPS), 2003.
3,343 | 4,027 | Label Embedding Trees for Large Multi-Class Tasks
Samy Bengio(1)
Jason Weston(1)
David Grangier(2)
(1)
Google Research, New York, NY
{bengio, jweston}@google.com
(2)
NEC Labs America, Princeton, NJ
{dgrangier}@nec-labs.com
Abstract
Multi-class classification becomes challenging at test time when the number of
classes is very large and testing against every possible class can become computationally infeasible. This problem can be alleviated by imposing (or learning)
a structure over the set of classes. We propose an algorithm for learning a tree structure of classifiers which, by optimizing the overall tree loss, provides superior
accuracy to existing tree labeling methods. We also propose a method that learns
to embed labels in a low dimensional space that is faster than non-embedding approaches and has superior accuracy to existing embedding approaches. Finally
we combine the two ideas resulting in the label embedding tree that outperforms
alternative methods including One-vs-Rest while being orders of magnitude faster.
1 Introduction
Datasets available for prediction tasks are growing over time, resulting in increasing scale in all their
measurable dimensions: separate from the issue of the growing number of examples m and features
d, they are also growing in the number of classes k. Current multi-class applications such as web
advertising [6], textual document categorization [11] or image annotation [12] have tens or hundreds
of thousands of classes, and these datasets are still growing. This evolution is challenging traditional
approaches [1] whose test time grows at least linearly with k.
At training time, a practical constraint is that learning should be feasible, i.e. it should not take
more than a few days, and must work with the memory and disk space requirements of the available
hardware. Most algorithms' training time, at best, linearly increases with m, d and k; algorithms
that are quadratic or worse with respect to m or d are usually discarded by practitioners working on
real large scale tasks. At testing time, depending on the application, very specific time constraints
are necessary, usually measured in milliseconds, for example when a real-time response is required
or a large number of records need to be processed. Moreover, memory usage restrictions may also
apply. Classical approaches such as One-vs-Rest are at least O(kd) in both speed (of testing a single
example) and memory. This is prohibitive for large scale problems [6, 12, 26].
In this work, we focus on algorithms that have a classification speed sublinear at testing time in k as
well as having limited dependence on d, with best-case complexity O(d_e(log k + d)) where d_e ≪ d
and d_e ≪ k. In experiments we observe no loss in accuracy compared to methods that are O(kd);
further, memory consumption is reduced from O(kd) to O(kd_e). Our approach rests on two main
ideas: firstly, an algorithm for learning a label tree: each node makes a prediction of the subset of
labels to be considered by its children, thus decreasing the number of labels k at a logarithmic rate
until a prediction is reached. We provide a novel algorithm that both learns the sets of labels at each
node, and the predictors at the nodes to optimize the overall tree loss, and show that this approach is
superior to existing tree-based approaches [7, 6] which typically lose accuracy compared to O(kd)
approaches. Balanced label trees have O(d log k) complexity as the predictor at each node is still
Algorithm 1 Label Tree Prediction Algorithm
Input: test example x, parameters T.
Let s = 0.                                         - Start at the root node.
repeat
  Let s = argmax_{c:(s,c)∈E} f_c(x).               - Traverse to the most confident child.
until |ℓ_s| = 1                                    - Until this uniquely defines a single label.
Return ℓ_s.
linear in d. Our second main idea is to learn an embedding of the labels into a space of dimension
d_e that again still optimizes the overall tree loss. Hence, we are required at test time to: (1) map
the test example into the label embedding space with cost O(dd_e) and then (2) predict using the label
tree, resulting in our overall cost O(d_e(log k + d)). We also show that our label embedding approach
outperforms other recently proposed label embedding approaches such as compressed sensing [17].
The rest of the paper is organized as follows. Label trees are discussed and label tree learning
algorithms are proposed in Section 2. Label embeddings are presented in Section 3. Related prior
work is presented in Section 4. An experimental study on three large tasks is given in Section 5
showing the good performance of our proposed techniques. Finally, Section 6 concludes.
2 Label Trees
A label tree is a tree T = (N, E, F, L) with n + 1 indexed nodes N = {0, ..., n}, a set of edges E = {(p_1, c_1), ..., (p_|E|, c_|E|)} which are ordered pairs of parent and child node indices, label predictors F = {f_1, ..., f_n} and label sets L = {ℓ_0, ..., ℓ_n} associated to each node. The root node is labeled with index 0. The edges E are such that all other nodes have one parent, but they can have an arbitrary number of children (but still in all cases |E| = n). The label sets indicate the set of labels to which a point should belong if it arrives at the given node, and progress from generic to specific along the tree, i.e. the root label set contains all classes, |ℓ_0| = k, and each child label set is a subset of its parent label set with ℓ_p = ∪_{(p,c)∈E} ℓ_c. We differentiate between disjoint label trees, where there are only k leaf nodes, one per class, and hence any two nodes i and j at the same depth cannot share any labels, ℓ_i ∩ ℓ_j = ∅, and joint label trees that can have more than k leaf nodes.
Classifying an example with the label tree is achieved by applying Algorithm 1. Prediction begins at the root node (s = 0) and for each edge leading to a child (s, c) ∈ E one computes the score of the label predictor f_c(x), which predicts whether the example x belongs to the set of labels ℓ_c. One takes the most confident prediction, traverses to that child node, and then repeats the process. Classification is complete when one arrives at a node that identifies only a single label, which is the predicted class.
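For concreteness, a minimal Python sketch of this prediction loop (the children, W and label_sets structures are hypothetical stand-ins for the parameters T, not the authors' implementation):

import numpy as np

def label_tree_predict(x, children, W, label_sets):
    """Algorithm 1: start at the root, repeatedly move to the most
    confident child, and stop once the node's label set is a singleton.
    children[s]   -- list of child indices of node s (the edges E)
    W[c]          -- weight vector of the linear predictor f_c(x) = W[c] . x
    label_sets[s] -- set of labels associated with node s
    """
    s = 0  # root node
    while len(label_sets[s]) > 1:
        # traverse to the child whose predictor is most confident
        s = max(children[s], key=lambda c: np.dot(W[c], x))
    return next(iter(label_sets[s]))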
Instances of label trees have been used in the literature before with various methods for choosing
the parameters (N, E, F, L). Due to the difficulty of learning, many methods make approximations
such as a random choice of E and optimization of F that does not take into account the overall loss
of the entire system leading to suboptimal performance (see [7] for a discussion). Our goal is to
provide an algorithm to learn these parameters to optimize the overall empirical loss (called the tree
loss) as accurately as possible for a given tree size (speed).
We can define the tree loss we wish to minimize as:
R(f_tree) = ∫ I(f_tree(x) ≠ y) dP(x, y) = ∫ max_{i∈B(x)={b_1(x),...,b_{D(x)}(x)}} I(y ∉ ℓ_i) dP(x, y)   (1)
where I is the indicator function and
b_j(x) = argmax_{c:(b_{j−1}(x),c)∈E} f_c(x)
is the index of the winning ("best") node at depth j, b_0(x) = 0, and D(x) is the depth in the tree of the final prediction for x, i.e. the number of loops plus one of the repeat block when running Algorithm 1. The tree loss measures an intermediate loss of 1 for each prediction at each depth j of the label tree where the true label is not in the label set ℓ_{b_j(x)}. The final loss for a single example
is the max over these losses, because if any one of these classifiers makes a mistake then regardless
of the other predictions the wrong class will still be predicted. Hence, any algorithm wishing to
optimize the overall tree loss should train all the nodes jointly with respect to this maximum.
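Since child label sets are subsets of their parents', the max in (1) is 1 exactly when the greedy path first leaves the true label's set; a sketch of the empirical tree loss under the same assumed data structures as above:

import numpy as np

def empirical_tree_loss(X, Y, children, W, label_sets):
    """Empirical version of the tree loss (1): an example counts as an
    error as soon as any node b_j(x) on the greedy path fails to contain
    the true label, so we can stop at the first such depth."""
    errors = 0
    for x, y in zip(X, Y):
        s = 0
        while len(label_sets[s]) > 1:
            s = max(children[s], key=lambda c: np.dot(W[c], x))
            if y not in label_sets[s]:
                errors += 1  # one mistake anywhere on the path suffices
                break
    return errors / len(X)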
We will now describe how we propose to learn the parameters T of our label tree. In the next
subsection we show how to minimize the tree loss for a given fixed tree (N, E and L are fixed, F is
to be learned). In the following subsection, we will describe our algorithm for learning N, E and L.
2.1 Learning with a Fixed Label Tree
Let us suppose we are given a fixed label tree N, E, L chosen in advance. Our goal is simply to
minimize the tree loss (1) over the variables F , given training data {(xi , yi )}i=1,...,m . We follow the
standard approach of minimizing the empirical loss over the data, while regularizing our solution.
We consider two possible algorithms for solving this problem.
Relaxation 1: Independent convex problems The simplest (and poorest) procedure is to consider
the following relaxation to this problem:
R_emp(f_tree) = (1/m) Σ_{i=1}^m max_{j∈B(x_i)} I(y_i ∉ ℓ_j)  ≤  (1/m) Σ_{i=1}^m Σ_{j=1}^n I(sgn(f_j(x_i)) ≠ C_j(y_i))

where C_j(y) = 1 if y ∈ ℓ_j and −1 otherwise. The number of errors counted by the approximation cannot be less than the empirical tree loss R_emp as when, for a particular example, the loss is zero for the approximation it is also zero for R_emp. However, the approximation can be much larger
because of the sum.
One then further approximates this by replacing the indicator function with the hinge loss and choosing linear (or kernel) models of the form f_i(x) = w_i^⊤ Φ(x). We are then left with the following convex problem: minimize

λ Σ_{j=1}^n ||w_j||^2 + (1/m) Σ_{i=1}^m Σ_{j=1}^n ξ_ij   s.t.  ∀i, j:  C_j(y_i) f_j(x_i) ≥ 1 − ξ_ij,  ξ_ij ≥ 0
where we also added a classical 2-norm regularizer controlled by the hyperparameter λ. In fact, this
can be split into n independent convex problems because the hyperplanes wi , i = 1, . . . , n, do not
interact in the objective function. We consider this simple relaxation as a baseline approach.
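Because Relaxation 1 decouples over nodes, each w_j can be fit separately; a subgradient-descent sketch (learning rate, epoch count and the regularization weight are illustrative assumptions, not the paper's settings):

import numpy as np

def train_relaxation1(X, y, label_sets, n_nodes, lam=1e-4, lr=0.1, epochs=5):
    """Relaxation 1: for each node j independently, minimize
    lam * ||w_j||^2 + (1/m) sum_i max(0, 1 - C_j(y_i) w_j . x_i),
    where C_j(y) = +1 if y is in the label set of node j, else -1."""
    m, d = X.shape
    W = np.zeros((n_nodes + 1, d))  # index 0 is the (predictor-free) root
    for j in range(1, n_nodes + 1):
        for _ in range(epochs):
            for i in np.random.permutation(m):
                c = 1.0 if y[i] in label_sets[j] else -1.0
                grad = 2.0 * lam * W[j]
                if c * np.dot(W[j], X[i]) < 1.0:  # hinge loss is active
                    grad -= c * X[i]
                W[j] -= lr * grad
    return W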
Relaxation 2: Tree Loss Optimization (Joint convex problem) We propose a tighter minimization
of the tree loss with the following:
(1/m) Σ_{i=1}^m ξ_i^α

s.t.  f_r(x_i) ≥ f_s(x_i) − ξ_i,  ∀r, s : y_i ∈ ℓ_r ∧ y_i ∉ ℓ_s ∧ (∃p : (p, r) ∈ E ∧ (p, s) ∈ E)   (2)
ξ_i ≥ 0, i = 1, ..., m.   (3)
When α is close to zero, the shared slack variables simply count a single error if any of the predictions at any depth of the tree are incorrect, so this is very close to the true optimization of the tree loss. This is measured by checking, out of all the nodes that share the same parent, if the one containing the true label in its label set is highest ranked. In practice we set α = 1 and arrive at a
convex optimization problem. Nevertheless, unlike relaxation (1) the max is not approximated with
a sum. Again, using the hinge loss and a 2-norm regularizer, we arrive at our final optimization
problem:
λ Σ_{j=1}^n ||w_j||^2 + (1/m) Σ_{i=1}^m ξ_i   (4)
subject to constraints (2) and (3).
2.2 Learning Label Tree Structures
The previous section shows how to optimize the label predictors F while the nodes N , edges E and
label sets L which specify the structure of the tree are fixed in advance. However, we want to be
able to learn specific tree structures dependent on our prediction problem such that we minimize the
Algorithm 2 Learning the Label Tree Structure
Train k One-vs-Rest classifiers f̄_1, ..., f̄_k independently (no tree structure is used).
Compute the confusion matrix C̄_ij = |{(x, y_i) ∈ V : argmax_r f̄_r(x) = j}| on validation set V.
For each internal node l of the tree, from root to leaf, partition its label set ℓ_l between its children's label sets L_l = {ℓ_c : c ∈ N_l}, where N_l = {c ∈ N : (l, c) ∈ E} and ∪_{c∈N_l} ℓ_c = ℓ_l, by maximizing:

R_l(L_l) = Σ_{c∈N_l} Σ_{y_p,y_q∈ℓ_c} A_pq,   where A = (1/2)(C̄ + C̄^⊤) is the symmetrized confusion matrix,
subject to constraints preventing trivial solutions, e.g. putting all labels in one set (see [4]).
This optimization problem (including the appropriate constraints) is a graph cut problem and it
can be solved with standard spectral clustering, i.e. we use A as the affinity matrix for step 1 of
the algorithm given in [21], and then apply all of its other steps (2-6).
Learn the parameters f of the tree by minimizing (4) subject to constraints (2) and (3).
overall tree loss. This section describes an algorithm for learning the parameters N , E and L, i.e.
optimizing equation (1) with respect to these parameters.
The key to the generalization ability of a particular choice of tree structure is the learnability of the
label sets ℓ. If some classes are often confused but are in different label sets the functions f may not
be easily learnable, and the overall tree loss will hence be poor. For example for an image labeling
task, a decision in the tree between two label sets, one containing tiger and jaguar labels versus one
containing frog and toad labels is presumably more learnable than (tiger, frog) vs. (jaguar, toad).
In the following, we consider a learning strategy for disjoint label trees (the methods in the previous
section were for both joint and disjoint trees). We begin by noticing that R_emp can be rewritten as:

R_emp(f_tree) = (1/m) Σ_{i=1}^m max_j [ I(y_i ∈ ℓ_j) Σ_{ȳ∉ℓ_j} C(x_i, ȳ) ]

where C(x_i, ȳ) = I(f_tree(x_i) = ȳ) is the confusion of labeling example x_i (with true label y_i) with label ȳ instead. That is, the tree loss for a given example is 1 if there is a node j in the tree containing y_i, but we predict a different node at the same depth leading to a prediction not in the label set of j.
Intuitively, the confusion of predicting node i instead of j comes about because of the class confusion between the labels y ∈ ℓ_i and the labels ȳ ∈ ℓ_j. Hence, to provide the smallest tree loss we
want to group together labels into the same label set that are likely to be confused at test time.
Unfortunately we do not know the confusion matrix of a particular tree without training it first, but
as a proxy we can use the class confusion matrix of a surrogate classifier with the supposition that
the matrices will be highly correlated. This motivates the proposed Algorithm 2. The main idea is
to recursively partition the labels into label sets between which there is little confusion (measuring
confusion using One-vs-Rest as a surrogate classifier) solving at each step a graph cut problem
where standard spectral clustering is applied [20, 21]. The objective function of spectral clustering
penalizes unbalanced partitions, hence encouraging balanced trees. (To obtain logarithmic speedups the tree has to be balanced; one could also enforce this constraint directly in the k-means step.)
The results in Section 5 show that our learnt trees outperform random structures and in fact match
the accuracy of not using a tree at all, while being orders of magnitude faster.
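A sketch of the partitioning step using scikit-learn's spectral clustering on the symmetrized confusion matrix (sklearn is our shortcut here; the paper applies the algorithm of [21] directly with A as the affinity):

import numpy as np
from sklearn.cluster import SpectralClustering

def split_label_set(confusion, labels, n_children=2):
    """One recursion step of Algorithm 2: partition a node's labels into
    child label sets by spectral clustering, with the symmetrized
    confusion matrix (restricted to these labels) as the affinity."""
    C = confusion[np.ix_(labels, labels)]
    A = 0.5 * (C + C.T)  # symmetrized confusion matrix
    clustering = SpectralClustering(n_clusters=n_children,
                                    affinity='precomputed')
    assignment = clustering.fit_predict(A)
    return [[l for l, a in zip(labels, assignment) if a == c]
            for c in range(n_children)]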
3 Label Embeddings
An orthogonal angle of attack of the solution of large multi-class problems is to employ shared representations for the labelings, which we term label embeddings. Introducing the function φ(y) = (0, ..., 0, 1, 0, ..., 0), which is a k-dimensional vector with a 1 in the y-th position and 0 otherwise, we would like to find a linear embedding E(y) = Vφ(y) where V is a d_e × k matrix, assuming that labels y ∈ {1, ..., k}. Without a tree structure, multi-class classification is then achieved with:

f_embed(x) = argmax_{i=1,...,k} S(Wx, Vφ(i))   (5)
where W is a d_e × d matrix of parameters and S(·, ·) is a measure of similarity, e.g. an inner product or negative Euclidean distance. This method, unlike label trees, is unfortunately still linear with respect to k. However, it does have better behavior with respect to the feature dimension d, with O(d_e(d + k)) testing time, compared to methods such as One-vs-Rest which is O(kd). If the embedding dimension d_e is much smaller than d this gives a significant saving.
There are several ways we could train such models. For example, the method of compressed sensing
[17] has a similar form to (5), but the matrix V is not learnt but chosen randomly, and only W
is learnt. In the next section we will show how we can train such models so that the matrix V
captures the semantic similarity between classes, which can improve generalization performance
over random choices of V in an analogous way to the improvement of label trees over random
trees. Subsequently, we will show how to combine label embeddings with label trees to gain the
advantages of both approaches.
3.1 Learning Label Embeddings (Without a Tree)
We consider two possibilities for learning V and W .
Sequence of Convex Problems Firstly, we consider learning the label embedding by solving a sequence of convex problems using the following method. First, train independent (convex) classifiers f_i(x) for each class 1, ..., k and compute the k × k confusion matrix C̄ over the data (x_i, y_i), i.e. the same as the first two steps of Algorithm 2. Then, find the label embedding vectors V_i that minimize:

Σ_{i,j=1}^k A_ij ||V_i − V_j||^2,   where A = (1/2)(C̄ + C̄^⊤) is the symmetrized confusion matrix,

subject to the constraint V^⊤ D V = I where D_ii = Σ_j A_ij (to prevent trivial solutions), which is the same problem solved by Laplacian Eigenmaps [4]. We then obtain an embedding matrix V where
similar classes i and j should have small distance between their vectors Vi and Vj . All that remains is
to learn the parameters W of our model. To do this, we can then train a convex multi-class classifier
utilizing the label embedding V : minimize
λ||W||_FRO + (1/m) Σ_{i=1}^m ξ_i

where ||·||_FRO is the Frobenius norm, subject to constraints:

||Wx_i − Vφ(y_i)||^2 ≤ ||Wx_i − Vφ(j)||^2 + ξ_i,  ∀j ≠ y_i   (6)
ξ_i ≥ 0, i = 1, ..., m.

Note that the constraint (6) is linear as we can multiply out and subtract ||Wx_i||^2 from both sides. At test time we employ equation (5) with S(z, z′) = −||z − z′||.
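This embedding step has a closed-form solution: the columns of V are the bottom non-trivial generalized eigenvectors of (D − A)v = λDv, as in Laplacian Eigenmaps. A SciPy sketch, assuming every class has nonzero confusion mass so that D is invertible:

import numpy as np
from scipy.linalg import eigh

def embed_labels(confusion, d_e):
    """Minimize sum_ij A_ij ||V_i - V_j||^2 s.t. V^T D V = I, where
    A = (C + C^T)/2 and D_ii = sum_j A_ij: the solution stacks the
    generalized eigenvectors of (D - A) v = lam * D v with smallest
    eigenvalues, discarding the trivial constant eigenvector."""
    A = 0.5 * (confusion + confusion.T)
    D = np.diag(A.sum(axis=1))
    vals, vecs = eigh(D - A, D)   # generalized symmetric eigenproblem
    return vecs[:, 1:d_e + 1]     # row i is the embedding of class i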
Non-Convex Joint Optimization The second method is to learn W and V jointly, which requires
non-convex optimization. In that case we wish to directly minimize:
λ||W||_FRO + (1/m) Σ_{i=1}^m ξ_i

subject to  (Wx_i)^⊤ Vφ(y_i) ≥ (Wx_i)^⊤ Vφ(j) − ξ_i,  ∀j ≠ y_i

and ||V_i|| ≤ 1, ξ_i ≥ 0, i = 1, ..., m. We optimize this using stochastic gradient descent (with randomly initialized weights) [8]. At test time we employ equation (5) with S(z, z′) = z^⊤ z′.
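A rough sketch of such an SGD loop, sampling one competing class per update (we substitute a squared Frobenius penalty for ||W||_FRO to simplify the gradient; step sizes and initialization are illustrative, not the authors' settings):

import numpy as np

def train_joint_embedding(X, y, k, d_e, lam=1e-5, lr=0.01, epochs=10):
    """SGD on the non-convex joint objective: whenever a sampled class j
    violates (W x_i)^T V phi(y_i) >= (W x_i)^T V phi(j) - xi_i, step on
    the hinge and project the touched columns of V onto the unit ball."""
    m, d = X.shape
    rng = np.random.default_rng(0)
    W = 0.01 * rng.standard_normal((d_e, d))
    V = 0.01 * rng.standard_normal((d_e, k))
    for _ in range(epochs):
        for i in rng.permutation(m):
            z = W @ X[i]
            j = int(rng.integers(k))  # sample a competing class
            if j != y[i] and z @ V[:, j] > z @ V[:, y[i]]:
                W -= lr * (np.outer(V[:, j] - V[:, y[i]], X[i]) + 2 * lam * W)
                V[:, y[i]] += lr * z
                V[:, j] -= lr * z
                for c in (y[i], j):  # enforce ||V_c|| <= 1
                    norm = np.linalg.norm(V[:, c])
                    if norm > 1.0:
                        V[:, c] /= norm
    return W, V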
3.2 Learning Label Embedding Trees
In this work, we also propose to combine the use of embeddings and label trees to obtain the advantages of both approaches, which we call the label embedding tree. At test time, the resulting
label embedding tree prediction is given in Algorithm 3. The label embedding tree has potentially
O(d_e(d + log k)) testing speed, depending on the structure of the tree (e.g. being balanced).
Algorithm 3 Label Embedding Tree Prediction Algorithm
Input: test example x, parameters T.
Compute z = Wx.                                    - Cache prediction on example.
Let s = 0.                                         - Start at the root node.
repeat
  Let s = argmax_{c:(s,c)∈E} f_c(x) = argmax_{c:(s,c)∈E} z^⊤ E(c).   - Traverse to the most confident child.
until |ℓ_s| = 1                                    - Until this uniquely defines a single label.
Return ℓ_s.
To learn a label embedding tree we propose the following minimization problem:
λ||W||_FRO + (1/m) Σ_{i=1}^m ξ_i

subject to constraints:

(Wx_i)^⊤ Vφ(r) ≥ (Wx_i)^⊤ Vφ(s) − ξ_i,  ∀r, s : y_i ∈ ℓ_r ∧ y_i ∉ ℓ_s ∧ (∃p : (p, r) ∈ E ∧ (p, s) ∈ E)
||V_i|| ≤ 1, ξ_i ≥ 0, i = 1, ..., m.
This is essentially a combination of the optimization problems defined in the previous two Sections.
Learning the tree structure for these models can still be achieved using Algorithm 2.
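At test time the two ideas compose as follows; in this sketch V is assumed to hold one embedding column per tree node, so that E(c) = V[:, c]:

import numpy as np

def embedding_tree_predict(x, W, V, children, label_sets):
    """Algorithm 3: cache z = W x once (cost O(d * d_e)), then descend
    the tree scoring each child c by z . E(c), an O(d_e) operation per
    child that no longer depends on the feature dimension d."""
    z = W @ x
    s = 0
    while len(label_sets[s]) > 1:
        s = max(children[s], key=lambda c: z @ V[:, c])
    return next(iter(label_sets[s]))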
4 Related Work
Multi-class classification is a well studied problem. Most of the prior approaches build upon binary
classification and have a classification cost which grows at least linearly with the number of classes
k. Common multi-class strategies include one-versus-rest, one-versus-one, label ranking and Decision Directed Acyclic Graph (DDAG). One-versus-rest [25] trains k binary classifiers discriminating
each class against the rest and predicts the class whose classifier is the most confident, which yields
a linear testing cost O(k). One-versus-one [16] trains a binary classifier for each pair of classes
and predicts the class getting the most pairwise preferences, which yields a quadratic testing cost
O(k(k − 1)/2). Label ranking [10] learns to assign a score to each class so that the correct class should get the highest score, which yields a linear testing cost O(k). DDAG [23] considers the same k(k − 1)/2 classifiers as one-versus-one but achieves a linear testing cost O(k). All these methods
are reported to perform similarly in terms of accuracy [25, 23].
Only a few prior techniques achieve sub-linear testing cost. One way is to simply remove labels the
classifier performs poorly on [11]. Error correcting code approaches [13] on the other hand represent
each class with a binary code and learn a binary classifier to predict each bit. This means that
the testing cost could potentially be O(log k). However, in practice, these approaches need larger
redundant codes to reach competitive performance levels [19]. Decision trees, such as C4.5 [24], can
also yield a tree whose depth (and hence test cost) is logarithmic in k. However, testing complexity
also grows linearly with the number of training examples making these methods impractical for
large datasets [22].
Filter tree [7] and Conditional Probability Tree (CPT) [6] are logarithmic approaches that have been
introduced recently with motivations similar to ours, i.e. addressing large scale problems with a
thousand classes or more. Filter tree considers a random binary tree in which each leaf is associated
with a class and each node is associated with a binary classifier. A test example traverses the tree
from the root. At each node, the node classifier decides whether the example is directed to the
right or to the left subtree, each of which are associated to half of the labels of the parent node.
Finally, the label of the reached leaf is predicted. Conditional Probability Tree (CPT) relies on a
similar paradigm but builds the tree during training. CPT considers an online setup in which the
set of classes is discovered during training. Hence, CPT builds the tree greedily: when a new class
is encountered, it is added by splitting an existing leaf. In our case, we consider that the set of
classes are available prior to training and propose to tessellate the class label sets such that the node
classifiers are likely to achieve high generalization performance. This contribution is shown to have
a significant advantage in practice, see Section 5.
Finally, we should mention that a related active area of research involves partitioning the feature
space rather than the label space, e.g. using hierarchical experts [18], hashing [27] and kd-trees [5].
Label embedding is another key aspect of our work when it comes to efficiently handling thousands
of classes. Recently, [26] proposed to exploit class taxonomies via embeddings by learning to project
input vectors and classes into a common space such that the classes close in the taxonomy should
have similar representations while, at the same time, examples should be projected close to their
class representation. In our case, we do not rely on a pre-existing taxonomy: we also would like
to assign similar representations to similar classes but solely relying on the training data. In that
respect, our work is closer to work in information retrieval [3], which proposes to embed documents (not classes) for the task of document ranking. Compressed sensing based approaches [17] do
propose to embed class labels, but rely on a random projection for embedding the vector representing
class memberships, with the added advantages of handling problems for which multiple classes are
active for a given example. However, relying on a random projection does not allow for the class
embedding to capture the relation between classes. In our experiments, this aspect is shown to be a
drawback, see Section 5. Finally, the authors of [2] do propose an embedding approach over class
labels, but it is not clear to us if their approach is scalable to our setting.
5 Experimental Study
We consider three datasets: one publicly available image annotation dataset and two proprietary
datasets based on images and textual descriptions of products.
ImageNet Dataset ImageNet [12] is a new image dataset organized according to WordNet [14]
where quality-controlled human-verified images are tagged with labels. We consider the task of
annotating images from a set of about 16 thousand labels. We split the data into 2.5M images for
training, 0.8M for validation and 0.8M for testing, removing duplicates between training, validation
and test sets by throwing away test examples which had too close a nearest neighbor training or
validation example in feature space. Images in this database were represented by a large but sparse
vector of color and texture features, known as visual terms, described in [15].
Product Datasets We had access to a large proprietary database of about 0.5M product descriptions.
Each product is associated with a textual description, an image, and a label. There are ≈18 thousand
unique labels. We consider two tasks: predicting the label given the textual description, and predicting the label given the image. For the text task we extracted the most frequent set of 10 thousand
words (discounting stop words) to yield a textual dictionary, and represented each document by a
vector of counts of these words in the document, normalized using tf-idf. For the image task, images
were represented by a dense vector of 1024 real values of texture and color features.
Table 1 summarizes the various datasets. Next, we describe the approaches that we compared.
Flat versus Tree Learning Approaches In Table 2 we compare label tree predictor training methods from Section 2.1: the baseline relaxation 1 ("Independent Optimization") versus our proposed relaxation 2 ("Tree Loss Optimization"), both of which learn the classifiers for fixed trees; and we compare our "Learnt Label Tree" structure learning algorithm from Section 2.2 to random structures. In all cases we considered disjoint trees of depth 2 with 200 internal nodes. The results show
that learnt structure performs better than random structure and tree loss optimization is superior
to independent optimization. We also compare to three other baselines: One-vs-Rest large margin
classifiers trained using the passive aggressive algorithm [9], the Filter Tree [7] and the Conditional
Probability Tree (CPT) [6]. For all algorithms, hyperparameters are chosen using the validation set.
The combination of Learnt Label Tree structure and Tree Loss Optimization for the label predictors
is the only method that is comparable to or better than One-vs-Rest while being around 60× faster
to compute at test time.
For ImageNet one could wonder how well using WordNet (a graph of human annotated label similarities) to build a tree would perform instead. We constructed a matrix C for Algorithm 2 where
C_ij = 1 if there is an edge in the WordNet graph, and 0 otherwise, and used that to learn a label tree as before, obtaining 0.99% accuracy using "Independent Optimization". This is better than a
random tree but not as good as using the confusion matrix, implying that the best tree to use is the
one adapted to the supervised task of interest.
Table 1: Summary Statistics of the Three Datasets Used in the Experiments.

Statistics                    | ImageNet         | Product Descriptions   | Product Images
Task                          | image annotation | product categorization | image annotation
Number of Training Documents  | 2518604          | 417484                 | 417484
Number of Test Documents      | 839310           | 60278                  | 60278
Validation Documents          | 837612           | 105572                 | 105572
Number of Labels              | 15952            | 18489                  | 18489
Type of Documents             | images           | texts                  | images
Type of Features              | visual terms     | words                  | dense image features
Number of Features            | 10000            | 10000                  | 1024
Average Feature Sparsity      | 97.5%            | 99.6%                  | 0.0%
Table 2: Flat versus Tree Learning Results. Test set accuracies for various tree and non-tree methods on three datasets. Speed-ups compared to One-vs-Rest are given in brackets.

Classifier                    | Tree Type         | ImageNet      | Product Desc. | Product Images
One-vs-Rest                   | None (flat)       | 2.27% [1×]    | 37.0% [1×]    | 12.6% [1×]
Filter Tree                   | Filter Tree       | 0.59% [1140×] | 14.4% [1285×] | 0.73% [1320×]
Conditional Prob. Tree (CPT)  | CPT               | 0.74% [41×]   | 26.3% [45×]   | 2.20% [115×]
Independent Optimization      | Random Tree       | 0.72% [60×]   | 21.3% [59×]   | 1.35% [61×]
Independent Optimization      | Learnt Label Tree | 1.25% [60×]   | 27.1% [59×]   | 5.95% [61×]
Tree Loss Optimization        | Learnt Label Tree | 2.37% [60×]   | 39.6% [59×]   | 10.6% [61×]
Table 3: Label Embeddings and Label Embedding Tree Results.

                              |             | ImageNet                     | Product Images
Classifier                    | Tree Type   | Accuracy | Speed | Memory    | Accuracy | Speed | Memory
One-vs-Rest                   | None (flat) | 2.27%    | 1×    | 1.2 GB    | 12.6%    | 1×    | 170 MB
Compressed Sensing            | None (flat) | 0.6%     | 3×    | 18 MB     | 2.27%    | 10×   | 20 MB
Seq. Convex Embedding         | None (flat) | 2.23%    | 3×    | 18 MB     | 3.9%     | 10×   | 20 MB
Non-Convex Embedding          | None (flat) | 2.40%    | 3×    | 18 MB     | 14.1%    | 10×   | 20 MB
Label Embedding Tree          | Label Tree  | 2.54%    | 85×   | 18 MB     | 13.3%    | 142×  | 20 MB
Embedding and Embedding Tree Approaches In Table 3 we compare several label embedding
methods: (i) the convex and non-convex methods from Section 3.1; (ii) compressed sensing; and
(iii) the label embedding tree from Section 3.2. In all cases we fixed the embedding dimension
d_e = 100. The results show that the random embeddings given by compressed sensing are inferior
to learnt embeddings and Non-Convex Embedding is superior to Sequential Convex Embedding,
presumably as the overall loss which is dependent on both W and V is jointly optimized. The latter
gives results as good or superior to One-vs-Rest with modest computational gain (3× or 10× speed-up). Note, we do not detail results on the product descriptions task because no speed-up is gained
there from embedding as the sparsity is already so high, however the methods still gave good test
accuracy (e.g. Non-Convex Embedding yields 38.2%, which should be compared to the methods in
Table 2). Finally, combining embedding and label tree learning using the "Label Embedding Tree" of Section 3.2 yields our best method on ImageNet and Product Images with a speed-up of 85× or 142× respectively, with accuracy as good or better than any other method tested. Moreover, memory
usage of this method (and other embedding methods) is significantly less than One-vs-Rest.
6 Conclusion
We have introduced an approach for fast multi-class classification by learning label embedding trees
by (approximately) optimizing the overall tree loss. Our approach obtained orders of magnitude
speedup compared to One-vs-Rest while yielding as good or better accuracy, and outperformed
other tree-based or embedding approaches. Our method makes real-time inference feasible for very
large multi-class tasks such as web advertising, document categorization and image annotation.
Acknowledgements
We thank Ameesh Makadia for very useful discussions.
References
[1] E. Allwein, R. Schapire, and Y. Singer. Reducing multiclass to binary: a unifying approach for margin classifiers. Journal of Machine Learning Research (JMLR), 1:113–141, 2001.
[2] Y. Amit, M. Fink, N. Srebro, and S. Ullman. Uncovering shared structures in multiclass classification. In Proceedings of the 24th International Conference on Machine Learning, page 24. ACM, 2007.
[3] B. Bai, J. Weston, D. Grangier, R. Collobert, C. Cortes, and M. Mohri. Half transductive ranking. In Artificial Intelligence and Statistics (AISTATS), 2010.
[4] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. Advances in Neural Information Processing Systems, 1:585–592, 2002.
[5] J. L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):517, 1975.
[6] A. Beygelzimer, J. Langford, Y. Lifshits, G. Sorkin, and A. Strehl. Conditional probability tree estimation analysis and algorithm. In Conference on Uncertainty in Artificial Intelligence (UAI), 2009.
[7] A. Beygelzimer, J. Langford, and P. Ravikumar. Error-correcting tournaments. In International Conference on Algorithmic Learning Theory (ALT), pages 247–262, 2009.
[8] Léon Bottou. Stochastic learning. In Olivier Bousquet and Ulrike von Luxburg, editors, Advanced Lectures on Machine Learning, Lecture Notes in Artificial Intelligence, LNAI 3176, pages 146–168. Springer Verlag, Berlin, 2004.
[9] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551–585, 2006.
[10] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research (JMLR), 2:265–292, 2002.
[11] O. Dekel and O. Shamir. Multiclass-multilabel learning when the label set grows with the number of examples. In Artificial Intelligence and Statistics (AISTATS), 2010.
[12] J. Deng, W. Dong, R. Socher, Li-Jia Li, K. Li, and Fei-Fei Li. ImageNet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 248–255, 2009.
[13] T. Dietterich and G. Bakiri. Solving multiclass learning problems via error-correcting output codes. Journal of Artificial Intelligence Research (JAIR), 2:263–286, 1995.
[14] C. Fellbaum, editor. WordNet: An Electronic Lexical Database. MIT Press, 1998.
[15] David Grangier and Samy Bengio. A discriminative kernel-based model to rank images from text queries. Transactions on Pattern Analysis and Machine Intelligence, 30(8):1371–1384, 2008.
[16] T. Hastie and R. Tibshirani. Classification by pairwise coupling. The Annals of Statistics, 26(2):451–471, 2001.
[17] D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In Neural Information Processing Systems (NIPS), 2009.
[18] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181–214, 1994.
[19] J. Langford and A. Beygelzimer. Sensitive error correcting output codes. In Conference on Learning Theory (COLT), pages 158–172, 2005.
[20] U. Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):416, 2007.
[21] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
[22] T. Oates and D. Jensen. The effects of training set size on decision tree complexity. In International Conference on Machine Learning (ICML), pages 254–262, 1997.
[23] J. Platt, N. Cristianini, and J. Shawe-Taylor. Large margin DAGs for multiclass classification. In NIPS, pages 547–553, 2000.
[24] J. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.
[25] R. Rifkin and A. Klautau. In defense of one-vs-all classification. Journal of Machine Learning Research (JMLR), 5:101–141, 2004.
[26] K. Weinberger and O. Chapelle. Large margin taxonomy embedding for document categorization. In NIPS, pages 1737–1744, 2009.
[27] P. N. Yianilos. Data structures and algorithms for nearest neighbor search in general metric spaces. In Proceedings of the Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, page 321. Society for Industrial and Applied Mathematics, 1993.
3,344 | 4,028 | Efficient Minimization of
Decomposable Submodular Functions
Andreas Krause
California Institute of Technology
Pasadena, CA 91125
[email protected]
Peter Stobbe
California Institute of Technology
Pasadena, CA 91125
[email protected]
Abstract
Many combinatorial problems arising in machine learning can be reduced to the problem
of minimizing a submodular function. Submodular functions are a natural discrete analog
of convex functions, and can be minimized in strongly polynomial time. Unfortunately,
state-of-the-art algorithms for general submodular minimization are intractable for larger
problems. In this paper, we introduce a novel subclass of submodular minimization
problems that we call decomposable. Decomposable submodular functions are those
that can be represented as sums of concave functions applied to modular functions. We
develop an algorithm, SLG, that can efficiently minimize decomposable submodular
functions with tens of thousands of variables. Our algorithm exploits recent results in
smoothed convex minimization. We apply SLG to synthetic benchmarks and a joint
classification-and-segmentation task, and show that it outperforms the state-of-the-art
general purpose submodular minimization algorithms by several orders of magnitude.
1 Introduction
Convex optimization has become a key tool in many machine learning algorithms. Many seemingly
multimodal optimization problems such as nonlinear classification, clustering and dimensionality
reduction can be cast as convex programs. When minimizing a convex loss function, we can rest
assured to efficiently find an optimal solution, even for large problems. Convexity is a structural property of continuous optimization problems. However, many machine learning problems, such as structure learning, variable selection, and MAP inference in discrete graphical models,
require solving discrete, combinatorial optimization problems.
In recent years, another fundamental problem structure, which has similar beneficial properties,
has emerged as very useful in many combinatorial optimization problems arising in machine learning: Submodularity is an intuitive diminishing returns property, stating that adding an element to a
smaller set helps more than adding it to a larger set. Similarly to convexity, submodularity allows
one to efficiently find provably (near-)optimal solutions. In particular, the minimum of a submodular
function can be found in strongly polynomial time [11]. Unfortunately, while polynomial-time solvable, exact techniques for submodular minimization require a number of function evaluations on the
order of n^5 [12], where n is the number of variables in the problem (e.g., number of random variables
in the MAP inference task), rendering the algorithms impractical for many real-world problems.
Fortunately, several submodular minimization problems arising in machine learning have structure
that allows solving them more efficiently. Examples include symmetric functions that can be
solved in O(n^3) evaluations using Queyranne's algorithm [19], and functions that decompose into
attractive, pairwise potentials, that can be solved using graph cutting techniques [7]. In this paper,
we introduce a novel class of submodular minimization problems that can be solved efficiently. In
particular, we develop an algorithm SLG, that can minimize a class of submodular functions that
we call decomposable: These are functions that can be decomposed into sums of concave functions
applied to modular (additive) functions. Our algorithm is based on recent techniques of smoothed
convex minimization [18] applied to the Lovász extension. We demonstrate the usefulness of
our algorithm on a joint classification-and-segmentation task involving tens of thousands of
variables, and show that it outperforms state-of-the-art algorithms for general submodular function
minimization by several orders of magnitude.
2 Background on Submodular Function Minimization
We are interested in minimizing set functions that map subsets of some base set E to real numbers.
I.e., given $f : 2^E \to \mathbb{R}$, we wish to solve for $A^* \in \arg\min_{A \in 2^E} f(A)$. For simplicity of notation, we
use the base set $E = \{1, \dots, n\}$, but in an application the base set may consist of nodes of a graph,
pixels of an image, etc. Without loss of generality, we assume $f(\emptyset) = 0$. If the function $f$ has no
structure, then there is no way to solve the problem other than checking all $2^n$ subsets. In this paper,
we consider functions that satisfy a key property that arises in many applications: submodularity
(c.f., [16]). A set function $f$ is called submodular iff, for all $A, B \in 2^E$, we have
$$f(A \cup B) + f(A \cap B) \le f(A) + f(B). \qquad (1)$$
Submodular functions can alternatively, and perhaps more intuitively, be characterized in terms of
their discrete derivatives. First, we define $\Delta_k f(A) = f(A \cup \{k\}) - f(A)$ to be the discrete derivative
of $f$ with respect to $k \in E$ at $A$; intuitively this is the change in $f$'s value from adding the element $k$
to the set $A$. Then, $f$ is submodular iff:
$$\Delta_k f(A) \ge \Delta_k f(B), \quad \text{for all } A \subseteq B \subseteq E \text{ and } k \in E \setminus B.$$
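As a quick sanity check of this definition, the sketch below (our own illustration, not from the paper) verifies inequality (1) by brute force for $f(A) = \sqrt{|A|}$, a concave function of the cardinality of the kind discussed next, on a small ground set:

```python
import itertools
import math

def f(A):
    # Concave function of the cardinality: f(A) = sqrt(|A|).
    return math.sqrt(len(A))

E = range(5)
subsets = [frozenset(s) for r in range(6) for s in itertools.combinations(E, r)]

# Check f(A | B) + f(A & B) <= f(A) + f(B) for every pair of subsets.
ok = all(f(A | B) + f(A & B) <= f(A) + f(B) + 1e-12
         for A in subsets for B in subsets)
print("submodular:", ok)  # expected: True
```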
Note the analogy to concave functions; the discrete derivative is smaller for larger sets, in the same
way that $\phi(x + h) - \phi(x) \ge \phi(y + h) - \phi(y)$ for all $x \le y$ and $h \ge 0$ if and only if $\phi$ is a concave function
on $\mathbb{R}$. Thus a simple example of a submodular function is $f(A) = \phi(|A|)$ where $\phi$ is any concave
function. Yet despite this connection to concavity, it is in fact "easier" to minimize a submodular
function than to maximize it¹, just as it is easier to minimize a convex function. One explanation for
this is that submodular minimization can be reformulated as a convex minimization problem.
To see this, consider taking a set function minimization problem, and reformulating it as a minimization problem over the unit cube $[0,1]^n \subseteq \mathbb{R}^n$. Define $e_A \in \mathbb{R}^n$ to be the indicator vector of the
set $A$, i.e.,
$$e_A[k] = \begin{cases} 0 & \text{if } k \notin A \\ 1 & \text{if } k \in A \end{cases}$$
We use the notation $x[k]$ for the $k$th element of the vector $x$. Also we drop brackets and commas
in subscripts, so $e_{kl} = e_{\{k,l\}}$ and $e_k = e_{\{k\}}$ as with the standard unit vectors. A continuous
extension of a set function $f$ is a function $\tilde{f}$ on the unit cube, $\tilde{f} : [0,1]^n \to \mathbb{R}$, with the property
that $f(A) = \tilde{f}(e_A)$. In order to be useful, however, one needs the minima of the set function to be
related to minima of the extension:
$$A^* \in \arg\min_{A \in 2^E} f(A) \;\Rightarrow\; e_{A^*} \in \arg\min_{x \in [0,1]^n} \tilde{f}(x). \qquad (2)$$
A key result due to Lovász [16] states that each submodular function $f$ has an extension $\tilde{f}$ that not
only satisfies the above property, but is also convex and efficient to evaluate. We can define the
Lovász extension in terms of the submodular polyhedron $P_f$:
$$P_f = \{v \in \mathbb{R}^n : v \cdot e_A \le f(A) \text{ for all } A \in 2^E\}, \qquad \tilde{f}(x) = \sup_{v \in P_f} v \cdot x.$$
The submodular polyhedron $P_f$ is defined by exponentially many inequalities, and evaluating $\tilde{f}$
requires solving a linear program over this polyhedron. Perhaps surprisingly, as shown by Lovász, $\tilde{f}$
can be very efficiently computed as follows. For a fixed $x$, let $\sigma : E \to E$ be a permutation such that
$x[\sigma(1)] \ge \dots \ge x[\sigma(n)]$, and then define the set $S_k = \{\sigma(1), \dots, \sigma(k)\}$. Then we have a formula
for $\tilde{f}$ and a subgradient:
$$\tilde{f}(x) = \sum_{k=1}^{n} x[\sigma(k)]\,\big(f(S_k) - f(S_{k-1})\big), \qquad \partial\tilde{f}(x) \ni \sum_{k=1}^{n} e_{\sigma(k)}\,\big(f(S_k) - f(S_{k-1})\big).$$
Note that if two components of $x$ are equal, the above formula for $\tilde{f}$ is independent of the permutation chosen, but the subgradient is not unique.

¹ With the additional assumption that $f$ is nondecreasing, maximizing a submodular function subject to a cardinality constraint $|A| \le M$ is "easy"; a greedy algorithm is known to give a near-optimal answer [17].
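To make the sorting formula concrete, here is a minimal sketch (ours, not the authors' code) that evaluates the Lovász extension and one subgradient of an arbitrary set function given as a Python callable; it follows the formula above directly and costs one sort plus $n + 1$ function evaluations:

```python
import numpy as np

def lovasz_extension(f, x):
    """Return (f_tilde(x), subgradient) for a set function f on {0, ..., n-1}.

    f takes a frozenset; x is a 1-D numpy array. Implements
    f_tilde(x) = sum_k x[s(k)] (f(S_k) - f(S_{k-1})) with x[s(1)] >= ... >= x[s(n)].
    """
    n = len(x)
    order = np.argsort(-x)          # permutation sigma, descending components
    g = np.zeros(n)
    value = 0.0
    S = set()
    f_prev = f(frozenset())         # f(S_0) = f(empty set) = 0 by convention
    for k in order:
        S.add(int(k))
        f_curr = f(frozenset(S))
        g[k] = f_curr - f_prev      # marginal gain defines the subgradient
        value += x[k] * g[k]
        f_prev = f_curr
    return value, g

# Example: f(A) = sqrt(|A|) on n = 4 elements; at x = e_A this returns f(A).
val, g = lovasz_extension(lambda A: len(A) ** 0.5, np.array([0.9, 0.1, 0.5, 0.3]))
print(val, g)
```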
Equation (2) was used to show that submodular minimization can be achieved in polynomial time
[16]. However, algorithms which directly minimize the Lovász extension are regarded as impractical. Despite being convex, the Lovász extension is non-smooth, and hence a simple subgradient
descent algorithm would need $O(1/\varepsilon^2)$ steps to achieve $O(\varepsilon)$ accuracy.
Recently, Nesterov showed that if knowledge about the structure of a particular non-smooth convex
function is available, it can be exploited to achieve a running time of $O(1/\varepsilon)$ [18]. One way this is
done is to construct a smooth approximation of the non-smooth function, and then use an accelerated
gradient descent algorithm which is highly effective for smooth functions. Connections of this work
with submodularity and combinatorial optimization are also explored in [4] and [2]. In fact, in
[2], Bach shows that computing the smoothed Lovász gradient of a general submodular function is
equivalent to solving a submodular minimization problem. In this paper, we do not treat general
submodular functions, but rather a large class of submodular minimization functions that we call
decomposable. (To apply the smoothing technique of [18], special structural knowledge about the
convex function is required, so it is natural that we would need special structural knowledge about
the submodular function to leverage those results.) We further show that we can exploit the discrete
structure of submodular minimization in a way that allows terminating the algorithm early with a
certificate of optimality, which leads to drastic performance improvements.
3 The Decomposable Submodular Minimization Problem
In this paper, we consider the problem of minimizing functions of the following form:
$$f(A) = c \cdot e_A + \sum_j \phi_j(w_j \cdot e_A), \qquad (3)$$
where $c, w_j \in \mathbb{R}^n$ with $0 \le w_j \le 1$, and $\phi_j : [0, w_j \cdot \mathbf{1}] \to \mathbb{R}$ are arbitrary concave functions. It can
be shown that functions of this form are submodular. We call this class of functions decomposable
submodular functions, as they decompose into a sum of concave functions applied to nonnegative
modular functions². Below, we give examples of decomposable submodular functions arising in
applications.
We first focus on the special case where all the concave functions are of the form $\phi_j(\cdot) =
d_j \min(\cdot,\, y_j)$ for some $y_j, d_j > 0$. Since these potentials are of key importance, we define the
submodular functions $\Psi_{w,y}(A) = \min(y,\, w \cdot e_A)$ and call them threshold potentials. In Section 5,
we will show how to generalize our approach to arbitrary decomposable submodular functions.
Examples. The simplest example is a 2-potential, which has the form $\phi(|A \cap \{k,l\}|)$, where
$2\phi(1) \ge \phi(0) + \phi(2)$. It can be expressed as a sum of a modular function and a threshold potential:
$$\phi(|A \cap \{k,l\}|) = \phi(0) + \big(\phi(2) - \phi(1)\big)\, e_{kl} \cdot e_A + \big(2\phi(1) - \phi(0) - \phi(2)\big)\, \Psi_{e_{kl},1}(A).$$
Why are such potential functions interesting? They arise, for example, when finding the Maximum
a Posteriori configuration of a pairwise Markov Random Field model in image classification
schemes such as in [20]. On a high level, such an algorithm computes a value $c[k]$ that corresponds
to the log-likelihood of pixel $k$ being of one class vs. another, and for each pair of adjacent pixels,
a value $d_{kl}$ related to the log-likelihood that pixels $k$ and $l$ are of the same class. Then the algorithm
classifies pixels by minimizing a sum of 2-potentials: $f(A) = c \cdot e_A + \sum_{k,l} d_{kl}\,\big(1 - |1 - e_{kl} \cdot e_A|\big)$.
If the value $d_{kl}$ is large, this encourages the pixels $k$ and $l$ to be classified similarly.
More generally, consider a higher order potential function: a concave function of the number of
elements in some activation set $S$, $\phi(|A \cap S|)$ where $\phi$ is concave. It can be shown that this can
be written as a sum of a modular function and a positive linear combination of $|S| - 1$ threshold
potentials. Recent work [14] has shown that classification performance can be improved by adding
terms corresponding to such higher order potentials $\phi_j(|R_j \cap A|)$ to the objective function, where the
functions $\phi_j$ are piecewise linear concave functions, and the regions $R_j$ are of various sizes, generated
from a segmentation algorithm. Minimization of these particular potential functions can then be
reformulated as a graph cut problem [13], but this is less general than our approach.
Another canonical example of a submodular function is a set cover function. Such a function can
be reformulated as a combination of concave cardinality functions (details omitted here). So all
functions which are weighted combinations of set cover functions can be expressed as threshold
potentials. However, threshold potentials with nonuniform weights are strictly more general than
concave cardinality potentials. That is, there exist $w$ and $y$ such that $\Psi_{w,y}(A)$ cannot be expressed
as $\sum_j \phi_j(|R_j \cap A|)$ for any collection of concave $\phi_j$ and sets $R_j$.

² A function is called modular if (1) holds with equality. It can be written as $A \mapsto w \cdot e_A$ for some $w \in \mathbb{R}^n$.
Another example of decomposable functions arises in multiclass queueing systems [10]. These are
of the form $f(A) = c \cdot e_A + (u \cdot e_A)\,\phi(v \cdot e_A)$, where $u, v$ are nonnegative weight vectors and $\phi$ is
a nonincreasing concave function. With the proper choice of $\phi_j$ and $w_j$ (again, details are omitted
here), this can in fact be reformulated as a sum of the type in Eq. 3 with $n$ terms.
In our own experiments, shown in Section 6, we use an implementation of TextonBoost [20] and
augment it with quadratic higher order potentials. That is, we use TextonBoost to generate per-pixel
scores $c$, and then minimize $f(A) = c \cdot e_A + \sum_j |A \cap R_j|\,|R_j \setminus A|$, where the regions $R_j$ are regions
of pixels that we expect to be of the same class (e.g., by running a cheap region-growing heuristic).
The potential function $|A \cap R_j|\,|R_j \setminus A|$ is smallest when $A$ contains all of $R_j$ or none of it. It gives the
largest penalty when exactly half of $R_j$ is contained in $A$. This encourages the classification scheme
to classify most of the pixels in a region $R_j$ the same way. We generate regions with a basic region-growing algorithm with random seeds. See Figure 1(a) for an illustration of examples of regions
that we use. In our experience, this simple idea of using higher-order potentials can dramatically
increase the quality of the classification over one using only 2-potentials, as can be seen in Figure 2.

4 The SLG Algorithm for Threshold Potentials
We now present our algorithm for efficient minimization of a decomposable submodular function f
based on smoothed convex minimization. We first show how we can efficiently smooth the Lovász
extension of f . We then apply accelerated gradient descent to the gradient of the smoothed function.
Lastly, we demonstrate how we can often obtain a certificate of optimality that allows us to stop
early, drastically speeding up the algorithm in practice.
4.1 The Smoothed Extension of a Threshold Potential
The key challenge in our algorithm is to efficiently smooth the Lovász extension of $f$, so that we
can resort to algorithms for accelerated convex minimization. We now show how we can efficiently
smooth the threshold potentials $\Psi_{w,y}(A) = \min(y,\, w \cdot e_A)$ of Section 3, which are simple enough
to allow efficient smoothing, but rich enough when combined to express a large class of submodular
functions. For $x \ge 0$, the Lovász extension of $\Psi_{w,y}$ is
$$\tilde{\Psi}_{w,y}(x) = \sup_v \; v \cdot x \quad \text{s.t.} \quad v \le w, \;\; v \cdot e_A \le y \;\text{ for all } A \in 2^E.$$
Note that when $x \ge 0$, the arg max of the above linear program always contains a point $v$ which
satisfies $v \cdot \mathbf{1} = y$ and $v \ge 0$. So we can restrict the domain of the dual variable $v$ to those points
which satisfy these two conditions, without changing the value of $\tilde{\Psi}_{w,y}(x)$:
$$\tilde{\Psi}_{w,y}(x) = \max_{v \in D(w,y)} v \cdot x, \quad \text{where } D(w,y) = \{v : 0 \le v \le w,\; v \cdot \mathbf{1} = y\}.$$
Restricting the domain of $v$ allows us to define a smoothed Lovász extension (with parameter $\mu$)
that is easily computed:
$$\tilde{\Psi}^{\mu}_{w,y}(x) = \max_{v \in D(w,y)} v \cdot x - \frac{\mu}{2}\|v\|^2.$$
To compute the value of this function we need to solve for the optimal vector $v^*$, which is also the
gradient of this function, as we have the following characterization:
$$\nabla\tilde{\Psi}^{\mu}_{w,y}(x) = v^* = \arg\max_{v \in D(w,y)} v \cdot x - \frac{\mu}{2}\|v\|^2. \qquad (4)$$
To derive an expression for $v^*$, we begin by forming the Lagrangian and deriving the dual problem:
$$\tilde{\Psi}^{\mu}_{w,y}(x) = \min_{t \in \mathbb{R},\, \lambda_1, \lambda_2 \ge 0}\; \max_{v \in \mathbb{R}^n}\; v \cdot x - \frac{\mu}{2}\|v\|^2 + \lambda_1 \cdot v + \lambda_2 \cdot (w - v) + t(y - v \cdot \mathbf{1})$$
$$= \min_{t \in \mathbb{R},\, \lambda_1, \lambda_2 \ge 0}\; \frac{1}{2\mu}\,\|x - t\mathbf{1} + \lambda_1 - \lambda_2\|^2 + \lambda_2 \cdot w + ty.$$
If we fix $t$, we can solve for the optimal dual variables $\lambda_1$ and $\lambda_2$ componentwise. By strong duality,
we know the optimal primal variable is given by $v^* = \frac{1}{\mu}(x - t^*\mathbf{1} + \lambda_1^* - \lambda_2^*)$. So we have:
$$\lambda_1^* = \max(t^*\mathbf{1} - x,\, 0), \quad \lambda_2^* = \max(x - t^*\mathbf{1} - \mu w,\, 0) \;\Rightarrow\; v^* = \min\big(\max((x - t^*\mathbf{1})/\mu,\, 0),\, w\big).$$
This expresses $v^*$ as a function of the unknown optimal dual variable $t^*$. For the simple case of
2-potentials, we can solve for $t^*$ explicitly and get a closed form expression:
$$\nabla\tilde{\Psi}^{\mu}_{e_{kl},1}(x) = \begin{cases} e_k & \text{if } x[k] \ge x[l] + \mu \\ e_l & \text{if } x[l] \ge x[k] + \mu \\ \frac{1}{2}\big(e_{kl} + \frac{1}{\mu}(x[k] - x[l])(e_k - e_l)\big) & \text{if } |x[k] - x[l]| < \mu \end{cases}$$
However, in general to find $t^*$ we note that $v^*$ must satisfy $v^* \cdot \mathbf{1} = y$. So define $\xi_{x,w}(t)$ as:
$$\xi_{x,w}(t) = \min\big(\max((x - t\mathbf{1})/\mu,\, 0),\, w\big) \cdot \mathbf{1}.$$
Then we note this function is a monotonic continuous piecewise linear function of $t$, so we can use a
simple root-finding algorithm to solve $\xi_{x,w}(t) = y$. This root finding procedure will take no more
than $O(n)$ steps in the worst case.
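A minimal sketch of this computation (ours; the paper's own implementation is MATLAB/mex): compute $v^* = \nabla\tilde{\Psi}^{\mu}_{w,y}(x)$ by bisecting on the dual variable $t$, using the monotonicity of $\xi_{x,w}$. We use bisection for simplicity; the exact piecewise-linear root finder the text describes runs in $O(n)$.

```python
import numpy as np

def smoothed_threshold_grad(x, w, y, mu, tol=1e-10):
    """v* = min(max((x - t*)/mu, 0), w), where t* solves
    xi(t) := min(max((x - t)/mu, 0), w) . 1 = y. Assumes 0 <= y <= w . 1."""
    xi = lambda t: np.minimum(np.maximum((x - t) / mu, 0.0), w).sum()
    lo = (x - mu * w).min()   # xi(lo) = w . 1 >= y
    hi = x.max()              # xi(hi) = 0 <= y
    while hi - lo > tol:      # xi is continuous and nonincreasing: bisection applies
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if xi(mid) > y else (lo, mid)
    t = 0.5 * (lo + hi)
    return np.minimum(np.maximum((x - t) / mu, 0.0), w)

# Quick check against the closed form for a 2-potential (w = e_kl, y = 1):
x = np.array([0.7, 0.2, 0.4])
v = smoothed_threshold_grad(x, np.array([1.0, 0.0, 1.0]), 1.0, mu=0.5)
print(v)  # expected roughly [0.8, 0.0, 0.2], matching the middle case above
```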
4.2 The SLG Algorithm for Minimizing Sums of Threshold Potentials
Stepping beyond a single threshold potential, we now assume that the submodular function to be
minimized can be written as a nonnegative linear combination of threshold potentials and a modular
function, i.e.,
$$f(A) = c \cdot e_A + \sum_j d_j\, \Psi_{w_j, y_j}(A).$$
Thus, we have the smoothed Lovász extension, and its gradient:
$$\tilde{f}^{\mu}(x) = c \cdot x + \sum_j d_j\, \tilde{\Psi}^{\mu}_{w_j,y_j}(x) \quad \text{and} \quad \nabla\tilde{f}^{\mu}(x) = c + \sum_j d_j\, \nabla\tilde{\Psi}^{\mu}_{w_j,y_j}(x).$$
We now wish to use the accelerated gradient descent algorithm of [18] to minimize this function.
This algorithm requires that the smoothed objective has a Lipschitz continuous gradient. That is, for
some constant $L$, it must hold that $\|\nabla\tilde{f}^{\mu}(x_1) - \nabla\tilde{f}^{\mu}(x_2)\| \le L\|x_1 - x_2\|$ for all $x_1, x_2 \in \mathbb{R}^n$.
Fortunately, by construction, the smoothed threshold extensions $\tilde{\Psi}^{\mu}_{w_j,y_j}(x)$ all have $1/\mu$-Lipschitz gradient, a direct consequence of the characterization in Equation 4. Hence
we have a loose upper bound for the Lipschitz constant of $\tilde{f}^{\mu}$: $L = D/\mu$, where $D = \sum_j d_j$. Furthermore, the smoothed threshold extensions approximate the threshold extensions uniformly:
$|\tilde{\Psi}^{\mu}_{w_j,y_j}(x) - \tilde{\Psi}_{w_j,y_j}(x)| \le \frac{\mu}{2}$ for all $x$, so $|\tilde{f}^{\mu}(x) - \tilde{f}(x)| \le \frac{\mu D}{2}$.
One way to use the smoothed gradient is to specify an accuracy $\varepsilon$, then minimize $\tilde{f}^{\mu}$ for sufficiently
small $\mu$ to guarantee that the solution will also be an approximate minimizer of $\tilde{f}$. Then we simply
apply the accelerated gradient descent algorithm of [18]. See also [3] for a description. Let $P_C(x) =
\arg\min_{x' \in C} \|x - x'\|$ be the projection of $x$ onto the convex set $C$. In particular, $P_{[0,1]^n}(x) =
\min(\max(x, 0), 1)$. Algorithm 1 formalizes our Smoothed Lovász Gradient (SLG) algorithm:
Algorithm 1: SLG: Smoothed Lovász Gradient
Input: Accuracy $\varepsilon$; decomposable function $f$.
begin
    $\mu = \varepsilon/(2D)$, $L = D/\mu$;
    $x_{-1} = z_{-1} = \frac{1}{2}\mathbf{1}$;
    for $t = 0, 1, 2, \dots$ do
        $g_t = \nabla\tilde{f}^{\mu}(x_{t-1})/L$;
        $z_t = P_{[0,1]^n}\big(z_{-1} - \sum_{s=0}^{t} \frac{s+1}{2}\, g_s\big)$;
        $y_t = P_{[0,1]^n}(x_{t-1} - g_t)$;
        if $\mathrm{gap}_t \le \varepsilon/2$ then stop;
        $x_t = \big(2 z_t + (t+1)\, y_t\big)/(t+3)$;
Output: $\varepsilon$-optimal $x_\varepsilon = y_t$ for $\min_{x \in [0,1]^n} \tilde{f}(x)$
The optimality gap of a smooth convex function at the iterate $y_t$ can be computed from its gradient:
$$\mathrm{gap}_t = \max_{x \in [0,1]^n} (y_t - x) \cdot \nabla\tilde{f}^{\mu}(y_t) = y_t \cdot \nabla\tilde{f}^{\mu}(y_t) + \max\big(-\nabla\tilde{f}^{\mu}(y_t),\, 0\big) \cdot \mathbf{1}.$$
In summary, as a consequence of the results of [18], we have the following guarantee about SLG:

Theorem 1. SLG is guaranteed to provide an $\varepsilon$-optimal solution after running for $O(D/\varepsilon)$ iterations.
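Putting the pieces together, here is a compact sketch (ours) of the loop in Algorithm 1; `grad_f_mu` is an assumed helper implementing the smoothed gradient above, and the variable names are illustrative:

```python
import numpy as np

def slg(grad_f_mu, n, D, eps, max_iter=100000):
    """Accelerated minimization of the smoothed Lovasz extension on [0,1]^n.
    grad_f_mu(x, mu) -> gradient of f~^mu at x; D = sum_j d_j."""
    mu = eps / (2.0 * D)
    L = D / mu
    proj = lambda v: np.clip(v, 0.0, 1.0)
    x = z0 = np.full(n, 0.5)
    acc = np.zeros(n)                       # running sum of weighted gradients
    y = x
    for t in range(max_iter):
        g = grad_f_mu(x, mu) / L
        acc += 0.5 * (t + 1) * g
        z = proj(z0 - acc)
        y = proj(x - g)
        gy = grad_f_mu(y, mu)
        gap = y @ gy + np.maximum(-gy, 0.0).sum()   # optimality gap at y
        if gap <= eps / 2.0:
            break
        x = (2.0 * z + (t + 1) * y) / (t + 3.0)
    return y
```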
SLG is only guaranteed to provide an $\varepsilon$-optimal solution to the continuous optimization problem.
Fortunately, once we have an $\varepsilon$-optimal point for the Lovász extension, we can efficiently round it to
a set which is $\varepsilon$-optimal for the original submodular function using Alg. 2 (see [9] for more details).
Algorithm 2: Set generation by rounding the continuous solution
Input: Vector $x \in [0,1]^n$; submodular function $f$.
begin
    By sorting, find any permutation $\sigma$ satisfying: $x[\sigma(1)] \ge \dots \ge x[\sigma(n)]$;
    $S_k = \{\sigma(1), \dots, \sigma(k)\}$;
    $\mathcal{K} = \arg\min_{k \in \{0,1,\dots,n\}} f(S_k)$;  $\mathcal{C} = \{S_k : k \in \mathcal{K}\}$;
Output: Collection of sets $\mathcal{C}$, such that $f(A) \le \tilde{f}(x)$ for all $A \in \mathcal{C}$

4.3
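A sketch of this rounding step (ours), reusing the sorted-prefix structure from the Lovász extension:

```python
import numpy as np

def round_to_sets(f, x):
    """Return the prefix sets S_k (including the empty set) minimizing f,
    given a fractional solution x in [0,1]^n."""
    order = np.argsort(-x)
    prefixes = [frozenset()]
    for k in order:
        prefixes.append(prefixes[-1] | {int(k)})
    values = [f(S) for S in prefixes]
    best = min(values)
    return [S for S, v in zip(prefixes, values) if v == best]
```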
Early Stopping based on Discrete Certificates of Optimality
In general, if the minimum of $\tilde{f}$ is not unique, the output of SLG may be in the interior of the unit
cube. However, if $f$ admits a unique minimum $A^*$, then the iterates will tend toward the corner
$e_{A^*}$. One natural question one may ask is: if a trend like this is observed, is it necessary to wait for the
iterates to converge all the way to the optimal solution of the continuous problem $\min_{x \in [0,1]^n} \tilde{f}(x)$,
when one is actually interested in solving the discrete problem $\min_{A \in 2^E} f(A)$? Below, we show that
it is possible to use information about the current iterates to check optimality of a set and terminate
the algorithm before the continuous problem has converged.
To prove optimality of a candidate set $A$, we can use a subgradient of $\tilde{f}$ at $e_A$. If $g \in \partial\tilde{f}(e_A)$, then
we can compute an optimality gap:
$$f(A) - \min_{x \in [0,1]^n} \tilde{f}(x) \;\le\; \sum_{k} \max\big(0,\; g[k]\,(e_A[k] - e_{E \setminus A}[k])\big). \qquad (5)$$
In particular, if $g[k] \le 0$ for $k \in A$ and $g[k] \ge 0$ for $k \in E \setminus A$, then $A$ is optimal. But if we only
have knowledge of a candidate set $A$, then finding a subgradient $g \in \partial\tilde{f}(e_A)$ which demonstrates
optimality may be extremely difficult, as the set of subgradients is a polyhedron with exponentially
many extreme points. But our algorithm naturally suggests the subgradient we could use; the gradient of the smoothed extension is one such subgradient, provided a certain condition is satisfied, as
described in the following Lemma.
Lemma 1. Suppose $f$ is a decomposable submodular function, with Lovász extension $\tilde{f}$ and
smoothed extension $\tilde{f}^{\mu}$ as in the previous section. Suppose $x \in \mathbb{R}^n$ and $A \in 2^E$ satisfy the following property:
$$\min_{k \in A,\, l \in E \setminus A} x[k] - x[l] \;\ge\; 2\mu.$$
Then $\nabla\tilde{f}^{\mu}(x) \in \partial\tilde{f}(e_A)$.
This is a consequence of our formula for $\nabla\tilde{\Psi}^{\mu}_{w,y}$, but see the appendix of the extended paper [21] for
a detailed proof. Lemma 1 states that if the components of the point $x$ corresponding to elements of $A$
are all larger than all the other components by at least $2\mu$, then the gradient at $x$ is a subgradient for
$\tilde{f}$ at $e_A$ (which by Equation 5 allows us to compute an optimality gap). In practice, this separation
of components naturally occurs as the iterates move in the direction of the point $e_A$, long before
they ever actually reach the point $e_A$. But even if the components are not separated, we can easily
add a positive multiple $\alpha$ of $e_A$ to separate them and then compute the gradient there to get an
optimality gap. In summary, we have the following algorithm (Algorithm 3) to check the optimality
of a candidate set:
Algorithm 3: Set Optimality Check
Input: Set $A$; decomposable function $f$; scale $\mu$; $x \in \mathbb{R}^n$.
begin
    $\alpha = 2\mu + \max_{k \in A,\, l \in E \setminus A} x[l] - x[k]$;
    $g = \nabla\tilde{f}^{\mu}(x + \alpha\, e_A)$;
    $\mathrm{gap} = \sum_k \max\big(0,\; g[k]\,(e_A[k] - e_{E \setminus A}[k])\big)$;
Output: gap, which satisfies $\mathrm{gap} \ge f(A) - \min_{x \in [0,1]^n} \tilde{f}(x)$
Of critical importance is how to choose the candidate set $A$. By Equation (5), for a set to be
optimal, we want the components of the gradient $\nabla\tilde{f}^{\mu}(x + \alpha\, e_A)[k]$ to be negative for $k \in A$ and
positive for $k \in E \setminus A$. So it is natural to choose $A = \{k : \nabla\tilde{f}^{\mu}(x)[k] \le 0\}$. Thus, if adding $\alpha\, e_A$
does not change the signs of the components of the gradient, then in fact we have found the optimal
set. This stopping criterion is very effective in practice, and we use it in all of our experiments.
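A sketch of this certificate check (ours), given the smoothed gradient oracle sketched earlier:

```python
import numpy as np

def optimality_gap(grad_f_mu, x, mu):
    """Pick A = {k : grad[k] <= 0}, shift x by alpha * e_A, and return (A, gap)."""
    g0 = grad_f_mu(x, mu)
    in_A = g0 <= 0.0
    if in_A.all() or (~in_A).all():
        alpha = 0.0                         # separation condition is vacuous
    else:
        alpha = 2.0 * mu + x[~in_A].max() - x[in_A].min()
    g = grad_f_mu(x + alpha * in_A, mu)     # subgradient of f~ at e_A by Lemma 1
    sign = np.where(in_A, 1.0, -1.0)        # e_A[k] - e_{E \ A}[k]
    gap = np.maximum(0.0, g * sign).sum()
    return in_A, gap
```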
[Figure 1 appears here. Panel (a) shows example regions R1, R2, R3 used for the higher-order potentials; panels (b) and (c) are log-log plots of running time (s) versus problem size (n) for the algorithms HYBRID, SFM3, LEX2, PR, MinNorm, and SLG on the genrmf-long and genrmf-wide problems.]

Figure 1: (a) Example regions used for our higher-order potential functions. (b-c) Comparison of running times of submodular minimization algorithms on synthetic problems from DIMACS [1].
5 Extension to General Concave Potentials

To extend our algorithm to work on general concave functions, we note that an arbitrary concave
function can be expressed as an integral of threshold potential functions. This is a simple consequence of integration by parts, which we state in the following lemma:

Lemma 2. For $\phi \in C^2([0,T])$,
$$\phi(x) = \phi(0) + \phi'(T)\,x - \int_0^T \min(x, y)\,\phi''(y)\,dy, \qquad \forall x \in [0,T].$$
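For completeness, a short derivation of Lemma 2 (our own, via integration by parts, using $\partial_y \min(x,y) = \mathbf{1}[y < x]$ and $\min(x,T) = x$ for $x \in [0,T]$):

```latex
\begin{align*}
\int_0^T \min(x,y)\,\phi''(y)\,dy
  &= \big[\min(x,y)\,\phi'(y)\big]_0^T - \int_0^T \mathbf{1}[y < x]\,\phi'(y)\,dy \\
  &= x\,\phi'(T) - \int_0^x \phi'(y)\,dy
   = x\,\phi'(T) - \phi(x) + \phi(0),
\end{align*}
```

and rearranging gives the stated identity.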
This means that for a general sum of concave potentials as in Equation (3), we have:
$$f(A) = c \cdot e_A + \sum_j \Big( \phi_j(0) + \phi_j'(w_j \cdot \mathbf{1})\,(w_j \cdot e_A) - \int_0^{w_j \cdot \mathbf{1}} \Psi_{w_j,y}(A)\,\phi_j''(y)\,dy \Big).$$
Then we can define $\tilde{f}$ and $\tilde{f}^{\mu}$ by replacing $\Psi$ with $\tilde{\Psi}$ and $\tilde{\Psi}^{\mu}$, respectively. Our SLG algorithm is
essentially unchanged, the conditions for optimality still hold, and so on. Conceptually, we just use
a different smoothed gradient, but calculating it is more involved. We need to compute integrals
of the form $\int \nabla\tilde{\Psi}^{\mu}_{w,y}(x)\,\phi''(y)\,dy$. Since $\nabla\tilde{\Psi}^{\mu}_{w,y}(x)$ is a piecewise linear function with respect to $y$,
which we can compute, we can evaluate the integral by parts so that we need only evaluate $\phi$, but
not its derivatives. We omit the resulting formulas for space limitations.

6 Experiments
Synthetic Data. We reproduce the experimental setup of [8] designed to compare submodular
minimization algorithms. Our goal is to find the minimum cut of a randomly generated graph (which
requires submodular minimization of a sum of 2-potentials) with the graph generated by the specifications in [1]. We compare against the state of the art combinatorial algorithms (LEX2, HYBRID,
SFM3, PR [6]) that are guaranteed to find the exact solution in polynomial time, as well as the
Minimum Norm algorithm of [8], a practical alternative with unknown running time. Figures 1(b)
and 1(c) compare the running time of SLG against the running times reported in [8]. In some cases,
SLG was 6 times faster than the MinNorm algorithm. However the comparison to the MinNorm
algorithm is inconclusive in this experiment, since while we used a faster machine, we also used a
simple MATLAB implementation. What is clear is that SLG scales at least as well as MinNorm on
these problems, and is practical for problem sizes that the combinatorial algorithms cannot handle.
Image Segmentation Experiments. We also tested our algorithm on the joint image
segmentation-and-classification task introduced in Section 3. We used an implementation of
TextonBoost [20], trained and tested on subsampled images from [5]. As seen in Figures 2(e)
and 2(g), using only the per-pixel score from our TextonBoost implementation gets the general area
of the object, but does not do a good job of identifying the shape of a classified object. Compare
to the ground truth in Figures 2(b) and 2(d). We then perform MAP inference in a Markov Random
Field with 2-potentials (as done in [20]). While this regularization, as shown in Figures 2(f) and
2(h), leads to improved performance, it still performs poorly on classifying the boundary.
[Figure 2 appears here. Panels: (a, c) original images; (b, d) ground truth; (e, g) pixel-based classification; (f, h) pairwise potentials; (i, k) concave potentials; (j, l) continuous solutions.]

Figure 2: Segmentation experimental results.
Finally, we used SLG to regularize with higher order potentials. To generate regions for our potentials, we randomly picked seed pixels and grew the regions based on HSV channels of the image.
We picked our seed pixels with a preference for pixels which were included in the least number of
previously generated regions. Figure 1(a) shows what the regions typically looked like. For our experiments, we used 90 total regions. We used SLG to minimize $f(A) = c \cdot e_A + \sum_j |A \cap R_j|\,|R_j \setminus A|$,
where $c$ was the output from TextonBoost, scaled appropriately. Figures 2(i) and 2(k) show the classification output. The continuous variables $x$ at the end of each run are shown in Figures 2(j) and
2(l); while they have no formal meaning, in general one can interpret a very high or low value of $x[k]$
as high confidence in the classification of the pixel $k$. To generate the result shown
in Figure 2(k), a problem with $10^4$ variables and 90 concave potentials, our MATLAB/mex implementation of SLG took 71.4 seconds. In comparison, the MinNorm implementation of the SFO
toolbox [15] gave the same result, but took 6900 seconds. Similar problems on an image of twice the
resolution ($4 \times 10^4$ variables) were tested using SLG, resulting in runtimes of roughly 1600 seconds.
7 Conclusion
We have developed a novel method for efficiently minimizing a large class of submodular functions
of practical importance. We do so by decomposing the function into a sum of threshold potentials,
whose Lovász extensions are convenient for using modern smoothing techniques of convex optimization. This allows us to solve submodular minimization problems with thousands of variables
that cannot be expressed using only pairwise potentials. Thus we have achieved a middle ground
between graph-cut-based algorithms which are extremely fast but only able to handle very specific
types of submodular minimization problems, and combinatorial algorithms which assume nothing
but submodularity but are impractical for large-scale problems.
Acknowledgements This research was partially supported by NSF grant IIS-0953413, a gift from
Microsoft Corporation and an Okawa Foundation Research Grant. Thanks to Alex Gittens and
Michael McCoy for use of their TextonBoost implementation.
References
[1] DIMACS, The First international algorithm implementation challenge: The core experiments, 1990.
[2] F. Bach, Structured sparsity-inducing norms through submodular functions, Advances in Neural Information Processing Systems (2010).
[3] S. Becker, J. Bobin, and E.J. Candès, NESTA: A fast and accurate first-order method for sparse recovery, arXiv preprint (2009), 1-37.
[4] F.A. Chudak and K. Nagano, Efficient solutions to relaxations of combinatorial problems with submodular penalties via the Lovász extension and non-smooth convex optimization, Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics, 2007, pp. 79-88.
[5] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, The PASCAL Visual Object Classes Challenge 2009 (VOC2009) Results, http://www.pascal-network.org/challenges/VOC/voc2009/workshop/index.html.
[6] L. Fleischer and S. Iwata, A push-relabel framework for submodular function minimization and applications to parametric optimization, Discrete Applied Mathematics 131 (2003), no. 2, 311-322.
[7] D. Freedman and P. Drineas, Energy minimization via graph cuts: Settling what is possible, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005. CVPR 2005, vol. 2, 2005.
[8] Satoru Fujishige, Takumi Hayashi, and Shigueo Isotani, The Minimum-Norm-Point Algorithm Applied to Submodular Function Minimization and Linear Programming, (2006), 1-19.
[9] E. Hazan and S. Kale, Beyond convexity: Online submodular minimization, Advances in Neural Information Processing Systems 22 (Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, eds.), 2009, pp. 700-708.
[10] T. Itoko and S. Iwata, Computational geometric approach to submodular function minimization for multiclass queueing systems, Integer Programming and Combinatorial Optimization (2007), 267-279.
[11] S. Iwata, L. Fleischer, and S. Fujishige, A combinatorial strongly polynomial algorithm for minimizing submodular functions, Journal of the ACM (JACM) 48 (2001), no. 4, 777.
[12] S. Iwata and J.B. Orlin, A simple combinatorial algorithm for submodular function minimization, Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, Society for Industrial and Applied Mathematics, 2009, pp. 1230-1237.
[13] P. Kohli, M.P. Kumar, and P.H.S. Torr, P3 & Beyond: Solving Energies with Higher Order Cliques, 2007 IEEE Conference on Computer Vision and Pattern Recognition (2007), 1-8.
[14] P. Kohli, L. Ladický, and P.H.S. Torr, Robust Higher Order Potentials for Enforcing Label Consistency, International Journal of Computer Vision 82 (2009), no. 3, 302-324.
[15] A. Krause, SFO: A Toolbox for Submodular Function Optimization, The Journal of Machine Learning Research 11 (2010), 1141-1144.
[16] L. Lovász, Submodular functions and convexity, Mathematical programming: the state of the art, Bonn (1982), 235-257.
[17] G. Nemhauser, L. Wolsey, and M. Fisher, An analysis of the approximations for maximizing submodular set functions, Mathematical Programming 14 (1978), 265-294.
[18] Yu. Nesterov, Smooth minimization of non-smooth functions, Mathematical Programming 103 (2004), no. 1, 127-152.
[19] M. Queyranne, Minimizing symmetric submodular functions, Mathematical Programming 82 (1998), no. 1-2, 3-12.
[20] J. Shotton, J. Winn, C. Rother, and A. Criminisi, TextonBoost for Image Understanding: Multi-Class Object Recognition and Segmentation by Jointly Modeling Texture, Layout, and Context, Int. J. Comput. Vision 81 (2009), no. 1, 2-23.
[21] P. Stobbe and A. Krause, Efficient minimization of decomposable submodular functions, arXiv:1010.5511 (2010).
3,345 | 4,029 | Effects of Synaptic Weight Diffusion on Learning in
Decision Making Networks
Kentaro Katahira1,2,3 , Kazuo Okanoya1,3 and Masato Okada1,2,3
ERATO Okanoya Emotional Information Project, Japan Science Technology Agency
2
Graduate School of Frontier Sciences, The University of Tokyo, Kashiwa, Chiba 277-8561, Japan
3
RIKEN Brain Science Institute, Wako, Saitama 351-0198, Japan
[email protected] [email protected]
[email protected]
1
Abstract
When animals repeatedly choose actions from multiple alternatives, they can allocate their choices stochastically depending on past actions and outcomes. It
is commonly assumed that this ability is achieved by modifications in synaptic
weights related to decision making. Choice behavior has been empirically found
to follow Herrnstein?s matching law. Loewenstein & Seung (2006) demonstrated
that matching behavior is a steady state of learning in neural networks if the synaptic weights change proportionally to the covariance between reward and neural activities. However, their proof did not take into account the change in entire synaptic distributions. In this study, we show that matching behavior is not necessarily
a steady state of the covariance-based learning rule when the synaptic strength is
sufficiently strong so that the fluctuations in input from individual sensory neurons influence the net input to output neurons. This is caused by the increasing
variance in the input potential due to the diffusion of synaptic weights. This effect
causes an undermatching phenomenon, which has been observed in many behavioral experiments. We suggest that the synaptic diffusion effects provide a robust
neural mechanism for stochastic choice behavior.
1 Introduction
Decision making has often been studied in experiments in which a subject repeatedly chooses actions
and rewards are given depending on the action. The choice behavior of subjects in such experiments
is known to obey Herrnstein?s matching law [1]. This law states that the proportional allocation of
choices matches the relative reinforcement obtained from those choices. The neural correlates of
matching behavior have been investigated [2] and the computational models that explain them have
been developed [3, 4, 5, 6, 7].
Previous studies have shown that the learning rule in which the weight update is made proportionally
to the covariance between reward and neural activities leads to matching behavior (we simply refer
to this learning rule as the covariance rule) [3, 7]. In this study, by means of a statistical mechanical
approach [8, 9, 10, 11], we analyze the properties of the covariance rule in a limit where the number of plastic synapses is infinite. We demonstrate that matching behavior is not necessarily
the covariance rule under three conditions: (1) learning is achieved through the modification of the
synaptic weights from sensory neurons to the value-encoding neurons; (2) individual fluctuations in
sensory input neurons are so large that they can affect the potential of value-coding neurons (possibly via sufficiently strong synapses); (3) the number of plastic synapses that are involved in learning
is large. This result is caused by the diffusion of synaptic weights. The term "diffusion" refers
to a phenomenon where the distribution over the population of synaptic weights broadens. This
diffusion increases the variance in the potential of output units since the broader synaptic weight
distributions are, the more they amplify fluctuations in individual inputs. This makes the choice
behavior of the network more random and moves the probabilities of choosing the alternatives closer
to equal probabilities than those predicted by the matching law. This outcome corresponds to the undermatching phenomenon, which has been observed in behavioral experiments.
Our results suggest that when we discuss the learning processes in a decision making network, it may
be insufficient to only consider a steady state for individual weight updates, and we should therefore
consider the dynamics of the weight distribution and the network architecture. This proceeding is a
short version of our original paper [12], with the model modified and new results included.
2 Matching Law
First, let us formulate the matching law. We will consider a case with two alternatives (each denoted
as A and B), which has generally been studied in animal experiments. Here, we consider stochastic
choice behavior, where at each time step, a subject chooses alternative a with probability pa . We
denote the reward as r. For the sake of simplicity, we restrict r to a binary variable: r = 0 represents
the absence of a reward, and $r = 1$ means that a reward is given. The expected return, $\langle r|a\rangle$, refers
to the average reward per choice $a$, and the income, $I_a$, refers to the total amount of reward resulting
from the choice $a$; $I_a / \sum_{a'} I_{a'}$ is the fractional income from choice $a$. For a large number of
trials, this equals $\langle r|a\rangle p_a$. $\langle r\rangle = \sum_{a'} \langle r|a'\rangle p_{a'}$ is the average reward per trial over possible choice
behavior. The matching law states that $I_a / \sum_{a'} I_{a'} = p_a$ for all $a$ with $p_a \ne 0$. For a large
number of trials, the fraction of income from an alternative $a$ is expressed as
$$\frac{\langle r|a\rangle p_a}{\sum_{a'} \langle r|a'\rangle p_{a'}} = \frac{\langle r|a\rangle p_a}{\langle r\rangle}.$$
Then, the matching law states that this quantity equals $p_a$ for all $a$. To make this hold, it should
satisfy
$$\langle r|A\rangle = \langle r|B\rangle = \langle r\rangle, \qquad (1)$$
if $p_A \ne 0$ and $p_B \ne 0$. Note that $\langle r|a\rangle$ is the average reward given the current choice, and this is
a function of the past choices. Equation 1 is a condition for the matching law, and we will often use
this identity.
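As an illustration (ours, not from the paper), the sketch below estimates the fractional income on a two-alternative variable-interval schedule for a fixed choice probability; scanning $p_A$ for the point where the fractional income equals $p_A$ numerically locates the matching solution (the text below reports this point as 0.6923 for the rates used here):

```python
import numpy as np

def fractional_income(pA, lamA=0.2, lamB=0.1, T=200000, seed=0):
    """Simulate a discrete VI schedule: rewards are baited with prob lam_a and
    persist until that alternative is chosen. Returns the income fraction for A."""
    rng = np.random.default_rng(seed)
    baited = np.zeros(2, dtype=bool)
    income = np.zeros(2)
    lam = np.array([lamA, lamB])
    for _ in range(T):
        baited |= rng.random(2) < lam      # baiting is independent per alternative
        a = 0 if rng.random() < pA else 1
        if baited[a]:
            income[a] += 1.0
            baited[a] = False
    return income[0] / income.sum()

for pA in (0.5, 0.6923, 0.9):
    print(pA, fractional_income(pA))  # fractional income crosses pA near 0.69
```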
3 Model
Decision Making Network: The decision making network we study consists of sensory-input neurons and output neurons that represent the subjective value of each alternative (we call the output
neurons value-encoding neurons). The network is divided into two groups (A and B), which participate in choosing each alternative. Sensory cues from both targets are given simultaneously via
the $N$-neuron populations, $x^A = (x_1^A, \dots, x_N^A)$ and $x^B = (x_1^B, \dots, x_N^B)$.¹ Each component of the input
vectors $x^A$ and $x^B$ independently obeys a Gaussian distribution with mean $X_0$ and variance one
(these quantities can be spike counts during stimulus presentation).
(these quantities can be spike counts during stimulus presentation).
The choice is made in such a way that alternative a is chosen if the potential of output unit ua ,
which will be specified below, is higher than that of the other alternative. Although we do not model
this comparison process explicitly, it can be carried out via a winner-take-all competition mediated
by feedback inhibition, as has been commonly assumed in decision making networks [3, 13]. In
this competition, the ?winner? group gains a high firing rate while the ?loser? enters a low firing
state [13]. Let y A and y B denote the final output of an output neuron after competition and this is
determined as
y A = 1, y B = 0, if uA ? uB ,
y A = 0, y B = 1, if uA < uB .
A
B
With the synaptic efficacies (or weights) J A = (J1A , ..., JN
) and J B = (J1B , ..., JN
), the net input
to the output units are given by
ha =
N
?
Jia xai , a = A, B.
(2)
i=1
1 This assumption might be the case when the sensory input for each alternative is completely different, e.g.,
in position, and in color such as those in Sugrue et al.?s experiment [2]. The case that output neurons share the
inputs from sensory neurons are analyzed in [12].
2
We assume that $J_i^a$ is scaled as $O(1/\sqrt{N})$. This means that the mean of $h^a$ is $O(\sqrt{N})$ and thus diverges
for large $N$, while the variance is kept of order unity. This is a key assumption of our models. If $J_i^a$
were scaled as $O(1/N)$ instead, the individual fluctuations in $x_i^a$ would be averaged out. It has been shown
that the mean of the potential is kept of order unity, while fluctuations in external sources ($x_i^a$) that
are of order unity affect the potential of the output neuron, under the condition that recurrent inputs
from inhibitory interneurons, excitatory recurrent inputs, and input from external sources ($x_i^a$) are
balanced [14]. We do not explicitly model this recurrent balancing mechanism, but phenomenologically incorporate it as follows.
Using the order parameters
$$l_a = \|J^a\|, \qquad \bar{J}_a = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} J_i^a, \qquad (3)$$
we find $h^a \sim \mathcal{N}(\sqrt{N} X_0 \bar{J}_a,\, l_a^2)$, where $\mathcal{N}(\mu, \sigma^2)$ denotes the Gaussian distribution with mean $\mu$
and variance $\sigma^2$. We assume $u^a$ obeys a Gaussian distribution with mean $C_a E[h^a]/\sqrt{N}$ and variance
$C_a \mathrm{Var}[h^a] + \sigma_p^2$ due to the recurrent balancing mechanism [14]. $C_A$, $C_B$ and $\sigma_p^2$ are constants
that are determined according to the specific architecture of the recurrent network, but we set
$C_A = C_B = 1$ since they do not affect the qualitative properties of the model. Then, $u^a$ is computed
as $u^a = h^a - h^a_{\mathrm{rec}} + \sigma_p \xi^a$ with $h^a_{\mathrm{rec}} = (1 - 1/\sqrt{N})\, E[h^a]$, where $E[h^a] = \sqrt{N} X_0 \bar{J}_a$ and $\xi^a$ is a
Gaussian random variable with zero mean and unit variance. Then, the $u^a$ obey independent Gaussian
distributions whose means and variances are respectively given by $X_0 \bar{J}_a$ and $l_a^2 + \sigma_p^2$. From this, the
probability that the network will choose alternative A can be described as
$$p_A = \frac{1}{2}\,\mathrm{erfc}\!\left( -\frac{X_0 (\bar{J}_A - \bar{J}_B)}{\sqrt{2(l_A^2 + l_B^2 + 2\sigma_p^2)}} \right), \qquad (4)$$
where $\mathrm{erfc}(\cdot)$ is the complementary error function, $\mathrm{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt$. This expression is
in closed form in the order parameters. Thus, if we can describe the evolution of these order
parameters, we can completely describe how the behavior of the model changes as a consequence of
learning. In the following, we will often use an additional order parameter, the variance of the weights,
$\sigma_a^2$. This parameter is more convenient for gaining insight into the evolution of the weights than
the weight norm, $l_a$. The diffusion of the weight distributions is reflected by increases in $\sigma_a^2$, i.e., by the
difference between the growth of the second-order moment of the weight distribution, $l_a^2$, and that of the
square of its mean, $\bar{J}_a^2$.
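In code, Equation 4 is a one-liner (our sketch):

```python
from math import erfc, sqrt

def choice_prob_A(Jbar_A, Jbar_B, l2_A, l2_B, X0=2.0, sigma_p=1.0):
    """Probability of choosing A as a function of the order parameters (Eq. 4)."""
    return 0.5 * erfc(-X0 * (Jbar_A - Jbar_B)
                      / sqrt(2.0 * (l2_A + l2_B + 2.0 * sigma_p**2)))
```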
Learning Rules: We consider the following two learning rules, which belong to the class of covariance learning rules:

Reward-modulated (RM) Hebb rule:
$$J_i^a(t+1) = J_i^a(t) + \frac{\eta}{N}\,[r(t) - \bar{r}(t)]\; y^a(t)\,(x_i^a(t) - c_x), \qquad (5)$$

Delta rule:
$$J_i^a(t+1) = J_i^a(t) + \frac{\eta}{N}\,[r(t) - \bar{r}(t)]\,(x_i^a(t) - c_x), \qquad (6)$$

where $\eta$ is the learning rate, $\langle\cdot\rangle$ denotes the expected value, and $c_x$ is a constant. The expectation of
these updates is proportional to the covariance between the reward, $r$, and a measure of neural activity
($y^a(x_i^a - c_x)$ for the RM-Hebb rule, and $x_i^a - c_x$ for the delta rule). Variants of the RM-Hebb rule
have recently been studied intensively [4, 15, 16, 17, 18, 19, 20]. The delta rule has been used as
an example of the covariance rule [3, 7] and has also been used as the learning rule in a model of
perceptual learning [21]. The expected reward, $\bar{r}$, can be estimated, e.g., with an exponential kernel
such as $\bar{r}(t+1) = (1 - \gamma)\, r(t) + \gamma\, \bar{r}(t)$ with a constant $\gamma$. We assume that $c_x = (1 - 1/\sqrt{N})\, X_0$ to
simplify the following analysis².

² From this assumption, this model can be transformed into a mathematically equivalent form in which the distribution of each input $x_i^a$ is replaced with $\mathcal{N}(X_0/\sqrt{N}, 1)$ and the potential of the output is replaced with $u^a = \sum_{i=1}^{N} J_i^a x_i^a + \sigma_p \xi^a$, where $\xi^a \sim \mathcal{N}(0, 1)$.
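To make the model concrete, here is a minimal single-run simulation sketch (ours; parameter values such as gamma are illustrative, not from the paper) of the network with the RM-Hebb rule on the VI schedule used later:

```python
import numpy as np

def simulate_rm_hebb(N=1000, T=100000, eta=0.1, X0=2.0, sigma_p=1.0,
                     lam=(0.2, 0.1), gamma=0.99, seed=0):
    """Single run of the decision network with the RM-Hebb rule on a VI schedule."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(2, N))   # weights J^A, J^B
    cx = (1.0 - 1.0 / np.sqrt(N)) * X0
    lam = np.asarray(lam)
    baited = np.zeros(2, dtype=bool)
    rbar, n_A = 0.0, 0
    for _ in range(T):
        x = rng.normal(X0, 1.0, size=(2, N))             # sensory inputs
        h = (J * x).sum(axis=1)
        u = (h - (1.0 - 1.0 / np.sqrt(N)) * X0 * J.sum(axis=1)
             + sigma_p * rng.normal(size=2))             # balanced potential
        a = int(np.argmax(u))                            # winner-take-all choice
        baited |= rng.random(2) < lam                    # VI baiting
        r = float(baited[a]); baited[a] = False          # harvest if baited
        y = np.zeros((2, 1)); y[a] = 1.0
        J += (eta / N) * (r - rbar) * y * (x - cx)       # Eq. (5)
        rbar = (1.0 - gamma) * r + gamma * rbar          # running reward estimate
        n_A += (a == 0)
    return n_A / T                                       # empirical p_A
```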
4 Macroscopic Description of Learning Processes
Here, following the statistical mechanical analysis of on-line learning [8, 9, 10, 11], we derive
equations that describe the evolution of the order parameters. To do this, we first rewrite the learning
rule in a vector form:
$$J^a(t+1) = J^a(t) + \frac{1}{N}\, F_a\, (x^a - c_x), \qquad (7)$$
where for the RM-Hebb rule, $F_a = \eta(r_t - \bar{r}_t)\, y^a$, and for the delta rule, $F_a = \eta(r_t - \bar{r}_t)$. Taking the
square norm of each side of equation 7, we obtain $l_a(t+1)^2 = l_a(t)^2 + \frac{2}{N} F_a(t)\,\hat{h}^a + \frac{1}{N} F_a(t)^2 +
O(1/N^2)$, where we have defined $\hat{h}^a = \sum_{i=1}^{N} J_i^a (x_i^a - c_x)$. Summing up over all components
on both sides of equation 7, we obtain $\bar{J}_a(t+1) = \bar{J}_a(t) + \frac{1}{N} F_a(t)\,\hat{x}_a$, where we have defined
$\hat{x}_a = \frac{1}{\sqrt{N}} \sum_{i=1}^{N} (x_i^a - c_x)$. In both of these equations, the magnitude of each update is of order $1/N$.
Hence, to change the order parameters by order one, $O(N)$ updates are needed. Within this short
period that spans the $O(N)$ updates, the weight change of $O(1/N)$ can be neglected, and the self-averaging property holds. By using this property and introducing a continuous "time" scaled by $N$,
i.e., $\tau = t/N$, the evolutions of the order parameters obey ordinary differential equations:
$$\frac{d l_a^2}{d\tau} = 2\langle F_a \hat{h}^a \rangle + \langle F_a^2 \rangle, \qquad \frac{d \bar{J}_a}{d\tau} = \langle F_a \hat{x}_a \rangle, \qquad (8)$$
form of the ensemble averages are obtained for reward-dependent Hebbian learning as
? a ? = ? pa {?r|a? ? ?r?} ?h
? a |a?,
?Fa h
{
}
?Fa2 ? = ? 2 pa (1 ? 2?r?)?r|a? + (?r?)2 ,
? a ? = ? pa {?r|a? ? ?r?} ??
?Fa x
xa |a?,
and for the delta rule,
? a? = ?
?Fa h
{
}
? a |a? + (?r|a? ? ? ?r?)J?a ,
pa (?r|a? ? ?r|a? ?)?h
?Fa2 ? = ? 2 {?r?(1 ? ?r?)} ,
? a ? = ? { pa (?r|a? ? ?r|a? ?)??
?Fa x
xa |a? + ?r|a? ? ? ?r?} .
The conditional averages $\langle \hat{h}^a | a \rangle$ and $\langle \hat{x}_a | a \rangle$ in these equations are computed as
$$\langle \hat{h}^a | a \rangle = \bar{J}_a X_0 + \frac{l_a^2}{p_a \sqrt{2\pi L^2}} \exp\!\left( -\frac{X_0^2 D_{\bar{J}}^2}{2 L^2} \right), \qquad \langle \hat{x}_a | a \rangle = X_0 + \frac{\bar{J}_a}{p_a \sqrt{2\pi L^2}} \exp\!\left( -\frac{X_0^2 D_{\bar{J}}^2}{2 L^2} \right), \qquad (9)$$
where we have defined $L = \sqrt{l_A^2 + l_B^2 + 2\sigma_p^2}$ and $D_{\bar{J}} = \bar{J}_A - \bar{J}_B$. The details of the derivation are
given in the supplementary material and [12].
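These closed-form averages make the macroscopic dynamics easy to integrate numerically. Below is a minimal Euler sketch for the RM-Hebb rule without weight normalization (ours; `r_given` is an assumed user-supplied function returning the conditional reward $\langle r|a\rangle$ for the chosen reward schedule, which for a VI schedule depends on the choice probability):

```python
import numpy as np
from math import erfc, exp, sqrt, pi

def integrate_rm_hebb(r_given, X0=2.0, sigma_p=1.0, eta=0.1, dtau=0.01, T=100.0):
    """Euler integration of Eqs. (8)-(9) for the RM-Hebb rule (no normalization).
    r_given(a, pA) -> <r|a> for a in {0, 1}; state: l2[a] = l_a^2 and Jbar[a]."""
    l2, Jbar = np.array([1.0, 1.0]), np.array([0.1, 0.0])  # illustrative start
    for _ in range(int(T / dtau)):
        L2 = l2.sum() + 2.0 * sigma_p**2
        pA = 0.5 * erfc(-X0 * (Jbar[0] - Jbar[1]) / sqrt(2.0 * L2))
        p = np.array([pA, 1.0 - pA])
        r = np.array([r_given(0, pA), r_given(1, pA)])
        rmean = (p * r).sum()                 # assumes the estimate rbar ~ <r>
        corr = exp(-X0**2 * (Jbar[0] - Jbar[1])**2 / (2.0 * L2)) / sqrt(2.0 * pi * L2)
        h_cond = Jbar * X0 + l2 * corr / p    # <h^a | a>, Eq. (9)
        x_cond = X0 + Jbar * corr / p         # <x_a | a>, Eq. (9)
        Fh = eta * p * (r - rmean) * h_cond
        F2 = eta**2 * (p * (1.0 - 2.0 * rmean) * r + rmean**2)
        Fx = eta * p * (r - rmean) * x_cond
        l2 += dtau * (2.0 * Fh + F2)          # Eq. (8)
        Jbar += dtau * Fx
    return pA, l2, Jbar
```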
Next, we consider weight normalization, in which the total length of the weight vector is kept constant. We adopted this weight normalization for analytical convenience rather than biological realism. Other weight constraints would produce no clear differences in
the following results. Specifically, we constrain the norm of the weight vector as $\|J\|^2 = 2$, where
$J = (J_1^A, \dots, J_N^A, J_1^B, \dots, J_N^B)$. This is equivalent to keeping $l_A^2 + l_B^2 = 2$. This is achieved by
modifying the learning rule in the following way [22]:
$$J^a(t+1) = \frac{\sqrt{2}\,\big(J^a(t) + \frac{1}{N} F_a x^a\big)}{\sqrt{\|J^A(t) + \frac{1}{N} F_A x^A\|^2 + \|J^B(t) + \frac{1}{N} F_B x^B\|^2}} = \frac{J^a(t) + \frac{1}{N} F_a x^a}{\sqrt{1 + F/N}}, \qquad (10)$$
with $F \equiv F_A \hat{h}^A + F_B \hat{h}^B + \frac{1}{2}(F_A^2 + F_B^2)$, provided that $\|J\|^2 = 2$ holds at trial $t$. Expanding the
right-hand side to first order in $1/N$, we can obtain differential equations similarly to Equation 8:
$$\frac{d l_a^2}{d\tau} = 2\langle F_a \hat{h}^a \rangle + \langle F_a^2 \rangle - \langle F \rangle\, l_a^2, \qquad \frac{d \bar{J}_a}{d\tau} = \langle F_a \hat{x}_a \rangle - \frac{1}{2}\langle F \rangle\, \bar{J}_a. \qquad (11)$$
With $\langle F \rangle = \langle F_A \hat{h}^A \rangle + \langle F_B \hat{h}^B \rangle + \frac{1}{2}(\langle F_A^2 \rangle + \langle F_B^2 \rangle)$, we can find that $d(l_A^2 + l_B^2)/d\tau$ becomes zero
when $l_A^2 + l_B^2 = 2$; thus, the length of the weight vector is kept constant.
[Figure 1 appears here. Panels A-H plot the probability of choosing A ($p_A$) and the order parameters against scaled time $\tau$ for the RM-Hebb rule (A, B, E, F) and the delta rule (C, D, G, H), without (A-D) and with (E-H) weight normalization; dashed lines mark the matching level.]

Figure 1: Evolution of choice probability and order parameters for the RM-Hebb rule (A, B, E, F) and the delta rule (C, D, G, H), without weight normalization (A-D) and with normalization (E-H). Parameters were $X_0 = 2$, $\eta = 0.1$ and $\sigma_p = 1$, and the reward schedule was a VI schedule (see main text) with $\lambda_A = 0.2$, $\lambda_B = 0.1$. Lines represent results of theory and symbols plot the mean of ten trials of computer simulation. Simulations were done for $N = 1{,}000$. Error bars indicate standard deviation (s.d.). Error bars are almost invisible for choice probability since the s.d. is very small.
5 Results
To demonstrate the behavior of the model, we used a time-discrete version of a variable-interval
(VI) reward schedule, which is commonly used for studying the matching law. In a VI schedule, a
reward is assigned to the two alternatives stochastically and independently, with a constant probability
$\lambda_a$ for alternative $a$ ($a = A, B$). The reward remains until it is harvested by choosing the alternative.
Here, we use $\lambda_A = 0.2$, $\lambda_B = 0.1$. For this task setting, the choice probability that yields matching
behavior (denoted as $p_A^{\mathrm{match}}$) is $p_A^{\mathrm{match}} = 0.6923$. Figure 1(A-D) plots the evolution of the choice
probability and the order parameters for the two learning rules without a weight normalization constraint.
The lines represent the results of the theory and the symbols plot the results of computer simulations
($N = 1{,}000$). The results of the theory agree well with those of the computer simulations, indicating
the validity of our theory. We can see that the choice probability approaches the value that yields matching
behavior ($p_A^{\mathrm{match}}$), while the order parameters $\bar{J}_a$ and $\sigma_a$ continue to change without becoming
saturated. The weight standard deviation, $\sigma_a$, always increases (the synaptic weight diffusion).

Figure 1(E-H) plots the results with weight normalization. Again, the results of the theory agree well
with those of the computer simulations. For the RM-Hebb rule, the choice probability saturates at a
value below $p_A^{\mathrm{match}}$. For the delta rule, the choice probability first approaches $p_A^{\mathrm{match}}$, but without
reaching it. It then returns toward the uniform choice probability ($p_A = 0.5$) due to its larger
diffusion effect compared to the RM-Hebb rule.
5.1 Matching Behavior Is Not Necessarily a Steady State of Learning
From Figure 1, the choice probability seems to asymptotically approach matching behavior for the
case without weight normalization. However, matching behavior is not necessarily a steady state
of learning. In Figure 2, the order parameters are initialized so that $p_A(0) = p_A^{\mathrm{match}}$ and then
Equations 8 and 11 are numerically solved. We see that $p_A$ does not remain at $p_A^{\mathrm{match}}$ but changes
toward the uniform choice ($p_A = 0.5$) for both learning rules. Then, for the RM-Hebb rule, $p_A$
evolves back toward $p_A^{\mathrm{match}}$, but it does not do so for the delta rule. To understand the mechanism for this
[Figure 2 appears here. Panels (A) and (B) plot $p_A$ against $\tau$ for the RM-Hebb and delta rules, starting from perfect matching, without (A) and with (B) normalization.]

Figure 2: Strict matching behavior is not an equilibrium point. We set the initial values of the order parameters to yield perfect matching for (A) the no-normalization condition and (B) the normalization condition. In both cases, the choice probability that yields perfect matching is repulsive. For the no-normalization condition, the initial conditions were first set at $\bar{J}_B = 1.0$, $\sigma_A = \sigma_B = 1.0$, and then $\bar{J}_A$ was determined so that $p_A = p_A^{\mathrm{match}}$. For the normalization condition, these values were rescaled so that the normalization condition was met.
condition was met.
repulsive property of matching behavior, let us substitute the condition of the matching law, ?r|A? =
? a?
?r|B? = ?r? into Equations 11, for the no normalization condition. We then find that ?Fa h
? a ? are zero but ?Fa2 ? is non-zero and positive except for the non-interesting case where r
and ?Fa x
always takes the same value. Therefore, when pA = pmatch
, the variance in the weight increases,
A
i.e., d?a2 /d? = d(la2 ? J?a2 )/d? > 0. This moves the choice probabilities toward unbiased choice
behavior, pA = 0.5 (see Equation 4). This is the reason that pmatch
is repulsive. This result is in
A
contrast with the N = 1 case [7] where the average changes stop when pA converges to pmatch
.
A
With weight normalization, $l_A^2 + l_B^2$ in Equation 4 is always two; thus, the only factor that
determines the choice probability is the difference between $\bar{J}_A$ and $\bar{J}_B$. Substituting $\langle r|a\rangle = \langle r\rangle,\ \forall a$, into
Equation 11, only the term $\langle F_a^2 \rangle$ remains, and we obtain $d(\bar{J}_B - \bar{J}_A)/d\tau = -\frac{1}{2}(\langle F_A^2 \rangle + \langle F_B^2 \rangle)(\bar{J}_B -
\bar{J}_A)$. Except for the uninteresting cases where $r$ is always 0 or 1, $\langle F_A^2 \rangle + \langle F_B^2 \rangle > 0$ holds; thus, the
absolute difference, $|\bar{J}_B - \bar{J}_A|$, always decreases. Hence, again, the choice probability at $p_A^{\mathrm{match}}$
approaches unbiased choice behavior due to the diffusion effect.
Nevertheless, the choice probability of the RM-Hebb rule without weight normalization asymptotically converges to $p_A^{\mathrm{match}}$. The reason for this can be explained as follows. First, we rewrite the
choice probability as
$$p_A = \frac{1}{2}\,\mathrm{erfc}\!\left( -\frac{X_0 (\bar{J}_A - \bar{J}_B)}{\sqrt{2(\bar{J}_A^2 + \bar{J}_B^2 + \sigma_A^2 + \sigma_B^2 + 2\sigma_p^2)}} \right). \qquad (12)$$
From this expression, we find that the larger the magnitude of J?a is, the weaker the effect of increases
in ?a . The ?diffusion term?, ?Fa2 ?, which moves pA away from pmatch
depends on pA but not on
A
?
the magnitude of Ja ?s. Thus, within the order parameter set satisfying pA = pmatch
, the larger the
A
magnitudes of Ja ?s are, the weaker is the repulsive effect. If |J?B ? J?A | ? ? while ?A , ?B are finite,
pA stays at pmatch
. Because |J?B ? J?A | can increase faster than ?A and ?B in the RM-Hebb rule
A
without any weight constraints, the network approaches such situations. This is the reason that in
Figure 2A the pA returned to pmatch
after it was repulsed from pmatch
. When weight normalization
A
A
is imposed, the magnitude of J?a ?s are limited as |J?B ? J?A | < 2. Thus, the diffusion effect prevents
pA from approaching pmatch
. In the delta rule, the magnitude of J?a ?s cannot increase independently
A
of ?a ?s. Thus, pA saturates before it reaches pmatch
, where the increase in |J?B ? J?A | and those in
A
?a ?s are balanced.
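For intuition, the following numerical sketch evaluates Equation 12. It is only an illustration: the
values chosen for the order parameters, X_0, and the noise scale σ_p are hypothetical, not taken from
the model in this paper.

import math

def choice_prob_A(J_A, J_B, sigma_A, sigma_B, X0=1.0, sigma_p=1.0):
    # Equation 12: p_A = (1/2) erfc(-X0 (J_A - J_B) / sqrt(2 * denom))
    denom = J_A**2 + J_B**2 + sigma_A**2 + sigma_B**2 + 2.0 * sigma_p**2
    return 0.5 * math.erfc(-X0 * (J_A - J_B) / math.sqrt(2.0 * denom))

# Diffusion broadens sigma_a, but if |J_A - J_B| grows faster (as under the
# RM-Hebb rule without normalization), the bias in p_A survives the broadening:
print(choice_prob_A(J_A=2.0, J_B=1.0, sigma_A=1.0, sigma_B=1.0))    # ~0.63
print(choice_prob_A(J_A=20.0, J_B=10.0, sigma_A=5.0, sigma_B=5.0))  # ~0.66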
5.2 Learning Rate Dependence of Learning Behavior
Next, we investigate how the learning rate, η, affects the choice behavior. The "diffusion term",
⟨F_{a^2}⟩, is quadratic in the learning rate η. In contrast, only first-order terms in η appear
Figure 3: Evolution of the choice probability for various learning rates η (panel legends range from
η = 0.01 to η = 1000). The top row is for the no-normalization condition and the bottom row is for the
normalization condition; the left column is for the RM-Hebb rule and the right column is for the delta
rule. Model parameters and task schedules are the same as those in Figure 1. Initial conditions were
set at σ_a = 0.0 (a = A, B), with J̄_a = 5.0 for the no-normalization condition and J̄_a = 1.0 for the
normalization condition.
in the other terms. Therefore, if η is small, the repulsive effect away from matching behavior due to the
diffusion effect is expected to weaken. Figure 3 plots the dependence of the evolution of p_A on η. On
the whole, as η is decreased, the asymptotic value of p_A approaches matching behavior, but relaxation
slows down due to the diffusion of synaptic weights. As we discussed above, the diffusion effect is more
evident for the delta rule than for the RM-Hebb rule, and for the weight-normalization condition than
for the no-normalization condition. This tendency becomes more evident as η increases.
For the RM-Hebb rule without normalization, networks approach matching behavior even for a very
large learning rate (η = 1000). At the beginning of learning, when J̄_a is of small magnitude, the
diffusion term ⟨F_{a^2}⟩ has a large impact, so that it greatly impedes learning when η is large. However,
as the magnitude of the difference J̄_A − J̄_B increases, this effect weakens and the dependence of
p_A on η becomes quite small. Although there is still a deviation from perfect matching (see the inset of
Figure 3A), the asymptotic value is almost unaffected for the RM-Hebb rule. For the delta rule without
normalization, the asymptotic values depend gradually on η. With normalization constraints, the
RM-Hebb rule also demonstrates a graded dependence of the asymptotic probability on η. These results
reflect the fact that the greater the learning rate η is, the larger the diffusion effect.
5.3 Deviation from the Matching Law
Choices by animals in many experiments deviate slightly from matching behavior toward unbiased
random choice, a phenomenon called undermatching [2, 23]. The synaptic diffusion effect reproduces
this phenomenon. Figure 4A,B plots the choice probability for option A as a function of the fractional
income from that option. If this function lies on the diagonal line, it corresponds to matching behavior.
For the RM rule with weight normalization, as the learning rate η increases, the choice probabilities
deviate from matching behavior towards unbiased random choice, p_A = 0.5 (Figure 4A). Similar
results are obtained for another weight constraint, the hard-bound condition (Figure 4B). In this
condition, if an update makes J_i^a > J_max/√N (or J_i^a < 0), then J_i^a is set to J_max/√N (or 0).
We see that the larger η is, the broader the weight distributions become, due to the synaptic diffusion
effects (Figure 4C). This result suggests that the weight diffusion effect causes undermatching
regardless of the particular form of the weight constraint, as long as the synaptic weights are confined
to a finite range, as predicted by our theory.
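A minimal simulation sketch of the hard-bound condition follows. The update rule here is a
deliberately simplified reward-modulated Hebbian step (its exact form is hypothetical and does not
reproduce Equations 8 and 11); it only illustrates how clipping confines the weights to
[0, J_max/√N] while update noise broadens their distribution.

import numpy as np

rng = np.random.default_rng(0)
N, eta, J_max = 500, 1.0, 5.0
bound = J_max / np.sqrt(N)
J = rng.uniform(0.0, bound, size=N)   # synaptic weights for one option

for _ in range(2000):
    x = rng.normal(1.0, 1.0, size=N)  # fluctuating sensory activity
    r = float(rng.random() < 0.3)     # binary reward, VI-like baiting probability
    J += (eta / N) * r * x            # illustrative reward-modulated Hebbian step
    np.clip(J, 0.0, bound, out=J)     # hard bound: weights reflect at the walls

print(J.mean(), J.std())  # a broad distribution develops inside [0, bound]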
Figure 4: Constraints on synaptic weights lead to undermatching behavior through synaptic diffusion
effects. (A) Choice probability for A as a function of the fractional income for A, for the RM rule with
weight normalization. We used VI schedules with reward rates 0.3a for option A and 0.3(1 − a) for
option B, varying the constant a (0 ≤ a ≤ 1). The results were obtained using stationary points of the
macroscopic equations. The diagonal line indicates perfect matching behavior. As the learning rate η
increases, the choice probabilities deviate from matching behavior towards unbiased random choice,
p_A = 0.5. (B) The same plot as (A) for the RM rule with the hard-bound condition (the synaptic
weights are restricted to the interval [0, J_max/√N], where J_max = 5.0), obtained by numerical
simulations. Simulations were done for N = 500. (C) The weight distribution after convergence for the
simulations in (B) indicated by the gray arrows.
6 Discussion
In this study, we analyzed a reward-based learning procedure in simple, large-scale decision-making
networks. To achieve this, we employed techniques from statistical mechanics. Although statistical
mechanical analysis has been applied successfully to the dynamics of learning in neural networks, to
the best of our knowledge we are the first to apply it to reward-modulated learning in decision-making
networks. We have assumed that the activities of sensory neurons are independent. In realistic cases,
there may be correlations among sensory neurons, and such correlations weaken the diffusion effects.
However, as long as there are independent fluctuations, as observed in many physiological studies, the
diffusion effects remain at play.
If only a single plastic synapse is taken into consideration, covariance learning rules seem to make
matching behavior a steady state of learning. However, when a large number of synapses
simultaneously modify their efficacies, matching behavior cannot be a steady state. This is because the
randomness in the weight modifications affects the choice probability of the network, and this effect
feeds back into the learning process. These results may offer guidance for discussing learning behavior
in large-scale neural circuits.
Choice behavior in many experiments deviates slightly from matching behavior toward unbiased
choice behavior, a phenomenon called undermatching [23, 2]. There are several possible explanations
for this phenomenon. The learning rule employed by Soltani & Wang [4] is equivalent to stateless
Q-learning in the reinforcement-learning literature [15]. Sakai & Fukai [5, 6] proved that Q-learning
does not lead to matching behavior; thus, Soltani and Wang's model is intrinsically incapable of
reproducing matching behavior. The authors interpreted the departure from matching behavior due to
limitations of the learning rule as a possible mechanism for undermatching. Loewenstein [7] suggested
that mistuning of the parameters in the covariance learning rule could cause undermatching. However,
we found that in some task settings such mistuning can cause overmatching rather than
undermatching [12]. Our findings in this study add one more possible mechanism for undermatching:
it can be caused by the diffusion of synaptic efficacies. The diffusion effect provides a robust
mechanism for undermatching, since it reproduces undermatching behavior regardless of the specific
task settings.
Achieving random choice behavior is thought to require fine-tuning of network parameters [16], and
yet random choice behavior is often observed in behavioral experiments. Our results suggest that the
broad distributions of synaptic weights observed in experiments [24] may make stochastic random
choice behavior easier to realize than previously thought.
References
[1] R. J. Herrnstein, H. Rachlin, and D. I. Laibson. The Matching Law. Russell Sage Foundation, New York, 1997.
[2] L. P. Sugrue, G. S. Corrado, and W. T. Newsome. Matching behavior and the representation of value in the parietal cortex. Science, 304(5678):1782–1787, 2004.
[3] Y. Loewenstein and H. S. Seung. Operant matching is a generic outcome of synaptic plasticity based on the covariance between reward and neural activity. Proceedings of the National Academy of Sciences, 103(41):15224–15229, 2006.
[4] A. Soltani and X. J. Wang. A biophysically based neural model of matching law behavior: melioration by stochastic synapses. Journal of Neuroscience, 26(14):3731–3744, 2006.
[5] Y. Sakai and T. Fukai. The actor-critic learning is behind the matching law: Matching versus optimal behaviors. Neural Computation, 20(1):227–251, 2008.
[6] Y. Sakai and T. Fukai. When does reward maximization lead to matching law? PLoS ONE, 3(11):e3795, 2008.
[7] Y. Loewenstein. Robustness of learning that is based on covariance-driven synaptic plasticity. PLoS Computational Biology, 4(3):e1000007, 2008.
[8] W. Kinzel and P. Rujan. Improving a network generalization ability by selecting examples. Europhysics Letters, 13(5):473–477, 1990.
[9] D. Saad. On-line Learning in Neural Networks. Cambridge University Press, 1998.
[10] G. Reents and R. Urbanczik. Self-averaging and on-line learning. Physical Review Letters, 80(24):5445–5448, 1998.
[11] M. Biehl, N. Caticha, and P. Riegler. Statistical mechanics of on-line learning. Similarity-Based Clustering, pages 1–22, 2009.
[12] K. Katahira, K. Okanoya, and M. Okada. Statistical mechanics of reward-modulated learning in decision making networks. Under review.
[13] X. J. Wang. Probabilistic decision making by slow reverberation in cortical circuits. Neuron, 36(5):955–968, 2002.
[14] C. van Vreeswijk and H. Sompolinsky. Chaotic balanced state in a model of cortical circuits. Neural Computation, 10(6):1321–1371, 1998.
[15] A. Soltani, D. Lee, and X. J. Wang. Neural mechanism for stochastic behaviour during a competitive game. Neural Networks, 19(8):1075–1090, 2006.
[16] S. Fusi, W. F. Asaad, E. K. Miller, and X. J. Wang. A neural circuit model of flexible sensorimotor mapping: learning and forgetting on multiple timescales. Neuron, 54(2):319–333, 2007.
[17] E. M. Izhikevich. Solving the distal reward problem through linkage of STDP and dopamine signaling. Cerebral Cortex, 17:2443–2452, 2007.
[18] R. V. Florian. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Computation, 19(6):1468–1502, 2007.
[19] M. A. Farries and A. L. Fairhall. Reinforcement learning with modulated spike timing dependent synaptic plasticity. Journal of Neurophysiology, 98(6):3648–3665, 2007.
[20] R. Legenstein, D. Pecevski, and W. Maass. A learning theory for reward-modulated spike-timing-dependent plasticity with application to biofeedback. PLoS Computational Biology, 4(10):e1000180, 2008.
[21] C. T. Law and J. I. Gold. Reinforcement learning can account for associative and perceptual learning on a visual-decision task. Nature Neuroscience, 12(5):655–663, 2009.
[22] M. Biehl. An exactly solvable model of unsupervised learning. Europhysics Letters, 25(5):391–396, 1994.
[23] W. M. Baum. On two types of deviation from the matching law: Bias and undermatching. Journal of the Experimental Analysis of Behavior, 22(1):231–242, 1974.
[24] B. Barbour, N. Brunel, V. Hakim, and J. P. Nadal. What can we learn from synaptic weight distributions? Trends in Neurosciences, 30(12):622–629, 2007.
From Speech Recognition to Spoken Language
Understanding: The Development of the MIT
SUMMIT and VOYAGER Systems
Victor Zue, James Glass, David Goodine, Lynette Hirschman,
Hong Leung, Michael Phillips, Joseph Polifroni, and Stephanie Seneff
Room NE43-601
Spoken Language Systems Group
Laboratory for Computer Science
Massachusetts Institute of Technology
Cambridge, MA 02139 U.S.A.
Abstract
Spoken language is one of the most natural, efficient, flexible, and economical means of communication among humans. As computers play an ever
increasing role in our lives, it is important that we address the issue of
providing a graceful human-machine interface through spoken language.
In this paper, we will describe our recent efforts in moving beyond the
scope of speech recognition into the realm of spoken-language understanding. Specifically, we report on the development of an urban navigation and
exploration system called VOYAGER, an application which we have used as
a basis for performing research in spoken-language understanding.
1 Introduction
Over the past decade, research in speech coding and synthesis has matured to the
extent that speech can now be transmitted efficiently and generated with high intelligibility. Spoken input to computers, however, has yet to pass the threshold of
practicality. Despite some recent successful demonstrations, current speech recognition systems typically fall far short of human capabilities of continuous speech
recognition with essentially unrestricted vocabulary and speakers, under adverse
acoustic environments. This is largely due to our incomplete knowledge of the encoding of linguistic information in the speech signal, and the inherent variabilities of
this process. Our approach to system development is to seek a good understanding
of human communication through spoken language, to capture the essential features
of the process in appropriate models, and to develop the necessary computational
framework to make use of these models for machine understanding.
Our research in spoken language system development is based on the premise that
many of the applications suitable for human/machine interaction using speech typically involve interactive problem solving. That is, in addition to converting the
speech signal to text, the computer must also understand the user's request, in
order to generate an appropriate response. As a result, we have focused our attention on three main issues. First, the system must operate in a realistic application
domain, where domain-specific information can be utilized to translate spoken input into appropriate actions. The use of a realistic application is also critical to
collecting data on how people would like to use machines to access information and
solve problems. Use of a constrained task also makes possible rigorous evaluations
of system performance. Second, and perhaps most importantly, the system must
integrate speech recognition and natural language technologies to achieve speech
understanding. Finally, the system must begin to deal with interactive speech,
where the computer is an active conversational participant, and where people produce spontaneous speech, including false starts, hesitations, etc.
In this paper, we will describe our recent efforts in developing a spoken language
interface for an urban navigation system (VOYAGER). We begin by describing our
overall system architecture, paying particular attention to the interface between
speech and natural language. We then describe the application domain and some
of the issues that arise in realistic interactive problem solving applications, particularly in terms of conversational interaction. Finally, we report results of some
performance evaluations we have made, using a spontaneous speech corpus we collected for this task.
2 System Architecture
Our spoken language system contains three important components. The
SUMMIT speech recognition system converts the speech signal into a set of word
hypotheses. The TINA natural language system interacts with the speech recognizer
in order to obtain a word string, as well as a linguistic interpretation of the utterance.
A control strategy mediates between the recognizer and the language understanding
component, using the language understanding constraints to help control the search
of the speech recognition system.
2.1 Continuous Speech Recognition: The SUMMIT System
The SUMMIT system (Zue et al., 1989) starts the recognition process by first transforming the speech signal into a representation that models some of the known
properties of the human auditory system (Seneff, 1988). Using the output of the
auditory model, acoustic landmarks of varying robustness are located and embedded in a hierarchical structure called a dendrogram (Glass, 1988). The acoustic
segments in the dendrogram are then mapped to phoneme hypotheses, using a set
of automatically determined acoustic attributes in conjunction with conventional
pattern recognition algorithms. The result is a phoneme network, in which each
arc is characterized by a vector of probabilities for all the possible candidates. Recently, we have begun to experiment with the use of artificial neural nets for phonetic classification. To date, we have been able to improve the system's classification
performance by over 5% (Leung and Zue, 1990).
Words in the lexicon are represented as pronunciation networks, which are generated
automatically by a set of phonological rules (Zue et al., 1990). Weights derived
from training data are assigned to each arc, using a corrective training procedure,
to reflect the likelihood of a particular pronunciation. Presently, lexical decoding
is accomplished by using the Viterbi algorithm to find the best path that matches
the acoustic-phonetic network with the lexical network.
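The Viterbi step can be pictured with a generic best-path recursion over a scored lattice. The sketch
below is a simplification under assumed inputs (dense log-probability matrices); SUMMIT's actual
phonetic and lexical networks are more structured than this.

import numpy as np

def viterbi(log_trans, log_obs):
    # log_trans[i, j]: score of moving from state i to state j
    # log_obs[t, j]:   score of state j at frame t
    T, S = log_obs.shape
    score = log_obs[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans   # all one-step extensions
        back[t] = cand.argmax(axis=0)       # best predecessor for each state
        score = cand.max(axis=0) + log_obs[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):           # trace the best path backwards
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(score.max())

log_trans = np.log(np.array([[0.7, 0.3], [0.4, 0.6]]))
log_obs = np.log(np.array([[0.9, 0.1], [0.2, 0.8], [0.3, 0.7]]))
print(viterbi(log_trans, log_obs))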
2.2 Natural Language Processing: The TINA System
In a spoken language system, the natural language component should perform two
critical functions: 1) provide constraint for the recognizer component, and 2) provide an interpretation of the meaning of the sentence to the back-end. Our natural
language system, TINA, was specifically designed to meet these two needs. TINA
is a probabilistic parser which operates top-down, using an agenda-based control
strategy which favors the most likely analyses. The basic design of TINA has been
described elsewhere (Seneff, 1989), but will be briefly reviewed. The grammar is
entered as a set of simple context-free rules which are automatically converted to
a shared network structure. The nodes in the network are augmented with constraint filters (both syntactic and semantic) that operate only on locally available
parameters. All arcs in the network are associated with probabilities, acquired
automatically from a set of training sentences. Note that the probabilities are established not on the rule productions but rather on arcs connecting sibling pairs
in a shared structure for a number of linked rules. The effect of such pooling
is essentially a hierarchical bigram model. We believe this mechanism offers the
capability of generating probabilities in a reasonable way by sharing counts on syntactically/semantically identical units in differing structural environments.
2.3 Control Strategy
The current interface between the SUMMIT speech recognition system and the TINA
natural language system uses an N-best algorithm (Chow and Schwartz, 1989;
Soong and Huang, 1990; Zue et al., 1990), in which the recognizer can propose its
best N complete sentence hypotheses one by one, stopping with the first sentence
that is successfully analyzed by the natural language component TINA. In this case,
TINA acts as a filter on whole sentence hypotheses.
In order to produce N -best hypotheses, we use a search strategy that involves an
initial Viterbi search all the way to the end of the sentence, to provide a "best"
hypothesis, followed by an A? search to produce next-best hypotheses in turn,
provided that the first hypothesis failed to parse. If all hypotheses fail to parse the
system produces the rejection message, "I'm sorry but I didn't understand you."
Even with the parser acting as a filter of whole-sentence hypotheses, it is appropriate
to also provide the recognizer with an inexpensive language model that can partially
constrain the theories. This is currently done with a word-pair language model, in
which each word in the vocabulary is associated with a list of words that could
possibly follow that word anywhere in the sentence.
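Operationally, this control strategy is a short loop. The sketch below uses hypothetical
recognizer.nbest() and parser.parse() interfaces, which stand in for SUMMIT and TINA; they are
not the actual system APIs.

def understand(recognizer, parser, waveform, n_max=100):
    # N-best filtering: accept the first sentence hypothesis that parses.
    for rank, hyp in enumerate(recognizer.nbest(waveform, n_max), start=1):
        interpretation = parser.parse(hyp)   # assumed to return None on failure
        if interpretation is not None:
            return hyp, interpretation, rank
    return None  # back-off: "I'm sorry but I didn't understand you."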
3 The VOYAGER Application Domain
VOYAGER is an urban navigation and exploration system that enables the user to
ask about places of interest and obtain directions. It has been under development
since early 1989 (Zue et al., 1989; Zue et al., 1990). In this section, we describe the
application domain, the interface between our language understanding system TINA
and the application back-end, and the discourse capabilities of the current system.
3.1 Domain Description
For our first attempt at exploring issues related to a fully-interactive spoken-language system, we selected a task in which the system knows about the physical
environment of a specific geographical area and can provide assistance on how to
get from one location to another within this area. The system, which we call VOYAGER, can also provide information concerning certain objects located inside this
area. The current version of VOYAGER focuses on the geographic area of the city of
Cambridge, MA between MIT and Harvard University.
The application database is an enhanced version of the Direction Assistance program developed at the Media Laboratory at MIT (Davis and Trobaugh, 1987). It
consists of a map database, including the locations of various classes of objects
(streets, buildings, rivers) and properties of these objects (address, phone number,
etc.) The application supports a set of retrieval functions to access these data.
The application must convert the semantic representation of TINA into the appropriate function call to the VOYAGER back-end. The answer is given to the user in
three forms. It is graphically displayed on a map, with the object(s) of interest
highlighted. In addition, a textual answer is printed on the screen, and is also spoken verbally using synthesized speech. The current implementation handles various
types of queries, such as the location of objects, simple properties of objects, how to
get from one place to another, and the distance and time for travel between objects.
3.2 Application Interface to VOYAGER
Once an utterance has been processed by the language understanding system, it
is passed to an interface component which constructs a command function from
the natural language representation. This function is subsequently passed to the
back-end where a response is generated. There are three function types used in
the current command framework of VOYAGER, which we will illustrate with the
following example:
Query:      Where is the nearest bank to MIT?
Function:   (LOCATE (NEAREST (BANK nil) (SCHOOL "MIT")))
LOCATE is an example of a major function that determines the primary action to
be performed by the command. It shows the physical location of an object or set
of objects on the map. Functions such as BANK and SCHOOL in the above example
access the database to return an object or a set of objects. When null arguments
are provided, all possible candidates are returned from the database. Thus, for
example, (SCHOOL "MIT") and (BANK nil) will return the object MIT and all
known banks, respectively. Finally, there are a number of functions in VOYAGER
that act as filters, whereby the subset that fulfills some requirement is returned.
The function (NEAREST X Y), for example, returns the object in the set X that is
closest to the object Y. These filter functions can be nested, so that they can quite
easily construct a complicated object. For example, "the Chinese restaurant on
Main Street nearest to the hotel in Harvard Square that is closest to City Hall"
would be represented by,
(NEAREST
  (ON-STREET
    (SERVE (RESTAURANT nil) "Chinese")
    (STREET "Main" "Street"))
  (NEAREST
    (IN-REGION (HOTEL nil) (SQUARE "Harvard"))
    (PUBLIC-BUILDING "City Hall")))
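To make the composition concrete, here is a sketch of such set-valued retrieval and filter functions in
Python, over an invented toy database; the actual back-end functions are the Lisp-style ones shown
above.

import math

DB = [  # toy stand-in for the map database
    {"kind": "bank", "name": "BayBank", "pos": (2.0, 1.0)},
    {"kind": "bank", "name": "Cambridge Trust", "pos": (5.0, 4.0)},
    {"kind": "school", "name": "MIT", "pos": (4.0, 4.0)},
]

def objects(kind, name=None):
    # A null name returns all candidates of the class, as in (BANK nil).
    return [o for o in DB if o["kind"] == kind and name in (None, o["name"])]

def nearest(xs, ys):
    # (NEAREST X Y): the object in set X closest to the single object in Y.
    (y,) = ys
    return [min(xs, key=lambda o: math.dist(o["pos"], y["pos"]))]

# "Where is the nearest bank to MIT?"
print(nearest(objects("bank"), objects("school", "MIT")))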
3.3 Discourse Capabilities
Carrying on a conversation requires the use of context and discourse history. Without context, some user input may appear underspecified, vague or even ill-formed.
However, in context, these queries are generally easily understood. The discourse
capabilities of the current VOYAGER system are simplistic but nonetheless effective
in handling the majority of the interactions within the designated task. We describe
briefly how a discourse history is maintained, and how the system keeps track of
incomplete requests, querying the user for more information as needed to fill in
ambiguous material.
Two slots are reserved for discourse history. The first slot refers to the location
of the user, which can be set during the course of the conversation and then later
referred to. The second slot refers to the most recently referenced set of objects.
This slot can be a single object, a set of objects, or two separate objects in the
case where the previous command involved a calculation involving both a source
and a destination. With these slots, the system can process queries that include
pronominal reference as in "What is their address?" or "How far is it from here?"
VOYAGER can also handle underspecified or vague queries, in which a function argument has either no value or multiple values. Examples of such queries would be
"How far is a bank?" or "How far is MIT?" when no [FROM-LOCATION] has been
specified. VOYAGER points out such underspecification to the user, by asking for
specific clarification. The underspecified command is also pushed onto a stack of
incompletely specified commands. When the user provides additional information
that is evaluated successfully, the top command in the stack is popped for reevaluation. If the additional information is not sufficient to resolve the original command,
the command is again pushed onto the stack, with the new information incorporated. A protection mechanism automatically clears the history stack whenever the
user abandons a line of discussion before all underspecified queries are clarified.
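A minimal sketch of this discourse bookkeeping follows; the class and its method names are
hypothetical, chosen only to mirror the two history slots and the stack of pending commands described
above.

class Discourse:
    def __init__(self):
        self.user_location = None  # slot 1: where the user currently is
        self.referents = None      # slot 2: most recently referenced object(s)
        self.pending = []          # stack of underspecified commands

    def request_clarification(self, command, missing_slot):
        self.pending.append((command, missing_slot))
        return "Clarification needed: " + missing_slot

    def add_information(self, value, slot_name):
        # Fill one slot of the top pending command and re-evaluate it.
        if not self.pending:
            return None
        command, _ = self.pending.pop()
        command[slot_name] = value
        missing = [k for k, v in command.items() if v is None]
        if missing:                                  # still underspecified
            self.pending.append((command, missing[0]))
            return None
        return command

    def abandon_topic(self):
        self.pending.clear()  # protection mechanism: drop stale requests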
4 Performance Evaluation
In this section, we describe our experience with performance evaluation of spoken
language systems. The version of VOYAGER that we evaluated has a vocabulary of
350 words. The word-pair language model for the speech recognition sub-system has
a perplexity of 72. For the N-best algorithm, the number of sentence hypotheses
was arbitrarily set at 100. The system was implemented on a SUN-4, using four
commercially available signal processing boards. This configuration processes
an utterance in 3 to 5 times real time.
The system was trained and tested using a corpus of spontaneous speech recorded
from 50 male and 50 female subjects (Zue et al., 1989). We arbitrarily designated the
data from 70 speakers, equally divided between male and female, to be the training
set. Data from 20 of the remaining speakers were designated as the development
set. The test set consisted of 485 utterances generated by the remaining 5 male and
5 female subjects. The average number of words per sentence was 7.7.
VOYAGER generated an action for 51.7% of the sentences in the test set. The
system failed to generate a parse on the remaining 48.3% of the sentences, either
due to recognizer errors, unknown words, unseen linguistic structures, or back-end
inadequacy. Specifically, 20.3% failed to generate an action due to recognition errors
or the system's inability to deal with spontaneous speech phenomena, 17.2% were
found to contain unknown words, and an additional 10.5% would not have parsed
even if recognized correctly. VOYAGER almost never failed to provide a response
once a parse had been generated. This is a direct result of our conscious decision
to constrain TINA according to the capabilities of the back-end. Although 48.3% of
the sentences were judged to be incorrect, only 13% generated the wrong response.
For the remainder of the errors, the system responded with the message, "I'm sorry
but I didn't understand you."
Finally, we solicited judgments from three naive subjects who had had no previous
experience with VOYAGER to assess the capabilities of the back-end. About 80% of
the responses were judged to be appropriate, with an additional 5% being verbose
but otherwise correct. Only about 4% of the sentences produced diagnostic error
messages, for which the system was judged to give an appropriate response about
two thirds of the time. The response was judged incorrect about 10% of the time.
The subjects judged about 87% of the user queries to be reasonable.
5 Summary
This paper summarizes the status of our recent efforts in spoken language system development. It is clear that spoken language systems will incorporate research from,
and provide a useful testbed for a variety of disciplines including speech, natural
language processing, knowledge acquisition, databases, expert systems, and human
factors. In the near term our plans include improving the phonetic recognition
accuracy of SUMMIT by incorporating context-dependent models, and investigating
control strategies which more fully integrate our speech recognition and natural
language components.
Acknowledgements
This research was supported by DARPA under Contract N00014-89-J-1332, monitored through the Office of Naval Research.
References
Chow, Y., and R. Schwartz, (1989) "The N-Best Algorithm: An Efficient Procedure for Finding Top N Sentence Hypotheses", Proc. DARPA Speech and Natural
Language Workshop, pp. 199-202, October.
Davis, J.R. and T. F. Trobaugh, (1987) "Back Seat Driver," Technical Report 1,
MIT Media Laboratory Speech Group, December.
Glass, J. R., (1988) "Finding Acoustic Regularities in Speech: Applications to
Phonetic Recognition," Ph.D. thesis, Massachusetts Institute of Technology, May.
Leung, H., and V. Zue, (1990) "Phonetic Classification Using Multi-Layer Perceptrons," Proc. ICASSP-90, pp. 525-528, Albuquerque, NM.
Seneff, S., (1988) "A Joint Synchrony/Mean-Rate Model of Auditory Speech Processing," J. of Phonetics, vol. 16, pp. 55-76, January.
Seneff, S. (1989) "TINA: A Probabilistic Syntactic Parser for Speech Understanding
Systems," Proc. DARPA Speech and Natural Language Workshop, pp. 168-178,
February.
Soong, F., and E. Huang, (1990) "A Tree-Trellis Based Fast Search for Finding the
N-best Sentence Hypotheses in Continuous Speech Recognition", Proc. DARPA
Speech and Natural Language Workshop, pp. 199-202, June.
Zue, V., J. Glass, M. Phillips, and S. Seneff, (1989) "Acoustic Segmentation and
Phonetic Classification in the SUMMIT System," Proc. ICASSP-89, pp. 389-392,
Glasgow, Scotland.
Zue, V., J. Glass, D. Goodine, H. Leung, M. Phillips, J. Polifroni, and S. Seneff,
(1989) "The VOYAGER Speech Understanding System: A Progress Report," Proc.
DARPA Speech and Natural Language Workshop, pp. 51-59, October.
Zue, V., N. Daly, J. Glass, D. Goodine, H. Leung, M. Phillips, J. Polifroni, S. Seneff,
and M. Soelof, (1989) "The Collection and Preliminary Analysis of a Spontaneous
Speech Database," Proc. DARPA Speech and Natural Language Workshop, pp.
126-134, October.
Zue, V., J. Glass, D. Goodine, M. Phillips, and S. Seneff, (1990) "The SUMMIT
Speech Recognition System: Phonological Modelling and Lexical Access," Proc.
ICASSP-90, pp. 49-52, Albuquerque, NM.
Zue, V., J. Glass, D. Goodine, H. Leung, M. Phillips, J. Polifroni, and S. Seneff,
(1990) "The VOYAGER Speech Understanding System: Preliminary Development
and Evaluation," Proc. ICASSP-90, pp. 73-76, Albuquerque, NM.
Zue, V., J. Glass, D. Goodine, H. Leung, M. Phillips, J. Polifroni, and S. Seneff,
(1990) "Recent Progress on the VOYAGER System," Proc. DARPA Speech and
Natural Language Workshop, pp. 206-211, June.
Layered Image Motion with Explicit Occlusions,
Temporal Consistency, and Depth Ordering
Deqing Sun, Erik B. Sudderth, and Michael J. Black
Department of Computer Science, Brown University
{dqsun,sudderth,black}@cs.brown.edu
Abstract
Layered models are a powerful way of describing natural scenes containing
smooth surfaces that may overlap and occlude each other. For image motion estimation, such models have a long history but have not achieved the wide use or
accuracy of non-layered methods. We present a new probabilistic model of optical
flow in layers that addresses many of the shortcomings of previous approaches. In
particular, we define a probabilistic graphical model that explicitly captures: 1)
occlusions and disocclusions; 2) depth ordering of the layers; 3) temporal consistency of the layer segmentation. Additionally the optical flow in each layer is
modeled by a combination of a parametric model and a smooth deviation based
on an MRF with a robust spatial prior; the resulting model allows roughness in
layers. Finally, a key contribution is the formulation of the layers using an imagedependent hidden field prior based on recent models for static scene segmentation.
The method achieves state-of-the-art results on the Middlebury benchmark and
produces meaningful scene segmentations as well as detected occlusion regions.
1 Introduction
Layered models of scenes offer significant benefits for optical flow estimation [8, 11, 25]. Splitting
the scene into layers enables the motion in each layer to be defined more simply, and the estimation
of motion boundaries to be separated from the problem of smooth flow estimation. Layered models
also make reasoning about occlusion relationships easier. In practice, however, none of the current
top performing optical flow methods use a layered approach [2]. The most accurate approaches
are single-layered, and instead use some form of robust smoothness assumption to cope with flow
discontinuities [5]. This paper formulates a new probabilistic, layered motion model that addresses
the key problems of previous layered approaches. At the time of writing, it achieves the lowest
average error of all tested approaches on the Middlebury optical flow benchmark [2]. In particular,
the accuracy at occlusion boundaries is significantly better than previous methods. By segmenting
the observed scene, our model also identifies occluded and disoccluded regions.
Layered models provide a segmentation of the scene and this segmentation, because it corresponds
to scene structure, should persist over time. However, this persistence is not a benefit if one is only
computing flow between two frames; this is one reason that multi-layer models have not surpassed
their single-layer competitors on two-frame benchmarks. Without loss of generality, here we use
three-frame sequences to illustrate our method. In practice, these three frames can be constructed
from an image pair by computing both the forward and backward flow. The key is that this gives
two segmentations of the scene, one at each time instant, both of which must be consistent with the
flow. We formulate this temporal layer consistency probabilistically. Note that the assumption of
temporal layer consistency is much more realistic than previous assumptions of temporal motion
consistency [4]; while the scene motion can change rapidly, scene structure persists.
1
One of the main motivations for layered models is that, conditioned on the segmentation into layers,
each layer can employ affine, planar, or other strong models of optical flow. By applying a single
smooth motion across the entire layer, these models combine information over long distances and
interpolate behind occlusions. Such rigid parametric assumptions, however, are too restrictive for
real scenes. Instead one can model the flow within each layer as smoothly varying [26]. While
the resulting model is more flexible than traditional parametric models, we find that it is still not as
accurate as robust single-layer models. Consequently, we formulate a hybrid model that combines a
base affine motion with a robust Markov random field (MRF) model of deformations from affine [6].
This roughness in layers model, which is similar in spirit to work on plane+parallax [10, 14, 19],
encourages smooth flow within layers but allows significant local deviations.
Because layers are temporally persistent, it is also possible to reason about their relative depth ordering. In general, reliable recovery of depth order requires three or more frames. Our probabilistic
formulation explicitly orders layers by depth, and we show that the correct order typically produces
more probable (lower energy) solutions. This also allows explicit reasoning about occlusions, which
our model predicts at locations where the layer segmentations for consecutive frames disagree.
Many previous layered approaches are not truly "layered": while they segment the image into multiple regions with distinct motions, they do not model what is in front of what. For example, widely
used MRF models [27] encourage neighboring pixels to occupy the same region, but do not capture
relationships between regions. In contrast, building on recent state-of-the-art results in static scene
segmentation [21], our model determines layer support via an ordered sequence of occluding binary
masks. These binary masks are generated by thresholding a series of random, continuous functions.
This approach uses image-dependent Gaussian random field priors and favors partitions which accurately match the statistics of real scenes [21]. Moreover, the continuous layer support functions
play a key role in accurately modeling temporal layer consistency. The resulting model produces
accurate layer segmentations that improve flow accuracy at occlusion boundaries, and recover meaningful scene structure.
As summarized in Figure 1, our method is based on a principled, probabilistic generative model
for image sequences. By combining recent advances in dense flow estimation and natural image
segmentation, we develop an algorithm that simultaneously estimates accurate flow fields, detects
occlusions and disocclusions, and recovers the layered structure of realistic scenes.
2 Previous Work
Layered approaches to motion estimation have long been seen as elegant and promising, since spatial
smoothness is separated from the modeling of discontinuities and occlusions. Darrell and Pentland
[7, 8] provide the first full approach that incorporates a Bayesian model, ?support maps? for segmentation, and robust statistics. Wang and Adelson [25] clearly motivate layered models of image
sequences, while Jepson and Black [11] formalize the problem using probabilistic mixture models.
A full review of more recent methods is beyond our scope [1, 3, 12, 13, 16, 17, 20, 24, 27, 29].
Early methods, which use simple parametric models of image motion within layers, are not highly
accurate. Observing that rigid parametric models are too restrictive for real scenes, Weiss [26] uses a
more flexible Gaussian process to describe the motion within each layer. Even using modern implementation methods [22] this approach does not achieve state-of-the-art results. Allocating a separate
layer for every small surface discontinuity is impractical and fails to capture important global scene
structure. Our approach, which allows ?roughness? within layers rather than ?smoothness,? provides
a compromise that captures coarse scene structure as well as fine within-layer details.
One key advantage of layered models is their ability to realistically model occlusion boundaries.
To do this properly, however, one must know the relative depth order of the surfaces. Performing
inference over the combinatorial range of possible occlusion relationships is challenging and, consequently, only a few layered flow models explicitly encode relative depth [12, 30]. Recent work
revisits the layered model to handle occlusions [9], but does not explicitly model the layer ordering
or achieve state-of-the-art performance on the Middlebury benchmark. While most current optical
flow methods are "two-frame," layered methods naturally extend to longer sequences [12, 29, 30].
Layered models all have some way of making either a hard or soft assignment of pixels to layers.
Weiss and Adelson [27] introduce spatial coherence to these layer assignments using a spatial MRF
2
Figure 1: Left: Graphical representation for the proposed layered model (nodes include the hidden
support fields g_{tk} and g_{t+1,k} for the K−1 foreground layers, the segmentation masks s_{tk} and
s_{t+1,k}, the flow fields u_{tk} and v_{tk} for the K layers, and the images I_t and I_{t+1}). Right:
Illustration of variables from the graphical model for the "Schefflera" sequence. Labeled sub-images
correspond to nodes in the graph. The left column shows the flow fields for three layers, color coded as
in [2]. The g and s images illustrate the reasoning about layer ownership (see text). The composite flow
field (u, v) and layer labels (k) are also shown.
model. However, the Ising/Potts MRF they employ assigns low probability to typical segmentations
of natural scenes [15]. Adapting recent work on static image segmentation by Sudderth and Jordan [21], we instead generate spatially coherent, ordered layers by thresholding a series of random
continuous functions. As in the single-image case, this approach realistically models the size and
shape properties of real scenes. For motion estimation there are additional advantages: it allows
accurate reasoning about occlusion relationships and modeling of temporal layer consistency.
3 A Layered Motion Model
Building on this long sequence of prior work, our generative model of layered image motion is
summarized in Figure 1. Below we describe how the generative model captures piecewise smooth
deviation of the layer motion from parametric models (Sec. 3.1), depth ordering and temporal consistency of layers (Sec. 3.2), and regions of occlusion and disocclusion (Sec. 3.3).
3.1 Roughness in Layers
Our approach is inspired by Weiss's model of smoothness in layers [26]. Given a sequence of
images I_t, 1 ≤ t ≤ T, we model the evolution from the current frame I_t to the subsequent frame
I_{t+1} via K locally smooth, but potentially globally complex, flow fields. Let u_{tk} and v_{tk} denote
the horizontal and vertical flow fields, respectively, for layer k at time t. The corresponding flow
vector for pixel (i, j) is then denoted by (u_{tk}^{ij}, v_{tk}^{ij}).
Each layer's flow field is drawn from a distribution chosen to encourage piecewise smooth motion.
For example, a pairwise Markov random field (MRF) would model the horizontal flow field as

p(u_{tk}) \propto \exp\{-E_{\mathrm{mrf}}(u_{tk})\} = \exp\Big\{-\frac{1}{2} \sum_{(i,j)} \sum_{(i',j') \in \Gamma(i,j)} \rho_s\big(u_{tk}^{ij} - u_{tk}^{i'j'}\big)\Big\}.   (1)

Here, Γ(i, j) is the set of neighbors of pixel (i, j), often its four nearest neighbors. The potential
ρ_s(·) is some robust function [5] that encourages smoothness, but allows occasional significant
deviations from it. The vertical flow field v_{tk} can then be modeled via an independent MRF prior as
in Eq. (1), as justified by the statistics of natural flow fields [18].
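For concreteness, the smoothness energy of Eq. (1) can be evaluated as below, using a Charbonnier
penalty as one common choice of the robust function ρ_s; the particular ρ_s used by the system is not
fixed by Eq. (1) itself.

import numpy as np

def charbonnier(x, eps=1e-3):
    return np.sqrt(x**2 + eps**2)   # differentiable robust penalty, roughly |x|

def E_mrf(u):
    # Pairwise smoothness energy of Eq. (1) over a 4-connected neighborhood.
    dx = charbonnier(u[:, 1:] - u[:, :-1])   # horizontal neighbor differences
    dy = charbonnier(u[1:, :] - u[:-1, :])   # vertical neighbor differences
    return float(dx.sum() + dy.sum())

u = np.zeros((8, 8))
u[:, 4:] = 1.0        # a flow field with one sharp motion discontinuity
print(E_mrf(u))       # the edge costs roughly |1| per crossing pair, not |1|^2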
While such MRF priors are flexible, they capture very little dependence between pixels separated by
even moderate image distances. In contrast, real scenes exhibit coherent motion over large scales,
due to the motion of (partially) rigid objects in the world. To capture this, we associate an affine (or
planar) motion model, with parameters θ_{tk}, to each layer k. We then use an MRF to allow piecewise
smooth deformations from the globally rigid assumptions of affine motion:

E_{\mathrm{aff}}(u_{tk}, \theta_{tk}) = \frac{1}{2} \sum_{(i,j)} \sum_{(i',j') \in \Gamma(i,j)} \rho_s\Big(\big(u_{tk}^{ij} - \hat{u}_{\theta_{tk}}^{ij}\big) - \big(u_{tk}^{i'j'} - \hat{u}_{\theta_{tk}}^{i'j'}\big)\Big).   (2)

Here, û_{θ_{tk}}^{ij} denotes the horizontal motion predicted for pixel (i, j) by an affine model with
parameters θ_{tk}. Unlike classical models that assume layers are globally well fit by a single affine
motion [6, 25], this prior allows significant, locally smooth deviations from rigidity. Unlike the basic smoothness prior of Eq. (1), this semiparametric construction allows effective global reasoning
about non-contiguous segments of partially occluded objects. More sophisticated flow deformation
priors may also be used, such as those based on robust non-local terms [22, 28].
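A sketch of the semiparametric prior of Eq. (2): an affine model predicts a flow field from its parameters, and the robust pairwise penalty is applied to the residual rather than to the flow itself. The affine parameterization and the penalty constants below are illustrative assumptions.

```python
import numpy as np

def charbonnier(x, eps=1e-3, a=0.45):
    return (x**2 + eps**2) ** a

def pairwise(f):
    # Robust penalty over 4-connected neighbor differences of a field f.
    return charbonnier(f[:, 1:] - f[:, :-1]).sum() + charbonnier(f[1:, :] - f[:-1, :]).sum()

def affine_flow(theta, H, W):
    # Horizontal flow predicted by an affine model: u(x, y) = t0 + t1*x + t2*y.
    t0, t1, t2 = theta
    ys, xs = np.mgrid[0:H, 0:W].astype(float)
    return t0 + t1 * xs + t2 * ys

def e_aff(u, theta):
    # Eq. (2): penalize neighbor differences of the deviation u - u_theta, so the
    # field may drift far from the affine prediction as long as it does so smoothly.
    return pairwise(u - affine_flow(theta, *u.shape))

u = affine_flow((0.5, 0.01, -0.02), 24, 24) + 0.05 * np.random.randn(24, 24)
print(e_aff(u, (0.5, 0.01, -0.02)))
```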
3.2 Layer Support and Spatial Contiguity
The support for whether or not a pixel belongs to a given layer $k$ is defined using a hidden random field $g_k$. We associate each of the first $K-1$ layers at time $t$ with a random continuous function $g_{tk}$, defined over the same domain as the image. This hidden support field is illustrated in Figure 1. We assume a single, unique layer is observable at each location and that the observed motion of that pixel is determined by its assigned layer. Analogous to level set representations, the discrete support of each layer is determined by thresholding $g_{tk}$: pixel $(i,j)$ is considered visible when $g_{tk}(i,j) \ge 0$. Let $s_{tk}(i,j)$ equal one if layer $k$ is visible at pixel $(i,j)$, and zero otherwise; note that $\sum_k s_{tk}(i,j) = 1$. For pixels $(i,j)$ for which $g_{tk}(i,j) < 0$, we necessarily have $s_{tk}(i,j) = 0$.
We define the layers to be ordered with respect to the camera, so that layer $k$ occludes layers $k' > k$. Given the full set of support functions $g_{tk}$, the unique layer $k_{t*}^{ij}$ for which $s_{t k_{t*}^{ij}}(i,j) = 1$ is then
$$k_{t*}^{ij} = \min\big( \{k \mid 1 \le k \le K-1,\ g_{tk}(i,j) \ge 0\} \cup \{K\} \big). \quad (3)$$
Note that layer $K$ is essentially a background layer that captures all pixels not assigned to the first $K-1$ layers. For this reason, only $K-1$ hidden fields $g_{tk}$ are needed (see Figure 1).
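The thresholding rule of Eq. (3) can be implemented directly; the sketch below assigns each pixel to the front-most layer whose support is non-negative, defaulting to the background layer.

```python
import numpy as np

def layer_assignment(g):
    # g: (K-1, H, W) support fields for the first K-1 layers, ordered front to back.
    # Returns k* in {0, ..., K-1}: Eq. (3) with 0-based indices, where index K-1
    # plays the role of the background layer K.
    K_minus_1, H, W = g.shape
    k_star = np.full((H, W), K_minus_1)      # default: background layer
    for k in range(K_minus_1 - 1, -1, -1):   # sweep back to front so nearer layers win
        k_star[g[k] >= 0] = k
    return k_star

g = np.random.randn(3, 4, 4)                 # K = 4 layers -> 3 hidden fields
print(layer_assignment(g))
```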
Our use of thresholded, random continuous functions to define layer support is partially motivated by known shortcomings of discrete Ising/Potts MRF models for image partitions [15]. They also provide a convenient framework for modeling the temporal and spatial coherence observed in real motion sequences. Spatial coherence is captured via a Gaussian conditional random field in which edge weights are modulated by local differences in Lab color vectors, $I_t^c(i,j)$:
$$E_{\mathrm{space}}(g_{tk}) = \frac{1}{2} \sum_{(i,j)} \sum_{(i',j') \in \Gamma(i,j)} w_{i'j'}^{ij} \big(g_{tk}(i,j) - g_{tk}(i',j')\big)^2, \quad (4)$$
$$w_{i'j'}^{ij} = \max\Big( \exp\Big\{ -\frac{1}{2\sigma_c^2} \big\|I_t^c(i,j) - I_t^c(i',j')\big\|^2 \Big\},\ \delta_c \Big). \quad (5)$$
The threshold $\delta_c > 0$ adds robustness to large color changes in internal object texture. Temporal coherence of surfaces is then encouraged via a corresponding Gaussian MRF:
$$E_{\mathrm{time}}(g_{tk}, g_{t+1,k}, u_{tk}, v_{tk}) = \sum_{(i,j)} \big(g_{tk}(i,j) - g_{t+1,k}(i + u_{tk}^{ij},\ j + v_{tk}^{ij})\big)^2. \quad (6)$$
Critically, this energy function uses the corresponding flow field to non-rigidly align the layers at subsequent frames. By allowing smooth deformation of the support functions $g_{tk}$, we allow layer support to evolve over time, as opposed to transforming a single rigid template [12].
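A sketch of both coherence terms follows. Eqs. (4)-(5) are shown for horizontal neighbor pairs only (vertical pairs are analogous), and Eq. (6) uses a nearest-neighbor warp for clarity. Mapping the Sec. 5 parameter values ($\sigma_i = 12$, $\delta_c = 0.004$) onto the constants of Eq. (5) is an assumption made here for illustration.

```python
import numpy as np

def e_space_horizontal(g, I_lab, sigma_c=12.0, delta_c=0.004):
    # Eq. (5): edge weights decay with Lab color contrast, floored at delta_c.
    dI2 = np.sum((I_lab[:, 1:, :] - I_lab[:, :-1, :]) ** 2, axis=-1)
    w = np.maximum(np.exp(-dI2 / (2.0 * sigma_c**2)), delta_c)
    # Eq. (4): weighted squared differences; each unordered pair counted once.
    return np.sum(w * (g[:, 1:] - g[:, :-1]) ** 2)

def e_time(g_t, g_t1, u, v):
    # Eq. (6): compare g_t to the next frame's support, warped by the layer's flow
    # (rounded to the nearest pixel for simplicity).
    H, W = g_t.shape
    ys, xs = np.mgrid[0:H, 0:W]
    xw = np.clip(np.rint(xs + u).astype(int), 0, W - 1)
    yw = np.clip(np.rint(ys + v).astype(int), 0, H - 1)
    return np.sum((g_t - g_t1[yw, xw]) ** 2)

g = np.random.randn(8, 8)
I_lab = np.random.rand(8, 8, 3) * 100.0
print(e_space_horizontal(g, I_lab), e_time(g, g, np.zeros((8, 8)), np.zeros((8, 8))))
```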
Our model of layer coherence is inspired by a recent method for image segmentation, based on
spatially dependent Pitman-Yor processes [21]. That work makes connections between layered occlusion processes and stick breaking representations of nonparametric Bayesian models. By assigning appropriate stochastic priors to layer thresholds, the Pitman-Yor model captures the power
law statistics of natural scene partitions and infers an appropriate number of segments for each
image. Existing optical flow benchmarks employ artificially constructed scenes that may have different layer-level statistics. Consequently our experiments in this paper employ a fixed number of
layers K.
3.3 Depth Ordering and Occlusion Reasoning
The preceding generative process defines a set of $K$ ordered layers, with corresponding flow fields $u_{tk}, v_{tk}$ and segmentation masks $s_{tk}$. Recall that the layer assignment masks $s$ are a deterministic function (threshold) of the underlying continuous layer support functions $g$ (see Eq. (3)). To consistently reason about occlusions, we examine the layer assignments $s_{tk}(i,j)$ and $s_{t+1,k}(i + u_{tk}^{ij},\ j + v_{tk}^{ij})$ at locations corresponded by the underlying flow fields. This leads to a far richer occlusion model than standard spatially independent outlier processes: geometric consistency is enforced via the layered sequence of flow fields.
Let $I_t^s(i,j)$ denote an observed image feature for pixel $(i,j)$; we work with a filtered version of the intensity images to provide some invariance to illumination changes. If $s_{tk}(i,j) = s_{t+1,k}(i + u_{tk}^{ij},\ j + v_{tk}^{ij}) = 1$, the visible layer for pixel $(i,j)$ at time $t$ remains unoccluded at time $t+1$, and the image observations are modeled using a standard brightness (or, here, feature) constancy assumption. Otherwise, that pixel has become occluded, and is instead generated
from a uniform distribution. The image likelihood model can then be written as
p(Ist | Ist+1 , ut , vt , gt , gt+1 ) ? exp{?Edata (ut , vt , gt , gt+1 )}
n XX
ij
ij
ij
?d (Ist (i, j) ? Ist+1 (i + uij
= exp ?
tk , j + vtk ))stk (i, j)st+1,k (i + utk , j + vtk )
k (i,j)
o
ij
+ ?d stk (i, j)(1 ? st+1,k (i + uij
tk , j + vtk ))
where ?d (?) is a robust potential function and the constant ?d arises from the difference of the log
normalization constants for the robust and uniform distributions. With algebraic simplifications, the
data error term can be written as
Edata (ut , vt , gt , gt+1 ) =
XX
ij
ij
stk (i, j)st+1,k (i + uij
?d (Ist (i, j) ? Ist+1 (i + uij
,
j
+
v
))
?
?
d
tk , j + vtk ) (7)
tk
tk
k (i,j)
up to an additive, constant multiple of ?d . The shifted potential function (?d (?) ? ?d ) represents the
change in energy when a pixel transitions from an occluded to an unoccluded configuration. Note
that occlusions have higher likelihood only for sufficiently large discrepancies in matched image
features and can only occur via a corresponding change in layer visibility.
4 Posterior Inference from Image Sequences
Considering the full generative model defined in Sec. 3, maximum a posteriori (MAP) estimation for a $T$ frame image sequence is equivalent to minimization of the following energy function:
$$E(u, v, g, \theta) = \sum_{t=1}^{T-1} \Big[ E_{\mathrm{data}}(u_t, v_t, g_t, g_{t+1}) + \sum_{k=1}^{K} \lambda_a \big( E_{\mathrm{aff}}(u_{tk}, \theta_{tk}) + E_{\mathrm{aff}}(v_{tk}, \theta_{tk}) \big) + \sum_{k=1}^{K-1} \big( \lambda_b E_{\mathrm{space}}(g_{tk}) + \lambda_c E_{\mathrm{time}}(g_{tk}, g_{t+1,k}, u_{tk}, v_{tk}) \big) \Big] + \sum_{k=1}^{K-1} \lambda_b E_{\mathrm{space}}(g_{Tk}). \quad (8)$$
Here $\lambda_a$, $\lambda_b$, and $\lambda_c$ are weights controlling the relative importance of the affine, spatial, and temporal terms respectively. Simultaneously inferring flow fields, layer support maps, and depth ordering is a challenging process; our approach is summarized below.
4.1 Relaxation of the Layer Assignment Process
Due to the non-differentiability of the threshold process that determines assignments of regions to layers, direct minimization of Eq. (8) is challenging. For a related approach to image segmentation, a mean field variational method has been proposed [21]. However, that segmentation model is based on a much simpler, spatially factorized likelihood model for color and texture histogram features. Generalization to the richer flow likelihoods considered here raises significant complications.
Instead, we relax the hard threshold assignment process using the logistic function $\sigma(g) = 1/(1 + \exp(-g))$. Applied to Eq. (3), this induces the following soft layer assignments:
$$\tilde{s}_{tk}(i,j) = \begin{cases} \sigma(\lambda_e g_{tk}(i,j)) \prod_{k'=1}^{k-1} \sigma(-\lambda_e g_{tk'}(i,j)), & 1 \le k < K, \\ \prod_{k'=1}^{K-1} \sigma(-\lambda_e g_{tk'}(i,j)), & k = K. \end{cases} \quad (9)$$
Figure 2: Results on the "Venus" sequence with 4 layers. The two background layers move faster than the two foreground layers, and the solution with the correct depth ordering has lower energy and smaller error. (a) First frame. (b-d) Fast-to-slow ordering: EPE 0.252 and energy $-1.786 \times 10^6$. Left to right: motion segmentation, estimated flow field, and absolute error of estimated flow field. (e-g) Slow-to-fast ordering: EPE 0.195 and energy $-1.808 \times 10^6$. Darker indicates larger flow field errors in (d) and (g).
Note that $\sigma(-g) = 1 - \sigma(g)$, and $\sum_{k=1}^{K} \tilde{s}_{tk}(i,j) = 1$ for any $g_{tk}$ and constant $\lambda_e > 0$.
Substituting these soft assignments $\tilde{s}_{tk}(i,j)$ for $s_{tk}(i,j)$ in Eq. (7), we obtain a differentiable energy function that can be optimized via gradient-based methods. A related relaxation underlies the classic backpropagation algorithm for neural network training.
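The stick-breaking structure of Eq. (9) makes the relaxation easy to state in code: each layer takes a $\sigma$-weighted share of whatever probability mass the nearer layers have not claimed, so the soft masks sum to one by construction ($\lambda_e = 2$ follows Sec. 5).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_assignments(g, lam_e=2.0):
    # g: (K-1, H, W) support fields -> (K, H, W) soft masks of Eq. (9).
    K_minus_1, H, W = g.shape
    s = np.empty((K_minus_1 + 1, H, W))
    remaining = np.ones((H, W))              # mass not yet claimed by nearer layers
    for k in range(K_minus_1):
        s[k] = sigmoid(lam_e * g[k]) * remaining
        remaining = remaining * sigmoid(-lam_e * g[k])
    s[K_minus_1] = remaining                 # background layer takes the rest
    return s

s = soft_assignments(np.random.randn(3, 2, 2))
print(s.sum(axis=0))                         # approximately 1 everywhere
```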
4.2 Gradient-Based Energy Minimization
We estimate the hidden fields for all the frames together, while fixing the flow fields, by optimizing an objective involving the relevant $E_{\mathrm{data}}(\cdot)$, $E_{\mathrm{space}}(\cdot)$, and $E_{\mathrm{time}}(\cdot)$ terms. We then estimate the flow fields $u_t, v_t$ for each frame, while fixing those of neighboring frames and the hidden fields, via the $E_{\mathrm{data}}(\cdot)$, $E_{\mathrm{aff}}(\cdot)$, and $E_{\mathrm{time}}(\cdot)$ terms. For flow estimation, we use a standard coarse-to-fine, warping-based technique as described in [22]. For hidden field estimation, we use an implementation of conjugate gradient descent with backtracking and line search. See Supplemental Material for details.
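Schematically, the inference loop alternates between the two blocks of variables; the solver callables below are placeholders for the conjugate-gradient and coarse-to-fine warping steps described above, so this is only an illustration of the control flow, not of the solvers themselves.

```python
def alternate(g, flows, step_g, step_flow, n_outer=20):
    # Block-coordinate descent on Eq. (8): hidden fields for all frames first,
    # then each frame's flow with its neighbors and the hidden fields held fixed.
    for _ in range(n_outer):
        g = step_g(g, flows)
        flows = [step_flow(t, g, flows) for t in range(len(flows))]
    return g, flows

# Trivial stand-in solvers, just to show the calling convention.
g0, flows0 = [0.0], [0.0, 0.0]
g, flows = alternate(g0, flows0,
                     step_g=lambda g, f: g,
                     step_flow=lambda t, g, f: f[t])
```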
5 Experimental Results
We apply the proposed model to two-frame sequences and compute both the forward and backward flow fields. This enables the use of the temporal consistency term by treating one frame as both the previous and the next frame of the other (our model works for longer sequences; we use two frames here for fair comparison with other methods). We obtain an initial flow field using the Classic+NL method [22], cluster the flow vectors into $K$ groups (layers), and convert the initial segmentation into the corresponding hidden fields. We then use a two-level Gaussian pyramid (downsampling factor 0.8) and perform a fairly standard incremental estimation of the flow fields for each layer. At each level, we perform 20 incremental warping steps and during each step alternately solve for the hidden fields and the flow estimates. In the end, we threshold the hidden fields to compute a hard segmentation, and obtain the final flow field by selecting the flow field from the appropriate layers. Occluded regions are determined by inconsistencies between the hard segmentations at subsequent frames, as matched by the final flow field. We would ideally like to compare layer initializations based on all permutations of the initial flow vector clusters, but this would be computationally intensive for large $K$. Instead we compare two orders: a fast-to-slow order appropriate for rigid scenes, and an opposite slow-to-fast order (for variety and robustness). We illustrate automatic selection of the preferred order for the "Venus" sequence in Figure 2.
The parameters for all experiments are set to $\lambda_a = 3$, $\lambda_b = 30$, $\lambda_c = 4$, $\lambda_d = 9$, $\lambda_e = 2$, $\sigma_i = 12$, and $\delta_c = 0.004$. A generalized Charbonnier function is used for $\rho_s(\cdot)$ and $\rho_d(\cdot)$ (see Supplemental Material). Optimization takes about 5 hours for the two-frame "Urban" sequence using our MATLAB implementation.
5.1 Results on the Middlebury Benchmark
Training Set As a baseline, we implement the smoothness in layers model [26] using modern techniques, and obtain an average training end-point error (EPE) of 0.487. This is reasonable but not competitive with state-of-the-art methods. The proposed model with 1 to 4 layers produces average EPEs of 0.248, 0.212, 0.200, and 0.194, respectively (see Table 1). The one-layer model is similar to the Classic+NL method, but has a less sophisticated (more local) model of the flow within
Table 1: Average end-point error (EPE) on the Middlebury optical flow benchmark training set.

Method                  | Avg. EPE | Venus | Dimetrodon | Hydrangea | RubberWhale | Grove2 | Grove3 | Urban2 | Urban3
Weiss [26]              | 0.487    | 0.510 | 0.179      | 0.249     | 0.236       | 0.221  | 0.608  | 0.614  | 1.276
Classic++               | 0.285    | 0.271 | 0.128      | 0.153     | 0.081       | 0.139  | 0.614  | 0.336  | 0.555
Classic+NL              | 0.221    | 0.238 | 0.131      | 0.152     | 0.073       | 0.103  | 0.468  | 0.220  | 0.384
1layer                  | 0.248    | 0.243 | 0.144      | 0.175     | 0.095       | 0.125  | 0.504  | 0.279  | 0.422
2layers                 | 0.212    | 0.219 | 0.147      | 0.169     | 0.081       | 0.098  | 0.376  | 0.236  | 0.370
3layers                 | 0.200    | 0.212 | 0.149      | 0.173     | 0.073       | 0.090  | 0.343  | 0.220  | 0.338
4layers                 | 0.194    | 0.197 | 0.148      | 0.159     | 0.068       | 0.088  | 0.359  | 0.230  | 0.300
3layers w/ WMF          | 0.195    | 0.211 | 0.150      | 0.161     | 0.067       | 0.086  | 0.331  | 0.210  | 0.345
3layers w/ WMF C++Init  | 0.203    | 0.212 | 0.151      | 0.161     | 0.066       | 0.087  | 0.339  | 0.210  | 0.396
Table 2: Average end-point error (EPE) on the Middlebury optical flow benchmark test set.

Metric                   | Method     | Rank | Average | Army | Mequon | Schefflera | Wooden | Grove | Urban | Yosemite | Teddy
EPE                      | Layers++   | 4.3  | 0.270   | 0.08 | 0.19   | 0.20       | 0.13   | 0.48  | 0.47  | 0.15     | 0.46
EPE                      | Classic+NL | 6.5  | 0.319   | 0.08 | 0.22   | 0.29       | 0.15   | 0.64  | 0.52  | 0.16     | 0.49
EPE in boundary regions  | Layers++   | --   | 0.560   | 0.21 | 0.56   | 0.40       | 0.58   | 0.70  | 1.01  | 0.14     | 0.88
EPE in boundary regions  | Classic+NL | --   | 0.689   | 0.23 | 0.74   | 0.65       | 0.73   | 0.93  | 1.12  | 0.13     | 0.98
that layer. It thus performs worse than the Classic+NL initialization; the performance improvements allowed by additional layers demonstrate the benefits of a layered model.
Accuracy is improved by applying a 15 x 15 weighted median filter (WMF) [22] to the flow fields of each layer during the iterative warping step (EPE for 1 to 4 layers: 0.231, 0.204, 0.195, and 0.193). Weighted median filtering can be interpreted as a non-local spatial smoothness term in the energy function that integrates flow field information over a larger spatial neighborhood.
The "correct" number of layers for a real scene is not well defined (consider the "Grove3" sequence, for example). We use a restricted number of layers, and model the remaining complexity of the flow within each layer via the roughness-in-layers spatial term and the WMF. As the number of layers increases, the complexity of the flow within each layer decreases, and consequently the need for WMF also decreases; note that the difference in EPE for the 4-layer model with and without WMF is insignificant. For the remaining experiments we use the version with WMF.
To test the sensitivity of the result to the initialization, we also initialized with Classic++ ("C++Init" in Table 1), a good, but not top, non-layered method [22]. The average EPE for 1 to 4 layers increases to 0.248, 0.206, 0.203, and 0.198, respectively. While the one-layer method gets stuck in poor local minima on the "Grove3" and "Urban3" sequences, models with additional layers are more robust to the initialization. For more details and full EPE results, see the Supplemental Material.
Test Set For evaluation, we focus on a model with 3 layers (denoted "Layers++" in the Middlebury public table). On the Middlebury test set it has an average EPE of 0.270 and average angular error (AAE) of 2.556; this is the lowest among all tested methods [2] at the time of writing (Oct. 2010). Table 2 summarizes the results for individual test sequences. The layered model is particularly accurate at motion boundaries, probably due to the use of layer-specific motion models, and the explicit modeling of occlusion in $E_{\mathrm{data}}$ (Eq. (7)). For more extensive results, see the Supplemental Material.
Visual Comparison Figure 3 shows results for the 3-layer model on several training and test sequences. Notice that the layered model produces a motion segmentation that captures the major structure of the scene, and the layer boundaries correspond well to static image edges. It detects most occlusion regions and interpolates their motion reasonably well. Several sequences show significant improvement due to the global reasoning provided by the layered model. On the training "Grove3" sequence, the proposed method correctly identifies many holes between the branches and leaves as background. It also associates the branch at the bottom right corner with branches in the center. As the branch moves beyond the image boundary, the layered model interpolates its motion using long-range correlation with the branches in the center. In contrast, the single-layered approach incorrectly interpolates from local background regions. The "Schefflera" result illustrates how the layered method can separate foreground objects from the background (e.g., the leaves in the top right corner), and thereby reduce errors made by single-layer approaches such as Classic+NL.
Figure 3: Results on some Middlebury training (rows 1 to 3) and test (rows 4 to 6) sequences. Top to bottom: "RubberWhale", "Grove3", "Urban3", "Mequon", "Schefflera", and "Grove". Left to right: First image frame, initial flow field from "Classic+NL", final flow field, motion segmentation (green front, blue middle, red back), and detected occlusions. Best viewed in color and enlarged to allow comparison of detailed motions.
6 Conclusion and Discussion
We have described a new probabilistic formulation for layered image motion that explicitly models occlusion and disocclusion, depth ordering of layers, and the temporal consistency of the layer segmentation. The approach allows the flow field in each layer to have piecewise smooth deformation from a parametric motion model. Layer support is modeled using an image-dependent hidden field prior that supports a model of temporal layer continuity over time. The image data error term takes into account layer occlusion relationships, resulting in increased flow accuracy near motion boundaries. Our method achieves state-of-the-art results on the Middlebury optical flow benchmark while producing meaningful segmentation and occlusion detection results.
Future work will address better inference methods, especially a better scheme to infer the layer order, and the automatic estimation of the number of layers. Computational efficiency has not been addressed, but will be important for inference on long sequences. Currently our method does not capture transparency, but this could be supported using a soft layer assignment and a different generative model. Additionally, the parameters of the model could be learned [23], but this may require more extensive and representative training sets. Finally, the parameters of the model, especially the number of layers, should adapt to the motions in a given sequence.
Acknowledgments DS and MJB were supported in part by the NSF Collaborative Research in Computational Neuroscience Program (IIS-0904875) and a gift from Intel Corp.
References
[1] S. Ayer and H. S. Sawhney. Layered representation of motion video using robust maximum-likelihood estimation of mixture models and MDL encoding. In ICCV, pages 777-784, Jun 1995.
[2] S. Baker, D. Scharstein, J. P. Lewis, S. Roth, M. J. Black, and R. Szeliski. A database and evaluation methodology for optical flow. IJCV, to appear.
[3] S. Birchfield and C. Tomasi. Multiway cut for stereo and motion with slanted surfaces. In ICCV, pages 489-495, 1999.
[4] M. J. Black and P. Anandan. Robust dynamic motion estimation over time. In CVPR, pages 296-302, 1991.
[5] M. J. Black and P. Anandan. The robust estimation of multiple motions: Parametric and piecewise-smooth flow fields. CVIU, 63:75-104, 1996.
[6] M. J. Black and A. D. Jepson. Estimating optical-flow in segmented images using variable-order parametric models with local deformations. PAMI, 18(10):972-986, October 1996.
[7] T. Darrell and A. Pentland. Robust estimation of a multi-layered motion representation. In Workshop on Visual Motion, pages 173-178, 1991.
[8] T. Darrell and A. Pentland. Cooperative robust estimation using layers of support. PAMI, 17(5):474-487, 1995.
[9] B. Glocker, T. H. Heibel, N. Navab, P. Kohli, and C. Rother. TriangleFlow: Optical flow with triangulation-based higher-order likelihoods. In ECCV, pages 272-285, 2010.
[10] M. Irani, P. Anandan, and D. Weinshall. From reference frames to reference planes: Multi-view parallax geometry and applications. In ECCV, 1998.
[11] A. Jepson and M. J. Black. Mixture models for optical flow computation. In CVPR, 1993.
[12] N. Jojic and B. Frey. Learning flexible sprites in video layers. In CVPR, pages I:199-206, 2001.
[13] A. Kannan, B. Frey, and N. Jojic. A generative model of dense optical flow in layers. Technical Report TR PSI-2001-11, University of Toronto, Aug. 2001.
[14] R. Kumar, P. Anandan, and K. Hanna. Shape recovery from multiple views: A parallax based approach. In Proc 12th ICPR, 1994.
[15] R. D. Morris, X. Descombes, and J. Zerubia. The Ising/Potts model is not well suited to segmentation tasks. In Proceedings of the IEEE Digital Signal Processing Workshop, 1996.
[16] M. Nicolescu and G. Medioni. Motion segmentation with accurate boundaries - a tensor voting approach. In CVPR, pages 382-389, 2003.
[17] M. P. Kumar, P. H. Torr, and A. Zisserman. Learning layered motion segmentations of video. IJCV, 76(3):301-319, 2008.
[18] S. Roth and M. J. Black. On the spatial statistics of optical flow. IJCV, 74(1):33-50, August 2007.
[19] H. S. Sawhney. 3D geometry from planar parallax. In CVPR, pages 929-934, 1994.
[20] T. Schoenemann and D. Cremers. High resolution motion layer decomposition using dual-space graph cuts. In CVPR, pages 1-7, June 2008.
[21] E. Sudderth and M. Jordan. Shared segmentation of natural scenes using dependent Pitman-Yor processes. In NIPS, pages 1585-1592, 2009.
[22] D. Sun, S. Roth, and M. J. Black. Secrets of optical flow estimation and their principles. In CVPR, 2010.
[23] D. Sun, S. Roth, J. P. Lewis, and M. J. Black. Learning optical flow. In ECCV, pages 83-97, 2008.
[24] P. Torr, R. Szeliski, and P. Anandan. An integrated Bayesian approach to layer extraction from image sequences. PAMI, 23(3):297-303, Mar 2001.
[25] J. Y. A. Wang and E. H. Adelson. Representing moving images with layers. IEEE Transactions on Image Processing, 3(5):625-638, Sept. 1994.
[26] Y. Weiss. Smoothness in layers: Motion segmentation using nonparametric mixture estimation. In CVPR, pages 520-526, Jun 1997.
[27] Y. Weiss and E. Adelson. A unified mixture framework for motion segmentation: Incorporating spatial coherence and estimating the number of models. In CVPR, pages 321-326, Jun 1996.
[28] M. Werlberger, T. Pock, and H. Bischof. Motion estimation with non-local total variation regularization. In CVPR, 2010.
[29] H. Yalcin, M. J. Black, and R. Fablet. The dense estimation of motion and appearance in layers. In IEEE Workshop on Image and Video Registration, pages 777-784, Jun 2004.
[30] Y. Zhou and H. Tao. Background layer model for object tracking through occlusion. In ICCV, volume 2, pages 1079-1085, 2003.
3,348 | 4,031 | Monte-Carlo Planning in Large POMDPs
Joel Veness
UNSW, Sydney, Australia
[email protected]
David Silver
MIT, Cambridge, MA 02139
[email protected]
Abstract
This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent's belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, Monte-Carlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 x 10 battleship and partially observable PacMan, with approximately $10^{18}$ and $10^{56}$ states respectively. Our Monte-Carlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.
1 Introduction
Monte-Carlo tree search (MCTS) is a new approach to online planning that has provided
exceptional performance in large, fully observable domains. It has outperformed previous
planning approaches in challenging games such as Go [5], Amazons [10] and General Game
Playing [4]. The key idea is to evaluate each state in a search tree by the average outcome
of simulations from that state. MCTS provides several major advantages over traditional
search methods. It is a highly selective, best-first search that quickly focuses on the most
promising regions of the search space. It breaks the curse of dimensionality by sampling
state transitions instead of considering all possible state transitions. It only requires a black
box simulator, and can be applied in problems that are too large or too complex to represent
with explicit probability distributions. It uses random simulations to estimate the potential
for long-term reward, so that it plans over large horizons, and is often effective without any
search heuristics or prior domain knowledge [8]. If exploration is controlled appropriately
then MCTS converges to the optimal policy. In addition, it is anytime, computationally
efficient, and highly parallelisable.
In this paper we extend MCTS to partially observable environments (POMDPs). Full-width
planning algorithms, such as value iteration [6], scale poorly for two reasons, sometimes
referred to as the curse of dimensionality and the curse of history [12]. In a problem with
n states, value iteration reasons about an n-dimensional belief state. Furthermore, the
number of histories that it must evaluate is exponential in the horizon. The basic idea of
our approach is to use Monte-Carlo sampling to break both curses, by sampling start states
from the belief state, and by sampling histories using a black box simulator.
Our search algorithm constructs, online, a search tree of histories. Each node of the search
tree estimates the value of a history by Monte-Carlo simulation. For each simulation, the
start state is sampled from the current belief state, and state transitions and observations are sampled from a black box simulator. We show that if the belief state is correct, then this simple procedure converges to the optimal policy for any finite horizon POMDP. In practice we can execute hundreds of thousands of simulations per second, which allows us to construct extensive search trees that cover many possible contingencies. In addition, Monte-Carlo simulation can be used to update the agent's belief state. As the search tree is constructed, we store the set of sample states encountered by the black box simulator in each node of the search tree. We approximate the belief state by the set of sample states corresponding to the actual history. Our algorithm, Partially Observable Monte-Carlo Planning (POMCP), efficiently uses the same set of Monte-Carlo simulations for both tree search and belief state updates.
2 Background
2.1 POMDPs
In a Markov decision process (MDP) the environment's dynamics are fully determined by its current state $s_t$. For any state $s \in S$ and for any action $a \in A$, the transition probabilities $P^a_{ss'} = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$ determine the next state distribution $s'$, and the reward function $R^a_s = \mathbb{E}[r_{t+1} \mid s_t = s, a_t = a]$ determines the expected reward. In a partially observable Markov decision process (POMDP), the state cannot be directly observed by the agent. Instead, the agent receives an observation $o \in O$, determined by observation probabilities $Z^a_{s'o} = \Pr(o_{t+1} = o \mid s_{t+1} = s', a_t = a)$. The initial state $s_0 \in S$ is determined by a probability distribution $I_s = \Pr(s_0 = s)$. A history is a sequence of actions and observations, $h_t = \{a_1, o_1, \ldots, a_t, o_t\}$ or $h_t a_{t+1} = \{a_1, o_1, \ldots, a_t, o_t, a_{t+1}\}$. The agent's action-selection behaviour can be described by a policy, $\pi(h, a)$, that maps a history $h$ to a probability distribution over actions, $\pi(h, a) = \Pr(a_{t+1} = a \mid h_t = h)$. The return $R_t = \sum_{k=t}^{\infty} \gamma^{k-t} r_k$ is the total discounted reward accumulated from time $t$ onwards, where $\gamma$ is a discount factor specified by the environment. The value function $V^\pi(h)$ is the expected return from history $h$ when following policy $\pi$, $V^\pi(h) = \mathbb{E}_\pi[R_t \mid h_t = h]$. The optimal value function is the maximum value function achievable by any policy, $V^*(h) = \max_\pi V^\pi(h)$. In any POMDP there is at least one optimal policy $\pi^*(h, a)$ that achieves the optimal value function. The belief state is the probability distribution over states given history $h$, $B(s, h) = \Pr(s_t = s \mid h_t = h)$.
2.2 Online Planning in POMDPs
Online POMDP planners use forward search, from the current history or belief state, to form a local approximation to the optimal value function. The majority of online planners are based on point-based value iteration [12, 13]. These algorithms use an explicit model of the POMDP probability distributions, $\mathcal{M} = \langle P, R, Z, I \rangle$. They construct a search tree of belief states, using a heuristic best-first expansion procedure. Each value in the search tree is updated by a full-width computation that takes account of all possible actions, observations and next states. This approach can be combined with an offline planning method to produce a branch-and-bound procedure [13]. Upper or lower bounds on the value function are computed offline, and are propagated up the tree during search. If the POMDP is small, or can be factored into a compact representation, then full-width planning with explicit models can be very effective.
Monte-Carlo planning is a very different paradigm for online planning in POMDPs [2, 7]. The agent uses a simulator $G$ as a generative model of the POMDP. The simulator provides a sample of a successor state, observation and reward, given a state and action, $(s_{t+1}, o_{t+1}, r_{t+1}) \sim G(s_t, a_t)$, and can also be reset to a start state $s$. The simulator is used to generate sequences of states, observations and rewards. These simulations are used to update the value function, without ever looking inside the black box describing the model's dynamics. In addition, Monte-Carlo methods have a sample complexity that is determined only by the underlying difficulty of the POMDP, rather than the size of the state space or observation space [7]. In principle, this makes them an appealing choice for large POMDPs. However, prior Monte-Carlo planners have been limited to fixed horizon, depth-first search [7] (also known as sparse sampling), or to simple rollout methods with no search tree [2], and have not so far proven to be competitive with best-first, full-width planning methods.
2.3 Rollouts
In fully observable MDPs, Monte-Carlo simulation provides a simple method for evaluating a state $s$. Sequences of states are generated by an MDP simulator, starting from $s$ and using a random rollout policy, until a terminal state or discount horizon is reached. The value of state $s$ is estimated by the mean return of $N$ simulations from $s$, $V(s) = \frac{1}{N} \sum_{i=1}^{N} R_i$, where $R_i$ is the return from the beginning of the $i$th simulation. Monte-Carlo simulation can be turned into a simple control algorithm by evaluating all legal actions and selecting the action with highest evaluation [15]. Monte-Carlo simulation can be extended to partially observable MDPs [2] by using a history based rollout policy $\pi_{rollout}(h, a)$. To evaluate candidate action $a$ in history $h$, simulations are generated from $ha$ using a POMDP simulator and the rollout policy. The value of $ha$ is estimated by the mean return of $N$ simulations from $ha$.
2.4 Monte-Carlo Tree Search
Monte-Carlo tree search [3] uses Monte-Carlo simulation to evaluate the nodes of a search tree in a sequentially best-first order. There is one node in the tree for each state $s$, containing a value $Q(s, a)$ and a visitation count $N(s, a)$ for each action $a$, and an overall count $N(s) = \sum_a N(s, a)$. Each node is initialised to $Q(s, a) = 0$, $N(s, a) = 0$. The value is estimated by the mean return from $s$ of all simulations in which action $a$ was selected from state $s$. Each simulation starts from the current state $s_t$, and is divided into two stages: a tree policy that is used while within the search tree; and a rollout policy that is used once simulations leave the scope of the search tree. The simplest version of MCTS uses a greedy tree policy during the first stage, which selects the action with the highest value; and a uniform random rollout policy during the second stage. After each simulation, one new node is added to the search tree, containing the first state visited in the second stage. The UCT algorithm [8] improves the greedy action selection in MCTS. Each state of the search tree is viewed as a multi-armed bandit, and actions are chosen by using the UCB1 algorithm [1]. The value of an action is augmented by an exploration bonus that is highest for rarely tried actions, $Q^\oplus(s, a) = Q(s, a) + c\sqrt{\frac{\log N(s)}{N(s, a)}}$. The scalar constant $c$ determines the relative ratio of exploration to exploitation; when $c = 0$ the UCT algorithm acts greedily within the tree. Once all actions from state $s$ are represented in the search tree, the tree policy selects the action maximising the augmented action-value, $\operatorname{argmax}_a Q^\oplus(s, a)$. Otherwise, the rollout policy is used to select actions. For suitable choice of $c$, the value function constructed by UCT converges in probability to the optimal value function, $Q(s, a) \xrightarrow{p} Q^*(s, a)$, for all $s \in S$, $a \in A$ [8]. UCT can be extended to use domain knowledge, for example heuristic knowledge or a value function computed offline [5]. New nodes are initialised using this knowledge, $Q(s, a) = Q_{init}(s, a)$, $N(s, a) = N_{init}$, where $Q_{init}(s, a)$ is an action value function and $N_{init}$ indicates its quality. Domain knowledge narrowly focuses the search on promising states without altering asymptotic convergence.
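The augmented action selection is a one-liner per action; this sketch applies UCB1 over a dictionary of value and count statistics, giving untried actions priority (a common convention; the paper instead seeds new nodes with $V_{init}$, $N_{init}$ when domain knowledge is available).

```python
import math

def ucb1_select(Q, N, actions, c=1.0):
    # Q[a]: mean return of action a; N[a]: number of times a was tried.
    n_total = sum(N[a] for a in actions)
    def augmented(a):
        if N[a] == 0:
            return float('inf')              # try every action at least once
        return Q[a] + c * math.sqrt(math.log(n_total) / N[a])
    return max(actions, key=augmented)

Q = {'left': 0.4, 'right': 0.7}
N = {'left': 10, 'right': 3}
print(ucb1_select(Q, N, ['left', 'right']))  # higher value and larger bonus: 'right'
```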
3 Monte-Carlo Planning in POMDPs
Partially Observable Monte-Carlo Planning (POMCP) consists of a UCT search that selects actions at each time-step; and a particle filter that updates the agent's belief state.
3.1 Partially Observable UCT (PO-UCT)
We extend the UCT algorithm to partially observable environments by using a search tree of histories instead of states. The tree contains a node $T(h) = \langle N(h), V(h) \rangle$ for each represented history $h$. $N(h)$ counts the number of times that history $h$ has been visited. $V(h)$ is the value of history $h$, estimated by the mean return of all simulations starting with $h$. New nodes are initialised to $\langle V_{init}(h), N_{init}(h) \rangle$ if domain knowledge is available, and to $\langle 0, 0 \rangle$ otherwise. We assume for now that the belief state $B(s, h)$ is known exactly. Each simulation starts in an initial state that is sampled from $B(\cdot, h_t)$. As in the fully observable algorithm, the simulations are divided into two stages. In the first stage of simulation, when child nodes exist for all children, actions are selected by UCB1, $V^\oplus(ha) = V(ha) + c\sqrt{\frac{\log N(h)}{N(ha)}}$.
Actions are then selected to maximise this augmented value, $\operatorname{argmax}_a V^\oplus(ha)$. In the second stage of simulation, actions are selected by a history based rollout policy $\pi_{rollout}(h, a)$ (e.g. uniform random action selection). After each simulation, precisely one new node is added to the tree, corresponding to the first new history encountered during that simulation.

Figure 1: An illustration of POMCP in an environment with 2 actions, 2 observations, 50 states, and no intermediate rewards. The agent constructs a search tree from multiple simulations, and evaluates each history by its mean return (left). The agent uses the search tree to select a real action a, and observes a real observation o (middle). The agent then prunes the tree and begins a new search from the updated history hao (right).
3.2 Monte-Carlo Belief State Updates
In small state spaces, the belief state can be updated exactly by Bayes' theorem,
$$B(s', hao) = \frac{Z^a_{s'o} \sum_{s \in S} P^a_{ss'} B(s, h)}{\sum_{s'' \in S} Z^a_{s''o} \sum_{s \in S} P^a_{ss''} B(s, h)}.$$
The majority of POMDP planning methods operate in this manner [13]. However, in large state spaces even a single Bayes update may be computationally infeasible. Furthermore, a compact representation of the transition or observation probabilities may not be available. To plan efficiently in large POMDPs, we approximate the belief state using an unweighted particle filter, and use a Monte-Carlo procedure to update particles based on sample observations, rewards, and state transitions. Although weighted particle filters are used widely to represent belief states, an unweighted particle filter can be implemented particularly efficiently with a black box simulator, without requiring an explicit model of the POMDP, and providing excellent scalability to larger problems.
We approximate the belief state for history $h_t$ by $K$ particles, $B_t^i \in S$, $1 \le i \le K$. Each particle corresponds to a sample state, and the belief state is the sum of all particles, $\hat{B}(s, h_t) = \frac{1}{K} \sum_{i=1}^{K} \delta_{s B_t^i}$, where $\delta_{ss'}$ is the Kronecker delta function. At the start of the algorithm, $K$ particles are sampled from the initial state distribution, $B_0^i \sim I$, $1 \le i \le K$. After a real action $a_t$ is executed, and a real observation $o_t$ is observed, the particles are updated by Monte-Carlo simulation. A state $s$ is sampled from the current belief state $\hat{B}(s, h_t)$, by selecting a particle at random from $B_t$. This particle is passed into the black box simulator, to give a successor state $s'$ and observation $o'$, $(s', o', r) \sim G(s, a_t)$. If the sample observation matches the real observation, $o' = o_t$, then the new particle $s'$ is added to $B_{t+1}$. This process repeats until $K$ particles have been added. This approximation to the belief state approaches the true belief state with sufficient particles, $\lim_{K \to \infty} \hat{B}(s, h_t) = B(s, h_t)$, $\forall s \in S$. As with many particle filter approaches, particle deprivation is possible for large $t$. In practice we combine the belief state update with particle reinvigoration. For example, new particles can be introduced by adding artificial noise to existing particles.
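A sketch of this unweighted update: rejection against the real observation replaces the usual importance weighting, so only the black box simulator is needed. The max_tries guard is an assumption added here; when it triggers, domain-specific reinvigoration of the kind just described is needed.

```python
import random

def update_belief(particles, G, a_real, o_real, K=None, max_tries=100000):
    # particles: list of sample states approximating B(s, h).
    K = K if K is not None else len(particles)
    new_particles, tries = [], 0
    while len(new_particles) < K and tries < max_tries:
        tries += 1
        s = random.choice(particles)         # sample s ~ B(., h)
        s2, o, r = G(s, a_real)              # push through the black box simulator
        if o == o_real:                      # keep s' only if observations match
            new_particles.append(s2)
    return new_particles                     # may fall short -> reinvigorate

# Toy simulator reused from the earlier sketches: the observation reveals the
# hidden state with probability 0.8.
def G(s, a):
    return s, (s if random.random() < 0.8 else 1 - s), float(a == s)

print(len(update_belief([0] * 80 + [1] * 20, G, a_real=0, o_real=0, K=100)))
```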
3.3 Partially Observable Monte-Carlo
POMCP combines Monte-Carlo belief state updates with PO-UCT, and shares the same simulations for both Monte-Carlo procedures. Each node in the search tree, $T(h) = \langle N(h), V(h), B(h) \rangle$, contains a set of particles $B(h)$ in addition to its count $N(h)$ and value $V(h)$. The search procedure is called from the current history $h_t$. Each simulation begins from a start state that is sampled from the belief state $B(h_t)$. Simulations are performed
Algorithm 1 Partially Observable Monte-Carlo Planning

procedure Search(h)
  repeat
    if h = empty then
      s ∼ I
    else
      s ∼ B(h)
    end if
    Simulate(s, h, 0)
  until Timeout()
  return argmax_b V(hb)
end procedure

procedure Rollout(s, h, depth)
  if γ^depth < ε then
    return 0
  end if
  a ∼ π_rollout(h, ·)
  (s′, o, r) ∼ G(s, a)
  return r + γ · Rollout(s′, hao, depth + 1)
end procedure

procedure Simulate(s, h, depth)
  if γ^depth < ε then
    return 0
  end if
  if h ∉ T then
    for all a ∈ A do
      T(ha) ← (N_init(ha), V_init(ha), ∅)
    end for
    return Rollout(s, h, depth)
  end if
  a ← argmax_b [ V(hb) + c √(log N(h) / N(hb)) ]
  (s′, o, r) ∼ G(s, a)
  R ← r + γ · Simulate(s′, hao, depth + 1)
  B(h) ← B(h) ∪ {s}
  N(h) ← N(h) + 1
  N(ha) ← N(ha) + 1
  V(ha) ← V(ha) + (R − V(ha)) / N(ha)
  return R
end procedure
using the partially observable UCT algorithm, as described above. For every history $h$ encountered during simulation, the belief state $B(h)$ is updated to include the simulation state. When search is complete, the agent selects the action $a_t$ with greatest value, and receives a real observation $o_t$ from the world. At this point, the node $T(h_t a_t o_t)$ becomes the root of the new search tree, and the belief state $B(h_t a o)$ determines the agent's new belief state. The remainder of the tree is pruned, as all other histories are now impossible. The complete POMCP algorithm is described in Algorithm 1 and Figure 1.
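The following compact Python sketch follows Algorithm 1, assuming a black-box simulator G(s, a) -> (s', o, r), an initial-state sampler, and a finite action list; histories are flat tuples of interleaved actions and observations. It is an illustration of the control flow, not the authors' implementation (no particle reinvigoration, uniform rollouts only).

```python
import math, random

class Node:
    def __init__(self):
        self.N, self.V, self.B = 0, 0.0, []  # visit count, value, particle set

class POMCP:
    def __init__(self, G, I, A, gamma=0.95, eps=0.01, c=1.0):
        self.G, self.I, self.A = G, I, A     # simulator, initial-state sampler, actions
        self.gamma, self.eps, self.c = gamma, eps, c
        self.T = {}                          # search tree: history tuple -> Node

    def search(self, h, n_sims=1000):
        root = self.T.setdefault(h, Node())
        for _ in range(n_sims):
            s = random.choice(root.B) if root.B else self.I()
            self.simulate(s, h, 0)
        return max(self.A, key=lambda a: self.T[h + (a,)].V)

    def rollout(self, s, h, depth):
        if self.gamma ** depth < self.eps:
            return 0.0
        a = random.choice(self.A)            # uniform random rollout policy
        s2, o, r = self.G(s, a)
        return r + self.gamma * self.rollout(s2, h + (a, o), depth + 1)

    def simulate(self, s, h, depth):
        if self.gamma ** depth < self.eps:
            return 0.0
        if h + (self.A[0],) not in self.T:   # expand this history's children once
            for a in self.A:
                self.T[h + (a,)] = Node()
            return self.rollout(s, h, depth)
        node = self.T.setdefault(h, Node())
        def ucb(a):
            child = self.T[h + (a,)]
            if child.N == 0:
                return float('inf')
            return child.V + self.c * math.sqrt(math.log(node.N + 1) / child.N)
        a = max(self.A, key=ucb)
        s2, o, r = self.G(s, a)
        R = r + self.gamma * self.simulate(s2, h + (a, o), depth + 1)
        node.B.append(s)                     # store the particle for belief updates
        node.N += 1
        child = self.T[h + (a,)]
        child.N += 1
        child.V += (R - child.V) / child.N   # incremental mean of returns
        return R

# Toy usage with the two-state simulator from the earlier sketches.
def G(s, a):
    return s, (s if random.random() < 0.8 else 1 - s), float(a == s)

planner = POMCP(G, I=lambda: random.choice([0, 1]), A=[0, 1])
print(planner.search((), n_sims=500))
```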
4 Convergence
The UCT algorithm converges to the optimal value function in fully observable MDPs [8]. This suggests two simple ways to apply UCT to POMDPs: either by converting every belief state into an MDP state, or by converting every history into an MDP state, and then applying UCT directly to the derived MDP. However, the first approach is computationally expensive in large POMDPs, where even a single belief state update can be prohibitively costly. The second approach requires a history-based simulator that can sample the next history given the current history, which is usually more costly and hard to encode than a state-based simulator. The key innovation of the PO-UCT algorithm is to apply a UCT search to a history-based MDP, but using a state-based simulator to efficiently sample states from the current beliefs. In this section we prove that given the true belief state $B(s, h)$, PO-UCT also converges to the optimal value function. We prove convergence for POMDPs with finite horizon $T$; this can be extended to the infinite horizon case as suggested in [8].

Lemma 1. Given a POMDP $\mathcal{M} = (S, A, P, R, Z)$, consider the derived MDP with histories as states, $\tilde{\mathcal{M}} = (H, A, \tilde{P}, \tilde{R})$, where $\tilde{P}^a_{h,hao} = \sum_{s \in S} \sum_{s' \in S} B(s, h) P^a_{ss'} Z^a_{s'o}$ and $\tilde{R}^a_h = \sum_{s \in S} B(s, h) R^a_s$. Then the value function $\tilde{V}^\pi(h)$ of the derived MDP is equal to the value function $V^\pi(h)$ of the POMDP, $\forall \pi\ \tilde{V}^\pi(h) = V^\pi(h)$.

Proof. By backward induction on the horizon, starting from the Bellman equation,
$$V^\pi(h) = \sum_{s \in S} \sum_{a \in A} B(s, h) \pi(h, a) \Big( R^a_s + \gamma \sum_{s' \in S} \sum_{o \in O} P^a_{ss'} Z^a_{s'o} V^\pi(hao) \Big) = \sum_{a \in A} \pi(h, a) \Big( \tilde{R}^a_h + \gamma \sum_{o \in O} \tilde{P}^a_{h,hao} \tilde{V}^\pi(hao) \Big) = \tilde{V}^\pi(h).$$

Let $D^\pi(h_T)$ be the POMDP rollout distribution. This is the distribution of histories generated by sampling an initial state $s_t \sim B(s, h_t)$, and then repeatedly sampling actions from policy $\pi(h, a)$ and sampling states, observations and rewards from $\mathcal{M}$, until terminating at time $T$. Let $\tilde{D}^\pi(h_T)$ be the derived MDP rollout distribution. This is the distribution of histories generated by starting at $h_t$ and then repeatedly sampling actions from policy $\pi$ and sampling state transitions and rewards from $\tilde{\mathcal{M}}$, until terminating at time $T$.

Lemma 2. For any rollout policy $\pi$, the POMDP rollout distribution is equal to the derived MDP rollout distribution, $\forall \pi\ D^\pi(h_T) = \tilde{D}^\pi(h_T)$.

Proof. By forward induction from $h_t$, $D^\pi(hao) = D^\pi(h) \pi(h, a) \sum_{s \in S} \sum_{s' \in S} B(s, h) P^a_{ss'} Z^a_{s'o} = \tilde{D}^\pi(h) \pi(h, a) \tilde{P}^a_{h,hao} = \tilde{D}^\pi(hao)$.

Theorem 1. For suitable choice of $c$, the value function constructed by PO-UCT converges in probability to the optimal value function, $V(h) \xrightarrow{p} V^*(h)$, for all histories $h$ that are prefixed by $h_t$. As the number of visits $N(h)$ approaches infinity, the bias of the value function, $\mathbb{E}[V(h) - V^*(h)]$, is $O(\log N(h)/N(h))$.

Proof. By Lemma 2 the PO-UCT simulations can be mapped into UCT simulations in the derived MDP. By Lemma 1, the analysis of UCT in [8] can then be applied to PO-UCT.
5 Experiments
We applied POMCP to the benchmark rocksample problem, and to two new problems: battleship and pocman. For each problem we ran POMCP 1000 times, or for up to 12 hours of total computation time. We evaluated the performance of POMCP by the average total discounted reward. In the smaller rocksample problems, we compared POMCP to the best full-width online planning algorithms. However, the other problems were too large to run these algorithms. To provide a performance benchmark in these cases, we evaluated the performance of simple Monte-Carlo simulation without any tree. The PO-rollout algorithm used Monte-Carlo belief state updates, as described in section 3.2. It then simulated $n/|A|$ rollouts for each legal action, and selected the action with highest average return.
The exploration constant for POMCP was set to $c = R_{hi} - R_{lo}$, where $R_{hi}$ was the highest return achieved during sample runs of POMCP with $c = 0$, and $R_{lo}$ was the lowest return achieved during sample rollouts. The discount horizon was set to 0.01 (about 90 steps when $\gamma = 0.95$). On the larger battleship and pocman problems, we combined POMCP with particle reinvigoration. After each real action and observation, additional particles were added to the belief state, by applying a domain specific local transformation to existing particles. When $n$ simulations were used, $n/16$ new particles were added to the belief set.
We also introduced domain knowledge into the search algorithm, by defining a set of preferred actions $A_p$. In each problem, we applied POMCP both with and without preferred actions. When preferred actions were used, the rollout policy selected actions uniformly from $A_p$, and each new node $T(ha)$ in the tree was initialised to $V_{init}(ha) = R_{hi}$, $N_{init}(ha) = 10$ for preferred actions $a \in A_p$, and to $V_{init}(ha) = R_{lo}$, $N_{init}(ha) = 0$ for all other actions. Otherwise, the rollout policy selected actions uniformly among all legal actions, and each new node $T(ha)$ was initialised to $V_{init}(ha) = 0$, $N_{init}(ha) = 0$ for all $a \in A$.
The rocksample $(n, k)$ problem [14] simulates a Mars explorer robot in an $n \times n$ grid containing $k$ rocks. The task is to determine which rocks are valuable, take samples of valuable rocks, and leave the map to the east when sampling is complete. When provided with an exactly factored representation, online full-width planners have been successful in rocksample (7, 8) [13], and an offline full-width planner has been successful in the much larger rocksample (11, 11) problem [11]. We applied POMCP to three variants of rocksample: (7, 8), (11, 11), and (15, 15), without factoring the problem. When using preferred actions, the number of valuable and unvaluable observations was counted for each rock. Actions that sampled rocks with more valuable observations were preferred. If all remaining rocks had a greater number of unvaluable observations, then the east action was preferred. The results of applying POMCP to rocksample, with various levels of prior knowledge, are shown in Figure 2. These results are compared with prior work in Table 1. On rocksample (7, 8), the performance of POMCP with preferred actions was close to the best prior online planning methods combined with offline solvers. On rocksample (11, 11), POMCP achieved the same performance with 4 seconds of online computation as the state-of-the-art solver SARSOP with 1000 seconds of offline computation [11]. Unlike prior methods, POMCP also provided scalable performance on rocksample (15, 15), a problem with over 7 million states.
Rocksample   | (7, 8)        | (11, 11)      | (15, 15)
States |S|   | 12,544        | 247,808       | 7,372,800
AEMS2        | 21.37 ± 0.22  | N/A           | N/A
HSVI-BFS     | 21.46 ± 0.22  | N/A           | N/A
SARSOP       | 21.39 ± 0.01  | 21.56 ± 0.11  | N/A
Rollout      | 9.46 ± 0.27   | 8.70 ± 0.29   | 7.56 ± 0.25
POMCP        | 20.71 ± 0.21  | 20.01 ± 0.23  | 15.32 ± 0.28

Table 1: Comparison of Monte-Carlo planning with full-width planning on rocksample. POMCP and the rollout algorithm used prior knowledge in their rollouts. The online planners used knowledge computed offline by PBVI; results are from [13]. Each online algorithm was given 1 second per action. The full-width, offline planner SARSOP was given approximately 1000 seconds of offline computation; results are from [9]. All full-width planners were provided with an exactly factored representation of the POMDP; the Monte-Carlo planners do not factor the representation. The full-width planners could not be run on the larger problems.
In the battleship POMDP, 5 ships are placed at random into a 10 ? 10 grid, subject to the
constraint that no ship may be placed adjacent or diagonally adjacent to another ship. Each
ship has a different size of 5 ? 1, 4 ? 1, 3 ? 1 and 2 ? 1 respectively. The goal is to find
and sink all ships. However, the agent cannot observe the location of the ships. Each step,
the agent can fire upon one cell of the grid, and receives an observation of 1 if a ship was
hit, and 0 otherwise. There is a -1 reward per time-step, a terminal reward of +100 for
hitting every cell of every ship, and there is no discounting (? = 1). It is illegal to fire twice
on the same cell. If it was necessary to fire on all cells of the grid, the total reward is 0;
otherwise the total reward indicates the number of steps better than the worst case. There
are 100 actions, 2 observations, and approximately 1018 states in this challenging POMDP.
Particle invigoration is particularly important in this problem. Each local transformation
applied one of three transformations: 2 ships of different sizes swapped location; 2 smaller
ships were swapped into the location of 1 larger ship; or 1 to 4 ships were moved to a new
location, selected uniformly at random, and accepted if the new configuration was legal.
Without preferred actions, all legal actions were considered. When preferred actions were
used, impossible cells for ships were deduced automatically, by marking off the diagonally
adjacent cells to each hit. These cells were never selected in the tree or during rollouts. The
performance of POMCP, with and without preferred actions, is shown in Figure 2. POMCP
was able to sink all ships more than 50 moves faster, on average, than random play, and more
than 25 moves faster than randomly selecting amongst preferred actions (which corresponds
to the simple strategy used by many humans when playing the Battleship game). Using
preferred actions, POMCP achieved better results with less search; however, even without
preferred actions, POMCP was able to deduce the diagonal constraints from its rollouts,
and performed almost as well given more simulations per move. Interestingly, the search
tree only provided a small benefit over the PO-rollout algorithm, due to small differences
between the value of actions, but high variance in the returns.
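The local transformations above are straightforward to implement. The sketch below illustrates one of them, relocating 1 to 4 ships to uniformly random positions and accepting the proposal only if the board stays legal. It is an illustrative sketch, not code from the paper: the helper names, the fleet composition in SIZES, and the omission of the consistency check against the hit/miss history are all assumptions.

```python
import random

BOARD = 10
SIZES = [5, 4, 3, 2]  # one ship per listed size (assumed fleet composition)

def cells(ship):
    """Cells covered by a ship given as (row, col, size, horizontal)."""
    r, c, size, horiz = ship
    return [(r, c + i) if horiz else (r + i, c) for i in range(size)]

def is_legal(ships):
    """No ship may leave the board, overlap, or touch another ship,
    even diagonally (the placement constraint described above)."""
    occupied = set()
    for ship in ships:
        for (r, c) in cells(ship):
            if not (0 <= r < BOARD and 0 <= c < BOARD):
                return False
            occupied.add((r, c))
    if len(occupied) != sum(s[2] for s in ships):
        return False                      # two ships overlap
    for ship in ships:
        own = set(cells(ship))
        for (r, c) in own:
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    nb = (r + dr, c + dc)
                    if nb in occupied and nb not in own:
                        return False      # adjacent or diagonally adjacent
    return True

def move_ships(ships):
    """Local transformation: move 1 to 4 ships to uniformly random new
    positions; accept only if the new configuration is legal."""
    proposal = list(ships)
    for i in random.sample(range(len(proposal)), random.randint(1, 4)):
        size = proposal[i][2]
        proposal[i] = (random.randrange(BOARD), random.randrange(BOARD),
                       size, random.random() < 0.5)
    return proposal if is_legal(proposal) else ships

# Rejection-sample an initial legal fleet, then apply one transformation.
fleet = None
while fleet is None or not is_legal(fleet):
    fleet = [(random.randrange(BOARD), random.randrange(BOARD),
              s, random.random() < 0.5) for s in SIZES]
fleet = move_ships(fleet)
```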
In our final experiment we introduce a partially observable version of the video game Pac-Man. In this task, pocman, the agent must navigate a 17 × 19 maze and eat the food pellets
that are randomly distributed across the maze. Four ghosts roam the maze, initially according to a randomised strategy: at each junction point they select a direction, without
doubling back, with probability proportional to the number of food pellets in line of sight in
that direction. Normally, if PocMan touches a ghost then he dies and the episode terminates.
However, four power pills are available, which last for 15 steps and enable PocMan to eat
any ghosts he touches. If a ghost is within Manhattan distance of 5 of PocMan, it chases him
aggressively, or runs away if he is under the effect of a power pill. The PocMan agent receives
a reward of -1 at each step, +10 for each food pellet, +25 for eating a ghost and -100 for
dying. The discount factor is γ = 0.95. The PocMan agent receives ten observation bits at
every time step, corresponding to his senses of sight, hearing, touch and smell. He receives
four observation bits indicating whether he can see a ghost in each cardinal direction, set
to 1 if there is a ghost in his direct line of sight. He receives one observation bit indicating
whether he can hear a ghost, which is set to 1 if he is within Manhattan distance 2 of a ghost.
He receives four observation bits indicating whether he can feel a wall in each of the cardinal
directions, which is set to 1 if he is adjacent to a wall. Finally, he receives one observation
bit indicating whether he can smell food, which is set to 1 if he is adjacent or diagonally
adjacent to any food.
[Figure 2 (plot data omitted): four panels - Rocksample (11, 11), Rocksample (15, 15), Battleship and PocMan - plotting average (discounted or undiscounted) return against the number of simulations (bottom axis) and search time in seconds (top axis), for POMCP and PO-rollout with basic and preferred actions; the Rocksample (11, 11) panel also shows SARSOP.]
Figure 2: Performance of POMCP in rocksample (11,11) and (15,15), battleship and pocman. Each
point shows the mean discounted return from 1000 runs or 12 hours of total computation. The
search time for POMCP with preferred actions is shown on the top axis.
The pocman problem has approximately 10^56 states, 4 actions, and 1024
observations. For particle invigoration, 1 or 2 ghosts were teleported to a randomly selected
new location. The new particle was accepted if consistent with the last observation. When
using preferred actions, if PocMan was under the effect of a power pill, then he preferred to
move in directions where he saw ghosts. Otherwise, PocMan preferred to move in directions
where he didn't see ghosts, excluding the direction he just came from. The performance
of POMCP in pocman, with and without preferred actions, is shown in Figure 2. Using
preferred actions, POMCP achieved an average undiscounted return of over 300, compared
to 230 for the PO-rollout algorithm. Without domain knowledge, POMCP still achieved
an average undiscounted return of 260, compared to 130 for simple rollouts. A real-time
demonstration of POMCP using preferred actions is available online, along with source code
for POMCP (http://www.cs.ucl.ac.uk/staff/D.Silver/web/Applications.html).
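Read as code, the preferred-action rule for the pocman rollouts is only a few lines. The sketch below is an illustrative reading of the description above, not taken from the released source; the observation encoding and the fallback to all directions when no direction qualifies are assumptions.

```python
import random

DIRECTIONS = ["north", "east", "south", "west"]

def preferred_move(sees_ghost, power_pill_active, came_from):
    """Pick a rollout action from the four line-of-sight ghost bits.

    sees_ghost: dict direction -> bool (the four sight observation bits).
    came_from: the direction just arrived from (excluded when fleeing).
    """
    if power_pill_active:
        # Chase: prefer directions in which a ghost is visible.
        preferred = [d for d in DIRECTIONS if sees_ghost[d]]
    else:
        # Flee: prefer ghost-free directions, without doubling back.
        preferred = [d for d in DIRECTIONS
                     if not sees_ghost[d] and d != came_from]
    # Fallback to all directions if none qualifies (an assumption).
    return random.choice(preferred or DIRECTIONS)

# Example: a ghost is visible to the east and we just came from the south.
action = preferred_move({"north": False, "east": True,
                         "south": False, "west": False},
                        power_pill_active=False, came_from="south")
```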
6 Discussion
Traditionally, POMDP planning has focused on small problems that have few states or can
be neatly factorised into a compact representation. However, real-world problems are often
large and messy, with enormous state spaces and probability distributions that cannot be
conveniently factorised. In these challenging POMDPs, Monte-Carlo simulation provides
an effective mechanism both for tree search and for belief state updates, breaking the curse
of dimensionality and allowing much greater scalability than has previously been possible.
Unlike previous approaches to Monte-Carlo planning in POMDPs, the PO-UCT algorithm
provides a computationally efficient best-first search that focuses its samples in the most
promising regions of the search space. The POMCP algorithm uses these same samples to
provide a rich and effective belief state update. The battleship and pocman problems provide
two examples of large POMDPs which cannot easily be factored and are intractable to prior
algorithms for POMDP planning. POMCP was able to achieve high performance in these
challenging problems with just a few seconds of online computation.
References
[1] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[2] D. Bertsekas and D. Castañon. Rollout algorithms for stochastic scheduling problems. Journal of Heuristics, 5(1):89–108, 1999.
[3] R. Coulom. Efficient selectivity and backup operators in Monte-Carlo tree search. In 5th International Conference on Computers and Games, pages 72–83, 2006.
[4] H. Finnsson and Y. Björnsson. Simulation-based approach to general game playing. In 23rd Conference on Artificial Intelligence, pages 259–264, 2008.
[5] S. Gelly and D. Silver. Combining online and offline learning in UCT. In 17th International Conference on Machine Learning, pages 273–280, 2007.
[6] L. Kaelbling, M. Littman, and A. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99–134, 1995.
[7] M. Kearns, Y. Mansour, and A. Ng. Approximate planning in large POMDPs via reusable trajectories. In Advances in Neural Information Processing Systems 12. MIT Press, 2000.
[8] L. Kocsis and C. Szepesvari. Bandit based Monte-Carlo planning. In 15th European Conference on Machine Learning, pages 282–293, 2006.
[9] H. Kurniawati, D. Hsu, and W. Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Robotics: Science and Systems, 2008.
[10] R. Lorentz. Amazons discover Monte-Carlo. In Computers and Games, pages 13–24, 2008.
[11] S. Ong, S. Png, D. Hsu, and W. Lee. POMDPs for robotic tasks with mixed observability. In Robotics: Science and Systems, 2009.
[12] J. Pineau, G. Gordon, and S. Thrun. Anytime point-based approximations for large POMDPs. Journal of Artificial Intelligence Research, 27:335–380, 2006.
[13] S. Ross, J. Pineau, S. Paquet, and B. Chaib-draa. Online planning algorithms for POMDPs. Journal of Artificial Intelligence Research, 32:663–704, 2008.
[14] T. Smith and R. Simmons. Heuristic search value iteration for POMDPs. In 20th Conference on Uncertainty in Artificial Intelligence, 2004.
[15] G. Tesauro and G. Galperin. Online policy improvement using Monte-Carlo search. In Advances in Neural Information Processing Systems 9, pages 1068–1074, 1996.
Tight Sample Complexity of Large-Margin Learning
Sivan Sabato¹  Nathan Srebro²  Naftali Tishby¹
¹ School of Computer Science & Engineering, The Hebrew University, Jerusalem 91904, Israel
² Toyota Technological Institute at Chicago, Chicago, IL 60637, USA
{sivan sabato,tishby}@cs.huji.ac.il, [email protected]
Abstract
We obtain a tight distribution-specific characterization of the sample complexity of large-margin classification with L2 regularization: We introduce the
?-adapted-dimension, which is a simple function of the spectrum of a distribution?s covariance matrix, and show distribution-specific upper and lower bounds
on the sample complexity, both governed by the ?-adapted-dimension of the
source distribution. We conclude that this new quantity tightly characterizes the
true sample complexity of large-margin classification. The bounds hold for a rich
family of sub-Gaussian distributions.
1 Introduction
In this paper we tackle the problem of obtaining a tight characterization of the sample complexity
which a particular learning rule requires, in order to learn a particular source distribution. Specifically, we obtain a tight characterization of the sample complexity required for large (Euclidean)
margin learning to obtain low error for a distribution $D(X, Y)$, for $X \in \mathbb{R}^d$, $Y \in \{\pm 1\}$.
Most learning theory work focuses on upper-bounding the sample complexity. That is, on providing a bound $m(D, \epsilon)$ and proving that when using some specific learning rule, if the sample
size is at least $m(D, \epsilon)$, an excess error of at most $\epsilon$ (in expectation or with high probability) can
be ensured. For instance, for large-margin classification we know that if $P_D[\|X\| \le B] = 1$,
then $m(D, \epsilon)$ can be set to $O(B^2/(\gamma^2\epsilon^2))$ to get true error of no more than $\ell^*_\gamma + \epsilon$, where
$\ell^*_\gamma = \min_{\|w\| \le 1} P_D(Y\langle w, X\rangle \le \gamma)$ is the optimal margin error at margin $\gamma$.
Such upper bounds can be useful for understanding positive aspects of a learning rule. But it is
difficult to understand deficiencies of a learning rule, or to compare between different rules, based
on upper bounds alone. After all, it is possible, and often the case, that the true sample complexity,
i.e. the actual number of samples required to get low error, is much lower than the bound.
Of course, some sample complexity upper bounds are known to be "tight" or to have an almost-matching lower bound. This usually means that the bound is tight as a worst-case upper bound for
a specific class of distributions (e.g. all those with $P_D[\|X\| \le B] = 1$). That is, there exists some
source distribution for which the bound is tight. In other words, the bound concerns some quantity
of the distribution (e.g. the radius of the support), and is the lowest possible bound in terms of this
quantity. But this is not to say that for any specific distribution this quantity tightly characterizes the
sample complexity. For instance, we know that the sample complexity can be much smaller than the
bound based on the radius of the support of $X$, if the average norm $\mathbb{E}[\|X\|^2]$ is small. However, $\mathbb{E}[\|X\|^2]$ is also not
a precise characterization of the sample complexity, for instance in low dimensions.
The goal of this paper is to identify a simple quantity determined by the distribution that does
precisely characterize the sample complexity. That is, such that the actual sample complexity for the
learning rule on this specific distribution is governed, up to polylogarithmic factors, by this quantity.
In particular, we present the $\gamma$-adapted-dimension $k_\gamma(D)$. This measure refines both the dimension
and the average norm of $X$, and it can be easily calculated from the covariance matrix of $X$. We show
that for a rich family of "light tailed" distributions (specifically, sub-Gaussian distributions with
independent uncorrelated directions; see Section 2), the number of samples required for learning
by minimizing the $\gamma$-margin-violations is both lower-bounded and upper-bounded by $\tilde{\Theta}(k_\gamma)$. More
precisely, we show that the sample complexity $m(\gamma, \epsilon, D)$ required for achieving excess error of no
more than $\epsilon$ can be bounded from above and from below by:
$$\Omega(k_\gamma(D)) \;\le\; m(\gamma, \epsilon, D) \;\le\; \tilde{O}\Big(\frac{k_\gamma(D)}{\epsilon^2}\Big).$$
As can be seen in this bound, we are not concerned about tightly characterizing the dependence of
the sample complexity on the desired error [as done e.g. in 1], nor with obtaining tight bounds for
very small error levels. In fact, our results can be interpreted as studying the sample complexity
needed to obtain error well below random, but bounded away from zero. This is in contrast to
classical statistics asymptotics, which are also typically tight, but are valid only for very small $\epsilon$. As was
recently shown by Liang and Srebro [2], the quantities on which the sample complexity depends
for very small $\epsilon$ (in the classical statistics asymptotic regime) can be very different from those for
moderate error rates, which are more relevant for machine learning.
Our tight characterization, and in particular the distribution-specific lower bound on the sample
complexity that we establish, can be used to compare large-margin (L2 regularized) learning to other
learning rules. In Section 7 we provide two such examples: we use our lower bound to rigorously
establish a sample complexity gap between L1 and L2 regularization previously studied in [3], and to
show a large gap between discriminative and generative learning on a Gaussian-mixture distribution.
In this paper we focus only on large L2 margin classification. But in order to obtain the distribution-specific lower bound, we develop novel tools that we believe can be useful for obtaining lower
bounds also for other learning rules.
Related work
Most work on "sample complexity lower bounds" is directed at proving that under some set of
assumptions, there exists a source distribution for which one needs at least a certain number of
examples to learn with the required error and confidence [4, 5, 6]. This type of lower bound does
not, however, indicate much about the sample complexity of other distributions under the same set of
assumptions.
As for distribution-specific lower bounds, the classical analysis of Vapnik [7, Theorem 16.6] provides not only sufficient but also necessary conditions for the learnability of a hypothesis class with
respect to a specific distribution. The essential condition is that the $\epsilon$-entropy of the hypothesis
class with respect to the distribution be sub-linear in the limit of an infinite sample size. In some
sense, this criterion can be seen as providing a "lower bound" on learnability for a specific distribution. However, we are interested in finite-sample convergence rates, and would like those to depend
on simple properties of the distribution. The asymptotic arguments involved in Vapnik's general
learnability claim do not lend themselves easily to such analysis.
Benedek and Itai [8] show that if the distribution is known to the learner, a specific hypothesis
class is learnable if and only if there is a finite $\epsilon$-cover of this hypothesis class with respect to the
distribution. Ben-David et al. [9] consider a similar setting, and prove sample complexity lower
bounds for learning with any data distribution, for some binary hypothesis classes on the real line.
In both of these works, the lower bounds hold for any algorithm, but only for a worst-case target
hypothesis. Vayatis and Azencott [10] provide distribution-specific sample complexity upper bounds
for hypothesis classes with a limited VC-dimension, as a function of how balanced the hypotheses
are with respect to the considered distributions. These bounds are not tight for all distributions, thus
this work also does not provide true distribution-specific sample complexity.
2 Problem setting and definitions
Let $D$ be a distribution over $\mathbb{R}^d \times \{\pm 1\}$. $D_X$ will denote the restriction of $D$ to $\mathbb{R}^d$. We are
interested in linear separators, parametrized by unit-norm vectors in $B_1^d \triangleq \{w \in \mathbb{R}^d \mid \|w\|_2 \le 1\}$.
For a predictor $w$ denote its misclassification error with respect to distribution $D$ by $\ell(w, D) \triangleq P_{(X,Y)\sim D}[Y\langle w, X\rangle \le 0]$. For $\gamma > 0$, denote the $\gamma$-margin loss of $w$ with respect to $D$ by
$\ell_\gamma(w, D) \triangleq P_{(X,Y)\sim D}[Y\langle w, X\rangle \le \gamma]$. The minimal margin loss with respect to $D$ is denoted
by $\ell^*_\gamma(D) \triangleq \min_{w \in B_1^d} \ell_\gamma(w, D)$. For a sample $S = \{(x_i, y_i)\}_{i=1}^m$ such that $(x_i, y_i) \in \mathbb{R}^d \times \{\pm 1\}$,
the margin loss with respect to $S$ is denoted by $\ell_\gamma(w, S) \triangleq \frac{1}{m}|\{i \mid y_i\langle x_i, w\rangle \le \gamma\}|$ and the misclassification error is $\ell(w, S) \triangleq \frac{1}{m}|\{i \mid y_i\langle x_i, w\rangle \le 0\}|$. In this paper we are concerned with learning by
minimizing the margin loss. It will be convenient for us to discuss transductive learning algorithms.
Since many predictors minimize the margin loss, we define:
Definition 2.1. A margin-error minimization algorithm $\mathcal{A}$ is an algorithm whose input is a
margin $\gamma$, a training sample $S = \{(x_i, y_i)\}_{i=1}^m$ and an unlabeled test sample $\tilde{S}_X = \{\tilde{x}_i\}_{i=1}^m$,
which outputs a predictor $\tilde{w} \in \operatorname{argmin}_{w \in B_1^d} \ell_\gamma(w, S)$. We denote the output of the algorithm by
$\tilde{w} = \mathcal{A}_\gamma(S, \tilde{S}_X)$.
We will be concerned with the expected test loss of the algorithm given a random training sample and
a random test sample, each of size $m$, and define $\ell_m(\mathcal{A}_\gamma, D) \triangleq \mathbb{E}_{S,\tilde{S}\sim D^m}[\ell(\mathcal{A}_\gamma(S, \tilde{S}_X), \tilde{S})]$, where
$S, \tilde{S} \sim D^m$ independently. For $\gamma > 0$, $\epsilon \in [0, 1]$, and a distribution $D$, we denote the distribution-specific sample complexity by $m(\gamma, \epsilon, D)$: this is the minimal sample size such that for any margin-error minimization algorithm $\mathcal{A}$, and for any $m \ge m(\gamma, \epsilon, D)$, $\ell_m(\mathcal{A}_\gamma, D) - \ell^*_\gamma(D) \le \epsilon$.
Sub-Gaussian distributions
We will characterize the distribution-specific sample complexity in terms of the covariance of $X \sim D_X$. But in order to do so, we must assume that $X$ is not too heavy-tailed. Otherwise, $X$ can
have even infinite covariance but still be learnable, for instance if it has a tiny probability of having
an exponentially large norm. We will thus restrict ourselves to sub-Gaussian distributions. This
ensures light tails in all directions, while allowing a sufficiently rich family of distributions, as we
presently see. We also require a more restrictive condition, namely that $D_X$ can be rotated to a
product distribution over the axes of $\mathbb{R}^d$. A distribution can always be rotated so that its coordinates
are uncorrelated. Here we further require that they are independent, as of course holds for any
multivariate Gaussian distribution.
Definition 2.2 (See e.g. [11, 12]). A random variable $X$ is sub-Gaussian with moment $B$ (or
$B$-sub-Gaussian) for $B \ge 0$ if
$$\forall t \in \mathbb{R}, \quad \mathbb{E}[\exp(tX)] \le \exp(B^2 t^2/2). \tag{1}$$
We further say that $X$ is sub-Gaussian with relative moment $\rho = B/\sqrt{\mathbb{E}[X^2]}$.
The sub-Gaussian family is quite extensive: for instance, any bounded, Gaussian, or Gaussian-mixture random variable with mean zero is included in this family.
Definition 2.3. A distribution $D_X$ over $X \in \mathbb{R}^d$ is independently sub-Gaussian with relative
moment $\rho$ if there exists some orthonormal basis $a_1, \ldots, a_d \in \mathbb{R}^d$, such that the $\langle X, a_i\rangle$ are independent
sub-Gaussian random variables, each with relative moment $\rho$.
We will focus on the family $\mathcal{D}^{sg}_\rho$ of all independently $\rho$-sub-Gaussian distributions in arbitrary dimension, for a small fixed constant $\rho$. For instance, the family $\mathcal{D}^{sg}_{3/2}$ includes all Gaussian distributions, all distributions which are uniform over a (hyper)box, and all multi-Bernoulli distributions,
in addition to other less structured distributions. Our upper bounds and lower bounds will be tight
up to quantities which depend on $\rho$, which we will regard as a constant, but the tightness will not
depend on the dimensionality of the space or the variance of the distribution.
3 The $\gamma$-adapted-dimension
As mentioned in the introduction, the sample complexity of margin-error minimization can be upper-bounded in terms of the average norm $\mathbb{E}[\|X\|^2]$ by $m(\gamma, \epsilon, D) \le O(\mathbb{E}[\|X\|^2]/(\gamma^2\epsilon^2))$ [13]. Alternatively, we can rely only on the dimensionality and conclude $m(\gamma, \epsilon, D) \le \tilde{O}(d/\epsilon^2)$ [7]. Thus,
although both of these bounds are tight in the worst-case sense, i.e. they are the best bounds that
rely only on the norm or only on the dimensionality respectively, neither is tight in a distribution-specific sense: if the average norm is unbounded while the dimensionality is small, an arbitrarily
large gap is created between the true $m(\gamma, \epsilon, D)$ and the average-norm upper bound. The converse
happens if the dimensionality is arbitrarily high while the average norm is bounded.
Seeking a distribution-specific tight analysis, one simple option to try to tighten these bounds is to
consider their minimum, $\min(d, \mathbb{E}[\|X\|^2]/\gamma^2)/\epsilon^2$, which, trivially, is also an upper bound on the
sample complexity. However, this simple combination is also not tight: consider a distribution in
which there are a few directions with very high variance, but the combined variance in all other
directions is small. We will show that in such situations the sample complexity is characterized not
by the minimum of dimension and norm, but by the sum of the number of high-variance dimensions
and the average norm in the other directions. This behavior is captured by the $\gamma$-adapted-dimension:
Definition 3.1. Let $b > 0$ and $k$ a positive integer.
(a). A subset $\mathcal{X} \subseteq \mathbb{R}^d$ is $(b, k)$-limited if there exists a sub-space $V \subseteq \mathbb{R}^d$ of dimension $d - k$
such that $\mathcal{X} \subseteq \{x \in \mathbb{R}^d \mid \|x^\top P\|^2 \le b\}$, where $P$ is an orthogonal projection onto $V$.
(b). A distribution $D_X$ over $\mathbb{R}^d$ is $(b, k)$-limited if there exists a sub-space $V \subseteq \mathbb{R}^d$ of dimension $d - k$ such that $\mathbb{E}_{X\sim D_X}[\|X^\top P\|^2] \le b$, with $P$ an orthogonal projection onto $V$.
Definition 3.2. The $\gamma$-adapted-dimension of a distribution or a set, denoted by $k_\gamma$, is the minimum
$k$ such that the distribution or set is $(\gamma^2 k, k)$-limited.
It is easy to see that $k_\gamma(D_X)$ is upper-bounded by $\min(d, \mathbb{E}[\|X\|^2]/\gamma^2)$. Moreover, it can be much
smaller. For example, for $X \in \mathbb{R}^{1001}$ with independent coordinates such that the variance of the
first coordinate is 1000, but the variance of each remaining coordinate is 0.001, we have $k_1 = 1$ but
$d = \mathbb{E}[\|X\|^2] = 1001$. More generally, if $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_d$ are the eigenvalues of the covariance
matrix of $X$, then $k_\gamma = \min\{k \mid \sum_{i=k+1}^{d} \lambda_i \le \gamma^2 k\}$. A quantity similar to $k_\gamma$ was studied
previously in [14]. $k_\gamma$ is different in nature from some other quantities used for providing sample
complexity bounds in terms of eigenvalues, as in [15], since it is defined based on the eigenvalues
of the distribution and not of the sample. In Section 6 we will see that these can be quite different.
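Since $k_\gamma$ depends on $D_X$ only through its covariance spectrum, it is straightforward to compute. The sketch below is a direct reading of the characterization $k_\gamma = \min\{k \mid \sum_{i=k+1}^{d} \lambda_i \le \gamma^2 k\}$; it is illustrative code, not from the paper.

```python
import numpy as np

def gamma_adapted_dimension(cov, gamma):
    """k_gamma = min{ k : sum_{i>k} lambda_i <= gamma^2 * k },
    where lambda_1 >= ... >= lambda_d are the covariance eigenvalues."""
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # descending order
    tail = np.sum(lam)                            # sum_{i>k} lambda_i at k=0
    for k in range(len(lam) + 1):
        if tail <= gamma**2 * k:
            return k
        if k < len(lam):
            tail -= lam[k]  # drop lambda_{k+1} from the tail sum
    return len(lam)

# The example from the text: one coordinate with variance 1000 and 1000
# coordinates with variance 0.001 gives k_1 = 1, although d = 1001.
cov = np.diag([1000.0] + [0.001] * 1000)
assert gamma_adapted_dimension(cov, gamma=1.0) == 1
```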
In order to relate our upper and lower bounds, it will be useful to relate the $\gamma$-adapted-dimension for
different margins. The relationship is established in the following lemma, proved in the appendix:
Lemma 3.3. For $0 < \alpha < 1$, $\gamma > 0$ and a distribution $D_X$, $\;k_\gamma(D_X) \le k_{\alpha\gamma}(D_X) \le \frac{2 k_\gamma(D_X)}{\alpha^2} + 1$.
We proceed to provide a sample complexity upper bound based on the $\gamma$-adapted-dimension.
4 A sample complexity upper bound using the $\gamma$-adapted-dimension
In order to establish an upper bound on the sample complexity, we will bound the fat-shattering
dimension of the linear functions over a set in terms of the $\gamma$-adapted-dimension of the set. Recall
that the fat-shattering dimension is a classic quantity for proving sample complexity upper bounds:
Definition 4.1. Let $\mathcal{F}$ be a set of functions $f : \mathcal{X} \to \mathbb{R}$, and let $\gamma > 0$. The set $\{x_1, \ldots, x_m\} \subseteq \mathcal{X}$ is
$\gamma$-shattered by $\mathcal{F}$ if there exist $r_1, \ldots, r_m \in \mathbb{R}$ such that for all $y \in \{\pm 1\}^m$ there is an $f \in \mathcal{F}$ such
that $\forall i \in [m]$, $y_i(f(x_i) - r_i) \ge \gamma$. The $\gamma$-fat-shattering dimension of $\mathcal{F}$ is the size of the largest
set in $\mathcal{X}$ that is $\gamma$-shattered by $\mathcal{F}$.
The sample complexity of $\gamma$-loss minimization is bounded by $\tilde{O}(d_{\gamma/8}/\epsilon^2)$, where $d_{\gamma/8}$ is the $\gamma/8$-fat-shattering dimension of the function class [16, Theorem 13.4]. Let $\mathcal{W}(\mathcal{X})$ be the class of linear
functions restricted to the domain $\mathcal{X}$. For any set we show:
Theorem 4.2. If a set $\mathcal{X}$ is $(B^2, k)$-limited, then the $\gamma$-fat-shattering dimension of $\mathcal{W}(\mathcal{X})$ is at most
$\frac{3}{2}(B^2/\gamma^2 + k + 1)$. Consequently, it is also at most $3 k_\gamma(\mathcal{X}) + 1$.
Proof. Let $X$ be an $m \times d$ matrix whose rows are a set of $m$ points in $\mathbb{R}^d$ which is $\gamma$-shattered.
For any $\delta > 0$ we can augment $X$ with an additional column to form the matrix $\tilde{X}$ of dimensions
$m \times (d+1)$, such that for all $y \in \{-\gamma, +\gamma\}^m$ there is a $w_y \in B^{d+1}_{1+\delta}$ such that $\tilde{X} w_y = y$ (the details
can be found in the appendix). Since $\mathcal{X}$ is $(B^2, k)$-limited, there is an orthogonal projection matrix
$\bar{P}$ of size $(d+1) \times (d+1)$ such that $\forall i \in [m]$, $\|\tilde{X}_i \bar{P}\|^2 \le B^2$, where $\tilde{X}_i$ is the vector in row $i$ of
$\tilde{X}$. Let $V$ be the sub-space of dimension $d - k$ spanned by the columns of $\bar{P}$. To bound the size of
the shattered set, we show that the projected rows of $\tilde{X}$ on $V$ are "shattered" using projected labels.
We then proceed similarly to the proof of the norm-only fat-shattering bound [17].
We have $\tilde{X} = \tilde{X}\bar{P} + \tilde{X}(I - \bar{P})$. In addition, $\tilde{X} w_y = y$. Thus $y - \tilde{X}\bar{P} w_y = \tilde{X}(I - \bar{P}) w_y$.
$I - \bar{P}$ is a projection onto a $(k+1)$-dimensional space, thus the rank of $\tilde{X}(I - \bar{P})$ is at most $k+1$.
Let $T$ be an $m \times m$ orthogonal projection matrix onto the subspace orthogonal to the columns
of $\tilde{X}(I - \bar{P})$. This sub-space is of dimension at least $l = m - (k+1)$, thus $\mathrm{trace}(T) \ge l$.
$T(y - \tilde{X}\bar{P} w_y) = T\tilde{X}(I - \bar{P}) w_y = 0$. Thus $Ty = T\tilde{X}\bar{P} w_y$ for every $y \in \{-\gamma, +\gamma\}^m$.
Denote row $i$ of $T$ by $t_i$ and row $i$ of $T\tilde{X}\bar{P}$ by $z_i$. We have $\forall i \le m$, $\langle z_i, w_y\rangle = t_i y =
\sum_{j\le m} t_i[j]\, y[j]$. Therefore $\langle \sum_i z_i y[i], w_y\rangle = \sum_{i\le m}\sum_{j\le m} t_i[j]\, y[i]\, y[j]$. Since $\|w_y\| \le 1 + \delta$,
$\forall x \in \mathbb{R}^{d+1}$, $(1+\delta)\|x\| \ge \|x\|\,\|w_y\| \ge \langle x, w_y\rangle$. Thus $\forall y \in \{-\gamma, +\gamma\}^m$, $(1+\delta)\|\sum_i z_i y[i]\| \ge
\sum_{i\le m}\sum_{j\le m} t_i[j]\, y[i]\, y[j]$. Taking the expectation over $y$ chosen uniformly at random, we have
$$(1+\delta)\,\mathbb{E}\Big[\Big\|\sum_i z_i y[i]\Big\|\Big] \;\ge\; \sum_{i,j}\mathbb{E}\big[t_i[j]\, y[i]\, y[j]\big] \;=\; \gamma^2 \sum_i t_i[i] \;=\; \gamma^2\,\mathrm{trace}(T) \;\ge\; \gamma^2 l.$$
In addition,
$$\frac{1}{\gamma^2}\,\mathbb{E}\Big[\Big\|\sum_i z_i y[i]\Big\|^2\Big] \;=\; \sum_{i=1}^{m}\|z_i\|^2 \;=\; \mathrm{trace}(\bar{P}\tilde{X}^\top T \tilde{X}\bar{P}) \;\le\; \mathrm{trace}(\bar{P}\tilde{X}^\top \tilde{X}\bar{P}) \;\le\; B^2 m.$$
From the inequality $\mathbb{E}[X^2] \ge \mathbb{E}[X]^2$, it follows that $l^2 \le (1+\delta)^2 \frac{B^2}{\gamma^2} m$. Since this holds for any
$\delta > 0$, we can set $\delta = 0$ and solve for $m$. Thus
$$m \;\le\; (k+1) + \frac{B^2}{2\gamma^2} + \sqrt{\frac{B^4}{4\gamma^4} + \frac{B^2}{\gamma^2}(k+1)} \;\le\; (k+1) + \frac{B^2}{\gamma^2} + \sqrt{\frac{B^2}{\gamma^2}(k+1)} \;\le\; \frac{3}{2}\Big(\frac{B^2}{\gamma^2} + k + 1\Big).$$
Corollary 4.3. Let $D$ be a distribution over $\mathcal{X} \times \{\pm 1\}$, $\mathcal{X} \subseteq \mathbb{R}^d$. Then
$$m(\gamma, \epsilon, D) \le \tilde{O}\Big(\frac{k_{\gamma/8}(\mathcal{X})}{\epsilon^2}\Big).$$
The corollary above holds only for distributions with bounded support. However, since sub-Gaussian
variables have an exponentially decaying tail, we can use this corollary to provide a bound for
independently sub-Gaussian distributions as well (see appendix for proof):
Theorem 4.4 (Upper Bound for Distributions in $\mathcal{D}^{sg}_\rho$). For any distribution $D$ over $\mathbb{R}^d \times \{\pm 1\}$ such
that $D_X \in \mathcal{D}^{sg}_\rho$,
$$m(\gamma, \epsilon, D) = \tilde{O}\Big(\frac{\rho^2\, k_\gamma(D_X)}{\epsilon^2}\Big).$$
This new upper bound is tighter than norm-only and dimension-only upper bounds. But does the
$\gamma$-adapted-dimension characterize the true sample complexity of the distribution, or is it just another
upper bound? To answer this question, we need to be able to derive sample complexity lower bounds
as well. We consider this problem in the following section.
5 Sample complexity lower bounds using Gram-matrix eigenvalues
We wish to find a distribution-specific lower bound that depends on the $\gamma$-adapted-dimension, and
matches our upper bound as closely as possible. To do that, we will link the ability to learn with
a margin with properties of the data distribution. The ability to learn is closely related to the
probability of a sample to be shattered, as evident from Vapnik's formulations of learnability as a
function of the $\epsilon$-entropy. In the preceding section we used the fact that non-shattering (as captured
by the fat-shattering dimension) implies learnability. For the lower bound we use the converse fact,
presented below in Theorem 5.1: if a sample can be fat-shattered with a reasonably high probability,
then learning is impossible. We then relate the fat-shattering of a sample to the minimal eigenvalue
of its Gram matrix. This allows us to present a lower bound on the sample complexity using a lower
bound on the smallest eigenvalue of the Gram matrix of a sample drawn from the data distribution.
We use the term "$\gamma$-shattered at the origin" to indicate that a set is $\gamma$-shattered by setting the bias
$r \in \mathbb{R}^m$ (see Def. 4.1) to the zero vector.
Theorem 5.1. Let $D$ be a distribution over $\mathbb{R}^d \times \{\pm 1\}$. If the probability of a sample of size $m$
drawn from $D_X^m$ to be $\gamma$-shattered at the origin is at least $\eta$, then there is a margin-error minimization
algorithm $\mathcal{A}$, such that $\ell_{m/2}(\mathcal{A}_\gamma, D) \ge \eta/2$.
Proof. For a given distribution $D$, let $\mathcal{A}$ be an algorithm which, for every two input samples $S$ and
$\tilde{S}_X$, labels $\tilde{S}_X$ using the separator $w \in \operatorname{argmin}_{w \in B_1^d} \ell_\gamma(w, S)$ that maximizes the expected error
$\mathbb{E}_{\tilde{S}_Y}[\ell(w, \tilde{S})]$ over the random test labels.
For every $x \in \mathbb{R}^d$ there is a label $y \in \{\pm 1\}$ such that $P_{(X,Y)\sim D}[Y \ne y \mid X = x] \ge \frac{1}{2}$. If the set of
examples in $S_X$ and $\tilde{S}_X$ together is $\gamma$-shattered at the origin, then $\mathcal{A}$ chooses a separator with zero
margin loss on $S$, but loss of at least $\frac{1}{2}$ on $\tilde{S}$. Therefore $\ell_{m/2}(\mathcal{A}_\gamma, D) \ge \eta/2$.
The notion of shattering involves checking the existence of a unit-norm separator $w$ for each label vector $y \in \{\pm 1\}^m$. In general, there is no closed form for the minimum-norm separator. However,
the following theorem provides an equivalent and simple characterization for fat-shattering:
Theorem 5.2. Let $S = (X_1, \ldots, X_m)$ be a sample in $\mathbb{R}^d$, and denote by $X$ the $m \times d$ matrix whose rows are
the elements of $S$. Then $S$ is 1-shattered iff $XX^\top$ is invertible and $\forall y \in \{\pm 1\}^m$, $y^\top (XX^\top)^{-1} y \le 1$.
The proof of this theorem is in the appendix. The main issue in the proof is showing that if a set is
shattered, it is also shattered with exact margins, since the set of exact margins $\{\pm 1\}^m$ lies in the
convex hull of any set of non-exact margins that correspond to all the possible labelings. We can now
use the minimum eigenvalue of the Gram matrix to obtain a sufficient condition for fat-shattering,
after which we present the theorem linking eigenvalues and learnability. For a matrix $X$, $\lambda_n(X)$
denotes the $n$-th largest eigenvalue of $X$.
Lemma 5.3. Let $S = (X_1, \ldots, X_m)$ be a sample in $\mathbb{R}^d$, with $X$ as above. If $\lambda_m(XX^\top) \ge m$ then
$S$ is 1-shattered at the origin.
?
?
Proof. If ?m (XX
is invertible and ?1 ((XX ? )?1 ) ? 1/m. For any y ? {?1}m
? ) ? m ?then XX
? ?1
we have kyk = m and y (XX ) y ? kyk2 ?1 ((XX ? )?1 ) ? m(1/m) = 1. By Theorem 5.2 the
sample is 1-shattered at the origin.
Theorem 5.4. Let $D$ be a distribution over $\mathbb{R}^d \times \{\pm 1\}$, let $S$ be an i.i.d. sample of size $m$ drawn from $D$,
and denote by $X_S$ the $m \times d$ matrix whose rows are the points from $S$. If $P[\lambda_m(X_S X_S^\top) \ge m\gamma^2] \ge \eta$,
then there exists a margin-error minimization algorithm $\mathcal{A}$ such that $\ell_{m/2}(\mathcal{A}_\gamma, D) \ge \eta/2$.
Theorem 5.4 follows by scaling $X_S$ by $\gamma$, applying Lemma 5.3 to establish $\gamma$-fat-shattering with
probability at least $\eta$, then applying Theorem 5.1. Lemma 5.3 generalizes the requirement for linear
independence when shattering using hyperplanes with no margin (i.e. no regularization). For unregularized (homogeneous) linear separation, a sample is shattered iff it is linearly independent, i.e. if
$\lambda_m > 0$. Requiring $\lambda_m \ge m\gamma^2$ is enough for $\gamma$-fat-shattering. Theorem 5.4 then generalizes the
simple observation that if samples of size $m$ are linearly independent with high probability, there
is no hope of generalizing from $m/2$ points to the other $m/2$ using unregularized linear predictors.
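Lemma 5.3 gives a condition that is simple to test numerically. The sketch below (illustrative, not from the paper) checks the sufficient condition for $\gamma$-fat-shattering at the origin by scaling the sample by $1/\gamma$ and inspecting the smallest Gram-matrix eigenvalue; note that this condition is only sufficient, while Theorem 5.2 gives the exact characterization.

```python
import numpy as np

def fat_shattered_at_origin(X, gamma):
    """Sufficient condition of Lemma 5.3: the rows of X are gamma-fat-shattered
    at the origin if lambda_min(X X^T / gamma^2) >= m."""
    m = X.shape[0]
    gram = (X @ X.T) / gamma**2
    return np.linalg.eigvalsh(gram)[0] >= m   # eigvalsh sorts ascending

# Five orthogonal points of norm 10 are certified shattered at margin 1 ...
print(fat_shattered_at_origin(10.0 * np.eye(5), gamma=1.0))   # True: 100 >= 5
# ... while low-norm points are not certified by this condition.
print(fat_shattered_at_origin(0.1 * np.eye(5), gamma=1.0))    # False
```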
Theorem 5.4 can thus be used to derive a distribution-specific lower bound. Define:
$$m_\gamma(D) \;\triangleq\; \min\Big\{ m \;\Big|\; P_{S\sim D^m}\big[\lambda_m(X_S X_S^\top) \ge m\gamma^2\big] < \tfrac{1}{2} \Big\}.$$
Then for any $\epsilon < 1/4 - \ell^*_\gamma(D)$, we can conclude that $m(\gamma, \epsilon, D) \ge m_\gamma(D)$, that is, we cannot learn
within reasonable error with fewer than $m_\gamma$ examples. Recall that our upper bound on the sample
complexity from Section 4 was $\tilde{O}(k_\gamma)$. The remaining question is whether we can relate $m_\gamma$ and
$k_\gamma$, to establish that our lower bound and upper bound tightly specify the sample complexity.
6 A lower bound for independently sub-Gaussian distributions
As discussed in the previous section, to obtain a sample complexity lower bound we require a bound
on the value of the smallest eigenvalue of a random Gram matrix. The distribution of this eigenvalue
has been investigated under various assumptions. The cleanest results are in the case where $m, d \to
\infty$ and $\frac{m}{d} \to \beta < 1$, and the coordinates of each example are identically distributed:
Theorem 6.1 (Theorem 5.11 in [18]). Let $X_i$ be a series of $m_i \times d_i$ matrices whose entries are i.i.d.
random variables with mean zero, variance $\sigma^2$ and finite fourth moments. If $\lim_{i\to\infty} \frac{m_i}{d_i} = \beta < 1$,
then $\lim_{i\to\infty} \lambda_{m_i}\big(\frac{1}{d_i} X_i X_i^\top\big) = \sigma^2(1 - \sqrt{\beta})^2$.
This asymptotic limit can be used to calculate $m_\gamma$ and thus provide a lower bound on the sample
complexity: Let the coordinates of $X \in \mathbb{R}^d$ be i.i.d. with variance $\sigma^2$ and consider a sample of size
$m$. If $d, m$ are large enough, we have by Theorem 6.1:
$$\lambda_m(XX^\top) \;\approx\; d\sigma^2\big(1 - \sqrt{m/d}\big)^2 \;=\; \sigma^2\big(\sqrt{d} - \sqrt{m}\big)^2.$$
Solving $\sigma^2(\sqrt{d} - \sqrt{2m_\gamma})^2 = 2m_\gamma \gamma^2$ we get $m_\gamma \approx \frac{1}{2}\, d/(1 + \gamma/\sigma)^2$. We can also calculate the $\gamma$-adapted-dimension for this distribution to get $k_\gamma \approx d/(1 + \gamma^2/\sigma^2)$, and conclude that $\frac{1}{4} k_\gamma \le m_\gamma \le
\frac{1}{2} k_\gamma$. In this case, then, we are indeed able to relate the sample complexity lower bound with $k_\gamma$, the
same quantity that controls our upper bound. This conclusion is easy to derive from known results;
however it holds only asymptotically, and only for a highly limited set of distributions. Moreover,
since Theorem 6.1 holds asymptotically for each distribution separately, we cannot deduce from it
any finite-sample lower bounds for families of distributions.
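Theorem 6.1 is easy to probe numerically. The short simulation below (illustrative only; the dimensions are arbitrary choices) compares the empirical smallest eigenvalue of $\frac{1}{d}XX^\top$ with the predicted limit $\sigma^2(1-\sqrt{\beta})^2$ for i.i.d. Gaussian entries:

```python
import numpy as np

rng = np.random.default_rng(1)
d, beta, sigma = 4000, 0.25, 1.0          # aspect ratio m/d = beta < 1
m = int(beta * d)
X = sigma * rng.standard_normal((m, d))   # i.i.d. entries, variance sigma^2
lam_min = np.linalg.eigvalsh((X @ X.T) / d)[0]
predicted = sigma**2 * (1 - np.sqrt(beta))**2
print(lam_min, predicted)                 # both close to 0.25
```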
For our analysis we require finite-sample bounds for the smallest eigenvalue of a random Gram matrix. Rudelson and Vershynin [19, 20] provide such finite-sample lower bounds for distributions
with identically distributed sub-Gaussian coordinates. In the following theorem we generalize results of Rudelson and Vershynin to encompass also non-identically distributed coordinates. The
proof of Theorem 6.2 can be found in the appendix. Based on this theorem we conclude with Theorem 6.3, stated below, which constitutes our final sample complexity lower bound.
Theorem 6.2. Let $B > 0$. There is a constant $\beta > 0$ which depends only on $B$, such that for any
$\delta \in (0, 1)$ there exists a number $L_0$, such that for any independently sub-Gaussian distribution with
covariance matrix $\Sigma \preceq I$ and $\mathrm{trace}(\Sigma) \ge L_0$, if each of its independent sub-Gaussian coordinates
has moment $B$, then for any $m \le \beta \cdot \mathrm{trace}(\Sigma)$,
$$P[\lambda_m(X_m X_m^\top) \ge m] \ge 1 - \delta,$$
where $X_m$ is an $m \times d$ matrix whose rows are independent draws from $D_X$.
Theorem 6.3 (Lower bound for distributions in $\mathcal{D}^{sg}_\rho$). For any $\rho > 0$, there are a constant $\beta > 0$
and an integer $L_0$ such that for any $D$ such that $D_X \in \mathcal{D}^{sg}_\rho$ and $k_\gamma(D_X) > L_0$, for any margin
$\gamma > 0$ and any $\epsilon < \frac{1}{4} - \ell^*_\gamma(D)$,
$$m(\gamma, \epsilon, D) \ge \beta\, k_\gamma(D_X).$$
Proof. The covariance matrix of $D_X$ is clearly diagonal. We assume w.l.o.g. that $\Sigma =
\mathrm{diag}(\lambda_1, \ldots, \lambda_d)$ where $\lambda_1 \ge \ldots \ge \lambda_d > 0$. Let $S$ be an i.i.d. sample of size $m$ drawn from
$D$. Let $X$ be the $m \times d$ matrix whose rows are the unlabeled examples from $S$. Let $\delta$ be fixed, and
set $\beta$ and $L_0$ as defined in Theorem 6.2 for $\delta$. Assume $m \le \beta(k_\gamma - 1)$.
We would like to use Theorem 6.2 to bound the smallest eigenvalue of $XX^\top$ with high probability,
so that we can then apply Theorem 5.4 to get the desired lower bound. However, Theorem 6.2
holds only if all the coordinate variances are bounded by 1, and it requires that the moment, and not
the relative moment, be bounded. Thus we divide the problem into two cases, based on the value of
$\lambda_{k_\gamma+1}$, and apply Theorem 6.2 separately to each case.
Case I: Assume $\lambda_{k_\gamma+1} \ge \gamma^2$. Then $\forall i \in [k_\gamma]$, $\lambda_i \ge \gamma^2$. Let $\Sigma_1 = \mathrm{diag}(1/\lambda_1, \ldots, 1/\lambda_{k_\gamma}, 0, \ldots, 0)$.
The random matrix $X\sqrt{\Sigma_1}$ is drawn from an independently sub-Gaussian distribution, such that
each of its coordinates has sub-Gaussian moment $\rho$ and covariance matrix $\Sigma \cdot \Sigma_1 \preceq I_d$. In addition,
$\mathrm{trace}(\Sigma \cdot \Sigma_1) = k_\gamma \ge L_0$. Therefore Theorem 6.2 holds for $X\sqrt{\Sigma_1}$, and $P[\lambda_m(X\Sigma_1 X^\top) \ge m] \ge
1 - \delta$. Clearly, for any $X$, $\lambda_m(\frac{1}{\gamma^2} XX^\top) \ge \lambda_m(X\Sigma_1 X^\top)$. Thus $P[\lambda_m(\frac{1}{\gamma^2} XX^\top) \ge m] \ge 1 - \delta$.
Case II: Assume $\lambda_{k_\gamma+1} < \gamma^2$. Then $\lambda_i < \gamma^2$ for all $i \in \{k_\gamma + 1, \ldots, d\}$. Let $\Sigma_2 =
\mathrm{diag}(0, \ldots, 0, 1/\gamma^2, \ldots, 1/\gamma^2)$, with $k_\gamma$ zeros on the diagonal. Then the random matrix $X\sqrt{\Sigma_2}$
is drawn from an independently sub-Gaussian distribution with covariance matrix $\Sigma \cdot \Sigma_2 \preceq I_d$, such
that all its coordinates have sub-Gaussian moment $\rho$. In addition, from the properties of $k_\gamma$ (see
discussion in Section 2), $\mathrm{trace}(\Sigma \cdot \Sigma_2) = \frac{1}{\gamma^2}\sum_{i=k_\gamma+1}^{d} \lambda_i \ge k_\gamma - 1 \ge L_0 - 1$. Thus Theorem 6.2
holds for $X\sqrt{\Sigma_2}$, and so $P[\lambda_m(\frac{1}{\gamma^2} XX^\top) \ge m] \ge P[\lambda_m(X\Sigma_2 X^\top) \ge m] \ge 1 - \delta$.
In both cases $P[\lambda_m(\frac{1}{\gamma^2} XX^\top) \ge m] \ge 1 - \delta$ for any $m \le \beta(k_\gamma - 1)$. By Theorem 5.4, there exists
an algorithm $\mathcal{A}$ such that for any $m \le \beta(k_\gamma - 1) - 1$, $\ell_m(\mathcal{A}_\gamma, D) \ge \frac{1}{2} - \delta/2$. Therefore, for any
$\epsilon < \frac{1}{2} - \delta/2 - \ell^*_\gamma(D)$, we have $m(\gamma, \epsilon, D) \ge \beta(k_\gamma - 1)$. We get the theorem by setting $\delta = \frac{1}{4}$.
7 Summary and consequences
Theorem 4.4 and Theorem 6.3 provide an upper bound and a lower bound for the sample complexity
of any distribution $D$ whose data distribution is in $\mathcal{D}^{sg}_\rho$ for some fixed $\rho > 0$. We can thus draw the
following bound, which holds for any $\gamma > 0$ and $\epsilon \in (0, \frac{1}{4} - \ell^*_\gamma(D))$:
$$\Omega(k_\gamma(D_X)) \;\le\; m(\gamma, \epsilon, D) \;\le\; \tilde{O}\Big(\frac{k_\gamma(D_X)}{\epsilon^2}\Big). \tag{2}$$
On both sides of the bound, the hidden constants depend only on the constant $\rho$. This result shows
that the true sample complexity of learning each of these distributions is characterized by the $\gamma$-adapted-dimension. An interesting conclusion can be drawn as to the influence of the conditional
distribution of labels $D_{Y|X}$: since Eq. (2) holds for any $D_{Y|X}$, the effect of the direction of the best
separator on the sample complexity is bounded, even for highly non-spherical distributions. We can
use Eq. (2) to easily characterize the sample complexity behavior for interesting distributions, and
to compare L2 margin minimization to other learning methods.
Gaps between L1 and L2 regularization in the presence of irrelevant features. Ng [3] considers
learning a single relevant feature in the presence of many irrelevant features, and compares using
L1 regularization and L2 regularization. When $\|X\|_\infty \le 1$, upper bounds on learning with L1
regularization guarantee a sample complexity of $O(\log(d))$ for an L1-based learning rule [21]. In
order to compare this with the sample complexity of L2 regularized learning and establish a gap,
one must use a lower bound on the L2 sample complexity. The argument provided by Ng actually
assumes scale-invariance of the learning rule, and is therefore valid only for unregularized linear
learning. However, using our results we can easily establish a lower bound of $\Omega(d)$ for many specific
distributions with $\|X\|_\infty \le 1$ and $Y = X[1] \in \{\pm 1\}$. For instance, when each coordinate is an
independent Bernoulli variable, the distribution is sub-Gaussian with $\rho = 1$, and $k_1 = \lceil d/2 \rceil$.
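The value $k_1 = \lceil d/2 \rceil$ quoted above is a direct instance of the eigenvalue characterization from Section 3: with independent $\pm 1$ coordinates the covariance matrix is the identity, so all $d$ eigenvalues equal 1 and
$$k_1 \;=\; \min\Big\{k \;\Big|\; \sum_{i=k+1}^{d} \lambda_i \le 1^2 \cdot k\Big\} \;=\; \min\{k \mid d - k \le k\} \;=\; \lceil d/2 \rceil,$$
which yields the $\Omega(d)$ lower bound via Theorem 6.3.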
Gaps between generative and discriminative learning for a Gaussian mixture. Consider two
classes, each drawn from a unit-variance spherical Gaussian in a high dimension $\mathbb{R}^d$ and with a
large distance $2v \gg 1$ between the class means, such that $d \gg v^4$. Then $P_D[X \mid Y = y] =
\mathcal{N}(y v \cdot e_1, I_d)$, where $e_1$ is a unit vector in $\mathbb{R}^d$. For any $v$ and $d$, we have $D_X \in \mathcal{D}^{sg}_1$. For large
values of $v$, we have extremely low margin error at $\gamma = v/2$, and so we can hope to learn the
classes by looking for a large-margin separator. Indeed, we can calculate $k_\gamma = \lceil d/(1 + \frac{v^2}{4}) \rceil$, and
conclude that the sample complexity required is $\tilde{\Theta}(d/v^2)$. Now consider a generative approach:
fitting a spherical Gaussian model for each class. This amounts to estimating each class center as
the empirical average of the points in the class, and classifying based on the nearest estimated class
center. It is possible to show that for any constant $\epsilon > 0$, and for large enough $v$ and $d$, $O(d/v^4)$
samples are enough in order to ensure an error of $\epsilon$. This establishes a rather large gap of $\Omega(v^2)$
between the sample complexity of the discriminative approach and that of the generative one.
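The generative side of this comparison is easy to visualize in simulation. The sketch below is an illustration under the stated model only (the dimension, $v$, and sample sizes are arbitrary choices), not an experiment from the paper: it fits the nearest-class-mean rule and measures its test error.

```python
import numpy as np

rng = np.random.default_rng(0)
d, v, n_train, n_test = 400, 4.0, 200, 5000

def sample(n):
    """Draw n points from the mixture P[X | Y=y] = N(y * v * e_1, I_d)."""
    y = rng.choice([-1.0, 1.0], size=n)
    x = rng.standard_normal((n, d))
    x[:, 0] += v * y
    return x, y

# Generative approach: estimate each class mean; with equal priors,
# nearest-estimated-mean classification is sign(<x, mu_plus - mu_minus>).
xtr, ytr = sample(n_train)
w = xtr[ytr > 0].mean(axis=0) - xtr[ytr < 0].mean(axis=0)
xte, yte = sample(n_test)
err = np.mean(np.sign(xte @ w) != yte)
print(f"generative error with {n_train} samples: {err:.4f}")  # near zero
```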
To summarize, we have shown that the true sample complexity of large-margin learning of a rich
family of specific distributions is characterized by the $\gamma$-adapted-dimension. This result allows true
comparison between this learning algorithm and other algorithms, and has various applications, such
as semi-supervised learning and feature construction. The challenge of characterizing true sample
complexity extends to any distribution and any learning algorithm. We believe that obtaining answers to these questions is of great importance, both to learning theory and to learning applications.
Acknowledgments
The authors thank Boaz Nadler for many insightful discussions, and Karthik Sridharan for pointing
out [14] to us. Sivan Sabato is supported by the Adams Fellowship Program of the Israel Academy
of Sciences and Humanities. This work was supported by the NATO SfP grant 982480.
References
[1] I. Steinwart and C. Scovel. Fast rates for support vector machines using Gaussian kernels. Annals of Statistics, 35(2):575–607, 2007.
[2] P. Liang and N. Srebro. On the interaction between norm and dimensionality: Multiple regimes in learning. In ICML, 2010.
[3] A.Y. Ng. Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML, 2004.
[4] A. Antos and G. Lugosi. Strong minimax lower bounds for learning. Mach. Learn., 30(1):31–56, 1998.
[5] A. Ehrenfeucht, D. Haussler, M. Kearns, and L. Valiant. A general lower bound on the number of examples needed for learning. In Proceedings of the First Annual Workshop on Computational Learning Theory, pages 139–154, August 1988.
[6] C. Gentile and D.P. Helmbold. Improved lower bounds for learning from noisy examples: an information-theoretic approach. In COLT, pages 104–115, 1998.
[7] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.
[8] Gyora M. Benedek and Alon Itai. Learnability with respect to fixed distributions. Theoretical Computer Science, 86(2):377–389, September 1991.
[9] S. Ben-David, T. Lu, and D. Pál. Does unlabeled data provably help? In Proceedings of the Twenty-First Annual Conference on Computational Learning Theory, pages 33–44, 2008.
[10] N. Vayatis and R. Azencott. Distribution-dependent Vapnik-Chervonenkis bounds. In EuroCOLT '99, pages 230–240, London, UK, 1999. Springer-Verlag.
[11] D.J.H. Garling. Inequalities: A Journey into Linear Analysis. Cambridge University Press, 2007.
[12] V.V. Buldygin and Yu. V. Kozachenko. Metric Characterization of Random Variables and Random Processes. American Mathematical Society, 1998.
[13] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. In COLT 2001, volume 2111, pages 224–240. Springer, Berlin, 2001.
[14] O. Bousquet. Concentration Inequalities and Empirical Processes Theory Applied to the Analysis of Learning Algorithms. PhD thesis, Ecole Polytechnique, 2002.
[15] B. Schölkopf, J. Shawe-Taylor, A. J. Smola, and R.C. Williamson. Generalization bounds via eigenvalues of the gram matrix. Technical Report NC2-TR-1999-035, NeuroCOLT2, 1999.
[16] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 1999.
[17] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[18] Z. Bai and J.W. Silverstein. Spectral Analysis of Large Dimensional Random Matrices. Springer, second edition, 2010.
[19] M. Rudelson and R. Vershynin. The smallest singular value of a random rectangular matrix. Communications on Pure and Applied Mathematics, 62:1707–1739, 2009.
[20] M. Rudelson and R. Vershynin. The Littlewood-Offord problem and invertibility of random matrices. Advances in Mathematics, 218(2):600–633, 2008.
[21] T. Zhang. Covering number bounds of certain regularized linear function classes. Journal of Machine Learning Research, 2:527–550, 2002.
[22] G. Bennett, V. Goodman, and C. M. Newman. Norms of random matrices. Pacific J. Math., 59(2):359–365, 1975.
[23] F.L. Nazarov and A. Podkorytov. Ball, Haagerup, and distribution functions. Operator Theory: Advances and Applications, 113 (Complex analysis, operators, and related topics):247–267, 2000.
[24] R.E.A.C. Paley and A. Zygmund. A note on analytic functions in the unit circle. Proceedings of the Cambridge Philosophical Society, 28:266–272, 1932.
Boosting Classifier Cascades
Mohammad J. Saberian
Statistical Visual Computing Laboratory,
University of California, San Diego
La Jolla, CA 92039
[email protected]

Nuno Vasconcelos
Statistical Visual Computing Laboratory,
University of California, San Diego
La Jolla, CA 92039
[email protected]
Abstract
The problem of optimal and automatic design of a detector cascade is considered.
A novel mathematical model is introduced for a cascaded detector. This model is
analytically tractable, leads to recursive computation, and accounts for both classification and complexity. A boosting algorithm, FCBoost, is proposed for fully
automated cascade design. It exploits the new cascade model, minimizes a Lagrangian cost that accounts for both classification risk and complexity. It searches
the space of cascade configurations to automatically determine the optimal number of stages and their predictors, and is compatible with bootstrapping of negative examples and cost sensitive learning. Experiments show that the resulting
cascades have state-of-the-art performance in various computer vision problems.
1 Introduction
There are many applications where a classifier must be designed under computational constraints.
One problem where such constraints are extreme is that of object detection in computer vision.
To accomplish tasks such as face detection, the classifier must process thousands of examples per
image, extracted from all possible image locations and scales, at a rate of several images per second.
This problem has been the focus of substantial attention since the introduction of the detector cascade
architecture by Viola and Jones (VJ) in [13]. This architecture was used to design the first real time
face detector with state-of-the-art classification accuracy. The detector has, since, been deployed
in many practical applications of broad interest, e.g. face detection on low-complexity platforms
such as cameras or cell phones. The outstanding performance of the VJ detector is the result of 1) a
cascade of simple to complex classifiers that reject most non-faces with a few machine operations,
2) learning with a combination of boosting and Haar features of extremely low complexity, and 3)
use of bootstrapping to efficiently deal with the extremely large class of non-face examples.
While the resulting detector is fast and accurate, the process of designing a cascade is not. In
particular, VJ did not address the problem of how to automatically determine the optimal cascade
configuration, e.g. the numbers of cascade stages and weak learners per stage, or even how to design
individual stages so as to guarantee optimality of the cascade as a whole. As a result, extensive manual
supervision is required to design cascades with a good speed/accuracy trade-off. This includes trial-and-error tuning of the false positive/detection rate of each stage, and of the cascade configuration.
In practice, the design of a good cascade can take up several weeks. This has motivated a number
of enhancements to the VJ training procedure, which can be organized into three main areas: 1)
enhancement of the boosting algorithms used in cascade design, e.g. cost-sensitive variations of
boosting [12, 4, 8], float Boost [5] or KLBoost [6], 2) post processing of a learned cascade, by adjusting stage thresholds, to improve performance [7], and 3) specialized cascade architectures which
simplify the learning process, e.g. the embedded cascade (ChainBoost) of [15], where each stage
contains all weak learners of previous stages. These enhancements do not address the fundamental
limitations of the VJ design, namely how to guarantee overall cascade optimality.
[Figure: two panels of curves vs. boosting iterations (0-50); the left panel plots R_L and the right panel plots L, each for AdaBoost and ChainBoost.]
Figure 1: Plots of RL (left) and L (right) for detectors designed with AdaBoost and ChainBoost.
More recently, various works have attempted to address this problem [9, 8, 1, 14, 10]. However, the
proposed algorithms still rely on sequential learning of cascade stages, which is suboptimal, sometimes require manual supervision, do not search over cascade configurations, and frequently lack
a precise mathematical model for the cascade. In this work, we address these problems, through
two main contributions. The first is a mathematical model for a detector cascade, which is analytically tractable, accounts for both classification and complexity, and is amenable to recursive
computation. The second is a boosting algorithm, FCBoost, that exploits this model to solve the
cascade learning problem. FCBoost solves a Lagrangian optimization problem, where the classification risk is minimized under complexity constraints. The risk is that of the entire cascade, which
is learned holistically, rather than through sequential stage design, and FCBoost determines the optimal cascade configuration automatically. It is also compatible with bootstrapping and cost sensitive
boosting extensions, enabling efficient sampling of negative examples and explicit control of the
false positive/detection rate trade off. An extensive experimental evaluation, covering the problems
of face, car, and pedestrian detection demonstrates its superiority over previous approaches.
2 Problem Definition
A binary classifier h(x) maps an example x into a class label y \in \{-1, 1\} according to h(x) = sign[f(x)], where f(x) is a continuous-valued predictor. Optimal classifiers minimize a risk

R_L(f) = E_{X,Y}\{L[y, f(x)]\} \approx \frac{1}{|S_t|} \sum_i L[y_i, f(x_i)]    (1)

where S_t = \{(x_1, y_1), \ldots, (x_n, y_n)\} is a set of training examples, y_i \in \{1, -1\} the class label of example x_i, and L[y, f(x)] a loss function. Commonly used losses are upper bounds on the zero-one loss, whose risk is the probability of classification error. Hence, R_L is a measure of classification accuracy. For applications with computational constraints, optimal classifier design must also take into consideration the classification complexity. This is achieved by defining a computational risk

R_C(f) = E_{X,Y}\{L_C[y, C(f(x))]\} \approx \frac{1}{|S_t|} \sum_i L_C[y_i, C(f(x_i))]    (2)

where C(f(x)) is the complexity of evaluating f(x), and L_C[y, C(f(x))] a loss function that encodes the cost of this operation. In most detection problems, targets are rare events and contribute little to the overall complexity. In this case, which we assume throughout this work, L_C[1, C(f(x))] = 0 and L_C[-1, C(f(x))] is denoted L_C[C(f(x))]. The computational risk is thus

R_C(f) \approx \frac{1}{|S_t^-|} \sum_{x_i \in S_t^-} L_C[C(f(x_i))]    (3)

where S_t^- contains the negative examples of S_t. Usually, more accurate classifiers are more complex. For example in boosting, where the decision rule is a combination of weak rules, a finer approximation of the classification boundary (smaller error) requires more weak learners and computation.
Optimal classifier design under complexity constraints is a problem of constrained optimization, which can be solved with Lagrangian methods. These minimize a Lagrangian

\mathcal{L}(f; S_t) = \frac{1}{|S_t|} \sum_{x_i \in S_t} L[y_i, f(x_i)] + \frac{\eta}{|S_t^-|} \sum_{x_i \in S_t^-} L_C[C(f(x_i))]    (4)

where \eta is a Lagrange multiplier, which controls the trade-off between error rate and complexity.
Figure 1 illustrates this trade-off, by plotting the evolution of RL and L as a function of the boosting
iteration, for the AdaBoost algorithm [2]. While the risk always decreases with the addition of weak
learners, this is not true for the Lagrangian. After a small number of iterations, the gain in accuracy
does not justify the increase in classifier complexity. The design of classifiers under complexity
constraints has been addressed through the introduction of detector cascades. A detector cascade
H(x) implements a sequence of binary decisions hi (x), i = 1 . . . m. An example x is declared a
target (y = 1) if and only if it is declared a target by all stages of H, i.e. h_i(x) = 1 \forall i. Otherwise,
the example is rejected. For applications where the majority of examples can be rejected after a
small number of cascade stages, the average classification time is very small. However, the problem
of designing an optimal detector cascade is still poorly understood. A popular approach, known as
ChainBoost or embedded cascade [15], is to 1) use standard boosting algorithms to design a detector,
and 2) insert a rejection point after each weak learner. This is simple to implement, and creates
a cascade with as many stages as weak learners. However, the introduction of the intermediate
rejection points, a posteriori of detector design, sacrifices the risk-optimality of the detector. This
is illustrated in Figure 1, where the evolution of RL and L are also plotted for ChainBoost. In this
example, L is monotonically decreasing, i.e. the addition of weak learners no longer carries a large
complexity penalty. This is due to the fact that most negative examples are rejected in the earliest
cascade stages. On the other hand, the classification risk is more than double that of the original
boosted detector. It is not known how close ChainBoost is to optimal, in the sense of (4).
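To make the trade-off in (4) concrete, the following sketch (our own illustration, not the authors' code; numpy only, with hypothetical per-example inputs) evaluates the empirical Lagrangian of a detector on a validation set:

import numpy as np

def lagrangian(losses, complexities, labels, eta):
    """Empirical Lagrangian of (4): classification risk plus eta times
    the computational risk, the latter averaged over negatives only."""
    losses = np.asarray(losses, dtype=float)        # L[y_i, f(x_i)]
    complexities = np.asarray(complexities, float)  # L_C[C(f(x_i))]
    labels = np.asarray(labels)
    risk = losses.mean()
    comp = complexities[labels == -1].mean()
    return risk + eta * comp

# e.g. eta = 0.02, as used in the experiments of Section 6:
print(lagrangian([0.1, 0.8, 0.2, 0.4], [3, 10, 7, 2], [1, -1, 1, -1], 0.02))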
3 Classifier cascades
In this work, we seek the design of cascades that are provably optimal under (4). We start by
introducing a mathematical model for a detector cascade.
3.1 Cascade predictor
Let H(x) = {h1 (x), . . . , hm (x)} be a cascade of m detectors hi (x) = sgn[fi (x)]. To develop
some intuition, we start with a two-stage cascade, m = 2. The cascade implements the decision rule
H(F)(x) = sgn[F(x)]    (5)
where
F(x) = F(f_1, f_2)(x) = \begin{cases} f_1(x) & \text{if } f_1(x) < 0 \\ f_2(x) & \text{if } f_1(x) \geq 0 \end{cases}    (6)
     = f_1 u(-f_1) + u(f_1) f_2    (7)
is denoted the cascade predictor, u(.) is the step function and we omit the dependence on x for
notational simplicity. This equation can be extended to a cascade of m stages, by replacing the
predictor of the second stage, when m = 2, with the predictor of the remaining cascade, when m is
larger. Letting Fj = F(fj , . . . , fm ) be the cascade predictor for the cascade composed of stages j
to m
F = F_1 = f_1 u(-f_1) + u(f_1) F_2.    (8)
More generally, the following recursion holds
F_k = f_k u(-f_k) + u(f_k) F_{k+1}    (9)
with initial condition Fm = fm . In Appendix A, it is shown that combining (8) and (9) recursively
leads to
F = T_{1,m} + T_{2,m} f_m    (10)
  = T_{1,k} + T_{2,k} f_k u(-f_k) + T_{2,k} F_{k+1} u(f_k),  k < m,    (11)
with initial conditions T1,0 = 0, T2,0 = 1 and
T_{1,k+1} = T_{1,k} + f_k u(-f_k) T_{2,k},   T_{2,k+1} = T_{2,k} u(f_k).    (12)
Since T1,k , T2,k , and Fk+1 do not depend on fk , (10) and (11) make explicit the dependence of the
cascade predictor, F, on the predictor of the k th stage.
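The recursion (8)-(9) amounts to an early-exit evaluation rule. A minimal sketch (function names are ours):

def cascade_predict(stages, x):
    """Evaluate the cascade predictor F of (8)-(9) on one example x.
    stages: list of stage predictors f_k, each mapping x to a real score."""
    for f in stages[:-1]:
        score = f(x)
        if score < 0:        # f_k u(-f_k) term: the stage rejects, F = f_k(x)
            return score
    return stages[-1](x)     # u(f_k) F_{k+1} terms: survived all stages, F = f_m(x)

# The decision rule (5) is then sgn[cascade_predict(stages, x)].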
3.2 Differentiable approximation
Letting F(f_k + \epsilon g) = F(f_1, \ldots, f_k + \epsilon g, \ldots, f_m), the design of boosting algorithms requires the evaluation of both F(f_k + \epsilon g) and the functional derivative of F with respect to each f_k, along any direction g,

\langle \delta F(f_k), g \rangle = \frac{d}{d\epsilon} F(f_k + \epsilon g) \Big|_{\epsilon = 0}.

These are straightforward for the last stage since, from (10),

F(f_m + \epsilon g) = a^m + \epsilon b^m g,   \langle \delta F(f_m), g \rangle = b^m g,    (13)

where
a^m = T_{1,m} + T_{2,m} f_m = F(f_m),   b^m = T_{2,m}.    (14)

In general, however, the right-hand side of (11) is non-differentiable, due to the u(.) functions. A differentiable approximation is possible by adopting the classic sigmoidal approximation u(x) \approx \frac{\tanh(\sigma x) + 1}{2}, where \sigma is a relaxation parameter. Using this approximation in (11),

F = F(f_k) = T_{1,k} + T_{2,k} f_k (1 - u(f_k)) + T_{2,k} F_{k+1} u(f_k)    (15)
  \approx T_{1,k} + T_{2,k} \left\{ f_k + \frac{1}{2} [F_{k+1} - f_k][\tanh(\sigma f_k) + 1] \right\}.    (16)

It follows that

\langle \delta F(f_k), g \rangle = b^k g    (17)
b^k = \frac{1}{2} T_{2,k} \left\{ [1 - \tanh(\sigma f_k)] + \sigma [F_{k+1} - f_k][1 - \tanh^2(\sigma f_k)] \right\}.    (18)

F(f_k + \epsilon g) can also be simplified by resorting to a first order Taylor series expansion around f_k,

F(f_k + \epsilon g) \approx a^k + \epsilon b^k g    (19)
a^k = F(f_k) = T_{1,k} + T_{2,k} \left\{ f_k + \frac{1}{2} [F_{k+1} - f_k][\tanh(\sigma f_k) + 1] \right\}.    (20)
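The relaxed quantities a^k and b^k of (18) and (20) are cheap to compute once T_{1,k}, T_{2,k}, f_k, and F_{k+1} are known. A sketch (our own, assuming numpy arrays over examples):

import numpy as np

def relaxed_ak_bk(T1k, T2k, fk, Fk1, sigma):
    """a^k of (20) and b^k of (18), using u(x) ~ (tanh(sigma*x) + 1) / 2."""
    t = np.tanh(sigma * fk)
    ak = T1k + T2k * (fk + 0.5 * (Fk1 - fk) * (t + 1.0))
    bk = 0.5 * T2k * ((1.0 - t) + sigma * (Fk1 - fk) * (1.0 - t ** 2))
    return ak, bk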
3.3 Cascade complexity
In Appendix B, a similar analysis is performed for the computational complexity. Denoting by C(fk )
the complexity of evaluating fk , it is shown that
C(F) = P_{1,k} + P_{2,k} C(f_k) + P_{2,k} u(f_k) C(F_{k+1}),    (21)
with initial conditions C(F_{m+1}) = 0, P_{1,1} = 0, P_{2,1} = 1, and
P_{1,k+1} = P_{1,k} + C(f_k) P_{2,k},   P_{2,k+1} = P_{2,k} u(f_k).    (22)
This makes explicit the dependence of the cascade complexity on the complexity of the k th stage.
In practice, f_k = \sum_l c_l g_l for g_l \in U, where U is a set of functions of approximately identical complexity. For example, the set of projections into Haar features, in which C(f_k) is proportional to the number of features g_l. In general, f_k has three components. The first is a predictor that is also used in a previous cascade stage, e.g. f_k(x) = f_{k-1}(x) + c g(x) for an embedded cascade. In this case, f_{k-1}(x) has already been evaluated in stage k - 1 and is available with no computational cost. The second is the set O(f_k) of features that have been used in some stage j \leq k. These features are also available and require minimal computation (multiplication by the weight c_l and addition to the running sum). The third is the set N(f_k) of features that have not been used in any stage j \leq k. The overall computation is

C(f_k) = |N(f_k)| + \lambda |O(f_k)|,    (23)

where \lambda < 1 is the ratio of computation required to evaluate a used vs. new feature. For Haar wavelets, \lambda \approx 1/20. It follows that updating the predictor of the k-th stage increases its complexity to
C(f_k + \epsilon g) = \begin{cases} C(f_k) + \lambda & \text{if } g \in O(f_k) \\ C(f_k) + 1 & \text{if } g \in N(f_k), \end{cases}    (24)
and the complexity of the cascade to
C(F(f_k + \epsilon g)) = P_{1,k} + P_{2,k} C(f_k + \epsilon g) + P_{2,k} u(f_k + \epsilon g) C(F_{k+1})    (25)
                       = \alpha^k + \beta^k C(f_k + \epsilon g) + \gamma^k u(f_k + \epsilon g)    (26)
with
\alpha^k = P_{1,k},   \beta^k = P_{2,k},   \gamma^k = P_{2,k} C(F_{k+1}).    (27)
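The complexity recursion (21)-(22) has the same early-exit structure as the predictor. A sketch that measures the average cost per example (our own helper names):

def average_cost(stage_costs, stage_scores):
    """Average cascade evaluation cost, per (21): an example pays C(f_k)
    for every stage it reaches, and reaches stage k+1 only if u(f_k) = 1.
    stage_costs: C(f_k) per stage; stage_scores: per-example score lists."""
    total = 0.0
    for scores in stage_scores:
        for cost, s in zip(stage_costs, scores):
            total += cost
            if s < 0:        # u(f_k) = 0: the example is rejected here
                break
    return total / len(stage_scores)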
3.4 Neutral predictors
The models of (10), (11) and (21) will be used for the design of optimal cascades. Another observation that we will exploit is that
H[F(f1 , . . . , fm , fm )] = H[F(f1 , . . . , fm )].
This implies that repeating the last stage of a cascade does not change its decision rule. For this
reason n(x) = fm (x) is referred to as the neutral predictor of a cascade of m stages.
4 Boosting classifier cascades
In this section, we introduce a boosting algorithm for cascade design.
4.1 Boosting
Boosting algorithms combine weak learners to produce a complex decision boundary. Boosting iterations are gradient descent steps towards the predictor f (x) of minimum risk for the loss
L[y, f(x)] = e^{-y f(x)} [3]. Given a set U of weak learners, the functional derivative of R_L along the direction of a weak learner g is

\langle \delta R_L(f), g \rangle = \frac{1}{|S_t|} \sum_i \frac{d}{d\epsilon} e^{-y_i (f(x_i) + \epsilon g(x_i))} \Big|_{\epsilon = 0} = -\frac{1}{|S_t|} \sum_i y_i w_i g(x_i),    (28)

where w_i = e^{-y_i f(x_i)} is the weight of x_i. Hence, the best update is

g^*(x) = \arg\max_{g \in U} \langle -\delta R_L(f), g \rangle.    (29)

Letting I(x) be the indicator function, the optimal step size along the selected direction, g^*(x), is

c^* = \arg\min_{c \in \mathbb{R}} \sum_i e^{-y_i (f(x_i) + c g^*(x_i))} = \frac{1}{2} \log \frac{\sum_i w_i I(y_i = g^*(x_i))}{\sum_i w_i I(y_i \neq g^*(x_i))}.    (30)
The predictor is updated into f (x) = f (x) + c? g ? (x) and the procedure iterated.
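In code, one boosting iteration of (28)-(30) reduces to a weighted vote over the weak-learner pool. A sketch (our own, assuming binary weak learners g(x) in {-1, +1}):

import numpy as np

def boosting_step(f_scores, y, weak_outputs):
    """One step of (28)-(30). f_scores: current f(x_i); y: labels in {-1,+1};
    weak_outputs: (num_weak, n) array of g(x_i) for every g in U."""
    w = np.exp(-y * f_scores)            # w_i = exp(-y_i f(x_i))
    scores = weak_outputs @ (y * w)      # <-dR_L, g> for each g, up to 1/|S_t|
    best = int(np.argmax(scores))        # g* of (29)
    g = weak_outputs[best]
    c = 0.5 * np.log(w[y == g].sum() / w[y != g].sum())   # c* of (30)
    return best, c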
4.2 Cascade risk minimization
To derive a boosting algorithm for the design of detector cascades, we adopt the loss L[y, F(f_1, \ldots, f_m)(x)] = e^{-y F(f_1, \ldots, f_m)(x)}, and minimize the cascade risk

R_L(F) = E_{X,Y}\{e^{-y F(f_1, \ldots, f_m)}\} \approx \frac{1}{|S_t|} \sum_i e^{-y_i F(f_1, \ldots, f_m)(x_i)}.

Using (13) and (19),

\langle \delta R_L(F(f_k)), g \rangle = \frac{1}{|S_t|} \sum_i \frac{d}{d\epsilon} e^{-y_i [a^k(x_i) + \epsilon b^k(x_i) g(x_i)]} \Big|_{\epsilon = 0} = -\frac{1}{|S_t|} \sum_i y_i w_i^k b_i^k g(x_i)    (31)

where w_i^k = e^{-y_i a^k(x_i)}, b_i^k = b^k(x_i), and a^k, b^k are given by (14), (18), and (20). The optimal descent direction and step size for the k-th stage are then

g_k^* = \arg\max_{g \in U} \langle -\delta R_L(F(f_k)), g \rangle    (32)
c_k^* = \arg\min_{c \in \mathbb{R}} \sum_i w_i^k e^{-y_i b_i^k c g_k^*(x_i)}.    (33)

In general, because the b_i^k are not constant, there is no closed form for c_k^*, and a line search must be used. Note that, since a^k(x_i) = F(f_k)(x_i), the weighting mechanism is identical to that of boosting, i.e. points are reweighted according to how well they are classified by the current cascade. Given the optimal c^*, g^* for all stages, the impact of each update on the overall cascade risk, R_L, is evaluated and the stage of largest impact is updated.
4.3 Adding a new stage
Searching for the optimal cascade configuration requires support for the addition of new stages,
whenever necessary. This is accomplished by including a neutral predictor as the last stage of the
cascade. If adding a weak learner to the neutral stage reduces the risk further than the corresponding
addition to any other stage, a new stage (containing the neutral predictor plus the weak learner) is
created. Since this new stage includes the last stage of the previous cascade, the process mimics the
design of an embedded cascade. However, there are no restrictions that a new stage should be added
at each boosting iteration, or consist of a single weak learner.
4.4 Incorporating complexity constraints
Joint optimization of speed and accuracy requires the minimization of the Lagrangian of (4). This requires the computation of the functional derivatives

\langle \delta R_C(F(f_k)), g \rangle = \frac{1}{|S_t^-|} \sum_i y_i^s \frac{d}{d\epsilon} L_C[C(F(f_k + \epsilon g)(x_i))] \Big|_{\epsilon = 0}    (34)

where y_i^s = I(y_i = -1). Similarly to boosting, which upper bounds the zero-one loss u(-yf) by the exponential loss e^{-yf}, we rely on a loss that upper-bounds the true complexity. This upper-bound is a combination of a boosting-style bound u(f + \epsilon g) \leq e^{f + \epsilon g}, and the bound C(f + \epsilon g) \leq C(f) + 1, which follows from (24). Using (26),

L_C[C(F(f_k + \epsilon g)(x_i))] = L_C[\alpha^k + \beta^k C(f_k + \epsilon g) + \gamma^k u(f_k + \epsilon g)]    (35)
                               = \alpha^k + \beta^k (C(f_k) + 1) + \gamma^k e^{f_k + \epsilon g}    (36)

and, since \frac{d}{d\epsilon} L_C[C(F(f_k + \epsilon g))] \big|_{\epsilon = 0} = \gamma^k e^{f_k} g,

\langle \delta R_C(F(f_k)), g \rangle = \frac{1}{|S_t^-|} \sum_i y_i^s \gamma_i^k \psi_i^k g(x_i)    (37)

with \gamma_i^k = \gamma^k(x_i) and \psi_i^k = e^{f_k(x_i)}. The derivative of (4) with respect to the k-th stage predictor is then

\langle \delta \mathcal{L}(F(f_k)), g \rangle = \langle \delta R_L(F(f_k)), g \rangle + \eta \langle \delta R_C(F(f_k)), g \rangle    (38)
  = \sum_i \left[ -\frac{y_i w_i^k b_i^k}{|S_t|} + \eta \frac{y_i^s \gamma_i^k \psi_i^k}{|S_t^-|} \right] g(x_i)    (39)

with w_i^k = e^{-y_i a^k(x_i)} and a^k, b^k given by (14), (18), and (20). Given a set of weak learners U, the optimal descent direction and step size for the k-th stage are then

g_k^* = \arg\max_{g \in U} \langle -\delta \mathcal{L}(F(f_k)), g \rangle    (40)
c_k^* = \arg\min_{c \in \mathbb{R}} \left\{ \frac{1}{|S_t|} \sum_i w_i^k e^{-y_i b_i^k c g_k^*(x_i)} + \frac{\eta}{|S_t^-|} \sum_i y_i^s \gamma_i^k \psi_i^k e^{c g_k^*(x_i)} \right\}.    (41)

A pair (g_{k,1}^*, c_{k,1}^*) is found among the set O(f_k) and another among the set U - O(f_k). The one that most reduces (4) is selected as the best update for the k-th stage, and the stage with the largest impact is updated. This gradient descent procedure is denoted Fast Cascade Boosting (FCBoost).
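Structurally, FCBoost is a greedy coordinate search over stages, with the neutral stage acting as a "new stage" proposal. The following sketch is our own summary, not the authors' implementation; propose_update and add_weak are hypothetical helpers standing in for (40)-(41):

def fcboost(stages, weak_learners, data, eta, num_iters):
    """stages starts as [neutral_predictor]; the last entry is always neutral."""
    for _ in range(num_iters):
        # For each stage, the best (g*, c*) of (40)-(41) and resulting Lagrangian.
        proposals = [propose_update(k, stages, weak_learners, data, eta)
                     for k in range(len(stages))]
        k = min(range(len(stages)), key=lambda i: proposals[i][2])
        g, c, _ = proposals[k]
        if k == len(stages) - 1:
            # Updating the neutral stage spawns a new stage before it.
            stages.insert(k, add_weak(stages[k], g, c))
        else:
            stages[k] = add_weak(stages[k], g, c)
    return stages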
5 Extensions
FCBoost supports a number of extensions that we briefly discuss in this section.
5.1 Cost Sensitive Boosting
As is the case for AdaBoost, it is possible to use cost-sensitive risks in FCBoost, e.g. the risk of CS-AdaBoost, R_L(f) = E_{X,Y}\{y^c e^{-y f(x)}\} [12], or Asym-AdaBoost, R_L(f) = E_{X,Y}\{e^{-y^c y f(x)}\} [8], where y^c = C \cdot I(y = -1) + (1 - C) \cdot I(y = 1) and C is a cost factor.
Data Set     | Train pos | Train neg | Test pos | Test neg
Face         | 9,000     | 9,000     | 832      | 832
Car          | 1,000     | 10,000    | 100      | 2,000
Pedestrian   | 1,000     | 10,000    | 200      | 2,000

[Plot: R_L (0.24-0.36) vs. R_C (0-30).]
Figure 2: Left: data set characteristics. Right: Trade-off between the error (R_L) and complexity (R_C) components of the risk as \eta changes in (4).
Table 1: Performance of various classifiers on the face, car, and pedestrian test sets.
                        |       Face        |        Car        |    Pedestrian
Method                  | R_L   R_C   L     | R_L   R_C   L     | R_L   R_C   L
AdaBoost                | 0.20  50    1.20  | 0.22  50    1.22  | 0.35  50    1.35
ChainBoost              | 0.45  2.65  0.50  | 0.65  2.40  0.70  | 0.52  3.34  0.59
FCBoost (\eta = 0.02)   | 0.30  4.93  0.40  | 0.44  5.38  0.55  | 0.46  4.23  0.54
5.2 Bootstrapping
Bootstrapping is a procedure to augment the training set, by using false positives of the current
classifier as the training set for the following [11]. This improves performance, but is feasible only
when the bootstrapping procedure does not affect previously rejected examples. Otherwise, the
classifier will forget the previous negatives while learning from the new ones. Since FCBoost learns
all cascade stages simultaneously, and any stage can change after bootstrapping, this condition is
violated. To overcome the problem, rather than replacing all negative examples with false positives,
only a random subset is replaced. The negatives that remain in the training set prevent the classifier
from forgetting about the previous iterations. This method is used to update the training set whenever
the false positive rate of the cascade being learned reaches 50%.
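A sketch of this partial bootstrapping step (our own helper; cascade(x) > 0 denotes acceptance):

import random

def partial_bootstrap(negatives, pool, cascade, frac=0.5):
    """Replace a random subset of the negatives with current false positives,
    keeping the rest so previously rejected examples are not forgotten."""
    kept = [x for x in negatives if random.random() > frac]
    needed = len(negatives) - len(kept)
    false_positives = [x for x in pool if cascade(x) > 0]
    kept += random.sample(false_positives, min(needed, len(false_positives)))
    return kept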
6 Evaluation
Several experiments were performed to evaluate the performance of FCBoost, using face, car, and
pedestrian recognition data sets, from computer vision. In all cases, Haar wavelet features were used
as weak learners. Figure 2 summarizes the data sets.
Effect of \eta: We started by measuring the impact of \eta, see (4), on the accuracy and complexity of FCBoost cascades. Figure 2 plots the accuracy component of the risk, R_L, as a function of the complexity component, R_C, on the face data set, for cascades trained with different \eta. The leftmost point corresponds to \eta = 0.05, and the rightmost to \eta = 0. As expected, as \eta decreases the cascade has lower error and higher complexity. In the remaining experiments we used \eta = 0.02.
Cascade comparison: Figure 3 (a) repeats the plots of the Lagrangian of the risk shown in Figure 1, for classifiers trained with 50 boosting iterations, on the face data. In addition to AdaBoost and ChainBoost, it presents the curves of FCBoost with (\eta = 0.02) and without (\eta = 0) complexity constraints. Note that, in the latter case, performance is in between those of AdaBoost and ChainBoost. This reflects the fact that FCBoost (\eta = 0) does produce a cascade, but this cascade has a worse accuracy/complexity trade-off than that of ChainBoost. On the other hand, the inclusion of complexity constraints, FCBoost (\eta = 0.02), produces a cascade with the best trade-off. These results are confirmed by Table 1, which compares classifiers trained on all data sets. In all cases, AdaBoost detectors have the lowest error, but at a tremendous computational cost. On the other hand, ChainBoost cascades are always the fastest, at the cost of the highest classification error. Finally, FCBoost (\eta = 0.02) achieves the best accuracy/complexity trade-off: its cascade has the lowest risk Lagrangian L. It is close to ten times faster than the AdaBoost detector, and has half of the increase in classification error (with respect to AdaBoost) of the ChainBoost cascade. Based on these results, FCBoost (\eta = 0.02) was used in the last experiment.
[Figure: (a) Lagrangian L vs. boosting iterations (0-50) for FCBoost \eta = 0, FCBoost \eta = 0.02, AdaBoost, and ChainBoost; (b) ROC, detection rate (80-94%) vs. number of false positives (0-150), for Viola & Jones, ChainBoost, FloatBoost, WaldBoost, and FCBoost.]
Figure 3: a) Lagrangian of the risk for classifiers trained with various boosting algorithms. b) ROC of various
detector cascades on the MIT-CMU data set.
Table 2: Comparison of the speed of different detectors.
Method | VJ [13] | FloatBoost [5] | ChainBoost [15] | WaldBoost [9] | [8]   | FCBoost
Evals  | 8       | 18.9           | 18.1            | 10.84         | 15.45 | 7.2
Face detection: We finish with a face detector designed with FCBoost (\eta = 0.02), bootstrapping,
and 130K Haar features. To make the detector cost-sensitive, we used CS-AdaBoost with C = 0.99.
Figure 3 b) compares the resulting ROC to those of VJ [13], ChainBoost [15], FloatBoost [5] and
WaldBoost [9]. Table 2 presents a similar comparison for the detector speed (average number of
features evaluated per patch). Note the superior performance of the FCBoost cascade in terms of
both accuracy and speed. To the best of our knowledge, this is the fastest face detector reported to
date.
A Recursive form of cascade predictor
Applying (9) recursively to (8)
F = f_1 u(-f_1) + u(f_1) F_2    (42)
  = f_1 u(-f_1) + u(f_1) [f_2 u(-f_2) + u(f_2) F_3]    (43)
  = f_1 u(-f_1) + f_2 u(f_1) u(-f_2) + u(f_1) u(f_2) [f_3 u(-f_3) + u(f_3) F_4]    (44)
  = \sum_{i=1}^{k-1} f_i u(-f_i) \prod_{j<i} u(f_j) + F_k \prod_{j<k} u(f_j)    (45)
  = T_{1,k} + T_{2,k} F_k    (46)
where T_{1,k} = \sum_{i=1}^{k-1} f_i u(-f_i) \prod_{j<i} u(f_j) and T_{2,k} = \prod_{j<k} u(f_j) satisfy the recursions of (12).
Combining (46) and (9) then leads to (11). (10) follows from (46) and the initial condition Fm = fm .
B Recursive form of cascade complexity
Let C(fk ) be the complexity of evaluating fk . Then
C(F) = C(f_1) + u(f_1) C(F_2)    (47)
     = C(f_1) + u(f_1)[C(f_2) + u(f_2) C(F_3)]    (48)
     = \sum_{i=1}^{k-1} C(f_i) \prod_{j<i} u(f_j) + C(F_k) \prod_{j<k} u(f_j)    (49)
     = P_{1,k} + P_{2,k} C(F_k)    (50)
with
P_{1,k+1} = P_{1,k} + C(f_k) P_{2,k},   P_{2,k+1} = P_{2,k} u(f_k)    (51)
and initial conditions P_{1,1} = 0, P_{2,1} = 1. The relationship of (47) is a special case of
C(F_k) = C(f_k) + u(f_k) C(F_{k+1})    (52)
with initial conditions C(Fm ) = C(fm ) and C(Fm+1 ) = 0. Combining (52) with (50) leads to (21).
References
[1] S. C. Brubaker, M. D. Mullin, and J. M. Rehg. On the design of cascades of boosted ensembles for face detection. International Journal of Computer Vision, 77:65-86, 2008.
[2] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119-139, 1997.
[3] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28(2):337-407, 2000.
[4] X. Hou, C.-L. Liu, and T. Tan. Learning boosted asymmetric classifiers for object detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 330-338, 2006.
[5] S. Z. Li and Z. Zhang. FloatBoost learning and statistical face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 26(9):1112-1123, 2004.
[6] C. Liu and H.-Y. Shum. Kullback-Leibler boosting. In IEEE Conference on Computer Vision and Pattern Recognition, pages 587-594, 2003.
[7] H. Luo. Optimization design of cascaded classifiers. In IEEE Conference on Computer Vision and Pattern Recognition, pages 480-485, 2005.
[8] H. Masnadi-Shirazi and N. Vasconcelos. High detection-rate cascades for real-time object detection. In IEEE International Conference on Computer Vision, volume 2, pages 1-6, 2007.
[9] J. Sochman and J. Matas. WaldBoost - learning for time constrained sequential detection. In IEEE Conference on Computer Vision and Pattern Recognition, pages 150-157, 2005.
[10] J. Sun, J. M. Rehg, and A. Bobick. Automatic cascade training with perturbation bias. IEEE Conference on Computer Vision and Pattern Recognition, 2:276-283, 2004.
[11] K. K. Sung and T. Poggio. Example-based learning for view-based human face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20:39-51, 1998.
[12] P. Viola and M. Jones. Fast and robust classification using asymmetric AdaBoost and a detector cascade. In Advances in Neural Information Processing Systems, pages 1311-1318, 2001.
[13] P. Viola and M. Jones. Robust real-time object detection. International Journal of Computer Vision, 57(2):137-154, 2004.
[14] J. Wu, S. Brubaker, M. D. Mullin, and J. M. Rehg. Fast asymmetric learning for cascade face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 3:369-382, 2008.
[15] R. Xiao, L. Zhu, and H.-J. Zhang. Boosting chain learning for object detection. In IEEE International Conference on Computer Vision, pages 709-715, 2003.
3,351 | 4,034 | Multiparty Differential Privacy via Aggregation of
Locally Trained Classifiers
Manas A. Pathak
Carnegie Mellon University
Pittsburgh, PA
[email protected]
Shantanu Rane
Mitsubishi Electric Research Labs
Cambridge, MA
[email protected]
Bhiksha Raj
Carnegie Mellon University
Pittsburgh, PA
[email protected]
Abstract
As increasing amounts of sensitive personal information finds its way into data
repositories, it is important to develop analysis mechanisms that can derive aggregate information from these repositories without revealing information about
individual data instances. Though the differential privacy model provides a framework to analyze such mechanisms for databases belonging to a single party, this
framework has not yet been considered in a multi-party setting. In this paper, we
propose a privacy-preserving protocol for composing a differentially private aggregate classifier using classifiers trained locally by separate mutually untrusting
parties. The protocol allows these parties to interact with an untrusted curator to
construct additive shares of a perturbed aggregate classifier. We also present a
detailed theoretical analysis containing a proof of differential privacy of the perturbed aggregate classifier and a bound on the excess risk introduced by the perturbation. We verify the bound with an experimental evaluation on a real dataset.
1 Introduction
In recent years, individuals and corporate entities have gathered large quantities of personal data.
Often, they may wish to contribute the data towards the computation of functions such as various
statistics, responses to queries, classifiers etc. In the process, however, they risk compromising
the privacy of the individuals by releasing sensitive information such as their medical or financial
records, addresses and telephone numbers, preferences of various kinds which the individuals may
not want exposed. Merely anonymizing the data is not sufficient ? an adversary with access to
publicly available auxiliary information can still recover the information about individual, as was
the case with the de-anonymization of the Netflix dataset [1].
In this paper, we address the problem of learning a classifier from a multi-party collection of such
private data. A set of parties P1 , P2 , . . . , PK each possess data D1 , D2 , . . . , DK . The aim is to
learn a classifier from the union of all the data D_1 \cup D_2 \cup \cdots \cup D_K. We specifically consider a logistic
regression classifier, but as we shall see, the techniques are generally applicable to any classification
algorithm. The conditions we impose are that (a) None of the parties are willing to share the data
with one another or with any third party (e.g. a curator). (b) The computed classifier cannot be
reverse engineered to learn about any individual data instance possessed by any contributing party.
The conventional approach to learning functions in this manner is through secure multi-party computation (SMC) [2]. Within SMC individual parties use a combination of cryptographic techniques
and oblivious transfer to jointly compute a function of their private data [3, 4, 5]. The techniques
typically provide guarantees that none of the parties learn anything about the individual data besides
what may be inferred from the final result of the computation. Unfortunately, this does not satisfy
condition (b) above. For instance, when the outcome of the computation is a classifier, it does not
prevent an adversary from postulating the presence of data instances whose absence might change
the decision boundary of the classifier, and verifying the hypothesis using auxiliary information if
any. Moreover, for all but the simplest computational problems, SMC protocols tend to be highly
expensive, requiring iterated encryption and decryption and repeated communication of encrypted
partial results between participating parties.
An alternative theoretical model for protecting the privacy of individual data instances is differential
privacy [6]. Within this framework, a stochastic component is added to any computational mechanism, typically by the addition of noise. A mechanism evaluated over a database is said to satisfy
differential privacy if the probability of the mechanism producing a particular output is almost the
same regardless of the presence or absence of any individual data instance in the database. Differential privacy provides statistical guarantees that the output of the computation does not carry
information about individual data instances. On the other hand, in multiparty scenarios where the
data used to compute a function are distributed across several parties, it does not provide any mechanism for preserving the privacy of the contributing parties from one another or alternately, from a
curator who computes the function from the combined data.
We provide an alternative solution: within our approach the individual parties locally compute an
optimal classifier with their data. The individual classifiers are then averaged to obtain the final aggregate classifier. The aggregation is performed through a secure protocol that also adds a stochastic
component to the averaged classifier, such that the resulting aggregate classifier is differentially
private, i.e., no inference may be made about individual data instances from the classifier. This
procedure satisfies both criteria (a) and (b) mentioned above. Furthermore, it is significantly less
expensive than any SMC protocol to compute the classifier on the combined data.
We also present theoretical guarantees on the classifier. We provide a fundamental result that the
excess risk of an aggregate classifier obtained by averaging classifiers trained on individual subsets,
compared to the optimal classifier computed on the combined data in the union of all subsets, is
bounded by a quantity that depends on the size of the smallest subset. We prove that the addition of
the noise does indeed result in a differentially private classifier. We also provide a bound on the true
excess risk of the differentially private averaged classifier compared to the optimal classifier trained
on the combined data. Finally, we present experimental evaluation of the proposed technique on
a UCI Adult dataset which is a subset of the 1994 census database and empirically show that the
differentially private classifier trained using the proposed method provides the performance close to
the optimal classifier when the distribution of data across parties is reasonably equitable.
2 Differential Privacy
In this paper, we consider the differential privacy model introduced by Dwork [6]. Given any two databases D and D' differing by one element, which we will refer to as adjacent databases, a randomized query function M is said to be differentially private if the probability that M produces a response S on D is close to the probability that M produces the same response S on D'. As the query output is almost the same in the presence or absence of an individual entry with high probability, nothing can be learned about any individual entry from the output.

Definition. A randomized function M with a well-defined probability density P satisfies \epsilon-differential privacy if, for all adjacent databases D and D' and for any S \in range(M),

\left| \log \frac{P(M(D) = S)}{P(M(D') = S)} \right| \leq \epsilon.    (1)
In a classification setting, the training dataset may be thought of as the database and the algorithm
learning the classification rule as the query mechanism. A classifier satisfying differential privacy
implies that no additional details about the individual training data instances can be obtained with
certainty from the output of the learning algorithm, beyond the a priori background knowledge. Differential privacy provides an ad omnia guarantee as opposed to most other models that provide ad hoc
guarantees against a specific set of attacks and adversarial behaviors. By evaluating the differentially
private classifier over a large number of test instances, an adversary cannot learn the exact form of
the training data.
2.1 Related Work
Dwork et al. [7] proposed the exponential mechanism for creating functions satisfying differential
privacy by adding a perturbation term from the Laplace distribution scaled by the sensitivity of the
function. Chaudhuri and Monteleoni [8] use the exponential mechanism [7] to create a differentially private logistic regression classifier by perturbing the estimated parameters with multivariate
Laplacian noise scaled by the sensitivity of the classifier. They also propose another method to learn
classifiers satisfying differential privacy by adding a linear perturbation term to the objective function which is scaled by Laplacian noise. Nissim, et al. [9] show we can create a differentially private
function by adding noise from Laplace distribution scaled by the smooth sensitivity of the function.
While this mechanism results in a function with lower error, the smooth sensitivity of a function
can be difficult to compute in general. They also propose the sample and aggregate framework
for replacing the original function with a related function for which the smooth sensitivity can be
easily computed. Smith [10] presents a method for differentially private unbiased MLE using this
framework.
All the previous methods are inherently designed for the case where a single curator has access
to the entire data and is interested in releasing a differentially private function computed over the
data. To the best of our knowledge and belief, ours is the first method designed for releasing a
differentially private classifier computed over training data owned by different parties who do not
wish to disclose the data to each other. Our technique was principally motivated by the sample and
aggregate framework, where we considered the samples to be owned by individual parties. Similar
to [10], we choose a simple average as the aggregation function and the parties together release the
perturbed aggregate classifier which satisfies differential privacy. In the multi-party case, however,
adding the perturbation to the classifier is no longer straightforward and it is necessary to provide a
secure protocol to do this.
3 Multiparty Classification Protocol
The problem we address is as follows: a number of parties P1 , . . . , PK possess data sets D1 , . . . , DK
where D_i = (x, y)|_i includes a set of instances x and their binary labels y. We want to train a logistic regression classifier on the combined data such that no party is required to expose any of its data, and no information about any single data instance can be obtained from the learned classifier.
The protocol can be divided into the three following phases:
3.1 Training Local Classifiers on Individual Datasets
Each party P_j uses its data set (x, y)|_j to learn an \ell_2-regularized logistic regression classifier with weights \hat{w}_j. This is obtained by minimizing the following objective function

\hat{w}_j = \arg\min_w J(w) = \arg\min_w \frac{1}{n_j} \sum_i \log\left(1 + e^{-y_i w^T x_i}\right) + \lambda w^T w,    (2)

where \lambda > 0 is the regularization parameter. Note that no data or information has been shared yet.
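The local step (2) is plain \ell_2-regularized logistic regression. A minimal sketch with numpy/scipy (our own; the 1/n_j scaling matches (2)):

import numpy as np
from scipy.optimize import minimize

def train_local(X, y, lam):
    """Minimize (1/n) sum_i log(1 + exp(-y_i w.x_i)) + lam * w.w, as in (2)."""
    n, d = X.shape
    def J(w):
        return np.mean(np.logaddexp(0.0, -y * (X @ w))) + lam * w @ w
    return minimize(J, np.zeros(d), method="L-BFGS-B").x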
3.2 Publishing a Differentially Private Aggregate Classifier
The proposed solution, illustrated by Figure 1, proceeds as follows. The parties collaborate to compute an aggregate classifier given by \hat{w}^s = \frac{1}{K} \sum_j \hat{w}_j + \eta, where \eta is a d-dimensional random variable sampled from a Laplace distribution with parameter \frac{2}{n_{(1)} \epsilon \lambda} and n_{(1)} = \min_j n_j. As we shall see later, composing an aggregate classifier in this manner incurs only a well-bounded excess risk over training a classifier directly on the union of all data, while enabling the parties to maintain their privacy. We also show in Section 4.1 that the noise term \eta ensures that the classifier \hat{w}^s satisfies differential privacy, i.e., that individual data instances cannot be discerned from the aggregate classifier. The definition of the noise term \eta above may appear unusual at this stage, but it has an intuitive explanation: a classifier constructed by aggregating locally trained classifiers is limited by the performance of the individual classifier that has the least number of data instances. This will be formalized in Section 4.2. We note that the parties P_j cannot simply take their individually trained classifiers \hat{w}_j, perturb them with a noise vector, and publish the perturbed classifiers, because aggregating such classifiers will not give the correct \eta \sim \mathrm{Lap}(2/(n_{(1)} \epsilon \lambda)) in general. Since individual parties cannot simply add noise to their classifiers to impose differential privacy, the actual averaging operation must be performed such that the individual parties do not expose their own classifiers or the number of data instances they possess. We therefore use a private multiparty protocol, interacting with an untrusted curator "Charlie" to perform the averaging. The outcome of the protocol is such that each of the parties obtains additive shares of the final classifier \hat{w}^s, and these shares must be added to obtain \hat{w}^s.
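Functionally (setting aside the secure protocol below, which computes the same quantity without revealing the \hat{w}_j or the n_j), the published classifier is (a sketch, numpy only):

import numpy as np

def perturbed_aggregate(local_ws, sizes, eps, lam, rng=None):
    """w^s = (1/K) sum_j w_j + eta, with eta ~ Lap(2 / (n_(1) * eps * lam))."""
    rng = rng or np.random.default_rng()
    W = np.stack(local_ws)                    # K x d
    scale = 2.0 / (min(sizes) * eps * lam)    # n_(1) = min_j n_j
    eta = rng.laplace(scale=scale, size=W.shape[1])
    return W.mean(axis=0) + eta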
[Figure 1 diagram: Stage 1: the parties form additive secret shares of their database sizes and run a blind-and-permute exchange with the curator, producing an indicator vector with the permuted index of the smallest database. Stage 2: an encrypted Laplacian noise vector is added obliviously by the smallest database. Stage 3: additive secret sharing of the local classifiers; the noise term is combined and the permutations reversed, yielding perfectly private additive shares of \hat{w}^s.]
Figure 1: Multiparty protocol to securely compute additive shares of \hat{w}^s.
Privacy-Preserving Protocol
We use asymmetric key additively homomorphic encryption [11]. A desirable property of such
schemes is that we can perform operations on the ciphertext elements which map into known operations on the same plaintext elements. For an additively homomorphic encryption function \xi(\cdot),
\xi(a)\, \xi(b) = \xi(a + b),  \xi(a)^b = \xi(ab). Note that the additively homomorphic scheme employed
here is semantically secure, i.e., repeated encryption of the same plaintext will result in different
ciphertexts. For the ensuing protocol, encryption keys are considered public and decryption keys are
privately owned by the specified parties. Assuming the parties to be honest-but-curious, the steps of
the protocol are as follows.
Stage 1. Finding the index of the smallest database obfuscated by permutation.
1. Each party Pj computes nj = aj + bj , where aj and bj are integers representing
additive shares of the database lengths nj for j = 1, 2, ..., K. Denote the K-length
vectors of additive shares as a and b respectively.
2. The parties P_j mutually agree on a permutation \pi_1 on the index vector (1, 2, \ldots, K). This permutation is unknown to Charlie. Then, each party P_j sends its share a_j to party P_{\pi_1(j)}, and sends its share b_j to Charlie with the index changed according to the permutation. Thus, after this step, the parties have permuted additive shares given by \pi_1(a), while Charlie has permuted additive shares \pi_1(b).
3. The parties P_j generate a key pair (pk, sk), where pk is a public key for homomorphic encryption and sk is the secret decryption key known only to the parties but not to Charlie. Denote element-wise encryption of a by \xi(a). The parties send \xi(\pi_1(a)) = \pi_1(\xi(a)) to Charlie.
4. Charlie generates a random vector r = (r_1, r_2, \ldots, r_K), where the elements r_i are integers chosen uniformly at random and are equally likely to be positive or negative. Then, he computes \xi(\pi_1(a_j))\, \xi(r_j) = \xi(\pi_1(a_j) + r_j). In vector notation, he computes \xi(\pi_1(a) + r). Similarly, by subtracting the same random integers in the same order from his own shares, he obtains \pi_1(b) - r, where \pi_1 was the permutation unknown to him and applied by the parties. Then, Charlie selects a permutation \pi_2 at random and obtains \pi_2(\xi(\pi_1(a) + r)) = \xi(\pi_2(\pi_1(a) + r)) and \pi_2(\pi_1(b) - r). He sends \xi(\pi_2(\pi_1(a) + r)) to the individual parties in the following order: first element to P_1, second element to P_2, \ldots, K-th element to P_K.
5. Each party decrypts the signal received from Charlie. At this point, the parties P_1, P_2, \ldots, P_K respectively possess the elements of the vector \pi_2(\pi_1(a) + r), while Charlie possesses the vector \pi_2(\pi_1(b) - r). Since \pi_1 is unknown to Charlie and \pi_2 is unknown to the parties, the indices in both vectors have been completely obfuscated. Note also that adding the vector collectively owned by the parties and the vector owned by Charlie would give \pi_2(\pi_1(a) + r) + \pi_2(\pi_1(b) - r) = \pi_2(\pi_1(a + b)) = \pi_2(\pi_1(n)). The situation in this step is similar to that encountered in the "blind and permute" protocol used for minimum-finding by Du and Atallah [12].
6. Let \pi_2(\pi_1(a) + r) = \tilde{a} and \pi_2(\pi_1(b) - r) = \tilde{b}. Then n_i > n_j \Leftrightarrow \tilde{a}_i + \tilde{b}_i > \tilde{a}_j + \tilde{b}_j \Leftrightarrow \tilde{a}_i - \tilde{a}_j > \tilde{b}_j - \tilde{b}_i. For each (i, j) pair with i, j \in \{1, 2, \ldots, K\}, these comparisons can be solved by any implementation of a secure millionaire protocol [2]. When all the comparisons are done, Charlie finds the index \tilde{j} such that \tilde{a}_{\tilde{j}} + \tilde{b}_{\tilde{j}} = \min_j n_j. The true index corresponding to the smallest database has already been obfuscated by the steps of the protocol. Charlie holds only an additive share of \min_j n_j and thus cannot know the true length of the smallest database.
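The share-and-permute arithmetic of Stage 1 can be sanity-checked with a toy, completely insecure plaintext simulation (no encryption; our own illustration):

import random

K = 4
n = [6512, 4884, 8141, 3256]                      # private database sizes
a = [random.randrange(10**6) for _ in range(K)]   # parties' shares
b = [ni - ai for ni, ai in zip(n, a)]             # curator's shares

pi1 = random.sample(range(K), K)                  # parties' permutation
pi2 = random.sample(range(K), K)                  # curator's permutation
r = [random.randrange(-10**6, 10**6) for _ in range(K)]

a1 = [a[i] for i in pi1]; b1 = [b[i] for i in pi1]       # step 2
a2 = [x + ri for x, ri in zip(a1, r)]                    # step 4: curator blinds
b2 = [x - ri for x, ri in zip(b1, r)]
a3 = [a2[i] for i in pi2]; b3 = [b2[i] for i in pi2]

# Step 5: the doubly permuted shares still add up to pi2(pi1(n)).
assert [x + y for x, y in zip(a3, b3)] == [n[pi1[i]] for i in pi2]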
Stage 2. Obliviously obtaining encrypted noise vector from the smallest database.
1. Charlie constructs a K-length indicator vector u such that u_{\tilde{j}} = 1 and all other elements are 0. He then obtains the permuted vector \pi_2^{-1}(u), where \pi_2^{-1} inverts \pi_2. He generates a key-pair (pk', sk') for an additively homomorphic function \psi(\cdot), where only the encryption key pk' is publicly available to the parties P_j. Charlie then transmits \psi(\pi_2^{-1}(u)) = \pi_2^{-1}(\psi(u)) to the parties P_j.
2. The parties mutually obtain a permuted vector \pi_1^{-1}(\pi_2^{-1}(\psi(u))) = \psi(v), where \pi_1^{-1} inverts the permutation \pi_1 originally applied by the parties P_j in Stage 1. Now that both permutations have been removed, the index of the non-zero element in the indicator vector v corresponds to the true index of the smallest database. However, since the parties P_j cannot decrypt \psi(\cdot), they cannot find out this index.
3. For j = 1, \ldots, K, party P_j generates \eta_j, a d-dimensional noise vector sampled from a Laplace distribution with parameter \frac{2}{n_j \epsilon \lambda}. Then, it obtains a d-dimensional vector \zeta_j where, for i = 1, \ldots, d, \zeta_j(i) = \psi(v(j))^{\eta_j(i)} = \psi(v(j)\, \eta_j(i)).
4. All parties P_j now compute a d-dimensional noise vector \zeta such that, for i = 1, \ldots, d, \zeta(i) = \prod_j \zeta_j(i) = \prod_j \psi(v(j)\, \eta_j(i)) = \psi\left( \sum_j v(j)\, \eta_j(i) \right).
The reader will notice that, by construction, the above equation selects only the Laplace noise terms of the smallest database, while rejecting the noise terms of all other databases. This is because v has an element with value 1 at the index corresponding to the smallest database and zeroes everywhere else. Thus, the decryption of \zeta is equal to \eta, which was the desired perturbation term defined at the beginning of this section.
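The selection identity of Stage 2, that the indicator vector keeps only the smallest database's noise, can be verified in plaintext (toy check, our own; the real protocol performs the same sum under encryption):

import numpy as np

K, d, eps, lam = 4, 5, 0.2, 1.0
sizes = [6512, 4884, 8141, 3256]
rng = np.random.default_rng(0)

etas = [rng.laplace(scale=2.0 / (nj * eps * lam), size=d) for nj in sizes]
v = np.zeros(K)
v[int(np.argmin(sizes))] = 1.0                # indicator of smallest database

eta = sum(v[j] * etas[j] for j in range(K))   # sum_j v(j) eta_j
assert np.allclose(eta, etas[int(np.argmin(sizes))])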
Stage 3. Generating secret additive shares of \hat{w}^s.
1. One of the parties, say P_1, generates a d-dimensional random integer noise vector s, and transmits \zeta(i)\, \psi(s(i)) for all i = 1, \ldots, d to Charlie. Using s effectively prevents Charlie from discovering \eta, and therefore still ensures that no information is leaked about the database owners P_j. P_1 computes \hat{w}_1 - K s.
2. Charlie decrypts \zeta(i)\, \psi(s(i)) to obtain \eta(i) + s(i) for i = 1, \ldots, d. At this stage, the parties and Charlie have the following d-dimensional vectors: Charlie has K(\eta + s), P_1 has \hat{w}_1 - K s, and all other parties P_j, j = 2, \ldots, K, have \hat{w}_j. None of the K + 1 participants can share this data, for fear of compromising differential privacy.
3. Finally, Charlie and the K database-owning parties run a simple secure function evaluation protocol [13], at the end of which each of the K + 1 participants obtains an additive share of K \hat{w}^s. This protocol is provably private against honest-but-curious participants when there are no collusions. The resulting shares are published.
The above protocol ensures the following: (a) None of the K+1 participants, or users of the perturbed aggregate classifier, can find out the size of any database, and therefore none of the parties knows who contributed \eta. (b) Neither Charlie nor any of the parties P_j can individually remove the noise \eta
after the additive shares are published. This last property is important because if anyone knowingly
could remove the noise term, then the resulting classifier no longer provides differential privacy.
3.3 Testing Phase
A test participant Dave, having a test data instance x' \in \mathbb{R}^d and interested in applying the trained classifier, adds the published shares and divides by K to get the differentially private classifier \hat{w}^s. He can then compute the sigmoid function t = \frac{1}{1 + e^{-\hat{w}^{sT} x'}} and decide to classify x' with label -1 if t \leq \frac{1}{2} and with label 1 if t > \frac{1}{2}.
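A sketch of Dave's test-time computation (our own; shares are the K+1 published additive shares of K \hat{w}^s):

import numpy as np

def classify(shares, K, x):
    w = np.sum(shares, axis=0) / K            # recover w_hat^s
    t = 1.0 / (1.0 + np.exp(-(w @ x)))        # logistic sigmoid
    return 1 if t > 0.5 else -1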
4 Theoretical Analysis
4.1 Proof of Differential Privacy
We show that the perturbed aggregate classifier satisfies differential privacy. We use the following bound on the sensitivity of the regularized regression classifier as proved in Corollary 2 in [8]
restated in the appendix as Theorem 6.1.
Theorem 4.1. The classifier \hat{w}^s preserves \epsilon-differential privacy. For any two adjacent datasets D and D',

\left| \log \frac{P(\hat{w}^s | D)}{P(\hat{w}^s | D')} \right| \leq \epsilon.

Proof. Consider the case where one instance of the training dataset D is changed to result in an adjacent dataset D'. This implies a change in one element of the training dataset of one party, and thereby a change in the corresponding learned vector. Assuming that the change is in the dataset of the party P_j, the change in the learned vectors is only going to be in \hat{w}_j; let \hat{w}'_j denote the new classifier. In Theorem 6.1, we bound the sensitivity of \hat{w}_j as \|\hat{w}_j - \hat{w}'_j\|_1 \leq \frac{2}{n_j \lambda}. Following an argument similar to [7], considering that we learn the same vector \hat{w}^s using either the training dataset D or D', we have

\frac{P(\hat{w}^s | D)}{P(\hat{w}^s | D')} = \frac{P(\hat{w}_j + \eta | D)}{P(\hat{w}'_j + \eta | D')} = \frac{\exp\left[ \frac{n_{(1)} \epsilon \lambda}{2} \|\hat{w}_j\|_1 \right]}{\exp\left[ \frac{n_{(1)} \epsilon \lambda}{2} \|\hat{w}'_j\|_1 \right]} \leq \exp\left( \frac{n_{(1)} \epsilon \lambda}{2} \|\hat{w}_j - \hat{w}'_j\|_1 \right) \leq \exp\left( \frac{n_{(1)} \epsilon \lambda}{2} \cdot \frac{2}{n_j \lambda} \right) = \exp\left( \epsilon \frac{n_{(1)}}{n_j} \right) \leq \exp(\epsilon),

by the definition of function sensitivity. Similarly, we can lower bound the ratio by \exp(-\epsilon).
4.2 Analysis of Excess Error
In the following discussion, we consider how much excess error is introduced when using a perturbed aggregate classifier \hat{w}^s satisfying differential privacy, as opposed to the classifier w^* trained on the entire training data while ignoring the privacy constraints, as well as the unperturbed aggregate classifier \hat{w}.
We first establish a bound on the \ell_2 norm of the difference between the aggregate classifier \hat{w} and the classifier w^* trained over the entire training data. To prove the bound we apply Lemma 1 from [8], restated as Lemma 6.2 in the appendix. Please refer to the appendix for the proof of the following theorem.

Theorem 4.2. Given the aggregate classifier \hat{w}, the classifier w^* trained over the entire training data, and n_{(1)} the size of the smallest training dataset,

\|\hat{w} - w^*\|_2 \leq \frac{K - 1}{n_{(1)} \lambda}.
The bound is inversely proportional to the number of instances in the smallest dataset. This indicates that when the datasets are of disparate sizes, \hat{w} will be very different from w^*. The largest possible value for n_{(1)} is \frac{n}{K}, in which case all parties have an equal amount of training data and \hat{w} will be closest to w^*. In the one-party case, K = 1, the bound indicates that the norm of the difference would be upper bounded by zero, which is a valid sanity check, as the aggregate classifier \hat{w} is then the same as w^*.
We use this result to establish a bound on the empirical risk of the perturbed aggregate classifier \hat{w}^s = \hat{w} + \eta over the empirical risk of the unperturbed classifier w^* in the following theorem. Please refer to the appendix for the proof.

Theorem 4.3. If all data instances x_i lie in a unit ball, with probability at least 1 - \delta, the empirical regularized excess risk of the perturbed aggregate classifier \hat{w}^s over the classifier w^* trained over the entire training data is

J(\hat{w}^s) \leq J(w^*) + \frac{(K-1)^2 (\lambda + 1)}{2 n_{(1)}^2 \lambda^2} + \frac{2 d^2 (\lambda + 1)}{n_{(1)}^2 \epsilon^2 \lambda^2} \log^2\left(\frac{d}{\delta}\right) + \frac{2 d (K-1) (\lambda + 1)}{n_{(1)}^2 \epsilon \lambda^2} \log\left(\frac{d}{\delta}\right).
The bound suggests an error because of two factors: aggregation and perturbation. The bound
increases for smaller values of implying a tighter definition of differential privacy, indicating a
clear trade-off between privacy and utility. The bound is also inversely proportional to n2(1) implying
an increase in excess risk when the parties have training datasets of disparate sizes.
In the limiting case ε → ∞, we are adding a perturbation term η sampled from a Laplacian distribution of infinitesimally small variance, resulting in the perturbed classifier being almost the same as using the unperturbed aggregate classifier w̄, satisfying a very loose definition of differential privacy. With such a value of ε, our bound becomes

J(\bar{w}) \le J(w^*) + \frac{(K-1)^2(\lambda+1)}{2 n_{(1)}^2 \lambda^2}. \quad (3)
Similar to the analysis of Theorem 4.2, the excess error in using an aggregate classifier is inversely proportional to the size of the smallest dataset n_(1), and in the one-party case K = 1 the bound becomes zero, as the aggregate classifier w̄ is the same as w*. Also, for a small value of ε in the one-party case K = 1 and n_(1) = n, our bound reduces to that in Lemma 3 of [8],

J(\hat{w}^s) \le J(w^*) + \frac{2 d^2 (\lambda+1)}{n^2 \epsilon^2 \lambda^2} \log^2\frac{d}{\delta}. \quad (4)
While the previous theorem gives us a bound on the empirical excess risk over a given training dataset, it is important to consider a bound on the true excess risk of ŵ^s over w*. Let us denote the true risk of the classifier ŵ^s by J̃(ŵ^s) = E[J(ŵ^s)] and, similarly, the true risk of the classifier w* by J̃(w*) = E[J(w*)]. In the following theorem, we apply the result from [14], which uses the bound on the empirical excess risk to form a bound on the true excess risk. Please refer to the appendix for the proof.
Theorem 4.4. If all training data instances x_i lie in a unit ball, with probability at least 1 − δ, the true excess risk of the perturbed aggregate classifier ŵ^s over the classifier w* trained over the entire training data is

\tilde{J}(\hat{w}^s) \le \tilde{J}(w^*) + \frac{2(K-1)^2(\lambda+1)}{2 n_{(1)}^2 \lambda^2} + \frac{4 d^2 (\lambda+1)}{n_{(1)}^2 \epsilon^2 \lambda^2} \log^2\frac{d}{\delta} + \frac{4 d (K-1)(\lambda+1)}{n_{(1)}^2 \epsilon \lambda^2} \log\frac{d}{\delta} + \frac{16}{\lambda n}\left(32 + \log\frac{1}{\delta}\right).
5 Experiments
[Figure 2: Classifier performance (test error vs. ε) evaluated for w*, w* + η, and ŵ^s for different data splits. Curves: non-private all data; DP all data; DP aggregate with n_(1) = 6512, 4884, and 3256.]

We perform an empirical evaluation of the proposed differentially private classifier to obtain a characterization of the increase in the error due to perturbation. We use the Adult dataset from the UCI machine learning repository [15], consisting of personal information records extracted from the census database, and the task is to predict whether a given person has an annual income over
$50,000. The choice of the dataset is motivated as a realistic example for application of data privacy techniques. The original Adult data set has six continuous and eight categorical features. We use pre-processing similar to [16]: the continuous features are discretized into quintiles, and each quintile is represented by a binary feature. Each categorical feature is converted to as many binary features as its cardinality. The dataset contains 32,561 training and 16,281 test instances each with
123 features.¹ In Figure 2, we compare the test error of perturbed aggregate classifiers trained over data from five parties for different values of ε. We consider three situations: all parties with equal
datasets containing 6512 instances (even split, n(1) = 20% of n), parties with datasets containing
4884, 6512, 6512, 6512, 8141 instances (n(1) = 15% of n), and parties with datasets containing
3256, 6512, 6512, 6512, 9769 instances (n(1) = 10% of n). We also compare with the error of
the classifier trained using combined training data and its perturbed version satisfying differential
privacy. We chose the value of the regularization parameter λ = 1 and the results displayed are
averaged over 200 executions.
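A minimal sketch of this pre-processing, assuming a pandas DataFrame with separate lists of continuous and categorical column names (the helper and its interface are our own illustration):

```python
import pandas as pd

def preprocess(df, continuous_cols, categorical_cols):
    """Discretize continuous features into quintiles and binarize everything."""
    parts = []
    for col in continuous_cols:
        # five quintile bins, each represented by one binary indicator
        quintile = pd.qcut(df[col], q=5, labels=False, duplicates="drop")
        parts.append(pd.get_dummies(quintile, prefix=col))
    for col in categorical_cols:
        # one binary feature per category value
        parts.append(pd.get_dummies(df[col], prefix=col))
    return pd.concat(parts, axis=1).astype(int)
```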
The perturbed aggregate classifier trained using the maximum n_(1) = 6512 does consistently better than those for lower values of n_(1), which is what our theory suggests. Also, the test error for all perturbed aggregate classifiers drops with ε, but comparatively faster for the even split, and converges to the test error of the classifier trained over the combined data. As expected, the differentially private classifier trained over the entire training data does much better than the perturbed aggregate classifiers, with an error equal to the unperturbed classifier except for small values of ε. The lower error of this classifier is at the cost of the loss in privacy of the parties, as they would need to share the data in order to train the classifier over combined data.
6 Conclusion
We proposed a method for composing an aggregate classifier satisfying ε-differential privacy from classifiers locally trained by multiple untrusting parties. The upper bound on the excess risk of the perturbed aggregate classifier as compared to the optimal classifier trained over the complete data without privacy constraints is inversely proportional to the privacy parameter ε, suggesting an inherent tradeoff between privacy and utility. The bound is also inversely proportional to the size of the smallest training dataset, implying the best performance when the datasets are of equal sizes. Experimental results on the UCI Adult data also show the behavior suggested by the bound, and we observe that the proposed method provides classification performance close to the optimal non-private classifier for appropriate values of ε. In future work, we seek to generalize the theoretical analysis of the perturbed aggregate classifier to the setting in which each party has data generated from a different distribution.
¹The dataset can be downloaded from http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html#a9a
References
[1] Arvind Narayanan and Vitaly Shmatikov. De-anonymizing social networks. In IEEE Symposium on Security and Privacy, pages 173–187, 2009.
[2] Andrew Yao. Protocols for secure computations (extended abstract). In IEEE Symposium on Foundations of Computer Science, 1982.
[3] Jaideep Vaidya, Chris Clifton, Murat Kantarcioglu, and A. Scott Patterson. Privacy-preserving decision trees over vertically partitioned data. TKDD, 2(3), 2008.
[4] Jaideep Vaidya, Murat Kantarcioglu, and Chris Clifton. Privacy-preserving naive Bayes classification. VLDB J., 17(4):879–898, 2008.
[5] Jaideep Vaidya, Hwanjo Yu, and Xiaoqian Jiang. Privacy-preserving SVM classification. Knowledge and Information Systems, 14(2):161–178, 2008.
[6] Cynthia Dwork. Differential privacy. In International Colloquium on Automata, Languages and Programming, 2006.
[7] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference, pages 265–284, 2006.
[8] Kamalika Chaudhuri and Claire Monteleoni. Privacy-preserving logistic regression. In Neural Information Processing Systems, pages 289–296, 2008.
[9] Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. Smooth sensitivity and sampling in private data analysis. In ACM Symposium on Theory of Computing, pages 75–84, 2007.
[10] Adam Smith. Efficient, differentially private point estimators. arXiv:0809.4794v1 [cs.CR], 2008.
[11] Pascal Paillier. Public-key cryptosystems based on composite degree residuosity classes. In EUROCRYPT, 1999.
[12] Mikhail Atallah and Jiangtao Li. Secure outsourcing of sequence comparisons. International Journal of Information Security, 4(4):277–287, 2005.
[13] Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computation. In Proceedings of the ACM Symposium on the Theory of Computing, pages 1–10, 1988.
[14] Karthik Sridharan, Shai Shalev-Shwartz, and Nathan Srebro. Fast rates for regularized objectives. In Neural Information Processing Systems, pages 1545–1552, 2008.
[15] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[16] John Platt. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods – Support Vector Learning, pages 185–208, 1999.
Semi-Supervised Learning with Adversarially
Missing Label Information
Umar Syed
Ben Taskar
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
{usyed,taskar}@cis.upenn.edu
Abstract
We address the problem of semi-supervised learning in an adversarial setting. Instead of assuming that labels are missing at random, we analyze a less favorable scenario where the label information can be missing partially and arbitrarily,
which is motivated by several practical examples. We present nearly matching
upper and lower generalization bounds for learning in this setting under reasonable assumptions about available label information. Motivated by the analysis, we
formulate a convex optimization problem for parameter estimation, derive an efficient algorithm, and analyze its convergence. We provide experimental results on
several standard data sets showing the robustness of our algorithm to the pattern
of missing label information, outperforming several strong baselines.
1 Introduction
Semi-supervised learning algorithms use both labeled and unlabeled examples. Most theoretical
analyses of semi-supervised learning assume that m + n labeled examples are drawn i.i.d. from a
distribution, and then a subset of size n is chosen uniformly at random and their labels are erased
[1]. This missing-at-random assumption is best suited for a situation where the labels are acquired
by annotating a random subset of all available data. But in many applications of semi-supervised
learning, the partially-labeled data is ?naturally occurring?, and the learning algorithm has no control
over which examples were labeled.
For example, pictures on popular websites like Facebook and Flickr are tagged by users at their discretion, and it is difficult to know how users decide which pictures to tag. A similar problem occurs
when data is submitted to an online labor marketplace, such as Amazon Mechanical Turk, to be
manually labeled. The workers who label the data are often poorly motivated, and may deliberately
skip examples that are difficult to correctly label. In such a setting, a learning algorithm should not
assume that the examples were labeled at random.
Additionally, in many semi-supervised learning settings, the partial label information is not provided
on a per-example basis. For example, in multiple instance learning [2], examples are presented to
a learning algorithm in sets, with either zero or one positive examples per set. In graph-based
regularization [3], a learning algorithm is given information about which examples are likely to
have the same label, but not necessarily the identity of that label. Recently, there has been much
interest in algorithms that learn from labeled features [4]; in this setting, the learning algorithm is
given information about the expected value of several features with respect to the true distribution
on labeled examples.
To summarize, in a typical semi-supervised learning problem, label information is often missing in
an arbitrary fashion, and even when present, does not always have a simple form, like one label per
example. Our goal in this paper is to develop and analyze a learning algorithm that is explicitly
1
designed for these types of problems. We derive our learning algorithm within a framework that is
expressive enough to permit a very general notion of label information, allowing us to make minimal
assumptions about which examples in a data set have been labeled, how they have been labeled,
and why. We present both theoretical upper and lower bounds for learning in this framework, and
motivated by these bounds, derive a simple yet provably optimal learning algorithm. We also provide
experimental results on several standard data sets, which show that our algorithm is effective and
robust when the label information has been provided by ?lazy? or ?unhelpful? labelers.
Related Work: Our learning framework is related to the malicious label noise setting, in which
the labeler is allowed to mislabel a small fraction of the training set (this is a special case of the
even more challenging malicious noise setting [5], where an adversary can inject a small number
of arbitrary examples into the training set). Learning with this type of label noise is known to be
quite difficult, and positive results often make quite restrictive assumptions about the underlying
data distribution [6, 7]. By contrast, our results apply far more generally, at the expense of assuming
a more benign (but possibly more realistic) model of label noise, where the labeler can adversarially
erase labels, but not change them. In other words, we assume that the labeler equivocates, but does
not lie. The difference in these assumptions shows up quite clearly in our analysis: As we point out
in Section 3, our bounds become vacuous if the labeler is allowed to mislabel data.
In Section 2 we describe how our framework encodes label information in a label regularization
function, which closely resembles the idea of a compatibility function introduced by Balcan & Blum
[8]. However, they did not analyze a setting where this function is selected adversarially.
2 Learning Framework
Let X be the set of all possible examples, and Y the set of all possible labels, where |Y| = k. Let D be an unknown distribution on X × Y. We write x and y as abbreviations for (x₁, . . . , x_m) ∈ X^m and (y₁, . . . , y_m) ∈ Y^m, respectively. We write (x, y) ∼ D^m to denote that each (x_i, y_i) is drawn i.i.d. from the distribution D on X × Y, and x ∼ D^m to denote that each x_i is drawn i.i.d. from the marginal distribution of D on X.
Let (x̄, ȳ) ∼ D^m be the m labeled training examples. In supervised learning, one assumes access to the entire training set (x̄, ȳ). In semi-supervised learning, one assumes access to only some of the labels ȳ, and in most theoretical analyses, the missing components of ȳ are assumed to have been selected uniformly at random.

We make a much weaker assumption about what label information is available. We assume that, after the labeled training set (x̄, ȳ) has been drawn, the learning algorithm is only given access to the examples x̄ and to a label regularization function R. The function R encodes some information about the labels ȳ of x̄, and is selected by a potentially adversarial labeler from a family R(x̄, ȳ).

A label regularization function R maps each possible soft labeling q of the training examples x̄ to a real number R(q) (a soft labeling is a natural generalization of a labeling that we will define formally in a moment). Except for knowing that R belongs to R(x̄, ȳ), the learner can make no assumptions about how the labeler selects R. We give examples of label regularization functions in Section 2.1.
Let Δ denote the set of distributions on Y. A soft labeling q ∈ Δ^m of the training examples x̄ is a doubly-indexed vector, where q(i, y) is interpreted as the probability that example x̄_i has label y ∈ Y. The correct soft labeling has q(i, y) = 1{y = ȳ_i}, where the indicator function 1{·} is 1 when its argument is true and 0 otherwise; we overload notation and write ȳ to denote the correct soft labeling.
Although the labeler is possibly adversarial, the family R(x̄, ȳ) of label regularization functions restricts the choices the labeler can make. We are interested in designing learning algorithms that work well when each R ∈ R(x̄, ȳ) assigns a low value to the correct labeling ȳ. In the examples we describe in Section 2.1, the correct labeling ȳ will be near the minimum of R, but there will be many other minima and near-minima as well. This is the sense in which label information is "missing": it is difficult for any learning algorithm to distinguish among these minima.

We emphasize that, while our algorithms work best when ȳ is close to the minimum of each R ∈ R(x̄, ȳ), nothing in our framework requires this to be true; in Section 3 we will see that our learning bounds degrade gracefully as this condition is violated.
We are interested in learning a parameterized model that predicts a label y given an example x. Let L(θ, x, y) be the loss of parameter θ ∈ R^d with respect to labeled example (x, y). While some of the development in this paper will apply to generic loss functions, two loss functions will particularly interest us: the negative log-likelihood of a log-linear model,

L_{like}(\theta, x, y) = -\log p_\theta(y \mid x) = -\log \frac{\exp(\theta^T \phi(x, y))}{\sum_{y'} \exp(\theta^T \phi(x, y'))},

where φ(x, y) ∈ R^d is the feature function, and the 0-1 loss of a linear classifier,

L_{0,1}(\theta, x, y) = 1\{\arg\max_{y' \in Y} \theta^T \phi(x, y') \ne y\}.
Given training examples x̄, label regularization function R, and loss function L, the goal of a learning algorithm is to find a parameter θ that minimizes the expected loss E_D[L(θ, x, y)], where E_D[·] denotes expectation with respect to (x, y) ∼ D.

Let E_{x̄,q}[f(x, y)] denote the expected value of f(x, y) when example x is chosen uniformly at random from the training examples x̄ and, supposing that this is example x̄_i, label y is chosen from the distribution q(i, ·). Accordingly, E_{x̄,ȳ}[f(x, y)] denotes the expected value of f(x, y) when labeled example (x, y) is chosen uniformly at random from the labeled training examples (x̄, ȳ).
2.1 Examples of Label Regularization Functions
To make the concept of a label regularization function more clear, we describe several well-known
learning settings in which the information provided to the learning algorithm is less than the fully
labeled training set. We show that, for each of these settings, there is a natural definition of R that
captures the information that is provided to the learning algorithm, and thus each of these settings
can be seen as special cases of our framework.
Before proceeding with the partially labeled cases, we explain how supervised learning can be expressed in our framework. In the supervised learning setting, the label of every example in the training set is revealed to the learner. In this setting, the label regularization function family R(x̄, ȳ) contains a single function R_ȳ such that R_ȳ(q) = 0 if q = ȳ, and R_ȳ(q) = ∞ otherwise.
In the semi-supervised learning setting, the labels of only some of the training examples are revealed. In this case, there is a function R_I ∈ R(x̄, ȳ) for each I ⊆ [m] such that R_I(q) = 0 if q(i, y) = 1{y = ȳ_i} for all i ∈ I and y ∈ Y, and R_I(q) = ∞ otherwise. In other words, R_I(q) is zero if and only if the soft labeling q agrees with ȳ on the examples in I. This implies that R_I(q) is independent of how q labels examples not in I; these are the examples whose labels are missing.
In the ambiguous learning setting [9, 10], which is a generalization of semi-supervised learning, the labeler reveals a label set Ỹ_i ⊆ Y for each training example x̄_i such that ȳ_i ∈ Ỹ_i. That is, for each training example, the learning algorithm is given a set of possible labels the example can have (semi-supervised learning is the special case where each label set has size 1 or k). Letting Ỹ = (Ỹ₁, . . . , Ỹ_m) be all the label sets revealed to the learner, there is a function R_Ỹ ∈ R(x̄, ȳ) for each possible Ỹ such that R_Ỹ(q) = 0 if supp(q_i) ⊆ Ỹ_i for all i ∈ [m], and R_Ỹ(q) = ∞ otherwise. Here q_i ≜ q(i, ·) and supp(q_i) is the support of label distribution q_i. In other words, R_Ỹ(q) is zero if and only if the soft labeling q is supported on the sets Ỹ₁, . . . , Ỹ_m.
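To make these definitions concrete, the following sketch (ours, not the paper's) implements R_Ỹ for soft labelings stored as an m × k matrix, with R_I recovered as the special case of singleton label sets for i ∈ I:

```python
import numpy as np

def R_ambiguous(q, label_sets):
    """R_Ytilde(q): 0 if each row of q is supported on its label set, else infinity.

    q: (m, k) array, q[i, y] = probability that example i has label y.
    label_sets: list of m sets of allowed label indices.
    """
    for i, allowed in enumerate(label_sets):
        support = set(np.nonzero(q[i] > 0)[0])
        if not support <= allowed:
            return float("inf")
    return 0.0

def R_semisupervised(q, observed):
    """R_I(q) for observed = {i: y_i}: the special case of singleton label sets."""
    m, k = q.shape
    sets = [{observed[i]} if i in observed else set(range(k)) for i in range(m)]
    return R_ambiguous(q, sets)
```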
The label regularization functions described above essentially give only local information; they specify, for each example in the training set, which labels are possible for that example. In some cases,
we may want to allow the labeler to provide more global information about the correct labeling.
One example of providing global information is Laplacian regularization, a kind of graph-based
regularization [3] that encodes information about which examples are likely to have the same labels.
For any soft labeling q, let q[y] be the m-length vector whose i-th component is q(i, y). The Laplacian regularizer is defined to be R_L(q) = Σ_{y∈Y} q[y]ᵀ L(x̄) q[y], where L(x̄) is an m × m positive semi-definite matrix defined so that R_L(q) is large whenever examples in x̄ that are believed to have the same label are assigned different label distributions by q.
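A small sketch of R_L (ours), using the standard unnormalized graph Laplacian L = D − W built from a symmetric similarity matrix W; any positive semi-definite L(x̄) would do:

```python
import numpy as np

def laplacian_regularizer(q, W):
    """R_L(q) = sum_y q[y]^T L q[y], with L = D - W the graph Laplacian.

    q: (m, k) soft labeling; W: (m, m) symmetric nonnegative similarity matrix.
    """
    L = np.diag(W.sum(axis=1)) - W   # positive semi-definite for symmetric W >= 0
    return sum(q[:, y] @ L @ q[:, y] for y in range(q.shape[1]))
```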
Another possibility is posterior regularization. Define a feature function f(x, y) ∈ R^ℓ; these features may or may not be related to the model features φ defined in Section 2. As noted by several authors [4, 11, 12], it is often convenient for a labeler to provide information about the expected value of f(x, y) with respect to the true distribution. A typical posterior regularizer of this type will have the form R_{f,b}(q) = ‖E_{x̄,q}[f(x, y)] − b‖₂², where the vector b ∈ R^ℓ is the labeler's estimate of the expected value of f. This term penalizes soft labelings q which cause the expected value of f on the training set to deviate from b.
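The posterior regularizer admits an equally short sketch (ours), with the features precomputed as an m × k × ℓ array:

```python
import numpy as np

def posterior_regularizer(q, feats, b):
    """R_{f,b}(q) = || E_{xbar,q}[f(x,y)] - b ||^2.

    feats: (m, k, l) array with feats[i, y] = f(xbar_i, y); b: (l,) target moments.
    """
    m = q.shape[0]
    expected = np.einsum("iy,iyl->l", q, feats) / m  # uniform over examples, q over labels
    return float(np.sum((expected - b) ** 2))
```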
Label regularization functions can also be added together. So, for instance, ambiguous learning can be combined with a Laplacian, and in this case the learner is given a label regularization function of the form R_Ỹ(q) + R_L(q). We will experiment with these kinds of combined regularization functions in Section 5.
Note that, in all the examples described above, while the correct labeling ȳ is at or close to the minimum of each function R ∈ R(x̄, ȳ), there may be many labelings meeting this condition. Again, this is the sense in which label information is "missing".
It is also important to note that we have only specified what information the labeler can reveal to the learner (some function from the set R(x̄, ȳ)), but we do not specify how that information is chosen by the labeler (which function R ∈ R(x̄, ȳ)?). This will have a significant impact on our analysis of this framework. To see why, consider the example of semi-supervised learning. Using the notation defined above, most analyses of semi-supervised learning assume that R_I is chosen by selecting a random subset I of the training examples [13, 14]. By contrast, we make no assumptions about how R_I is chosen, because we are interested in settings where such assumptions are not realistic.
3 Upper and Lower Bounds
In this section, we state upper and lower bounds for learning in our framework. But first, we provide a definition of the well-known concept of uniform convergence.

Definition 1 (Uniform Convergence). Loss function L has Φ-uniform convergence if, with probability 1 − δ,

\sup_{\theta \in \Theta} \left| E_D[L(\theta, x, y)] - E_{\bar{x},\bar{y}}[L(\theta, x, y)] \right| \le \Phi(\delta, m),

where (x̄, ȳ) ∼ D^m and Φ(·, ·) is an expression bounding the rate of convergence.

For example, if ‖φ(x, y)‖ ≤ c for all (x, y) ∈ X × Y and Θ = {θ : ‖θ‖ ≤ 1} ⊆ R^d, then the loss function L_like has Φ-uniform convergence with

\Phi(\delta, m) = O\!\left( c\,\sqrt{\frac{d \log m + \log(1/\delta)}{m}} \right),

which follows from standard results about Rademacher complexity and covering numbers. Other commonly used loss functions, such as hinge loss and 0-1 loss, also have Φ-uniform convergence under similar boundedness assumptions on Θ and φ.
We are now ready to state an upper bound for learning in our framework. The proof is contained in the supplement.

Theorem 1. Suppose loss function L has Φ-uniform convergence. If (x̄, ȳ) ∼ D^m, then with probability at least 1 − δ, for all parameters θ ∈ Θ and label regularization functions R ∈ R(x̄, ȳ),

E_D[L(\theta, x, y)] \le \max_{q \in \Delta^m} \left( E_{\bar{x},q}[L(\theta, x, y)] - R(q) \right) + R(\bar{y}) + \Phi(\delta, m).
Theorem 2 below states a lower bound that nearly matches the upper bound in Theorem 1, in certain
cases. As we will see, the existence of a matching lower bound depends strongly on the structure
of the label regularization function family R. Note that, given a labeled training set (x, y), the set
R(x, y) essentially constrains what information the labeler can reveal to the learning algorithm,
thereby encoding our assumptions about how the labeler will behave. We make three such assumptions, described below. For the remainder of this section, we let the set of all possible examples
X = {x̃₁, . . . , x̃_N} be finite.
Recall that all the label regularization functions described in Section 2.1 use the value ∞ to indicate which labelings of the training set are impossible. Our first assumption is that, for each R ∈ R(x, y), the set of possible labelings under R is separable over examples.
Assumption 1 (∞-Separability). For all labeled training sets (x, y) and R ∈ R(x, y), there exists a collection of label sets {Y_{x̃} : x̃ ∈ X} and a real-valued function F such that

R(q) = \sum_{i=1}^{m} \chi\{\mathrm{supp}(q_i) \subseteq Y_{x_i}\} + F(q),

where the characteristic function χ{·} is 0 when its argument is true and ∞ otherwise, and F(q) < ∞ for all q ∈ Δ^m.
It is easy to verify that all the examples of label regularization function families given in Section 2.1
satisfy Assumption 1. Also note that Assumption 1 allows the finite part of R (denoted by F ) to
depend on the entire soft labeling q in a basically arbitrary manner.
Before describing our second assumption, we need a few additional definitions. We write h to denote a labeling function that maps examples X to labels Y. Also, for any labeling function h and unlabeled training set x ∈ X^m, we let h(x) ∈ Y^m denote the vector of labels whose i-th component is h(x_i). Let p_x be an N-length vector that represents unlabeled training set x as a distribution on X, whose i-th component is

p_x(i) \triangleq \frac{|\{j : x_j = \tilde{x}_i\}|}{m}.
Our second assumption is that the labeler's behavior is stable: if training sets (x, y) and (x′, y′) are "close" (by which we mean that they are consistently labeled and ‖p_x − p_{x′}‖_∞ is small), then the label regularization functions available to the labeler for each training set are the "same", in the sense that the sets of possible labelings under each of them are identical.

Assumption 2 (β-Stability). For any labeling function h* and unlabeled training sets x, x′ such that ‖p_x − p_{x′}‖_∞ ≤ β, the following holds: for all R ∈ R(x, h*(x)) there exists R′ ∈ R(x′, h*(x′)) such that R(h(x)) < ∞ if and only if R′(h(x′)) < ∞, for all labeling functions h.
Our final assumption, which we call reciprocity, states that there is no way to deduce which of the possible labelings under R is the correct one only by examining R.

Assumption 3 (Reciprocity). For all labeled training sets (x, y) and R ∈ R(x, y), if R(y′) < ∞ then R ∈ R(x, y′).

Of all our assumptions, reciprocity seems to be the most unnatural and unmotivated. We argue it is necessary for two reasons: firstly, all the examples of label regularization function families given in Section 2.1 satisfy this assumption, and secondly, in Theorem 3 we show that lifting the reciprocity assumption makes the upper bound in Theorem 1 very loose.
We are nearly ready to state our lower bound. Let A be a (possibly randomized) learning algorithm that takes a set of unlabeled training examples x̄ and a label regularization function R as input, and outputs an estimated parameter θ̂. Also, if under distribution D each example x ∈ X is associated with exactly one label h*(x) ∈ Y, then we write D = D_X × h*, where the data distribution D_X is the marginal distribution of D on X. Theorem 2 proves the existence of a true labeling function h* such that a nearly tight lower bound holds for all learning algorithms A and all data distributions D_X whenever the training set is drawn from D_X × h*. The fact that our lower bound holds for all data distributions significantly complicates the analysis, but this generality is important: since D_X is typically easy to estimate, it is possible that the learning algorithm A has been tuned for D_X. The proof of Theorem 2 is contained in the supplement.
Theorem 2. Suppose Assumptions 1, 2 and 3 hold for label regularization function family R, the loss function L is 0-1 loss, and the set of all possible examples X is finite. For all learning algorithms A and data distributions D_X there exists a labeling function h* such that if (x̄, ȳ) ∼ D^m (where D = D_X × h*) and m ≥ O((1/β²) log(|X|/δ)), then with probability at least 1/4 − 2δ,

E_D[L(\hat{\theta}, x, y)] \ge \frac{1}{4} \left( \max_{q \in \Delta^m} \left( E_{\bar{x},q}[L(\hat{\theta}, x, y)] - R(q) \right) + \min_{q \in \Delta^m} R(q) \right) - \Phi(\delta, m)

for some R ∈ R(x̄, ȳ), where θ̂ is the parameter output by A, and β is the constant from Assumption 2.
Obviously, Assumptions 1, 2 and 3 restrict the kinds of label regularization function families to
which Theorem 2 can be applied. However, some restriction is necessary in order to prove a meaningful lower bound, as Theorem 3 below shows. This theorem states that if Assumption 3 does not
hold, then it may happen that each family R(x, y) has a structure which a clever (but computationally infeasible) learning algorithm can exploit to perform much better than the upper bound given in
Theorem 1. The proof of Theorem 3, which is contained in the supplement, constructs an example
of such a family.
Theorem 3. Suppose the loss function L is 0-1 loss. There exists a label regularization function family R that satisfies Assumptions 1 and 2, but not Assumption 3, and a learning algorithm A such that for all distributions D, if (x̄, ȳ) ∼ D^m, then with probability at least 1 − δ,

E_D[L(\hat{\theta}, x, y)] \le \max_{q \in \Delta^m} \left( E_{\bar{x},q}[L(\hat{\theta}, x, y)] - R(q) \right) + \min_{q \in \Delta^m} R(q) + \Phi(\delta, m) - 1

for some R ∈ R(x̄, ȳ), where θ̂ is the parameter output by A.
Whenever lim_{m→∞} Φ(δ, m) = 0, the gap between the upper and lower bounds in Theorems 1 and 2 approaches R(ȳ) − min_q R(q) as m → ∞ (ignoring constant factors). Therefore, these bounds are asymptotically matching if the labeler always chooses a label regularization function R such that R(ȳ) = min_q R(q). We emphasize that this is true even if ȳ is a nonunique minimum of R. Several of the example learning settings described in Section 2.1, such as semi-supervised learning and ambiguous learning, meet this criterion. On the other hand, if R(ȳ) − min_q R(q) is large, then the gap is very large, and the utility of our analysis degrades. In the extreme case that R(ȳ) = ∞ (i.e., the correct labeling of the training set is not possible under R), our upper bound is vacuous. In this sense, our framework is best suited to settings in which the information provided by the labeler is equivocal, but not actually untruthful, as it is in the malicious label noise setting [6, 7].

Finally, note that if lim_{m→∞} Φ(δ, m) = 0, then the upper bound in Theorem 3 is smaller than the lower bound in Theorem 2 for all sufficiently large m, which establishes the importance of Assumption 3.
4 Algorithm
Given the unlabeled training examples x̄ and label regularization function R, the bounds in Section 3 suggest an obvious learning algorithm: find a parameter θ* that realizes the minimum

\min_{\theta} \max_{q \in \Delta^m} \left( E_{\bar{x},q}[L(\theta, x, y)] - R(q) \right) + \lambda \|\theta\|^2. \quad (1)

The objective (1) is simply the minimization of the upper bound in Theorem 1, with one difference: for algorithmic convenience, we do not minimize over the set Θ, but instead add the quantity λ‖θ‖² to the objective and leave θ unconstrained (here, and in the rest of the paper, ‖·‖ denotes the L2 norm). If we assume that Θ = {θ : ‖θ‖ ≤ c} for some c > 0, then this modification is without loss of generality, since there exists a constant λ_c for which this is an equivalent formulation.
In order to estimate θ*, throughout this section we make the following assumption about the loss function L and label regularization function R.

Assumption 4. The loss function L is convex in θ, and the label regularization function R is convex in q.

It is easy to verify that all of the loss functions and label regularization functions we gave as examples in Sections 2 and 2.1 satisfy Assumption 4.
Instead of finding θ* directly, our approach will be to "swap" the min and max in (1), find the soft labeling q* that realizes the maximum, and then use q* to compute θ*. For convenience, we abbreviate the function that appears in the objective (1) as F(θ, q) ≜ E_{x̄,q}[L(θ, x, y)] − R(q) + λ‖θ‖². A high-level version of our learning algorithm, called GAME due to the use of a game-theoretic minimax theorem in its proof of correctness, is given in Algorithm 1; the implementation details for each step are given below Theorem 4.
Algorithm 1 GAME: Game for Adversarially Missing Evidence
1: Given: Constants ε₁, ε₂ > 0.
2: Find q̂ such that min_θ F(θ, q̂) ≥ max_{q∈Δ^m} min_θ F(θ, q) − ε₁
3: Find θ̂ such that F(θ̂, q̂) ≤ min_θ F(θ, q̂) + ε₂
4: Return: Parameter estimate θ̂.
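The function F(θ, q) that the algorithm manipulates can be written down directly for L = L_like; the following sketch (ours, not the paper's) evaluates it with features precomputed as Φ[i, y] = φ(x̄_i, y):

```python
import numpy as np

def F_value(theta, q, Phi, R, lam):
    """F(theta, q) = E_{xbar,q}[L_like(theta, x, y)] - R(q) + lam * ||theta||^2.

    Phi: (m, k, d) array with Phi[i, y] = phi(xbar_i, y); q: (m, k) soft labeling;
    R: callable mapping q to a real number (one of the regularizers of Section 2.1).
    """
    scores = Phi @ theta                            # (m, k): theta . phi(x_i, y)
    m_max = scores.max(axis=1, keepdims=True)
    logZ = m_max[:, 0] + np.log(np.exp(scores - m_max).sum(axis=1))  # stable log-partition
    nll = (q * (logZ[:, None] - scores)).sum(axis=1).mean()          # E_q[-log p_theta]
    return nll - R(q) + lam * theta @ theta
```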
In the first step of Algorithm 1, we modify the objective (1) by swapping the min and max, and then find a soft labeling q̂ that approximately maximizes this modified objective. In the next step, we find a parameter θ̂ that approximately minimizes the original objective with respect to the fixed soft labeling q̂. The next theorem proves that Algorithm 1 produces a good estimate of θ*, the minimum of the objective (1). Its proof is in the supplement.
Theorem 4. The parameter θ̂ output by Algorithm 1 satisfies

\|\hat{\theta} - \theta^*\| \le \sqrt{\frac{8}{\lambda}(\epsilon_1 + \epsilon_2)}.
We now briefly explain how the steps of Algorithm 1 can be implemented using off-the-shelf algorithms. For concreteness, we focus on an implementation for the loss function L = L_like, which is
also the loss function we use in our experiments in Section 5.
The second step of Algorithm 1 is the easier one, so we explain it first. In this step, we need to minimize F(θ, q̂) over θ. Since q̂ is fixed in this minimization, we can ignore the R(q̂) term in the definition of F, and we see that this minimization amounts to maximizing the likelihood of a log-linear model. This is a very well-studied problem, and there are numerous efficient methods available for solving it, such as stochastic gradient descent.
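For L = L_like, a minimal gradient-descent sketch of this step (ours) follows; the gradient of the expected negative log-likelihood under q̂ is the difference between model expectations and q̂-expectations of the features, plus the ridge term:

```python
import numpy as np

def fit_loglinear(Phi, q_hat, lam, steps=500, lr=0.1):
    """Minimize E_{xbar,q_hat}[-log p_theta(y|x)] + lam * ||theta||^2 by gradient descent."""
    m, k, d = Phi.shape
    theta = np.zeros(d)
    for _ in range(steps):
        scores = Phi @ theta                          # (m, k)
        scores -= scores.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(scores)
        p /= p.sum(axis=1, keepdims=True)             # model posteriors p_theta(y | x_i)
        # gradient: average of E_p[phi] - E_q_hat[phi], plus the ridge term
        grad = np.einsum("iy,iyd->d", p - q_hat, Phi) / m + 2 * lam * theta
        theta -= lr * grad
    return theta
```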
The first step of Algorithm 1 is more complicated, as it requires finding the maximum of a max-min objective. Our approach is to first take the dual of the inner minimization; after doing this, the function to maximize becomes

G(p, q) \triangleq H(p) - \frac{1}{4\lambda}\,\|\Delta\phi(p, q)\|^2 - R(q),

where we let H(p) ≜ −Σ_{i,y} p(i, y) log p(i, y) and Δφ(p, q) ≜ E_{x̄,p}[φ(x, y)] − E_{x̄,q}[φ(x, y)]. By convex duality we have max_q min_θ F(θ, q) = max_{p,q} G(p, q). This dual has been previously derived by several authors; see [15] for more details. Note that G is a concave function, and we need to maximize it over simplex constraints. Exponentiated-gradient-style algorithms [16, 15] are well-suited for this kind of problem, as they "natively" maintain the simplex constraint, and converged quickly in the experiments described in Section 5.
5 Experiments
We tested our GAME algorithm (Algorithm 1) on several standard learning data sets. In all of our experiments, we labeled a fraction of the training examples in a non-random manner that was designed to simulate various types of difficult, even adversarial, labelers.
Our first set of experiments involved two binary classification data sets that belong to a benchmark
suite¹ accompanying a widely-used semi-supervised learning book [1]: the Columbia object image library (COIL) [17], and a data set of EEG scans of a human subject connected to a brain-computer interface (BCI) [18]. For each data set, a training set was formed by randomly sampling a subset of the data in a way that produced a skewed class distribution. We defined the outlier score of a training example to be the fraction of its nearest neighbors that belong to a different class. For several values of p ∈ [0, 1] and for each training set, we labeled only the p-fraction of examples with the highest outlier score. In this way, we simulated an "unhelpful" labeler who only labels examples that are
exceptions to the general rule, thinking (perhaps sincerely, but erroneously) that this is the most
effective use of her effort.
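A sketch of this outlier score (ours, using Euclidean distances and treating k as a free parameter):

```python
import numpy as np

def outlier_scores(X, y, k=10):
    """Fraction of each example's k nearest neighbors carrying a different label."""
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    np.fill_diagonal(dists, np.inf)            # an example is not its own neighbor
    nbrs = np.argsort(dists, axis=1)[:, :k]    # indices of the k nearest neighbors
    return (y[nbrs] != y[:, None]).mean(axis=1)
```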
We tested three algorithms on these data sets: GAME, where R(x̄, ȳ) was chosen to match the
semi-supervised learning setting with a Laplacian regularizer (see Section 2.1); Laplacian SVM
[3]; and Transductive SVM [19]. When constructing the Laplacian matrix and choosing values for
hyperparameters, we adhered closely to the model-selection procedure described in [1, Sections
21.2.1 and 21.2.5]. The results of our experiments are given in Figures 1(a) and 1(b).
We also tested the GAME algorithm on a multiclass data set, namely a subset of the Labeled Faces
in the Wild data set [20], a standard corpus of face photographs. Our subset contained 500 faces
of the top 10 characters from the corpus, but with a randomly skewed distribution, so that some
faces appeared more often than others. The feature representation for each photograph was PCA
on the pixel values (i.e., eigenfaces). We used an ambiguously-labeled version of this data set,
where each face in the training set is associated with one or more labels, only one of which is correct
(see Section 2.1 for a definition of ambiguous learning). We labeled training examples to simulate a "lazy" labeler, in the following way: for each pair of labels (y, y′), we sorted the examples with true
¹This benchmark suite contains several data sets; we selected these two because they contain a large number of examples that meet our definition of outliers.
[Figure 1: three panels plotting accuracy against the fraction of the training set labeled. Panels (a) and (b) compare Transductive SVM, Laplacian SVM, and GAME; panel (c) compares Uniform, EM, and GAME.]
Figure 1: (a) Accuracy vs. fraction of unlabeled data for BCI data set. (b) Accuracy vs. fraction of
unlabeled data for COIL data set. (c) Accuracy vs. fraction of partially labeled data for Faces in the
Wild data set. In all plots, error bars represent 1 standard deviation over 10 trials.
label y with respect to their distance, in feature space, from the centroid of the cluster of examples
with true label y′. For several values of p ∈ [0, 1], we added the label y′ to the top p-fraction of this list. The net effect of this procedure is that examples on the "border" of the two clusters are given both labels y and y′ in the training set. The idea behind this labeling procedure is to mimic a (realistic, in our view) situation where a "lazy" labeler declines to commit to one label for those
examples that are especially difficult to distinguish.
We tested the GAME algorithm on this data set, where R(x̄, ȳ) was chosen to match the ambiguous learning setting with a Laplacian regularizer (see Section 2.1). We compared with two algorithms from [9]: UNIFORM, which assumes each label in the ambiguous label set is equally likely, and learns a maximum-likelihood log-linear model; and a discriminative EM algorithm that guesses the
true labels, learns the most likely parameter, updates the guess, and repeats. The results of our
experiments are given in Figure 1(c).
Perhaps the best way to characterize the difference between GAME and the algorithms we compared
it to is that the other algorithms are "optimistic", by which we mean they assume that the missing labels most likely agree with the estimated parameter, while GAME is a "pessimistic" algorithm that, because it was designed for an adversarial setting, assumes exactly the opposite. The results of our experiments indicate that, for certain labeling styles, as the fraction of fully labeled examples decreases, the GAME algorithm's pessimistic approach is substantially more effective. Importantly, Figures 1(a)-(c) show that the GAME algorithm's performance advantage is most significant when
the number of labeled examples is very small. Semi-supervised learning algorithms are often promoted as being able to learn from only a handful of labeled examples. Our results show that this
ability may be quite sensitive to how these examples are labeled.
6 Future Work
Our framework lends itself to several natural extensions. For example, it can be straightforwardly
extended to the structured prediction setting [21], in which both examples and labels have some
internal structure, such as sequences or trees. One can show that both steps of the GAME algorithm
can be implemented efficiently even when the number of labels is combinatorial, provided that
both the loss function and label regularization function decompose appropriately over the structure.
Another possibility is to interactively poll the labeler for label information, resulting in a sequence
of successively more informative label regularization functions, with the aim of extracting the most
useful label information from the labeler with a minimum of labeling effort. Also, it would be
interesting to design Amazon Mechanical Turk experiments that test whether the "unhelpful" and "lazy" labeling styles described in Section 5 in fact occur in practice. Finally, of the three technical
assumptions we introduced in Section 3 to aid our analysis, we only proved (in Theorem 3) that one
of them is necessary. We would like to determine whether the other assumptions are necessary as
well, or can be relaxed.
Acknowledgements
Umar Syed was partially supported by DARPA CSSG 2009 Award. Ben Taskar was partially supported by DARPA CSSG 2009 Award and the ONR 2010 Young Investigator Award.
References
[1] Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[2] Thomas G. Dietterich, Richard H. Lathrop, and Tomás Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artificial Intelligence, 89(1-2):31–71, 1997.
[3] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
[4] Gregory Druck, Gideon Mann, and Andrew McCallum. Learning from labeled features using generalized expectation criteria. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 595–602, 2008.
[5] Michael Kearns and Ming Li. Learning in the presence of malicious errors. In Proceedings of the 20th Annual ACM Symposium on Theory of Computing, pages 267–280, New York, NY, USA, 1988. ACM.
[6] Adam T. Kalai, Adam R. Klivans, Yishay Mansour, and Rocco A. Servedio. Agnostically learning halfspaces. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, pages 11–20, 2005.
[7] Adam R. Klivans, Philip M. Long, and Rocco A. Servedio. Learning halfspaces with malicious noise. Journal of Machine Learning Research, 10:2715–2740, 2009.
[8] Maria-Florina Balcan and Avrim Blum. A PAC-style model for learning from labeled and unlabeled data. In Proceedings of the 18th Annual Conference on Learning Theory, pages 111–126, 2005.
[9] Rong Jin and Zoubin Ghahramani. Learning with multiple labels. In Advances in Neural Information Processing Systems 16, 2003.
[10] Timothee Cour, Ben Sapp, Chris Jordan, and Ben Taskar. Learning from ambiguously labeled images. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2009.
[11] Kuzman Ganchev, João Graça, Jennifer Gillenwater, and Ben Taskar. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11:2001–2049, 2010.
[12] Percy Liang, Michael I. Jordan, and Dan Klein. Learning from measurements in exponential families. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 641–648, 2009.
[13] Rie Johnson and Tong Zhang. On the effectiveness of Laplacian normalization for graph semi-supervised learning. Journal of Machine Learning Research, 8:1489–1517, December 2007.
[14] Philippe Rigollet. Generalization error bounds in semi-supervised classification under the cluster assumption. Journal of Machine Learning Research, 8:1369–1392, December 2007.
[15] Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter L. Bartlett. Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks. Journal of Machine Learning Research, 9:1775–1822, 2008.
[16] Jyrki Kivinen and Manfred K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Inf. Comput., 132(1):1–63, 1997.
[17] Sameer A. Nene, Shree K. Nayar, and Hiroshi Murase. Columbia object image library (COIL-100). Technical Report CUCS-006-96, Columbia University, 1996.
[18] Thomas Navin Lal, Thilo Hinterberger, Guido Widman, Michael Schröder, N. Jeremy Hill, Wolfgang Rosenstiel, Christian Erich Elger, Bernhard Schölkopf, and Niels Birbaumer. Methods towards invasive human brain computer interfaces. In Advances in Neural Information Processing Systems 17, 2004.
[19] Thorsten Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the 16th International Conference on Machine Learning, pages 200–209, 1999.
[20] Gary B. Huang, Manu Ramesh, Tamara Berg, and Erik Learned-Miller. Labeled faces in the wild: A database for studying face recognition in unconstrained environments.
[21] Ben Taskar, Carlos Guestrin, and Daphne Koller. Max-margin Markov networks. In Advances in Neural Information Processing Systems 16, 2004.
3,353 | 4,036 | Variational Inference over Combinatorial Spaces
Alexandre Bouchard-Côté
Michael I. Jordan
Computer Science Division and Department of Statistics
University of California at Berkeley
Abstract
Since the discovery of sophisticated fully polynomial randomized algorithms for a range of
#P problems [1, 2, 3], theoretical work on approximate inference in combinatorial spaces
has focused on Markov chain Monte Carlo methods. Despite their strong theoretical guarantees, the slow running time of many of these randomized algorithms and the restrictive
assumptions on the potentials have hindered the applicability of these algorithms to machine learning. Because of this, in applications to combinatorial spaces simple exact models are often preferred to more complex models that require approximate inference [4].
Variational inference would appear to provide an appealing alternative, given the success
of variational methods for graphical models [5]; unfortunately, however, it is not obvious
how to develop variational approximations for combinatorial objects such as matchings,
partial orders, plane partitions and sequence alignments. We propose a new framework that
extends variational inference to a wide range of combinatorial spaces. Our method is based
on a simple assumption: the existence of a tractable measure factorization, which we show
holds in many examples. Simulations on a range of matching models show that the algorithm is more general and empirically faster than a popular fully polynomial randomized
algorithm. We also apply the framework to the problem of multiple alignment of protein
sequences, obtaining state-of-the-art results on the BAliBASE dataset [6].
1  Introduction
The framework we propose is applicable in the following setup: let C denote a combinatorial space,
by which we mean a finite but large set, where testing membership is tractable but enumeration is not, and suppose that the goal is to compute Σ_{x∈C} f(x), where f is a positive function. This setup
subsumes many probabilistic inference and classical combinatorics problems. It is often intractable
to compute this sum, so approximations are used.
We approach this problem by exploiting a finite collection of sets {C_i} such that C = ∩_i C_i. Each C_i is larger than C, but paradoxically it is often possible to find such a decomposition where, for each i, Σ_{x∈C_i} f(x) is tractable.¹ We give many examples of this in Section 3 and Appendix B. This paper
describes an effective way of using this type of decomposition to approximate the original sum.
Another way of viewing this setup is in terms of exponential families. In this view, described in
detail in Section 2, the decomposition becomes a factorization of the base measure. As we will
show, the exponential family view gives a principled way of defining variational approximations.
In order to make variational approximations tractable in the combinatorial setup, we use what we
call an implicit message representation. The canonical parameter space of the exponential family
enables such representation. We also show how additional approximations can be introduced in
cases where the factorization has a large number of factors. These further approximations rely on
an outer bound of the partition function, and therefore preserve the guarantees of convex variational
objective functions.
¹The appendices can be found in the supplementary material.

While previous authors have proposed mean field or loopy belief propagation algorithms to approximate the partition function of a few specific combinatorial models (for example, [7, 8] for parsing, and [9, 10] for computing the permanent of a matrix), we are not aware of a general treatment of
variational inference in combinatorial spaces.
There has been work on applying variational algorithms to the problem of maximization over combinatorial spaces [11, 12, 13, 14], but maximization over combinatorial spaces is rather different than
summation. For example, in the bipartite matching example considered in both [13] and this paper,
there is a known polynomial algorithm for maximization, but not for summation. Our approach
is also related to agreement-based learning [15, 16], although agreement-based learning is defined
within the context of unsupervised learning using EM, while our framework is agnostic with respect
to parameter estimation.
The paper is organized as follows: in Section 2 we present the measure factorization framework; in
Section 3 we show examples of this framework applied to various combinatorial inference problems;
and in Section 4 we present empirical results.
2  Variational measure factorization
In this section, we present the variational measure factorization framework. At a high level, the first
step is to construct an equivalent but more convenient exponential family. This exponential family
will allow us to transform variational algorithms over graphical models into approximation algorithms over combinatorial spaces. We first describe the techniques needed to do this transformation
in the case of a specific variational inference algorithm, loopy belief propagation, and then discuss
mean-field and tree-reweighted approximations.
To make the exposition more concrete, we use the running example of approximating the value and
gradient of the log-partition function of a Bipartite Matching model (BM) over K_{N,N}, a well-known
#P problem [17]. Unless we mention otherwise, we will consider bipartite perfect matchings; nonbipartite and non-perfect matchings are discussed in Section 3.1. The reader should keep in mind,
however, that our framework is applicable to a much broader class of combinatorial objects. We
develop several other examples in Section 3 and in Appendix B.
2.1  Setup
Since we are dealing with discrete-valued random variables X, we can assume without loss of
generality that the probability distribution for which we want to compute the partition function and
moments is a member of a regular exponential family with canonical parameters θ ∈ R^J:

P(X ∈ B) = Σ_{x∈B} exp{⟨φ(x), θ⟩ − A(θ)} μ(x),    A(θ) = log Σ_{x∈X} exp{⟨φ(x), θ⟩} μ(x),    (1)
for a J-dimensional sufficient statistic φ and base measure μ over F = 2^X, both of which are assumed (again, without loss of generality) to be indicator functions: φ_j, μ : X → {0, 1}. Here X is a superset of both C and all of the C_i's. The link between this setup and the general problem of computing Σ_{x∈C} f(x) is the base measure μ, which is set to the indicator function over C: μ(x) = 1[x ∈ C], where 1[·] is equal to one if its argument holds true, and zero otherwise.
The goal is to approximate A(θ) and ∇A(θ) (recall that the j-th coordinate of the gradient, ∇_j A, is equal to the expectation of the sufficient statistic φ_j under the exponential family with base measure μ [5]). We want to exploit situations where the base measure can be written as a product of I measures, μ(x) = ∏_{i=1}^{I} μ_i(x), such that each factor μ_i : X → {0, 1} induces a super-partition function assumed to be tractable: A_i(θ) = log Σ_{x∈X} exp{⟨φ(x), θ⟩} μ_i(x). This computation is typically done using dynamic programming (DP). We also assume that the gradient of the super-partition functions is tractable, which is typical for DP formulations.
In the case of BM, the space X is a product of N² binary alignment variables, x = (x_{1,1}, x_{1,2}, . . . , x_{N,N}). In the Standard Bipartite Matching formulation (which we denote by SBM), the sufficient statistic takes the form φ_j(x) = x_{m,n}. The measure factorization we use to enforce the matching property is μ = μ_1 μ_2, where:

μ_1(x) = ∏_{m=1}^{N} 1[ Σ_{n=1}^{N} x_{m,n} = 1 ],    μ_2(x) = ∏_{n=1}^{N} 1[ Σ_{m=1}^{N} x_{m,n} = 1 ].    (2)
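The tractability claim behind this factorization is easy to check by brute force on a small instance. The following sketch (plain Python; the code and names are our own illustration, not from the paper) verifies that, because μ_1 places no constraint across rows, its super-partition sum collapses into a per-row product:

```python
from itertools import product
from math import exp, log

N = 3
theta = [[0.3 * m - 0.2 * n for n in range(N)] for m in range(N)]

def mu1(x):  # row factor from Eq. (2): each row has exactly one active variable
    return all(sum(x[m * N + n] for n in range(N)) == 1 for m in range(N))

def score(x):  # exp(<phi(x), theta>) for the SBM sufficient statistic
    return exp(sum(theta[m][n] * x[m * N + n]
                   for m in range(N) for n in range(N)))

# brute-force super-partition function A_1 over all 2^(N*N) configurations
brute = log(sum(score(x) for x in product([0, 1], repeat=N * N) if mu1(x)))

# closed form: mu1 couples nothing across rows, so the sum factorizes:
# A_1 = sum_m log sum_n exp(theta[m][n])
closed = sum(log(sum(exp(t) for t in row)) for row in theta)

assert abs(brute - closed) < 1e-9
```

The same decoupling argument applies to μ_2 over columns, so both A_1 and A_2 (and their gradients) are computable in O(N²) time.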
2.2  Markov random field reformulation

We start by constructing an equivalent but more convenient exponential family. This general construction has an associated bipartite Markov Random Field (MRF) with structure K_{I,J}, shown in Figure 1. This new bipartite structure should not be confused with the K_{N,N} bipartite graph specific to the BM example: the former is part of the general theory, the latter is specific to the bipartite matching example.

The bipartite MRF has I random variables in the first graph component, B_1, . . . , B_I, each having a copy of X as its domain. In the second component, the graph has J random variables, S_1, . . . , S_J, where each S_j has a binary domain {0, 1}. The pairwise potential between an event {B_i = x} in the first component and one {S_j = s} in the second is given by ψ_{i,j}(x, s) = 1[φ_j(x) = s]. The following one-node potentials are also included: ψ_i(x) = μ_i(x) and ψ_j(s) = e^{θ_j s}.

The assumption we make is that given a vector s ∈ R^J, there is at most one possible configuration x with φ(x) = s. We call this the rich sufficient statistic condition. Since we are concerned in this framework with computing expectations, not with parameter estimation, this assumption entails no loss of generality: for example, if the original exponential family is curved (e.g., by parameter tying), for the purpose of computing expectations one can always work in the over-complete exponential parameterization, and then project back to the coarse sufficient statistic for parameter estimation.

The equivalence between the two formulations follows from the rich sufficient statistic condition, which implies (for a full proof of the equivalence, see Appendix A.1):

Σ_{s_1∈{0,1}} Σ_{s_2∈{0,1}} · · · Σ_{s_J∈{0,1}} ∏_{i=1}^{I} ∏_{j=1}^{J} 1[φ_j(x_i) = s_j] = { 1 if x_1 = x_2 = · · · = x_I;  0 otherwise. }    (3)

[Figure 1. Left: the bipartite graphical model (B_1, . . . , B_I versus S_1, . . . , S_J) used for the MRF construction described in Section 2.2. Right: pseudocode for the BPMF algorithm; see Section 2 and Appendix A.2 for the derivation.

  BPMF(θ, A_1, . . . , A_I)
  1: ν_{i,j} = 0
  2: for t = 1, 2, . . . , T do
  3:   ν̄_i^{(t)} = θ + Σ_{i':i'≠i} ν_{i'}^{(t−1)}
  4:   ν_i^{(t)} = logit ∇A_i(ν̄_i^{(t)}) − ν̄_i^{(t)}
  5: end for
  6: return μ̂ = logistic(θ + Σ_i ν_i^{(T)}) ]

This transformation into an equivalent MRF reveals several possible variational approximations. We show in the next section how loopy belief propagation [18] can be modified to tractably accommodate this transformed exponential family, even though some nodes in the graphical model (the B_i's) have a domain of exponential size. We then describe similar updates for mean field [19] and tree-reweighted [20] variational algorithms. We will refer to these algorithms as BPMF (Belief Propagation on Measure Factorizations), MFMF (Mean Field on Measure Factorizations) and TRWMF (Tree-Reweighted updates on Measure Factorizations). In contrast to BPMF, MFMF is guaranteed to converge², and TRWMF is guaranteed to provide an upper bound on the partition function.³

²Although we did not have convergence issues with BPMF in our experiments.
³Surprisingly, MFMF does not provide a lower bound (see Appendix A.6).
2.3  Implicit message representation
The variables Bi have a domain of exponential size, hence if we applied belief propagation updates
naively, the messages going from Bi to Sj would require summing over an exponential number of
terms, and messages going from Sj to Bi would require an exponential amount of storage. To avoid
summing explicitly over exponentially many terms, we adapt an idea from [7] and exploit the fact
that an efficient algorithm is assumed for computing the super-partition function Ai and its derivatives. To avoid the exponential storage of messages going to Bi , we use an implicit representation
of these messages in the canonical parameter space.
Let us denote the messages going from S_j to B_i by M_{j→i}(x), x ∈ X, and the reverse messages by m_{i→j}(s), s ∈ {0, 1}. From the definitions of ψ_{i,j}, ψ_i, ψ_j, the explicit belief propagation updates are:

m_{i→j}(s) ∝ Σ_{x∈X} 1[φ_j(x) = s] ψ_i(x) ∏_{j':j'≠j} M_{j'→i}(x),
M_{j→i}(x) ∝ Σ_{s∈{0,1}} e^{θ_j s} 1[φ_j(x) = s] ∏_{i':i'≠i} m_{i'→j}(s).    (4)
The task is to get an update equation that does not represent M_{j→i}(x) explicitly, by exploiting the fact that the super-partition functions A_i and their derivatives can be computed efficiently. To do so, it is convenient to use the following equivalent representation for the messages m_{i→j}(s): ν_{i,j} = log m_{i→j}(1) − log m_{i→j}(0) ∈ [−∞, +∞].⁴

⁴In what follows, we will assume that ν_{i,j} ∈ (−∞, +∞). The extended real line is treated in Appendix C.1.

If we also let f_{i,j}(x) denote any function proportional to ∏_{j':j'≠j} M_{j'→i}(x), we can write:
ν_{i,j} = log ( Σ_{x∈X} φ_j(x) f_{i,j}(x) ψ_i(x) ) / ( Σ_{x∈X} (1 − φ_j(x)) f_{i,j}(x) ψ_i(x) )
        = logit ( Σ_{x∈X} φ_j(x) f_{i,j}(x) ψ_i(x) / Σ_{x∈X} f_{i,j}(x) ψ_i(x) ),    (5)
where logit(x) = log x − log(1 − x). This means that if we can find a parameter vector θ_{i,j} ∈ R^J such that

f_{i,j}(x) = exp⟨φ(x), θ_{i,j}⟩ ∝ ∏_{j':j'≠j} M_{j'→i}(x),

then we could write ν_{i,j} = logit ∇_j A_i(θ_{i,j}). We derive such a vector θ_{i,j} as follows:

∏_{j':j'≠j} M_{j'→i}(x) = ∏_{j':j'≠j} Σ_{s_{j'}∈{0,1}} e^{θ_{j'} s_{j'}} 1[φ_{j'}(x) = s_{j'}] ∏_{i':i'≠i} m_{i'→j'}(s_{j'})
                       = ∏_{j':j'≠j} e^{θ_{j'} φ_{j'}(x)} ∏_{i':i'≠i} m_{i'→j'}(φ_{j'}(x))
                       ∝ exp{ Σ_{j':j'≠j} φ_{j'}(x) ( θ_{j'} + Σ_{i':i'≠i} ν_{i',j'} ) },

where in the last step we have used the assumption that φ_j has domain {0, 1}, which implies that m_{i→j}(φ_j(x)) = exp{φ_j(x) log m_{i→j}(1) + (1 − φ_j(x)) log m_{i→j}(0)} ∝ exp{φ_j(x) ν_{i,j}}. The required parameters are therefore: (θ_{i,j})_{j'} = 1[j' ≠ j] ( θ_{j'} + Σ_{i':i'≠i} ν_{i',j'} ).
2.4  Reuse of partition function computations
Naively, the updates derived so far would require computing each super-partition function J times at
each message passing iteration. We show that this can be reduced to computing each super-partition
function only once per iteration, a considerable gain.
We first define the vectors:

ν̄_i = θ + Σ_{i':i'≠i} ν_{i'},

and then rewrite the numerator inside the logit function in Equation (5) as follows:

Σ_{x∈X} φ_j(x) f_{i,j}(x) ψ_i(x) = Σ_{s∈{0,1}} Σ_{x:φ_j(x)=s} exp{⟨φ(x), ν̄_i⟩} e^{−ν̄_{i,j} s} · s · ψ_i(x)
                                 = e^{A_i(ν̄_i) − ν̄_{i,j}} ∇_j A_i(ν̄_i),
and similarly for the denominator:

Σ_{x∈X} f_{i,j}(x) ψ_i(x) = e^{A_i(ν̄_i) − ν̄_{i,j}} ∇_j A_i(ν̄_i) + e^{A_i(ν̄_i)} (1 − ∇_j A_i(ν̄_i))
                          = e^{A_i(ν̄_i)} ( 1 + (e^{−ν̄_{i,j}} − 1) ∇_j A_i(ν̄_i) ).
After plugging the reparameterized numerator and denominator back into the logit function in Equation (5) and doing some algebra, we obtain the more efficient update ν_{i,j} = logit ∇_j A_i(ν̄_i) − ν̄_{i,j}, where the logit function of a vector, logit v, is defined as the vector obtained by applying the logit function to each entry of v. See Figure 1 for a summary of the BPMF algorithm.
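As an illustration, here is a minimal sketch of these updates for the SBM example (assuming NumPy; for the exactly-one factors of Equation (2), each super-partition sum decouples over rows, respectively columns, so ∇A_1 and ∇A_2 reduce to row- and column-wise softmaxes; the function and variable names are ours, not the paper's):

```python
import numpy as np

def logit(p):
    return np.log(p) - np.log1p(-p)

def logistic(v):
    return 1.0 / (1.0 + np.exp(-v))

def softmax(t, axis):
    e = np.exp(t - t.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bpmf_sbm(theta, T=20):
    """BPMF for the standard bipartite perfect matching model (SBM).
    theta: N x N canonical parameters; returns approximate edge marginals."""
    grads = [lambda t: softmax(t, axis=1),   # gradient of A_1 (row factor)
             lambda t: softmax(t, axis=0)]   # gradient of A_2 (column factor)
    nu = [np.zeros_like(theta), np.zeros_like(theta)]
    for _ in range(T):
        for i in range(2):
            nu_bar = theta + nu[1 - i]                 # line 3 of Figure 1
            nu[i] = logit(grads[i](nu_bar)) - nu_bar   # line 4 / update above
    return logistic(theta + nu[0] + nu[1])             # line 6 of Figure 1
```

For small N, the returned marginals can be checked against exact edge marginals obtained by enumerating all N! perfect matchings.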
2.5  Other variational algorithms
The ideas used to derive the BPMF updates can be extended to other variational algorithms with
minor modifications. We sketch here two examples: a naive mean field algorithm, and a TRW
approximation. See Appendix A.2 for details.
In the case of naive mean field applied to the graphical model described in Section 2.2, the updates take a form similar to the updates in Equation (4), except that the reverse incoming message is not omitted when
computing an outgoing message. As a consequence, the updates are not directional and can be
associated to nodes in the graphical model rather than edges:
M_j(s) ∝ Σ_{x∈X} 1[φ_j(x) = s] ∏_i m_i(x),    m_i(x) ∝ ψ_i(x) ∏_j Σ_{s∈{0,1}} e^{θ_j s} 1[φ_j(x) = s] M_j(s).
This yields the following implicit updates:⁵

ν̄^{(t)} = θ + Σ_i ν_i^{(t−1)},    ν_i^{(t)} = logit ∇A_i(ν̄^{(t)}),    (6)

and the moment approximation μ̂ = logistic(ν̄).
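For comparison with the BPMF sketch above, here is a minimal sketch of the parallel form of the updates in Equation (6) for the same SBM example (assuming NumPy; note that footnote 5 below describes a coordinate-wise schedule, which this parallel sketch does not implement):

```python
import numpy as np

def logit(p):
    return np.log(p) - np.log1p(-p)

def softmax(t, axis):
    e = np.exp(t - t.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mfmf_sbm(theta, T=50):
    """Naive mean field on the measure factorization (Eq. 6) for SBM."""
    grads = [lambda t: softmax(t, axis=1), lambda t: softmax(t, axis=0)]
    nu = [np.zeros_like(theta), np.zeros_like(theta)]
    for _ in range(T):
        nu_bar = theta + nu[0] + nu[1]          # includes ALL messages
        for i in range(2):
            nu[i] = logit(grads[i](nu_bar))     # no subtraction of nu_bar
    nu_bar = theta + nu[0] + nu[1]
    return 1.0 / (1.0 + np.exp(-nu_bar))        # logistic(nu_bar)
```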
In the case of TRW, lines 3 and 6 in the pseudocode of Figure 1 stay the same, while the update in
line 4 becomes:
(ν̄_{i,j})_{j'} = ( θ_{j'} − ρ_{i→j'} ν_{i,j'} + Σ_{i':i'≠i} ρ_{i'→j'} ν_{i',j'} ) · { ρ_{j'→i} if j' ≠ j;  (1 − ρ_{i→j}) otherwise },    (7)
where ρ_{i→j} are marginals of a spanning tree distribution over K_{I,J}. We show in Appendix A.2 how
the idea in Section 2.4 can be exploited to reuse computations of super-partition functions in the
case of TRW as well.
2.6  Large factorizations
In some cases, it might not be possible to write the base measure as a succinct product of factors.
Fortunately, there is a simple and elegant workaround to this problem that retains good theoretical
guarantees. The basic idea is that dropping measures with domain {0, 1} in a factorization can only increase the value of the partition function: since each factor μ_i(x) ∈ {0, 1}, removing one from the product can only turn summands that were zeroed out into positive ones, and never decreases a term that was already included. This solution is especially attractive in the context of
outer approximations such as the TRW algorithm, because it preserves the upper bound property of
the approximation. We show an example of this in Section 3.2.
3  Examples of factorizations
In this section, we show three examples of measure factorizations. See Appendix B for two more
examples (partitions of the plane, and traveling salesman problems).
⁵Assuming that naive mean field is optimized coordinate-wise, with an ordering that optimizes all of the m_i's, then all of the M_j's.
[Figure 2 panels: (a) three example DNA sequences; (b) invalid alignments labeled "Monotonicity violation", "Transitivity violation", and "Partial order violation"; (c) a DAG over nodes A-I.]
Figure 2: (a) An example of a valid multiple alignment between three sequences. (b) Examples of invalid
multiple sequence alignments illustrating what is left out by the factors in the decomposition of Section 3.2.
(c) The DAG representation of a partial order. An example of linearization is A,C,D,B,E,F,G,H,I. The fine red
dashed lines and blue lines demonstrate an example of two forests covering the set of edges, forming a measure
decomposition with two factors. The linearization A,D,B,E,F,G,H,I,C is an example of a state allowed by one
factor but not the other.
3.1  More matchings
Our approach extends naturally to matchings with higher-order (augmented) sufficient statistic,
and to non-bipartite/non-perfect matchings. Let us first consider a Higher-order Bipartite Model
(HBM), which has all the basic sufficient statistic coordinates found in SBM, plus those of the form
φ_j(x) = x_{m,n} · x_{m+1,n+1}. We claim that with the factorization of Equation (2), the super-partition
functions A1 and A2 are still tractable in HBM. To see why, note that computing A1 can be done
by building an auxiliary exponential family with associated graphical model given by a chain of
length N, and where the state space of each node in this chain is {1, 2, . . . , N}. The basic sufficient statistic coordinates φ_j(x) = x_{m,n} are encoded as node potentials, and the augmented ones as
edge potentials in the chain. This yields a running time of O(N³) for computing one super-partition
function and its gradient (see Appendix A.3 for details). The auxiliary exponential family technique
used here is reminiscent of [21].
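Here is a sketch of the resulting O(N³) forward recursion (assuming NumPy and SciPy; theta_aug and its indexing convention are our notation for the augmented coordinates, which reward x_{m,n} = x_{m+1,n+1} = 1):

```python
import numpy as np
from scipy.special import logsumexp

def hbm_A1(theta, theta_aug):
    """Super-partition function A_1 for HBM via the auxiliary chain.
    theta: N x N basic potentials; theta_aug: (N-1) x (N-1) augmented
    potentials. The chain state at step m is the column chosen by row m
    (each row picks exactly one column, as in the row factor of Eq. (2))."""
    N = theta.shape[0]
    log_alpha = theta[0].copy()
    for m in range(1, N):
        trans = np.zeros((N, N))          # transition potentials, row m-1 -> m
        idx = np.arange(N - 1)
        trans[idx, idx + 1] = theta_aug[m - 1, idx]  # diagonal-neighbor bonus
        # O(N^2) log-sum-exp over the previous state, O(N^3) overall
        log_alpha = logsumexp(log_alpha[:, None] + trans, axis=0) + theta[m]
    return logsumexp(log_alpha)
```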
Extension to non-perfect and non-bipartite matchings can also be done easily. In the first case, a
dummy "null" node is added to each bipartite component. In the second case, where the original space is the set of N(N−1)/2 alignment indicators, we propose a decomposition into N measures. Each one checks that a single node is connected to at most one other node: μ_n(x) = 1[ Σ_{n'=1}^{N} x_{n,n'} ≤ 1 ].
3.2  Multiple sequence alignment
We start by describing the space of pairwise alignments (which is tractable), and then discuss the
extension to multiple sequences (which quickly becomes infeasible as the number of sequences
increases). Consider two sequences of length M and N respectively. A pairwise sequence alignment
is a bipartite graph on the characters of the two sequences (where each bipartite component has
the characters of one of the sequences) constrained to be monotonic: if a character at index m ∈ {1, . . . , M} is aligned to a character at index n ∈ {1, . . . , N} and another character at index m' > m is aligned to index n', then we must have n' > n. A multiple alignment between K sequences of
lengths N_1, N_2, . . . , N_K is a K-partite graph, where the k-th component's vertices are the characters
of the k-th sequence, and such that the following three properties hold: (1) each pair of components
forms a pairwise alignment as described above; (2) the alignments are transitive, i.e., if character
c1 is aligned to c2 and c2 is aligned to c3 then c1 must be aligned to c3 ; (3) the alignments satisfy
a partial order property: there exists a partial order p on the connected components of the graph
with the property that if C_1 <_p C_2 are two distinct connected components and c_1 ∈ C_1, c_2 ∈ C_2 are in the same sequence, then the index of c_1 in the sequence is smaller than the index of c_2. See
Figure 2(a,b) for an illustration.
We use the technique of Section 2.6, and include only the pairwise alignment and transitivity constraints, creating a variational objective function
that is an outer bound of the original objective. In this factorization, there are (K choose 2) pairwise alignment measures, and T = Σ_{k,k',k'' : k≠k'≠k''≠k} N_k N_{k'} N_{k''} transitivity measures. We show in Appendix A.4 that all the messages for one iteration can be computed in time O(T).
[Figure 3 plots: (a) Mean RMS vs. Time (s) for FPRAS and BPMF; (b) Mean RMS vs. Bipartite Graph Density for FPRAS, BPMF, Mean Field, and Loopy BP; (c) Mean Normalized Loss vs. Graph size for HBM-F1 and HBM-F2.]
Figure 3: Experiments discussed in Section 4.1 on two of the matching models discussed. (a) and (b) on SBM,
(c), on HBM.
3.3  Linearization of partial orders
A linearization of a partial order p over N objects is a total order t over the same objects such that x ≤_p y implies x ≤_t y. Counting the number of linearizations is a well-known #P problem [22]. Equivalently, the problem can be viewed as a matching between a DAG G = (V, E) and the integers {1, 2, . . . , N} with the order constraints specified on the edges of the DAG.
To factorize the base measure, consider a collection of I directed forests on V, G_i = (V, E_i), i = 1, . . . , I, such that their union covers G: ∪_i E_i = E. See Figure 2(c) for an example. For a single forest G_i, a
straightforward generalization of the algorithm used to compute HBM's super-partition can be used.
This generalization is simply to use sum-product with graphical model Gi instead of sum-product
on a chain as in HBM (see Appendix A.5 for details). Again, the state space of each node of the graphical model is {1, 2, . . . , N}, but this time the edge potentials enforce the ordering constraints
of the current forest.
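A sketch of the corresponding sum-product pass for one tree of such a forest (assuming NumPy and SciPy; here an edge u → v of the DAG is taken to require pos(u) < pos(v), and a forest is handled by summing its trees' contributions; how the factors are combined and positions are kept distinct is left to the paper's Appendix A.5, so this covers only the single-factor sum described above):

```python
import numpy as np
from scipy.special import logsumexp

def tree_log_sum(root, children, theta, N):
    """Upward sum-product on one directed tree of the forest G_i.
    Node states are positions {0, ..., N-1}; an edge parent -> child
    requires pos(parent) < pos(child). theta: dict node -> length-N
    array of canonical parameters. Returns log of the tree's sum."""
    def up(v):
        w = np.array(theta[v], dtype=float)   # log-weight as a function of pos(v)
        for c in children.get(v, []):
            wc = up(c)
            # suffix[p] = logsumexp of wc over positions >= p
            suffix = np.logaddexp.accumulate(wc[::-1])[::-1]
            contrib = np.full(N, -np.inf)
            contrib[:-1] = suffix[1:]         # child sits strictly later than v
            w = w + contrib
        return w
    return logsumexp(up(root))

def forest_super_partition(roots, children, theta, N):
    return sum(tree_log_sum(r, children, theta, N) for r in roots)
```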
4  Experiments

4.1  Matchings
As a first experiment, we compared the approximation of SBM described in Section 2 to the Fully
Polynomial Randomized Approximation Scheme (FPRAS) described in [23]. We performed all our
experiments on 100 iid random bipartite graphs of size N , where each edge has iid appearance probability p, a random graph model that we denote by RB(N, p). In the first and second experiments, we
used RB(10, 0.9). In this case, exact computation is still possible, and we compared the mean Root
Mean Squared (RMS) error of the estimated moments against the truth. In Figure 3(a), we plot this quantity as
a function of the time spent to compute the 100 approximations. In the variational approximation,
we measured performance at each iteration of BPMF, and in the sampling approach, we measured
performance after powers of two sampling rounds. The conclusion is that the variational approximation attains similar levels of error in at least one order of magnitude less time in the RB(10, 0.9)
regime.
Next, we show in Figure 3(b) the behavior of the algorithms as a function of p, where we also
added the mean field algorithm to the comparison. In each data point in the graph, the FPRAS was
run no less than one order of magnitude more time than the variational algorithms. Both variational
strategies outperform the FPRAS in low-density regimes, where mean field also slightly outperforms
BPMF. On the other hand, for high-density regimes, only BPMF outperforms the FPRAS, and mean
field has a bias compared to the other two methods.
The third experiment concerns the augmented matching model, HBM. Here we compare two types
of factorization and investigate the scalability of the approaches to larger graphs. Factorization F1
is a simpler factorization of the form described in Section 3.1 for non-bipartite graphs. This ignores
the higher-order sufficient statistic coordinates, creating an outer approximation. Factorization F2,
Sum of Pairs (SP) scores:

BAliBASE protein group      BPMF-1  BPMF-2  BPMF-3  Clustal [24]  ProbCons [25]
short, < 25% identity        0.68    0.74    0.76      0.71          0.72
short, 20%-40% identity      0.94    0.95    0.95      0.89          0.92
short, > 35% identity        0.97    0.98    0.98      0.97          0.98
All                          0.88    0.91    0.91      0.88          0.89

Table 1: Average SP scores in the ref1/test1 directory of BAliBASE. BPMF-i denotes the average SP of the BPMF algorithm after i iterations of (parallel) message passing.
described in Section 3.1 specifically for HBM, is tighter. The experimental setup is based on a generative model over noisy observations of bipartite perfect matchings described in Appendix C.2. We
show in Figure 3(c) the results of a sequence of these experiments for different bipartite component
sizes N/2. These experiments demonstrate the scalability of sophisticated factorizations, and their
superiority over simpler ones.
4.2  Multiple sequence alignment
To assess the practical significance of this framework, we also apply it to BAliBASE [6], a standard
protein multiple sequence alignment benchmark. We compared our system to Clustal 2.0.12 [24],
the most popular multiple alignment tool, and ProbCons 1.12, a state-of-the-art system [25] that also
relies on enforcing transitivity constraints, but which is not derived via the optimization of an objective function. Our system uses a basic pair HMM [26] to score pairwise alignments. This scoring
function captures a proper subset of the biological knowledge exploited by Clustal and ProbCons.6
The advantage of our system over the other systems is the better optimization technique, based on
the measure factorization described in Section 3.2. We used a standard technique to transform the
pairwise alignment marginals into a single valid multiple sequence alignment (see Appendix C.3).
Our system outperformed both baselines after three BPMF parallel message passing iterations. The
algorithm converged in all protein groups, and performance was identical after more than three iterations. Although the overall performance gain is not statistically significant according to a Wilcoxon
signed-rank test, the larger gains were obtained in the small identity subset, the "twilight zone"
where research on multiple sequence alignment has focused.
One caveat of this multiple alignment approach is its running time, which is cubic in the length of
the longest sequence, while most multiple sequence alignment approaches are quadratic. For example, the running time for one iteration of BPMF in this experiment was 364.67s, but only 0.98s for
Clustal; this is why we have restricted the experiments to the short sequences section of BAliBASE.
Fortunately, several techniques are available to decrease the computational complexity of this algorithm: the transitivity factors can be subsampled using a coarse pass, or along a phylogenetic tree;
and computation of the factors can be entirely parallelized. These improvements are orthogonal to
the main point of this paper, so we leave them for future work.
5  Conclusion
Computing the moments of discrete exponential families can be difficult for two reasons: the structure of the sufficient statistic that can create junction trees of high tree-width, and the structure of
the base measures that can induce an intractable combinatorial space. Most previous work on variational approximations has focused on the first difficulty; however, the second challenge also arises
frequently in machine learning. In this work, we have presented a framework that fills this gap.
It is based on an intuitive notion of measure factorization, which, as we have shown, applies to
a variety of combinatorial spaces. This notion enables variational algorithms to be adapted to the
combinatorial setting. Our experiments both on synthetic and naturally-occurring data demonstrate
the viability of the method compared to competing state-of-the-art algorithms.
⁶More precisely, it captures long gap and hydrophobic core modeling.
References
[1] Alexander Karzanov and Leonid Khachiyan. On the conductance of order Markov chains. Order, V8(1):7-15, March 1991.
[2] Mark Jerrum, Alistair Sinclair, and Eric Vigoda. A polynomial-time approximation algorithm for the permanent of a matrix with non-negative entries. In Proceedings of the Annual ACM Symposium on Theory of Computing, pages 712-721, 2001.
[3] David Wilson. Mixing times of lozenge tiling and card shuffling Markov chains. The Annals of Applied Probability, 14:274-325, 2004.
[4] Adam Siepel and David Haussler. Phylogenetic estimation of context-dependent substitution rates by maximum likelihood. Mol Biol Evol, 21(3):468-488, 2004.
[5] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1:1-305, 2008.
[6] Julie Thompson, Frédéric Plewniak, and Olivier Poch. BAliBASE: A benchmark alignments database for the evaluation of multiple sequence alignment programs. Bioinformatics, 15:87-88, 1999.
[7] David A. Smith and Jason Eisner. Dependency parsing by belief propagation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 145-156, Honolulu, October 2008.
[8] David Burkett, John Blitzer, and Dan Klein. Joint parsing and alignment with weakly synchronized grammars. In North American Association for Computational Linguistics, Los Angeles, 2010.
[9] Bert Huang and Tony Jebara. Approximating the permanent with belief propagation. ArXiv e-prints, 2009.
[10] Yusuke Watanabe and Michael Chertkov. Belief propagation and loop calculus for the permanent of a non-negative matrix. J. Phys. A: Math. Theor., 2010.
[11] Ben Taskar, Dan Klein, Michael Collins, Daphne Koller, and Christopher Manning. Max-margin parsing. In EMNLP, 2004.
[12] Ben Taskar, Simon Lacoste-Julien, and Dan Klein. A discriminative matching approach to word alignment. In EMNLP 2005, 2005.
[13] John Duchi, Daniel Tarlow, Gal Elidan, and Daphne Koller. Using combinatorial optimization within max-product belief propagation. In Advances in Neural Information Processing Systems, 2007.
[14] Aron Culotta, Andrew McCallum, Bart Selman, and Ashish Sabharwal. Sparse message passing algorithms for weighted maximum satisfiability. In New England Student Symposium on Artificial Intelligence, 2007.
[15] Percy Liang, Ben Taskar, and Dan Klein. Alignment by agreement. In North American Association for Computational Linguistics (NAACL), pages 104-111, 2006.
[16] Percy Liang, Dan Klein, and Michael I. Jordan. Agreement-based learning. In Advances in Neural Information Processing Systems (NIPS), 2008.
[17] Leslie G. Valiant. The complexity of computing the permanent. Theoret. Comput. Sci., 1979.
[18] Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems, pages 689-695, Cambridge, MA, 2001. MIT Press.
[19] Carsten Peterson and James R. Anderson. A mean field theory learning algorithm for neural networks. Complex Systems, 1:995-1019, 1987.
[20] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudomoment matching. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2003.
[21] Alexandre Bouchard-Côté and Michael I. Jordan. Optimization of structured mean field objectives. In Proceedings of Uncertainty in Artificial Intelligence, 2009.
[22] Graham Brightwell and Peter Winkler. Counting linear extensions. Order, 1991.
[23] Lars Eilstrup Rasmussen. Approximating the permanent: A simple approach. Random Structures and Algorithms, 1992.
[24] Des G. Higgins and Paul M. Sharp. CLUSTAL: a package for performing multiple sequence alignment on a microcomputer. Gene, 73:237-244, 1988.
[25] Chuong B. Do, Mahathi S. P. Mahabhashyam, Michael Brudno, and Serafim Batzoglou. PROBCONS: Probabilistic consistency-based multiple sequence alignment. Genome Research, 15:330-340, 2005.
[26] David B. Searls and Kevin P. Murphy. Automata-theoretic models of mutation and alignment. In Proc Int Conf Intell Syst Mol Biol., 1995.
3,354 | 4,037 | Improving the Asymptotic Performance of Markov
Chain Monte-Carlo by Inserting Vortices
Faustino Gomez
IDSIA
Galleria 2, Manno CH-6928, Switzerland
[email protected]
Yi Sun
IDSIA
Galleria 2, Manno CH-6928, Switzerland
[email protected]
Jürgen Schmidhuber
IDSIA
Galleria 2, Manno CH-6928, Switzerland
[email protected]
Abstract
We present a new way of converting a reversible finite Markov chain into a nonreversible one, with a theoretical guarantee that the asymptotic variance of the
MCMC estimator based on the non-reversible chain is reduced. The method is
applicable to any reversible chain whose states are not connected through a tree,
and can be interpreted graphically as inserting vortices into the state transition
graph. Our result confirms that non-reversible chains are fundamentally better
than reversible ones in terms of asymptotic performance, and suggests interesting
directions for further improving MCMC.
1  Introduction
Markov Chain Monte Carlo (MCMC) methods have gained enormous popularity over a wide variety
of research fields [6, 8], owing to their ability to compute expectations with respect to complex, high
dimensional probability distributions. An MCMC estimator can be based on any ergodic Markov
chain with the distribution of interest as its stationary distribution. However, the choice of Markov
chain greatly affects the performance of the estimator, in particular the accuracy achieved with a
pre-specified number of samples [4].
In general, the efficiency of an MCMC estimator is determined by two factors: i) how fast the
chain converges to its stationary distribution, i.e., the mixing rate [9], and ii) once the chain reaches
its stationary distribution, how much the estimates fluctuate based on trajectories of finite length,
which is characterized by the asymptotic variance. In this paper, we consider the latter criterion.
Previous theory concerned with reducing asymptotic variance has followed two main tracks. The
first focuses on reversible chains, and is mostly based on the theorems of Peskun [10] and Tierney
[11], which state that if a reversible Markov chain is modified so that the probability of staying in
the same state is reduced, then the asymptotic variance can be decreased. A number of methods
have been proposed, particularly in the context of the Metropolis-Hastings method, to encourage the
Markov chain to move away from the current state, or its adjacency in the continuous case [12, 13].
The second track, which was explored just recently, studies non-reversible chains. Neal proved
in [4] that starting from any finite-state reversible chain, the asymptotic variance of a related nonreversible chain, with reduced probability of back-tracking to the immediately previous state, will
not increase, and typically decrease. Several methods have been proposed by Murray based on this
idea [5].
Neal's result suggests that non-reversible chains may be fundamentally better than reversible ones in terms of the asymptotic performance. In this paper, we follow up this idea by proposing a new way of converting reversible chains into non-reversible ones which, unlike in Neal's method, are defined on the state space of the reversible chain, with the theoretical guarantee that the asymptotic variance of the associated MCMC estimator is reduced. Our method is applicable to any reversible chain whose state transition graph contains loops, including those whose probability of staying in the same state is zero and thus cannot be improved using Peskun's theorem. The method also admits an interesting graphical interpretation which amounts to inserting "vortices" into the state transition
graph of the original chain. Our result suggests a new and interesting direction for improving the
asymptotic performance of MCMC.
The rest of the paper is organized as follows: section 2 reviews some background concepts and
results; section 3 presents the main theoretical results, together with the graphical interpretation;
section 4 provides a simple yet illustrative example and explains the intuition behind the results;
section 5 concludes the paper.
2
Preliminaries
Suppose we wish to estimate the expectation of some real valued function f over domain S, with
respect to a probability distribution ?, whose value may only be known to a multiplicative constant.
Let A be a transition operator of an ergodic1 Markov chain with stationary distribution ?, i.e.,
? (x) A (x ? y) = ? (y) B (y ? x) , ?x, y ? S,
(1)
where B is the reverse operator as defined in [5]. The expectation can then be estimated through the
MCMC estimator
1 XT
?T =
f (xt ) ,
(2)
t=1
T
where x1 , ? ? ? , xT is a trajectory sampled from the Markov chain. The asymptotic variance of ?T ,
with respect to transition operator A and function f is defined as
2
?A
(f ) = lim T V [?T ] ,
T ??
(3)
2
(f ) is well-defined followwhere V [?T ] denotes the variance of ?T . Since the chain is ergodic, ?A
ing the central limit theorem, and does not depend on the distribution of the initial point. Roughly
speaking, asymptotic variance has the meaning that the mean square error of the estimates based on
2
T consecutive states of the chain would be approximately T1 ?A
(f ), after a sufficiently long period
of ?burn in? such that the chain is close enough to its stationary distribution. Asymptotic variance
can be used to compare the asymptotic performance of MCMC estimators based on different chains
with the same stationary distribution, where smaller asymptotic variance indicates that, asymptotically, the MCMC estimator requires fewer samples to reach a specified accuracy.
Under the ergodic assumption, the asymptotic variance can be written as
X?
2
?A
(f ) = V [f ] +
(cA,f (? ) + cB,f (? )) ,
? =1
(4)
where
cA,f (? ) = EA [f (xt ) f (xt+? )] ? EA [f (xt )] E [f (xt+? )]
is the covariance of the function value between two states that are ? time steps apart in the trajectory
2
of the Markov chain with transition operator A. Note that ?A
(f ) depends on both A and its reverse
2
2
operator B, and ?A (f ) = ?B (f ) since A is also the reverse operator of B by definition.
In this paper, we consider only the case where S is finite, i.e., S = {1, ? ? ? , S}, so that the transition
operators A and B, the stationary distribution ?, and the function f can all be written in matrix form.
>
>
Let ? = [? (1) , ? ? ? , ? (S)] , f = [f (1) , ? ? ? , f (S)] , Ai,j = A (i ? j), Bi,j = B (i ? j). The
asymptotic variance can thus be written as
X?
2
?A
(f ) = V [f ] +
f > QA? + QB ? ? 2?? > f ,
? =1
1
Strictly speaking, the ergodic assumption is not necessary for the MCMC estimator to work, see [4].
However, we make the assumption to simplify the analysis.
2
with Q = diag {?}. Since B is the reverse operator of A, QA = B > Q. Also, from the ergodic
assumption,
lim A? = lim B ? = R,
? ??
? ??
>
where R = 1? is a square matrix in which every row is ? > . It follows that the asymptotic variance
can be represented by Kenney?s formula [7] in the non-reversible case:
>
2
(5)
?A
(f ) = V [f ] + 2 (Qf ) ?? H (Qf ) ? 2f > Qf ,
where [?]H denotes the Hermitian (symmetric) part of a matrix, and ? = Q+?? > ?J, with J = QA
being the joint distribution of two consecutive states.
3
Improving the asymptotic variance
It is clear from Eq.5 that the transition operator A affects the asymptotic variance only through term
[?? ]H . If the chain is reversible, then J is symmetric, so that ? is also symmetric, and therefore
comparing the asymptotic variance of two MCMC estimators becomes a matter of comparing their
2
2
J, namely, if2 J J 0 = QA0 , then ?A
(f ) ? ?A
0 (f ), for any f . This leads to a simple proof of
Peskun?s theorem in the discrete case [3].
In the case where the Markov chain is non-reversible, i.e., J is asymmetric, the analysis becomes
much more complicated. We start by providing a sufficient and necessary condition in section 3.1,
which transforms the comparison of asymptotic variance based on arbitrary finite Markov chains
into a matrix ordering problem, using a result from matrix analysis. In section 3.2, a special case
is identified, in which the asymptotic variance of a reversible chain is compared to that of a nonreversible one whose joint distribution over consecutive states is that of the reversible chain plus
a skew-Hermitian matrix. We prove that the resulting non-reversible chain has smaller asymptotic
variance, and provide a necessary and sufficient condition for the existence of such non-zero skewHermitian matrices. Finally in section 3.3, we provide a graphical interpretation of the result.
3.1
The general case
From Eq.5 we know that comparing the asymptotic variances of two MCMC estimators is equivalent
to comparing their [?? ]H . The following result from [1, 2] allows us to write [?? ]H in terms of the
symmetric and asymmetric parts of ?.
?
>
?
Lemma 1 If a matrix X is invertible, then [X ? ]H = [X]H + [X]S [X]H [X]S , where [X]S is the
skew Hermitian part of X.
From Lemma 1, it follows immediately that in the discrete case, the comparison of MCMC estimators based on two Markov chains with the same stationary distribution can be cast as a different
problem of matrix comparison, as stated in the following proposition.
Proposition 1 Let A, A0 be two transition operators of ergodic Markov chains with stationary distribution ?. Let J = QA, J 0 = QA0 , ? = Q + ?? > ? J, ?0 = Q + ?? > ? J 0 . Then the following
three conditions are equivalent:
2
2
1) ?A
(f ) ? ?A
0 (f ) for any f
h
i
?
2) [?? ]H (?0 )
H
3) [J]H ?
>
[J]S
?
[?]H
>
?
[J]S [J 0 ]H ? [J 0 ]S [?0 ]H [J 0 ]S
Proof. First we show that ? is invertible. Following the steps in [3], for any f 6= 0,
f > ?f = f > [?]H f = f > Q + ?? > ? J f
i
1 h
2
2
= E (f (xt ) ? f (xt+1 )) + E [f (xt )] > 0,
2
2
For symmetric matrices X and Y , we write X Y if Y ? X is positive semi-definite, and X ? Y if
Y ? X is positive definite.
3
thus [?]H 0, and ? is invertible since ?f 6= 0 for any f 6= 0.
Condition 1) and 2) are equivalent by definition. We now prove 2) is equivalent to 3). By Lemma 1,
h
i
?
?
>
>
? H (?0 )
?? [?]H + [?]S [?]H [?]S [?0 ]H + [?0 ]S [?0 ]H [?0 ]S ,
H
the result follows by noticing that [?]H = Q + ?? > ? [J]H and [?]S = ? [J]S .
3.2
A special case
Generally speaking, the conditions in Proposition 1 are very hard to verify, particularly because of
>
?
the term [J]S [?]H [J]S . Here we focus on a special case where [J 0 ]S = 0, and [J 0 ]H = J 0 = [J]H .
This amounts to the case where the second chain is reversible, and its transition operator is the
average of the transition operator of the first chain and the associated reverse operator. The result is
formalized in the following corollary.
Corollary 1 Let T be a reversible transition operator of a Markov chain with stationary distribution
?. Assume there is some H that satisfies
Condition I. 1> H = 0, H1 = 0, H = ?H > , and3
Condition II. T ? Q? H are valid transition matrices.
Denote A = T + Q? H, B = T ? Q? H, then
1) A preserves ?, and B is the reverse operator of A.
2
2
(f ) ? ?T2 (f ) for any f .
(f ) = ?B
2) ?A
2
(f ) < ?T2 (f ).
3) If H 6= 0, then there is some f , such that ?A
2
2
4) If A? = T + (1 + ?) Q? H is valid transition matrix, ? > 0, then ?A
(f ) ? ?A
(f ).
?
Proof. For 1), notice that ? > T = ? > , so
? > A = ? > T + ? > Q? H = ? > + 1> H = ? > ,
and similarly for B. Moreover
>
>
>
QA = QT + H = (QT ? H) = Q T ? Q? H
= (QB) ,
thus B is the reverse operator of A.
2
2
(f ) follows from Eq.5. Let J 0 = QT , J = QA. Note that [J]S = H,
(f ) = ?B
For 2), ?A
1
J 0 = QT = (QA + QB) = [QA]H = [J]H ,
2
?
2
and [?]H 0 thus H > [?]H H 0 from Proposition 1. It follows that ?A
(f ) ? ?T2 (f ) for any f .
For 3), write X = [?]H ,
?
?
?
? H = X + H > X ? H = X ? ? X ? H > X + HX ? H > HX ? .
? PS
Since X 0, HX ? H > 0, one can write X + HX ? H > = s=1 ?s es e>
s , with ?s > 0, ?s.
Thus
XS
?
>
H > X + HX ? H > H =
?s Hes (Hes ) .
s=1
Since H 6= 0, there is at least one s? , such that Hes? 6= 0. Let f = Q? XHes? , then
h
? i
1 2
>
2
?T (f ) ? ?A
(f ) = (Qf ) X ? ? X + H > X ? H
(Qf )
2
?
>
= (Qf ) X ? H > X + HX ? H > HX ? (Qf )
XS
>
>
?s Hes (Hes ) (Hes? )
= (Hes? )
s=1
X
2
4
>
= ?s kHes? k +
?s e>
> 0.
s? H Hes
?
s6=s
3
We write 1 for the S-dimensional column vector of 1?s.
4
For 4), let ?? = Q + ?? > ? QA? , then for ? > 0,
?
?
?
2
?? H = X + (1 + ?) H > X ? H
X + H > X ? H = ?? H ,
2
2
by Eq.5, we have ?A
(f ) ? ?A
(f ) for any f .
?
Corollary 1 shows that starting from a reversible Markov chain, as long as one can find a nonzero H satisfying Conditions I and II, then the asymptotic performance of the MCMC estimator is
guaranteed to improve. The next question to ask is whether such an H exists, and, if so, how to find
one. We answer this question by first looking at Condition I. The following proposition shows that
any H satisfying this condition can be constructed systematically.
Proposition 2 Let H be an S-by-S matrix. H satisfies Condition I if and only if H can be written
as the linear combination of 21 (S ? 1) (S ? 2) matrices, with each matrix of the form
>
Ui,j = ui u>
j ? uj ui , 1 ? i < j ? S ? 1.
Here u1 , ? ? ? , uS?1 are S ? 1 non-zero linearly independent vectors satisfying u>
s 1 = 0.
Proof. Sufficiency. It is straightforward to verify that each Ui,j is skew-Hermitian and satisfies
Ui,j 1 = 0. Such properties are inherited by any linear combination of Ui,j .
Necessity. We show that there are at most 21 (S ? 1) (S ? 2) linearly independent bases for all H
such that H = ?H > and H1 = 0. On one hand, any S-by-S skew-Hermitian matrix can be written
as the linear combination of 12 S (S ? 1) matrices of the form
Vi,j : {Vi,j }m,n = ? (m, i) ? (n, j) ? ? (n, i) ? (m, j) ,
where ? is the standard delta function such that ? (i, j) = 1 if i = j and 0 otherwise. However,
the constraint H1 = 0 imposes S ? 1 linearly independent constraints, which means that out of
1
2 S (S ? 1) parameters, only
1
1
S (S ? 1) ? (S ? 1) = (S ? 1) (S ? 2)
2
2
are independent.
S?1
=
2
are linearly independent.
On the other hand, selecting two non-identical vectors from u1 , ? ? ? , uS?1 results in
1
2
(S ? 1) (S ? 2) different Ui,j . It has still to be shown that these Ui,j
Assume
X
0=
?i,j Ui,j =
1?i<j?S?1
X
>
?i,j ui u>
j ? uj ui , ??i,j ? R.
1?i<j?S?1
Consider two cases: Firstly, assume u1 , ? ? ? , uS?1 are orthogonal, i.e., u>
i uj = 0 for i 6= j. For a
particular us ,
X
X
>
0=
?i,j Ui,j us =
?i,j ui u>
j ? uj ui us
1?i<j?S?1
=
X
1?i<s
1?i<j?S?1
X
?i,s ui
u>
s us +
?s,j uj
u>
s us .
s<j?S?1
Since
u>
s us 6= 0, it follows that ?i,s = ?s,j = 0, for all 1 ? i < s < j ? S ? 1. This holds for
any us , so all ?i,j must be 0, and therefore Ui,j are linearly independent by definition. Secondly, if
u1 , ? ? ? , uS?1 are not orthogonal, one can construct a new set of orthogonal vectors u
?1 , ? ? ? , u
?S?1
from u1 , ? ? ? , uS?1 through Gram?Schmidt orthogonalization, and create a different set of bases
?i,j . It is easy to verify that each U
?i,j is a linear combination of Ui,j . Since all U
?i,j are linearly
U
independent, it follows that Ui,j must also be linearly independent.
Proposition 2 confirms the existence of non-zero H satisfying Condition I. We now move to Condition II, which requires that both QT + H and QT ? H remain valid joint distribution matrices, i.e.
5
all entries must be non-negative and sum up to 1. Since 1> (QT + H) 1 = 1 by Condition I, only
the non-negative constraint needs to be considered.
It turns out that not all reversible Markov chains admit a non-zero H satisfying both Condition I and
II. For example, consider a Markov chain with only two states. It is impossible to find a non-zero
skew-Hermitian
H such that H1 = 0, because all 2-by-2 skew-Hermitian matrices are proportional
#
"
0 ?1
.
to
1
0
The next proposition gives the sufficient and necessary condition for the existence of a non-zero H
satisfying both I and II. In particular, it shows an interesting link between the existence of such H
and the connectivity of the states in the reversible chain.
Proposition 3 Assume a reversible ergodic Markov chain with transition matrix T and let J = QT .
The state transition graph GT is defined as the undirected graph with node set S = {1, ? ? ? , S} and
edge set {(i, j) : Ji,j > 0, 1 ? i < j ? S}. Then there exists some non-zero H satisfying Condition
I and II, if and only if there is a loop in GT .
Proof. Sufficiency: Without loss of generality, assume the loop is made of states 1, 2, ? ? ? , N and
edges (1, 2) , ? ? ? , (N ? 1, N ) , (N, 1), with N ? 3. By definition, J1,N > 0, and Jn,n+1 > 0 for
all 1 ? n ? N ? 1. A non-zero H can then be constructed as
?
?, if 1 ? i ? N ? 1 and j = i + 1,
?
?
?
? ??, if 2 ? i ? N and j = i ? 1,
?, if i = N and j = 1,
Hi,j =
?
?
??,
if i = 1 and j = N ,
?
?
0, otherwise.
Here
?=
min
1?n?N ?1
{Jn,n+1 , 1 ? Jn,n+1 , J1,N , 1 ? J1,N } .
Clearly, ? > 0, since all the items in the minimum are above 0. It is trivial to verify that H = ?H >
and H1 = 0.
Necessity: Assume there are no loops in GT , then all states in the chain must be organized in a tree,
following the ergodic assumption. In other word, there are exactly 2 (S ? 1) non-zero off-diagonal
elements in J. Plus, these 2 (S ? 1) elements are arranged symmetrically along the diagonal and
spanning every column and row of J.
Because the states are organized in a tree, there is at least one leaf node s in GT , with a single neighbor s0 . Row s and column s in J thus looks like rs = [? ? ? , ps,s , ? ? ? , ps,s0 , ? ? ? ] and its transpose,
respectively, with ps,s ? 0 and ps,s0 > 0, and all other entries being 0.
Assume that one wants to construct a some H, such that J ? H ? 0. Let hs be the s-th row of H.
Since rs ? hs ? 0, all except the s0 -th elements in hs must be 0. But since hs 1 = 0, the whole s-th
row, thus the s-th column of H must be 0.
Having set the s-th column and row of H to 0, one can consider the reduced Markov chain with one
state less, and repeat with another leaf node. Working progressively along the tree, it follows that all
rows and columns in H must be 0.
The indication of Proposition 3 together with 2 is that all reversible chains can be improved in terms
of asymptotic variance using Corollary 1, except those whose transition graphs are trees. In practice,
the non-tree constraint is not a problem because almost all current methods of constructing reversible
chains generate chains with loops.
3.3
Graphical interpretation
In this subsection we provide a graphical interpretation of the
Starting from a simple case, consider a reversible Markov chain
>
>
Let u1 = [1, 0, ?1] and u2 = [0, 1, ?1] . Clearly, u1 and
>
>
u1 1 = u2 1 = 0. By Proposition 2 and 3, there exists some ?
6
results in the previous sections.
with three states forming a loop.
u2 are linearly independent and
> 0, such that H = ?U12 satis-
3
3
4
2
9
5
8
6
4
7
H
9
1
8
6
=
9
5
8
9
5
6
? ?U6,8
8
4
9
5
6
? ?U5,6
3
4
? ?U4,5
6
? ?U3,4
8
9
5
8
6
+ ?U3,8
Figure 1: Illustration of the construction of larger vortices. The left hand side is a state transition
graph of a reversible Markov chain with S = 9 states, with a vortex 3 ? 8 ? 6 ? 5 ? 4 of
strength ? inserted. The corresponding H can be expressed as the linear combination of Ui,j , as
shown on the right hand side of the graph. We start from the vortex 8 ? 6 ? 9 ? 8, and add
one vortex a time. The dotted lines correspond to edges on which the flows cancel out when a new
vortex is added. For example, when vortex 6 ? 5 ? 9 ? 6 is added, edge 9 ? 6 cancels edge
6 ? 9 in the previous vortex, resulting in a larger vortex with four states. Note that in this way one
can construct vortices which do not include state 9, although each Ui,j is a vortex involving 9.
>
fies Condition I and II, with U1,2 = u1 u>
2 ? u2 u1 . Write U1,2 and J + H in explicit form,
"
#
"
#
0
1 ?1
p1,1
p1,2 + ? p1,3 ? ?
0
1 , J + H = p2,1 ? ?
p2,2
p2,3 + ? ,
U1,2 = ?1
1 ?1
0
p3,1 + ? p3,2 ? ?
p3,3
with pi,j being the probability of the consecutive states being i, j. It is clear that in J + H, the
probability of jumps 1 ? 2, 2 ? 3, and 3 ? 1 is increased, and the probability of jumps in the
opposite direction is decreased. Intuitively, this amounts to adding a ?vortex? of direction 1 ? 2 ?
3 ? 1 in the state transition. Similarly, the joint probability matrix for the reverse operator is J ?H,
which adds a vortex in the opposite direction. This simple case also gives an explanation of why
adding or subtracting non-zero H can only be done where a loop already exists, since the operation
requires subtracting ? from all entries in J corresponding to edges in the loop.
In the general case, define S ? 1 vectors u1 , ? ? ? , uS?1 as
us = [0, ? ? ? , 0,
1
, 0, ? ? ? , 0, ?1]> .
s-th element
It is straightforward to see that u1 , ? ? ? , uS?1 are linearly independent and u>
s 1 = 0 for all s, thus
>
any H satisfying Condition I can be represented as the linear combination of Ui,j = ui u>
j ? uj ui ,
with each Ui,j containing 1?s at positions (i, j), (j, S), (S, i), and ?1?s at positions (i, S), (S, j),
(j, i). It is easy to verify that adding ?Ui,j to J amounts to introducing a vortex of direction i ? j ?
S ? i, and any vortex of N states (N ? 3) s1 ? s2 ? ? ? ? ? sN ? s1 can be represented by the
PN ?1
linear combination n=1 Usn ,sn+1 in the case of state S being in the vortex and assuming sN = S
PN ?1
without loss of generality, or UsN ,s1 + n=1 Usn ,sn+1 if S is not in the vortex, as demonstrated in
Figure 1. Therefore, adding or subtracting an H to J is equivalent to inserting a number of vortices
into the state transition map.
4
An example
Adding vortices to the state transition graph forces the Markov chain to move in loops following
pre-specified directions. The benefit of this can be illustrated in the following example. Consider a
reversible Markov chain with S states forming a ring, namely from state s one can only jump to s?1
or s 1, with ? and being the mod-S summation and subtraction. The only possible non-zero H
PS?1
in this example is of form ? s=1 Us,s+1 , corresponding to vortices on the large ring.
We assume uniform stationary distribution ? (s) = S1 . In this case, any reversible chain behaves
like a random walk. The chain which achieves minimal asymptotic variance is the one with the
probability of both jumping forward and backward being 12 . The expected number of steps for
2
this chain to reach the state S2 edges away is S4 . However, adding the vortex reduces this number to
7
1.0
0.8
Without vortex
0.6
0.4
0.2
0.0
100
200
300
400
500
600
-0.2
Without vortex
With vortex
HaL
-0.4
HbL
With vortex
HcL
Figure 2: Demonstration of the vortex effect: (a) and (b) show two different, reversible Markov
chains, each containing 128 states connected in a ring. The equilibrium distribution of the chains is
depicted by the gray inner circles; darker shades correspond to higher probability. The equilibrium
distribution of chain (a) is uniform, while that of (b) contains two peaks half a ring apart. In addition,
the chains are constructed such that the probability of staying in the same state is zero. In each
case, two trajectories, of length 1000, are generated from the chain with and without the vortex,
starting from the state pointed to by the arrow. The length of the bar radiating out from a given
state represents the relative frequency of visits to that state, with red and blue bars corresponding
to chains with and without vortex, respectively. It is clear from the graph that trajectories sampled
from reversible chains spread much slower, with only 1/5 of the states reached in (a) and 1/3 in (b),
and the trajectory in (b) does not escape from the current peak. On the other hand, with vortices
added, trajectories of the same length spread over all the states, and effectively explore both peaks
of the stationary distribution in (b). The plot (c) show the correlation of function values (normalized
by variance) between two states ? time steps apart, with ? ranging from 1 to 600. Here we take
s
the Markov chains from (b) and use function f (s) = cos 4? ? 128
. When vortices are added, not
only do the absolute values of the correlations go down significantly, but also their signs alternate,
indicating that these correlations tend to cancel out in the sum of Eq.5.
S
roughly 2?
for large S, suggesting that it is much easier for the non-reversible chain to reach faraway
states, especially for large S. In the extreme case, when ? = 21 , the chain cycles deterministically,
reducing asymptotic variance to zero. Also note that the reversible chain here has zero probability
of staying in the current state, thus cannot be further improved using Peskun?s theorem.
Our intuition about why adding vortices helps is that chains with vortices move faster than the
reversible ones, making the function values of the trajectories less correlated. This effect is demonstrated in Figure 2.
5
Conclusion
In this paper, we have presented a new way of converting a reversible finite Markov chain into a nonreversible one, with the theoretical guarantee that the asymptotic variance of the MCMC estimator
based on the non-reversible chain is reduced. The method is applicable to any reversible chain whose
states are not connected through a tree, and can be interpreted graphically as inserting vortices into
the state transition graph.
The results confirm that non-reversible chains are fundamentally better than reversible ones. The
general framework of Proposition 1 suggests further improvements of MCMC?s asymptotic performance, by applying other results from matrix analysis to asymptotic variance reduction. The
combined results of Corollary 1, and Propositions 2 and 3, provide a specific way of doing so, and
pose interesting research questions. Which combinations of vortices yield optimal improvements
for a given chain? Finding one of them is a combinatorial optimization problem. How can a good
combination be constructed in practice, using limited history and computational resources?
8
References
[1] R.P. Wen, ?Properties of the Matrix Inequality?, Journal of Taiyuan Teachers College, 2005.
[2] R. Mathias, ?Matrices With Positive Definite Hermitian Part: Inequalities And Linear Systems?, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.
1.1.33.1768, 1992.
[3] L.H. Li, ?A New Proof of Peskun?s and Tierney?s Theorems using Matrix Method?, Joint
Graduate Students Seminar of Department of Statistics and Department of Biostatistics, Univ.
of Toronto, 2005.
[4] R.M. Neal, ?Improving asymptotic variance of MCMC estimators: Non-reversible chains are
better?, Technical Report No. 0406, Department of Statistics, Univ. of Toronto, 2004.
[5] I. Murray, ?Advances in Markov chain Monte Carlo methods?, M. Sci. thesis, University College London, 2007.
[6] R.M. Neal, ?Bayesian Learning for Neural Networks?, Springer, 1996.
[7] J. Kenney and E.S. Keeping, ?Mathematics of Statistics?, van Nostrand, 1963.
[8] C. Andrieu, N. de Freitas, A. Doucet, and M.I. Jordan, ?An Introduction to MCMC for Machine Learning?, Machine Learning, 50, 5-43, 2003.
[9] Szakdolgozat, ?The Mixing Rate of Markov Chain Monte Carlo Methods and some Applications of MCMC Simulation in Bioinformatics?, M.Sci. thesis, Eotvos Lorand University,
2006.
[10] P.H. Peskun, ?Optimum Monte-Carlo sampling using Markov chains?, Biometrika, vol. 60, pp.
607-612, 1973.
[11] L. Tierney, ?A note on Metropolis Hastings kernels for general state spaces?, Ann. Appl.
Probab. 8, 1-9, 1998.
[12] S. Duane, A.D. Kennedy, B.J. Pendleton and D. Roweth, ?Hybrid Monte Carlo?, Physics Letters B, vol.195-2, 1987.
[13] J.S. Liu, ?Peskun?s theorem and a modified discrete-state Gibbs sampler?, Biometria, vol.83,
pp.681-682, 1996.
9
| 4037 |@word h:4 confirms:2 simulation:1 r:2 covariance:1 reduction:1 necessity:2 liu:1 contains:2 initial:1 selecting:1 freitas:1 current:4 comparing:4 yet:1 written:5 must:7 j1:3 plot:1 progressively:1 stationary:12 half:1 fewer:1 leaf:2 item:1 provides:1 node:3 toronto:2 firstly:1 along:2 constructed:4 prove:2 hermitian:8 expected:1 kenney:2 roughly:2 p1:3 becomes:2 moreover:1 biostatistics:1 interpreted:2 proposing:1 finding:1 guarantee:3 every:2 exactly:1 biometrika:1 t1:1 positive:3 limit:1 vortex:36 approximately:1 burn:1 plus:2 suggests:4 appl:1 co:1 limited:1 bi:1 graduate:1 practice:2 definite:3 significantly:1 pre:2 word:1 cannot:2 close:1 operator:17 context:1 impossible:1 applying:1 equivalent:5 map:1 demonstrated:2 graphically:2 straightforward:2 starting:4 go:1 ergodic:8 formalized:1 immediately:2 estimator:15 u6:1 s6:1 construction:1 suppose:1 element:4 idsia:6 satisfying:8 particularly:2 asymmetric:2 u4:1 inserted:1 connected:3 sun:1 cycle:1 ordering:1 decrease:1 intuition:2 ui:25 depend:1 efficiency:1 manno:3 joint:5 represented:3 univ:2 fast:1 london:1 monte:6 doi:1 pendleton:1 whose:7 larger:2 valued:1 otherwise:2 ability:1 statistic:3 indication:1 subtracting:3 if2:1 inserting:5 loop:9 mixing:2 p:6 optimum:1 converges:1 staying:4 ring:4 help:1 pose:1 qt:8 jurgen:1 eq:5 p2:3 switzerland:3 direction:7 owing:1 adjacency:1 explains:1 hx:7 preliminary:1 proposition:13 secondly:1 summation:1 strictly:1 hold:1 sufficiently:1 considered:1 cb:1 equilibrium:2 u3:2 achieves:1 consecutive:4 faustino:1 applicable:3 combinatorial:1 viewdoc:1 create:1 clearly:2 modified:2 pn:2 fluctuate:1 corollary:5 focus:2 improvement:2 indicates:1 greatly:1 typically:1 a0:1 special:3 field:1 once:1 construct:3 having:1 psu:1 sampling:1 identical:1 represents:1 look:1 cancel:3 t2:3 report:1 fundamentally:3 simplify:1 escape:1 wen:1 hcl:1 preserve:1 interest:1 satis:1 extreme:1 behind:1 chain:77 edge:7 encourage:1 necessary:4 jumping:1 orthogonal:3 tree:7 walk:1 circle:1 theoretical:4 minimal:1 roweth:1 increased:1 column:6 juergen:1 introducing:1 entry:3 uniform:2 answer:1 teacher:1 combined:1 peak:3 off:1 physic:1 invertible:3 together:2 connectivity:1 thesis:2 central:1 containing:2 admit:1 li:1 suggesting:1 de:1 student:1 matter:1 depends:1 vi:2 multiplicative:1 h1:5 doing:1 red:1 start:2 reached:1 complicated:1 inherited:1 square:2 accuracy:2 variance:28 correspond:2 yield:1 bayesian:1 carlo:6 trajectory:8 kennedy:1 history:1 reach:4 definition:4 frequency:1 pp:2 associated:2 proof:6 galleria:3 peskun:7 sampled:2 proved:1 ask:1 lim:3 subsection:1 organized:3 ea:2 back:1 higher:1 follow:1 improved:3 sufficiency:2 arranged:1 done:1 generality:2 just:1 correlation:3 hand:5 hastings:2 working:1 reversible:44 gray:1 hal:1 effect:2 concept:1 verify:5 normalized:1 andrieu:1 symmetric:5 nonzero:1 neal:5 illustrated:1 tino:1 illustrative:1 criterion:1 orthogonalization:1 meaning:1 ranging:1 recently:1 behaves:1 ji:1 interpretation:5 he:8 gibbs:1 ai:1 mathematics:1 similarly:2 pointed:1 gt:4 base:2 add:2 apart:3 reverse:8 schmidhuber:1 nostrand:1 inequality:2 yi:2 minimum:1 converting:3 subtraction:1 period:1 ii:8 semi:1 reduces:1 ing:1 technical:1 faster:1 characterized:1 long:2 visit:1 involving:1 expectation:3 kernel:1 achieved:1 background:1 want:1 addition:1 decreased:2 rest:1 unlike:1 tend:1 undirected:1 flow:1 mod:1 jordan:1 symmetrically:1 enough:1 concerned:1 easy:2 variety:1 affect:2 identified:1 opposite:2 inner:1 idea:2 whether:1 speaking:3 qa0:2 generally:1 clear:3 amount:4 transforms:1 
u5:1 s4:1 reduced:6 generate:1 http:1 notice:1 dotted:1 sign:1 estimated:1 delta:1 popularity:1 track:2 blue:1 discrete:3 write:6 vol:3 ist:1 four:1 enormous:1 tierney:3 backward:1 graph:11 asymptotically:1 sum:2 noticing:1 letter:1 almost:1 p3:3 hi:1 followed:1 gomez:1 guaranteed:1 strength:1 constraint:4 u1:15 min:1 qb:3 citeseerx:1 department:3 alternate:1 combination:9 smaller:2 remain:1 metropolis:2 making:1 s1:4 intuitively:1 resource:1 skew:6 turn:1 know:1 operation:1 away:2 schmidt:1 slower:1 existence:4 original:1 jn:3 denotes:2 include:1 graphical:5 murray:2 uj:6 especially:1 move:4 question:3 added:4 already:1 diagonal:2 link:1 sci:2 trivial:1 spanning:1 assuming:1 length:4 illustration:1 providing:1 demonstration:1 mostly:1 stated:1 negative:2 markov:30 finite:6 looking:1 arbitrary:1 namely:2 cast:1 specified:3 qa:9 bar:2 u12:1 including:1 explanation:1 force:1 hybrid:1 improve:1 concludes:1 sn:4 review:1 probab:1 asymptotic:33 relative:1 loss:2 interesting:5 proportional:1 sufficient:3 imposes:1 s0:4 systematically:1 pi:1 row:7 qf:7 summary:1 repeat:1 transpose:1 keeping:1 side:2 wide:1 neighbor:1 absolute:1 benefit:1 van:1 transition:22 valid:3 gram:1 fies:1 forward:1 made:1 jump:3 confirm:1 doucet:1 continuous:1 why:2 ca:2 improving:5 complex:1 constructing:1 domain:1 diag:1 main:2 spread:2 linearly:9 arrow:1 whole:1 s2:2 x1:1 darker:1 seminar:1 position:2 nonreversible:4 wish:1 explicit:1 deterministically:1 theorem:7 formula:1 down:1 shade:1 xt:10 specific:1 explored:1 x:2 admits:1 exists:4 adding:7 effectively:1 gained:1 easier:1 depicted:1 explore:1 forming:2 faraway:1 expressed:1 tracking:1 u2:4 springer:1 ch:6 duane:1 satisfies:3 ann:1 hard:1 determined:1 except:2 reducing:2 sampler:1 usn:3 lemma:3 mathias:1 e:1 indicating:1 college:2 latter:1 bioinformatics:1 mcmc:20 correlated:1 |
3,355 | 4,038 | Adaptive Multi-Task Lasso: with Application to
eQTL Detection
Seunghak Lee, Jun Zhu and Eric P. Xing
School of Computer Science, Carnegie Mellon University
{seunghak,junzhu,epxing}@cs.cmu.edu
Abstract
To understand the relationship between genomic variations among population and
complex diseases, it is essential to detect eQTLs which are associated with phenotypic effects. However, detecting eQTLs remains a challenge due to complex
underlying mechanisms and the very large number of genetic loci involved compared to the number of samples. Thus, to address the problem, it is desirable to
take advantage of the structure of the data and prior information about genomic
locations such as conservation scores and transcription factor binding sites.
In this paper, we propose a novel regularized regression approach for detecting
eQTLs which takes into account related traits simultaneously while incorporating
many regulatory features. We first present a Bayesian network for a multi-task
learning problem that includes priors on SNPs, making it possible to estimate the
significance of each covariate adaptively. Then we find the maximum a posteriori
(MAP) estimation of regression coefficients and estimate weights of covariates
jointly. This optimization procedure is efficient since it can be achieved by using a projected gradient descent and a coordinate descent procedure iteratively.
Experimental results on simulated and real yeast datasets confirm that our model
outperforms previous methods for finding eQTLs.
1 Introduction
One of the fundamental problems in computational biology is to understand associations between
genomic variations and phenotypic effects. The most common genetic variations are single nucleotide polymorphisms (SNPs), and many association studies have been conducted to find SNPs
that cause phenotypic variations such as diseases or gene-expression traits [1]. However, association
mapping of causal QTLs or eQTLs remains challenging as the variation of complex traits is a result
of contributions of many genomic variations. In this paper, we focus on two important problems to
detect eQTLs. First, we need to find methods to take advantage of the structure of data for finding
association SNPs from high dimensional eQTL datasets when p ? N , where p is the number of
SNPs and N is the sample size. Second, we need techniques to take advantage of prior biological
knowledge to improve the performance of detecting eQTLs.
To address the first problem, Lasso is a widely used technique for high-dimensional association
mapping problems, which can yield a sparse and easily interpretable solution via an ?1 regularization
[2]. However, despite the success of Lasso, it is limited to considering each trait separately. If we
have multiple related traits it would be beneficial to estimate eQTLs jointly since we can share
information among related traits. For the second problem, Fig. 1 shows some prior knowledge on
SNPs in a genome including transcription factor binding sites (TFBS), 5? UTR and exon, which play
important roles for the regulation of genes. For example, TFBS controls the transcription of DNA
sequences to mRNAs. Intuitively, if SNPs are located on these regions, they are more likely to be
true eQTLs compared to those on regions without such annotations since they are related to genes or
gene regulations. Thus, it would be desirable to penalize regression coefficients less corresponding
1
SNPs
Chromosome
Transcription factor binding site
Exon
5? UTR
Annotation
Figure 1: Examples of prior knowledge on SNPs including transcription factor binding sites, 5? UTR and
exon. Arrows represent SNPs and we indicate three genomic annotations on the chromosome. Here association
SNPs are denoted by red arrows (best viewed in color), showing that SNPs on regions with regulatory features
are more likely to be associated with traits.
to SNPs having significant annotations such as TFBS in a regularized regression model. Again, the
widely used Lasso is limited to treating all SNPs equally.
This paper presents a novel regularized regression approach, called adaptive multi-task Lasso, to
effectively incorporate both the relatedness among multiple gene-expression traits and useful prior
knowledge for challenging eQTL detection. Although some methods have been developed for either
adaptive or multi-task learning, to the best of our knowledge, adaptive multi-task Lasso is the first
method that can consider prior information on SNPs and multi-task learning simultaneously in one
single framework. For example, Lirnet uses prior knowledge on SNPs such as conservation scores,
non-synonymous coding and UTR regions for a better search of association mappings [3]. However,
Lirnet considers the average effects of SNPs on gene modules by assuming that association SNPs are
shared in a module. This approach is different from multi-task learning where association SNPs are
found for each trait while considering group effects over multiple traits. To find genetic markers that
affect correlated traits jointly, the graph-guided fused Lasso [4] was proposed to consider networks
over multiple traits within an association analysis. However, graph-guided fused Lasso does not
incorporate prior knowledge of genomic locations.
Unlike other methods, we define the adaptive multi-task Lasso as finding a MAP estimate of a
Bayesian network, which provides an elegant Bayesian interpretation of our approach; the resultant
optimization problem is efficiently solved with an alternating minimization procedure. Finally, we
present empirical results on both simulated and real yeast eQTL datasets, which demonstrates the
advantages of adaptive multi-task Lasso over many other competitors.
2 Problem Definition: Adaptive Multi-task Lasso
Let Xij ? {0, 1, 2} denote the number of minor alleles at the j-th SNP of i-th individual for
i = 1, . . . , N and j = 1, . . . , p. We have K related gene traits and Yik represents the gene
expression level of k-th gene of i-th individual for k = 1, . . . , K. In our setting, we assume
that the K traits are related to each other and we explore the relatedness in a multi-task learning
framework. To achieve the relatedness among tasks via grouping effects [5], we can use any
clustering algorithms such as spectral clustering or hierarchical clustering. In association mapping
problems, these clusters can be viewed as clusters of genes which consist of regulatory networks or
pathways [4]. We treat the problem of detecting eQTLs as a linear regression problem. The general
setting includes one design matrix X and multiple tasks (genes) for k = 1, . . . , K,
Y k = X? k + ?
(1)
where
? is a standard
noise. We further assume that Xij ?s are standardized such that
PGaussian
P
2
X
/N
=
1, and consider a model without an intercept.
X
/N
=
0
and
ij
ij
i
i
Now, the open question is how we can devise an appropriate objective function over ? that could effectively consider the desirable group effects over multiple traits and incorporate useful prior knowledge, as we have stated. To explain the motivation of our work and provide a useful baseline that
grounds the proposed approach, we first briefly review the standard Lasso and multi-task Lasso.
2.1 Lasso and Multi-task Lasso
Lasso [2] is a technique for estimating the regression coefficients ? and has been widely used
for association mapping problems. Mathematically, it solves the ?1 -regularized least square problem,
p
X
1
?? = argmin kY ? X?k22 + ?
?j |?j |
2
?
j=1
2
(2)
where ? determines the degree of regularization of nonzero ?j . The scaling parameters ?j ? [0, 1]
are usually fixed (e.g., unit ones) or set by cross-validation, which can be very difficult when p is
large. Due to the singularity at the origin, the ?1 regularization (Lasso penalty) can yield a stable and
sparse solution, which is desirable for association mapping problems because in most cases we have
p ? N and there exists only a small number of eQTLs. It is worth mentioning that Lasso estimates
are posterior mode estimates under a multivariate independent Laplace prior for ? [2].
As we can see from problem (2), the standard Lasso does not distinguish the inputs and regression
coefficients from different tasks. In order to capture some desirable properties (e.g., shared
structures or sparse patterns) among multiple related tasks, the multi-task Lasso was proposed [5],
which solves the problem,
min
?
p
K
X
1X
kY k ? X? k k22 + ?
?j k?j k2
2 k=1
j=1
(3)
qP
k 2
where k?j k2 =
k (?j ) is the ?2 -norm. This model encourages group-wise sparsity across
related tasks via the ?1 /?2 regularization. Again, the solution of Eq. (3) can be interpreted as a MAP
estimate under appropriate priors with fixed scaling parameters.
Multi-task Lasso has been applied (with some extensions) to perform association analysis [4]. However, as we have stated, the limitation of current approaches is that they do not incorporate the useful
prior knowledge. The proposed adaptive multi-task Lasso, as to be presented, is an extension of the
multi-task Lasso to perform joint group-wise and within-group feature selection and incorporate the
useful prior knowledge for effective association analysis.
2.2 Adaptive Multi-task Lasso
Now, we formally introduce the adaptive multi-task Lasso. For clarity, we first define the sparse
multi-task Lasso with fixed scaling parameters, which will be a sub-problem of the adaptive
multi-task Lasso, as we shall see. Specifically, sparse multi-task Lasso solves the problem,
min
?
p
p
K
K
X
X
X
1X
kY k ? X? k k22 + ?1
|?jk | + ?2
?j
?j k?j k2
2
j=1
j=1
k=1
(4)
k=1
where ? and ? are the scaling parameters for the ?1 and ?1 /?2 -norm, respectively. The regularization
parameters ?1 and ?2 can be determined by cross or holdout validation. Obviously, this model subsumes the standard Lasso and multi-task Lasso, and it has three advantages over previous models.
First, unlike the multi-task Lasso, which contains the ?l /?2 -norm only to achieve group-wise sparsity, the ?1 -norm in Eq. (4) can achieve sparsity among SNPs within a group. This property is useful
when K tasks are not perfectly related and we need additional sparsity in each block of k?j k2 . In
section 4, we demonstrate the usefulness of the blended regularization. The hierarchical penalization [6] can achieve a smooth shrinkage effect for variables within a group, but it cannot achieve
within-group sparsity. Second, unlike Lasso we induce group sparsity across multiple related traits.
Finally, as to be extended, unlike Lasso and multi-task Lasso which treat ?j equally or with a fixed
scaling parameter, we can adaptively penalize each ?j according to prior knowledge on covariates
in such a way that SNPs having desirable features are less penalized (see Fig. 1 for details of prior
knowledge on SNPs).
To incorporate the prior knowledge as we have stated, we propose to automatically learn the scaling
parameters (?, ?) from data. To that end, we define ? and ? as mixtures of features on j-th SNP, i.e.
?j =
X
?t ftj and ?j =
X
?t ftj ,
(5)
t
t
where ftj is t-th feature for j-th SNP. For example ftj can be a conservation score of j-th SNP or one
if the SNP is located on TFBS, zero otherwise. To avoid scaling issues, we assume each feature is
P
standardized, i.e., j ftj = 1, ?t. Since we are interested in the relative contributions from different
P
P
features, we further add the constraints that t ?t = 1 and t ?t = 1. These constraints can be
interpreted as a regularization on the feature weights ? ? 0 and ? ? 0.
Although using the definitions (5) in problem (4) and jointly estimating ? and feature weights (?, ?)
can give a solution of adaptive multi-task learning, the resultant method would be lack of an elegant Bayesian interpretation, which is a desirable property that can make the framework more
3
flexible and easily extensible. Recall that the Lasso
?
estimates can be interpreted as MAP estimates under
f
X
1
Laplace priors. Similarly, to achieve a framework
?
that enjoys an elegant Bayesian interpretation, we
?
Y
define a Bayesian network and treat the adaptive
?
multi-task learning problem as finding its MAP
f
T
?
estimate. Specifically, we build a Bayesian network
as shown in Fig. 2 in order to compute the MAP
estimate of ? under adaptive scaling parameters, Figure 2: Graphical model representation of
{?, ?}. We define the conditional probability of ? adaptive multi-task Lasso.
given scaling parameters as,
P (?|?, ?) =
p
p
K
Y
Y
Y
1
exp (??j |?jk |) ?
exp (??j k?j k2 )
Z(?, ?) j=1
j=1
k=1
where Z(?, ?) is a normalization factor, and P (Y |X, ?) ? N (X?, ?), where ? is the identity
matrix. Although in principle we can treat ? and ? as random variables and define a fully Bayesian
approach, for simplicity, we define ? and ? as deterministic functions of ? and ? as in Eq. (5).
Extension to a fully Bayesian approach is our future work.
Now we define the adaptive multi-task Lasso as finding the MAP estimation of ? and simultaneously estimating the feature weights (?, ?), which is equivalent to solving the optimization problem,
min
?,?,?
p
p
K
K
X
X
X
1X
?j k?j k2 + log Z(?, ?),
?j
|?jk | + ?2
kY k ? X? k k22 + ?1
2 k=1
j=1
j=1
k=1
(6)
where ? and ? are related to ? and ? through Eq. (5) and subject to the constraints as defined above.
Remark 1 Although we can interpret problem (4) as a MAP estimate of ? under appropriate priors when
scaling parameters (?, ?) are fixed, it does not enjoy an elegant Bayesian interpretation if we perform joint
estimation of ? and the scaling parameters (?, ?) because it ignores normalization factors of the appropriate
priors. Lee et al. [3] used this approach where a regularized regression model is optimized over scaling
parameters and ? jointly. Therefore, their method does not have an elegant Bayesian interpretation. Moreover,
as we have stated, Lee et al. [3] did not consider grouping effects over multiple traits.
Remark 2 Our method also differs from the adaptive Lasso [7] , transfer learning with meta-priors [8] and
the Bayesian Lasso [9]. First, although both adaptive Lasso and our method use adaptive parameters for
penalizing regression coefficients, we learn adaptive parameters from prior knowledge on covariates in a multitask setting while adaptive Lasso uses ordinary least square solutions for adaptive parameters in a single task
setting. Second, the method of transfer learning with meta-priors [8] is similar to our method in a sense that
both use prior knowledge with multiple related tasks. However, we couple related tasks via ?1 /?2 penalty while
they couple tasks via transferring hyper-parameters among them. Thus we have group sparsity across tasks
as well as sparsity in each group but they cannot induce group sparsity across different tasks. Finally, the
Bayesian Lasso [9] does not have the grouping effects in multiple traits and the priors used usually do not
consider domain knowledge.
3 Optimization: an Alternating Minimization Approach
Now, we solve the adaptive multi-task Lasso problem (6). First, since the normalization factor Z is
hard to compute, we use its upper bound, as given by,
Z?
p Z
Y
j=1
exp (?k?j k2 )d?
RK
p
Y 2 K
Y
?
=
?
j
j
j=1
K?1
2
?( K+1
)2K Y 2 K
2
.
(?j K)K
?j
j
(7)
This integral result is due to normalization constant of K dimensional multivariate Laplace distribution [10, 11]. Using this upper bound, the learning problem is to minimize an upper bound of the
objective function in problem (6), which will be denoted by L(?, ?, ?) henceforth. Although L is
not joint convex over ?, ? and ?, it is convex over ? given {?, ?} and convex over {?, ?} given ?.
We use an alternating optimization procedure which (1) minimizes the upper bound L of problem (6)
over {?, ?} by fixing ?; and (2) minimizes L over ? by fixing {?, ?} iteratively until convergence
[12]. Both sub-problems are convex and can be solved efficiently via a projected gradient descent
method and a coordinate descent method, respectively.
4
For the first step of optimizing L over ? and ?, the sub-problem is to solve
min
??P? ,??P?
XX
j
k
X
(?K log ?j + ?j k?j k2 ) ,
? log ?j + ?j |?jk | +
j
where P? , {? :
t ?t = 1, ?t ? 0, ?t} is a simplex over ?, likewise for P? . ? and ? are
functions of ? and ? as defined in Eq. (5). This constrained problem is convex and can be solved by
using a gradient descent algorithm combined with a projection onto a simplex sub-space, which can
be efficiently done [13]. Since ? and ? are not coupled, we can learn each of them separately.
P
For the second sub-problem that optimizes L over ? given fixed feature weights (?, ?), it is exactly
the optimization problem (4). We can solve it using a coordinate descent procedure, which has been
used to optimize the sparse group Lasso [14]. Our problem is different from the sparse group Lasso
in the sense that the sparse group Lasso includes group penalty over multiple covariates for a single
trait, while adaptive multi-task Lasso considers group effects over multiple traits. Here we solve
problem (4) using a modified version of the algorithm proposed for the sparse group Lasso.
As summarized in Algorithm 1, the general optimization procedure is as follows: for each j, we
check the group sparsity condition that ?j = 0. If it is true, no update is needed for ?j . Otherwise,
we check whether ?jk = 0 for each k. If it is true that ?jk = 0, no update is needed for ?jk ; otherwise,
we optimize problem (4) over ?jk with all other coefficients fixed. This one-dimensional optimization problem can be efficiently solved by using a standard optimization method. This procedure is
continued until a convergence condition is met.
More specifically, we first obtain the optimal conditions for problem (4) by computing the subgradient of its objective function with respect to ?jk and set it to zero:
?XjT (Y k ? X? k ) + ?2 ?j gjk + ?1 ?j hkj = 0,
(8)
where g and h are sub-gradients of the ?1 /?2 -norm and the ?1 -norm, respectively. Note that gjk =
?jk
k?j k2
if ?j 6= 0, otherwise kgj k2 ? 1; and hkj = sign(?jk ) if ?jk 6= 0, otherwise hkj ? [?1, 1].
Then, we check the group sparsity that ?j = 0. To do that, we set ?j = 0 in Eq. (8), and we have,
XjT Y k ?XjT
X
Xr ?rk = ?2 ?j gjk +?1 ?j hkj , and ||gj ||22 =
r6=j
1
K
X
?22 ?2j
k=1
(XjT Y k ? XjT
X
Xr ?rk ? ?1 ?j hkj )2 .
r6=j
According to subgradient conditions, we need to have a gj that satisfies the less than inequality
kgj k22 < 1; otherwise, ?j will be non-zero. Since gj is a function of hj , it suffices to check whether
the minimal square ?2 -norm of gj is less than 1. Therefore, we solve the minimization problem of
kgj k22 w.r.t hj , which gives the optimal hj as,
hkj
=
?
?
ck
j
?1 ?j
ck
if | ?1j?j | ? 1
k
? sign( cj )
?1 ?j
(9)
otherwise
P
where ckj =XjT Y k ? XjT r6=j Xr ?rk . If the minimal kgj k22 is less than 1, then ?j is zero and no
update is needed; otherwise, we continue to the next step of checking whether ?jk=0, ?k, as follows.
Again, we start by assuming ?jk is zero. By setting ?jk = 0 in Eq. (8), we have,
XjT Y k ? XjT
X
Xr ?rk = ?1 ?j hkj , and hkj =
r6=j
X
1
Xr ?rk ).
(XjT Y k ? XjT
? 1 ?j
r6=j
According to the definition of the subgradient hkj , it needs to satisfy the condition that |hkj | < 1;
otherwise, ?jk will be non-zero. This checking step can be easily done. After the check, if we have
?jk 6= 0, the problem (4) becomes an one-dimensional optimization problem with respect to ?jk , and
the solution can be obtained using existing optimization algorithms (e.g. optimize function in the
R). We used majorize-minimize algorithm with gradient descent [15].
With the above two steps, we iteratively optimize (?, ?) by fixing ? and optimize ? by fixing feature
weights until convergence. Note that the parameters ?1 and ?2 in Eq. (4), which determine sparsity
levels, are determined by cross or hold-out validation.
5
Input : X ? RN ?p ; Y ? RN ?K ; ? ? Rp ; ? ? Rp ; and ? init ? Rp?K
Output: ? ? Rp?K
? ? ? init ;
Iterate this procedure until convergence;
for j ? 1 to p do
PK
k
k 2
k
k
m ? 21 2
k=1 (cj ? ?1 ?j hj ) where cj and hj are computed as in Eq. (9);
? ?
2 j
if m < 1 then ?jk = 0, for all k = 1, . . . K;
else for k ? 1 to K do
q ? ? 1? |XjT (Y k ? X? k ) + XjT Xj ?jk |;
1 j
if q < 1 then ?jk = 0;
else Solve the following one-dimensional optimization problem:
?jk ? argmin 12 kY k ? X? k k22 + ?1 ?j |?jk | + ?2 ?j k?j k2 ;
?k
j
end
end
Algorithm 1: Optimization algorithm for Equation (4) with fixed scaling parameters.
4 Simulation Study
To confirm the behavior of our model, we run the adaptive multi-task Lasso and other methods on
our simulated dataset (p=100, K=10). We first randomly select 100 SNPs from 114 yeast genotypes
from the yeast eQTL dataset [16]. Following the simulation study in Kim et al. [4], we assume that
some SNPs affect biological networks including multiple traits, and true causal SNPs are selected
by the following procedure. Three sets of randomly selected four SNPs are associated with three
trait clusters (1 ? 3), (4 ? 6), (7 ? 10), respectively. One SNP is associated with two clusters
(1 ? 3) and (4 ? 6), and one causal SNP is for all traits (1 ? 10). For all association SNPs we
set identical association strength from 0.3 to 1. Traits are generated by Y k = X? k + ?, for all
k = 1, . . . , 10 where ? follows the standard normal distribution. We make 10 features (f1 ? f10 ),
of which six are continuous and four are discrete. For the first three continuous features (f1 ? f3 ),
the feature value is drawn from s(N (2, 1)) if a SNP is associated with any traits; otherwise from
1
s(N (1, 1)), where s(x) = 1+exp(x)
is the sigmoid function. For the other three continuous features
(f4 ?f6 ), the value is drawn from s(N (2, 0.5)) if a SNP is associated with any traits; otherwise from
s(N (1, 0.5)). Finally, for the discrete features (f7 ? f10 ), the value is set to s(2) with probability
0.8 if a SNP is associated with any traits; otherwise to s(1). We standardize all the features.
True ?
AML
A + l1/l2
SML
Single SNP
l1/l?
Lasso
1
10
10
10
10
10
10
10
0.9
20
20
20
20
20
20
20
0.8
30
30
30
30
30
30
30
0.7
40
40
40
40
40
40
40
0.6
50
50
50
50
50
50
50
0.5
60
60
60
60
60
60
60
0.4
70
70
70
70
70
70
70
0.3
80
80
80
80
80
80
80
0.2
90
90
90
90
90
90
90
100
100
2 4 6 8 10
100
2 4 6 8 10
100
2 4 6 8 10
100
2 4 6 8 10
0.1
100
100
2 4 6 8 10
2 4 6 8 10
2 4 6 8 10
0
Figure 3: Results of the ? matrix estimated by different methods. For visualization, we present normalized
absolute values of regression coefficients and darker colors imply stronger association with traits. For each
matrix, X-axis represents traits (1-10) and Y-axis represents SNPs (1-100). True ? is shown in the left.
Fig. 3 shows the estimated ? matrix by various methods including AML (adaptive multi-task Lasso),
SML (sparse multi-task Lasso which is AML without adaptive weights), A+?1 /?2 (AML without
Lasso penalty), Single SNP [17], Lasso and ?1 /?? (multi-task learning with ?1 /?? norm). In this
figure, X-axis represents traits (1-10) and Y-axis represents SNPs (1-100). Note that regression
parameters (e.g. ?1 and ?2 for AML) were determined by holdout validation, and we set association
strength to 0.3. We also used hierarchical clustering with cutoff criterion 0.8 prior to run AML,
SML, A+?1 /?2 and ?1 /?? , and Single SNP and Lasso were analyzed for each trait separately.
We investigate the effect of Lasso penalty in our model by comparing the results of AML and
A+?1 /?2 . While AML is slightly more efficient than A+?1 /?2 in finding association SNPs, both
6
work very well for this task. It is not surprising since hierarchical clustering reproduced true trait
clusters and true ? could be detected without considering single SNP level sparsity in each group.
To further validate the effectiveness of Lasso penalty, we run AML and A+?1 /?2 without a priori
clustering step. Interestingly, AML could pick correct SNP-traits associations due to Lasso penalty,
however, A+?1 /?2 failed to do so (see Fig. 5c,d for the comparison of performance). While Lasso
penalty did not show significant contribution for this task when we generated a priori clusters, it is
good to include it when the quality of a clustering is not guaranteed. Comparing the results of AML
and SML in Fig. 3, we could observe that adaptive weights improve the performance significantly.
Adaptive weights help not only reduce false positives but also increase true positives.
0.16
0.14
?
t
0.12
0.1
0.08
0.06
0.04
0.02
f
1
f
2
f
f
3
4
c
1
1
1
0.8
0.8
0.8
0.8
6
f
7
f8
f
9
f
0.6
0.4
l1/l?
0.2
0.2
0.2
Lasso
Single SNP
0
0
0
0
0.5
1
1 ? Specificity
0
0.5
1
1 ? Specificity
0.4
0
10
AML
SML
A+l1/l2
0.6
0.2
0
0.4
f
d
Sensitivity
0.4
Sensitivity
b
0.6
5
Figure 4: Learned feature weights of ?.
1
0.6
f
Features
a
Sensitivity
Sensitivity
Fig. 4 shows the learned feature weights of ? (? is almost identical to ? and not shown here). The results are
based on 100 simulations for each association strength
0.3, 0.5, 0.8 and 1, and half of error bar represents one
standard deviation from the mean. We could observe that
discrete features f7 ?f10 have highest weights while lowest weights are assigned to f1 ? f3 . These weights are
reasonable because f1 ?f3 are drawn from Gaussian with
large standard deviation (STD: 1) compared to that of features f4 ? f6 (STD: 0.5). Also, discrete features are the
most important since they discriminate true association
SNPs with a high probability 0.8.
0.5
1 ? Specificity
1
0
0.5
1
1 ? Specificity
Figure 5: ROC curves of various methods as association strength varies (a) 0.3, (b) 0.5 on clustered data, (c)
0.3, and (d) 0.5 on input dataset. (a,b) Results on clustered data, where correct groups of gene traits are found
using hierarchical clustering (cutoff = 0.8). (c,d) Results on input dataset without using clustering algorithm.
We compare the sensitivity and specificity of our model with other methods. In Fig. 5, we generated
ROC curves for association strength of 0.3 and 0.5. Fig. 5a,b show the results with a priori hierarchical clustering and Fig. 5c,d is with no such preprocessing steps. Using hierarchical clustering we
could correctly find three clusters of gene traits at cutoff 0.8. In Fig. 5, when association strength
is small (i.e., 0.3), AML and A+?1 /?2 significantly outperformed other methods. As association
strength increased, the performance of multi-task learning methods improved quickly while methods based on a single trait such as Lasso and Single SNP showed gradual increase of performance.
We computed test errors on 100 simulated dataset using 30 samples for test and 84 samples for
training. On average, AML achieved the best test error rate of 0.9427, and the order of other methods
in terms of test errors is: A + ?1 /?2 (0.9506), SML (1.0436), ?1 /?? (1.0578) and Lasso (1.1080).
5 Yeast eQTL dataset
We analyze the yeast eQTL dataset [16] that contains expression levels of 5,637 genes and 2,956
SNPs. The genotype data include genetic variants of 114 yeast strains that are progenies of the
standard laboratory strain (BY) and a wild strain (RM). We used 141 modules given by Lee et al.
[3] as groups of gene traits, and extracted unique 1,260 SNPs from 2,956 SNPs for our analysis. For
prior biological knowledge on SNPs used for adaptive multi-task Lasso, we downloaded 12 features
from Saccharomyces Genome Database (http://www.yeastgenome.org) including 11 discrete and 1
continuous feature (conservation score). For a discrete feature, we set its value as ftj = s(2) if the
feature is found on the j-th SNP, ftj = s(1) otherwise. For conservation score, we set ftj = s(score).
All the features are then standardized.
7
Fig. 6 represents θ learned from the yeast eQTL dataset (ρ is almost identical to θ). The features are ncRNA (f1), noncoding exon (f2), snRNA (f3), tRNA (f4), intron (f5), binding site (f6), 5′ UTR intron (f7), LTR retrotransposon (f8), ARS (f9), snoRNA (f10), transposable element gene (f11) and conservation score (f12). Five discrete features turn out to be important, including ncRNA, snRNA, binding site, 5′ UTR intron and snoRNA, as well as one continuous feature, i.e., conservation score. These results agree with biological insights. For example, ncRNA, snRNA and snoRNA are potentially important for gene regulation since they are functional RNA molecules having a variety of roles such as transcriptional regulation [18]. Also, the conservation score would be significant since mutation in a conserved region is more likely to result in phenotypic effects.
[Figure 6: bar chart of the learned weights θ_t over features f1–f12.]
Figure 6: Learned weights of θ on the yeast eQTL dataset.
Figure 7: Plot of 121 SNPs on chromosomes 1 and 2 vs. the number of genes affected by each SNP in the yeast eQTL analysis (blue bars). The five significant prior features on SNPs are overlaid on the plot. For the four discrete priors (ncRNA, snRNA, binding site, 5′ UTR intron) we set the value to 1 if annotated, 0 otherwise. Binding sites and regions with no associated traits are denoted by long green and short blue arrows.
Fig. 7 shows the number of associated genes for SNPs on chromosomes 1 and 2, superimposed on the 5 significant features. We see that association mapping results were affected by both priors and data. For example, the genomic region indicated by the blue arrow shows weak association with traits; there the conservation score is low and no other annotations exist. Also, we can see that three SNPs located on binding sites affect a larger number of gene traits (see green arrows). As an example of biological analysis, we investigate these three association SNPs. The three SNPs are located on telomeres (chr1:483, chr1:229090, chr2:9425 (chromosome:coordinate)), and these genomic locations are in cis to Abf1p (autonomously replicating sequence binding factor-1) binding sites. In biology, it is known that Abf1p acts as a global transcriptional regulator in yeast [19]. Thus, the genomic regions in telomeres would be good candidates for novel putative eQTL hotspots that regulate the expression levels of many genes. They were not reported as eQTL hotspots in Yvert et al. [20].
6 Conclusions
In this paper, we proposed a novel regularized regression model, referred to as adaptive multi-task Lasso, which takes into account multiple traits simultaneously while the weights of different covariates are learned adaptively from prior knowledge and data. Our simulation results support that our model outperforms other methods based on ℓ1 and ℓ1/ℓ2 penalties over multiple related genes, and that the adaptively learned regularization in particular significantly improves performance. In our experiments on the yeast eQTL dataset, we identified three putative eQTL hotspots with biological support, where SNPs are associated with a large number of genes.
Acknowledgments
This work was supported by NIH 1 R01 GM087694-01, NIH 1RC2HL101487-01 (ARRA), AFOSR FA9550010247, ONR N0001140910758, NSF CAREER DBI-0546594, NSF IIS-0713379 and an Alfred P. Sloan Fellowship awarded to E.X.
References
[1] R. Sladek, G. Rocheleau, J. Rung, C. Dina, L. Shen, D. Serre, P. Boutin, D. Vincent, A. Belisle, S. Hadjadj, et al. A genome-wide association study identifies novel risk loci for type 2 diabetes. Nature, 445(7130):881–885, 2007.
[2] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.
[3] S.I. Lee, A.M. Dudley, D. Drubin, P.A. Silver, N.J. Krogan, D. Pe'er, and D. Koller. Learning a prior on regulatory potential from eQTL data. PLoS Genetics, 5(1):e1000358, 2009.
[4] S. Kim and E. P. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genetics, 5(8):e1000587, 2009.
[5] G. Obozinski, B. Taskar, and M. Jordan. Multi-task feature selection. Technical Report, Department of Statistics, University of California, Berkeley, 2006.
[6] M. Szafranski, Y. Grandvalet, and P. Morizet-Mahoudeaux. Hierarchical penalization. Advances in Neural Information Processing Systems, 20:1457–1464, 2007.
[7] H. Zou. The adaptive Lasso and its oracle properties. Journal of the American Statistical Association, 101(476):1418–1429, 2006.
[8] S.I. Lee, V. Chatalbashev, D. Vickrey, and D. Koller. Learning a meta-level prior for feature relevance from multiple related tasks. In Proceedings of the 24th International Conference on Machine Learning, pages 489–496, 2007.
[9] T. Park and G. Casella. The Bayesian Lasso. Journal of the American Statistical Association, 103(482):681–686, 2008.
[10] B. M. Marlin, M. Schmidt, and K. P. Murphy. Group sparse priors for covariance estimation. In Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, pages 383–392, 2009.
[11] E. Gómez, M. A. Gómez-Villegas, and J. M. Marín. A multivariate generalization of the power exponential family of distributions. Communications in Statistics - Theory and Methods, 27(3):589–600, 1998.
[12] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. Advances in Neural Information Processing Systems, 19:801–808, 2007.
[13] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, pages 272–279, 2008.
[14] J. Friedman, T. Hastie, and R. Tibshirani. A note on the group Lasso and a sparse group Lasso. arXiv:1001.0736v1 [math.ST], 2010.
[15] T. T. Wu and K. Lange. Coordinate descent algorithms for Lasso penalized regression. Annals of Applied Statistics, 2(1):224–244, 2008.
[16] R. B. Brem and L. Kruglyak. The landscape of genetic complexity across 5,700 gene expression traits in yeast. Proceedings of the National Academy of Sciences of the United States of America, 102(5):1572–1577, 2005.
[17] S. Purcell, B. Neale, K. Todd-Brown, L. Thomas, M. A. R. Ferreira, D. Bender, J. Maller, P. Sklar, P. I. W. De Bakker, M. J. Daly, et al. PLINK: a tool set for whole-genome association and population-based linkage analyses. The American Journal of Human Genetics, 81(3):559–575, 2007.
[18] G. Storz. An expanding universe of noncoding RNAs. Science, 296(5571):1260–1263, 2002.
[19] T. Miyake, J. Reese, C. M. Loch, D. T. Auble, and R. Li. Genome-wide analysis of ARS (autonomously replicating sequence) binding factor 1 (Abf1p)-mediated transcriptional regulation in Saccharomyces cerevisiae. Journal of Biological Chemistry, 279(33):34865–34872, 2004.
[20] G. Yvert, R. B. Brem, J. Whittle, J. M. Akey, E. Foss, E. N. Smith, R. Mackelprang, L. Kruglyak, et al. Trans-acting regulatory variation in Saccharomyces cerevisiae and the role of transcription factors. Nature Genetics, 35(1):57–64, 2003.
| 4038 |@word multitask:1 snorna:3 version:1 briefly:1 norm:8 stronger:1 open:1 simulation:4 gradual:1 covariance:1 pick:1 contains:2 score:11 series:1 united:1 genetic:5 interestingly:1 outperforms:2 existing:1 current:1 comparing:2 surprising:1 mahoudeaux:1 treating:1 interpretable:1 update:3 plot:2 v:1 half:1 selected:2 intelligence:1 smith:1 short:1 detecting:4 provides:1 math:1 location:3 org:1 five:3 pathway:1 wild:1 introduce:1 behavior:1 multi:39 gjk:3 f11:1 automatically:1 bender:1 considering:3 becomes:1 estimating:3 underlying:1 moreover:1 xx:1 rung:1 lowest:1 argmin:2 interpreted:3 minimizes:2 bakker:1 developed:1 finding:6 marlin:1 quantitative:1 berkeley:1 act:1 exactly:1 ferreira:1 demonstrates:1 k2:11 rm:1 control:1 unit:1 enjoy:1 positive:2 treat:4 todd:1 despite:1 marin:1 trna:1 pgaussian:1 challenging:2 appl:1 mentioning:1 limited:2 unique:1 acknowledgment:1 block:1 differs:1 xr:5 procedure:9 empirical:1 significantly:3 projection:2 induce:2 specificity:5 cannot:2 onto:2 selection:3 risk:1 intercept:1 optimize:5 equivalent:1 map:8 deterministic:1 www:1 szafranski:1 mrna:1 convex:5 shen:1 miyake:1 simplicity:1 insight:1 continued:1 dbi:1 population:2 e1000587:1 variation:7 coordinate:5 laplace:3 play:1 us:2 origin:1 diabetes:1 overlapped:1 element:1 standardize:1 jk:23 located:4 std:2 database:1 role:3 module:3 taskar:1 solved:4 capture:1 region:8 autonomously:2 plo:2 highest:1 disease:2 complexity:1 covariates:5 solving:1 eric:1 f2:1 exon:4 easily:3 joint:3 various:2 america:1 sklar:1 effective:1 detected:1 artificial:1 hyper:1 shalev:1 widely:3 solve:6 larger:1 otherwise:14 statistic:2 jointly:5 reproduced:1 obviously:1 advantage:5 sequence:3 propose:2 achieve:6 academy:1 f10:4 validate:1 ky:5 convergence:4 cluster:7 silver:1 help:1 stat:1 fixing:4 ij:2 school:1 minor:1 eq:9 solves:3 c:1 indicate:1 met:1 guided:2 aml:14 correct:2 f4:3 annotated:1 allele:1 human:1 polymorphism:1 suffices:1 f1:5 clustered:2 generalization:1 biological:7 singularity:1 mathematically:1 extension:3 hold:1 ground:1 normal:1 exp:4 mapping:7 estimation:5 f7:3 outperformed:1 daly:1 eqtl:15 tool:1 minimization:3 genomic:9 gaussian:1 cerevisiae:2 rna:2 modified:1 ck:2 hotspot:3 avoid:1 hj:5 shrinkage:2 focus:1 saccharomyces:3 methodological:1 check:5 superimposed:1 baseline:1 detect:2 sense:2 posteriori:1 kim:2 synonymous:1 chatalbashev:1 transferring:1 fa9550010247:1 eqtls:11 koller:2 interested:1 issue:1 among:7 flexible:1 denoted:3 priori:3 constrained:1 f3:4 having:3 ng:1 biology:2 represents:7 identical:3 park:1 future:1 simplex:2 report:1 randomly:2 simultaneously:4 national:1 individual:2 murphy:1 friedman:1 detection:2 investigate:2 mixture:1 analyzed:1 genotype:2 integral:1 dina:1 nucleotide:1 causal:3 minimal:2 increased:1 blended:1 extensible:1 ar:2 ordinary:1 deviation:2 usefulness:1 conducted:1 reported:1 varies:1 combined:1 adaptively:4 st:1 fundamental:1 sensitivity:5 international:2 lee:7 fused:2 quickly:1 again:3 f5:1 henceforth:1 american:3 li:1 account:2 f6:3 potential:1 kruglyak:2 de:1 chemistry:1 whittle:1 coding:2 subsumes:1 includes:3 coefficient:7 summarized:1 reese:1 junzhu:1 satisfy:1 sloan:1 analyze:1 red:1 xing:2 start:1 annotation:5 mutation:1 contribution:3 minimize:2 square:3 f12:1 efficiently:4 likewise:1 yield:2 identify:1 landscape:1 weak:1 bayesian:14 vincent:1 worth:1 explain:1 fo:1 casella:1 definition:3 competitor:1 involved:1 resultant:2 associated:11 couple:2 holdout:2 dataset:10 recall:1 knowledge:19 color:2 storz:1 cj:3 purcell:1 improved:2 done:3 until:4 
marker:1 lack:1 mode:1 quality:1 indicated:1 yeast:13 effect:12 k22:8 normalized:1 true:10 serre:1 brown:1 regularization:8 assigned:1 alternating:3 iteratively:3 nonzero:1 laboratory:1 vickrey:1 encourages:1 criterion:1 demonstrate:1 duchi:1 l1:4 snp:62 wise:3 novel:5 nih:2 common:1 sigmoid:1 brem:2 functional:1 qp:1 association:35 interpretation:5 trait:44 interpret:1 mellon:1 significant:5 similarly:1 replicating:2 stable:1 gj:4 add:1 posterior:1 multivariate:3 showed:1 optimizing:1 optimizes:1 awarded:1 prime:1 meta:3 inequality:1 success:1 continue:1 onr:1 devise:1 conserved:1 additional:1 determine:1 multiple:17 desirable:7 smooth:1 technical:1 cross:3 long:1 equally:2 variant:1 regression:15 xjt:13 cmu:1 chandra:1 arxiv:1 represent:1 normalization:4 achieved:2 penalize:2 fellowship:1 separately:3 else:2 unlike:4 subject:1 elegant:5 effectiveness:1 jordan:1 iterate:1 affect:3 xj:1 variety:1 hastie:1 lasso:70 perfectly:1 ckj:1 reduce:1 lange:1 whether:3 expression:6 six:1 linkage:1 penalty:9 cause:1 remark:2 yik:1 useful:6 dna:1 http:1 xij:2 exist:1 nsf:2 sign:2 estimated:2 correctly:1 tibshirani:2 blue:3 alfred:1 carnegie:1 discrete:8 shall:1 affected:2 group:27 four:3 drawn:3 clarity:1 penalizing:1 phenotypic:4 cutoff:3 f8:2 v1:1 graph:2 subgradient:3 run:3 uncertainty:1 almost:2 reasonable:1 family:1 wu:1 putative:2 scaling:13 bound:4 guaranteed:1 distinguish:1 gomez:1 oracle:1 strength:7 constraint:3 regulator:1 min:4 department:1 according:3 ball:1 battle:1 beneficial:1 across:5 slightly:1 making:1 intuitively:1 equation:1 visualization:1 remains:2 agree:1 turn:1 mechanism:1 needed:3 locus:2 singer:1 end:3 observe:2 hierarchical:8 spectral:1 appropriate:4 regulate:1 dudley:1 schmidt:1 rp:4 thomas:1 standardized:3 clustering:11 include:2 graphical:1 sml:6 build:1 especially:1 society:1 r01:1 objective:3 question:1 transcriptional:3 gradient:5 simulated:4 considers:2 assuming:2 loch:1 relationship:1 regulation:5 difficult:1 potentially:1 stated:4 design:1 perform:3 upper:4 datasets:3 maller:1 descent:8 extended:1 communication:1 strain:3 rn:2 optimized:1 california:1 learned:6 trans:1 address:2 bar:2 qtls:1 usually:2 pattern:1 ftj:8 sparsity:13 challenge:1 including:6 green:2 royal:1 power:1 regularized:6 raina:1 zhu:1 improve:2 epxing:1 imply:1 sults:1 identifies:1 axis:4 jun:1 coupled:1 mediated:1 prior:34 review:1 l2:2 checking:2 relative:1 afosr:1 fully:2 limitation:1 validation:4 penalization:2 downloaded:1 transposable:1 degree:1 ltr:1 principle:1 grandvalet:1 share:1 genetics:4 penalized:2 enjoys:1 understand:2 majorize:1 wide:2 absolute:1 sparse:13 curve:2 dimension:1 genome:6 ignores:1 adaptive:32 projected:2 preprocessing:1 belisle:1 relatedness:3 transcription:6 gene:24 confirm:2 global:1 conservation:10 krogan:1 shwartz:1 search:1 regulatory:5 continuous:5 nature:2 learn:3 chromosome:5 expanding:1 transfer:2 molecule:1 init:2 career:1 kgj:4 complex:3 zou:1 domain:1 did:2 significance:1 pk:1 universe:1 arrow:5 motivation:1 noise:1 whole:1 morizet:1 site:11 fig:13 f9:1 referred:1 roc:2 darker:1 sub:6 exponential:1 candidate:1 r6:5 pe:1 neale:1 rk:6 covariate:1 arra:1 showing:1 intron:5 er:1 utr:8 essential:1 incorporating:1 grouping:3 consist:1 exists:1 effectively:2 false:1 ci:1 likely:3 explore:1 failed:1 omez:1 binding:13 determines:1 satisfies:1 extracted:1 obozinski:1 conditional:1 viewed:2 identity:1 ann:1 shared:2 hard:1 specifically:3 determined:3 acting:1 hkj:10 seunghak:2 called:1 discriminate:1 experimental:1 formally:1 select:1 support:3 noncoding:2 
relevance:1 incorporate:6 correlated:2 |
3,356 | 4,039 | Random Projection Trees Revisited
Aman Dhesi∗
Department of Computer Science
Princeton University
Princeton, New Jersey, USA.
[email protected]
Purushottam Kar
Department of Computer Science and Engineering
Indian Institute of Technology
Kanpur, Uttar Pradesh, INDIA.
[email protected]
Abstract
The Random Projection Tree (RPTREE) structures proposed in [1] are space partitioning data structures that automatically adapt to various notions of intrinsic dimensionality of data. We prove new results for both the RPTREE-MAX and the RPTREE-MEAN data structures. Our result for RPTREE-MAX gives a near-optimal bound on the number of levels required by this data structure to reduce the size of its cells by a factor s ≥ 2. We also prove a packing lemma for this data structure. Our final result shows that low-dimensional manifolds have bounded Local Covariance Dimension. As a consequence we show that RPTREE-MEAN adapts to manifold dimension as well.
1 Introduction
The Curse of Dimensionality [2] has inspired research in several directions in Computer Science and
has led to the development of several novel techniques such as dimensionality reduction, sketching, etc. Almost all these techniques try to map data to lower dimensional spaces while approximately preserving useful information. However, most of these techniques do not assume anything about the data other than that they are embedded in some high dimensional Euclidean space endowed with
some distance/similarity function.
As it turns out, in many situations, the data is not simply scattered in the Euclidean space in a random
fashion. Often, generative processes impose (non-linear) dependencies on the data that restrict the
degrees of freedom available and result in the data having low intrinsic dimensionality. There exist
several formalizations of this concept of intrinsic dimensionality. For example, [1] provides an
excellent example of automated motion capture in which a large number of points on the body of
an actor are sampled through markers and their coordinates transferred to an animated avatar. Now,
although a large sample of points is required to ensure a faithful recovery of all the motions of the
body (which causes each captured frame to lie in a very high dimensional space), these points are
nevertheless constrained by the degrees of freedom offered by the human body which are very few.
Algorithms that try to exploit such non-linear structure in data have been studied extensively, resulting in a large number of Manifold Learning algorithms, for example [3, 4, 5]. These techniques typically assume knowledge about the manifold itself or the data distribution. For example, [4] and [5] require knowledge about the intrinsic dimensionality of the manifold whereas [3] requires a sampling of points that is "sufficiently" dense with respect to some manifold parameters.
Recently in [1], Dasgupta and Freund proposed space partitioning algorithms that adapt to the intrinsic dimensionality of data and do not assume explicit knowledge of this parameter. Their data
structures are akin to the k-d tree structure and offer guaranteed reduction in the size of the cells
after a bounded number of levels. Such a size reduction is of immense use in vector quantization [6]
and regression [7]. Two such tree structures are presented in [1], each adapting to a different notion
∗ Work done as an undergraduate student at IIT Kanpur
of intrinsic dimensionality. Both variants have already found numerous applications in regression
[7], spectral clustering [8], face recognition [9] and image super-resolution [10].
1.1 Contributions
The RPTREE structures are new entrants in a large family of space partitioning data structures such
as k-d trees [11], BBD trees [12], BAR trees [13] and several others (see [14] for an overview). The
typical guarantees given by these data structures are of the following types :
1. Space Partitioning Guarantee: There exists a bound L(s), s ≥ 2, on the number of levels
one has to go down before all descendants of a node of size Δ are of size Δ/s or less. The
size of a cell is variously defined as the length of the longest side of the cell (for box-shaped
cells), radius of the cell, etc.
2. Bounded Aspect Ratio: There exists a certain "roundedness" to the cells of the tree - this
notion is variously defined as the ratio of the length of the longest to the shortest side of the
cell (for box-shaped cells), the ratio of the radius of the smallest circumscribing ball of the
cell to that of the largest ball that can be inscribed in the cell, etc.
3. Packing Guarantee: Given a fixed ball B of radius R and a size parameter r, there exists a
bound on the number of disjoint cells of the tree that are of size greater than r and intersect
B. Such bounds are usually arrived at by first proving a bound on the aspect ratio for cells
of the tree.
These guarantees play a crucial role in algorithms for fast approximate nearest neighbor searches
[12] and clustering [15]. We present new results for the RPTREE-MAX structure for all these types of guarantees. We first present a bound on the number of levels required for size reduction by any given factor in an RPTREE-MAX. Our result improves the bound obtainable from results presented in [1]. Next, we prove an "effective" aspect ratio bound for RPTREE-MAX. Given the randomized nature of the data structure it is difficult to directly bound the aspect ratios of all the cells. Instead we prove a weaker result that can nevertheless be exploited to give a packing lemma of the kind mentioned above. More specifically, given a ball B, we prove an aspect ratio bound for the smallest cell in the RPTREE-MAX that completely contains B.
Our final result concerns the RPTREE-MEAN data structure. The authors in [1] prove that this
structure adapts to the Local Covariance Dimension of data (see Section 5 for a definition). By
showing that low-dimensional manifolds have bounded local covariance dimension, we show its
adaptability to the manifold dimension as well. Our result demonstrates the robustness of the notion
of manifold dimension - a notion that is able to connect to a geometric notion of dimensionality such
as the doubling dimension (proved in [1]) as well as a statistical notion such as Local Covariance
Dimension (this paper).
Due to lack of space we relegate some proofs to the Supplementary Material document and present
proofs of only the main theorems here. All results cited from other papers are presented as Facts in
this paper. We will denote by B(x, r), a closed ball of radius r centered at x. We will denote by d,
the intrinsic dimensionality of data and by D, the ambient dimensionality (typically d ≪ D).
2 The RPTREE-MAX structure
The RPTREE-MAX structure adapts to the doubling dimension of data (see definition below). Since low-dimensional manifolds have low doubling dimension (see [1], Theorem 22), the structure adapts to manifold dimension as well.
Definition 1. The doubling dimension of a set S ⊂ R^D is the smallest integer d such that for any ball B(x, r) ⊂ R^D, the set B(x, r) ∩ S can be covered by 2^d balls of radius r/2.
The RPTREE-MAX algorithm is given data embedded in R^D having doubling dimension d. The algorithm splits data lying in a cell C of radius Δ by first choosing a random direction v ∈ R^D, projecting all the data inside C onto that direction, choosing a random value δ in the range [−1, 1] · 6Δ/√D, and then assigning a data point x to the left child if x · v < median({z · v : z ∈ C}) + δ and to the right child otherwise. Since it is difficult to get the exact value of the radius of a data set, the algorithm settles for a constant factor approximation to the value by choosing an arbitrary data point x ∈ C and using the estimate Δ̃ = max({‖x − y‖ : y ∈ C}).
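A minimal sketch of this split rule (our own illustrative code, not the implementation from [1]; the recursion that builds the full tree and any stopping rule are omitted):

```python
import numpy as np

def rptree_max_split(points):
    """One RPTree-Max split: project the cell contents onto a random unit
    direction and cut at the median plus a jitter delta drawn uniformly
    from [-1, 1] * 6 * Delta / sqrt(D), with Delta the estimated radius."""
    n, D = points.shape
    x = points[0]                                            # arbitrary point
    delta_est = np.max(np.linalg.norm(points - x, axis=1))   # radius estimate
    v = np.random.randn(D)
    v /= np.linalg.norm(v)
    proj = points @ v
    jitter = np.random.uniform(-1.0, 1.0) * 6.0 * delta_est / np.sqrt(D)
    go_left = proj < np.median(proj) + jitter
    return points[go_left], points[~go_left]

left, right = rptree_max_split(np.random.randn(100, 32))
print(len(left), len(right))
```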
The following result is proven in [1]:
Fact 2 (Theorem 3 in [1]). There is a constant c1 with the following property. Suppose an RPTREE-MAX is built using a data set S ⊂ R^D. Pick any cell C in the RPTREE-MAX; suppose that S ∩ C has doubling dimension ≤ d. Then with probability at least 1/2 (over the randomization in constructing the subtree rooted at C), every descendant C′ more than c1 d log d levels below C has radius(C′) ≤ radius(C)/2.
In Sections 2, 3 and 4, we shall always assume that the data has doubling dimension d and shall
not explicitly state this fact again and again. Let us consider extensions of this result to bound the
number of levels it takes for the size of all descendants to go down by a factor s > 2. Let us analyze
the case of s = 4. Starting off in a cell C of radius Δ, we are assured of a reduction in size by a factor of 2 after c1 d log d levels. Hence all 2^{c1 d log d} nodes at this level have radius Δ/2 or less. Now we expect that after c1 d log d more levels, the size should go down further by a factor of 2, thereby giving us our desired result. However, given the large number of nodes at this level and the fact that the success probability in Fact 2 is just a constant bounded away from 1, it is not possible to argue that after c1 d log d more levels the descendants of all these 2^{c1 d log d} nodes will be of radius Δ/4 or less. It turns out that this can be remedied by utilizing the following extension of
the basic size reduction result in [1]. We omit the proof of this extension.
Fact 3 (Extension of Theorem 3 in [1]). For any δ > 0, with probability at least 1 − δ, every descendant C′ which is more than c1 d log d + log(1/δ) levels below C has radius(C′) ≤ radius(C)/2.
This gives us a way to boost the confidence and do the following: go down L = c1 d log d + 2 levels from C to get the radius of all the 2^{c1 d log d + 2} descendants down to Δ/2 with confidence 1 − 1/4. Afterward, go an additional L′ = c1 d log d + L + 2 levels from each of these descendants, so that for any cell at level L, the probability of it having a descendant of radius > Δ/4 after L′ levels is less than 1/(4 · 2^L). Hence conclude with confidence at least 1 − 1/4 − (1/(4 · 2^L)) · 2^L ≥ 1/2 that all descendants of C after 2L + c1 d log d + 2 levels have radius ≤ Δ/4. This gives a way to prove the following result:
Theorem 4. There is a constant c2 with the following property. For any s ≥ 2, with probability at least 1 − 1/4, every descendant C′ which is more than c2 · s · d log d levels below C has radius(C′) ≤ radius(C)/s.
Proof. Refer to Supplementary Material
Notice that the dependence on the factor s is linear in the above result whereas one expects it to
be logarithmic. Indeed, typical space partitioning algorithms such as k-d trees do give such guarantees. The first result we prove in the next section is a bound on the number of levels that is
poly-logarithmic in the size reduction factor s.
3 A generalized size reduction lemma for RPTREE-MAX
In this section we prove the following theorem:
Theorem 5 (Main). There is a constant c3 with the following property. Suppose an RPTREE-MAX is built using data set S ⊂ R^D. Pick any cell C in the RPTREE-MAX; suppose that S ∩ C has doubling dimension ≤ d. Then for any s ≥ 2, with probability at least 1 − 1/4 (over the randomization in constructing the subtree rooted at C), for every descendant C′ which is more than c3 · log s · d log(sd) levels below C, we have radius(C′) ≤ radius(C)/s.
Compared to this, data structures such as [12] give deterministic guarantees for such a reduction in
D log s levels which can be shown to be optimal (see [1] for an example). Thus our result is optimal
but for a logarithmic factor. Moving on with the proof, let us consider a cell C of radius Δ in the RPTREE-MAX that contains a dataset S having doubling dimension ≤ d. Then for any ε > 0, a repeated application of Definition 1 shows that S can be covered using at most 2^{d log(1/ε)} balls of radius εΔ. We will cover S ∩ C using balls of radius Δ/(960s√d), so that O((sd)^d) balls suffice. Now consider all pairs of these balls, the distance between whose centers is ≥ Δ/s − Δ/(960s√d).
[Figure 1: illustration of good, bad, and neutral splits with respect to the pair of balls B1 and B2.]
Figure 1: Balls B1 and B2 are of radius Δ/(960s√d) and their centers are Δ/s − Δ/(960s√d) apart.
If random splits separate data from all such pairs of balls, i.e., for no pair does any cell contain data from both balls of the pair, then each resulting cell would only contain data from pairs whose centers are closer than Δ/s − Δ/(960s√d). Thus the radius of each such cell would be at most Δ/s.
We fix such a pair of balls, calling them B1 and B2. A split in the RPTREE-MAX is said to be good with respect to this pair if it sends points inside B1 to one child of the cell in the RPTREE-MAX and points inside B2 to the other, bad if it sends points from both balls to both children, and neutral otherwise (see Figure 1). We have the following properties of a random split:
Lemma 6. Let B = B(x, ρ) be a ball contained inside an RPTREE-MAX cell of radius Δ that contains a dataset S of doubling dimension d. Let us say that a random split splits this ball if the split separates the data inside B into two parts. Then a random split of the cell splits B with probability at most 3ρ√d/Δ.
Proof. Refer to Supplementary Material
Lemma 7. Let B1 and B2 be a pair of balls as described above contained in the cell C that contains data of doubling dimension d. Then a random split of the cell is a good split with respect to this pair with probability at least 1/(56s).
Proof. Refer to Supplementary Material. Proof similar to that of Lemma 9 of [1].
Lemma 8. Let B1 and B2 be a pair of balls as described above contained in the cell C that contains data of doubling dimension d. Then a random split of the cell is a bad split with respect to this pair with probability at most 1/(320s).
Proof. The proof of a similar result in [1] uses a conditional probability argument. However the
technique does not work here since we require a bound that is inversely proportional to s. We instead
make a simple observation that the probability of a bad split is upper bounded by the probability that
one of the balls is split, since for any two events A and B, P[A ∩ B] ≤ min{P[A], P[B]}. The
result then follows from an application of Lemma 6.
We are now in a position to prove Theorem 5. What we will prove is that, starting with a pair of balls in a cell C, the probability that some cell k levels below has data from both the balls is exponentially small in k. Thus, after going down enough levels we can take a union bound over all pairs of balls whose centers are well separated (which are O((sd)^{2d}) in number) and conclude the proof.
Proof. (of Theorem 5) Consider a cell C of radius Δ in the RPTREE-MAX and fix a pair of balls contained inside C with radii Δ/(960s√d) and centers separated by at least Δ/s − Δ/(960s√d). Let p_j^i denote the probability that a cell i levels below C has a descendant j levels below itself that contains data points from both the balls. Then the following holds:
Lemma 9. p_k^0 ≤ (1 − 1/(68s))^l · p_{k−l}^l.
Proof. Refer to Supplementary Material. Proof similar to that of Lemma 11 of [1].
Note that this gives us p_k^0 ≤ (1 − 1/(68s))^k as a corollary. However, using this result would require us to go down k = Ω(sd log(sd)) levels before p_k^0 = O((sd)^{−2d}), which results in a bound that is worse (by a factor logarithmic in s) than the one given by Theorem 4. This can be attributed to the small probability of a good split for a tiny pair of balls in large cells. However, here we are completely neglecting the fact that as we go down the levels, the radii of cells go down as well and good splits become more frequent.
Indeed, setting s = 2 in Lemmas 7 and 8 tells us that if the pair of balls were to be contained in a cell of radius Δ/(s/2), then the good and bad split probabilities are 1/112 and 1/640 respectively. This paves the way for an inductive argument: assume that with probability > 1 − 1/4, in L(s) levels, the size of all descendants goes down by a factor s. Denote by p_g^l the probability of a good split in a cell at depth l and by p_b^l the corresponding probability of a bad split. Set l∗ = L(s/2) and let E be the event that the radius of every cell at level l∗ is less than Δ/(s/2). Let C′ represent a cell at depth l∗. Then,
p_g^{l∗} ≥ P[good split in C′ | E] · P[E] ≥ (1/112) · (1 − 1/4) ≥ 1/150,
p_b^{l∗} = P[bad split in C′ | E] · P[E] + P[bad split in C′ | ¬E] · P[¬E] ≤ (1/640) · 1 + (1/640) · (1/4) ≤ 1/512.
Notice that now, for any m > 0, we have p_m^{l∗} ≤ (1 − 1/2^13)^m. Thus, for some constant c4, setting k = l∗ + c4 d log(sd) and applying Lemma 9 gives us
p_k^0 ≤ (1 − 1/(68s))^{l∗} · (1 − 1/2^13)^{c4 d log(sd)} ≤ 1/(4(sd)^{2d}).
Thus we have
L(s) ≤ L(s/2) + c4 d log(sd),
which gives us the desired result on solving the recurrence, i.e., L(s) = O(d log s log(sd)).
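For the record, the recurrence unrolls routinely (our own presentation of the calculation, using L(2) = O(d log d) from Fact 2):

```latex
L(s) \le L(s/2) + c_4\, d \log(sd)
     \le L(2) + \sum_{i=0}^{\lceil \log s \rceil - 1} c_4\, d \log\!\big( (s/2^i)\, d \big)
     \le O(d \log d) + c_4\, d\, \lceil \log s \rceil \log(sd)
     = O(d \log s \log(sd)).
```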
4 A packing lemma for RPTREE-MAX
In this section we prove a probabilistic packing lemma for RPTREE-MAX. A formal statement of the result follows:
Theorem 10 (Main). Given any fixed ball B(x, R) ⊂ R^D, with probability greater than 1/2 (where the randomization is over the construction of the RPTREE-MAX), the number of disjoint RPTREE-MAX cells of radius greater than r that intersect B is at most (R/r)^{O(d log d log(dR/r))}.
Data structures such as BBD-trees give a bound of the form O((R/r)^D), which behaves like (R/r)^{O(1)} for fixed D. In comparison, our result behaves like (R/r)^{O(log(R/r))} for fixed d. We will prove the result in two steps: first of all we will show that, with high probability, the ball B will be completely inscribed in an RPTREE-MAX cell C of radius no more than O(R d√d log d). Thus the number of disjoint cells of radius at least r that intersect this ball is bounded by the number of descendants of C with this radius. To bound this number we then invoke Theorem 5 and conclude the proof.
4.1 An effective aspect ratio bound for RPTREE-MAX cells
In this section we prove an upper bound on the radius of the smallest RPTREE-MAX cell that completely contains a given ball B of radius R. Note that this effectively bounds the aspect ratio of this cell. Consider any cell C of radius Δ that contains B. We proceed with the proof by first
[Figure 2: illustration of useful and useless splits of the cell C with respect to B and a surrounding ball Bi.]
Figure 2: Balls Bi are of radius Δ/(512√d) and their centers are Δ/2 away from the center of B.
showing that the probability that B will be split before it lands up in a cell of radius Δ/2 is at most a quantity inversely proportional to Δ. Note that we are not interested in all descendants of C - only the ones that contain B. That is why we argue differently here. We consider balls of radius Δ/(512√d) surrounding B at a distance of Δ/2 (see Figure 2). These balls are made to cover the annulus centered at B of mean radius Δ/2 and thickness Δ/(512√d); clearly d^{O(d)} balls suffice. Without loss of generality assume that the centers of all these balls lie in C.
Notice that if B gets separated from all these balls without getting split in the process then it will lie in a cell of radius < Δ/2. Fix a Bi and call a random split of the RPTREE-MAX useful if it separates B from Bi and useless if it splits B. Using a proof technique similar to that used in Lemma 7 we can show that the probability of a useful split is at least 1/192, whereas Lemma 6 tells us that the probability of a useless split is at most 3R√d/Δ.
Lemma 11. There exists a constant c5 such that the probability of a ball of radius R in a cell of radius Δ getting split before it lands up in a cell of radius Δ/2 is at most c5 · (R d√d log d)/Δ.
Proof. Refer to Supplementary Material
We now state our result on the "effective" bound on aspect ratios of RPTREE-MAX cells.
Theorem 12. There exists a constant c6 such that with probability > 1 − 1/4, a given (fixed) ball B of radius R will be completely inscribed in an RPTREE-MAX cell C of radius no more than c6 · R d√d log d.
Proof. Refer to Supplementary Material
Proof. (of Theorem 10) Given a ball B of radius R, Theorem 12 shows that with probability at least 3/4, B will lie in a cell C of radius at most R′ = O(R d√d log d). Hence all cells of radius at least r that intersect this ball must be either descendants or ancestors of C. Since we want an upper bound on the largest number of such disjoint cells, it suffices to count the number of descendants of C of radius no less than r. We know from Theorem 5 that with probability at least 3/4, in log(R′/r) · d log(dR′/r) levels the radius of all cells must go below r. The result follows by observing that the RPTREE-MAX is a binary tree and hence the number of children can be at most 2^{log(R′/r) · d log(dR′/r)}. The success probability is at least (3/4)² > 1/2.
[Figure 3: a manifold M with its tangent plane Tp(M) at a point p.]
Figure 3: Locally, almost all the energy of the data is concentrated in the tangent plane.
5 Local covariance dimension of a smooth manifold
The second variant of RPTREE, namely RPTREE-MEAN, adapts to the local covariance dimension (see definition below) of data. We do not go into the details of the guarantees presented in [1] due to lack of space. Informally, the guarantee is of the following kind: given data that has small local covariance dimension, on expectation, a data point in a cell of radius r in the RPTREE-MEAN will be contained in a cell of radius c7 · r in the next level for some constant c7 < 1. The randomization is over the construction of RPTREE-MEAN as well as the choice of the data point. This gives per-level improvement albeit in expectation, whereas RPTREE-MAX gives improvement in the worst case but after a certain number of levels.
We will prove that a d-dimensional Riemannian submanifold M of R^D has bounded local covariance dimension, thus proving that RPTREE-MEAN adapts to manifold dimension as well.
Definition 13. A set S ⊂ R^D has local covariance dimension (d, ε, r) if there exists an isometry M of R^D under which the set S, when restricted to any ball of radius r, has a covariance matrix for which some d diagonal elements contribute a (1 − ε) fraction of its trace.
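A small sketch (our own illustration) of checking this property empirically for a point cloud; for simplicity it uses the top-d eigenvalue form of [1] rather than searching over isometries, which is an assumption of the sketch:

```python
import numpy as np

def local_covariance_fraction(S, center, r, d):
    """Fraction of the covariance trace of the points of S inside B(center, r)
    that is captured by the top d principal directions."""
    local = S[np.linalg.norm(S - center, axis=1) <= r]
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(local, rowvar=False)))[::-1]
    return eigvals[:d].sum() / eigvals.sum()

# toy usage: a noisy 1-dimensional arc embedded in R^3
t = np.linspace(0.0, 0.3, 500)
S = np.c_[np.cos(t), np.sin(t), 1e-3 * np.random.default_rng(0).normal(size=500)]
print(local_covariance_fraction(S, S[0], r=0.1, d=1))  # close to 1
```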
This is a more general definition than the one presented in [1], which expects the top d eigenvalues of the covariance matrix to account for a (1 − ε) fraction of its trace. However, all that [1] requires for the guarantees of RPTREE-MEAN to hold is that there exist d orthonormal directions such that a (1 − ε) fraction of the energy of the dataset, i.e., Σ_{x∈S} ‖x − mean(S)‖², is contained in those d dimensions. This is trivially true when M is a d-dimensional affine set. However, we also expect that for small neighborhoods on smooth manifolds, most of the energy would be concentrated in the tangent plane at a point in that neighborhood (see Figure 3). Indeed, we can show the following:
Theorem 14 (Main). Given a data set S ⊂ M where M is a d-dimensional Riemannian manifold with condition number τ, then for any ε ≤ 1/4, S has local covariance dimension (d, ε, √(ετ)/3).
For manifolds, the local curvature decides how small a neighborhood one should take in order to expect a sense of "flatness" in the non-linear surface. This is quantified using the Condition Number τ of M (introduced in [16]) which restricts the amount by which the manifold can curve locally. The condition number is related to more prevalent notions of local curvature such as the second fundamental form [17] in that the inverse of the condition number upper bounds the norm of the second fundamental form [16]. Informally, if we restrict ourselves to regions of the manifold of radius τ or less, then we get the requisite flatness properties. This is formalized in [16] as follows. For any hyperplane T ⊂ R^D and a vector v ∈ R^D, let v_∥(T) denote the projection of v onto T.
Fact 15 (Implicit in Lemma 5.3 of [16]). Suppose M is a Riemannian manifold with condition number τ. For any p ∈ M and r ≤ √(ετ), ε ≤ 1/4, let M′ = B(p, r) ∩ M. Let T = Tp(M) be the tangent space at p. Then for any x, y ∈ M′, ‖x_∥(T) − y_∥(T)‖² ≥ (1 − ε)‖x − y‖².
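As a quick numerical sanity check of Fact 15 (our own illustration, not part of the paper), take M to be the unit circle in R^2, so that τ = 1 and the tangent space at p = (1, 0) is the y-axis:

```python
import numpy as np

tau, eps = 1.0, 0.25
r = np.sqrt(eps * tau)                       # neighborhood radius, here 0.5
angles = np.linspace(-r, r, 50)              # arc around p = (1, 0)
pts = np.c_[np.cos(angles), np.sin(angles)]  # points of B(p, r) on the circle
proj = pts[:, 1]                             # projection onto T_p(M) = y-axis

worst = 1.0
for i in range(len(pts)):
    for j in range(i + 1, len(pts)):
        num = (proj[i] - proj[j]) ** 2
        den = np.sum((pts[i] - pts[j]) ** 2)
        worst = min(worst, num / den)
print(worst >= 1 - eps)  # True: projected lengths retain a (1 - eps) fraction
```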
This already seems to give us what we want - a large fraction of the length between any two points on the manifold lies in the tangent plane - i.e., in d dimensions. However, in our case we have to show that, for some d-dimensional plane P, Σ_{x∈S} ‖(x − μ)_∥(P)‖² ≥ (1 − ε) Σ_{x∈S} ‖x − μ‖², where μ = mean(S). The problem is that we cannot apply Fact 15 since there is no surety that the mean will lie on the manifold itself. However, it turns out that certain points on the manifold can act as "proxies" for the mean and provide a workaround to the problem.
Proof. (of Theorem 14) Refer to Supplementary Material
6 Conclusion
In this paper we considered the two random projection trees proposed in [1]. For the RPTREE-MAX data structure, we provided an improved bound (Theorem 5) on the number of levels required to decrease the size of the tree cells by any factor s ≥ 2. However, the bound we proved is poly-logarithmic in s. It would be nice if this can be brought down to logarithmic, since it would directly improve the packing lemma (Theorem 10) as well. More specifically, the packing bound would become (R/r)^{O(1)} instead of (R/r)^{O(log(R/r))} for fixed d.
As far as dependence on d is concerned, there is room for improvement in the packing lemma. We have shown that the smallest cell in the RPTREE-MAX that completely contains a fixed ball B of radius R has an aspect ratio no more than O(d√d log d), since it has a ball of radius R inscribed in it and can be circumscribed by a ball of radius no more than O(R d√d log d). Any improvement in the aspect ratio of the smallest cell that contains a given ball will also directly improve the packing lemma.
Moving on to our results for the RPTREE-MEAN, we demonstrated that it adapts to manifold dimension as well. However, the constants involved in our guarantee are pessimistic. For instance, the radius parameter in the local covariance dimension is given as √(ετ)/3; this can be improved to √(ετ)/2 if one can show that there will always exist a point q ∈ B(x0, r) ∩ M at which the function g : x ∈ M ↦ ‖x − μ‖ attains a local extremum.
We conclude with a word on the applications of our results. As we already mentioned, packing
lemmas and size reduction guarantees for arbitrary factors are typically used in applications for
nearest neighbor searching and clustering. However, these applications (viz [12], [15]) also require
that the tree have bounded depth. The RPTREE-MAX is a pure space partitioning data structure that can be coerced by an adversarial placement of points into being a primarily left-deep or right-deep tree having depth Ω(n), where n is the number of data points.
Existing data structures such as BBD trees remedy this by alternating space partitioning splits with data partitioning splits. Thus every alternate split is forced to send at most a constant fraction of the points into any of the children, thus ensuring a depth that is logarithmic in the number of data points. A similar technique is used in [7] to bound the depth of the version of RPTREE-MAX used in that paper. However, it remains to be seen if the same trick can be used to bound the depth of RPTREE-MAX while maintaining the packing guarantees, because although such "space partitioning" splits do not seem to hinder Theorem 5, they do hinder Theorem 10 (more specifically, they hinder Lemma 11).
We leave open the question of a possible augmentation of the RPTREE-MAX structure, or a better analysis, that can simultaneously give the following guarantees:
1. Bounded Depth: depth of the tree should be o(n), preferably (log n)^{O(1)}
2. Packing Guarantee: of the form (R/r)^{(d log(R/r))^{O(1)}}
3. Space Partitioning Guarantee: assured size reduction by factor s in (d log s)^{O(1)} levels
Acknowledgments
The authors thank James Lee for pointing out an incorrect usage of the term Assouad dimension in
a previous version of the paper. Purushottam Kar thanks Chandan Saha for several fruitful discussions and for his help with the proofs of Theorems 5 and 10. Purushottam is supported by the
Research I Foundation of the Department of Computer Science and Engineering, IIT Kanpur.
References
[1] Sanjoy Dasgupta and Yoav Freund. Random Projection Trees and Low Dimensional Manifolds. In 40th Annual ACM Symposium on Theory of Computing, pages 537–546, 2008.
[2] Piotr Indyk and Rajeev Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. In 30th Annual ACM Symposium on Theory of Computing, pages 604–613, 1998.
[3] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(22):2319–2323, 2000.
[4] Piotr Indyk and Assaf Naor. Nearest-Neighbor-Preserving Embeddings. ACM Transactions on Algorithms, 3, 2007.
[5] Richard G. Baraniuk and Michael B. Wakin. Random Projections of Smooth Manifolds. Foundations of Computational Mathematics, 9(1):51–77, 2009.
[6] Yoav Freund, Sanjoy Dasgupta, Mayank Kabra, and Nakul Verma. Learning the structure of manifolds using random projections. In Twenty-First Annual Conference on Neural Information Processing Systems, 2007.
[7] Samory Kpotufe. Escaping the curse of dimensionality with a tree-based regressor. In 22nd Annual Conference on Learning Theory, 2009.
[8] Donghui Yan, Ling Huang, and Michael I. Jordan. Fast Approximate Spectral Clustering. In 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 907–916, 2009.
[9] John Wright and Gang Hua. Implicit Elastic Matching with Random Projections for Pose-Variant Face Recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1502–1509, 2009.
[10] Jian Pu, Junping Zhang, Peihong Guo, and Xiaoru Yuan. Interactive Super-Resolution through Neighbor Embedding. In 9th Asian Conference on Computer Vision, pages 496–505, 2009.
[11] Jon Louis Bentley. Multidimensional Binary Search Trees Used for Associative Searching. Communications of the ACM, 18(9):509–517, 1975.
[12] Sunil Arya, David M. Mount, Nathan S. Netanyahu, Ruth Silverman, and Angela Y. Wu. An Optimal Algorithm for Approximate Nearest Neighbor Searching in Fixed Dimensions. Journal of the ACM, 45(6):891–923, 1998.
[13] Christian A. Duncan, Michael T. Goodrich, and Stephen G. Kobourov. Balanced Aspect Ratio Trees: Combining the Advantages of k-d Trees and Octrees. Journal of Algorithms, 38(1):303–333, 2001.
[14] Hanan Samet. Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann Publishers, 2005.
[15] Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman, and Angela Y. Wu. A local search approximation algorithm for k-means clustering. Computational Geometry, 28(2-3):89–112, 2004.
[16] Partha Niyogi, Stephen Smale, and Shmuel Weinberger. Finding the Homology of Submanifolds with High Confidence from Random Samples. Discrete & Computational Geometry, 39(1-3):419–441, 2008.
[17] Sebastián Montiel and Antonio Ros. Curves and Surfaces, volume 69 of Graduate Studies in Mathematics. American Mathematical Society and Real Sociedad Matemática Española, 2005.
| 4039 |@word version:2 norm:1 seems:1 nd:1 open:1 covariance:13 pick:2 thereby:1 reduction:12 contains:10 document:1 animated:1 existing:1 assigning:1 must:2 john:2 christian:1 generative:1 plane:4 provides:1 revisited:1 cse:1 node:4 contribute:1 c6:2 zhang:1 purushot:1 mathematical:1 c2:2 become:2 symposium:2 descendant:17 prove:15 incorrect:1 naor:1 yuan:1 assaf:1 inside:5 x0:1 indeed:3 inspired:1 automatically:1 curse:3 provided:1 bounded:10 suffice:2 what:2 kind:2 submanifolds:1 extremum:1 finding:1 guarantee:16 every:6 preferably:1 act:1 multidimensional:2 interactive:1 ro:1 demonstrates:1 k2:4 partitioning:10 omit:1 louis:1 before:4 engineering:2 local:15 sd:11 consequence:1 mount:2 ree:49 approximately:1 studied:1 quantified:1 range:1 bi:4 graduate:1 faithful:1 acknowledgment:1 piatko:1 union:1 silverman:2 intersect:4 yan:1 adapting:1 projection:8 matching:1 confidence:4 word:1 get:4 onto:2 cannot:1 applying:1 fruitful:1 map:1 deterministic:1 center:8 demonstrated:1 send:1 go:11 starting:2 resolution:2 formalized:1 recovery:1 pure:1 utilizing:1 orthonormal:1 his:1 proving:2 searching:3 notion:8 coordinate:1 embedding:1 avatar:1 play:1 suppose:5 construction:2 exact:1 us:1 trick:1 element:1 circumscribed:1 recognition:3 role:1 capture:1 worst:1 region:1 decrease:1 yk:2 mentioned:2 balanced:1 workaround:1 hinder:3 solving:1 completely:6 packing:12 differently:1 iit:2 jersey:1 various:1 surrounding:1 separated:3 forced:1 fast:2 effective:3 goodrich:1 tell:2 choosing:3 neighborhood:3 whose:3 supplementary:8 say:1 otherwise:2 niyogi:1 itself:3 final:2 indyk:2 associative:1 advantage:1 rr:10 eigenvalue:1 frequent:1 combining:1 adapts:7 getting:2 motwani:1 leave:1 help:1 ac:1 pose:1 nearest:5 direction:4 radius:62 centered:2 human:1 settle:1 material:8 require:4 fix:3 suffices:1 samet:1 randomization:4 pessimistic:1 extension:4 hold:2 lying:1 sufficiently:1 considered:1 wright:1 pointing:1 smallest:6 largest:2 kabra:1 brought:1 clearly:1 always:2 super:2 corollary:1 ax:37 viz:1 longest:2 improvement:4 prevalent:1 vk:1 adversarial:1 attains:1 sigkdd:1 sense:1 typically:3 ancestor:1 going:1 interested:1 hanan:1 development:1 constrained:1 shaped:2 having:5 sampling:1 piotr:2 jon:1 donghui:1 others:1 richard:1 few:1 primarily:1 saha:1 simultaneously:1 asian:1 variously:2 geometry:2 ourselves:1 freedom:2 mining:1 plg:2 circumscribing:1 immense:1 ambient:1 closer:1 neglecting:1 tree:24 euclidean:2 desired:2 instance:1 bbd:3 cover:2 tp:2 yoav:2 epa:1 neutral:2 expects:2 submanifold:1 nearoptimal:1 connect:1 dependency:1 thickness:1 thanks:1 cited:1 fundamental:2 randomized:1 mayank:1 international:1 lee:1 probabilistic:1 off:1 invoke:1 regressor:1 michael:3 sketching:1 again:2 augmentation:1 huang:1 dr:3 worse:1 american:1 account:1 de:1 student:1 b2:6 explicitly:1 try:2 closed:1 analyze:1 observing:1 vin:1 contribution:1 partha:1 kaufmann:1 annulus:1 chandan:1 definition:7 c7:2 energy:3 involved:1 james:1 proof:22 attributed:1 riemannian:3 sunil:1 sampled:1 proved:2 dataset:3 knowledge:4 dimensionality:14 improves:1 obtainable:1 adaptability:1 improved:2 done:1 box:2 generality:1 just:1 implicit:2 langford:1 nonlinear:1 marker:1 lack:2 rajeev:1 bentley:1 usa:1 usage:1 concept:1 contain:3 true:1 remedy:1 inductive:1 hence:5 homology:1 alternating:1 rpt:49 recurrence:1 rooted:2 anything:1 generalized:1 arrived:1 motion:2 christine:1 silva:1 image:1 novel:1 recently:1 behaves:2 overview:1 exponentially:1 volume:1 refer:7 rd:17 trivially:1 pm:1 mathematics:2 moving:2 actor:1 similarity:1 surface:2 
yk2:1 etc:3 pu:1 curvature:2 isometry:1 purushottam:3 apart:1 certain:3 kar:2 binary:2 success:2 exploited:1 joshua:1 preserving:2 captured:1 greater:4 additional:1 impose:1 seen:1 morgan:1 shortest:1 stephen:2 flatness:2 smooth:3 adapt:2 offer:1 ensuring:1 variant:3 regression:2 basic:1 coerced:1 vision:2 expectation:2 metric:1 represent:1 cell:63 c1:12 whereas:4 want:2 median:1 jian:1 sends:2 crucial:1 publisher:1 seem:1 jordan:1 integer:1 inscribed:4 call:1 split:39 enough:1 concerned:1 automated:1 embeddings:1 restrict:2 escaping:1 reduce:1 akin:1 proceed:1 cause:1 deep:2 antonio:1 useful:4 covered:2 informally:2 amount:1 kanungo:1 extensively:1 locally:2 concentrated:2 tenenbaum:1 exist:2 restricts:1 notice:3 disjoint:4 per:1 discrete:1 dasgupta:3 shall:2 nevertheless:2 fraction:5 inverse:1 baraniuk:1 almost:2 family:1 wu:2 duncan:1 bound:28 guaranteed:1 annual:4 gang:1 placement:1 calling:1 aspect:11 nathan:2 argument:2 min:1 transferred:1 department:3 alternate:1 ball:47 projecting:1 restricted:1 remains:1 turn:3 count:1 pin:1 know:1 available:1 endowed:1 apply:1 away:1 spectral:2 robustness:1 weinberger:1 top:1 clustering:5 ensure:1 angela:2 wakin:1 maintaining:1 exploit:1 giving:1 society:2 already:3 quantity:1 question:1 imbedded:2 dependence:2 pave:1 diagonal:1 said:1 distance:3 separate:3 remedied:1 thank:1 manifold:26 argue:2 plb:2 length:3 ruth:2 useless:3 ratio:13 difficult:2 statement:1 smale:1 trace:2 twenty:1 kpotufe:1 upper:4 observation:1 arya:1 situation:1 communication:1 frame:1 arbitrary:2 p0k:3 introduced:1 david:2 pair:16 required:4 namely:1 c3:2 c4:4 polylogarithmic:1 boost:1 able:1 bar:1 usually:1 below:10 pattern:1 built:2 max:1 event:2 improve:2 technology:1 inversely:2 numerous:1 nice:1 geometric:2 discovery:1 tangent:4 freund:3 loss:1 expect:3 afterward:1 proportional:2 proven:1 entrant:1 foundation:3 degree:2 offered:1 pij:1 affine:1 proxy:1 netanyahu:2 tiny:1 atleast:1 land:2 verma:1 supported:1 atmost:1 side:2 weaker:1 formal:1 institute:1 india:1 neighbor:6 face:2 curve:2 dimension:33 depth:9 author:2 made:1 c5:2 far:2 transaction:1 approximate:4 iitk:1 global:1 decides:1 b1:6 conclude:4 search:3 why:1 nature:1 shmuel:1 elastic:1 ean:9 excellent:1 poly:1 constructing:2 assured:2 nakul:1 pk:2 dense:1 main:4 ling:1 tapa:1 child:6 repeated:1 body:3 sebasti:1 scattered:1 fashion:1 samory:1 formalization:1 position:1 explicit:1 lie:6 kanpur:3 down:10 theorem:25 removing:1 bad:8 showing:2 concern:1 intrinsic:7 undergraduate:1 quantization:1 exists:7 albeit:1 effectively:1 subtree:2 kx:5 led:1 logarithmic:6 simply:1 relegate:1 kxk:1 contained:7 doubling:12 hua:1 assouad:1 acm:6 conditional:1 towards:1 room:1 typical:2 specifically:3 hyperplane:1 lemma:21 sanjoy:2 guo:1 indian:1 requisite:1 princeton:3 |
3,357 | 404 | Design and Implementation of a High Speed
CMAC Neural Network Using Programmable
CMOS Logic Cell Arrays
W. Thomas Miller, III, Brian A. Box, and Erich C. Whitney
Department of Electrical and Computer Engineering
Kingsbury Hall
University of New Hampshire
Durham, New Hampshire 03824
James M. Glynn
Shenandoah Systems Company
1A Newington Park
West Park Drive
Newington, New Hampshire 03801
Abstract
A high speed implementation of the CMAC neural network was designed
using dedicated CMOS logic. This technology was then used to implement
two general purpose CMAC associative memory boards for the VME bus.
Each board implements up to 8 independent CMAC networks with a total
of one million adjustable weights. Each CMAC network can be configured
to have from 1 to 512 integer inputs and from 1 to 8 integer outputs.
Response times for typical CMAC networks are well below 1 millisecond,
making the networks sufficiently fast for most robot control problems, and
many pattern recognition and signal processing problems.
1
INTRODUCTION
We have been investigating learning techniques for the control of robotic manipulators which utilize extensions of the CMAC neural network as developed by Albus
(1972; 1975; 1979). The learning control techniques proposed have been studied
in our laboratory in a series of real time experimental studies (Miller, 1986; 1987;
1989; Miller et al., 1987; 1988; 1990). These studies successfully demonstrated the
ability to learn the kinematics of a robot/video camera system interacting with
randomly oriented objects on a moving conveyor, and to learn the dynamics of a
multi-axis industrial robot during high speed motions. We have also investigated
the use of CMAC networks for pattern recognition (Glanz and Miller, 1987; Herold
et al., 1988) and signal processing (Glanz and Miller, 1989) applications, with encouraging results. The primary goal of this project was to implement a compact,
high speed version of the CMAC neural network using CMOS logic cell arrays. Two
prototype CMAC associative memory systems for the industry standard VME bus
were then constructed.
2
THE CMAC NEURAL NETWORK
Figure 1 shows a simple example of a CMAC network with two inputs and one
output. Each variable in the input state vector is fed to a series of input sensors
with overlapping receptive fields. The width of the receptive field of each sensor
produces input generalization, while the offset of the adjacent fields produces input
quantization. The binary outputs of the input sensors are combined in a series of
threshold logic units (called state space detectors) with thresholds adjusted to produce logical AND functions. Each of these units receives one input from the group
of sensors for each input variable, and thus its input receptive field is the interior
of a hypercube in the input hyperspace. The input sensors are interconnected in
a sparse and regular fashion, so that each input vector excites a fixed number of
state space detectors. The outputs of the state space detectors are connected randomly to a smaller set of threshold logic units (called multiple field detectors) with
thresholds adjusted such that the output will be on if any input is on. The receptive
field of each of these units is thus the union of the fields of many of the state space
detectors. Finally, the output of each multiple field detector is connected, through
an adjustable weight, to an output summing unit. The output for a given input is
thus the sum of the weights selected by the excited multiple field detectors.
The nonlinear nature of the CMAC network is embodied in the interconnections of
the input sensors, state space detectors, and multiple field detectors, which perform
a fixed nonlinear associative mapping of the continuous valued input vector to a
many dimensional binary valued vector (which has tens or hundreds of thousands
of dimensions in typical implementations). The adaptation problem is linear in this
many dimensional space, and all of the convergence theorems for linear adaptive
elements apply.
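To make the mapping concrete, the following is a minimal software sketch of a hashed CMAC with C overlapping tilings (our own illustration only; the boards described below use dedicated logic and a different, bit-recursive hashing scheme, and all names here are hypothetical):

```python
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, quantization=4, table_size=2**18, seed=0):
        self.C = n_tilings             # overlapping receptive fields per input
        self.q = quantization          # receptive field width (generalization)
        self.w = np.zeros(table_size)  # adjustable weights (the weight "RAM")
        self.salt = np.random.default_rng(seed).integers(1, 2**31, size=n_tilings)

    def addresses(self, x):
        """Map an integer input vector to C weight-table addresses, one per
        offset tiling; hashing stands in for the state space detectors."""
        cells = [tuple((x + i) // self.q) for i in range(self.C)]
        return [hash((int(s),) + c) % len(self.w)
                for s, c in zip(self.salt, cells)]

    def response(self, x):
        """Output = sum of the weights selected by the excited fields."""
        return sum(self.w[a] for a in self.addresses(x))

net = CMAC()
print(net.response(np.array([3, 17])))  # 0.0 before any training
```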
3
THE CMAC HARDWARE DESIGN
The custom implementation of the CMAC associative memory required the development of two devices. The first device performs the input associative mapping,
converting application relevant input vectors into traditional RAM addresses. The
second device performs CMAC response accumulation, summing the weights from
all excited receptive fields. Both devices were implemented using 70 MHz XILINX
3090 programmable logic cell arrays.

[Figure 1: input sensors feed state space detectors (logical AND units), which feed multiple field detectors (logical OR units); adjustable weights connect these to an output summing unit.]
Figure 1: A Simple Example of a CMAC Neural Network
The associative mapping device uses a bit recursive mapping scheme developed at
UNH, which is similar in philosophy to the CMAC mapping proposed by Albus,
but is structured for efficient implementation using discrete logic. The "address" of
each excited virtual receptive field is formed recursively by clocking the input vector
components sequentially from a buffer FIFO. The hashing of the virtual receptive
field address to a physical RAM address is performed simultaneously, using pipelined
logic. The resulting associative mapping generates one 18 bit RAM address for a
given input vector. The multiple addresses, corresponding to the multiple receptive
fields excited by a single input vector could be generated simultaneously using
parallel addressing circuits, or sequentially using a single circuit.
The second CMAC device serves basically as an accumulator during CMAC response
generation. As successive addresses are produced by the associative mapping circuit,
the accumulator sums the corresponding values from the data RAM. During memory
training, the response accumulation circuit adds the training adjustment to each
of the addressed memory locations, placing the result back in the RAM. Eight
independent CMAC output channels were placed on a single device.
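A schematic of that read-accumulate-write cycle, in Python for illustration (the function names and the division of the training adjustment across fields are our assumptions):

```python
def cmac_response(weights, addresses):
    """Response generation: accumulate the weight value selected by each
    address as it arrives from the associative mapping circuit."""
    acc = 0
    for a in addresses:
        acc += weights[a]
    return acc

def cmac_train_step(weights, addresses, target, n_fields):
    """Training: add the same adjustment to every addressed location,
    placing each result back in the RAM (read-modify-write)."""
    error = target - cmac_response(weights, addresses)
    adjustment = error / n_fields
    for a in addresses:
        weights[a] += adjustment
    return error
```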
In the final VME system design (Figure 2), a single CMAC associative mapping
device was used. Overlapping receptive fields were implemented sequentially using
the same device. A single CMAC response accumulation device was used, providing
eight parallel output channels. A weight vector memory containing 1 million 8
bit weights was provided using 85 nanosecond 512 KByte static RAM SIMMs.
A TMS320E15 micro controller was utilized to supervise communications with the
VME bus. The operational firmware for the micro controller chip was designed to
provide maximum flexibility in the logical organization of the CMAC associative memory, as viewed by the VME host system.

[Figure 2: The component side of the VME-based CMAC associative memory card (panels: CMAC Associative Mapping, CMAC Output Accumulators, VME P1 Connector). The two large XILINX 3090 logic cell arrays implement the CMAC associative mapping and the response accumulation/weight adjustment circuitry. The weights are stored in the 1 Mbyte static RAM. The TMS320E15 microcontroller supervises communications between the CMAC hardware and the VME host.]

The board can be initialized to act as
from 1 to 8 independent virtual CMAC networks. For each network, the number of
16 bit inputs is selectable from 1 to 512, the number of 16 bit outputs is selectable
from 1 to 8, and the number of overlapping receptive fields is selectable from 2 to
256.
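As a rough illustration of this configuration space (the field names are hypothetical, not the actual register layout):

```python
from dataclasses import dataclass

@dataclass
class VirtualCMAC:
    n_inputs: int      # 16-bit inputs, 1 to 512
    n_outputs: int     # 16-bit outputs, 1 to 8
    n_fields: int      # overlapping receptive fields, 2 to 256

    def __post_init__(self):
        assert 1 <= self.n_inputs <= 512
        assert 1 <= self.n_outputs <= 8
        assert 2 <= self.n_fields <= 256

# Up to 8 such virtual networks can share the board's 1 Mbyte weight memory.
configs = [VirtualCMAC(32, 8, f) for f in (8, 64, 256)]
```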
Figure 3 shows typical response times during training and response generation operations for a CMAC network with 1 million adjustable weights. The data shown
represent networks with 32 integer inputs and 8 integer outputs, with the number of overlapping receptive fields varied between 8 and 256. Throughout most of
this range CMAC training and response times are well below 1 millisecond. These
performance specifications should accommodate typical real time control problems
(allowing 1000 cycle per second control rates), as well as many problems in pattern
recognition.
A similar CMAC system for the 16 bit PC-AT bus has been developed by the
Shenandoah Systems Company for commercial applications. This CMAC system
supports both 8 and 16 bit adjustable weights (1 Mbyte total storage), and 8 independent virtual CMAC networks on a single card. Response times for the commercial CMAC-AT card are similar to those shown in Figure 3. A commercial version
of the VME bus design is currently under development.

[Figure 3: CMAC Associative Memory Response and Training Times. Response times are shown for values of the generalization parameter (the number of overlapping receptive fields) between 8 and 256. In each case the CMAC had 32 integer inputs, 8 integer outputs, and one million adjustable weights.]
Acknowledgements
This work was sponsored in part by the Office of Naval Research (ONR Grant
Number N00014-89-J-1686) and the National Institute of Standards and Technology.
References
Albus, J. S., Theoretical and Experimental Aspects of a Cerebellar Model. PhD
Thesis, University of Maryland, Dec. 1972.
Albus, J. S., A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC). Trans. of the ASME, Journal of Dynamic Systems,
Measurement and Control, vol. 97, pp. 220-227, September, 1975.
Albus, J. S., Mechanisms of Planning and Problem Solving in the Brain. Mathematical Biosciences, vol. 45, pp. 247-293, August, 1979.
Miller, W. T., A Nonlinear Learning Controller for Robotic Manipulators. Proc. of
the SPIE: Intelligent Robots and Computer Vision, vol. 726, pp. 416-423, October,
1986.
Miller, W. T., Sensor Based Control of Robotic Manipulators Using A General
Learning Algorithm. IEEE J. of Robotics and Automation, vol. RA-3, pp. 157-165, April, 1987.
Miller, W. T., Glanz, F. H., and Kraft, L. G., Application of a General Learning
Algorithm to the Control of Robotic Manipulators. The International Journal of
Robotics Research, vol. 6.2, pp. 84-98, Summer, 1987.
Miller, W.T., and Hewes, R.P., Real Time Experiments in Neural Network Based
Learning Control During High Speed, Nonrepetitive Robot Operations. Proceedings
of the Third IEEE International Symposium on Intelligent Control, Washington,
D.C., August 24-26, 1988.
Miller, W. T., Real Time Application of Neural Networks for Sensor-Based Control
of Robots with Vision. IEEE Transactions on Systems, Man, and Cybernetics.
Special issue on Information Technology for Sensory-Based Robot Manipulators,
vol. 19, pp. 825-831, 1989.
Miller, W. T., Hewes, R. P., Glanz, F. H., and Kraft, L. G., Real Time Dynamic
Control of an Industrial Manipulator Using a Neural Network Based Learning Controller. IEEE J. of Robotics and Automation vol. 6, pp. 1-9, 1990.
Glanz, F. H., Miller, W. T., Shape Recognition Using a CMAC Based Learning
System. Proceedings SPIE: Intelligent Robots and Computer Vision, Cambridge,
Mass., Nov., 1987.
Herold, D. J., Miller, W. T., Kraft, L. G., and Glanz, F. H., Pattern Recognition
Using a CMAC Based Learning System. Proceedings SPIE: Automated Inspection
and High Speed Vision Architectures II, vol. 1004, pp. 84-90, 1988.
Glanz, F. H., and Miller, W. T., Deconvolution and Nonlinear Inverse Filtering
Using a Neural Network. Proc. ICASSP 89, Glasgow, Scotland, May 23-26, 1989,
vol. 4, pp. 2349-2352.
3,358 | 4,040 | Generative Local Metric Learning for Nearest Neighbor Classification
Yung-Kyun Noh^{1,2}    Byoung-Tak Zhang^2    Daniel D. Lee^1
^1 GRASP Lab, University of Pennsylvania, Philadelphia, PA 19104, USA
^2 Biointelligence Lab, Seoul National University, Seoul 151-742, Korea
[email protected], [email protected], [email protected]
Abstract
We consider the problem of learning a local metric to enhance the performance of
nearest neighbor classification. Conventional metric learning methods attempt to
separate data distributions in a purely discriminative manner; here we show how
to take advantage of information from parametric generative models. We focus
on the bias in the information-theoretic error arising from finite sampling effects,
and find an appropriate local metric that maximally reduces the bias based upon
knowledge from generative models. As a byproduct, the asymptotic theoretical
analysis in this work relates metric learning with dimensionality reduction, which
was not understood from previous discriminative approaches. Empirical experiments show that this learned local metric enhances the discriminative nearest
neighbor performance on various datasets using simple class conditional generative models.
1 Introduction
The classic dichotomy between generative and discriminative methods for classification in machine
learning can be clearly seen in two distinct performance regimes as the number of training examples
is varied [12, 18]. Generative models, which employ models first to find the underlying distribution p(x|y) for a discrete class label y and input data x ∈ R^D, typically outperform discriminative
methods when the number of training examples is small, due to smaller variance in the generative
models which compensates for any possible bias in the models. On the other hand, more flexible
discriminative methods, which are interested in a direct measure of p(y|x), can accurately capture the true posterior structure p(y|x) when the number of training examples is large. Thus, given
enough training examples, the best performing classification algorithms have typically employed
purely discriminative methods.
However, due to the curse of dimensionality when D is large, the number of data examples may
not be sufficient for discriminative methods to approach their asymptotic performance limits. In this
case, it may be possible to improve discriminative methods by exploiting knowledge of generative
models. There has been recent work on hybrid models showing some improvement [14, 15, 20], but
mainly the generative models have been improved through the discriminative formulation. In this
work, we consider a very simple discriminative classifier, the nearest neighbor classifier, where the
class label of an unknown datum is chosen according to the class label of the nearest known datum.
The choice of a metric to define nearest is then crucial, and we show how this metric can be locally
defined based upon knowledge of generative models.
Previous work on metric learning for nearest neighbor classification has focused on a purely discriminative approach. The metric is parameterized by a global quadratic form which is then optimized
on the training data to maximize pairwise separation between dissimilar points, and to minimize the
pairwise separation of similar points [3, 9, 10, 21, 26]. Here, we show how the problem of learning
a metric can be related to reducing the theoretical bias of the nearest neighbor classifier. Though
the performance of the nearest neighbor classifier has good theoretical guarantees in the limit of
infinite data, finite sampling effects can introduce a bias which can be minimized by the choice of an
appropriate metric. By directly trying to reduce this bias at each point, we will see the classification
error is significantly reduced compared to the global class-separating metric.
We show how to choose such a metric by analyzing the probability distribution on nearest neighbors,
provided we know the underlying generative models. Analyses of nearest neighbor distributions
have been discussed before [11, 19, 24, 25], but we take a simpler approach and derive the metric-dependent term in the bias directly. We then show that minimizing this bias results in a semi-definite
programming optimization that can be solved analytically, resulting in a locally optimal metric. In
related work, Fukunaga et al. considered optimizing a metric function in a generative setting [7, 8],
but the resulting derivation was inaccurate and does not improve nearest neighbor performance.
Jaakkola et al. first showed how a generative model can be used to derive a special kernel, called the
Fisher kernel [12], which can be related to a distance function. Unfortunately, the Fisher kernel is
quite generic, and need not necessarily improve nearest neighbor performance.
Our generative approach also provides a theoretical relationship between metric learning and the
dimensionality reduction problem. In order to find better projections for classification, research
on dimensionality reduction using labeled training data has utilized information-theoretic measures
such as the Bhattacharyya divergence [6] and mutual information [2, 17]. We argue how these problems can be connected with metric learning for nearest neighbor classification within the general
framework of F-divergences. We will also explain how dimensionality reduction is entirely different
from metric learning in the generative approach, whereas in the discriminative setting, it is simply a
special case of metric learning where particular directions are shrunk to zero.
The remainder of the paper is organized as follows. In section 2, we motivate our approach by comparing the metric dependency of the discriminative and generative approaches for nearest neighbor classification.
After we derive the bias due to finite sampling in section 3, we show, in section 4, how minimizing
this bias results in a local metric learning algorithm. In section 5, we explain how metric learning
should be understood in a generative perspective, in particular, its relationship with dimensionality
reduction. Experiments on various datasets are presented in section 6, comparing our experimental
results with other well-known algorithms. Finally, in section 7, we conclude with a discussion of
future work and possible extensions.
2 Metric and Nearest Neighbor Classification
In recent work, determining a good metric for nearest neighbor classification is believed to be crucial. However, traditional generative analysis of this problem has simply ignored the metric issue
with good reason, as we will see in section 2.2. In this section, we explain the apparent contradiction between two different approaches to this issue, and briefly describe how the resolution of
this contradiction will lead to a metric learning method that is both theoretically and practically
plausible.
2.1 Metric Learning for Nearest Neighbor Classification
A nearest neighbor classifier determines the label of an unknown datum according to the label of
its nearest neighbor. In general, the meaning of the term nearest is defined along with the notion
of distance in data space. One common choice for this distance is the Mahalanobis distance with
a positive definite square matrix A ∈ R^{D×D}, where D is the dimensionality of the data space. In this case, the distance between two points x_1 and x_2 is defined as

d(x_1, x_2) = √( (x_1 − x_2)^T A (x_1 − x_2) ),    (1)

and the nearest datum x_NN is the one having minimal distance to the test point among the labeled training data {x_i}_{i=1}^N.
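For concreteness, a minimal sketch of nearest neighbor prediction under this metric (our own illustration, not the authors' code):

```python
import numpy as np

def nn_predict(x_test, X_train, y_train, A):
    """1-NN prediction under the Mahalanobis metric of Eq. (1);
    A must be positive definite, and A = I recovers the Euclidean case."""
    diff = X_train - x_test                        # (N, D) differences
    d2 = np.einsum('nd,de,ne->n', diff, A, diff)   # squared distances
    return y_train[np.argmin(d2)]
```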
In this classification task, the results are highly dependent on the choice of matrix A, and prior work
has attempted to improve the performance by a better choice of A. This recent work has assumed
the following common heuristic: the training data in different classes should be separated in a new
metric space. Given training data, a global A is optimized such that directions separating different
class data are extended, and directions binding same class data together are shrunk [3, 9, 10, 21, 26].
However, in terms of the test results, these conventional methods do not improve the performance
dramatically, which will be shown in our later experiments on large datasets, and we show why only
small improvements arise in our theoretical analysis.
2.2 Theoretical Performance of Nearest Neighbor Classifier
Contrary to recent metric learning approaches, a simple theoretical analysis using a generative model
displays no sensitivity to the choice of the metric. We consider i.i.d. samples generated from two
different distributions p_1(x) and p_2(x) over the vector space x ∈ R^D. With infinite samples, the probability of misclassification using a nearest neighbor classifier can be obtained:

E_Asymp = ∫ p_1(x) p_2(x) / (p_1(x) + p_2(x)) dx,    (2)

which is better known by its relationship to an upper bound, twice the optimal Bayes error [4, 7, 8]. By looking at the asymptotic error in a linearly transformed z-space, we can show that Eq. (2) is invariant to the change of metric. If we consider a linear transformation z = L^T x using a full rank matrix L, and the distribution q_c(z) for c ∈ {1, 2} in z-space satisfying p_c(x)dx = q_c(z)dz and the accompanying measure change dz = |L|dx, we see that E_Asymp in z-space is unchanged. Since any positive definite A can be decomposed as A = LL^T, we can say the asymptotic error remains constant even as the metric shrinks or expands any spatial directions in data space.
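This cancellation is easy to check numerically. Below is a rough Monte Carlo sketch (our construction) for two Gaussians: the Jacobian |det L| enters both q_1 and q_2 and cancels inside the ratio, so the estimate does not change for any full rank L.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

D, n = 5, 200000
p1 = mvn(np.zeros(D), np.eye(D))
p2 = mvn(np.ones(D), 2 * np.eye(D))

def asymptotic_error(L):
    # In z = L^T x space, q_c(z) = p_c(x) / |det L|.
    x = p1.rvs(size=n, random_state=0)      # proposal samples from p1
    jac = abs(np.linalg.det(L))
    q1, q2 = p1.pdf(x) / jac, p2.pdf(x) / jac
    return np.mean(q2 / (q1 + q2))          # E_{q1}[ q2 / (q1 + q2) ]

rng = np.random.default_rng(1)
print(asymptotic_error(np.eye(D)))                    # identical values:
print(asymptotic_error(rng.standard_normal((D, D))))  # the Jacobian cancels
```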
This difference in behavior in terms of metric dependence can be understood as a special property
that arises from infinite data. When we do not have infinite samples, the expectation of error is
biased in that it deviates from the asymptotic error, and the bias is dependent on the metric. From
a theoretical perspective, the asymptotic error is the theoretical limit of expected error, and the bias
reduces as the number of samples increases. Since this difference is not considered in previous
research, the aforementioned metric will not exhibit performance improvements when the sample
number is large.
In the next section, we look at the performance bias associated with finite sampling directly and find
a metric that minimizes the bias from the asymptotic theoretical error.
3 Performance Bias due to Finite Sampling
Here, we obtain the expectation of nearest neighbor classification error from the distribution of
nearest neighbors in different classes. As we consider a finite number of samples, the nearest neighbor
from a point x0 appears at a finite distance dN > 0. This non-zero distance gives rise to the
performance difference from its theoretical limit (2). A twice-differentiable distribution p(x) is
considered and approximated to second order near a test point x0 :
p(x) ≈ p(x_0) + ∇p(x)|^T_{x=x_0} (x − x_0) + (1/2) (x − x_0)^T ∇∇p(x)|_{x=x_0} (x − x_0)    (3)
with the gradient ?p(x) and Hessian matrix ??p(x) defined by taking derivatives with respect to
x.
Now, under the condition that the nearest neighbor appears at the distance dN from the test point,
the expectation of the probability p(xN N ) at a nearest neighbor point is derived by averaging the
probability over the D-dimensional hypersphere of radius dN , as in Fig. 1. After averaging, the
gradient term disappears, and the resulting expectation is the sum of the probability at x0 and a
residual term containing the Laplacian of p. We replace this expected probability by p?(x0 ).
E_{x_NN}[ p(x_NN) | d_N, x_0 ]
  = p(x_0) + (1/2) E_{x_NN}[ (x − x_0)^T ∇∇p(x) (x − x_0) | ‖x − x_0‖² = d_N² ]
  = p(x_0) + (d_N² / 2D) ∇²p|_{x=x_0} ≡ p̄(x_0)    (4)
Figure 1: The nearest neighbor x_NN appears at a finite distance d_N from x_0 due to finite sampling. Given the data distribution p(x), the average probability density function over the surface of a D-dimensional hypersphere is p̄(x_0) = p(x_0) + (d_N² / 2D) ∇²p|_{x=x_0} for small d_N.
where the scalar Laplacian ∇²p(x) is given by the sum of the eigenvalues of the Hessian ∇∇p(x).
If we look at the expected error, it is the expectation of the probability that the test point and its
neighbor are labeled differently. In other words, the expected error E_NN is the expectation of e(x, x_NN) = p(C_1|x) p(C_2|x_NN) + p(C_2|x) p(C_1|x_NN) over both the distribution of x and the distribution of the nearest neighbor x_NN for a given x:

E_NN = E_x[ E_{x_NN}[ e(x, x_NN) | x ] ]    (5)
We then replace the posteriors p(C|x) and p(C|x_NN) by p_c(x)/(p_1(x) + p_2(x)) and p_c(x_NN)/(p_1(x_NN) + p_2(x_NN)) respectively, and approximate the expectation of the posterior E_{x_NN}[ p(C|x_NN) | d_N, x ] at a fixed distance d_N from the test point x using p̄_c(x)/(p̄_1(x) + p̄_2(x)). If we expand E_NN with respect to d_N, and take the expectation using the decomposition E_{x_NN}[f] = E_{d_N}[ E_{x_NN}[f | d_N] ], then the expected error is given to leading order by

E_NN ≈ ∫ p_1 p_2 / (p_1 + p_2) dx
     + ∫ (E_{d_N}[d_N²] / 4D) · [ p_2² ∇²p_1 + p_1² ∇²p_2 − p_1 p_2 (∇²p_1 + ∇²p_2) ] / (p_1 + p_2)² dx    (6)
When E_{d_N}[d_N²] → 0 with an infinite number of samples, this error converges to the asymptotic limit in Eq. (2) as expected. The residual term can be considered as the finite sampling bias of the error discussed earlier. Under the coordinate transformation z = L^T x and the distributions p(x) on x and q(z) on z, we see that this bias term is dependent upon the choice of the metric A = LL^T:

∫ (1 / (q_1 + q_2)²) [ q_2² ∇²q_1 + q_1² ∇²q_2 − q_1 q_2 (∇²q_1 + ∇²q_2) ] dz
  = ∫ (1 / (p_1 + p_2)²) tr[ A^{-1} ( p_2² ∇∇p_1 + p_1² ∇∇p_2 − p_1 p_2 (∇∇p_1 + ∇∇p_2) ) ] dx    (7)

which is derived using p(x)dx = q(z)dz and |L| ∇²q = tr[A^{-1} ∇∇p]. The expectation of the squared distance E_{d_N}[d_N²] is related to the determinant |A|, which will be fixed to 1. Thus, finding the
metric that minimizes the quantity given in Eq. (7) at each point is equivalent to minimizing the
metric-dependent bias in Eq. (6).
4 Reducing Deviation from the Asymptotic Performance
Finding the local metric that minimizes the bias can be formulated as a semi-definite programming
(SDP) problem of minimizing squared residual with respect to a positive semi-definite metric A:
min_A (tr[A^{-1} B])²   s.t.   |A| = 1,  A ⪰ 0    (8)

where the matrix B at each point is

B = p_1² ∇∇p_2 + p_2² ∇∇p_1 − p_1 p_2 (∇∇p_1 + ∇∇p_2).    (9)
This is a simple SDP having an analytical solution where the solution shares the eigenvectors with
B. Let Λ_+ ∈ R^{d_+ × d_+} and Λ_− ∈ R^{d_− × d_−} be the diagonal matrices containing the positive and negative eigenvalues of B respectively. If U_+ ∈ R^{D × d_+} contains the eigenvectors corresponding to the eigenvalues in Λ_+ and U_− ∈ R^{D × d_−} contains the eigenvectors corresponding to the eigenvalues in Λ_−, we use the solution given by

A_opt = [U_+ U_−] · diag( d_+ Λ_+ ,  −d_− Λ_− ) · [U_+ U_−]^T    (10)
The solution A_opt is a local metric, since we assumed that the nearest neighbor was close to the test point satisfying Eq. (3). In principle, distances should then be defined as geodesic distances using this local metric on a Riemannian manifold. However, this is computationally difficult, so we use the surrogate distance A = λI + A_opt and treat λ as a regularization parameter that is learned in addition to the local metric A_opt.
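A sketch of this closed-form solution (our code; the |A| = 1 normalization is omitted here and absorbed into the regularizer λ):

```python
import numpy as np

def optimal_local_metric(B, lam=1.0, eps=1e-12):
    """Spectral solution of Eq. (8): A_opt shares eigenvectors with B and
    rescales its eigenvalues as in Eq. (10), so that tr[A^{-1} B] = 0."""
    evals, U = np.linalg.eigh((B + B.T) / 2)   # symmetrize for safety
    pos, neg = evals > eps, evals < -eps
    scaled = np.zeros_like(evals)
    scaled[pos] = pos.sum() * evals[pos]       # d_+ * Lambda_+
    scaled[neg] = -neg.sum() * evals[neg]      # -d_- * Lambda_-  (positive)
    A_opt = (U * scaled) @ U.T
    return lam * np.eye(B.shape[0]) + A_opt    # surrogate A = lam*I + A_opt
```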
The multiway extension of this problem is straightforward. The asymptotic error with C-class distributions can be extended to (1/C) Σ_{c=1}^C ∫ p_c ( Σ_{j≠c} p_j ) / ( Σ_i p_i ) dx using the posteriors of each class, and it replaces B in Eq. (9) by the extended matrix:

B = Σ_{i=1}^C ∇∇p_i ( Σ_{j≠i} p_j² − p_i Σ_{j≠i} p_j ).    (11)

5 Metric Learning in Generative Models
Traditional metric learning methods can be understood as being purely discriminative. In contrast to
our method that directly considers the expected error, those methods are focused on maximizing the
separation of data belonging to different classes. In general, their motivations are compared to the
supervised dimensionality reduction methods, which try to find a low dimensional space where the
separation between classes is maximized. Their dimensionality reduction is not that different from
metric learning, but is often a special case where the metric in particular directions is forced to be zero.
In the generative approach, however, the relationship between dimensionality reduction and metric
learning is different. As in the discriminative case, dimensionality reduction in generative models
tries to obtain class separation in a transformed space. It assumes particular parametric distributions
(typically Gaussians), and uses a criterion to maximize the separation [2, 6, 16, 17]. One general
form of these criteria is the F-divergence (also known as Csiszár's general measure of divergence), which can be defined with respect to a convex function φ(t) for t ∈ R [13]:

F(p_1(x), p_2(x)) = ∫ p_1(x) φ( p_2(x) / p_1(x) ) dx.    (12)

Examples of this divergence include the Bhattacharyya divergence ∫ √( p_1(x) p_2(x) ) dx when φ(t) = √t, and the KL-divergence −∫ p_1(x) log( p_2(x) / p_1(x) ) dx when φ(t) = −log(t). Using
mutual information between data and labels can be understood as an extension of KL-divergence.
The well known Linear Discriminant Analysis is a special example of Bhattacharyya criterion when
we assume two-class Gaussians sharing the same covariance matrices.
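For intuition, Eq. (12) can be evaluated numerically for two univariate Gaussians; the helper below is only an illustration of ours, not part of the method:

```python
import numpy as np

def f_divergence(phi, p1, p2, grid):
    """Evaluate Eq. (12) on a dense 1-D grid (illustration only)."""
    dx = grid[1] - grid[0]
    return np.sum(p1(grid) * phi(p2(grid) / p1(grid))) * dx

def gauss(m, s):
    return lambda x: np.exp(-(x - m)**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

x = np.linspace(-10, 10, 20001)
p1, p2 = gauss(0.0, 1.0), gauss(1.0, 1.0)

bhattacharyya = f_divergence(np.sqrt, p1, p2, x)    # phi(t) = sqrt(t)
kl = f_divergence(lambda t: -np.log(t), p1, p2, x)  # phi(t) = -log(t); 0.5 here
```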
Unlike dimensionality reduction, we cannot use these criteria for metric learning because any F-divergence is metric-invariant. The asymptotic error Eq. (2) is related to one particular F-divergence
Figure 2: Optimal local metrics are shown on the left for three example Gaussian distributions in
a 5-dimensional space. The projected 2-dimensional distributions are represented as ellipses (one
standard deviation from the mean), while the remaining 3 dimensions have an isotropic distribution.
The local p̄/p of the three classes is plotted on the right using the Euclidean metric I and the optimal metric A_opt. The solution A_opt tries to keep the ratio p̄/p over the different classes as similar as possible when the distance d_N is varied.
by E_Asymp = 1 − F(p_1, p_2) with the convex function φ(t) = 1/(1 + t). Therefore, in generative
models, the metric learning problem is qualitatively different from the dimensionality reduction
problem in this aspect. One interpretation is that the F-measure can be understood as a measure
of dimensionality reduction in an asymptotic situation. In this case, the role of metric learning can
be defined to move the expected F-measure toward the asymptotic F-measure by appropriate metric
adaptation.
Finally, we provide an alternative understanding of the problem of reducing Eq. (7). By reformulating Eq. (9) into (p_2 − p_1)(p_2 ∇∇p_1 − p_1 ∇∇p_2), we can see that the optimal metric tries to minimize the difference between ∇²p_1/p_1 and ∇²p_2/p_2. If ∇²p_1/p_1 ≈ ∇²p_2/p_2, this also implies

p̄_1 / p_1 ≈ p̄_2 / p_2    (13)

for p̄ = p + (d_N² / 2D) ∇²p, the average probability at a distance d_N in (4). Thus, the algorithm tries to keep the ratio of the average probabilities p̄_1/p̄_2 at a distance d_N as similar as possible to the ratio of the probabilities p_1/p_2 at the test point. This means that the expected nearest neighbor classification at a distance d_N will be least biased due to finite sampling. Fig. 2 shows how the learned local metric A_opt varies at a point x for a 3-class Gaussian example, and how the ratio p̄/p is kept as similar
as possible.
6 Experiments
We apply our algorithm for learning a local metric to synthetic and various real datasets and see
how well it improves nearest neighbor classification performance. Simple standard Gaussian distributions are used to learn the generative model, with parameters including the mean vector μ and covariance matrix Σ for each class. The Hessian of a Gaussian distribution is then given by the expression:

∇∇p(x) = p(x) [ Σ^{-1} (x − μ)(x − μ)^T Σ^{-1} − Σ^{-1} ]    (14)
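As a concrete illustration (our code), the sketch below evaluates this Hessian for fitted class-conditional Gaussians and assembles the two-class B of Eq. (9), which can then be fed to the spectral solver sketched after Eq. (10):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def gaussian_hessian(x, mu, Sigma_inv, p_x):
    """Hessian of a Gaussian density at x, Eq. (14)."""
    d = (x - mu)[:, None]
    return p_x * (Sigma_inv @ d @ d.T @ Sigma_inv - Sigma_inv)

def local_B(x, means, covs):
    """Two-class B of Eq. (9) from fitted class-conditional Gaussians."""
    p = [mvn(m, S).pdf(x) for m, S in zip(means, covs)]
    H = [gaussian_hessian(x, m, np.linalg.inv(S), pc)
         for m, S, pc in zip(means, covs, p)]
    return (p[0]**2 * H[1] + p[1]**2 * H[0]
            - p[0] * p[1] * (H[0] + H[1]))
```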
This expression is then used to learn the optimal local metric. We compare the performance of
our method (GLML: Generative Local Metric Learning) with recent discriminative metric learning methods which report state-of-the-art performance on a number of datasets. These include
Information-Theoretic Metric Learning (ITML)^1 [3], Boost Metric^2 (BM) [21], and Large Margin Nearest Neighbor (LMNN)^3 [26]. We used the implementations downloaded from the corresponding authors' websites. We also compare with a local metric given by the Fisher kernel [12] assuming
a single Gaussian for the generative model and using the location parameter to derive the Fisher information matrix. The metric from the Fisher kernel was not originally intended for nearest neighbor
classification, but it is the only other reported algorithm that learns a local metric from generative
models.
For the synthetic dataset, we generated data from two-class random Gaussian distributions having
two fixed means. The covariance matrices are generated from random orthogonal eigenvectors and
random eigenvalues. Experiments were performed varying the input dimensionality, and the classification accuracies are shown in Fig. 3(a) along with the results of the other algorithms. We used
500 test points and an equal number of training examples. The experiments were performed with
20 different realizations and the results were averaged. As the dimensionality grows, the original
nearest neighbor performance degrades because of the high dimensionality. However, we see that
the proposed local metric substantially outperforms the discriminative nearest neighbor performance in a high dimensional space. We note that this example is ideal for GLML, and it shows
much improvement compared to the other methods.
The other experiments consist of the following benchmark datasets: UCI machine learning repository^4 datasets (Ionosphere, Wine), and the IDA benchmark repository^5 (German, Image, Waveform,
Twonorm). We also used the USPS handwritten digits and the TI46 speech dataset. For the USPS
data, we resized the images to 8 × 8 pixels and trained on the 64-dimensional pixel vector data. For the TI46 dataset, the examples consist of spoken sounds pronounced by 8 different men and 8 different women. We chose the pronunciation of ten digits ("zero" to "nine"), and performed a 10-class
digit classification task. 10 different filters in the Fourier domain were used as features to preprocess
the acoustic data. The experiments were done on 20 data sampling realizations for Twonorm and
TI46, 10 for USPS, 200 for Wine, and 100 for the others.
Except for the synthetic data in Fig. 3(a), the datasets contain various numbers of training data per class. The value of the regularization parameter λ is chosen by cross-validation on a subset of the training data, then fixed for testing. The covariance matrix of the learned Gaussian distributions is also regularized by setting Σ = Σ̂ + γI, where Σ̂ is the estimated covariance. The parameter γ is set prior to each experiment.
From the results shown in Fig. 3, our local metric algorithm generally outperforms most of the other
metrics across most of the datasets. On quite a number of datasets, many of the other methods
do not outperform the original Euclidean nearest neighbor classifier. This is because on some of
these datasets, performance cannot be improved using a global metric. On the other hand, the local
metric derived from simple Gaussian distributions always shows a performance gain over the naive
nearest neighbor classifier. In contrast, using Bayes rule with these simple Gaussian generative
models often results in very poor performance. The computational time using a local metric is also
very competitive, since the underlying SDP optimization has a simple spectral solution. This is in
contrast to other methods which numerically solve for a global metric using an SDP over the data
points.
7 Conclusions
In our study, we showed how a local metric for nearest neighbor classification can be learned using
generative models. Our experiments show improvement over competitive methods on a number
of experimental datasets. The learning algorithm is derived from an analysis of the asymptotic
performance of the nearest neighbor classifier, such that the optimal metric minimizes the bias of the
expected performance of the classifier. This connection to generative models is very powerful, and
can easily be extended to include missing data, one of the large advantages of generative models
^1 http://userweb.cs.utexas.edu/~pjain/itml/
^2 http://code.google.com/p/boosting/
^3 http://www.cse.wustl.edu/~kilian/Downloads/LMNN.html
^4 http://archive.ics.uci.edu/ml/
^5 http://www.fml.tuebingen.mpg.de/Members/raetsch/benchmark
[Figure 3: Nine panels of classification performance for NN, GLML, ITML, BM, LMNN, and the Fisher kernel, plotted against the number of training data (against the input dimensionality in panel (a)): (a) Synthetic, (b) Ionosphere, (c) German, (d) Image, (e) Waveform, (f) Twonorm, (g) Wine, (h) USPS 8×8, (i) TI46.]
Figure 3: (a) Gaussian synthetic data with different dimensionality. As the number of dimensions gets large, most methods degrade except GLML and LMNN, and GLML continues to improve vastly over other methods. (b)-(h) are the experiments on benchmark datasets, varying the number of training data per class. (i) TI46 is the speech dataset pronounced by 8 men and 8 women. The Fisher kernel and BM are omitted for (f)-(i) and (h)-(i) respectively, since their performances are much worse than the naive nearest neighbor classifier.
in machine learning. Here we used simple Gaussians for the generative models, but this could be
also easily extended to include other possibilities such as mixture models, hidden Markov models,
or other dynamic generative models.
The kernelization of this work is straightforward, and the extension to the k-nearest neighbor setting
using the theoretical distribution of k-th nearest neighbors is an interesting future direction. Another
possible future avenue of work is to combine dimensionality reduction and metric learning using
this framework.
Acknowledgments
This research was supported by National Research Foundation of Korea (2010-0017734, 2010-0018950, 3142008-1-D00377) and by the MARS (KI002138) and BK-IT Projects.
References
[1] B. Alipanahi, M. Biggs, and A. Ghodsi. Distance metric learning vs. Fisher discriminant analysis. In Proceedings of the 23rd National Conference on Artificial Intelligence, pages 598-603, 2008.
[2] K. Das and Z. Nenadic. Approximate information discriminant analysis: A computationally simple heteroscedastic feature extraction technique. Pattern Recognition, 41(5):1548-1557, 2008.
[3] J.V. Davis, B. Kulis, P. Jain, S. Sra, and I.S. Dhillon. Information-theoretic metric learning. In Proceedings of the 24th International Conference on Machine Learning, pages 209-216, 2007.
[4] R.O. Duda, P.E. Hart, and D.G. Stork. Pattern Classification (2nd Edition). Wiley-Interscience, 2000.
[5] A. Frome, Y. Singer, and J. Malik. Image retrieval and classification using local distance functions. In Advances in Neural Information Processing Systems 18, pages 417-424, 2006.
[6] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, San Diego, CA, 1990.
[7] K. Fukunaga and T.E. Flick. The optimal distance measure for nearest neighbour classification. IEEE Transactions on Information Theory, 27(5):622-627, 1981.
[8] K. Fukunaga and T.E. Flick. An optimal global nearest neighbour measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6(3):314-318, 1984.
[9] A. Globerson and S. Roweis. Metric learning by collapsing classes. In Advances in Neural Information Processing Systems 18, pages 451-458, 2006.
[10] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. In Advances in Neural Information Processing Systems 17, pages 513-520, 2005.
[11] M. N. Goria, N. N. Leonenko, V. V. Mergel, and P. Inverardi. A new class of random vector entropy estimators and its applications in testing statistical hypotheses. Journal of Nonparametric Statistics, 17(3):277-297, 2005.
[12] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Advances in Neural Information Processing Systems 11, pages 487-493, 1998.
[13] J.N. Kapur. Measures of Information and Their Applications. John Wiley & Sons, New York, NY, 1994.
[14] S. Lacoste-Julien, F. Sha, and M. Jordan. DiscLDA: Discriminative learning for dimensionality reduction and classification. In Advances in Neural Information Processing Systems 21, pages 897-904, 2009.
[15] J.A. Lasserre, C.M. Bishop, and T.P. Minka. Principled hybrids of generative and discriminative models. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 87-94, 2006.
[16] M. Loog and R.P.W. Duin. Linear dimensionality reduction via a heteroscedastic extension of LDA: The Chernoff criterion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(6):732-739, 2004.
[17] Z. Nenadic. Information discriminant analysis: Feature extraction with an information-theoretic objective. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(8):1394-1407, 2007.
[18] A.Y. Ng and M.I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In Advances in Neural Information Processing Systems 14, pages 841-848, 2001.
[19] F. Perez-Cruz. Estimation of information theoretic measures for continuous random variables. In Advances in Neural Information Processing Systems 21, pages 1257-1264, 2009.
[20] R. Raina, Y. Shen, A.Y. Ng, and A. McCallum. Classification with hybrid generative/discriminative models. In Advances in Neural Information Processing Systems 16, pages 545-552, 2004.
[21] C. Shen, J. Kim, L. Wang, and A. van den Hengel. Positive semidefinite metric learning with boosting. In Advances in Neural Information Processing Systems 22, pages 1651-1659, 2009.
[22] N. Singh-Miller and M. Collins. Learning label embeddings for nearest-neighbor multi-class classification with an application to speech recognition. In Advances in Neural Information Processing Systems 22, pages 1678-1686, 2009.
[23] D. Tran and A. Sorokin. Human activity recognition with metric learning. In Proceedings of the 10th European Conference on Computer Vision, pages 548-561, 2008.
[24] Q. Wang, S. R. Kulkarni, and S. Verdú. A nearest-neighbor approach to estimating divergence between continuous random vectors. In Proceedings of the IEEE International Symposium on Information Theory, pages 242-246, 2006.
[25] Q. Wang, S. R. Kulkarni, and S. Verdú. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. IEEE Transactions on Information Theory, 55(5):2392-2405, 2009.
[26] K. Weinberger, J. Blitzer, and L. Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems 18, pages 1473-1480, 2006.
| 4040 |@word kulis:1 determinant:1 briefly:1 duda:1 nd:1 d2:2 decomposition:1 covariance:5 q1:1 tr:11 reduction:15 contains:2 nohyung:1 daniel:1 bhattacharyya:2 outperforms:2 comparing:2 ida:1 com:1 goldberger:1 dx:11 john:1 cruz:1 v:2 generative:37 intelligence:4 website:1 isotropic:1 mccallum:1 hypersphere:2 provides:1 boosting:2 cse:1 location:1 simpler:1 along:2 dn:14 direct:1 c2:2 symposium:1 combine:1 interscience:1 introduce:1 manner:1 theoretically:1 x0:21 pairwise:2 upenn:2 expected:9 behavior:1 p1:25 mpg:1 sdp:4 multi:1 salakhutdinov:1 lmnn:4 decomposed:1 curse:1 provided:1 btzhang:1 underlying:3 project:1 estimating:1 minimizes:4 q2:1 spoken:1 finding:2 transformation:2 guarantee:1 multidimensional:1 expands:1 p2j:1 classifier:14 k2:1 before:1 positive:5 understood:6 local:23 treat:1 limit:5 analyzing:1 chose:1 twice:2 downloads:1 heteroscedastic:2 averaged:1 acknowledgment:1 globerson:1 testing:2 definite:5 digit:3 empirical:1 significantly:1 projection:1 word:1 wustl:1 get:1 cannot:2 close:1 www:2 conventional:2 equivalent:1 dz:4 maximizing:1 missing:1 straightforward:2 convex:2 focused:2 resolution:1 qc:2 shen:2 contradiction:2 rule:1 estimator:1 haussler:1 classic:1 notion:1 coordinate:1 diego:1 pjain:1 programming:2 edn:4 us:1 verd:2 hypothesis:1 pa:1 satisfying:2 approximated:1 utilized:1 continues:1 recognition:5 tributions:1 labeled:3 role:1 solved:1 capture:1 wang:3 connected:1 kilian:1 principled:1 dynamic:1 geodesic:1 motivate:1 trained:1 singh:1 purely:4 upon:3 usps:4 biggs:1 easily:2 differently:1 various:4 tx:1 represented:1 derivation:1 separated:1 distinct:1 forced:1 describe:1 jain:1 artificial:1 dichotomy:1 pronunciation:1 quite:2 apparent:1 heuristic:1 plausible:1 solve:1 say:1 compensates:1 statistic:1 knearest:1 advantage:2 differentiable:1 eigenvalue:5 lee1:1 analytical:1 tran:1 remainder:1 adaptation:1 uci:2 realization:2 roweis:2 pronounced:2 exploiting:2 sea:2 converges:1 pi2:1 derive:4 blitzer:1 ac:1 nearest:50 eq:9 p2:21 c:1 frome:1 implies:1 direction:6 waveform:2 radius:1 filter:1 shrunk:2 human:1 extension:5 accompanying:1 practically:1 zhang2:1 considered:4 ic:1 omitted:1 wine:3 estimation:2 label:7 utexas:1 largest:1 clearly:1 gaussian:10 always:1 resized:1 varying:2 jaakkola:2 derived:4 focus:1 improvement:5 rank:1 mainly:1 contrast:3 kim:1 dim:1 dependent:4 nn:1 inaccurate:1 typically:3 hidden:1 tak:1 expand:1 transformed:2 interested:1 pixel:2 issue:2 classification:27 flexible:1 among:1 aforementioned:1 html:1 spatial:1 special:5 art:1 mutual:2 equal:1 having:3 extraction:2 sampling:9 chernoff:1 ng:2 look:2 future:3 minimized:1 report:1 others:1 employ:1 neighbour:2 national:3 divergence:11 intended:1 attempt:1 highly:2 possibility:1 grasp:1 mixture:1 semidefinite:1 pc:5 perez:1 byproduct:1 korea:2 orthogonal:1 euclidean:2 plotted:1 theoretical:12 minimal:1 earlier:1 d2n:5 deviation:2 subset:1 itml:3 reported:1 dependency:1 varies:1 synthetic:5 density:2 international:2 sensitivity:1 twonorm:3 enhance:1 together:1 squared:2 vastly:1 containing:2 choose:1 woman:2 collapsing:1 worse:1 derivative:1 leading:1 de:1 later:1 try:5 performed:3 lab:2 loog:1 alipanahi:1 competitive:2 bayes:3 minimize:2 square:1 accuracy:1 variance:1 maximized:1 miller:1 preprocess:1 repository4:1 handwritten:1 accurately:1 j6:3 explain:3 llt:2 sharing:1 minka:1 associated:1 riemannian:1 gain:1 dataset:4 knowledge:3 dimensionality:21 improves:1 organized:1 appears:3 originally:1 supervised:1 maximally:1 improved:2 formulation:1 done:1 though:1 shrink:1 mar:1 hand:2 
google:1 logistic:1 lda:1 grows:1 usa:1 effect:2 true:1 analytically:1 regularization:3 reformulating:1 dhillon:1 mahalanobis:1 davis:1 criterion:5 trying:1 theoretic:6 meaning:1 image:4 common:2 stork:1 discussed:2 interpretation:1 numerically:1 raetsch:1 rd:8 fml:1 multiway:1 surface:1 posterior:4 recent:5 showed:2 perspective:2 optimizing:1 seen:1 employed:1 exn:6 maximize:2 semi:3 relates:1 full:1 ii:1 sound:1 reduces:2 academic:1 believed:1 cross:1 retrieval:1 hart:1 ellipsis:1 laplacian:2 regression:1 vision:2 metric:84 expectation:10 kernel:6 c1:2 whereas:1 addition:1 crucial:2 appropriately:1 biased:2 unlike:1 archive:1 member:1 contrary:1 jordan:2 near:1 ideal:1 enough:1 embeddings:1 mergel:1 pennsylvania:1 reduce:1 avenue:1 expression:2 speech:3 hessian:3 nine:1 york:1 flick:2 ignored:1 dramatically:1 generally:1 eigenvectors:4 nonparametric:1 locally:2 ten:1 reduced:1 http:5 inverardi:1 outperform:2 estimated:1 arising:1 per:2 discrete:1 goria:1 pj:2 kept:1 lacoste:1 sum:2 parameterized:1 powerful:1 aopt:7 separation:6 disclda:1 entirely:1 bound:1 datum:4 display:1 quadratic:1 replaces:1 sorokin:1 activity:1 duin:1 ghodsi:1 x2:4 ti46:5 aspect:1 fourier:1 fukunaga:4 min:1 leonenko:1 performing:1 according:2 p1p1:2 poor:1 belonging:1 byoung:1 smaller:1 across:1 son:1 snu:1 den:1 invariant:2 computationally:2 remains:1 german:2 singer:1 know:1 gaussians:3 apply:1 appropriate:3 generic:1 spectral:1 neighbourhood:1 alternative:1 weinberger:1 rp:1 original:2 assumes:1 remaining:1 include:4 society:1 unchanged:1 move:1 malik:1 objective:1 quantity:1 parametric:2 degrades:1 dependence:1 sha:1 traditional:2 diagonal:1 surrogate:1 enhances:1 exhibit:1 gradient:2 distance:24 separate:1 p22:1 separating:2 degrade:1 manifold:1 argue:1 considers:1 discriminant:4 tuebingen:1 reason:1 toward:1 assuming:1 code:1 relationship:4 ratio:4 minimizing:4 difficult:1 unfortunately:1 negative:1 rise:1 implementation:1 unknown:2 upper:1 datasets:12 markov:1 benchmark:4 finite:11 kapur:1 situation:1 kyun:1 extended:5 looking:1 hinton:1 varied:2 bk:1 kl:2 optimized:2 connection:1 acoustic:1 learned:5 boost:1 pattern:7 regime:1 including:1 misclassification:1 hybrid:3 regularized:1 residual:3 raina:1 improve:6 julien:1 disappears:1 naive:3 philadelphia:1 deviate:1 prior:2 understanding:1 determining:1 asymptotic:14 men:2 interesting:1 foundation:1 downloaded:1 sufficient:1 principle:1 share:1 pi:2 supported:1 dis:1 bias:20 neighbor:46 saul:1 taking:1 van:1 dimension:2 xn:13 hengel:1 author:1 qualitatively:1 projected:1 san:1 bm:3 transaction:5 approximate:2 keep:2 ml:1 global:6 conclude:1 assumed:2 discriminative:23 xi:1 continuous:2 why:1 lasserre:1 learn:2 ca:1 sra:1 necessarily:1 european:1 domain:1 da:1 linearly:1 motivation:1 arise:1 edition:1 x1:4 fig:5 en:4 ddlee:1 ny:1 wiley:2 learns:1 bishop:1 showing:1 ionosphere:2 consist:3 kr:1 kx:1 margin:2 entropy:1 lt:2 simply:2 glml:5 yung:1 scalar:1 binding:1 determines:1 conditional:1 formulated:1 replace:2 fisher:8 change:2 infinite:5 except:2 reducing:3 averaging:2 called:1 experimental:2 attempted:1 seoul:2 arises:1 collins:1 dissimilar:1 kulkarni:2 p21:1 kernelization:1 ex:1 |
3,359 | 4,041 | Learning from Candidate Labeling Sets
Francesco Orabona
DSI, Università degli Studi di Milano
[email protected]
Luo Jie
Idiap Research Institute and EPF Lausanne
[email protected]
Abstract
In many real world applications we do not have access to fully-labeled training
data, but only to a list of possible labels. This is the case, e.g., when learning visual
classifiers from images downloaded from the web, using just their text captions or
tags as learning oracles. In general, these problems can be very difficult. However
most of the time there exist different implicit sources of information, coming from
the relations between instances and labels, which are usually dismissed. In this
paper, we propose a semi-supervised framework to model this kind of problems.
Each training sample is a bag containing multi-instances, associated with a set
of candidate labeling vectors. Each labeling vector encodes the possible labels
for the instances in the bag, with only one being fully correct. The use of the
labeling vectors provides a principled way not to exclude any information. We
propose a large margin discriminative formulation, and an efficient algorithm to
solve it. Experiments conducted on artificial datasets and a real-world images and
captions dataset show that our approach achieves performance comparable to an
SVM trained with the ground-truth labels, and outperforms other baselines.
1 Introduction
In standard supervised learning, each training sample is associated with a label, and the classifier is
usually trained through the minimization of the empirical risk on the training set. However, in many
real world problems we are not always so lucky. Partial data, noise, missing labels and other similar
common issues can make you deviate from this ideal situation, moving the learning scenario from
supervised learning to semi-supervised learning [7, 26].
In this paper, we investigate a special kind of semi-supervised learning which considers ambiguous
labels. In particular each training example is associated with several possible labels, among which
only one is correct. Intuitively this problem can be arbitrarily hard in the worst case scenario.
Consider the case when one noisy label is consistently appearing together with the true label: in this
situation we could not tell them apart. Despite that, learning could still be possible in many typical
real world scenarios. Moreover, in real problems samples are often gathered in groups, and the
intrinsic nature of the problem could be used to constrain the possible labels for the samples from
the same group. For example, we might have that two labels can not appear together in the same
group or a label can appear only once in each group, as, for example, a specific face in an image.
Inspired by these scenarios, we focus on the general case where we have bags of instances, with
each bag associated with a set of several possible labeling vectors, and among them only one is fully
correct. Each labeling vector consists of labels for each corresponding instance in the bag. For easy
reference, we call this type of learning problem a Candidate Labeling Set (CLS) problem.
As labeled data is usually expensive and hard to obtain, CLS problems naturally arise in many
real world tasks. For example, in computer vision and information retrieval domains, photographs
collections with tags have motivated the studies on learning from weakly annotated images [2], as
each image (bag) can be naturally partitioned into several patches (instances), and one could assume
that each tag should be associated with at least one patch. High-level knowledge, such as spatial
correlations (e.g. "sun in sky" and "car on street"), have been explored to prune down the labeling
possibilities [14]. Another similar task is to learn a face recognition system from images gathered
from news websites or videos, using the associated text captions and video scripts [3, 8, 16, 13].
These works use different approaches to integrate the constraints, such as that two faces in one
image could not be associated with the same name [3], mouth motion and gender of the person [8],
or modeling both names and action verbs jointly [16]. Another problem is the multiple annotators
scenario, where each data is associated with the labels given by independently hired annotators. The
annotators can disagree on the data and the aim is to recover the true label of each sample. All these
problems can be naturally casted into the CLS framework.
The contribution of this paper is a new formal way to cast the CLS setup into a learning problem.
We also propose a large margin formulation and an efficient algorithm to solve it. The proposed
Maximum Margin Set learning (MMS) algorithm can scale to datasets of the order of 10^5 instances,
reaching performances comparable to fully-supervised learning algorithms.
Related works. This type of learning problem dates back to the work of Grandvalet in [12]. Later
Jin and Ghahramani [17] formalized it and proposed a general framework for discriminative models.
Our work is also closely related to the ambiguous labeling problem presented in [8, 15]. Our framework generalizes them, to the cases where instances and possible labels come in the form of bags.
This particular generalization gives us a principled way for using different kinds of prior knowledge
on instances and labels correlation, without hacking the learning algorithm. More specifically, prior
knowledge, such as pairwise constraints [21] and mutual exclusiveness of some labels, can be easily
encoded in the labeling vectors. Although several works have focused on integrating these weakly
labeled information that are complementary to the labeled or unlabeled training data into existing
algorithms, these approaches are usually computational expensive. On the other hand, in our framework we have the opposite behavior: the more prior knowledge we exploit to construct the candidate
set, the better the performance and the faster the algorithm will be.
Other lines of research which are related to this paper are multiple-instance learning (MIL) problems [1, 5, 10], and multi-instance multi-label learning (MIML) problems [24, 25] which extends the
binary MIL setup to multi-labels scenario. In both setups, several instances are grouped into bags,
and their labels are not individually given but assigned to the bags directly. However, contrary to
our framework, in MIML noisy labeling is not allowed. In other words, all the labels being assigned
to the bags are assumed to be true. Moreover, current MIL and MIML algorithms usually rely on
a ?key? instance in the bag [1] or they transform each bag into single instance representation [25],
while our algorithm makes an explicit effort to label every instance in a bag and to consider all of
them during learning. Hence, it has a clear advantage in problems where the bags are dense in labeled instances and instances in the same bag are independent, as opposed to the cases when several
instances jointly represent a label. Our algorithm is also related to Latent Structural SVMs [22],
where the correct labels could be considered as latent variables.
2 Learning from Candidate Labeling Sets
Preliminaries. In this section, we formalize the CLS setting, which is a generalization of the
ambiguous labeling problem described in [17] from single instances to bags of instances.
In the following we denote vectors by bold letters, e.g. w, y, and use calligraphic font for sets, e.g.,
X. In the CLS setting, the N training data are provided in the form {X_i, Z_i}_{i=1}^N, where X_i is a bag of M_i instances, X_i = {x_{i,m}}_{m=1}^{M_i}, and x_{i,m} ∈ R^d, for all i = 1, ..., N and m = 1, ..., M_i. The associated set of L_i candidate labeling vectors is Z_i = {z_{i,l}}_{l=1}^{L_i}, where z_{i,l} ∈ Y^{M_i}, and Y = {1, ..., C}. In
other words there are Li different combinations of Mi labels for the Mi instances in the i-th bag.
We assume that the correct labeling vector for Xi is present in Zi , while the other labeling vectors
may be partially correct or even completely wrong. It is important to point out that this assumption
is not equivalent to just associating Li candidate labels to each instance. In fact, in this way we also
encode explicitly the correlations between instances and their labels in a bag. For example, consider
a two instances bag {xi,1 , xi,2 }: if it is known that they can only come from classes 1 and 2, and
they can not share the same label, then zi,1 = [1, 2], zi,2 = [2, 1] will be the candidate labeling
vectors for this bag, while the other possibilities are excluded from the labeling set. In the following
we will assume that the labeling set Zi is given with the training set. In Section 4.2 we will give a
practical example on how to construct this set using the prior knowledge on the task.
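As a simple sketch of how such prior knowledge prunes the candidate set (our code, using the two-instance example just given):

```python
from itertools import product

def candidate_labelings(bag_size, labels, constraints=()):
    """Enumerate labeling vectors for a bag, keeping those that satisfy
    every constraint (each constraint maps a labeling vector to a bool)."""
    return [z for z in product(labels, repeat=bag_size)
            if all(ok(z) for ok in constraints)]

# Two instances, possible classes {1, 2}, with the prior that the two
# instances cannot share the same label:
distinct = lambda z: len(set(z)) == len(z)
Z_i = candidate_labelings(2, [1, 2], [distinct])   # [(1, 2), (2, 1)]
```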
Given the training data {X_i, Z_i}_{i=1}^N, we want to learn a function f(x) to correctly predict the class of each single instance x coming from the same distribution. The problem would become standard multiclass supervised learning if there were only one labeling vector in every labeling set Z_i, i.e. L_i = 1. On the other hand, given a set of C labels and without any prior knowledge, a bag of M_i instances could have at most C^{M_i} labeling vectors, which turns this into a clustering problem. However, we are more interested in situations where L_i ≪ C^{M_i}.
2.1 Large-margin formulation
We introduce here a large-margin formulation to solve the CLS problem. It is helpful to first define by X the generic bag of M instances {x_1, ..., x_M}, by Z = {z_1, ..., z_L} the generic set of candidate labeling vectors, and by y = {y_1, ..., y_M}, z = {z_1, ..., z_M} ∈ 𝒴^M two labeling vectors.
We start by introducing the loss function that assumes the true label y_m of each instance x_m is known:

Δ_Σ(z, y) = Σ_{m=1}^M Δ(z_m, y_m) ,   (1)

where Δ(z_m, y_m) is a non-negative loss function measuring how much we pay for having predicted z_m instead of y_m. For example, Δ(z_m, y_m) can be defined as 1(z_m ≠ y_m), where 1 is the indicator function. Hence, if the vector z is the predicted label for the bag, Δ_Σ(z, y) simply counts the number of misclassified instances in the bag.
However, the true labels are unknown, and we only have access to the set Z, knowing that the true labeling vector is in Z. So we use a proxy of this loss function, and propose its ambiguous version:

Δ_Σ^A(z, Z) = min_{z′∈Z} Δ_Σ(z, z′) .

We also define, with a small abuse of notation, Δ_Σ^A(X, Z; f) = Δ_Σ^A(f(X), Z), where f(X) returns a labeling vector consisting of one label for each instance in the bag X. Obviously this loss underestimates the true loss. Nevertheless, we can easily extend [8, Propositions 3.1 to 3.3] to the bag case, and prove that Δ_Σ^A/(1−ε) is an upper bound on Δ_Σ in expectation, where ε is a factor between 0 and 1 whose value depends on the hardness of the problem. As in the definition in [8], ε corresponds to the maximum probability of an extra label co-occurring with the true label, over all labels and instances. Hence, by minimizing the ambiguous loss we are actually minimizing an upper bound of the true loss. Direct minimization of this loss is a known hard problem, so in the following we introduce another loss that upper bounds Δ_Σ^A and can be minimized efficiently.
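To make these definitions concrete, here is a minimal sketch (with the indicator loss for Δ; the helper names are ours, not from the paper) that evaluates the bag loss (1) and its ambiguous version:

```python
import numpy as np

def bag_loss(z, y):
    # Eq. (1) with the indicator loss: number of misclassified instances.
    return int(np.sum(np.asarray(z) != np.asarray(y)))

def ambiguous_loss(z, Z):
    # Ambiguous loss: bag loss against the closest candidate labeling vector.
    return min(bag_loss(z, z_cand) for z_cand in Z)

# A 3-instance bag with two candidate labeling vectors.
Z = [[1, 2, 1], [2, 1, 1]]
prediction = [1, 2, 2]      # f(X): one predicted label per instance
print(ambiguous_loss(prediction, Z))  # -> 1: closest candidate is [1, 2, 1]
```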
We assume that the prediction function f(x) we are searching for is equal to arg max_{y∈𝒴} F(x, y). In this framework we can interpret the value of F(x, y) as the confidence of the classifier in assigning x to the class y. We also assume the standard linear model used in supervised multiclass learning [9]. In particular, the function F(x, y) is set to be w · Φ(x) ⊗ Λ(y), where Φ and Λ are the feature and label space mappings [20], and ⊗ is the Kronecker product¹. We can now define F(X, y; w) = Σ_{m=1}^M F(x_m, y_m), which intuitively gathers from each instance in X the confidence on the labels in y. With the definitions above, we can rewrite the function F as

F(X, y; w) = Σ_{m=1}^M F(x_m, y_m) = Σ_{m=1}^M w · Φ(x_m) ⊗ Λ(y_m) = w · Ψ(X, y) ,   (2)

where we defined Ψ(X, y) = Σ_{m=1}^M Φ(x_m) ⊗ Λ(y_m). Hence the function F can be seen as the scalar product between w and a joint feature map between the bag X and the labeling vector y.

Remark. If prior probabilities of the candidate labeling vectors z_l ∈ Z are also available, they can be incorporated by slightly modifying the feature mapping scheme in (2).
We can now introduce the following loss function:

ℓ_max(X, Z; w) = | max_{ẑ∉Z} { Δ_Σ^A(ẑ, Z) + F(X, ẑ; w) } − max_{z∈Z} F(X, z; w) |_+ ,   (3)

where |x|_+ = max(0, x). The following proposition shows that ℓ_max upper bounds Δ_Σ^A.
¹ For simplicity we omit the bias term here; it can easily be added by modifying the feature mapping.
Proposition. ℓ_max(X, Z; w) ≥ Δ_Σ^A(X, Z; w).

Proof. Define z* = arg max_{z∈𝒴^M} F(X, z; w). If z* ∈ Z then ℓ_max(X, Z; w) ≥ Δ_Σ^A(X, Z; w) = 0. We now consider the case in which z* ∉ Z. We have that

Δ_Σ^A(X, Z; w) ≤ Δ_Σ^A(z*, Z) + F(X, z*; w) − max_{z∈Z} F(X, z; w)
  ≤ max_{ẑ∉Z} { Δ_Σ^A(ẑ, Z) + F(X, ẑ; w) } − max_{z∈Z} F(X, z; w) ≤ ℓ_max(X, Z; w) .   □
The loss ℓ_max is non-convex, due to the second max(·) function inside, but in Section 3 we will introduce an algorithm to minimize it efficiently.
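For intuition, when C^M is small the loss (3) can be evaluated by brute-force enumeration. The sketch below is illustrative only (0-indexed labels; it is not the dynamic-programming search discussed in Section 3, and it assumes Z does not already contain all of 𝒴^M):

```python
import itertools

def loss_max(scores, Z, C):
    """Brute-force evaluation of the loss in (3).

    scores[m][y] is F(x_m, y); the bag score of a labeling vector z is
    sum_m scores[m][z[m]].  Z is the candidate labeling set, C the number
    of classes.  Naive O(C^M) enumeration, for illustration only.
    """
    M = len(scores)
    Z_set = {tuple(z) for z in Z}

    def bag_score(z):
        return sum(scores[m][z[m]] for m in range(M))

    def amb_loss(z):
        # Hamming distance to the closest candidate labeling vector.
        return min(sum(a != b for a, b in zip(z, zc)) for zc in Z_set)

    best_in = max(bag_score(z) for z in Z_set)
    best_out = max(amb_loss(z) + bag_score(z)
                   for z in itertools.product(range(C), repeat=M)
                   if z not in Z_set)
    return max(0.0, best_out - best_in)
```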
2.2 A probabilistic interpretation

It is possible to gain additional intuition on the proposed loss function ℓ_max through a probabilistic interpretation of the problem. It is helpful to look at the discriminative model for supervised learning first, where the goal is to learn the model parameters θ of the function P(y|x; θ) from a predefined modeling class Θ. Instead of directly maximizing the log-likelihood of the training data, an alternative is to maximize the log-likelihood ratio between the correct label and the most likely incorrect one [9]. In the CLS setting, on the other hand, the correct labeling vector for X is unknown, but it is known to be a member of the candidate set Z. Hence we could maximize the log-likelihood ratio between P(Z|X; θ) and the most likely incorrect labeling vector which is not a member of Z (denoted ẑ). However, the correlations between different vectors in Z are not known, so the inference could be arbitrarily hard. Instead, we can approximate the problem by considering just the most likely correct member of Z. It is easily verified that max_{z∈Z} P(z|X; θ) is a lower bound on P(Z|X; θ). The learning problem becomes to minimize the ratio for the bag:

− log [ P(Z|X; θ) / max_{ẑ∉Z} P(ẑ|X; θ) ] ≤ − log [ max_{z∈Z} P(z|X; θ) / max_{ẑ∉Z} P(ẑ|X; θ) ] .   (4)
If we assume independence between the instances in the bag, (4) can be factorized as:

− log [ max_{z∈Z} Π_m P(z_m|x_m; θ) / max_{ẑ∉Z} Π_m P(ẑ_m|x_m; θ) ]
  = max_{ẑ∉Z} Σ_m log P(ẑ_m|x_m; θ) − max_{z∈Z} Σ_m log P(z_m|x_m; θ) .

If we take the margin into account, and assume a linear model for the log-posterior-likelihood, we obtain the loss function in (3).
3 MMS: The Maximum Margin Set Learning Algorithm

Using the square-norm regularizer as in the SVM and the loss function in (3), we obtain the following optimization problem for the CLS learning problem:

min_w  (λ/2)·‖w‖₂² + (1/N)·Σ_{i=1}^N ℓ_max(X_i, Z_i; w)   (5)
This optimization problem (5) is non-convex due to the non-convex loss function (3). To convexify the problem, one could approximate the second max(·) in (3) with the average over all the labeling vectors in Z_i; similar strategies have been used in several analogous problems [8, 24]. However, this approximation can be very loose if the number of labeling vectors is large. Fortunately, although the loss function is not convex, it can be decomposed into a convex and a concave part. Thus the problem can be solved using the constrained concave-convex procedure (CCCP) [19, 23].
3.1 Optimization using the CCCP algorithm

The CCCP solves the optimization problem using an iterative minimization process. At each round r, given an initial w^(r), the CCCP replaces the concave part of the objective function with its first-order Taylor expansion at w^(r), and then sets w^(r+1) to the solution of the relaxed optimization problem. When this function is non-smooth, such as max_{z∈Z_i} F(X_i, z; w) in our formulation, the gradient in the Taylor expansion must be replaced by a subgradient². Thus, at the r-th round, the

² Given a function g, a subgradient ∂g(x) at x satisfies g(u) − g(x) ≥ ∂g(x)·(u − x) for all u. The set of all subgradients of g at x is called the subdifferential of g at x.
CCCP replaces max_{z∈Z_i} F(X_i, z; w) in the loss function by

max_{z∈Z_i} F(X_i, z; w^(r)) + (w − w^(r)) · ∂ max_{z∈Z_i} F(X_i, z; w) .   (6)

The subgradient of a point-wise maximum function g(x) = max_i g_i(x) is the convex hull of the union of the subdifferentials of the subset of functions g_i(x) which equal g(x) [4]. Defining C_i^(r) = { z ∈ Z_i : F(X_i, z; w^(r)) = max_{z′∈Z_i} F(X_i, z′; w^(r)) }, the subgradient of the function max_{z∈Z_i} F(X_i, z; w) equals Σ_l β_{i,l}^(r) ∂F(X_i, z_{i,l}; w) = Σ_l β_{i,l}^(r) Ψ(X_i, z_{i,l}), with Σ_l β_{i,l}^(r) = 1, β_{i,l}^(r) ≥ 0 if z_{i,l} ∈ C_i^(r) and β_{i,l}^(r) = 0 otherwise. Hence we have

Σ_{l: z_{i,l}∈C_i^(r)} β_{i,l}^(r) · w^(r) · Ψ(X_i, z_{i,l}) = max_{z∈Z_i} w^(r) · Ψ(X_i, z) · Σ_l β_{i,l}^(r) = max_{z∈Z_i} w^(r) · Ψ(X_i, z) .

We are free to choose the values of the β_{i,l}^(r) within the convex hull; here we set β_{i,l}^(r) = 1/|C_i^(r)| for all z_{i,l} ∈ C_i^(r). Using (6), the new loss function becomes

ℓ_cccp^(r)(X_i, Z_i; w) = | max_{ẑ∉Z_i} { Δ_Σ^A(ẑ, Z_i) + w · Ψ(X_i, ẑ) } − w · (1/|C_i^(r)|)·Σ_{z∈C_i^(r)} Ψ(X_i, z) |_+ .   (7)
Replacing the non-convex loss ℓ_max in (5) with (7), the relaxed convex optimization program at the r-th round of the CCCP is

min_w  (λ/2)·‖w‖₂² + (1/N)·Σ_{i=1}^N ℓ_cccp^(r)(X_i, Z_i; w)   (8)

With our choice of β_{i,l}^(r), in the first round of the CCCP, when w is initialized at 0, the second max(·) in (3) is approximated by the average over all the labeling vectors. The CCCP algorithm is guaranteed to decrease the objective function and converges to a local minimum of (5) [23].
3.2 Solving the convex optimization problem using the Pegasos framework

In order to solve the relaxed convex optimization problem (8) efficiently at each round of the CCCP, we designed a stochastic subgradient descent algorithm within the Pegasos framework developed in [18]. At each step the algorithm draws K random samples from the training set and computes an estimate of the subgradient of the objective function on these samples. It then performs a subgradient descent step with a decreasing learning rate, followed by a projection of the solution onto the ball in which the optimal solution lives. An upper bound on the radius of this ball can be calculated by considering that

(λ/2)·‖w*‖₂² ≤ min_w { (λ/2)·‖w‖₂² + (1/N)·Σ_{i=1}^N ℓ_cccp^(r)(X_i, Z_i; w) } ≤ B ,

where w* is the optimal solution of (8), and B = max_i ℓ_cccp^(r)(X_i, Z_i; 0). If we use Δ(z_m, y_m) = 1(z_m ≠ y_m) in (7), B equals the maximum number of instances in the bags. The details of the Pegasos algorithm for solving (8) are given in Algorithm 2. Using the theorems in [18] it is easy to show that after Õ(1/(λε)) iterations Algorithm 2 converges in expectation to a solution of accuracy ε.
Efficient implementation. Note that even if we solve the problem in the primal, we can still use nonlinear kernels without computing the nonlinear mapping Φ(x) explicitly. Since the implementation method is similar to the one described in [18, Section 4], we omit the details for lack of space. Greedily searching for the most violating labeling vector ẑ_k in line 4 of Algorithm 2 can be computationally expensive. Dynamic programming can be used to reduce the computational cost, since the contribution of each instance is additive over different labels. Moreover, by exploiting the structure of Z_i, the computational time can be further reduced. In the general situation, the worst-case complexity of searching for the maximum over ẑ ∉ Z_i is O(Π_{m=1}^{M_i} C_{i,m}), where C_{i,m} is the number of unique possible labels for x_{i,m} in Z_i (usually C_{i,m} ≤ L_i). This complexity can be greatly reduced when there are special structures such as graphs and trees in the labeling set. See for example [20, Section 4] for a discussion of some specific problems and special cases.
Algorithm 1 The CCCP algorithm for solving MMS
1: initialize: w^(1) = 0
2: repeat
3:    Set C_i^(r) = { z ∈ Z_i : F(X_i, z; w^(r)) = max_{z′∈Z_i} F(X_i, z′; w^(r)) }
4:    Set w^(r+1) to the solution of the convex optimization problem (8)
5: until convergence to a local minimum
6: output: w^(r+1)
Algorithm 2 Pegasos algorithm for solving Relaxed-MMS (8)
1: Input: w_0, {X_i, Z_i, C_i^(r)}_{i=1}^N, λ, T, K, B
2: for t = 1, 2, ..., T do
3:    Draw at random A_t ⊆ {1, ..., N}, with |A_t| = K
4:    Compute ẑ_k = arg max_{ẑ∉Z_k} Δ_Σ^A(ẑ, Z_k) + w_t · Ψ(X_k, ẑ),  for all k ∈ A_t
5:    Set A_t⁺ = { k ∈ A_t : ℓ_cccp^(r)(X_k, Z_k; w_t) > 0 }
6:    Set w_{t+1/2} = (1 − 1/t)·w_t + (1/(λKt))·Σ_{k∈A_t⁺} [ Σ_{z∈C_k^(r)} Ψ(X_k, z)/|C_k^(r)| − Ψ(X_k, ẑ_k) ]
7:    Set w_{t+1} = min{ 1, √(2B/λ)/‖w_{t+1/2}‖ }·w_{t+1/2}
8: end for
9: Output: w_{T+1}
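The update in lines 6-7 is straightforward to express in code. Below is a minimal sketch of one iteration, assuming explicit joint feature vectors; the helpers `psi`, `most_violating`, and `cccp_loss` are hypothetical stand-ins for Ψ(X, z), the search in line 4, and the loss in (7):

```python
import numpy as np

def pegasos_step(w, t, batch, lam, B, psi, most_violating, cccp_loss):
    """One stochastic subgradient step on the relaxed objective (8).

    batch is a list of (X_k, Z_k, C_k) triples, where C_k is the set of
    currently best-scoring candidates from the current CCCP round.
    """
    eta = 1.0 / (lam * t)                       # decreasing learning rate
    grad = np.zeros_like(w)
    for X_k, Z_k, C_k in batch:
        z_hat = most_violating(w, X_k, Z_k)     # line 4
        if cccp_loss(w, X_k, Z_k, C_k) > 0:     # line 5: active examples only
            avg = sum(psi(X_k, z) for z in C_k) / len(C_k)
            grad += avg - psi(X_k, z_hat)       # line 6: subgradient part
    w = (1.0 - 1.0 / t) * w + eta * grad / len(batch)
    radius = np.sqrt(2.0 * B / lam)             # line 7: project onto the ball
    norm = np.linalg.norm(w)
    if norm > radius:
        w *= radius / norm
    return w
```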
4 Experiments
In order to evaluate the proposed algorithm, we first perform experiments on several artificial datasets created from standard machine learning databases. Finally, we test our algorithm on one of the examples motivating our study: learning a face recognition system from news images weakly annotated by their associated captions. We benchmark MMS against the following baselines:
• SVM: we train a fully-supervised SVM classifier using the ground-truth labels, considering every instance separately and ignoring the other candidate labels. Its performance can be considered an upper bound for the performance achievable with candidate labels. In all our experiments, we use the LIBLINEAR [11] package and test two different multi-class extensions: the 1-vs-All method using L1-loss (1vA-SVM) and the method by Crammer and Singer [9] (MC-SVM).
• CL-SVM: the Candidate Labeling SVM (CL-SVM) is a naive approach which transforms the ambiguously labeled data into a standard supervised representation by treating all possible labels of each instance as true labels. It then learns 1-vs-All SVM classifiers from the resulting dataset, where the negative examples are instances which do not have the corresponding label in their candidate labeling set. A similar baseline has been used in the binary MIL literature [5].
• MIML: we also compare with two SVM-based MIML algorithms³: MIMLSVM [25] and M³MIML [24]. We train the MIML algorithms by treating the labels in Z_i as labels for the bag. During the test phase, we consider each instance separately and predict its label as y = arg max_{y∈𝒴} F_miml(x, y), where F_miml is the obtained classifier and F_miml(x, y) can be interpreted as the confidence of the classifier in assigning the instance x to the class y. We would like to underline that, although some of the experimental setups may favor our algorithm, we include the comparison between MMS and MIML algorithms because, to the best of our knowledge, MIML is the only existing principled framework for modeling instance bags with multiple labels. MIML algorithms may still have their own advantage in scenarios where no prior knowledge is available about the instances within a bag.
³ We used the original implementation at http://lamda.nju.edu.cn/data.ashx#code. We did not compare against MIMLBOOST [25], because it does not scale to all the experiments we conducted. Besides, MIMLSVM [25] does not scale to data with high-dimensional feature vectors (e.g., news20, which has 62,061-dimensional features). Running the MATLAB implementation of M³MIML [24] on problems with more than a few thousand samples is computationally infeasible. Thus, we only report results using these two baseline methods on small problems, where they finish in a reasonable amount of time.
[Figure 1: four panels plotting classification rate (%) against the number of candidate labeling vectors L for the datasets usps (B=5, N=1,459), letter (B=8, N=1,875), news20 (B=5, N=3,187), and covtype (B=4, N=43,575).]
Figure 1: (Best seen in colors) Classification performance of different algorithms on artificial datasets.
We implemented our MMS algorithm in MATLAB⁴, and used a value of 1/N for the regularization parameter λ in all our experiments. In (1) we used Δ(z_m, y_m) = 1(z_m ≠ y_m). For a fair comparison, we used a linear kernel for all the methods. The cost parameter for the SVM algorithms is selected from the range C ∈ {0.1, 1, 10, 100, 1000}. The bias term is used in all the algorithms.
4.1 Experiments on artificial data
We create several artificial datasets from four widely used multi-class datasets (usps, letter, news20 and covtype) from the LIBSVM [6] website. The artificial training sets are created as follows: we first set random pairs of classes as "correlated classes" and as "ambiguous classes", where the ambiguous classes can be different from the correlated classes. Following that, instances are grouped randomly into bags of fixed size B, with probability at least P_c that two instances from correlated classes will appear in the same bag. Then L ambiguous labeling vectors are created for each bag by modifying a few elements of the correct labeling vector. The number of modified elements is randomly chosen from {1, ..., B}, and the new labels are chosen from a predefined ambiguous set. The ambiguous set is composed of the other correct labels from the same bag (except the true one) and a subset of the ambiguous pairs of all the correct labels from the bag. The probability that the ambiguous pair of a label is present equals P_a. For testing, we use the original test set, and each instance is considered separately.
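As an illustration, a simplified version of this construction (hypothetical helper names, and ignoring the correlated-class grouping) might look as follows:

```python
import random

def make_candidate_set(true_labels, L, ambiguous_of, Pa):
    """Build up to L candidate labeling vectors for one bag.

    true_labels: correct label per instance; ambiguous_of: map from a label
    to its ambiguous partner; Pa: probability of offering the partner.
    A simplified sketch of the construction in Section 4.1.
    """
    B = len(true_labels)
    Z = [list(true_labels)]              # the correct vector is always kept
    pool = set(true_labels)              # the other correct labels in the bag
    attempts = 0
    while len(Z) < L and attempts < 100 * L:   # guard against small label pools
        attempts += 1
        z = list(true_labels)
        for m in random.sample(range(B), random.randint(1, B)):
            choices = list(pool - {true_labels[m]})
            if random.random() < Pa:
                choices.append(ambiguous_of[true_labels[m]])
            if choices:
                z[m] = random.choice(choices)
        if z not in Z:
            Z.append(z)
    return Z
```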
Varying P_c, P_a, and L we generate different dataset difficulty levels to evaluate the behaviour of the algorithms. For example, when P_a > 0, noisy labels are likely to be present in the labeling set. Meanwhile, P_c controls the ambiguity within the bags: if P_c is large, instances from two correlated classes are likely to be grouped into the same bag, making it more difficult to distinguish between these two classes. The parameters P_c and P_a are chosen from {0, 0.25, 0.5}.
For each difficulty level, we run three different training/test splits.
In Figure 1, we plot the average classification accuracy. Several observations can be made. First, MMS achieves results close to the supervised SVM methods, and better than all other baselines. As MMS uses a multi-class loss similar to MC-SVM, it even outperforms 1vA-SVM when this loss has an advantage (e.g., on the "letter" dataset). For the "covtype" dataset, the performance gap between MMS and SVM is more visible. This may be because "covtype" has a class imbalance, where the two largest classes (among seven) dominate the whole dataset (more than 85% of the total number of samples). Second, the change in performance of MMS is small when the size of the candidate labeling set grows. Moreover, when correlated instances and extra noisy labels are present in the dataset, the baseline methods' performance drops by several percentage points, while MMS is less affected. The CCCP algorithm usually converges in 3 to 5 rounds, and the final performance is 5% to 40% higher compared to the results obtained after the first round, especially when L is large. This behavior also shows that approximating the second max(·) function in the loss function (3) with the average over all the possible labeling vectors can lead to poor performance.
4.2 Applications to learning from images & captions

A huge amount of images with accompanying text captions is available on the web. This cheap source of information has been used, e.g., to name faces in images using captions [3, 13]. Thanks to recent developments in computer vision and natural language processing, faces in the images can be detected by a face detector and names in the captions can be identified using a language parser. The gathered data can then be used to train visual classifiers without human effort in labeling the data.

⁴ Code available at http://dogma.sourceforge.net/
[Figure 2, left panel: a news photograph with the caption "President Barack Obama and first lady Michelle Obama wave from the steps of Air Force One as they arrive in Prague, Czech Republic."]

Candidate labeling set Z for this image-caption pair (∅ denotes the NULL class):

          z1    z2    z3    z4    z5    z6
face_a    n_a   n_a   ∅     ∅     n_b   n_b
face_b    n_b   ∅     n_b   n_a   ∅     n_a

Figure 2: (Left) An example image and its associated caption. There are two detected faces, face_a and face_b, and two names, Barack Obama (n_a) and Michelle Obama (n_b), from the caption. (Right) The candidate labeling set for this image-caption pair. The labeling vectors are generated using the following constraints: i) a face in the image can either be assigned a name from its caption, or it may correspond to none of them (a NULL class, denoted ∅); ii) a face can be assigned at most one name; iii) a name can be assigned to at most one face. Differently from previous methods, we do not allow the labeling vector with all the faces assigned to the NULL class, because it would lead to the trivial solution with 0 loss by classifying every instance as NULL.
Table 1: Overall face recognition accuracy

Dataset   1vA-SVM       MC-SVM        CL-SVM        MIMLSVM       MMS
Yahoo!    81.6% ± 0.6   87.2% ± 0.3   76.9% ± 0.2   74.7% ± 0.9   85.7% ± 0.5
This task is difficult due to the so-called "correspondence ambiguity" problem: there can be more than one face and name in an image-caption pair, not all the names in the caption appear in the image, and vice versa. Nevertheless, the problem can be naturally formulated as a CLS problem. Since the names of the key persons in the image typically appear in the captions, combined with other common assumptions [3, 13], we can easily generate the candidate labeling sets (see Figure 2 for a practical example).
We conducted experiments on the Labeled Yahoo! News dataset⁵ [3, 13]. The dataset is fully annotated with the associations between faces in the image and names in the caption; precomputed facial features are also available with the dataset. After preprocessing, the dataset contains 20,071 images and 31,147 faces. There are more than 10,000 different names in the captions. We retain the 214 most frequent ones, which occur at least 20 times, and treat the other names as NULL. The experiments are performed over 5 different permutations, sampling 80% of the images and captions as the training set and using the rest for testing. During splitting we also maintain the ratio between the number of samples from each class in the training and test sets. For all algorithms, NULL names are considered an additional class, except for the MIML algorithms, where unknown faces can automatically be considered negative instances. The performance of the algorithms is measured by how many faces in the test set are correctly labeled with their name. Table 1 summarizes the results. Similar observations can be made here: MMS achieves performance comparable to the fully-supervised SVM algorithms (4.1% higher than 1vA-SVM on the Yahoo! data), while outperforming the other baselines for ambiguously labeled data.
5 Conclusion
In this paper, we introduced the Candidate Labeling Set problem, where training samples contain multiple instances and a set of possible labeling vectors. We also proposed a large-margin formulation of the learning problem and an efficient algorithm for solving it. Although there are other related frameworks, such as MIML, which also investigate learning from instance bags with multiple labels, our framework is different: it makes an explicit effort to label and to consider each instance in the bag during the learning process, and it allows noisy labels in the training data. In particular, our framework provides a principled way to encode prior knowledge about relationships between instances and labels, and these constraints are explicitly taken into account in the loss function by the algorithm. The use of this framework need not be limited to data which is naturally grouped into multi-instance bags. It would also be possible to group separate instances into bags and solve the learning problem using MMS when there are labeling constraints between these instances (e.g., a clustering problem with linkage constraints).
Acknowledgments. We thank the anonymous reviewers for their helpful comments. The Labeled Yahoo! News dataset was kindly provided by Matthieu Guillaumin and Jakob Verbeek. LJ was sponsored by the EU project DIRAC IST-027787 and FO was sponsored by the PASCAL2 NoE under EC grant no. 216886. LJ also acknowledges the PASCAL2 Internal Visiting Programme for supporting travel expenses.
⁵ Dataset available at http://lear.inrialpes.fr/data/
References
[1] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning. In Proc. NIPS, 2003.
[2] K. Barnard, P. Duygulu, D. Forsyth, N. de Freitas, D. Blei, and M. Jordan. Matching words and pictures. JMLR, 3:1107–1135, 2003.
[3] T. Berg, A. Berg, J. Edwards, and D. Forsyth. Who's in the picture? In Proc. NIPS, 2004.
[4] D. P. Bertsekas. Convex Analysis and Optimization. Athena Scientific, 2003.
[5] R. C. Bunescu and R. J. Mooney. Multiple instance learning for sparse positive bags. In Proc. ICML, 2007.
[6] C. C. Chang and C. J. Lin. LIBSVM: A library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[7] O. Chapelle, A. Zien, and B. Schölkopf (Eds.). Semi-supervised Learning. MIT Press, 2006.
[8] T. Cour, B. Sapp, C. Jordan, and B. Taskar. Learning from ambiguously labeled images. In Proc. CVPR, 2009.
[9] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR, 2:265–292, 2001.
[10] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple-instance problem with axis-parallel rectangles. Artificial Intelligence, 89:31–71, 1997.
[11] R.-E. Fan, K.-W. Chang, C.-J. Lin, S. S. Keerthi, and S. Sundararajan. LIBLINEAR: A library for large linear classification. JMLR, 9:1871–1874, 2008.
[12] Y. Grandvalet. Logistic regression for partial labels. In Proc. IPMU, 2002.
[13] M. Guillaumin, J. Verbeek, and C. Schmid. Multiple instance metric learning from automatically labeled bags of faces. In Proc. ECCV, 2010.
[14] A. Gupta and L. Davis. Beyond nouns: Exploiting prepositions and comparative adjectives for learning visual classifiers. In Proc. ECCV, 2008.
[15] E. Hüllermeier and J. Beringer. Learning from ambiguously labelled examples. Intelligent Data Analysis, 10:419–439, 2006.
[16] L. Jie, B. Caputo, and V. Ferrari. Who's doing what: Joint modeling of names and verbs for simultaneous face and pose annotation. In Proc. NIPS, 2009.
[17] R. Jin and Z. Ghahramani. Learning with multiple labels. In Proc. NIPS, 2002.
[18] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. In Proc. ICML, 2007.
[19] A. J. Smola, S. V. N. Vishwanathan, and T. Hofmann. Kernel methods for missing variables. In Proc. AISTATS, 2005.
[20] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 6:1453–1484, 2005.
[21] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In Proc. NIPS, 2002.
[22] C.-N. Yu and T. Joachims. Learning structural SVMs with latent variables. In Proc. ICML, 2009.
[23] A. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15:915–936, 2003.
[24] M.-L. Zhang and Z.-H. Zhou. M³MIML: A maximum margin method for multi-instance multi-label learning. In Proc. ICDM, 2008.
[25] Z.-H. Zhou and M.-L. Zhang. Multi-instance multi-label learning with application to scene classification. In Proc. NIPS, 2006.
[26] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison, 2005.
| 4041 |@word version:1 norm:1 underline:1 liblinear:2 initial:1 contains:1 outperforms:2 existing:2 freitas:1 current:1 z2:1 luo:1 assigning:2 must:1 visible:1 additive:1 hofmann:3 cheap:1 designed:1 treating:2 plot:1 drop:1 v:2 sponsored:2 intelligence:1 selected:1 website:2 xk:4 blei:1 provides:2 zhang:2 direct:1 become:1 incorrect:2 consists:2 prove:1 inside:1 introduce:5 pairwise:1 news20:3 hardness:1 behavior:2 multi:10 inspired:1 decomposed:1 decreasing:1 automatically:2 considering:3 solver:1 becomes:4 provided:2 project:1 moreover:4 notation:1 factorized:1 null:5 what:1 kind:3 interpreted:1 developed:1 convexify:1 noe:1 sky:1 every:5 firstorder:1 concave:4 matlab4:1 barack:2 universit:1 k2:1 classifier:9 wrong:1 zl:2 control:1 grant:1 omit:2 appear:5 bertsekas:1 positive:1 nju:1 local:2 treat:1 despite:1 abuse:1 might:1 lausanne:1 co:1 limited:1 multipleinstance:1 range:1 practical:2 unique:1 acknowledgment:1 testing:2 union:1 procedure:2 empirical:1 lucky:1 projection:1 matching:1 word:3 integrating:1 confidence:3 altun:1 lady:1 pegasos:5 unlabeled:1 close:1 tsochantaridis:2 nb:5 risk:1 www:1 equivalent:1 map:1 maxz:13 missing:2 maximizing:1 reviewer:1 independently:1 convex:15 focused:1 survey:1 formalized:1 simplicity:1 splitting:1 matthieu:1 dominate:1 searching:3 ferrari:1 analogous:1 president:1 parser:1 caption:18 programming:1 us:1 pa:4 element:2 expensive:3 recognition:3 approximated:1 labeled:12 database:1 csie:1 taskar:1 solved:1 worst:2 thousand:1 news:4 sun:1 eu:1 decrease:1 russell:1 principled:3 intuition:1 complexity:2 constrains:1 dynamic:1 multilabel:1 trained:2 weakly:3 rewrite:1 solving:5 dogma:1 yuille:1 completely:1 usps:2 easily:5 joint:2 differently:1 regularizer:1 train:3 artificial:7 detected:2 labeling:50 tell:1 shalev:1 encoded:1 widely:1 solve:7 cvpr:1 otherwise:1 favor:1 gi:2 jointly:2 noisy:5 transform:1 final:1 advantage:3 net:1 propose:5 coming:2 product:1 zm:14 frequent:1 ambiguously:3 fr:1 date:1 dirac:1 olkopf:1 sourceforge:1 aistat:1 exploiting:1 convergence:1 cour:1 rangarajan:1 comparative:1 converges:3 andrew:1 pose:1 measured:1 edward:1 solves:1 implemented:1 idiap:2 predicted:2 come:2 radius:1 closely:1 correct:12 annotated:3 modifying:3 hull:2 stochastic:1 milano:1 human:1 behaviour:1 generalization:2 preliminary:1 anonymous:1 proposition:3 ntu:1 extension:1 mm:15 accompanying:1 considered:5 ground:2 mapping:4 predict:2 algorithmic:1 achieves:3 exclusiveness:1 proc:15 bag:47 label:66 individually:1 grouped:4 largest:1 vice:1 create:1 minimization:3 mit:1 always:1 aim:1 lamda:1 reaching:1 modified:1 zhou:2 varying:1 mil:4 encode:2 focus:1 joachim:2 consistently:1 likelihood:4 greatly:1 greedily:1 baseline:7 helpful:3 inference:1 typically:1 lj:2 relation:1 misclassified:1 interested:1 arg:4 issue:1 among:4 classification:5 denoted:2 overall:1 yahoo:4 development:1 noun:1 spatial:1 special:3 constrained:1 mutual:1 equal:5 construct:2 once:1 initialize:1 having:1 sampling:1 ng:1 field:1 kw:1 look:1 yu:1 icml:3 hacking:1 minimized:1 report:2 ullermeier:1 intelligent:1 few:2 randomly:2 composed:1 kwt:1 pharmaceutical:1 replaced:1 phase:1 keerthi:1 maintain:1 huge:1 investigate:2 possibility:2 pc:5 primal:2 perez:1 predefined:2 kt:1 partial:2 facial:1 tree:1 taylor:2 initialized:1 instance:62 modeling:4 measuring:1 cost:2 introducing:1 republic:1 subset:2 conducted:3 motivating:1 combined:1 person:2 thanks:1 retain:1 probabilistic:2 together:2 ym:15 epf:1 na:5 ambiguity:2 containing:1 opposed:1 choose:2 possibly:1 return:1 li:7 account:2 
3,360 | 4,042 | Universal Consistency of Multi-Class
Support Vector Classification
Tobias Glasmachers
Dalle Molle Institute for Artificial Intelligence (IDSIA), 6928 Manno-Lugano, Switzerland
[email protected]
Abstract
Steinwart was the first to prove universal consistency of support vector machine classification. His proof analyzed the "standard" support vector machine classifier, which is restricted to binary classification problems. In contrast, recent analysis has resulted in the common belief that several extensions of SVM classification to more than two classes are inconsistent. Countering this belief, we prove the universal consistency of the multi-class support vector machine by Crammer and Singer. Our proof extends Steinwart's techniques to the multi-class case.
Erratum, 20.01.2011
Unfortunately this paper contains a subtle flaw in the proof of Lemma 5. Furthermore, it turns out that the statement itself is wrong: the multi-class SVM by Crammer & Singer is not universally consistent.
1 Introduction
Support vector machines (SVMs) as proposed in [1, 8] are powerful classifiers, especially in the
binary case of two possible classes. They can be extended to multi-class problems, that is, problems
involving more than two classes, in multiple ways which all reduce to the standard machine in the
binary case.
This is trivially the case for general techniques such as one-versus-one architectures and the one-versus-all approach, which combine a set of binary machines into a multi-class decision maker. At least three different "true" multi-class SVM extensions have been proposed in the literature: the canonical multi-class machine proposed by Vapnik [8] and independently by Weston and Watkins [9], the variant by Crammer and Singer [2], and a conceptually different extension by Lee, Lin, and Wahba [4].
Recently, consistency of multi-class support vector machines has been investigated based on properties of the loss function ψ measuring empirical risk in machine training [7]. The analysis is based on the technical property of classification calibration (refer to [7] for details). This work is conceptually related to Fisher consistency, in contrast to universal statistical consistency, see [3, 5]. Schematically, Theorem 2 by Tewari and Bartlett [7] establishes the relation
S_A ⟺ (S_B ⟹ S_C) ,   (1)

for the terms

S_A: The loss function ψ is classification calibrated.
S_B: The ψ-risk of a sequence (f̂_n)_{n∈N} of classifiers converges to the minimal possible ψ-risk: lim_{n→∞} R_ψ(f̂_n) = R_ψ*.
S_C: The 0-1-risk of the same sequence (f̂_n)_{n∈N} of classifiers converges to the minimal possible 0-1-risk (Bayes risk): lim_{n→∞} R(f̂_n) = R*.
The classifiers f̂_n are assumed to result from structural risk minimization [8], that is, the space F_n for which we obtain f̂_n = arg min{ R_ψ(f) | f ∈ F_n } grows suitably with the size of the training set such that S_B holds.
The confusion around the consistency of multi-class machines arises from mixing the equivalence and the implication in statement (1). Examples 1 and 2 in [7] show that the loss functions ψ used in the machines by Crammer and Singer [2] and by Weston and Watkins [9] are not classification calibrated, thus S_A = false. It is then deduced that the corresponding machines are not consistent (S_C = false), although it can only be deduced that the implication S_B ⟹ S_C does not hold. This tells us nothing about S_C, even if S_B can be established per construction.
We argue that the consistency of a machine is not necessarily determined by properties of its loss function. This is because for SVMs it is necessary to provide a sequence of regularization parameters in order to make the infinite-sample limit well-defined. Thus, we generalize Steinwart's universal consistency theorem for binary SVMs (Theorem 2 in [6]) to the multi-class support vector machine [2] proposed by Crammer and Singer:

Theorem 2. Let X ⊂ R^d be compact and let k : X × X → R be a universal kernel with¹ N((X, d_k), ε) ∈ O(ε^{−α}) for some α > 0. Suppose that we have a positive sequence (C_ℓ)_{ℓ∈N} with ℓ·C_ℓ → ∞ and C_ℓ ∈ O(ℓ^{β−1}) for some 0 < β < 1/α. Then for all Borel probability measures P on X × Y and all ε > 0 it holds

lim_{ℓ→∞} Pr*{ T ∈ (X × Y)^ℓ | R(f_{T,k,C_ℓ}) ≤ R* + ε } = 1 .

The corresponding notation will be introduced in Sections 2 and 3. The theorem does not only establish the universal consistency of the multi-class SVM by Crammer and Singer, it also gives precise conditions for how exactly the complexity control parameter needs to be coupled to the training set size in order to obtain universal consistency. Moreover, the rigorous proof of this statement implies that the common belief in the inconsistency of the popular multi-class SVM by Crammer and Singer is wrong. This important learning machine is indeed universally consistent.
2 Multi-Class Support Vector Classification

A multi-class classification problem is stated by a training dataset T = ((x₁, y₁), ..., (x_ℓ, y_ℓ)) ∈ (X × Y)^ℓ with label set Y of size |Y| = q < ∞. W.l.o.g., the label space is represented by Y = {1, ..., q}. In contrast to the conceptually simpler binary case we have q > 2. The training examples are supposed to be drawn i.i.d. from a probability distribution P on X × Y.
Let k : X × X → R be a positive definite (Mercer) kernel function, and let φ : X → H be a corresponding feature map into a feature Hilbert space H such that ⟨φ(x), φ(x′)⟩ = k(x, x′). We call a function on X induced by k if there exists w ∈ H such that f(x) = ⟨w, φ(x)⟩. Let d_k(x, x′) := ‖φ(x) − φ(x′)‖_H = √( k(x, x) − 2k(x, x′) + k(x′, x′) ) denote the metric induced on X by the kernel k.
Analogously to Steinwart [6], we require that the input space X is a compact subset of R^d, and define the notion of a universal kernel:

Definition 1. (Definition 2 in [6]) A continuous positive definite kernel function k : X × X → R on a compact subset X ⊂ R^d is called universal if the set of induced functions is dense in the space C⁰(X) of continuous functions, i.e., for all g ∈ C⁰(X) and all ε > 0 there exists an induced function f with ‖g − f‖_∞ < ε.

Intuitively, this property makes sure that the feature space of the kernel is rich enough to achieve consistency for all possible data-generating distributions. For a detailed treatment of universal kernels we refer to [6].

¹ For f, g : R⁺ → R⁺ we define f(x) ∈ O(g(x)) iff there exist c, x₀ > 0 such that f(x) ≤ c·g(x) for all x > x₀.
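As an aside, the kernel metric d_k and a greedy empirical estimate of the covering number (used in Section 3) can be computed on a finite sample; a minimal sketch, with the Gaussian kernel standing in for any universal kernel:

```python
import numpy as np

def dk(x, xp, k):
    # Kernel-induced metric: ||phi(x) - phi(x')|| via the kernel trick.
    return np.sqrt(k(x, x) - 2.0 * k(x, xp) + k(xp, xp))

def greedy_cover_size(points, k, eps):
    # Greedy ball covering of a sample: an upper estimate of its covering number.
    centers = []
    for x in points:
        if all(dk(x, c, k) >= eps for c in centers):
            centers.append(x)   # x is not yet covered, open a new ball
    return len(centers)

gauss = lambda x, y: np.exp(-np.sum((x - y) ** 2))
X = np.random.rand(200, 2)
print(greedy_cover_size(X, gauss, 0.3))
```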
An SVM classifier for a q-class problem is given in the form of a vector-valued function f : X → R^q with component functions f_u : X → R, u ∈ Y (sometimes restricted by the so-called sum-to-zero constraint Σ_{u∈Y} f_u = 0). Each of its components takes the form f_u(x) = ⟨w_u, φ(x)⟩ + b_u with w_u ∈ H and b_u ∈ R. Then we turn f into a classifier by feeding its result into the "decision" function

θ : R^q → Y ;   (v₁, ..., v_q)^T ↦ min( arg max{ v_u | u ∈ Y } ) ∈ Y .

Here, the arbitrary rule for breaking ties favors the smallest class index.² We denote the SVM hypothesis by h = θ ∘ f : X → Y.
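In code, the decision function θ is an argmax whose ties are broken towards the smallest class index; a minimal sketch:

```python
def theta(v):
    """Decision function: smallest class index attaining the maximum.
    Classes are numbered 1..q as in the text."""
    best = max(v)
    return min(u + 1 for u, value in enumerate(v) if value == best)

print(theta([0.2, 0.7, 0.7]))  # -> 2: the tie is broken towards the smaller index
```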
The multi-class SVM variant proposed by Crammer and Singer uses functions without offset terms (b_u = 0 for all u ∈ Y). For a given training set T = ((x₁, y₁), ..., (x_ℓ, y_ℓ)) ∈ (X × Y)^ℓ this machine defines the function f, determined by (w₁, ..., w_q) ∈ H^q, as the solution of the quadratic program

minimize  Σ_{u∈Y} ⟨w_u, w_u⟩ + (C/ℓ)·Σ_{i=1}^ℓ ξ_i
s.t.  ⟨w_{y_i} − w_u, φ(x_i)⟩ ≥ 1 − ξ_i   for all i ∈ {1, ..., ℓ}, u ∈ Y\{y_i} .   (2)

The slack variables in the optimum can be written as

ξ_i = max_{v∈Y\{y_i}} [ 1 − (f_{y_i}(x_i) − f_v(x_i)) ]_+ ≥ [ 1 − δ_{h(x_i),y_i} − f_{y_i}(x_i) + f_{h(x_i)}(x_i) ]_+ ,   (3)

with the auxiliary function [t]_+ := max{0, t}. We denote the function induced by the solution of this problem by f = f_{T,k,C} = (⟨w₁, ·⟩, ..., ⟨w_q, ·⟩)^T.
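Given the component values f_u(x_i), the optimal slack of a training example follows directly from (3); a small sketch (0-indexed classes for convenience):

```python
def slack(f_values, y):
    """Optimal slack xi_i of problem (2), cf. eq. (3).
    f_values[u] = f_u(x_i) for the classes u; y is the true class index."""
    margin = min(f_values[y] - f_values[v]
                 for v in range(len(f_values)) if v != y)
    return max(0.0, 1.0 - margin)

print(slack([0.9, -0.2, 0.1], y=0))  # smallest margin 0.8 -> slack 0.2
```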
Let s(x) := 1 − max{ P(y|x) | y ∈ Y } denote the noise level, that is, the probability of error of a Bayes-optimal classifier. We denote the Bayes risk by R* = ∫_X s(x) dP_X(x). For a given (measurable) hypothesis h we define its error as E_h(x) := 1 − P(h(x)|x), and its suboptimality w.r.t. Bayes-optimal classification as Ω_h(x) := E_h(x) − s(x) = max{ P(y|x) | y ∈ Y } − P(h(x)|x). We have E_h(x) ≥ s(x) and thus Ω_h(x) ≥ 0 up to a zero set.
3 The Central Construction

In this section we introduce a number of definitions and constructions preparing the proofs in the later sections. Most of the differences to the binary case are incorporated into these constructions, such that the lemmas and theorems proven later on naturally extend to the multi-class case. Let Δ := { p ∈ R^q | p_u ≥ 0 for all u ∈ Y and Σ_{u∈Y} p_u = 1 } denote the probability simplex over Y. We introduce the covering number of the metric space (X, d_k) as

N((X, d_k), ε) := min{ n | there exist {x₁, ..., x_n} ⊂ X such that X ⊂ ∪_{i=1}^n B(x_i, ε) } ,

with B(x, ε) = { x′ ∈ X | d_k(x, x′) < ε } being the open ball of radius ε > 0 around x ∈ X.

Next we construct a partition of a large part of the input space X into suitable subsets. In a first step we partition the probability simplex, then we transfer this partition to the input space, and finally we discard small subsets of negligible impact. The resulting partition has a number of properties of importance for the proofs of diverse lemmas in the next section.

We start by defining τ = ε/(q + 5), where ε is the error bound found in Theorems 1 and 2. Thus, τ is simply a multiple of ε, which we can think of as an arbitrarily small positive number.

We split the simplex Δ into a partition of "classification-aligned" subsets

Δ_y := θ^{−1}({y}) ∩ Δ = { p ∈ Δ | p_y > p_u for u < y and p_y ≥ p_u for u > y }

for y ∈ Y, on which the decision function θ decides for class y. We define the grid

Γ := { [n₁τ, (n₁+1)τ) × ··· × [n_qτ, (n_q+1)τ) ⊂ R^q | (n₁, ..., n_q)^T ∈ Z^q }

² Note that any other deterministic rule for breaking ties can be realized by permuting the class indices.
of half-open cubes. Then we combine both constructions to the partition

Π := ∪_{y∈Y} { γ ∩ Δ_y | γ ∈ Γ and γ ∩ Δ_y ≠ ∅ }

of Δ into classification-aligned subsets of side length upper bounded by τ. We have the trivial upper bound |Π| ≤ D := q·(1/τ + 1)^q for the size of the partition. The partition Π will serve as an index set in a number of cases. The first one of these is the partition X = ∪_{π∈Π} X_π with X_π := { x ∈ X | (P(y|x))_{y∈Y} ∈ π }.
The compactness of X ensures that the distribution P is regular. Thus, for each π ∈ Π there exists a compact subset K̃_π ⊂ X_π with P_X(K̃_π) ≥ (1 − τ/2)·P_X(X_π). We choose minimal partitions Ã_π of each K̃_π = ∪_{A∈Ã_π} A such that the diameter of each A ∈ Ã_π is bounded by ρ = τ/(2√C). All of these sets are summarized in the partition Ã = ∪_{π∈Π} Ã_π. Now we drop all A ∈ Ã_π below a certain probability mass, resulting in

A_π := { A ∈ Ã_π | P_X(A) ≥ τ/(2M) } ,   (4)

with M := D · N((X, d_k), ρ). We summarize these sets in K_π = ∪_{A∈A_π} A and A := ∪_{π∈Π} A_π.
These sets cover nearly all probability mass of P_X in the sense

P_X( ∪_{π∈Π} K_π ) = P_X( ∪_{A∈A} A ) ≥ P_X( ∪_{A∈Ã} A ) − τ/2
  = P_X( ∪_{π∈Π} K̃_π ) − τ/2 ≥ P_X( ∪_{π∈Π} X_π ) − τ/2 − τ/2 = P_X(X) − τ = 1 − τ .

The first estimate makes use of |Ã| ≤ M and condition (4), while the second inequality follows from the definition of K̃_π.
To simplify notation, we associate a number of quantities with the sets π ∈ Π and X_π. We denote the Bayes-optimal decision by y(X_π) = y(π) := θ(p) for any p ∈ π, and for y ∈ Y we define the lower and upper bounds

L_y(X_π) = L_y(π) := inf{ p_y | p ∈ π }   and   U_y(X_π) = U_y(π) := sup{ p_y | p ∈ π }

on the corresponding components in the probability simplex. We canonically extend these definitions to the above defined sets K_π, K̃_π, and A ∈ A, which are all subsets of exactly one of the sets X_π, by defining y(S) := y(π) for all non-empty subsets S ⊂ X_π. The resulting construction has the following properties:
(P1) The decision function θ is constant on each set π ∈ Π, and thus h = θ ∘ f is constant on each set X_π as well as on each of their subsets, most importantly on each A ∈ A.
(P2) For each y ∈ Y, the side length U_y(π) − L_y(π) of each set π ∈ Π is upper bounded by τ.
(P3) It follows from the construction of Π that for each y ∈ Y and π ∈ Π we have either L_y(π) = 0 or L_y(π) ≥ τ.
(P4) The cardinality of the partition Π is upper bounded by D = q·(1/τ + 1)^q, which depends only on ε and q, but not on T, k, or C.³
(P5) The cardinality of the partition A is upper bounded by M = D · N((X, d_k), τ/(2√C)), which is finite by Lemma 1.
(P6) The set ∪_{A∈A} A = ∪_{π∈Π} K_π ⊂ X covers a probability mass (w.r.t. P_X) of at least (1 − τ).
(P7) Each A ∈ A covers a probability mass (w.r.t. P_X) of at least τ/(2M).
(P8) Each A ∈ A has diameter less than ρ = τ/(2√C), that is, for x, x′ ∈ A we have d_k(x, x′) < ρ.

³ A tight bound would be in O(τ^{1−q}).
With properties (P2) and (P6) it is straightforward to obtain the inequality

1 − Σ_{π∈Π} L_{y(π)}(π)·P_X(X_π) − τ ≤ 1 − Σ_{π∈Π} U_{y(π)}(π)·P_X(X_π)
  ≤ R* ≤ 1 − Σ_{π∈Π} L_{y(π)}(π)·P_X(X_π) ≤ 1 − Σ_{π∈Π} U_{y(π)}(π)·P_X(X_π) + τ   (5)

for the risk.
Now we are in the position to define the notion of a "typical" training set. For ℓ ∈ N, u ∈ Y, and A ∈ A, we define

F_ℓ^{A,u} := { ((x₁, y₁), ..., (x_ℓ, y_ℓ)) ∈ (X × Y)^ℓ : |{ n ∈ {1, ..., ℓ} | x_n ∈ A, y_n = u }| ≥ ℓ·(1−τ)·L_u(A)·P_X(A) } .

Intuitively, we ask that the number of examples of class u in A does not deviate too much from its expectation, introducing two approximations: the multiplicative factor (1−τ), and the lower bound L_u(A) on the conditional probability of class u in A. We combine the properties of all these sets in the set F_ℓ := ∩_{u∈Y} ∩_{A∈A} F_ℓ^{A,u} of training sets of size ℓ, with the same lower bound on the number of training examples in all sets A ∈ A, and for all classes u ∈ Y.
4 Preparations

The proof of our main result follows the proofs of Theorems 1 and 2 in [6] as closely as possible. For the sake of clarity we organize the proof such that all six lemmas in this section directly correspond to Lemmas 1-6 in [6].

Lemma 1. (Lemma 1 from [6]) Let k : X × X → R be a universal kernel on a compact subset X of R^d and let φ : X → H be a feature map of k. Then φ is continuous and

d_k(x, x′) := ‖φ(x) − φ(x′)‖

defines a metric on X such that id : (X, ‖·‖) → (X, d_k) is continuous. In particular, N((X, d_k), ε) is finite for all ε > 0.
Lemma 2. Let X ⊂ R^d be compact and let k : X × X → R be a universal kernel. Then, for all ε > 0 and all pairwise disjoint and compact (or empty) subsets K̃_u ⊂ X, u ∈ Y, there exists an induced function

f : X → [ −1/2·(1+ε), 1/2·(1+ε) ]^q ;   x ↦ ( ⟨w₁*, φ(x)⟩, ..., ⟨w_q*, φ(x)⟩ )^T ,

such that

f_u(x) ∈ [ 1/2, 1/2·(1+ε) ]   if x ∈ K̃_u ,
f_u(x) ∈ [ −1/2·(1+ε), −1/2 ]   if x ∈ K̃_v for some v ∈ Y\{u} ,

for all u ∈ Y.

Proof. This lemma directly corresponds to Lemma 2 in [6], with slightly different cases. Its proof is completely analogous.   □
Lemma 3. The probability of the set F_ℓ of typical training sets is lower bounded by

P^ℓ(F_ℓ) ≥ 1 − q·M·exp( −(1/8)·(τ⁶/M²)·ℓ ) .
Proof. Let us fix A ∈ A and u ∈ Y. In the case L_u(A) = 0 we trivially have P^ℓ( (X × Y)^ℓ \ F_ℓ^{A,u} ) = 0. Otherwise we consider T = ((x₁, y₁), ..., (x_ℓ, y_ℓ)) ∈ (X × Y)^ℓ and define the binary variables z_i := 1_{A×{u}}(x_i, y_i), where the indicator function 1_S(s) is one for s ∈ S and zero otherwise. This definition allows us to express the cardinality |{ n ∈ {1, ..., ℓ} | x_n ∈ A, y_n = u }| = Σ_{i=1}^ℓ z_i found in the definition of F_ℓ^{A,u} in a form suitable for the application of Hoeffding's inequality. The inequality, applied to the variables z_i, states

P^ℓ( Σ_{i=1}^ℓ z_i ≤ (1−τ)·E·ℓ ) ≤ exp( −2(τE)²·ℓ ) ,

where E := E[z_i] = ∫_{A×{u}} dP(x, y) = ∫_A P(u|x) dP_X(x) ≥ L_u(A)·P_X(A) > 0. Due to E > 0 we can use the relation

Σ_{i=1}^ℓ z_i < (1−τ)·E·ℓ   ⟹   Σ_{i=1}^ℓ z_i ≤ (1−τ/2)·E·ℓ

in order to obtain Hoeffding's formula for the case of strict inequality:

P^ℓ( Σ_{i=1}^ℓ z_i < (1−τ)·E·ℓ ) ≤ exp( −(1/2)·(τE)²·ℓ ) .

Combining E ≥ L_u(A)·P_X(A) and the implication Σ_{i=1}^ℓ z_i < (1−τ)·L_u(A)·P_X(A)·ℓ ⟸ T ∉ F_ℓ^{A,u}, we obtain

P^ℓ( (X × Y)^ℓ \ F_ℓ^{A,u} ) = P^ℓ( Σ_{i=1}^ℓ z_i < (1−τ)·L_u(A)·P_X(A)·ℓ )
  ≤ exp( −(1/2)·(τE)²·ℓ ) ≤ exp( −(1/2)·(τ·L_u(A)·P_X(A))²·ℓ ) .

Properties (P3) and (P7) ensure L_u(A) ≥ τ and P_X(A) ≥ τ/(2M). Applying these to the previous inequality results in

P^ℓ( (X × Y)^ℓ \ F_ℓ^{A,u} ) ≤ exp( −(1/2)·(τ³/(2M))²·ℓ ) = exp( −(1/8)·(τ⁶/M²)·ℓ ) ,

which also holds in the case L_u(A) = 0 treated earlier. Finally, we use the union bound

1 − P^ℓ(F_ℓ) = 1 − P^ℓ( ∩_{u∈Y} ∩_{A∈A} F_ℓ^{A,u} ) = P^ℓ( ∪_{u∈Y} ∪_{A∈A} (X × Y)^ℓ \ F_ℓ^{A,u} )
  ≤ |Y| · |A| · exp( −(1/8)·(τ⁶/M²)·ℓ ) ≤ q·M·exp( −(1/8)·(τ⁶/M²)·ℓ )

and properties (P4) and (P5) to prove the assertion.   □
Lemma 4. The SVM solution f and the hypothesis h = θ ∘ f fulfill

R(f) ≤ R* + ∫_X Ω_h(x) dP_X(x) .

Proof. The lemma follows directly from the definition of Ω_h, even with equality. We keep it here because it is the direct counterpart to the (stronger) Lemma 4 in [6].   □
Lemma 5. For all training sets T ∈ F_ℓ the SVM solution given by (w₁, ..., w_q) and (ξ₁, ..., ξ_ℓ) fulfills

Σ_{u∈Y} ⟨w_u, w_u⟩ + (C/ℓ)·Σ_{i=1}^ℓ ξ_i ≤ Σ_{u∈Y} ⟨w_u*, w_u*⟩ + C·(R* + 2τ) ,

with (w₁*, ..., w_q*) as defined in Lemma 2.
Proof. The optimality of the SVM solution for the primal problem (2) implies

Σ_{u∈Y} ⟨w_u, w_u⟩ + (C/ℓ)·Σ_{i=1}^ℓ ξ_i ≤ Σ_{u∈Y} ⟨w_u*, w_u*⟩ + (C/ℓ)·Σ_{i=1}^ℓ ξ_i*

for any feasible choice of the slack variables ξ_i*. We choose the values of these variables as ξ_i* = 1 + τ for P(·|x_i) ∉ Δ_{y_i} and zero otherwise, which corresponds to a feasible solution according to the construction of w_u* in Lemma 2 (applied with ε = τ). Then it remains to show that Σ_{i=1}^ℓ ξ_i* ≤ ℓ·(R* + 2τ).

Let n⁺ = |{ i ∈ {1, ..., ℓ} | P(·|x_i) ∈ Δ_{y_i} }| denote the number of training examples correctly classified by the Bayes rule expressed by Δ_{y_i} (or θ). Then we have Σ_{i=1}^ℓ ξ_i* = (1+τ)·(ℓ − n⁺). The definition of F_ℓ yields

n⁺ ≥ Σ_{u∈Y} Σ_{A∈A, y(A)=u} ℓ·(1−τ)·L_u(A)·P_X(A)
  = ℓ·(1−τ)·Σ_{u∈Y} Σ_{π∈Π, y(π)=u} [ L_u(π)·Σ_{A∈A_π} P_X(A) ]
  = ℓ·(1−τ)·Σ_{u∈Y} Σ_{π∈Π, y(π)=u} L_u(π)·P_X(K_π) = ℓ·(1−τ)·Σ_{π∈Π} L_{y(π)}(π)·P_X(K_π)
  ≥ ℓ·(1−τ)·[ Σ_{π∈Π} L_{y(π)}(π)·P_X(X_π) − τ ] ≥ ℓ·(1−τ)·(1 − R*) ,

where the last line is due to inequality (5). We obtain

Σ_{i=1}^ℓ ξ_i* ≤ ℓ·(1+τ)·( 1 − (1−τ)·(1−R*) )
  = ℓ·[ R* + τ + τ²·(1−R*) ] ≤ ℓ·[ R* + τ + τ² ] ≤ ℓ·(R* + 2τ) ,

which proves the claim.   □
Lemma 6. For all training sets T ∈ F_ℓ the sum of the slack variables (ξ₁, ..., ξ_ℓ) corresponding to the SVM solution fulfills

Σ_{i=1}^ℓ ξ_i ≥ ℓ·(1−τ)²·( R* + ∫_X Ω_h(x) dP_X(x) − q·τ ) .
Proof. Problem (2) takes the value C for the feasible solution w₁ = ··· = w_q = 0 and ξ₁ = ··· = ξ_ℓ = 1. Thus, we have Σ_{u∈Y} ‖w_u‖² ≤ C in the optimum, and we deduce ‖w_u‖ ≤ √C for each u ∈ Y. Property (P8) then makes sure that |f_u(x) − f_u(x′)| ≤ τ/2 for all x, x′ ∈ A and u ∈ Y.
The proof works through the following series of inequalities; the individual steps are discussed below.

Σ_{i=1}^ℓ ξ_i = Σ_{A∈A} Σ_{u∈Y} Σ_{x_i∈A, y_i=u} ξ_i
  ≥ Σ_{A∈A} Σ_{u∈Y} Σ_{x_i∈A, y_i=u} [ 1 − δ_{h(x_i),u} + f_{h(x_i)}(x_i) − f_u(x_i) ]_+
  ≥ ℓ·(1−τ)·Σ_{A∈A} Σ_{u∈Y} L_u(A)·P_X(A)·(1/P_X(A))·∫_A [ 1 − τ − δ_{h(x),u} + f_{h(x)}(x) − f_u(x) ]_+ dP_X(x)
  ≥ ℓ·(1−τ)²·Σ_{A∈A} ∫_A Σ_{u∈Y\{h(x)}} L_u(A) dP_X(x)
  ≥ ℓ·(1−τ)²·Σ_{A∈A} ∫_A ( 1 − q·τ − L_{h(x)}(A) ) dP_X(x)
  ≥ ℓ·(1−τ)²·Σ_{A∈A} ∫_A ( 1 − q·τ − 1 + s(x) + Ω_h(x) ) dP_X(x)
  = ℓ·(1−τ)²·( R* + ∫_X Ω_h(x) dP_X(x) − q·τ )

The first inequality follows from equation (3). The second inequality is clear from the definition of F_ℓ^{A,u} together with |f_u(x) − f_u(x′)| ≤ τ/2 within each A ∈ A. For the third inequality we use that the case u = h(x) does not contribute, and the non-negativity of f_{h(x)}(x) − f_u(x). In the next steps we make use of Σ_{u∈Y} L_u(A) ≥ 1 − q·τ and the lower bound L_{h(x)}(A) ≤ P(h(x)|x) = 1 − E_h(x) = 1 − s(x) − Ω_h(x), which can be deduced from properties (P1) and (P2).   □
5 Proof of the Main Result

Just like the lemmas, we organize our theorems analogously to the ones found in [6]. We start with a detailed but technical auxiliary result.

Theorem 1. Let X ⊂ R^d be compact, Y = {1, ..., q}, and let k : X × X → R be a universal kernel. Then, for all Borel probability measures P on X × Y and all ε > 0 there exists a constant C* > 0 such that for all C ≥ C* and all ℓ ≥ 1 we have

Pr*{ T ∈ (X × Y)^ℓ | R(f_{T,k,C}) ≤ R* + ε } ≥ 1 − q·M·exp( −(1/8)·(τ⁶/M²)·ℓ ) ,

where Pr* is the outer probability of P^ℓ, f_{T,k,C} is the solution of problem (2), M = q·(1/τ + 1)^q · N((X, d_k), τ/(2√C)), and τ = ε/(q + 5).
Proof. According to Lemma 3 it is sufficient to show R(f_{T,k,C}) ≤ R* + ε for all T ∈ F_ℓ. Lemma 4 provides the estimate R(f) ≤ R* + ∫_X Ω_h(x) dP_X(x), such that it remains to show that ∫_X Ω_h(x) dP_X(x) ≤ ε for T ∈ F_ℓ. Consider w_u* as defined in Lemma 2; then we combine Lemmas 5 and 6 to

(1−τ)²·( R* + ∫_X Ω_h(x) dP_X(x) − q·τ ) ≤ (1/C)·Σ_{u∈Y} ‖w_u*‖² + (R* + 2τ) ,

where the term in parentheses on the left-hand side lies in [0, 1]. Using a − τ ≤ (1−τ)·a for any a ∈ [0, 1], applied twice, we derive

∫_X Ω_h(x) dP_X(x) ≤ (1/C)·Σ_{u∈Y} ‖w_u*‖² + (q+4)·τ .

With the choice C* = (1/τ)·Σ_{u∈Y} ‖w_u*‖² and the condition C ≥ C* we obtain ∫_X Ω_h(x) dP_X(x) ≤ (q+5)·τ = ε.   □
Proof of Theorem 2. Up to constants, this short proof coincides with the proof of Theorem 2 in [6]. Because of the importance of the statement and the brevity of the proof we repeat it here:

Since ℓ·C_ℓ → ∞ there exists an integer ℓ₀ such that ℓ·C_ℓ ≥ C* for all ℓ ≥ ℓ₀. Thus for ℓ ≥ ℓ₀ Theorem 1 yields

Pr*{ T ∈ (X × Y)^ℓ | R(f_{T,k,C_ℓ}) ≤ R* + ε } ≥ 1 − q·M_ℓ·exp( −(1/8)·(τ⁶/M_ℓ²)·ℓ ) ,

where M_ℓ = D · N((X, d_k), τ/(2·√(ℓ·C_ℓ))). Moreover, by the assumption on the covering numbers of (X, d_k) we obtain M_ℓ² ∈ O((ℓ·C_ℓ)^α) and thus ℓ·M_ℓ^{−2} → ∞.   □
6
Conclusion
We have proven the universal consistency of the popular multi-class SVM by Crammer and Singer.
This result disproves the common belief that this machine is in general inconsistent. The proof itself
can be understood as an extension of Steinwart's universal consistency result for binary SVMs. Just
like there are different extensions of the binary SVM to multi-class classification in the literature,
we strongly believe that our proof can be further generalized to cover other multi-class machines,
such as the one proposed by Weston and Watkins, which is a possible direction for future research.
References
[1] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
[2] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265–292, 2002.
[3] S. Hill and A. Doucet. A framework for kernel-based multi-category classification. Journal of Artificial Intelligence Research, 30:525–564, 2007.
[4] Y. Lee, Y. Lin, and G. Wahba. Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 99(465):67–82, 2004.
[5] Y. Liu. Fisher consistency of multicategory support vector machines. Journal of Machine Learning Research, 2:291–298, 2007.
[6] I. Steinwart. Support vector machines are universally consistent. J. Complexity, 18(3):768–791, 2002.
[7] A. Tewari and P. L. Bartlett. On the consistency of multiclass classification methods. Journal of Machine Learning Research, 8:1007–1025, 2007.
[8] V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[9] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. In M. Verleysen, editor, Proceedings of the Seventh European Symposium on Artificial Neural Networks (ESANN), pages 219–224, 1999.
3,361 | 4,043 | Learning To Count Objects in Images
Andrew Zisserman
Visual Geometry Group
University of Oxford
Victor Lempitsky
Visual Geometry Group
University of Oxford
Abstract
We propose a new supervised learning framework for visual object counting tasks, such
as estimating the number of cells in a microscopic image or the number of humans in
surveillance video frames. We focus on the practically-attractive case when the training
images are annotated with dots (one dot per object).
Our goal is to accurately estimate the count. However, we evade the hard task of
learning to detect and localize individual object instances. Instead, we cast the problem
as that of estimating an image density whose integral over any image region gives the
count of objects within that region. Learning to infer such density can be formulated as
a minimization of a regularized risk quadratic cost function. We introduce a new loss
function, which is well-suited for such learning, and at the same time can be computed
efficiently via a maximum subarray algorithm. The learning can then be posed as a
convex quadratic program solvable with cutting-plane optimization.
The proposed framework is very flexible as it can accept any domain-specific visual
features. Once trained, our system provides accurate object counts and requires a very
small time overhead over the feature extraction step, making it a good candidate for
applications involving real-time processing or dealing with huge amounts of visual data.
1 Introduction
The counting problem is the estimation of the number of objects in a still image or video frame. It arises
in many real-world applications including cell counting in microscopic images, monitoring crowds in
surveillance systems, and performing wildlife census or counting the number of trees in an aerial image
of a forest.
We take a supervised learning approach to this problem, and so require a set of training images with
annotation. The question is what level of annotation is required? Arguably, the bare minimum of annotation is to provide the overall count of objects in each training image. This paper focusses on the next
level of annotation, which is to specify the object position by putting a single dot on each object instance
in each image. Figure 1 gives examples of the counting problems and the dotted annotation we consider.
Dotting (pointing) is the natural way to count objects for humans, at least when the number of objects is
large. It may be argued therefore that providing dotted annotations for the training images is no harder
for a human than giving just the raw counts. On the other hand, a spatial arrangement of the dots provides
a wealth of additional information, and this paper is, in part, about how to exploit this 'free lunch' (in the context of the counting problem). Overall, it should be noted that dotted annotation is less labour-intensive than the bounding-box annotation, let alone pixel-accurate annotation, traditionally used by the
supervised methods in the computer vision community [15]. Therefore, the dotted annotation represents
an interesting and, perhaps, under-investigated case.
This paper develops a simple and general discriminative learning-based framework for counting objects
in images. Similar to global regression methods (see below), it also evades the hard problem of detecting
all object instances in the images. However, unlike such methods, the approach also takes full and
extensive use of the spatial information contained in the dotted supervision.
The high-level idea of our approach is extremely simple: given an image I, our goal is to recover a
density function F as a real function of pixels in this image. Our notion of density function loosely
Figure 1: Examples of counting problems. Left: counting bacterial cells in a fluorescence-light microscopy image (from [29]); right: counting people in a surveillance video frame (from [10]). Close-ups are shown alongside the images. The bottom close-ups show examples of the dotted annotations (crosses). Our framework learns to
estimate the number of objects in the previously unseen images based on a set of training images of the same kind
augmented with dotted annotations.
corresponds to the physical notion of density as well as to the mathematical notion of measure. Given the estimate F of the density function and the query about the number of objects in the entire image I, the number of objects in the image is estimated by integrating F over the entire I. Furthermore, integrating the density over an image subregion S ⊂ I gives an estimate of the count of objects in that subregion.
Our approach assumes that each pixel p in an image is represented by a feature vector x_p and models the density function as a linear transformation of x_p: F(p) = w^T x_p. Given a set of training images, the parameter vector w is learnt in the regularized risk framework, so that the density function estimates for the training images match the ground truth densities inferred from the user
regularization on w).
The key conceptual difficulty with the density function is the discrete nature of both image observations
(pixel grid) and, in particular, the user training annotation (sparse set of dots). As a result, while it is
easy to reason about average densities over the extended image regions (e.g. the whole image), the notion
of density is not well-defined at a pixel level. Thus, given a set of dotted annotations there is no trivial answer to the question of what the ground truth density for a training example should be. Consequently,
this local ambiguity also renders standard pixel-based distances between density functions inappropriate
for the regularized risk framework.
Our main contribution, addressing this conceptual difficulty, is a specific distance metric D between
density functions used as a loss in our framework, which we call the MESA distance (where MESA
stands for Maximum Excess over SubArrays, as well as for the geological term for the elevated plateau).
This distance possess two highly desirable properties:
1. Robustness. The MESA distance is robust to the additive local perturbations of its arguments such
as independent noise or high-frequency signal as long as the integrals (counts) of these perturbations
over larger region are close to zero. Thus, it does not matter much how exactly we define the ground
truth density locally, as long as the integrals of the ground truth density over the larger regions reflect the
counts correctly. We can then naturally define the ?ground truth? density for a dotted annotation to be a
sum of normalized gaussians centered at the dots.
2. Computability. The MESA distance can be computed exactly via an efficient combinatorial algorithm (maximum sub-array [8]). Plugging it into the regularized risk framework then leads to a convex
quadratic program for estimating w. While this program has a combinatorial number of linear constraints, the cutting-plane procedure finds a close approximation to the globally optimal w after a small number of iterations.
The proposed approach is highly versatile. As virtually no assumptions are made about the features x_p, our framework can benefit from much of the research on good features for object detection. Thus,
the confidence maps produced by object detectors or the scene explanations resulting from fitting the
generative models can be turned into features and used by our method.
1.1 Related work.
A number of approaches tackle counting problems in an unsupervised way, performing grouping based
on self-similarities [3] or motion similarities [27]. However, the counting accuracy of such fully unsupervised methods is limited, and therefore others considered approaches based on supervised learning.
Those fall into two categories:
Input: 6 and 10
Detection: 6 and unclear
Density: 6.52 and 9.37
Figure 2: Processing results for a previously unseen image. Left: a fragment of the microscopy image. Emphasized are the two rectangles containing 6 and 10 cells respectively. Middle: the confidence map produced by an SVM-based detector; 6 peaks are clearly discernible for the 1st rectangle, but the number of peaks in the 2nd rectangle is unclear. Right: the density map that our approach produces. The integrals over the rectangles (6.52 and 9.37) are close to the correct number of cells. (MATLAB jet colormap is used.)
Counting by detection: This assumes the use of a visual object detector, that localizes individual object
instances in the image. Given the localizations of all instances, counting becomes trivial. However,
object detection is very far from being solved [15], especially for overlapping instances. In particular,
most current object detectors operate in two stages: first producing a real-valued confidence map; and
second, given such a map, further thresholding and non-maximum suppression steps are needed to locate peaks corresponding to individual instances [12, 26]. More generative approaches avoid non-maximum suppression by reasoning about relations between object parts and instances [6, 14, 20, 33,
34], but they are still geared towards a situation with a small number of objects in images and require
time-consuming inference. Alternatively, several methods assume that objects tend to be uniform and
disconnected from each other by the distinct background color, so that it is possible to localize individual
instances via a Monte-Carlo process [13], morphological analysis [5, 29] or variational optimization [25].
Methods in these groups deliver accurate counts when their underlying assumptions are met but are not
applicable in more challenging situations.
Counting by regression: These methods avoid solving the hard detection problem. Instead, a direct
mapping from some global image characteristics (mainly histograms of various features) to the number
of objects is learned. Such a standard regression problem can be addressed by a multitude of machine
learning tools (e.g. neural networks [11, 17, 22]). This approach, however, has to discard any available information about the locations of the objects (dots), using only their 1-dimensional statistic (the total number) for learning. Finally, counting-by-segmentation methods [10, 28] can be regarded as hybrids of
during training. Finally, counting by segmentation methods [10, 28] can be regarded as hybrids of
counting-by-detection and counting-by-regression approaches. They segment the objects into separate
clusters and then regress from the global properties of each cluster to the overall number of objects in it.
2 The Framework
We now provide the detailed description of our framework starting with the description of the learning
setting and notation.
2.1 Learning to Count
We assume that a set of N training images (pixel grids) I_1, I_2, ..., I_N is given. It is also assumed that each pixel p in each image I_i is associated with a real-valued feature vector x_p^i ∈ R^K. We give examples of the particular choices of the feature vectors in the experimental section. It is finally assumed that each training image I_i is annotated with a set of 2D points P_i = {P_1, ..., P_C(i)}, where C(i) is the
total number of objects annotated by the user.
The density functions in our approaches are real-valued functions over pixel grids, whose integrals over
image regions should match the object counts. For a training image Ii , we define the ground truth density
function to be a kernel density estimate based on the provided points:
$$\forall p \in I_i, \quad F_i^0(p) = \sum_{P \in \mathcal{P}_i} \mathcal{N}(p; P, \sigma^2 \mathbf{1}_{2 \times 2}) \, . \tag{1}$$
Here, p denotes a pixel, N(p; P, σ²·1_{2×2}) denotes a normalized 2D Gaussian kernel evaluated at p, with the mean at the user-placed dot P, and an isotropic covariance matrix with σ being a small value (typically, a few pixels). With this definition, the sum of the ground truth density Σ_{p∈I_i} F_i^0(p) over the
entire image will not match the dot count Ci exactly, as dots that lie very close to the image boundary
result in their Gaussian probability mass being partly outside the image. This is a natural and desirable
behaviour for most applications, as in many cases an object that lies partly outside the image boundary
should not be counted as a full object, but rather as a fraction of an object.
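As an illustration of equation (1) above, here is a minimal sketch (with assumed array shapes and parameter names, not the authors' released code) of how such a ground-truth density can be built by placing one normalized Gaussian per annotated dot:

# A minimal sketch of the ground-truth density of equation (1): a sum of
# normalized 2D Gaussians, one per annotated dot, evaluated on the pixel grid.
import numpy as np

def ground_truth_density(shape, dots, sigma=4.0):
    """shape: (H, W); dots: list of (row, col) annotations. The result sums to
    roughly the number of dots, minus mass falling outside the image."""
    H, W = shape
    rows, cols = np.mgrid[0:H, 0:W]
    F0 = np.zeros(shape, dtype=float)
    for (r, c) in dots:
        g = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2.0 * sigma ** 2))
        g /= 2.0 * np.pi * sigma ** 2  # full Gaussian integrates to 1
        F0 += g
    return F0

F0 = ground_truth_density((64, 64), [(10, 12), (40, 50), (62, 5)], sigma=4.0)
print(F0.sum())  # close to 3; the dot near the border contributes less than 1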
Given a set of training images together with their ground truth densities, we aim to learn the linear
transformation of the feature representation that approximates the density function at each pixel:
$$\forall p \in I_i, \quad F_i(p|w) = w^T x_p^i \, , \tag{2}$$
where w ∈ R^K is the parameter vector of the linear transform that we aim to learn from the training data, and F_i(·|w) is the estimate of the density function for a particular value of w. The regularized risk framework then suggests choosing w so that it minimizes the sum of the mismatches between the ground truth and the estimated density functions (the loss function) under regularization:
$$w = \operatorname{argmin}_w \Big( w^T w + \lambda \sum_{i=1}^{N} D\big( F_i^0(\cdot), F_i(\cdot|w) \big) \Big) \, , \tag{3}$$
Here, λ is a standard scalar hyperparameter, controlling the regularization strength. It is the only hyperparameter in our framework (in addition to those that might be used during feature extraction).
After the optimal weight vector has been learned from the training data, the system can produce a density
estimate for an unseen image I by a simple linear weighting of the feature vector computed in each pixel
as suggested by (2). The problem is thus reduced to choosing the right loss function D and computing
the optimal w in (3) under that loss.
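For concreteness, a minimal sketch of this inference step (with hypothetical shapes and stand-in data, not the authors' code): per-pixel features are mapped linearly through the learned w, as in equation (2), and the count estimate is the sum of the resulting density.

# Inference with learned weights: F(p | w) = w^T x_p at every pixel, and the
# object count is the sum of the density over the image.
import numpy as np

H, W, K = 48, 48, 32
rng = np.random.default_rng(0)
X = rng.random((H * W, K))        # stand-in for real per-pixel features
w = rng.normal(size=K)            # stand-in for the learned weight vector

density = (X @ w).reshape(H, W)   # per-pixel density estimate
print("estimated count:", density.sum())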
2.2 The MESA distance
The distance D in (3) measures the mismatch between the ground truth and the estimated densities (the
loss) and has a significant impact on the performance of the entire learning framework. There are two
natural choices for D:
• One can choose D to be some function of an L_p metric, e.g. the L_1 metric (sum of absolute per-pixel differences) or a square of the L_2 metric (sum of squared per-pixel differences). Such choices turn (3) into standard regression problems (i.e. support vector regression and ridge regression for the L_1 and L_2^2 cases respectively), where each pixel in each training image effectively provides a sample in the training set. The problem with such a loss is that it is not directly related to the real quantity that we care about, i.e. the overall counts of objects in images. E.g. strong zero-mean noise would affect such a metric a lot, while the overall counts would be unaffected.
• As the overall count is what we ultimately care about, one may choose D to be an absolute or squared difference between the overall sums over the entire images for the two arguments, e.g. D(F_1(·), F_2(·)) = |Σ_{p∈I} F_1(p) − Σ_{p∈I} F_2(p)|. The use of such a pseudometric as a loss turns (3) into the counting-by-regression framework discussed in Section 1.1. Once again, we get either support vector regression (for the absolute differences) or ridge regression (for the squared differences), but now each training sample corresponds to the entire training image. Thus, although this choice of the loss matches our ultimate goal of learning to count very well, it requires many annotated images for training, as spatial information in the annotation is discarded.
Given the significant drawbacks of both baseline distance measures, we suggest an alternative, which we call the MESA distance. Given an image I, the MESA distance D_MESA between two functions F_1(p) and F_2(p) on the pixel grid is defined as the largest absolute difference between sums of F_1(p) and F_2(p) over all box subarrays in I:
$$D_{\text{MESA}}(F_1, F_2) = \max_{B \in \mathcal{B}} \Big| \sum_{p \in B} F_1(p) - \sum_{p \in B} F_2(p) \Big| \tag{4}$$
Here, B is the set of all box subarrays of I.
The MESA distance (in fact, a metric) can be regarded as an L_∞ distance between combinatorially-long vectors of subarray sums. In the 1D case, it is related to the Kolmogorov-Smirnov distance between probability distributions [23] (in our terminology, the Kolmogorov-Smirnov distance is the maximum of absolute differences over the subarrays with one corner fixed at the top-left; thus a strict subset of B is considered in the Kolmogorov-Smirnov case).
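The following brute-force reference implementation of equation (4), practical only for small grids and written here purely for illustration, enumerates all box subarrays via 2D prefix sums:

# Brute-force MESA distance of equation (4) over all axis-aligned boxes.
# O(H^2 * W^2) time; intended as a correctness reference for tiny grids only.
import numpy as np

def mesa_bruteforce(F1, F2):
    D = F1 - F2
    S = D.cumsum(axis=0).cumsum(axis=1)   # 2D prefix sums
    S = np.pad(S, ((1, 0), (1, 0)))       # pad so every box sum is 4 lookups
    H, W = D.shape
    best = 0.0
    for r0 in range(H):
        for r1 in range(r0 + 1, H + 1):
            for c0 in range(W):
                for c1 in range(c0 + 1, W + 1):
                    s = S[r1, c1] - S[r0, c1] - S[r1, c0] + S[r0, c0]
                    best = max(best, abs(s))
    return best

rng = np.random.default_rng(1)
F1, F2 = rng.random((6, 6)), rng.random((6, 6))
print(mesa_bruteforce(F1, F2))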
[Figure 3 panels, left to right: original | noise added | σ increased | dots jittered | dots removed | dots reshuffled]
Figure 3: Comparison of distances for matching density functions. Here, the top-left image shows one of the
densities, computed as the ground truth density for a set of dots. The densities in the top row are obtained through
some perturbations of the original one. In the bottom row, we compare side-by-side the per-pixel L1 distance, the
absolute difference of overall counts, and the MESA distance between the original and the perturbed densities (the
distances are normalized across the 5 examples). The MESA distance has the unique property that it tolerates local modifications (noise, jitter, change of the Gaussian kernel), but reacts strongly to a change in the number of objects
or their positions. In the middle row we give per-pixel plots of the differences between the respective densities and
show the boxes on which the maxima in the definition of the MESA distance are achieved.
The MESA distance has a number of desirable properties in our framework. Firstly, it is directly related
to the counting objective we want to optimize. Since the set of all subarrays includes the full image, D_MESA(F_1, F_2) is an upper bound on the absolute difference of the overall count estimates given by the two densities F_1 and F_2. Secondly, when the two density functions differ by a zero-mean high-frequency signal or an independent zero-mean noise, the D_MESA distance between them is small, because positive and negative deviations of F_1 from F_2 at different pixels tend to cancel each other over large regions. Thirdly, D_MESA is sensitive to the overall spatial layout of the densities. Thus, if the difference between F_1 and
F2 is a low-frequency signal, e.g. F1 and F2 are the ground truth densities corresponding to the two point
sets leaning towards two different corners of the image, then the DMESA distance between F1 and F2 is
large, even if F1 and F2 sum to the same counts over the entire image. These properties are illustrated in
Figure 3.
The final property of D_MESA is that it can be computed efficiently. This is because it can be rewritten as:
$$D_{\text{MESA}}(F_1, F_2) = \max\Big( \max_{B \in \mathcal{B}} \sum_{p \in B} \big( F_1(p) - F_2(p) \big), \;\; \max_{B \in \mathcal{B}} \sum_{p \in B} \big( F_2(p) - F_1(p) \big) \Big) \, . \tag{5}$$
Computing both inner maxima in (5) then constitutes a 2D maximum subarray problem, which is finding the box subarray of a given 2D array with the largest sum. This problem has a number of efficient solutions. Perhaps the simplest of the efficient ones (from [8]) is an exhaustive search over one image dimension (e.g. for the top and bottom dimensions of the optimal subarray) combined with dynamic programming (Kadane's algorithm [7]) to solve the 1D maximum subarray problem along the other dimension in the inner loop. This approach has complexity O(|I|^1.5), where |I| is the number of pixels in the image grid. It can be further improved in practice by replacing the exhaustive search over the first dimension with branch-and-bound [4]. More extensive algorithms that guarantee even better worst-case complexity are known [31]. In our experiments, the algorithm [8] was sufficient, as the time bottleneck lay in the QP solver (see below).
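A sketch of the O(|I|^1.5) scheme just described (exhaustive search over row ranges plus Kadane's algorithm along columns; an array-based illustration, not the authors' code) is given below; mesa() then implements equation (5):

# 2D maximum subarray via exhaustive row-range search + 1D Kadane, as in [8].
import numpy as np

def kadane(v):
    """Maximum sum of a (possibly empty) contiguous run of v."""
    best = cur = 0.0
    for x in v:
        cur = max(0.0, cur + x)
        best = max(best, cur)
    return best

def max_subarray_2d(D):
    H, W = D.shape
    best = 0.0
    for top in range(H):
        col_sums = np.zeros(W)
        for bottom in range(top, H):
            col_sums += D[bottom]      # column sums over rows top..bottom
            best = max(best, kadane(col_sums))
    return best

def mesa(F1, F2):
    return max(max_subarray_2d(F1 - F2), max_subarray_2d(F2 - F1))

rng = np.random.default_rng(2)
F1, F2 = rng.random((20, 20)), rng.random((20, 20))
print(mesa(F1, F2))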
2.3 Optimization
We finally discuss how the optimization problem in (3) can be solved in the case when the DMESA distance
is employed. The learning problem (3) can then be rewritten as a convex quadratic program:
$$\min_{w, \, \xi_1, \dots, \xi_N} \; w^T w + \lambda \sum_{i=1}^{N} \xi_i \, , \quad \text{subject to} \tag{6}$$
$$\forall i, \; \forall B \in \mathcal{B}_i : \quad \xi_i \geq \sum_{p \in B} \Big( F_i^0(p) - w^T x_p^i \Big) \, , \qquad \xi_i \geq \sum_{p \in B} \Big( w^T x_p^i - F_i^0(p) \Big) \tag{7}$$
Here, ξ_i are the auxiliary slack variables (one for each training image) and B_i is the set of all subarrays in image i. At the optimum of (6)–(7), the optimal vector ŵ is the solution of (3) while the slack variables equal the MESA distances: ξ̂_i = D_MESA(F_i^0(·), F_i(·|ŵ)).
The number of linear constraints in (7) is combinatorial, so that a custom QP-solver cannot be applied directly. A standard iterative cutting-plane procedure, however, overcomes this problem: one starts with only a small subset of constraints activated (we choose 20 boxes with random dimensions in a random subset of images to initialize the process). At each iteration, the QP (6)–(7) is solved with an active subset of constraints. Given the solution w^j, ξ_1^j, ..., ξ_N^j after iteration j, one can find the box subarrays corresponding to the most violated constraints among (7). To do that, for each image we find the subarrays that maximize the right hand sides of (7), which are exactly the 2D maximum subarrays of F_i^0(·) − F_i(·|w^j) and F_i(·|w^j) − F_i^0(·) respectively.
The boxes B_i^{1,j} and B_i^{2,j} corresponding to these maximum subarrays are then found for each image i. If the respective sums Σ_{p∈B_i^{1,j}} (F_i^0(p) − (w^j)^T x_p^i) and Σ_{p∈B_i^{2,j}} ((w^j)^T x_p^i − F_i^0(p)) exceed ξ_i^j · (1 + ε), the corresponding constraints are activated, and the next iteration is performed. The iterations terminate when for all images the sums corresponding to the maximum subarrays are within a (1 + ε) factor of ξ_i^j and hence no constraints are activated. In the derivation here, ε ≪ 1 is a constant that promotes convergence in a small number of iterations to an approximation of the global minimum. Setting ε to 0 solves the program (6)–(7) exactly, while it has been shown in similar circumstances [16] that setting ε to a small finite value does not affect the generalization of the learning algorithm and brings guarantees of convergence in a small number of steps.
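A compact sketch of this cutting-plane loop is given below. It is not the authors' implementation: it assumes cvxpy is available for the inner QP, assumes feats[i] stores the (H, W, K) pixel features of image i and F0[i] its ground-truth density, and omits the random-box initialization for brevity.

# Cutting-plane training for problem (6)-(7), sketched with cvxpy for the QP.
import numpy as np
import cvxpy as cp

def best_box(D):
    """Max-sum box of D (Kadane over row ranges); returns (sum, r0, r1, c0, c1)."""
    H, W = D.shape
    best, box = 0.0, (0, 1, 0, 1)
    for r0 in range(H):
        acc = np.zeros(W)
        for r1 in range(r0, H):
            acc += D[r1]
            cur, start = 0.0, 0
            for c in range(W):
                if cur <= 0.0:
                    cur, start = 0.0, c
                cur += acc[c]
                if cur > best:
                    best, box = cur, (r0, r1 + 1, start, c + 1)
    return best, box

def train(feats, F0, lam=0.1, eps=0.01, iters=50):
    N, K = len(feats), feats[0].shape[-1]
    w_val = np.zeros(K)
    cons = [[] for _ in range(N)]          # per image: (a, b) with xi_i >= b - a @ w
    for _ in range(iters):
        violated = False
        for i in range(N):
            resid = F0[i] - feats[i] @ w_val
            for sgn in (+1.0, -1.0):       # both constraint directions in (7)
                gap, (r0, r1, c0, c1) = best_box(sgn * resid)
                a = sgn * feats[i][r0:r1, c0:c1].reshape(-1, K).sum(axis=0)
                b = sgn * F0[i][r0:r1, c0:c1].sum()
                xi_i = max((bb - aa @ w_val for aa, bb in cons[i]), default=0.0)
                if gap > xi_i * (1.0 + eps) + 1e-9:
                    cons[i].append((a, b)); violated = True
        if not violated:
            break                          # all constraints satisfied up to (1 + eps)
        w = cp.Variable(K)
        xi = cp.Variable(N, nonneg=True)
        cs = [xi[i] >= b - a @ w for i in range(N) for a, b in cons[i]]
        cp.Problem(cp.Minimize(cp.sum_squares(w) + lam * cp.sum(xi)), cs).solve()
        w_val = w.value
    return w_val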
3 Experiments
Our framework and several baselines were evaluated on counting tasks for two types of imagery shown in
Figure 1. We now discuss the experiments and the quantitative results. The test datasets and the densities
computed with our method can be further assessed qualitatively at the project webpage [1].
Bacterial cells in fluorescence-light microscopy images. Our first experiment is concerned with synthetic images, emulating microscopic views of colonies of bacterial cells, generated with [19] (Figure 1-left). Such synthetic images are highly realistic and simulate such effects as cell
overlaps, shape variability, strong out-of-focus blur, vignetting, etc. For the experiments, we generated a
dataset of images (available at [1]), with the overall number of cells varying between 74 and 317. Few
annotated datasets with real cell microscopy images also exist. While it is tempting to use real rather
than synthetic imagery, all the real image datasets to the best of our knowledge are small (only a few images have annotations), and, most importantly, there are always very big discrepancies between the annotations of different human experts. The latter effectively invalidates the use of such real datasets for
quantitative comparison of different counting approaches.
Below we discuss the comparison of the counting accuracy achieved by our approach and baseline approaches. The features used in all approaches were based on the dense SIFT descriptor [21], computed using the [32] software at each pixel of each image with a fixed SIFT frame radius (about the size of the
cell) and fixed orientation. Each algorithm was trained on N training images, while another N images
were used for the validation of metaparameters. The following approaches were considered:
1. The proposed density-based approach. A very simple feature representation was chosen: a codebook
of K entries was constructed via k-means on SIFT descriptors extracted from 20 hold-out images. Then each pixel is represented by a vector of length K, which is 1 at the dimension corresponding to the codebook entry of the SIFT descriptor at that pixel and 0 for all other dimensions. We used training images to learn the vector w as discussed in Section 2.1. Counting is then performed by summing the values w_t assigned to the codebook entries t over all pixels in the test image. Figure 2-right gives an example of the
respective density (see also [1]).
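A sketch of this feature encoding (an sklearn-based stand-in, not the released code; the codebook fitting on hold-out images is only indicated) might look as follows:

# One-hot bag-of-words pixel features: dense per-pixel SIFT descriptors are
# quantized against a k-means codebook; each pixel becomes a one-hot vector.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def onehot_codebook_features(descriptors, n_words=256, codebook=None):
    """descriptors: (H, W, 128) dense SIFT; returns (H, W, n_words) features."""
    H, W, D = descriptors.shape
    flat = descriptors.reshape(-1, D)
    if codebook is None:  # in the real pipeline, fit on hold-out descriptors
        codebook = MiniBatchKMeans(n_clusters=n_words, n_init=3).fit(flat)
    words = codebook.predict(flat)
    feats = np.zeros((H * W, n_words))
    feats[np.arange(H * W), words] = 1.0
    return feats.reshape(H, W, n_words), codebook

descs = np.random.rand(32, 32, 128)  # stand-in for real dense SIFT
feats, cb = onehot_codebook_features(descs, n_words=64)
print(feats.shape, feats.sum(axis=-1).min())  # every pixel is exactly one word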
Method                     Validation    N = 1        N = 2        N = 4        N = 8        N = 16       N = 32
linear ridge regression    counting      67.3±25.2    37.7±14.0    16.7±3.1     8.8±1.5      6.4±0.7      5.9±0.5
kernel ridge regression    counting      60.4±16.5    38.7±17.0    18.6±5.0     10.4±2.5     6.0±0.8      5.2±0.3
detection                  counting      28.0±20.6    20.8±5.8     13.6±1.5     10.2±1.9     10.4±1.2     8.5±0.5
detection                  detection     20.8±3.8     20.1±5.5     15.7±2.0     15.0±4.1     11.8±3.1     12.0±0.8
detection+correction       counting      N/A          22.6±5.3     16.8±6.5     6.8±1.2      6.1±1.6      4.9±0.5
density learning           counting      12.7±7.3     7.8±3.7      5.0±0.5      4.6±0.6      4.2±0.4      3.6±0.2
density learning           MESA          9.5±6.1      6.3±1.2      4.9±0.6      4.9±0.7      3.8±0.2      3.5±0.2
Table 1: Mean absolute errors for cell counting on the test set of 100 fluorescent microscopy images. The
rows correspond to the methods described in the text. The second column corresponds to the error measure used
for learning meta-parameters on the validation set. The last 6 columns correspond to the numbers of images in
the training and validation sets. The average number of cells is 171±64 per image. Standard deviations in the
table correspond to 5 different draws of training and validation image sets. The proposed method (density learning)
outperforms considerably the baseline approaches (including the application-specific baseline with the error rate =
16.2) for all sizes of the training set.
Method                           'maximal'   'downscale'   'upscale'   'minimal'   'dense'      'sparse'
Counting-by-Regression [17]      2.07        2.66          2.78        N/A         N/A          N/A
Counting-by-Regression [28]      1.80        2.34          2.52        4.46        N/A          N/A
Counting-by-Segmentation [28]    1.53        1.64          1.84        1.31        N/A          N/A
Density learning                 1.70        1.28          1.59        2.02        1.78±0.39    2.06±0.59
Table 2: Mean absolute errors for people counting in the surveillance video [10]. The columns correspond to the four scenarios (splits) reproduced from [28] ('maximal', 'downscale', 'upscale', 'minimal') and to the two new sets of splits ('dense' and 'sparse'). Our method outperforms counting-by-regression methods and is competitive with the hybrid method in [28], which uses more detailed annotation.
2. The counting-by-regression baseline. Each of the training images was described by a global histogram of codebook entry occurrences, for the same codebook as above. We then learned two types of regression (ridge regression with linear and Gaussian kernels) from this histogram to the number of cells in the image.
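A minimal sketch of this baseline (hypothetical stand-in data; sklearn estimators assumed) is:

# Counting by regression: global codebook histogram -> object count.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.kernel_ridge import KernelRidge

def global_histogram(onehot_feats):          # (H, W, K) one-hot -> (K,) histogram
    return onehot_feats.sum(axis=(0, 1))

rng = np.random.default_rng(0)
K, N = 64, 16
H_train = rng.random((N, K)) * 100           # stand-ins for real histograms
counts = rng.integers(74, 318, size=N)       # per-image cell counts

linear = Ridge(alpha=1.0).fit(H_train, counts)
kernel = KernelRidge(alpha=1.0, kernel="rbf", gamma=1e-3).fit(H_train, counts)
print(linear.predict(H_train[:2]), kernel.predict(H_train[:2]))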
3. The counting-by-detection baseline. We trained a detector based on a linear SVM classifier. The SIFT
descriptors corresponding to the dotted pixels were considered positive examples. To sample negative
examples, we built a Delaunay triangulation on the dots and took SIFT descriptors corresponding to the
pixels at the middle of Delaunay edges. At detection time, we applied the SVM at each pixel, and then
found peaks in the resulting confidence map (e.g. Figure 2-middle) via non-maximum suppression with a detection threshold and suppression radius, using the code [18]. We also considered a variant with a linear correction of the obtained number to account for systematic biases (detection+correction). The slope and the intercept of the correction for each combination of the threshold, the radius, and the regularization strength were estimated via robust regression on the union of the training and validation sets.
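The peak-finding step of this baseline can be sketched as follows (a scipy-based stand-in; the paper uses the code of [18], which may differ in details):

# Non-maximum suppression: peaks are local maxima of the confidence map above
# a threshold, within a given radius; the number of peaks is the count estimate.
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(conf, threshold, radius):
    footprint = np.ones((2 * radius + 1, 2 * radius + 1), dtype=bool)
    is_max = conf == maximum_filter(conf, footprint=footprint)
    return np.argwhere(is_max & (conf > threshold))  # (row, col) detections

conf = np.random.default_rng(3).random((50, 50))
print(len(find_peaks(conf, threshold=0.95, radius=4)))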
4. Application-specific method [29]. We also evaluated the software specifically designed for analyzing
cells in fluorescence-light images [29]. The counting algorithm here is based on adaptive thresholding
and morphological analysis. For this baseline, we tuned the free parameter (cell division threshold) on
the test set, and computed the mean absolute error, which was 16.2.
The meta-parameters (K, regularization strengths, Gaussian kernel width for ridge regression, the threshold and radius for non-maximum suppression) were learned in each case on the validation set. The objective minimized
during the validation was counting accuracy. For counting-by-detection, we also considered optimizing
detection accuracy (computed via Hungarian matching with the ground truth), and, for our approach, we
also considered minimizing the MESA distance with the ground truth density on the validation set.
The results for different numbers N of training and validation images are given in Table 1, based on 5 random draws of training and validation sets. A hold-out set of 100 images was used for testing. The
proposed method outperforms the baseline approaches for all sizes of the training set.
Pedestrians in surveillance video. Here we focus on a 2000-frame video dataset [10] from a camera overlooking a busy pedestrian street (Figure 1-right). The authors of [10] also provided the dotted ground
truth for these frames, the position of the ground plane, and the region of interest, where the counts
should be performed. Recently, [28] performed extensive experiments on the dataset and reported the
performance of three approaches (two counting-by-regression including [17] and the hybrid approach:
split into blobs, and regress the number for each blob). The hybrid approach in [28] required more
detailed annotations than dotting (see [28] for details). For the sake of comparison, we adhered to the
experimental protocols described in [28], so that the performance of our method is directly comparable.
In particular, 4 train/test splits were suggested in [28]: 1) 'maximal': train on frames 600:5:1400 (in Matlab notation); 2) 'downscale': train on frames 1205:5:1600 (the most crowded); 3) 'upscale': train on frames 805:5:1100 (the least crowded); 4) 'minimal': train on frames 640:80:1360 (10 frames). Testing is
performed on the frames outside the training range. For future reference, we also included two additional
scenarios ('dense' and 'sparse') with multiple similar splits in each (permitting variance estimation). Both scenarios are based on splitting the 2000 frames into 5 contiguous chunks of 400 frames. In each of the two scenarios, we then performed training on one chunk and testing on the other 4. In the 'dense' scenario we trained on 80 frames sampled from the training split with uniform spacing, while in the 'sparse' scenario, we took just 10 frames.
Extracting features in this case is more involved, as several modalities, namely the image itself, the difference image with the previous frame, and the background-subtracted image, have to be combined to achieve the best performance (a simple median filtering was used to estimate the static background image). We used a randomized tree approach similar to [24] to get features combining these modalities. Thus, we first extracted the primary features at each pixel, including the absolute differences with the previous frame and the background, the image intensity, and the absolute values of the x- and y-derivatives. On
the training subset of the smallest 'minimal' split, we then trained a random forest [9] with 5 randomized trees. The training objective was the regression from the appearance of each pixel and its neighborhood to the ground truth density. For each pixel at test time, the random forest performs a series of simple tests comparing the value of a particular primary channel at a location defined by a particular offset with a particular threshold, while during forest pretraining the number of the channel, the offset, and the threshold are randomized. Given the pretrained forest, each pixel p gets assigned a vector x_p of dimension equal to the total number of leaves in all trees, with ones corresponding to the leaves in each of the five trees the pixel falls into and zeros otherwise. Finally, to account for the perspective distortion, we multiplied x_p by the square of the depth of the ground plane at p (provided with the sequence).
Within each scenario, we allocated one-fifth of the training frames to pick λ and the tree depth through validation via the MESA distance.
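A sketch of this leaf-indicator encoding (an sklearn random forest as a stand-in for the 5-tree forest described above; shapes and names are assumptions, not the authors' code) is:

# Leaf-indicator pixel features: push each pixel down every tree and encode
# the leaves it reaches; one slot per tree node (only leaf slots are ever set).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X_primary = rng.random((5000, 9))     # per-pixel primary features (stand-in)
y_density = rng.random(5000)          # ground-truth density values (stand-in)

forest = RandomForestRegressor(n_estimators=5, max_depth=8).fit(X_primary, y_density)
leaves = forest.apply(X_primary)      # (n_pixels, n_trees) leaf node indices

n_nodes = [t.tree_.node_count for t in forest.estimators_]
offsets = np.concatenate([[0], np.cumsum(n_nodes)[:-1]])
feats = np.zeros((X_primary.shape[0], sum(n_nodes)))
feats[np.arange(len(leaves))[:, None], leaves + offsets] = 1.0

depth_sq = rng.random(len(feats)) ** 2  # squared ground-plane depth per pixel
feats *= depth_sq[:, None]              # perspective compensation
print(feats.shape)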
The quantitative comparison in Table 2 demonstrates the competitiveness of our method.
Overall comments. In both sets of experiments, we tried two strategies for setting σ (the kernel width in the definition of the ground truth densities): setting σ = 0 (effectively, the ground truth is then a sum of delta-functions), and setting σ = 4 (roughly comparable with the object half-size in both experiments). In the first case (cells) both strategies gave almost the same results for all N, highlighting the insensitivity of our approach to the choice of σ (see also Figure 3 on that). The results in Table 1 are for σ = 0. In the second case (pedestrians), σ = 4 had an edge over σ = 0, and the results in Table 2 are for that value.
At train time, we observed that the cutting plane algorithm converged in a few dozen iterations (less than 100 for our choice ε = 0.01). The use of a general-purpose quadratic solver [2] meant that the training times were considerable (from several seconds to a few hours depending on the value of λ and the size of the training set). We anticipate a big reduction in training time for a purpose-built solver. At test time, our approach introduces virtually no time overhead over feature extraction. E.g. in the case of pedestrians, one can store the value w_t computed during learning at each leaf t in each tree, so that counting would require simply 'pushing' each pixel down the forest, and summing the resulting w_t from the obtained leaves. This can be done in real-time [30].
4 Conclusion
We have presented a general framework for learning to count objects in images. While our ultimate goal is the counting accuracy over the entire image, during learning our approach optimizes a loss based on the MESA distance. This loss involves counting accuracy over multiple subarrays of the entire image (and not only the entire image itself). We demonstrate that given a limited amount of training
data, such an approach achieves much higher accuracy than optimizing the counting accuracy over the
entire image directly (counting-by-regression). At the same time, the fact that we avoid the hard problem
of detecting and discerning individual object instances gives our approach an edge over the counting-by-detection method in our experiments.
Acknowledgements. This work is supported by EU ERC grant VisRec no. 228180. V. Lempitsky is
also supported by Microsoft Research projects in Russia. We thank Prof. Jiri Matas (CTU Prague) for
suggesting the detection+correction baseline.
References
[1] http://www.robots.ox.ac.uk/%7Evgg/research/counting/index.html.
[2] The MOSEK optimization software. http://www.mosek.com/.
[3] N. Ahuja and S. Todorovic. Extracting texels in 2.1D natural textures. ICCV, pp. 1–8, 2007.
[4] S. An, P. Peursum, W. Liu, and S. Venkatesh. Efficient algorithms for subwindow search in object detection and localization. CVPR, pp. 264–271, 2009.
[5] D. Anoraganingrum. Cell segmentation with median filter and mathematical morphology operation. Image Analysis and Processing, International Conference on, 0:1043, 1999.
[6] O. Barinova, V. Lempitsky, and P. Kohli. On the detection of multiple object instances using Hough transforms. CVPR, 2010.
[7] J. L. Bentley. Programming pearls: Algorithm design techniques. Comm. ACM, 27(9):865–871, 1984.
[8] J. L. Bentley. Programming pearls: Perspective on performance. Comm. ACM, 27(11):1087–1092, 1984.
[9] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[10] A. B. Chan, Z.-S. J. Liang, and N. Vasconcelos. Privacy preserving crowd monitoring: Counting people without people models or tracking. CVPR, 2008.
[11] S.-Y. Cho, T. W. S. Chow, and C.-T. Leung. A neural-based crowd estimation by hybrid global learning algorithm. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 29(4):535–541, 1999.
[12] C. Desai, D. Ramanan, and C. Fowlkes. Discriminative models for multi-class object layout. ICCV, 2009.
[13] X. Descombes, R. Minlos, and E. Zhizhina. Object extraction using a stochastic birth-and-death dynamics in continuum. Journal of Mathematical Imaging and Vision, 33(3):347–359, 2009.
[14] L. Dong, V. Parameswaran, V. Ramesh, and I. Zoghlami. Fast crowd segmentation using shape indexing. ICCV, pp. 1–8, 2007.
[15] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2009 (VOC2009) Results. http://pascallin.ecs.soton.ac.uk/challenges/VOC/voc2009/workshop/index.html.
[16] T. Joachims, T. Finley, and C.-N. J. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27–59, 2009.
[17] D. Kong, D. Gray, and H. Tao. A viewpoint invariant approach for crowd counting. ICPR (3), pp. 1187–1190, 2006.
[18] P. D. Kovesi. MATLAB and Octave functions for computer vision and image processing. School of Computer Science & Software Engineering, The University of Western Australia. Available from: http://www.csse.uwa.edu.au/~pk/research/matlabfns/.
[19] A. Lehmussola, P. Ruusuvuori, J. Selinummi, H. Huttunen, and O. Yli-Harja. Computational framework for simulating fluorescence microscope images with cell populations. IEEE Trans. Med. Imaging, 26(7):1010–1016, 2007.
[20] B. Leibe, A. Leonardis, and B. Schiele. Robust object detection with interleaved categorization and segmentation. International Journal of Computer Vision, 77(1-3):259–289, 2008.
[21] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
[22] A. N. Marana, S. A. Velastin, L. F. Costa, and R. A. Lotufo. Estimation of crowd density using image processing. Image Processing for Security Applications, pp. 1–8, 1997.
[23] F. J. Massey. The Kolmogorov-Smirnov test for goodness of fit. Journal of the American Statistical Association, 46(253):68–78, 1951.
[24] F. Moosmann, B. Triggs, and F. Jurie. Fast discriminative visual codebooks using randomized clustering forests. NIPS, pp. 985–992, 2006.
[25] S. K. Nath, K. Palaniappan, and F. Bunyak. Cell segmentation using coupled level sets and graph-vertex coloring. MICCAI (1), pp. 101–108, 2006.
[26] T. W. Nattkemper, H. Wersing, W. Schubert, and H. Ritter. A neural network architecture for automatic segmentation of fluorescence micrographs. Neurocomputing, 48(1-4):357–367, 2002.
[27] V. Rabaud and S. Belongie. Counting crowded moving objects. CVPR (1), pp. 705–711, 2006.
[28] D. Ryan, S. Denman, C. Fookes, and S. Sridharan. Crowd counting using multiple local features. DICTA '09: Proceedings of the 2009 Digital Image Computing: Techniques and Applications, pp. 81–88, 2009.
[29] J. Selinummi, J. Seppala, O. Yli-Harja, and J. A. Puhakka. Software for quantification of labeled bacteria from digital microscope images by automated image analysis. Biotechniques, 39(6):859–63, 2005.
[30] T. Sharp. Implementing decision trees and forests on a GPU. ECCV (4), pp. 595–608, 2008.
[31] H. Tamaki and T. Tokuyama. Algorithms for the maximum subarray problem based on matrix multiplication. SODA, pp. 446–452, 1998.
[32] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms. http://www.vlfeat.org/, 2008.
[33] B. Wu, R. Nevatia, and Y. Li. Segmentation of multiple, partially occluded objects by grouping, merging, assigning part detection responses. CVPR, 2008.
[34] T. Zhao and R. Nevatia. Bayesian human segmentation in crowded situations. CVPR (2), pp. 459–466, 2003.
multiplied:1 leibe:1 simulating:1 occurrence:1 fowlkes:1 subtracted:1 alternative:1 robustness:1 original:3 assumes:2 denotes:2 top:4 include:1 clustering:1 pushing:1 exploit:1 giving:1 especially:1 prof:1 objective:3 matas:1 question:2 arrangement:1 quantity:1 added:1 strategy:2 primary:2 unclear:2 microscopic:3 distance:29 separate:1 thank:1 geological:1 street:1 portable:1 trivial:2 reason:1 length:1 code:1 index:2 providing:1 minimizing:1 liang:1 frank:1 negative:2 design:1 upper:1 yli:2 observation:1 datasets:4 discarded:1 finite:1 ramesh:1 situation:3 extended:1 emulating:1 variability:1 frame:19 locate:1 perturbation:3 sharp:1 community:1 intensity:1 inferred:1 venkatesh:1 cast:1 required:2 namely:1 extensive:3 security:1 learned:4 hour:1 pearl:2 nip:1 trans:1 leonardis:1 suggested:2 alongside:1 below:3 mismatch:2 challenge:2 program:5 built:2 including:4 max:4 video:6 explanation:1 gool:1 bi2:1 overlap:1 difficulty:2 natural:4 regularized:5 hybrid:5 solvable:1 quantification:1 localizes:1 library:1 finley:1 coupled:1 bare:1 text:1 l2:1 acknowledgement:1 multiplication:1 loss:11 fully:1 interesting:1 fluorescent:1 filtering:1 validation:12 digital:2 sufficient:1 xp:6 thresholding:2 viewpoint:1 leaning:1 pi:2 row:4 eccv:1 placed:1 last:1 free:2 supported:1 side:3 bias:1 fall:2 absolute:12 sparse:5 fifth:1 benefit:1 van:1 boundary:2 dimension:8 depth:2 world:1 stand:1 author:1 made:1 qualitatively:1 adaptive:1 subwindow:1 rabaud:1 counted:1 far:1 ec:1 transaction:1 excess:1 cutting:5 overcomes:1 dealing:1 global:6 active:1 visrec:1 conceptual:2 summing:2 assumed:2 belongie:1 consuming:1 discriminative:3 alternatively:1 search:3 iterative:1 table:7 channel:2 terminate:1 nature:1 learn:3 robust:3 forest:9 investigated:1 domain:1 protocol:1 pk:1 main:1 dense:5 bounding:1 whole:1 noise:5 big:2 augmented:1 ahuja:1 sub:1 position:3 candidate:1 lie:2 weighting:1 learns:1 dozen:1 rk:2 down:1 specific:4 emphasized:1 barinova:1 sift:6 offset:2 svm:3 multitude:1 grouping:2 workshop:1 merging:1 effectively:3 ci:1 texture:1 suited:1 simply:1 appearance:1 visual:8 highlighting:1 contained:1 tracking:1 partially:1 scalar:1 pretrained:1 corresponds:3 truth:18 extracted:2 acm:2 lempitsky:3 goal:4 formulated:1 consequently:1 towards:2 man:1 considerable:1 hard:4 change:2 included:1 specifically:1 wersing:1 wt:8 total:3 partly:2 experimental:2 people:4 support:2 latter:1 arises:1 assessed:1 meant:1 violated:1 |
3,362 | 4,044 | Subgraph Detection Using Eigenvector L1 Norms
Nadya T. Bliss
Lincoln Laboratory
Massachusetts Institute of Technology
Lexington, MA 02420
[email protected]
Benjamin A. Miller
Lincoln Laboratory
Massachusetts Institute of Technology
Lexington, MA 02420
[email protected]
Patrick J. Wolfe
Statistics and Information Sciences Laboratory
Harvard University
Cambridge, MA 02138
[email protected]
Abstract
When working with network datasets, the theoretical framework of detection theory for Euclidean vector spaces no longer applies. Nevertheless, it is desirable to
determine the detectability of small, anomalous graphs embedded into background
networks with known statistical properties. Casting the problem of subgraph detection in a signal processing context, this article provides a framework and empirical results that elucidate a 'detection theory' for graph-valued data. Its focus is the detection of anomalies in unweighted, undirected graphs through L1 properties of the eigenvectors of the graph's so-called modularity matrix. This metric is observed to have relatively low variance for certain categories of randomly-generated
graphs, and to reveal the presence of an anomalous subgraph with reasonable reliability when the anomaly is not well-correlated with stronger portions of the
background graph. An analysis of subgraphs in real network datasets confirms the
efficacy of this approach.
1 Introduction
A graph G = (V, E) denotes a collection of entities, represented by vertices V , along with some
relationship between pairs, represented by edges E. Due to this ubiquitous structure, graphs are used
in a variety of applications, including the natural sciences, social network analysis, and engineering.
While this is a useful and popular way to represent data, it is difficult to analyze graphs in the
traditional statistical framework of Euclidean vector spaces.
In this article we investigate the problem of detecting a small, dense subgraph embedded into an
unweighted, undirected background. We use L1 properties of the eigenvectors of the graph's modularity matrix to determine the presence of an anomaly, and show empirically that this technique has
reasonable power to detect a dense subgraph where lower connectivity would be expected.
In Section 2 we briefly review previous work in the area of graph-based anomaly detection. In
Section 3 we formalize our notion of graph anomalies, and describe our experimental regime. In
Section 4 we give an overview of the modularity matrix and observe how its eigenstructure plays
a role in anomaly detection. Sections 5 and 6 respectively detail subgraph detection results on
simulated and actual network data, and in Section 7 we summarize and outline future research.
2 Related Work
The area of anomaly detection has, in recent years, expanded to graph-based data [1, 2]. The work of
Noble and Cook [3] focuses on finding a subgraph that is dissimilar to a common substructure in the
network. Eberle and Holder [4] extend this work using the minimum description length heuristic to
determine a "normative pattern" in the graph from which the anomalous subgraph deviates, basing 3 detection algorithms on this property. This work, however, does not address the kind of anomaly we describe in Section 3; our background graphs may not have such a "normative pattern" that occurs over a significant amount of the graph. Research into anomaly detection in dynamic graphs by Priebe et al. [5] uses the history of a node's neighborhood to detect anomalous behavior, but this
is not directly applicable to our detection of anomalies in static graphs.
There has been research on the use of eigenvectors of matrices derived from the graphs of interest
to detect anomalies. In [6] the angle of the principal eigenvector is tracked in a graph representing
a computer system, and if the angle changes by more than some threshold, an anomaly is declared
present. Network anomalies are also dealt with in [7], but here it is assumed that each node in the
network has some highly correlated time-domain input. Since we are dealing with simple graphs,
this method is not general enough for our purposes. Also, we want to determine the detectability of
small anomalies that may not have a significant impact on one or two principal eigenvectors.
There has been a significant amount of work on community detection through spectral properties of
graphs [8, 9, 10]. Here we specifically aim to detect small, dense communities by exploiting these
same properties. The approach taken here is similar to that of [11], in which graph anomalies are
detected by way of eigenspace projections. We here focus on smaller and more subtle subgraph
anomalies that are not immediately revealed in a graph?s principal components.
3 Graph Anomalies
As in [12, 11], we cast the problem of detecting a subgraph embedded in a background as one of detecting a signal in noise. Let G_B = (V, E) denote the background graph; a network in which there exists no anomaly. This functions as the "noise" in our system. We then define the anomalous subgraph (the "signal") G_S = (V_S, E_S) with V_S ⊆ V. The objective is then to evaluate the following binary hypothesis test; to decide between the null hypothesis H0 and alternate hypothesis H1:

H0: The observed graph is "noise" G_B
H1: The observed graph is "signal+noise" G_B ∪ G_S.

Here the union of the two graphs G_B ∪ G_S is defined as G_B ∪ G_S = (V, E ∪ E_S).
In our simulations, we formulate our noise and signal graphs as follows. The background graph G_B is created by a graph generator, such as those outlined in [13], with a certain set of parameters. We then create an anomalous "signal" graph G_S to embed into the background. We select the vertex subset V_S from the set of vertices in the network and embed G_S into G_B by updating the edge set to be E ∪ E_S. We apply our detection algorithm to graphs with and without the embedding present to evaluate its performance.
4 The Modularity Matrix and its Eigenvectors
Newman's notion of the modularity matrix [8] associated with an unweighted, undirected graph G is given by

B := A − (1/(2|E|)) K K^T.    (1)

Here A = {a_ij} is the adjacency matrix of G, where a_ij is 1 if there is an edge between vertex i and vertex j and is 0 otherwise; and K is the degree vector of G, where the ith component of K is the number of edges adjacent to vertex i. If we assume that edges from one vertex are equally likely to be shared with all other vertices, then the modularity matrix is the difference between the "actual" and "expected" number of edges between each pair of vertices.
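As a concrete illustration (not part of the original paper), a minimal NumPy sketch of (1) might read as follows; the function name is ours:

    import numpy as np

    def modularity_matrix(A):
        # A: symmetric 0/1 adjacency matrix of an unweighted, undirected graph.
        K = A.sum(axis=1)                    # degree vector K
        # B = A - K K^T / (2|E|); note that K.sum() = 2|E|.
        return A - np.outer(K, K) / K.sum()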
Figure 1: Scatterplots of an R-MAT generated graph projected into spaces spanned by two eigenvectors of its modularity matrix, with each point representing a vertex. The graph with no embedding
(a) and with an embedded 8-vertex clique (b) look the same in the principal components, but the
embedding is visible in the eigenvectors corresponding to the 18th and 21st largest eigenvalues (c).
This is also very similar to the matrix used as an "observed-minus-expected" model in [14] to analyze the spectral properties of random graphs.
Since B is real and symmetric, it admits the eigendecomposition B = U Λ U^T, where U ∈ R^{|V|×|V|} is a matrix where each column is an eigenvector of B, and Λ is a diagonal matrix of eigenvalues. We denote by λ_i, 1 ≤ i ≤ |V|, the eigenvalues of B, where λ_i ≥ λ_{i+1} for all i, and by u_i the unit-magnitude eigenvector corresponding to λ_i.
Newman analyzed the eigenvalues of the modularity matrix to determine if the graph can be split
into two separate communities. As demonstrated in [11], analysis of the principal eigenvectors of
B can also reveal the presence of a small, tightly-connected component embedded in a large graph.
This is done by projecting B into the space of its two principal eigenvectors, calculating a Chi-squared test statistic, and comparing this to a threshold. Figure 1(a) demonstrates the projection of
an R-MAT Kronecker graph [15] into the principal components of its modularity matrix.
Small graph anomalies, however, may not reveal themselves in this subspace. Figure 1(b) demonstrates an 8-vertex clique embedded into the same background graph. In the space of the two principal eigenvectors, the symmetry of the projection looks the same as in Figure 1(a). The foreground
vertices are not at all separated from the background vertices, and the symmetry of the projection has
not changed (implying no change in the test statistic). Considering only this subspace, the subgraph
of interest cannot be detected reliably; its inward connectivity is not strong enough to stand out in
the two principal eigenvectors.
The fact that the subgraph is absorbed into the background in the space of u1 and u2 , however, does
not imply that it is inseparable in general; only in the subspace with the highest variance. Borrowing
language from signal processing, there may be another "channel" in which the anomalous signal
subgraph can be separated from the background noise. There is in fact a space spanned by two
eigenvectors in which the 8-vertex clique stands out: in the space of the u18 and u21 , the two
eigenvectors with the largest components in the rows corresponding to VS , the subgraph is clearly
separable from the background, as shown in Figure 1(c).
4.1 Eigenvector L1 Norms
The subgraph detection technique we propose here is based on L1 properties of the eigenvectors of the graph's modularity matrix, where the L1 norm of a vector x = [x_1 ··· x_N]^T is ‖x‖_1 := Σ_{i=1}^N |x_i|. When a vector is closely aligned with a small number of axes, i.e., if |x_i| is only large for a few values of i, then its L1 norm will be smaller than that of a vector of the same magnitude where this is not the case. For example, if x ∈ R^1024 has unit magnitude and only has nonzero components along two of the 1024 axes, then ‖x‖_1 = √2. If it has a component of equal magnitude along all axes, then ‖x‖_1 = 32 (= √1024). This property has been exploited in the past in a graph-theoretic setting, for finding maximal cliques [16, 17].
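These two cases are easy to verify numerically; a short NumPy check (our example, with arbitrary variable names):

    import numpy as np

    x_sparse = np.zeros(1024)
    x_sparse[:2] = 1 / np.sqrt(2)       # unit magnitude, two nonzero axes
    x_flat = np.full(1024, 1 / 32.0)    # unit magnitude, spread over all axes
    print(np.linalg.norm(x_sparse, 1))  # 1.414... = sqrt(2)
    print(np.linalg.norm(x_flat, 1))    # 32.0 = sqrt(1024)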
This property can also be useful when detecting anomalous clustering behavior. If there is a subgraph G_S that is significantly different from its expectation, this will manifest itself in the modularity matrix as follows.
Figure 2: L1 analysis of modularity matrix eigenvectors. Under the null model, ‖u18‖_1 has the distribution in (a). With an 8-vertex clique embedded, ‖u18‖_1 falls far from its average value, as shown in (b).
The subgraph G_S has a set of vertices V_S, which is associated with a set of indices corresponding to rows and columns of the adjacency matrix A. Consider the vector x ∈ {0, 1}^N, where x_i is 1 if v_i ∈ V_S and x_i = 0 otherwise. For any S ⊆ V and v ∈ V, let d_S(v) denote the number of edges between the vertex v and the vertex set S. Also, let d_S(S') := Σ_{v∈S'} d_S(v) and d(v) := d_V(v). We then have

‖Bx‖_2^2 = Σ_{v∈V} ( d_{V_S}(v) − d(v) d(V_S)/d(V) )^2,    (2)

x^T B x = d_{V_S}(V_S) − d^2(V_S)/d(V),    (3)

and ‖x‖_2 = √|V_S|. Note that d(V) = 2|E|. A natural interpretation of (2) is that Bx represents the difference between the actual and expected connectivity to V_S across the entire graph, and likewise (3) represents this difference within the subgraph. If x is an eigenvector of B, then of course x^T B x/(‖Bx‖_2 ‖x‖_2) = 1. Letting each subgraph vertex have uniform internal and external degree, this ratio approaches 1 as Σ_{v∈V∖V_S} ( d_{V_S}(v) − d(v) d(V_S)/d(V) )^2 is dominated by Σ_{v∈V_S} ( d_{V_S}(v) − d(v) d(V_S)/d(V) )^2. This suggests that if V_S is much more dense than a typical subset of background vertices, x is likely to be well-correlated with an eigenvector of B. (This becomes more complicated when there are several eigenvalues that are approximately d_{V_S}(V_S)/|V_S|, but this typically occurs for smaller graphs than are of interest.) Newman made a similar observation: that the magnitude of a vertex's component in an eigenvector is related to the "strength" with which it is a member of the associated community. Thus if a small set of vertices forms a community, with few belonging to other communities, there will be an eigenvector well aligned with this set, and this implies that the L1 norm of this eigenvector would be smaller than that of an eigenvector with a similar eigenvalue when there is no anomalously dense subgraph.
4.2 Null Model Characterization
To examine the L1 behavior of the modularity matrix's eigenvectors, we performed the following experiment. Using the R-MAT generator we created 10,000 graphs with 1024 vertices, an average degree of 6 (the result being an average degree of about 12 since we make the graph undirected), and a probability matrix

P = [ 0.5 0.125 ; 0.125 0.25 ].

For each graph, we compute the modularity matrix B and its eigendecomposition. We then compute ‖u_i‖_1 for each i and store this value as part of our background statistics. Figure 2(a) demonstrates the distribution of ‖u18‖_1. The distribution has a slight left skew, but has a tight variance (a standard deviation of 0.35) and no large deviations from the mean under the null (H0) model.
After compiling background data, we computed the mean and standard deviation of the L1 norms for each u_i. Let μ_i be the average of ‖u_i‖_1 and σ_i be its standard deviation. Using the R-MAT graph with the embedded 8-vertex clique, we observed eigenvector L1 norms as shown in Figure 2(b). In the figure we plot ‖u_i‖_1 as well as μ_i, μ_i + 3σ_i and μ_i − 3σ_i. The vast majority of eigenvectors have L1 norms close to the mean for the associated index. There are very few cases with a deviation from the mean of greater than 3σ. Note also that σ_i decreases with decreasing i. This suggests that the community formation inherent in the R-MAT generator creates components strongly associated with the eigenvectors with larger eigenvalues.
The one outlier is u18, which has an L1 norm that is over 10 standard deviations away from the mean. Note that u18 is the horizontal axis in Figure 1(c), which by itself provides significant separation between the subgraph and the background. Simple L1 analysis would certainly reveal the presence of this particular embedding.
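The background statistics μ_i and σ_i can be tabulated with a short Monte Carlo loop. The sketch below is our reading of the procedure, reusing the modularity_matrix helper from above and leaving the null-model generator abstract (any callable that returns an adjacency matrix):

    import numpy as np

    def l1_background_stats(sample_adjacency, n_trials, k):
        # sample_adjacency: callable returning one adjacency matrix drawn from H0.
        norms = np.empty((n_trials, k))
        for t in range(n_trials):
            B = modularity_matrix(sample_adjacency())
            w, U = np.linalg.eigh(B)          # eigenvalues in ascending order
            U = U[:, ::-1][:, :k]             # k eigenvectors of largest eigenvalues
            norms[t] = np.abs(U).sum(axis=0)  # ||u_i||_1 for i = 1..k
        return norms.mean(axis=0), norms.std(axis=0)  # mu[i], sigma[i]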
5 Embedded Subgraph Detection
With the L1 properties detailed in Section 4 in mind, we propose the following method to determine
the presence of an embedding. Given a graph G, compute the eigendecomposition of its modularity
matrix. For each eigenvector, calculate its L1 norm, subtract its expected value (computed from the
background statistics), and normalize by its standard deviation. If any of these modified L1 norms
is less than a certain threshold (since the embedding makes the L1 norm smaller), H1 is declared,
and H0 is declared otherwise. Pseudocode for this detection algorithm is provided in Algorithm 1.
Algorithm 1 L1SUBGRAPHDETECTION
Input: Graph G = (V, E), Integer k, Numbers ℓ1_min, μ[1..k], σ[1..k]
B ← MODMAT(G)
U ← EIGENVECTORS(B, k)    ⟨⟨k eigenvectors of B⟩⟩
for i ← 1 to k do
    m[i] ← (‖u_i‖_1 − μ[i]) / σ[i]
    if m[i] < ℓ1_min then
        return H1    ⟨⟨declare the presence of an embedding⟩⟩
    end if
end for
return H0    ⟨⟨no embedding found⟩⟩
We compute the eigenvectors of B using eigs in MATLAB, which has running time O(|E|kh + |V|k²h + k³h), where h is the number of iterations required for eigs to converge [10]. While the modularity matrix is not sparse, it is the sum of a sparse matrix and a rank-one matrix, so we can still compute its eigenvalues efficiently, as mentioned in [8]. Computing the modified L1 norms and comparing them to the threshold takes O(|V|k) time, so the complexity is dominated by the eigendecomposition.
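For reference, a direct Python transcription of Algorithm 1 (our sketch); we use a dense numpy.linalg.eigh for brevity where the paper uses MATLAB's sparse eigs, trading the stated complexity for simplicity:

    import numpy as np

    def l1_subgraph_detection(A, k, l1_min, mu, sigma):
        # Returns True for H1 (embedding declared) and False for H0.
        B = modularity_matrix(A)               # helper defined earlier
        w, U = np.linalg.eigh(B)
        U = U[:, ::-1][:, :k]                  # k principal eigenvectors of B
        m = (np.abs(U).sum(axis=0) - mu) / sigma   # modified L1 norms
        return bool(np.any(m < l1_min))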
The signal subgraphs are created as follows. In all simulations in this section, |V_S| = 8. For each simulation, a subgraph density of 70%, 80%, 90% or 100% is chosen. For subgraphs of this size and density, the method of [11] does not yield detection performance better than chance. The subgraph is created by, uniformly at random, selecting the chosen proportion of the (8 choose 2) = 28 possible edges. To determine where to embed the subgraph into the background, we find all vertices with at most 1, 3 or 5 edges and select 8 of these at random. The subgraph is then induced on these vertices.
For each density/external degree pair, we performed a 10,000-trial Monte Carlo simulation in which
we create an R-MAT background with the same parameters as the null model, embed an anomalous
subgraph as described above, and run Algorithm 1 with k = 100 to determine whether the embedding is detected. Figure 3 demonstrates detection performance in this experiment. In the receiver
operating characteristic (ROC), changing the L1 threshold (ℓ1_min in Algorithm 1) changes the position on the curve. Each curve corresponds to a different subgraph density. In Figure 3(a), each
vertex of the subgraph has 1 edge adjacent to the background. In this case the subgraph connectivity
is overwhelmingly inward, and the ROC curve reflects this. Also, the more dense subgraphs are
more detectable. When the external degree is increased so that a subgraph vertex may have up to
3 edges adjacent to the background, we see a decline in detection performance as shown in Figure
3(b). Figure 3(c) demonstrates the additional decrease in detection performance when the external
subgraph connectivity is increased again, to as much as 5 edges per vertex.
Figure 3: ROC curves for the detection of 8-vertex subgraphs in a 1024-vertex R-MAT background.
Performance is shown for subgraphs of varying density when each foreground vertex is connected
to the background by up to 1, 3 and 5 edges in (a), (b) and (c), respectively.
6 Subgraph Detection in Real-World Networks
To verify that we see similar properties in real graphs that we do in simulated ones, we analyzed
five data sets available in the Stanford Network Analysis Package (SNAP) database [18]. Each network is made undirected before we perform our analysis. The data sets used here are the Epinions
who-trusts-whom graph (Epinions, |V | = 75,879, |E| = 405,740) [19], the arXiv.org collaboration
networks on astrophysics (AstroPh, |V | = 18,722, |E| = 198,050) and condensed matter (CondMat,
|V |=23,133, |E|=93,439) [20], an autonomous system graph (asOregon, |V |=11,461, |E|=32,730)
[21] and the Slashdot social network (Slashdot, |V |=82,168, |E|=504,230) [22]. For each graph, we
compute the top 110 eigenvectors of the modularity matrix and the L1 norm of each. Comparing
each L1 sequence to a "smoothed" (i.e., low-pass filtered) version, we choose the two eigenvectors that deviate the most from this trend, except in the case of Slashdot, where there is only one
significant deviation.
Plots of the L1 norms and scatterplots in the space of the two eigenvectors that deviate most are
shown in Figure 4. The eigenvectors declared are highlighted. Note that, with the exception of the
asOregon, we see as similar trend in these networks that we did in the R-MAT simulations, with
the L1 norms decreasing as the eigenvalues increase (the L1 trend in asOregon is fairly flat). Also,
with the exception of Slashdot, each dataset has a few eigenvectors with much smaller norms than
those with similar eigenvalues (Slashdot decreases gradually, with one sharp drop at the maximum
eigenvalue).
The subgraphs detected by L1 analysis are presented in Table 1. Two subgraphs are chosen for each
dataset, corresponding to the highlighted points in the scatterplots in Figure 4. For each subgraph
we list the size (number of vertices), density (internal degree divided by the maximum number of
edges), external degree, and the eigenvector that separates it from the background. The subgraphs
are quite dense, at least 80% in each case.
To determine whether a detected subgraph is anomalous with respect to the rest of the graph, we
sample the network and compare the sample graphs to the detected subgraphs in terms of density
and external degree. For each detected subgraph, we take 1 million samples with the same number
of vertices. Our sampling method consists of doing a random walk and adding all neighbors of each
vertex in the path. We then count the number of samples with density above a certain threshold
and external degree below another threshold. These thresholds are the parenthetical values in the
4th and 5th columns of Table 1. Note that the thresholds are set so that the detected subgraphs
comfortably meet them. The 6th column lists the number of samples out of 1 million that satisfy
both thresholds. In each case, far less than 1% of the samples meet the criteria. For the Slashdot
dataset, no sample was nearly as dense as the two subgraphs we selected by thresholding along the
principal eigenvector. After removing samples that are predominantly correlated with the selected
eigenvectors, we get the parenthetical values in the same column. In most cases, all of the samples
meeting the thresholds are correlated with the detected eigenvectors. Upon further inspection, those
remaining are either correlated with another eigenvector that deviates from the overall L1 trend, or
correlated with multiple eigenvectors, as we discuss in the next section.
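The random-walk sampler used above can be sketched as follows; the exact stopping rule and the trimming to the target size are our assumptions, since the text does not spell them out:

    import random

    def random_walk_sample(adj, n):
        # adj: dict mapping each vertex to the list of its neighbors.
        v = random.choice(list(adj))
        sample = {v}
        while len(sample) < n and adj[v]:
            sample.update(adj[v])        # add all neighbors of the current vertex
            v = random.choice(adj[v])    # step the walk
            sample.add(v)
        return sorted(sample)[:n]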
Figure 4: Eigenvector L1 norms in real-world network data (left column), and scatterplots of the projection into the subspace defined by the indicated eigenvectors (right column). Panels: (a) Epinions L1 norms; (b) Epinions scatterplot; (c) AstroPh L1 norms; (d) AstroPh scatterplot; (e) CondMat L1 norms; (f) CondMat scatterplot; (g) asOregon L1 norms; (h) asOregon scatterplot; (i) Slashdot L1 norms; (j) Slashdot scatterplot.
dataset     eigenvector   subgraph size   subgraph (sample) density   subgraph (sample) external degree   # samples that meet threshold
Epinions    u36           34              80% (70%)                   721 (1000)                          46 (0)
Epinions    u45           27              83% (75%)                   869 (1200)                          261 (6)
AstroPh     u57           30              100% (90%)                  93 (125)                            853 (0)
AstroPh     u106          24              100% (90%)                  73 (100)                            944 (0)
CondMat     u29           19              100% (90%)                  2 (50)                              866 (0)
CondMat     u36           20              83% (75%)                   70 (120)                            1596 (0)
asOregon    u6            15              96% (85%)                   1089 (1500)                         23 (0)
asOregon    u32           6               93% (80%)                   177 (200)                           762 (393)
Slashdot    u1 > 0.08     36              95% (90%)                   10570 (–)                           0 (0)
Slashdot    u1 > 0.07     51              89% (80%)                   12713 (–)                           0 (0)

Table 1: Subgraphs detected by L1 analysis, and a comparison with randomly-sampled subgraphs in the same network.
Figure 5: An 8-vertex clique that does not create an anomalously small L1 norm in any eigenvector.
The scatterplot looks similar to one in which the subgraph is detectable, but is rotated.
7 Conclusion
In this article we have demonstrated the efficacy of using eigenvector L1 norms of a graph's modularity matrix to detect small, dense anomalous subgraphs embedded in a background. Casting the
problem of subgraph detection in a signal processing context, we have provided the intuition behind
the utility of this approach, and empirically demonstrated its effectiveness on a concrete example:
detection of a dense subgraph embedded into a graph generated using known parameters. In real
network data we see trends similar to those we see in simulation, and examine outliers to see what
subgraphs are detected in real-world datasets.
Future research will include the expansion of this technique to reliably detect subgraphs that can be
separated from the background in the space of a small number of eigenvectors, but not necessarily
one. While the L1 norm itself can indicate the presence of an embedding, it requires the subgraph to
be highly correlated with a single eigenvector. Figure 5 demonstrates a case where considering multiple eigenvectors at once would likely improve detection performance. The scatterplot in this figure
looks similar to the one in Figure 1(c), but is rotated such that the subgraph is equally aligned with
the two eigenvectors into which the matrix has been projected. There is not significant separation in
any one eigenvector, so it is difficult to detect using the method presented in this paper. Minimizing
the L1 norm with respect to rotation in the plane will likely make the test more powerful, but could
prove computationally expensive. Other future work will focus on developing detectability bounds,
the application of which would be useful when developing detection methods like the algorithm
outlined here.
Acknowledgments
This work is sponsored by the Department of the Air Force under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions and recommendations are those of the author and are not necessarily endorsed by the United States Government.
References
[1] J. Sun, J. Qu, D. Chakrabarti, and C. Faloutsos, "Neighborhood formation and anomaly detection in bipartite graphs," in Proc. IEEE Int'l Conf. on Data Mining, Nov. 2005.
[2] J. Sun, Y. Xie, H. Zhang, and C. Faloutsos, "Less is more: Compact matrix decomposition for large sparse graphs," in Proc. SIAM Int'l Conf. on Data Mining, 2007.
[3] C. C. Noble and D. J. Cook, "Graph-based anomaly detection," in Proc. ACM SIGKDD Int'l Conf. on Knowledge Discovery and Data Mining, pp. 631-636, 2003.
[4] W. Eberle and L. Holder, "Anomaly detection in data represented as graphs," Intelligent Data Analysis, vol. 11, pp. 663-689, December 2007.
[5] C. E. Priebe, J. M. Conroy, D. J. Marchette, and Y. Park, "Scan statistics on enron graphs," Computational & Mathematical Organization Theory, vol. 11, no. 3, pp. 229-247, 2005.
[6] T. Idé and H. Kashima, "Eigenspace-based anomaly detection in computer systems," in Proc. KDD '04, pp. 440-449, 2004.
[7] S. Hirose, K. Yamanishi, T. Nakata, and R. Fujimaki, "Network anomaly detection based on eigen equation compression," in Proc. KDD '09, pp. 1185-1193, 2009.
[8] M. E. J. Newman, "Finding community structure in networks using the eigenvectors of matrices," Phys. Rev. E, vol. 74, no. 3, 2006.
[9] J. Ruan and W. Zhang, "An efficient spectral algorithm for network community discovery and its applications to biological and social networks," in Proc. IEEE Int'l Conf. on Data Mining, pp. 643-648, 2007.
[10] S. White and P. Smyth, "A spectral clustering approach to finding communities in graphs," in Proc. SIAM Data Mining Conf., 2005.
[11] B. A. Miller, N. T. Bliss, and P. J. Wolfe, "Toward signal processing theory for graphs and other non-Euclidean data," in Proc. IEEE Int'l Conf. on Acoustics, Speech and Signal Processing, pp. 5414-5417, 2010.
[12] T. Mifflin, "Detection theory on random graphs," in Proc. Int'l Conf. on Information Fusion, pp. 954-959, 2009.
[13] D. Chakrabarti and C. Faloutsos, "Graph mining: Laws, generators, and algorithms," ACM Computing Surveys, vol. 38, no. 1, 2006.
[14] F. Chung, L. Lu, and V. Vu, "The spectra of random graphs with given expected degrees," Proc. of National Academy of Sciences of the USA, vol. 100, no. 11, pp. 6313-6318, 2003.
[15] D. Chakrabarti, Y. Zhan, and C. Faloutsos, "R-MAT: A recursive model for graph mining," in Proc. Fourth SIAM Int'l Conference on Data Mining, vol. 6, pp. 442-446, 2004.
[16] T. S. Motzkin and E. G. Straus, "Maxima for graphs and a new proof of a theorem of Turán," Canad. J. Math., vol. 17, pp. 533-540, 1965.
[17] C. Ding, T. Li, and M. I. Jordan, "Nonnegative matrix factorization for combinatorial optimization: Spectral clustering, graph matching, and clique finding," in Proc. IEEE Int'l Conf. on Data Mining, pp. 183-192, 2008.
[18] J. Leskovec, "Stanford network analysis package." http://snap.stanford.edu.
[19] M. Richardson, R. Agrawal, and P. Domingos, "Trust management for the semantic web," in Proc. ISWC, 2003.
[20] J. Leskovec, J. Kleinberg, and C. Faloutsos, "Graph evolution: Densification and shrinking diameters," ACM Trans. on Knowledge Discovery from Data, vol. 1, no. 1, 2007.
[21] J. Leskovec, J. Kleinberg, and C. Faloutsos, "Graphs over time: Densification laws, shrinking diameters and possible explanations," in Proc. KDD '05, 2005.
[22] J. Leskovec, K. Lang, A. Dasgupta, and M. Mahoney, "Community structure in large networks: Natural cluster sizes and the absence of large well-defined clusters." arXiv.org:0810.1355, 2008.
Robust Clustering as Ensembles of Affinity Relations
Hairong Liu¹, Longin Jan Latecki², Shuicheng Yan¹
¹Department of Electrical and Computer Engineering, National University of Singapore, Singapore
²Department of Computer and Information Sciences, Temple University, Philadelphia, USA
[email protected], [email protected], [email protected]
Abstract
In this paper, we regard clustering as ensembles of k-ary affinity relations and
clusters correspond to subsets of objects with maximal average affinity relations.
The average affinity relation of a cluster is relaxed and well approximated by a
constrained homogeneous function. We present an efficient procedure to solve this
optimization problem, and show that the underlying clusters can be robustly revealed by using priors systematically constructed from the data. Our method can
automatically select some points to form clusters, leaving other points un-grouped;
thus it is inherently robust to large numbers of outliers, which has seriously limited
the applicability of classical methods. Our method also provides a unified solution to clustering from k-ary affinity relations with k ≥ 2, that is, it applies to both
graph-based and hypergraph-based clustering problems. Both theoretical analysis
and experimental results show the superiority of our method over classical solutions to the clustering problem, especially when there exists a large number of
outliers.
1 Introduction
Data clustering is a fundamental problem in many fields, such as machine learning, data mining and
computer vision [1]. Unfortunately, there is no universally accepted definition of a cluster, probably
because of the diverse forms of clusters in real applications. But it is generally agreed that the objects
belonging to a cluster satisfy a certain internal coherence condition, while the objects not belonging
to a cluster usually do not satisfy this condition.
Most of existing clustering methods are partition-based, such as k-means [2], spectral clustering
[3, 4, 5] and affinity propagation [6]. These methods implicitly share an assumption: every data
point must belong to a cluster. This assumption greatly simplifies the problem, since we do not
need to judge whether a data point is an outlier or not, which is very challenging. However, this
assumption also results in bad performance of these methods when there exists a large number of
outliers, as frequently met in many real-world applications.
The criteria to judge whether several objects belong to the same cluster or not are typically expressed
by pairwise relations, which is encoded as the weights of an affinity graph. However, in many
applications, high order relations are more appropriate, and may even be the only choice, which
naturally results in hyperedges in hypergraphs. For example, when clustering a given set of points
into lines, pairwise relations are not meaningful, since every pair of data points trivially defines
a line. However, for every three data points, whether they are near collinear or not conveys very
important information.
As the graph-based clustering problem has been well studied, many researchers have tried to deal with
hypergraph-based clustering by using existing graph-based clustering methods. One direction is
to transform a hypergraph into a graph, whose edge-weights are mapped from the weights of the
original hypergraph. Zien et al. [7] proposed two approaches called "clique expansion" and "star expansion", respectively, for such a purpose. Rodriguez [8] showed the relationship between the
spectral properties of the Laplacian matrix of the resulting graph and the minimum cut of the original hypergraph. Agarwal et al. [9] proposed the "clique averaging" method and reported better results than the "clique expansion" method. Another direction is to generalize graph-based clustering methods to hypergraphs. Zhou et al. [10] generalized the well-known "normalized cut" method [5] and defined a hypergraph normalized cut criterion for a k-partition of the vertices. Shashua et al.
[11] cast the clustering problem with high order relations into a nonnegative factorization problem
of the closest hyper-stochastic version of the input affinity tensor.
Based on game theory, Bulo and Pelillo [12] proposed to consider the hypergraph-based clustering
problem as a multi-player non-cooperative "clustering game" and solve it by the replicator equation,
which is in fact a generalization of their previous work [13]. This new formulation has a solid
theoretical foundation, possesses several appealing properties, and achieved state-of-art results. This
method is in fact a specific case of our proposed method, and we will discuss this point in Section 2.
In this paper, we propose a unified method for clustering from k-ary affinity relations, which is
applicable to both graph-based and hypergraph-based clustering problems. Our method is motivated
by an intuitive observation: for a cluster with m objects, there may exist (m choose k) possible k-ary affinity relations, and most of these (sometimes even all) k-ary affinity relations should agree with each other on the same criterion. For example, in the line clustering problem, for m points on the same line, there are (m choose 3) possible triplets, and all these triplets should satisfy the criterion that they lie on
a line. The ensemble of such a large number of affinity relations is hardly produced by outliers and is also very robust to noise, thus yielding a robust mechanism for clustering.
2 Formulation
Clustering from k-ary affinity relations can be intuitively described as clustering on a special kind of edge-weighted hypergraph, a k-graph. Formally, a k-graph is a triplet G = (V, E, w), where V = {1, ..., n} is a finite set of vertices, with each vertex representing an object, E ⊆ V^k is the set of hyperedges, with each hyperedge representing a k-ary affinity relation, and w : E → R is a weighting function which associates a real value (can be negative) with each hyperedge, with larger weights representing stronger affinity relations. We only consider the k-ary affinity relations with no duplicate objects, that is, the hyperedges among k different vertices. For hyperedges with duplicated vertices, we simply set their weights to zero.

Each hyperedge e ∈ E involves k vertices, thus can be represented as a k-tuple {v_1, ..., v_k}. The weighted adjacency array of graph G is an n × n × ··· × n (k times) super-symmetric array, denoted by M and defined as

M(v_1, ..., v_k) = w({v_1, ..., v_k}) if {v_1, ..., v_k} ∈ E, and 0 otherwise.    (1)

Note that each edge {v_1, ..., v_k} ∈ E has k! duplicate entries in the array M.
For a subset U ⊆ V with m vertices, its edge set is denoted as E_U. If U is really a cluster, then most of the hyperedges in E_U should have large weights. The simplest measure to reflect such an ensemble phenomenon is the sum of all entries in M whose corresponding hyperedges contain only vertices in U, which can be expressed as:

S(U) = Σ_{v_1,...,v_k ∈ U} M(v_1, ..., v_k).    (2)

Suppose y is an n × 1 indicator vector of the subset U, such that y_{v_i} = 1 if v_i ∈ U and zero otherwise; then S(U) can be expressed as:

S(U) = S(y) = Σ_{v_1,...,v_k ∈ V} M(v_1, ..., v_k) y_{v_1} ··· y_{v_k}.    (3)
Obviously, S(U) usually increases as the number of vertices in U increases. Since Σ_i y_i = m and there are m^k summands in S(U), the average of these entries can be expressed as:

S_av(U) = (1/m^k) S(y)
        = (1/m^k) Σ_{v_1,...,v_k ∈ V} M(v_1, ..., v_k) y_{v_1} ··· y_{v_k}
        = Σ_{v_1,...,v_k ∈ V} M(v_1, ..., v_k) (y_{v_1}/m) ··· (y_{v_k}/m)
        = Σ_{v_1,...,v_k ∈ V} M(v_1, ..., v_k) x_{v_1} ··· x_{v_k},    (4)

where x = y/m. As Σ_i y_i = m, Σ_i x_i = 1 is a natural constraint over x.
Intuitively, when U is a true cluster, S_av(U) should be relatively large. Thus, the clustering problem corresponds to the problem of maximizing S_av(U). In essence, this is a combinatorial optimization problem, since we know neither m nor which m objects to select. As this problem is NP-hard, to reduce its complexity, we relax x to be within a continuous range [0, ε], where ε ≤ 1 is a constant, while keeping the constraint Σ_i x_i = 1. Then the problem becomes:

max f(x) = Σ_{v_1,...,v_k ∈ V} M(v_1, ..., v_k) Π_{i=1}^k x_{v_i},
subject to x ∈ Δ^n and x_i ∈ [0, ε],    (5)

where Δ^n = {x ∈ R^n : x ≥ 0 and Σ_i x_i = 1} is the standard simplex in R^n. Note that S_av(x) is abbreviated by f(x) to simplify the formula.
The adoption of the ℓ1-norm in (5) not only lets x_i have an intuitive probabilistic meaning, that is, x_i represents the probability that the cluster contains the i-th object, but also makes the solution sparse, which means automatically selecting some objects to form a cluster, while ignoring other objects.

Relation to Clustering Game. In [12], Bulo and Pelillo proposed to cast the hypergraph-based clustering problem into a clustering game, which leads to a similar formulation as (5). In fact, their formulation is a special case of (5) when ε = 1. Setting ε < 1 means that the probability of choosing each strategy (from the game theory perspective) or choosing each object (from our perspective) has a known upper bound, which is in fact a prior, while ε = 1 represents a noninformative prior. This point is very essential in many applications; it avoids the phenomenon where some components of x dominate. For example, if the weight of a hyperedge is extremely large, then the cluster may only select the vertices associated with this hyperedge, which is usually not desirable. In fact, ε offers us a tool to control the least number of objects in a cluster. Since each component does not exceed ε, the cluster contains at least ⌈1/ε⌉ objects, where ⌈z⌉ denotes the smallest integer larger than or equal to z. Because of the constraint x_i ∈ [0, ε], the solution is also totally different from [12].
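For hypergraphs stored as a list of unordered weighted hyperedges, the objective f(x) of (5) can be evaluated directly; the sketch below (our code, not from the paper) folds the k! duplicate entries of the super-symmetric array M into a single term per hyperedge:

    import math
    import numpy as np

    def average_affinity(x, hyperedges, weights):
        # f(x) in (5): each unordered hyperedge {v1,...,vk} stands for the
        # k! identical entries of M.
        k = len(hyperedges[0])
        return math.factorial(k) * sum(
            w * np.prod(x[list(e)]) for e, w in zip(hyperedges, weights))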
3 Algorithm
Formulation (5) usually has many local maxima. Large maxima correspond to true clusters and small maxima usually form meaningless subsets. In this section, we first analyze the properties of the maximizer x*, which are critical in algorithm design, and then introduce our algorithm to calculate x*.

Since formulation (5) is a constrained optimization problem, by adding Lagrangian multipliers λ, μ_1, ..., μ_n and β_1, ..., β_n, with μ_i ≥ 0 and β_i ≥ 0 for all i = 1, ..., n, we can obtain its Lagrangian function:

L(x, λ, μ, β) = f(x) − λ(Σ_{i=1}^n x_i − 1) + Σ_{i=1}^n μ_i x_i + Σ_{i=1}^n β_i (ε − x_i).    (6)
The reward at vertex i, denoted by r_i(x), is defined as follows:

r_i(x) = Σ_{v_1,...,v_{k−1} ∈ V} M(v_1, ..., v_{k−1}, i) Π_{t=1}^{k−1} x_{v_t}.    (7)

Since M is a super-symmetric array, ∂f(x)/∂x_i = k r_i(x), i.e., r_i(x) is proportional to the gradient of f(x) at x.
Any local maximizer x* must satisfy the Karush-Kuhn-Tucker (KKT) conditions [14], i.e., the first-order necessary conditions for local optimality. That is,

k r_i(x*) − λ + μ_i − β_i = 0, i = 1, ..., n,
Σ_{i=1}^n x_i* μ_i = 0,
Σ_{i=1}^n (ε − x_i*) β_i = 0.    (8)

Since x_i*, μ_i and β_i are all nonnegative for all i's, Σ_{i=1}^n x_i* μ_i = 0 is equivalent to saying that if x_i* > 0, then μ_i = 0, and Σ_{i=1}^n (ε − x_i*) β_i = 0 is equivalent to saying that if x_i* < ε, then β_i = 0. Hence, the KKT conditions can be rewritten as:

r_i(x*) ≤ λ/k if x_i* = 0;
r_i(x*) = λ/k if 0 < x_i* < ε;
r_i(x*) ≥ λ/k if x_i* = ε.    (9)
According to x, the vertex set V can be divided into three disjoint subsets: V1(x) = {i | x_i = 0}, V2(x) = {i | x_i ∈ (0, ε)} and V3(x) = {i | x_i = ε}. Equation (9) characterizes the properties of the solution of (5), which are further summarized in the following theorem.

Theorem 1. If x* is the solution of (5), then there exists a constant θ (= λ/k) such that 1) the rewards at all vertices belonging to V1(x*) are not larger than θ; 2) the rewards at all vertices belonging to V2(x*) are equal to θ; and 3) the rewards at all vertices belonging to V3(x*) are not smaller than θ.

Proof: Since the KKT conditions are necessary conditions, according to (9), the solution x* must satisfy 1), 2) and 3).
The set of non-zero components is V_d(x) = V2(x) ∪ V3(x) and the set of components which are smaller than ε is V_u(x) = V1(x) ∪ V2(x). For any x, if we want to update it to increase f(x), then the values of some components belonging to V_d(x) must decrease and the values of some components belonging to V_u(x) must increase. According to Theorem 1, if x is the solution of (5), then r_i(x) ≤ r_j(x), ∀i ∈ V_u(x), ∀j ∈ V_d(x). On the contrary, if ∃i ∈ V_u(x), ∃j ∈ V_d(x) such that r_i(x) > r_j(x), then x is not the solution of (5). In fact, in such a case, we can increase x_i and decrease x_j to increase f(x).
That is, let

x'_l = x_l if l ≠ i and l ≠ j;  x'_l = x_l + α if l = i;  x'_l = x_l − α if l = j,    (10)

and define

r_ij(x) = Σ_{v_1,...,v_{k−2}} M(v_1, ..., v_{k−2}, i, j) Π_{t=1}^{k−2} x_{v_t}.    (11)

Then

f(x') − f(x) = −k(k − 1) r_ij(x) α² + k(r_i(x) − r_j(x)) α.    (12)

Since r_i(x) > r_j(x), we can always select a proper α > 0 to increase f(x). According to formula (10) and the constraint over x_i, α ≤ min(x_j, ε − x_i). Since r_i(x) > r_j(x), if r_ij(x) ≤ 0, then when α = min(x_j, ε − x_i), the increase of f(x) reaches its maximum; if r_ij(x) > 0, then when α = min(x_j, ε − x_i, (r_i(x) − r_j(x))/(2(k − 1) r_ij(x))), the increase of f(x) reaches its maximum.
According to the above analysis, if ∃i ∈ V_u(x), ∃j ∈ V_d(x) such that r_i(x) > r_j(x), then we can update x to increase f(x). This procedure iterates until r_i(x) ≤ r_j(x), ∀i ∈ V_u(x), ∀j ∈ V_d(x). From a prior (initialization) x(0), the algorithm to compute the local maximizer of (5) is summarized in Algorithm 1, which successively chooses the "best" vertex and the "worst" vertex and then updates their corresponding components of x.

Since significant maxima of formulation (5) usually correspond to true clusters, we need multiple initializations (priors) to obtain them, with at least one initialization in the basin of attraction of every significant maximum. Such informative priors can in fact be easily and efficiently constructed from the neighborhood of every vertex (vertices with hyperedges connecting to this vertex), because the neighbors of a vertex generally have much higher probabilities to belong to the same cluster.
Algorithm 1 Compute a local maximizer x* from a prior x(0)
1: Input: Weighted adjacency array M, prior x(0);
2: repeat
3:    Compute the reward r_i(x) for each vertex i;
4:    Compute V1(x(t)), V2(x(t)), V3(x(t)), V_d(x(t)), and V_u(x(t));
5:    Find the vertex i in V_u(x(t)) with the largest reward and the vertex j in V_d(x(t)) with the smallest reward;
6:    Compute α and update x(t) by formula (10) to obtain x(t + 1);
7: until x is a local maximizer
8: Output: The local maximizer x*.
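A straightforward, unoptimized Python rendering of Algorithm 1 is given below (our sketch). The rewards r_i(x) and r_ij(x) of (7) and (11) are accumulated over unordered hyperedges by folding in the (k−1)! and (k−2)! orderings; the iteration cap, tolerance, and empty-set guard are our additions:

    import math
    import numpy as np

    def rewards(x, hyperedges, weights, n, k):
        # r_i(x) of (7), accumulated over unordered hyperedges.
        r = np.zeros(n)
        c = math.factorial(k - 1)
        for e, w in zip(hyperedges, weights):
            for i in e:
                r[i] += c * w * np.prod(x[[v for v in e if v != i]])
        return r

    def r_ij(x, hyperedges, weights, i, j, k):
        # r_ij(x) of (11); only hyperedges containing both i and j contribute.
        c = math.factorial(max(k - 2, 0))
        return sum(c * w * np.prod(x[[v for v in e if v not in (i, j)]])
                   for e, w in zip(hyperedges, weights) if i in e and j in e)

    def local_maximizer(x, hyperedges, weights, eps, max_iter=10000, tol=1e-12):
        n, k = len(x), len(hyperedges[0])
        for _ in range(max_iter):
            r = rewards(x, hyperedges, weights, n, k)
            Vu = np.where(x < eps)[0]       # components allowed to grow
            Vd = np.where(x > 0)[0]         # components allowed to shrink
            if len(Vu) == 0 or len(Vd) == 0:
                break
            i = Vu[np.argmax(r[Vu])]
            j = Vd[np.argmin(r[Vd])]
            if r[i] <= r[j] + tol:          # the conditions of Theorem 1 hold
                break
            rij = r_ij(x, hyperedges, weights, i, j, k)
            alpha = min(x[j], eps - x[i])   # feasible step from (10)
            if rij > 0:                     # optimal step from (12)
                alpha = min(alpha, (r[i] - r[j]) / (2 * (k - 1) * rij))
            x[i] += alpha
            x[j] -= alpha
        return x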
Algorithm 2 Construct a prior x(0) containing vertex v
1: Input: Hyperedge set E(v) and ε;
2: Sort the hyperedges in E(v) in descending order according to their weights;
3: for i = 1, ..., |E(v)| do
4:    Add all vertices associated with the i-th hyperedge to L. If |L| ≥ ⌈1/ε⌉, then break;
5: end for
6: For each vertex v_j ∈ L, set the corresponding component x_{v_j}(0) = 1/|L|;
7: Output: a prior x(0).

For a vertex v, the set of hyperedges connected to v is denoted by E(v). We can construct a prior containing v from E(v), as described in Algorithm 2.
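Algorithm 2 is equally short in code; a sketch assuming E(v) is given as (hyperedge, weight) pairs, with v contained in every listed hyperedge:

    import math
    import numpy as np

    def construct_prior(incident, eps, n):
        # incident: list of (hyperedge, weight) pairs in E(v).
        L = set()
        for e, w in sorted(incident, key=lambda ew: ew[1], reverse=True):
            L.update(e)                       # add all vertices of the hyperedge
            if len(L) >= math.ceil(1 / eps):
                break
        x0 = np.zeros(n)
        x0[list(L)] = 1.0 / len(L)
        return x0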
Because of the constraint x_i ≤ ε, the initializations need to contain at least ⌈1/ε⌉ nonzero components. To cover the basins of attraction of more maxima, we expect these initializations to be located more uniformly in the space {x | x ∈ Δ^n, x_i ≤ ε}.
Since we can construct such a prior from every vertex, we can construct n priors in total. From
these n priors, according to Algorithm 1, we can obtain n maxima. The significant maxima of (5)
are usually among these n maxima, and a significant maximum may appear multiple times. In this
way, we can robustly obtain multiple clusters simultaneously, and these clusters may overlap, both
of which are desirable properties in many applications. Note that the clustering game approach [12]
utilizes a noninformative prior, that is, all vertices have equal probability. Thus, it cannot obtain
multiple clusters simultaneously. In the clustering game approach [12], if x_i(t) = 0, then x_i(t + 1) = 0,
which means that it can only drop points and if a point is initially not included, then it cannot be
selected. However, our method can automatically add or drop points, which is another key difference
to the clustering game approach.
In each iteration of Algorithm 1, we only need to consider two components of x, which makes
both the update of rewards and the update of x(t) very efficient. As f (x(t)) increases, the sizes of
V_u(x(t)) and V_d(x(t)) both decrease quickly, so f(x) converges to a local maximum quickly. Suppose the maximal number of hyperedges containing a certain vertex is h; then the time complexity of Algorithm 1 is O(thk), where t is the number of iterations. The total time complexity of our method is then O(nthk), since we need to run Algorithm 1 from n initializations.
4 Experiments
We evaluate our method on three types of experiments. The first one addresses the problem of line
clustering, the second addresses the problem of illumination-invariant face clustering, and the third
addresses the problem of affine-invariant point set matching. We compare our method with the clique averaging algorithm [9] and the clustering game approach [12]. In all experiments, the clique averaging approach needs to know the number of clusters in advance; however, both the clustering game approach and our method can automatically reveal the number of clusters, which yields the advantages of the
latter two in many applications.
4.1 Line Clustering
In this experiment, we consider the problem of clustering lines in 2D point sets. Pairwise similarity
measures are useless in this case, and at least three points are needed for characterizing such a
property. The dissimilarity measure on triplets of points is given by their mean distance to the best
fitting line. If d(i, j, k) is the dissimilarity measure of points {i, j, k}, then the similarity function is
given by s({i, j, k}) = exp(−d(i, j, k)²/σ_d²), where σ_d is a scaling parameter which controls the sensitivity of the similarity measure to deformation.
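One reasonable implementation of d(i, j, k), taking "best fitting line" in the total-least-squares sense (our interpretation), is sketched below:

    import numpy as np

    def line_similarity(p, q, r, sigma_d):
        # Mean distance of three 2-D points to their total-least-squares line,
        # mapped through s = exp(-d^2 / sigma_d^2).
        P = np.array([p, q, r], dtype=float)
        C = P - P.mean(axis=0)
        _, _, Vt = np.linalg.svd(C)     # last right singular vector = line normal
        d = np.abs(C @ Vt[-1]).mean()
        return np.exp(-d**2 / sigma_d**2)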
We randomly generate three lines within the region [−0.5, 0.5]², each line contains 30 points, and all these points have been perturbed by Gaussian noise N(0, σ). We also randomly add outliers to the point set. Fig. 1(a) illustrates such a point set with three lines shown in red, blue and green, respectively, and the outliers shown in magenta. To evaluate the performance, we ran all algorithms on the same data set over 30 trials with varying parameter values, and the performance is measured by F-measure.

We first fix the number of outliers to be 60, vary the scaling parameter σ_d from 0.01 to 0.14, and the result is shown in Fig. 1(b). For our method, we set ε = 1/30. Obviously, our method is nearly unaffected by the scaling parameter σ_d, while the clustering game approach is very sensitive to σ_d. Note that σ_d in fact controls the weights of the hyperedge graph, and many graph-based algorithms are notoriously sensitive to the weights of the graph. Instead, by setting a proper ε, our method overcomes this problem. From Fig. 1(b), we observe that when σ_d = 4σ, the clustering game approach obtains its best performance. Thus, we fix σ_d = 4σ and change the noise parameter σ from 0.01 to 0.1; the results of the clustering game approach, the clique averaging algorithm and our method are shown in blue, green and red in Fig. 1(c), respectively. As the figure shows, when the noise is small, the clustering game approach outperforms the clique averaging algorithm, and when the noise becomes large, the clique averaging algorithm outperforms the clustering game approach. This is because the clustering game approach is more robust to outliers, while the clique averaging algorithm seems more robust to noise. Our method always gives the best result, since it can not only select coherent clusters as the clustering game approach does, but also control the size of clusters, thus avoiding the problem of too few points being selected into clusters.

In Fig. 1(d) and Fig. 1(e), we vary the number of outliers from 10 to 100; the results clearly demonstrate that our method and the clustering game approach are robust to outliers, while the clique averaging algorithm is very sensitive to outliers, since it is a partition-based method and every point must be assigned to a cluster. To illustrate the influence of ε, we fix σ_d = σ = 0.02 and test the performance of our method under different ε; the result is shown in Fig. 1(f) (note that the x axis is 1/ε). As we stressed in Section 2, the clustering game approach is in fact a special case of our method when ε = 1; thus, the result at ε = 1 is nearly the same as the result of the clustering game approach in Fig. 1(b) under the same conditions. Obviously, as 1/ε approaches the real number of points in the cluster, the result becomes much better. Note that the best result appears when 1/ε > 30, which is due to the fact that some outliers fall into the line clusters, as can be seen in Fig. 1(a).
4.2 Illumination-invariant face clustering
It has been shown that the variability of images of a Lambertian surface in fixed pose, but under variable lighting conditions where no surface point is shadowed, constitutes a three-dimensional linear subspace [15]. This leads to a natural measure of dissimilarity over four images, which can be used for clustering. In fact, this is a generalization of the k-lines problem to the k-subspaces problem. If we assume that the four images under consideration form the columns of a matrix, and normalize each column by the ℓ2 norm, then d = s_4²/(s_1² + s_2² + s_3² + s_4²) serves as a natural measure of dissimilarity, where s_i is the i-th singular value of this matrix.
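In code, this dissimilarity is a one-line consequence of the SVD; a sketch, assuming the four images are supplied as vectors:

    import numpy as np

    def subspace_dissimilarity(images):
        # images: four vectorized face images; each column is l2-normalized.
        M = np.stack([im / np.linalg.norm(im) for im in images], axis=1)
        s = np.linalg.svd(M, compute_uv=False)   # s[0] >= s[1] >= s[2] >= s[3]
        return s[3]**2 / np.sum(s**2)            # d = s4^2 / (s1^2 + ... + s4^2)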
In our experiments we use the Yale Face Database B and its extended version [16], which contains 38
individuals, each under 64 different illumination conditions. Since in some lighting conditions, the
images are severely shadowed, we delete these images and do the experiments on a subset (about
35 images for each individual). We considered cases where we have faces from 4 and 5 random
individuals (randomly choose 10 faces for each individual), with and without outliers. The case with
outliers consists of 10 additional faces, each from a different individual. For each of those combinations,
we ran 10 trials to obtain the average F-measures (mean and standard deviation), and the result is
reported in Table 1. Note that for each algorithm, we individually tune the parameters to obtain
the best results. The results clearly show that partition-based clustering method (clique averaging)
is very sensitive to outliers, but performs better when there are no outliers. The clustering game
approach and our method both perform well, especially when there are outliers, and our method
performs a little better.
Figure 1: Results on clustering three lines with noise and outliers. The performance of the clique averaging algorithm [9], the clustering game approach [12] and our method is shown as green dashed, blue dotted and red solid curves, respectively. This figure is best viewed in color.
Table 1: Experiments on illumination-invariant face clustering

Classes             4                            5
Outliers            0             10             0             10
Clique Averaging    0.95 ± 0.05   0.84 ± 0.08    0.93 ± 0.05   0.83 ± 0.07
Clustering Game     0.92 ± 0.04   0.90 ± 0.04    0.91 ± 0.06   0.90 ± 0.07
Our Method          0.93 ± 0.04   0.92 ± 0.05    0.92 ± 0.07   0.91 ± 0.04

4.3 Affine-invariant Point Set Matching
An important problem in object recognition is the fact that an object can be seen from different
viewpoints, resulting in differently deformed images. Consequently, the invariance to viewpoints
is a desirable property for many vision tasks. It is well known that a near-planar object seen from different viewpoints can be modeled by affine transformations. In this subsection, we will show that
matching planar point sets under different viewpoints can be formulated into a hypergraph clustering
problem and our algorithm is very suitable for such tasks.
Suppose the two point sets are P and Q, with n_P and n_Q points, respectively. Each point in P may match to any point in Q, thus there are n_P n_Q candidate matches. Under an affine transformation A, for three correct matches m_{ii'}, m_{jj'} and m_{kk'}, we have S_{ijk}/S_{i'j'k'} = |det(A)|, where S_{ijk} is the area of the triangle formed by points i, j and k in P, S_{i'j'k'} is the area of the triangle formed by points i', j' and k' in Q, and det(A) is the determinant of A. If we regard each candidate match as a point, then s = exp(−(S_{ijk} − S_{i'j'k'}|det(A)|)²/σ_d²) serves as a natural similarity measure for three points (candidate matches) m_{ii'}, m_{jj'} and m_{kk'}, where σ_d is a scaling parameter; the correct matching configuration then naturally forms a cluster. Note that in this problem, most of the candidate matches are incorrect, and can be considered to be outliers.
We did the experiments on 8 shapes from MPEG-7 shape database [17]. For each shape, we uniformly sample its contour into 20 points. Both the shapes and sampled point sets are demonstrated
in Fig. 2. We regard original contour point sets as P s, then randomly add Gaussian noise N (0, ?),
and transform them by randomly generated affine matrices As to form corresponding Qs. Fig. 3
(a) shows such a pair of P and Q in red and blue, respectively. Since most of points (candidate
matches) should not belong to any cluster, partition-based clustering method, such as clique aver7
aging method, cannot be used. Thus, we only compare our method with matching game approach
and measure the performance of these two methods by counting how many matches agree with the
ground truths. Since |det(A)| is unknown, we estimate its range and sample several possible values
in this range, and conduct the experiment for each possible |det(A)|. In Fig. 3(b), we fix noise
parameter ? = 0.05, and test the robustness of both methods under varying scaling parameter ?d .
Obviously, our method is very robust to ?d , while the matching game approach is very sensitive to
it. In Fig. 3(c), we increase ? from 0.04 to 0.16, and for each ?, we adjust ?d to reach the best
performances for both methods. As expected, our method is more robust to noise by benefiting from
the parameter ?, which is set to 0.05 in both Fig. 3(b) and Fig. 3(c). In Fig. 3(d), we fix ? = 0.05
and ?d = 0.15, and test the performance of our method under different ?. The result again verifies
the importance of the parameter ?.
Figure 2: The shapes and corresponding contour point sets used in our experiment.
Figure 3: Performance curves on affine-invariant point set matching problem. The red solid curves
demonstrate the performance of our method, while the blue dotted curve illustrates the performance
of matching game approach.
5
Discussion
In this paper, we characterized clustering as an ensemble of all associated affinity relations and relax
the clustering problem into optimizing a constrained homogenous function. We showed that the
clustering game approach turns out to be a special case of our method. We also proposed an efficient
algorithm to automatically reveal the clusters in a data set, even under severe noises and a large number of outliers. The experimental results demonstrated the superiority of our approach with respect
to the state-of-the-art counterparts. Especially, our method is not sensitive to the scaling parameter
which affects the weights of the graph, and this is a very desirable property in many applications. A
key issue with hypergraph-based clustering is the high computational cost of the construction of a
hypergraph, and we are currently studying how to efficiently construct an approximate hypergraph
and then perform clustering on the incomplete hypergraph.
6
Acknowledgement
This research is done for CSIDM Project No. CSIDM-200803 partially funded by a grant from the
National Research Foundation (NRF) administered by the Media Development Authority (MDA) of
Singapore, and this work has also been partially supported by the NSF Grants IIS-0812118, BCS0924164 and the AFOSR Grant FA9550-09-1-0207.
8
References
[1] A. Jain, M. Murty, and P. Flynn, ?Data clustering: a review,? ACM Computing Surveys, vol. 31,
no. 3, pp. 264?323, 1999.
[2] T. Kanungo, D. Mount, N. Netanyahu, C. Piatko, R. Silverman, and A. Wu, ?An efficient
k-means clustering algorithm: Analysis and implementation,? IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 24, no. 7, pp. 881?892, 2002.
[3] A. Ng, M. Jordan, and Y. Weiss, ?On spectral clustering: Analysis and an algorithm,? in Advances in Neural Information Processing Systems, vol. 2, 2002, pp. 849?856.
[4] I. Dhillon, Y. Guan, and B. Kulis, ?Kernel k-means: spectral clustering and normalized cuts,?
in Proceedings of the tenth ACM International Conference on Knowledge Discovery and Data
Mining, 2004, pp. 551?556.
[5] J. Shi and J. Malik, ?Normalized cuts and image segmentation,? IEEE Transactions on Pattern
Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888?905, 2000.
[6] B. Frey and D. Dueck, ?Clustering by passing messages between data points,? Science, vol.
315, no. 5814, pp. 972?976, 2007.
[7] J. Zien, M. Schlag, and P. Chan, ?Multilevel spectral hypergraph partitioning with arbitrary
vertex sizes,? IEEE Transactions on Computer-aided Design of Integrated Circuits and Systems, vol. 18, no. 9, pp. 1389?1399, 1999.
[8] J. Rodriguez, ?On the Laplacian spectrum and walk-regular hypergraphs,? Linear and Multilinear Algebra, vol. 51, no. 3, pp. 285?297, 2003.
[9] S. Agarwal, J. Lim, L. Zelnik-Manor, P. Perona, D. Kriegman, and S. Belongie, ?Beyond
pairwise clustering,? in IEEE Computer Society Conference on Computer Vision and Pattern
Recognition, vol. 2, 2005, pp. 838?845.
[10] D. Zhou, J. Huang, and B. Scholkopf, ?Learning with hypergraphs: Clustering, classification,
and embedding,? in Advances in Neural Information Processing Systems, vol. 19, 2007, pp.
1601?1608.
[11] A. Shashua, R. Zass, and T. Hazan, ?Multi-way clustering using super-symmetric non-negative
tensor factorization,? in European Conference on Computer Vision, 2006, pp. 595?608.
[12] S. Bulo and M. Pelillo, ?A game-theoretic approach to hypergraph clustering,? in Advances in
Neural Information Processing Systems, 2009.
[13] M. Pavan and M. Pelillo, ?Dominant sets and pairwise clustering,? IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 1, pp. 167?172, 2007.
[14] H. Kuhn and A. Tucker, ?Nonlinear programming,? ACM SIGMAP Bulletin, pp. 6?18, 1982.
[15] P. Belhumeur and D. Kriegman, ?What is the set of images of an object under all possible
illumination conditions?? International Journal of Computer Vision, vol. 28, no. 3, pp. 245?
260, 1998.
[16] K. Lee, J. Ho, and D. Kriegman, ?Acquiring linear subspaces for face recognition under variable lighting,? IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5,
pp. 684?698, 2005.
[17] L. Latecki, R. Lakamper, and T. Eckhardt, ?Shape descriptors for non-rigid shapes with a single
closed contour,? in IEEE Conference on Computer Vision and Pattern Recognition, vol. 1,
2000, pp. 65?72.
9
| 4045 |@word deformed:1 trial:2 determinant:1 version:2 kulis:1 stronger:1 norm:2 seems:1 d2:1 shuicheng:1 zelnik:1 tried:1 solid:3 xv1:1 configuration:1 contains:3 seriously:1 outperforms:2 existing:2 com:1 si:2 gmail:1 must:6 partition:5 informative:1 noninformative:2 shape:7 drop:2 update:6 intelligence:4 selected:2 nq:2 ith:1 fa9550:1 provides:1 iterates:1 authority:1 constructed:2 become:1 scholkopf:1 incorrect:1 consists:1 fitting:1 introduce:1 pairwise:5 expected:1 frequently:1 nor:1 multi:2 automatically:5 little:1 totally:1 latecki:2 becomes:2 project:1 underlying:1 circuit:1 medium:1 what:1 kind:1 unified:2 flynn:1 transformation:2 dueck:1 every:7 firstorder:1 k2:1 control:4 partitioning:1 grant:3 superiority:2 appear:1 engineering:1 local:8 frey:1 aging:1 severely:1 mount:1 initialization:6 studied:1 challenging:1 limited:1 factorization:2 range:3 adoption:1 xvj:1 vu:9 piatko:1 silverman:1 procedure:2 jan:1 area:2 murty:1 matching:14 regular:1 get:2 cannot:3 influence:1 descending:1 equivalent:2 lagrangian:2 demonstrated:2 maximizing:1 shi:1 survey:1 q:1 attraction:2 array:5 dominate:1 embedding:1 construction:1 suppose:3 programming:1 associate:1 approximated:1 recognition:4 cut:5 cooperative:1 database:2 electrical:1 rij:4 worst:1 calculate:1 region:1 connected:1 eu:2 decrease:3 ran:3 complexity:3 hypergraph:17 reward:8 kriegman:3 algebra:1 triangle:2 easily:1 differently:1 represented:1 jain:1 hyper:1 choosing:2 neighborhood:1 whose:2 encoded:1 larger:3 solve:2 relax:2 otherwise:1 transform:2 obviously:4 advantage:1 propose:1 maximal:2 benefiting:1 intuitive:2 normalize:1 cluster:35 mpeg:1 converges:1 object:18 illustrate:1 pose:1 measured:1 ij:1 pelillo:4 involves:1 judge:2 met:1 direction:2 kuhn:2 correct:2 stochastic:1 adjacency:2 multilevel:1 fix:5 generalization:2 really:1 karush:1 multilinear:1 considered:2 ground:1 exp:2 vary:2 smallest:2 purpose:1 applicable:1 combinatorial:1 currently:1 sensitive:6 individually:1 grouped:1 largest:1 tool:1 weighted:3 clearly:2 always:2 gaussian:2 super:3 manor:1 zhou:2 varying:2 vk:21 eleyans:1 greatly:1 rigid:1 typically:1 integrated:1 initially:1 perona:1 relation:19 issue:1 among:2 classification:1 denoted:4 development:1 constrained:3 art:2 special:4 homogenous:2 field:1 equal:3 xvt:2 construct:5 ng:1 represents:3 nrf:1 nearly:2 constitutes:1 simplex:1 np:3 simplify:1 duplicate:2 few:1 randomly:5 simultaneously:2 national:2 individual:5 message:1 mining:2 adjust:1 severe:1 yielding:1 edge:4 tuple:1 necessary:2 conduct:1 incomplete:1 walk:1 deformation:1 theoretical:2 delete:1 mk:1 column:2 temple:2 cover:1 applicability:1 cost:1 deviation:1 vertex:32 subset:6 entry:3 too:1 schlag:1 reported:2 sav:4 pavan:1 perturbed:1 chooses:1 fundamental:1 sensitivity:1 international:2 probabilistic:1 lee:1 connecting:1 quickly:2 s24:1 again:1 reflect:1 successively:1 containing:3 choose:1 huang:1 star:1 summarized:2 satisfy:5 vi:1 break:1 closed:1 hazan:1 analyze:1 liu1:1 shashua:2 yv:2 characterizes:1 sort:1 red:4 formed:2 descriptor:1 efficiently:2 ensemble:5 correspond:3 yield:1 generalize:1 produced:1 notoriously:1 researcher:1 lighting:3 ary:8 reach:3 definition:1 pp:16 tucker:2 conveys:1 naturally:2 associated:3 proof:1 sampled:1 duplicated:1 color:4 subsection:1 knowledge:1 lim:1 segmentation:1 agreed:1 appears:1 higher:1 planar:2 wei:1 formulation:7 done:1 until:2 nonlinear:1 maximizer:6 propagation:1 rodriguez:2 defines:1 reveal:2 usa:1 contain:3 true:3 multiplier:1 normalized:4 counterpart:1 hence:1 assigned:1 read:1 symmetric:1 
nonzero:1 dhillon:1 deal:1 game:28 essence:1 criterion:4 generalized:1 yv1:2 theoretic:1 demonstrate:2 performs:2 meaning:1 image:9 consideration:1 replicator:1 belong:4 hypergraphs:4 significant:4 kri:1 trivially:1 funded:1 similarity:4 surface:2 summands:1 add:4 dominant:1 closest:1 showed:2 chan:1 perspective:2 optimizing:1 certain:2 thk:1 hyperedge:8 yi:2 seen:3 minimum:1 additional:1 relaxed:1 belhumeur:1 v3:4 dashed:1 ii:1 zien:2 multiple:4 desirable:4 rj:8 match:9 characterized:1 offer:1 divided:1 sijk:1 zass:1 laplacian:2 vision:6 iteration:2 sometimes:1 kernel:1 agarwal:2 achieved:1 longin:1 eckhardt:1 want:1 else:1 singular:1 leaving:1 hyperedges:10 meaningless:1 posse:1 probably:1 subject:1 contrary:1 jordan:1 integer:1 near:2 counting:1 revealed:1 exceed:1 xj:4 affect:1 reduce:1 simplifies:1 det:5 administered:1 whether:3 motivated:1 collinear:1 passing:1 hardly:1 generally:2 tune:1 kanungo:1 simplest:1 generate:1 exist:1 nsf:1 singapore:3 dotted:2 disjoint:1 blue:5 diverse:1 vol:13 affected:1 key:2 four:2 bulo:3 neither:1 tenth:1 v1:25 graph:15 sum:1 saying:2 wu:1 utilizes:1 mii:2 coherence:1 scaling:6 bound:1 yale:1 nonnegative:2 mda:1 constraint:5 ri:13 xvk:1 extremely:1 optimality:1 min:3 relatively:1 department:2 according:7 combination:1 belonging:7 smaller:2 appealing:1 outlier:22 intuitively:2 invariant:6 equation:2 agree:2 discus:1 abbreviated:1 mechanism:1 turn:1 needed:1 know:2 end:1 serf:2 studying:1 rewritten:1 observe:1 v2:5 spectral:5 appropriate:1 robustly:2 robustness:1 ho:1 original:3 clustering:64 especially:3 classical:2 society:1 tensor:2 malik:1 strategy:1 affinity:17 gradient:1 subspace:3 distance:1 mapped:1 vd:9 useless:1 relationship:1 modeled:1 unfortunately:1 negative:2 design:2 implementation:1 proper:2 unknown:1 perform:2 upper:1 observation:1 finite:1 extended:1 variability:1 locate:1 rn:2 arbitrary:1 pair:2 cast:2 coherent:1 xvi:1 nu:1 address:3 beyond:1 usually:7 pattern:6 max:1 green:3 shadowed:2 critical:1 overlap:1 natural:4 suitable:1 indicator:1 lakamper:1 representing:3 mkk:2 axis:1 philadelphia:1 prior:15 sg:1 acknowledgement:1 review:1 discovery:1 afosr:1 expect:1 proportional:1 foundation:2 affine:6 basin:2 viewpoint:4 systematically:1 netanyahu:1 share:1 repeat:1 supported:1 keeping:1 yvi:1 neighbor:1 fall:1 face:8 characterizing:1 bulletin:1 sparse:1 regard:3 yvk:2 curve:4 world:1 avoids:1 contour:4 universally:1 transaction:5 approximate:1 implicitly:1 overcomes:1 clique:14 kkt:3 belongie:1 xi:26 spectrum:1 un:1 continuous:1 triplet:4 table:2 robust:9 inherently:1 ignoring:1 symmetry:2 expansion:3 european:1 vj:1 yan1:1 did:1 s2:1 noise:11 verifies:1 fig:16 xl:4 lie:1 candidate:5 guan:1 weighting:1 third:1 formula:3 theorem:3 magenta:1 bad:1 specific:1 exists:3 essential:1 adding:1 kr:1 importance:1 dissimilarity:4 illumination:4 illustrates:2 simply:1 expressed:4 mjj:2 partially:2 applies:1 acquiring:1 corresponds:1 truth:1 acm:3 viewed:1 formulated:1 consequently:1 hard:1 change:1 included:1 aided:1 uniformly:2 averaging:11 called:1 total:2 accepted:1 experimental:2 invariance:1 player:1 ijk:2 meaningful:1 formally:1 select:6 internal:1 latter:1 stressed:1 illuminant:1 phenomenon:2 evaluate:2 avoiding:1 |
3,364 | 4,046 | An analysis on negative curvature induced by
singularity in multi-layer neural-network learning
Eiji Mizutani
Department of Industrial Management
Taiwan Univ. of Science & Technology
[email protected]
Stuart Dreyfus
Industrial Engineering & Operations Research
University of California, Berkeley
[email protected]
Abstract
In the neural-network parameter space, an attractive field is likely to be induced
by singularities. In such a singularity region, first-order gradient learning typically
causes a long plateau with very little change in the objective function value E
(hence, a flat region). Therefore, it may be confused with ?attractive? local minima. Our analysis shows that the Hessian matrix of E tends to be indefinite in
the vicinity of (perturbed) singular points, suggesting a promising strategy that
exploits negative curvature so as to escape from the singularity plateaus. For numerical evidence, we limit the scope to small examples (some of which are found
in journal papers) that allow us to confirm singularities and the eigenvalues of
the Hessian matrix, and for which computation using a descent direction of negative curvature encounters no plateau. Even for those small problems, no efficient
methods have been previously developed that avoided plateaus.
1
Introduction
Consider a general two-hidden-layer multilayer perceptron (MLP) having a single (terminal) output,
H nodes at the second hidden layer (next to the terminal layer), I nodes at the first hidden layer,
and J nodes at the input layer; hence, a J-I-H-1 MLP. It has totally n parameters, denoted by an
n-vector ?, including thresholds. Let ?(.) be some node function; then, the forward pass transforms
the input vector x of length J to the first hidden-output vector z of length I , and then to the second
hidden-output vector h of length H
, leading to?the final
? P output y:
?
?P
` T ?
H
H
T
with zk = ?(xT+ wk ).
y = f (?; x) = ? h+ p = ?
(1)
j=0 pj ?(z+vj )
j=0 pj hj = ?
Here, fictitious outputs x0 = z0 = h0 = 1 are included in the output vectors with subscript ?+? for
thresholds p0 , v0,j , and w0,k ; pj (j = 1, ..., H ) is the weight connecting the jth hidden node to the
(final) output; vj a vector of ?hidden? weights directly connecting to the jth hidden node from the
first hidden layer;
wk a?vector
of ?hidden? weights to the kth hidden
node from the input layer;
?
?
?
T
hence, ? T ? pT |vT |wT = pT |v1T , ..., vjT , ..., vH
|w1T , ..., wkT , ..., wIT . The length of those weight
vectors ?, p, v, w are denoted by n, n3 , n2 , and n1 , respectively, where
n = n3 +n2 +n1 ; n3 = (H +1); n2 = H(I +1); n1 = I(J +1).
(2)
For parameter optimization, one may attempt to minimize the squared error over m data
m
m
1X
1X 2
1
2
(3)
E(?) =
{f (?; xd )?td } =
rd (? ) = rT r,
2
2
2
d=1
d=1
where td is a desired output on datum d; each residual rd a smooth function from <n to <; and r
an m-vector of residuals. Note here and hereafter that the argument (?) for E and r is frequently
suppressed as long as no confusion arises. The gradient and Hessian of E can be expressed as below
?E(?) =
m
X
d=1
rd ?rd = JT r, and ?2 E(?) =
m
X
d=1
?rd ?rdT +
m
X
d=1
rd ?2 rd ? JT J+S,
where J ? ?r, an m?n Jacobian matrix of r, and the dth row of J is denoted by ?r dT .
1
(4)
In the well-known Gauss-Newton method, S, the last matrix of second derivatives of residuals, is
omitted, and its search direction ?? is found by solving J?? GN = ?r (or, JT J??GN = ??E ). Under
the normal error assumption, the Fisher information matrix is tantamount to J T J, called the GaussNewton Hessian. This is why natural gradient learning can be viewed as an incremental version of
the Gauss-Newton method (see p.1404 [1]; p.1031 [2]) in the nonlinear least squares sense. Since
JT J is positive (semi)definite, natural gradient learning has no chance to exploit negative curvature.
It would be of great value to understand the weaknesses of such Gauss-Newton-type methods.
Learning behaviors of layered networks may be attributable to singularities [3, 2, 4]. Singularities
have been well discussed in the nonlinear least squares literature also: For instance, Jennrich &
Sampson (pp.65?66 [5]) described an overlap-singularity situation involving a redundant model;
specifically, a classical (linear-output) model of exponentials with hi ? ?(vi x) and no thresholds in
Eq.(1):
f (?; x) = p1 ?(v1 x)+p2 ?(v2 x) = p1 ev1 x +p2 ev2 x .
(5)
If the target data follow the path of a single exponential then the two hidden parameters, v 1 and v2 ,
become identical (i.e., overlap singularity) at the solution point, where J is rank-deficient; hence,
JT J is singular. If the fitted response function nearly follows such a path, then J T J is nearly singular. This is a typical over-realizable scenario, in which the true teacher lies at the singularity (see [6]
for details about 1-2-1 MLP-learning). In practice, if the solution point ? ? is stationary but J(? ? ) is
rank-deficient, then the search direction ?? GN can be numerically orthogonal to ?E at some distant
point from ? ? ; consequently, no progress can be made by searching along the Gauss-Newton direction (hence, line-search-based algorithms fail); this is first pointed out by Powell, who proved in [7]
that the Gauss-Newton iterates converge to a non-stationary limit point at which J is rank-deficient
in solving a particular system of nonlinear equations, for which the merit function is defined as
Eq.(3), where m = n. Another weak point of the Gauss-Newton-type method is a so-called largeresidual problem (e.g., see Dennis [8]); this implies that S in ?2 E is substantial because r is highly
nonlinear, or its norm is large at solution ? ? . Those drawbacks of the Gauss-Newton-type methods indicate that negative curvature often arises in MLP-learning when JT J is singular (i.e., in a
rank-deficient nonlinear least squares problem), and/or when S is more dominant than J T J. We thus
verify this fact mathematically, and then discuss how exploiting negative curvature is a good way to
escape from singularity plateaus, thereby enhancing the learning capacity.
2
Negative curvature induced by singularity
In rank-deficient nonlinear least squares problems, where J ? ?r is rank deficient, negative curvature often arises. This is true with an arbitrary MLP model, but to make our
P analysis concrete,
we consider a single terminal linear-output two-hidden-layer MLP: f (?; x) = H
j=0 pj hj in Eq. (1).
Then, the n weights separate into linear p and non-linear v and w. In this context, we show that a
4-by-4 indefinite Hessian block can be extracted from the n-by-n Hessian matrix ? 2 E in Eq.(4).
2.1
An existence of the 4 ? 4 indefinite Hessian block H in ?2 E
In the posed
two-hidden-layer
MLP-learning, as indicated after Eq.(1), the n weights are organized
?
?
as ? T ? pT |vT |wT . Now, we pay attention to two particular hidden nodes j and k at the second
hidden layer. The weights connecting to those two nodes are pj , pk , vj , and vk ; they are arranged
in the following manner:
?
?
? T = p0 , p1 , ..., pj , ..., pk , ..., pH |v0,1 , ......., |v0,j , v1,j , ..., vI,j |...|v0,k , v1,k , ..., vI,k |...., | wT , (6)
where vi,k is a weight from node i at the first hidden layer to node k at the second hidden layer.
Given a data pair (x; t), r ? f (?; x)?t, a residual element, and uT , an n-length row vector of the
residual Jacobian matrix J (? ???r ) in Eq.(4), is given as below using the output vector z+ (including
z0 = 1) at the first hidden layer
?
?
uT ? ?r T = ..., hj , ..., hk , ..., ?0j (zT+ vj )pj , ..., ?0k (zT+ vk )pk , ... ,
(7)
where only four entries are shown that are associated with four weights: pj , pk , v0,j , and v0,k . The
locations of those four weights in the n-vector ? are denoted by l1 , l2 , l3 , and l4 , respectively, where
l1 ? j +1, l2 ? k+1, l3 ? (I +1)(j ?1)+1, l4 ? (I +1)(k?1)+1.
(8)
Given J, we interchange columns 1 and l1 ; then, do columns 2 and l2 ; then columns 3 and l3 ; and
finally columns 4 and l4 ; this interchanging procedure moves those four columns to the first four.
2
Suppose that the n ? n Hessian matrix ?2 E = uuT +S is evaluated on a given single datum (x; t).
We then apply the above interchanging procedure to both rows and columns of ? 2 E appropriately,
which can be readily accomplished by PT ?2 E P, where four permutation matrices Pi (i = 1, ..., 4)
are employed as P ? P1 P2 P3 P4 ; each Pi satisfies PTi Pi = I (orthogonal) and Pi = PTi (symmetric);
hence, P is orthogonal. As a result, H, the 4-by-4 Hessian block (at the upper-left corner) of the
first four leading rows and columns of PT ?2 E P has the following structure:
2
(hj )2
6
H =6
4
|{z}
4?4
hj hk
(hk )2
Symmetric
3 2
hj ?0j (.)pj
hj ?0k (.)pk
0
?0j (.)r
0
0
0
hk ?j (.)pj
hk ?k (.)pk 7
0
0
7+6
? 0
?2 0
4
?00j (.)pj r
?j (.)pj
?j (.)?0k (.)pj pk 5
2
Symmetric
{?0k (.)pk }
0
?0k (.)r
3
7
5.
0
00
?k (.)pk r
(9)
The posed Hessian block H is associated with a vector of the four weights [pj , pk , v0,j , v0,k ]T .
If vj = vk , then hj = hk = ?(zT+ v); see Eq.(7). Obviously, no matter how many data are accumulated, two columns hj and hk of J in Eq.(4) are identical; therefore, J is rank deficient; hence, JT J
is singular. The posed singularity gives rise to negative curvature because the above 4-by-4 dense
Hessian block is almost always indefinite (so is ?2 E of size n ? n) to be proved next.
2.2
Case 1: vj = vk ? v; hence, hj = hk ? h = ?(zT+ v), and pj 6= pk
Given a set of m (training) data, the gradient vector ?E and the Hessian matrix ?2 E in Eq.(4) are
evaluated. We then apply the aforementioned orthogonal matrix P to them as PT ?E and PT ?2 EP,
yielding the gradient vector g of length 4 and the 4-by-4 Hessian block H [see Eq.(9)] associated
with the four weights [pj , pk , v0,j , v0,k ]T ; they may be expressed in a compact form as
?
? ?
?
?
?
b2
0
0 0 e
a a b1
?
m
X
b ? ?0 0 0
e ?
? ? ?
? a a b
g=
rd ud = ? p e ?; H = JT J+S = ? b b c 1 c 2 ? + ? e 0 d
,
(10)
0 ?
j
d=1
1
pk e
1
11
12
1
c22
0 e 0 d2
P
Pm ? 0 T ?2
Pm 0 T
00 T
where the entries are given below with B ? d=1 ? (z+dv)hd , C ? d=1 ? (z+dv) , D ? m
d=1 ? (z+dv)rd :
?
m
X 2
?
?
hd , b1 ? pj B, b2 ? pk B, c11 ? p2j C, c12 ? pj pk C, c22 ? p2k C,
?
? a?
b2
b2
c12
d=1
m
m
X
X
?
?
?
?
?
r
h
,
e
?
?0 (zT+d v)rd , d1 ? pj D,
d
d
?
d=1
d=1
(11)
d2 ? pk D.
Notice here that the subscript d implies datum d (d = 1, ..., m); hence, hd is the hidden-node output
on datum d (but not the dth hidden-node output) common to both nodes j and k due to v j = vk = v.
Theorem 1: When e 6= 0, the n-by-n Hessian ?2E and its block H in Eq.(10) are always indefinite.
Proof: A similarity transformation with T, a 4-by-4 orthogonal matrix (TT = T?1 ), obtains
2
2a
b1 +b2 +e
?
6b1 +b2 +e
T
T HT = 4 0
0
b1 ?b2
?
3
2 ?1
0
b1 ?b2
2
6 ?1 0
? 7
6 2
with T = 6
4 0 ?12
e 5
0 ?12
?
0
0
0
e
1
?
2
?1
?
2
3
0
07
7
7,
0 ?12 5
?1
?
0
2
(12)
where ? ? 21 (c11+2c12+c22+d1+d2 ), ? ? 21 (c11?c22+d1?d2 ), and ? ? 12 (c11?2c12+c22+d1+d2 ).
The eigenvalues of the 2-by-2 block at the lower-right corner are obtainable by
?
?
h
i?
?
?
??I ? 0e ?e ? = ?(? ? ? ) ? e2 = ?2 ? ? ? ? e2 = 0,
which yields 21 (? ? ? 2 + 4e2 ), the ?sign-different? eigenvalues as long as e 6= 0 holds. Then, by
Cauchy?s interlace theorem (see Ch.10 of Parlett 1998), the Hessian ?2E is indefinite. (So is H.) 2
2.3
Case 2: vj = vk ? v (hj = hk ? h), and pj = pk ? p
The result in Case 1 becomes simpler: For a given set of m (training) data,
? ?
3 2
2
?
0 0
a a b b
Pm
?? ?
6a a b b7 60 0
T
+
g = d=1 rd ud = ? ?; H = J J+S = 4
b b c c5 4 e 0
pe
b b c c
0 e
pe
3
e
0
d
0
3
0
e 7
,
05
d
(13)
where the entries are readily identifiable from Eq.(11). In Eq.(13), JT J is positive semi-definite (singular of rank 2 even when m ? 2), and S has an indefinite structure. When e 6= 0 (hence, ?E 6= 0),
we can prove below that there always exists negative curvature (i.e., ?2 E is always indefinite).
Theorem 2: When e 6=
?0, the 4?4 Hessian block H in Eq.(13) includes the sign-different eigenvalues
of S; namely, 12 (d ? d2 + 4e2 ), and the n ? n Hessian ?2 E as well as H are always indefinite.
Proof: Proceed similarly with the same orthogonal matrix T as defined in Eq.(12), where b 1 = b2 = b,
? = 0, and ? = d, rendering TT HT ?block-diagonal.? Its block of size 2 ? 2 at the lower-right corner
has the sign-different eigenvalues determined by ?2 ?d??e2 = 0. 2 QED 2
Now, we investigate stationary points, where the n-length gradient vector ?E = 0; hence, g = 0 in
Eq.(13). We thus consider two cases for pe = 0: (a) p = 0 and e 6= 0, and (b) p 6= 0 and e = 0. In
Case (b), S becomes a diagonal matrix, and the above TT H T shows that H is of (at most) rank 3
(when d 6= 0); hence, H becomes singular.
Theorem 3: If ?E(? ? ) = 0, p = 0, and e 6= 0 [i.e., Case (a)], then the stationary point ? ? is a saddle.
Theorem 4: If ?E(? ? ) = 0, and e = 0, but d < 0 [see Eq.(13)], then ? ? is a saddle point.
Proof of Theorems 3 and 4: From Theorem 2 above, H in Eq.(13) has a negative eigenvalue; hence,
the entire Hessian matrix ?2 E of size n ? n is indefinite 2 QED 2
Theorem 4 is a special case of Case (b). If d = pD > 0, then H becomes positive semi-definite;
however, we could alter the eigen-spectrum of H by changing linear parameters p in conjunction
with scalar ? for pj = 2?p and pk = 2(1??)p such that pj +pk = 2p with no change in E and ?E = 0
held fixed (to be confirmed in simulation; see Fig.1 later), leading to the following
Theorem 5: If D 6= 0 and C > 0 [see the definition of C and D for Eq.(11)] and v1 = v2 (? v) with
?E = 0, for which p 6= 0 and e = 0 (hence, S is diagonal), then choosing scalar ? appropriately for
pj = 2?p and pk = 2(1??)p can render H and thus ?2 E indefinite.
Proof: From Eq.(11), two on-diagonal (3,3) and (4,4) entries of H are a quadratic function in terms
D
of ?: The (3,3)-entry of H, H(3, 3) = 2?p(2?pC + D), has two roots: 0 and ? 2pC
, whereas the
D
(4,4)-entry, H(4, 4) = 2(1??)p[2(1??)pC+D], has two roots: 1 and 1 + 2pC . Obviously, given p, C,
and D, there exists ? such that the quadratic function value becomes negative (see later Fig.1). This
implies that adjusting ? can produce a negative diagonal entry of H; hence, indefinite. Then, again
by Cauchy?s interlace theorem, so is ?2 E . 2 QED 2
Example 1: A two-exponential model in Eq.(5).
Data set 1:
Input x
Target t
?2
1
?1
3
0
2
1
3
2
1
Data set 2:
Input x
Target t
?2
3
?1
1
0
2
1
1
2
3
(14)
Given two sets of five data pairs (xi ; ti ) as shown above, for each data set, we first find
a minimizer ? 0? = [p? , v? ]T of a two-weight 1-1-1 MLP, and then expand it with scalar ? as
? = [?p? , (1 ? ?)p? , v? , v? ]T to construct a four-weight 1-2-1 MLP that produces the same input-tooutput relations. That is, we first find the minimizer ? 0? = [p? , v? ]T using a 1-1-1 MLP, f (? 0 ; x) = pevx ,
?2
?
P
by solving ?E = 0, which yields p? = 2; v? = 0; E(? 0? ) = 21 5j=1 f (? 0? ; xj ) ? tj = 2; and confirm
that the 2 ? 2 Hessian ?2 E(? 0? ) is positive definite in both data sets above. Next, we augment ? 0?
as ? = [p1 , p2 , v1 , v2 ]T = [?p? , (1 ? ?)p? , v? , v? ]T to construct a 1-2-1 MLP: f (?; x) = p1 ev1 x +p2 ev2 x ,
which realizes the same input-to-output relations as the 1-1-1 MLP. Fig.1 shows how ? changes the
eigen-spectrum (see solid curve) of the 4 ? 4 Hessian ?2 E (supported by Theorem 5).
Conjecture: Suppose that ? ? is a local minimum point in two-hidden-layer J-I-H-1 MLP-learning,
and ?2 E of size n ? n is positive definite (so is H) with ?E = 0 and E > 0. Then, adding a node at
the second hidden layer can increase learning capacity in the sense that E can be further reduced.
Sketch of Proof: Choose a node j among H hidden nodes, and add a hidden node (call node k) by
duplicating the hidden weights by vk = vj with pk = 0; hence, totally n
e ? n+(I +2) weights. This
certainly renders new JT J of size n
e?n
e singular, and the (4,4)-entry in H in Eq.(10) becomes zero
(due to pk = 0). Then, by the interlace theorem, new ?2 E of size n
e?n
e becomes indefinite. 2
The above proof is not complete since we did not make clear assumptions about how the first-order
necessary condition ?E = 0 holds [see Cases (a) and (b) just above Theorem 3]. Furthermore, even
if we know in advance the minimum number of hidden nodes, Hmin , for a certain task, we may not
be able to find a local-minimum point of an MLP with one less hidden nodes, H min ?1. Consider, for
instance, the well-known (four data) XOR problem. Although it can be solved by a 2-2-1 MLP (nine
4
20
20
15
15
10
10
5
5
0
0
?5
?5
?10
min Eig(? E)
min Eig(S)
?2E(3,3)
2
? E(4,4)
?15
?20
?25
?2
?1
0
?
2
min Eig(? E)
min Eig(S)
2
? E(3,3)
2
? E(4,4)
?10
2
?15
?20
1
2
?25
?2
3
?1
0
1
?
2
3
2
Figure 1: The change of the minimum eigenvalue of ? E (solid curve) and of S (dashed) as well as
the (3,3)-entry of ?2 E (dotted) and the (4,4)-entry of ?2 E (dash-dot), both quadratic, according to
value ? (x-axis) in ? = [?p? , (1??)p? , v? , v? ]T , the four weights of a 1-2-1 MLP with exponential
hidden nodes (left) using data set 1, and (right) data set 2 in Eq.(14). Theorem 5 supports this result.
2?D contour plot
2?D contour plot
5
20
4
15
Minimizer
3
Attractor
10
Minimizer
4
2
Minimizer
5
Saddle
0
E(p,v)
v
v
3
1
Saddle
0
?5
?1
?10
2
1
0
2.5
Attractive point
?20
2
?3
0
0.2
0.4
1.5
?15
Saddle
?2
?10
1
0.6
0.8
1
1.2
?20
?1
?0.5
0
p
x
0.5
1
1.5
2
2.5
p
p
0
0.5
v
10
0
?0.5
?1
20
(a)
(b)
(c)
Figure 2: The 1-1-1 MLP landscape: (a) a magnified view; (b) bird?s-eye views in 2-D, and (c) 3-D.
weights), any local minimum point may not be found by optimizing a 2-1-1 MLP (five weights),
since the hidden weights tend to be divergent (or weight-? attractors). Here is another example:
Example 2: An N -shape curve fitting to four data: Data(x; t) ? {(?3; 0), (?1; 1), (1; 0), (3; 1)}.
We solved ?E = 0 to find all stationary points of a two-weight 1-1-1 MLP with a logistic hiddennode function ?(x) ? 1+e1?x , and found p? ? 1.0185 and v? ? 0.3571 with k?E(? 0? )k = O(10?15 ),
roughly the order of machine (double) precision, and E(? 0? ) ? 0.4111. The Hessian ?2 E(? 0? ) was
positive definite (eigenvalues: 0.8254 and 1.4824). We also found a saddle point. There was another
type of attractive points, where ? is driven to saturation due to a large hidden weight v in magnitude
(weight-? attractors). Fig.2 displays those three types of stationary points. Clearly, for a rigorous
proof of Conjecture, we need to characterize those different types, and clarify their underlying assumptions; yet, it is quite an arduous task because the situation totally depends on data; see also our
Hessian argument for Blum?s line in Sec.3.2.
We continue with Example 2 to verify the above theorems. We set ? = [?p? , (1 ? ?)p? , v? , v? ] in
a 1-2-1 MLP. When ? = 0.5, the Hessian ?2 E was positive semi-definite. If a small perturbation
is added to v? , then ?2 E becomes indefinite (see Theorem 2). In contrast, when ? = ?1.5, ?2 E
became indefinite (minimum eigenvalue ?0.2307); this situation was similar to Fig.1(left).
Remarks: The eigen-spectrum (or curvature) variation along a line often arises in separable (i.e.,
mixed linear and nonlinear) optimization problems. As a small non-MLP model, consider, for instance, a separable objective function with ? ? [p, v]T , two variables alone: F (?) = F (p, v) = pv 2 .
Expressed below are the gradient and ?Hessian
?
? of F :
?
?F =
v2
;
2pv
?2 F =
0
2v
2v
.
2p
(15)
Consider a line v = 0, where the Hessian ?2 F is singular. Then, the eigen-spectrum of ?2 F changes
as the linear parameter p alters while the first-order necessary condition (?F = 0) is maintained with
the objective-function value F = 0 held fixed. Clearly, ?2 F is positive semi-definite when p > 0,
whereas it is negative semi-definite when p < 0. Hence, the line is a collection of degenerate stationary points. In this way, singularities may be closely related to flat regions, where any updates of
5
parameters do not change the objective function value. Back to MLP-learning, Blum [10] describes
a different linear manifold of stationary points (see Sec.3.2 for more details), where the ?-adjusting
procedure described above fails because D = 0 (see Example 3 below also). Some other types of
linear manifolds (and eigen-spectrum changes) can be found; e.g., in [11, 4, 3]; unlike their work,
our paper did not claim anything about local minima, and our approach is totally different.
Example 3: A linear-output five-weight 1-1-2-1 MLP with ? = [p1 , p2 |v1 , v2 |w1 ]T (no thresholds),
having tanh-hidden-node functions. If ? ? = [1, 1, 0, 0, 0]T , then ?E(? ? ) = 0 with the indefinite Hessian ?2 E (hence, ? ? a saddle point) below, in which all diagonal entries of S are zero due to D = 0:
2
?
? E(? ) =
2
0 0
6 0 0
6 0 0
4
0 0
0 0
0
0
0
0
?
0
0
0
0
?
3
0
0 7
? 7
5
?
0
with ? ?
m
X
xd r d .
d=1
Here, ? denotes a non-zero entry with input x and residual r over an arbitrary number m of data. 2
The point to note here is that it is important to look at the entire Hessian ?2 E of size n ? n. When
H = O, a 4 ? 4 block of zeros, ?2 E would be indefinite (again by the interlace theorem) as long
as non-zero off-diagonal entries exist in ?2 E , as in Example 3 above. Needless to say, however,
the Hessian analysis fails in certain pathological cases (see Sec.3.2). Typical is an aforementioned
weight-? case, where the sigmoid-shaped hidden-node functions are driven to saturation limits due
to very large hidden weights. Then, only part of JT J associated with linear weights p appear in ?2 E
since S = O even if residuals are still large. This case is outside the scope of our analysis. It should
be noted that a regularization scheme to penalize large weights is quite orthogonal to our scheme to
exploit negative curvature. If a regularization term ?? T ? (with non-negative scalar ?) is added to E,
then the negative-curvature information will be lost due to ?2 E + ?I.
3
The 2-2-1 MLP-learning examples found in the literature
In this section, we consider learning with a 2-2-1 MLP having nine weights; then, Eq.(6) reduces to
? T ? [pT |vT ] = [pT |v1T |v2T ] = [p0 , p1 , p2 |v0,1 , v1,1 , v2,1 |v0,2 , v1,2 , v2,2 ],
where vj is a (hidden) weight vector connecting to the jth hidden node. Here, all weights are nonlinear since both hidden and final outputs are produced by sigmoidal logistic function ?(x) ? 1+e1?x .
3.1
Insensitivity to the initial weights in the singular XOR problem
The world-renowned XOR problem (involving only four data of binary values: ON and off) with a
standard nine-weight 2-2-1 MLP is inevitably a singular problem because the Gauss-Newton Hessian JT J in Eq.(4) is always singular (at most rank 4), whereas S tends to be of (nearly) full rank;
so does ?2 E (cf. rank analysis in [12]). This implies that singularity in terms of JT J is everywhere
in the posed neuro-manifolds. It is well-known (e.g., see [13]) that the origin (p = 0 and v = 0) is
a singular saddle point, where ?E = 0 and ?2 E = JT J with only one positive eigenvalue and eight
zeros. An interesting observation is that there always exists a descending path to the solution from
any initial point ? init as long as ? init is randomly generated in a small range; i.e., in the vicinity of
the origin. That is, first go directly down towards the origin from ? init , and then move in a descent
direction of negative curvature so as to escape from that singular saddle point. In this way, the 2-2-1
MLP can develop insensitivity to initial weights, always solving the posed XOR problem.
3.2
Blum?s linear manifold of stationary points
In the XOR problem, Blum [10] found a line of stationary points by adding constraints to ? as
L1 ? v0,1 = v0,2 , w1 ? v1,1 = v2,2 , w2 ? v1,2 = v2,1 , w ? p1 = p2 , (with L ? p0 ),
(16)
T
leading to a weight-sharing MLP of five weights: ? ? [L, w, L1 , w1 , w2 ] following the notations
in [10]. Using four XOR data: (x1 , x2 ; t) = {(0, 0; off), (0, 1; ON), (1, 0; ON), (1, 1; off)} for E in
Eq.(3), Blum considered a point with v = 0; hence, ? ? ? [L, w, 0, 0, 0]T , which gives two identical
1
1
hidden-node outputs: h1 = h2 = ?(0) = 1+e
0 = 2 . This is the same situation as in Sec.2.2 and 2.3. By
the constraints given in Eq.(16), the terminal output is given by y = ?(L + w). All those node outputs
are independent of input data. Then, for a given target value ?off? (e.g., 0.1), set
ON = 2?(L + w) ? off ?? ?(L + w) = (off + ON)/2
(17)
so that those target values ?off? and ?ON? must approximate XOR.
6
Blum?s Claim (page 539 [10]): There are many stationary points that are not absolute minima.
They correspond to w and L satisfying Eq.(17). Hence, they lie on a line ?L + w = c (constant)?
in the (w, L)-plane. Actually, these points are local minima of E, being 21 (ON ? off)2 . 2
A little algebra confirms that ?E = 0, and the quantities corresponding to e and D in Eq.(11) are
all zeros; hence, S = O. Consequently, no matter how ? (see Theorem 5) is changed to update w
and L (along the line), ?2 E stays positive semi-definite, and E in Eq.(3) remains the same 0.5 (flat
region). This is certainly a limitation of the second-order Hessian analysis, and thus more efforts
using higher-order derivatives were needed to disprove Blum?s claim (see [14, 15]), and it turned out
that Blum?s line is a collection of singular saddle points. In what follows, we show what conditions
must hold for the Hessian argument to work.
The 5-by-5 Hessian matrix ?2 E at a stationary point ? ? = [L, w, 0, 0, 0] is given by
2
4A
4A 2wA
4A 2wA
6 4A
6
2
6 2wA 2wA w 2 A
?
E
=
| {z } 6
4 wA wA w2 A
5?5
2
2
wA wA w2 A
wA
wA
w2
A
2
w2
(3A+S)
8
w2
(3A+S)
8
3
wA
8
2
0
wA
>
7
<A ? {? (L + w)}
2
7
w
7
A
?
?
2
7 with >
w2
:S ? ?00 (L+w) ?(L+w)? off .
5
(3A+S)
8
w2
(3A+S)
8
(18)
2
We thus obtain two non-zero eigenvalues of ?2 E , ?1 and ?2 , below using k ? w8 (3A + S):
?1 , ?2 =
1
2
?
?
ff
? q
A(w2 + 8) + 2k ? [A(w 2 + 8) + 2k]2 ? 2A(w 2 + 8)(4k ? w 2 A) .
(19)
Now, the smaller eigenvalue can be rendered negative when the following condition holds:
?
?
2
4k ? w 2 A < 0 ?? A + S = {?0 (L + w)} + ?00 (L + w) ?(L + w) ? off < 0.
(20)
Choosing L+w = 2 and off = 0.1 accomplishes our goal, yielding sign-different eigenvalues with
ON = 2?(2)?off ? 1.6616 by Eq.(17). Because ?2 E is indefinite, the posed stationary point is a
saddle point with E = 12 (ON ? off)2 (? 1.219), as desired. In other words, the target value for ON
is modified to break symmetry in data. Such a large target value ON (as 1.6616) is certainly unattainable outside the range (0,1) of the sigmoidal logistic function ?(x), but notice that ON is often
set equal to 1.0, which is also un-attainable for finite weight values. It appears that the choice of
such a (fictitiously large) value ON does not violate any Blum?s assumption. When 0 ? off < ON ? 1
(with w 6= 0), the Hessian ?2 E in Eq.(18) is always positive semi-definite of rank 2. Hence, it is a
singular saddle point.
3.3
Two-class pattern classification problems of Gori and Tesi
We next consider two two-class pattern classification problems made by Gori & Tesi: one with
five binary data (p.80 in [17]), and another with only three data (p.93 in [16]); see Fig.3. Both are
singular problems, because rank(JT J) ? 5; yet, both S and ?2 E tend to be of full rank; therefore,
the 9?9 Hessian ?2 E tends to be indefinite (see Theorems 1 and 2). On p.81 in [17], a configuration
of two separation lines, like two solid lines given by ? init in Fig.3(left) and (right), is claimed as a
region attractive to a local-minimum point. Indeed, the batch-mode steepest-descent method fails to
change the orientation of those solid lines. But its failure does not imply that there is no descending
way out of the two-solid-line configuration given by ? init because the convergence of the steepestdescent method to a (local) minimizer can be guaranteed by examining negative curvature (e.g., p.45
in [18]). We shall show a descending negative curvature direction.
In the five-data case, the steepest-descent method moves ? init to a point, where the weights become
relatively large; the gradient vector ?E ? 0; the Hessian ?2 E is positive semi-definite; and Eq.(3)
with m = 5 is given by E = 31 (ON ? off)2 , for which the two residuals at data points (0,0) and (1,1)
are made zeros. We can find such a point analytically by a linear-equation solving: Given ? init in
Fig.3, the solution to the linear system below yields p? = [p?0 , p?1 , p?2 ]T (three terminal weights):
2
1
41
1
?(?1.5)
?(0.5)
?(?0.5)
3
32 ? 3 2
??1 (off)
?(?0.5)
p0
?1
?(1.5) 54 p?1 5 = 4 ?? (off) ? 5 .
??1 2 ON3+off
p?2
?(0.5)
The resulting point ? ? ? [p?0 , p?1 , p?2 ; ?1.5, 1, 1; ?0.5, 1, 1]T, where the norm of p? becomes relatively
large O(102 ), gives the zero gradient vector, the positive semi-definite Hessian of rank 5, and
E = 13 (ON ? off)2 , as mentioned above. It is observed, however, that small perturbations on ? ? render
7
net = x 1 + x 2 ? 1.5
x2
net = ? x 1 + x 2 ? 0.5
x2
0
1.5
1.5
?1
1
(0, 1)
(0, 1)
(1, 1)
h2
h1
0
0.5
1
0.5
1.5
1
1.5
0.5
?0.5
(1, 0)
x1
?1.5
1
1
(0, 0)
0
1
(1, 0)
x1
?0.5
1
?0.5
0.5
1
2
x1
x2
net = ? x1 + x2 + 0.5
net = x 1 + x 2 ? 0.5
Figure 3: Gori & Tesi?s two-class pattern classification problems (left) three-data case; (right) fivedata case; and (middle) a 2-2-1 MLP with initial weight values ? init ? [0, 1, ?1; ?1.5, 1, 1; ?0.5, 1, 1]T .
Its corresponding initial configuration gives two solid lines of net-inputs (to two hidden nodes) in the
input space, where ??? stands for two ON-data (1,0), (0,1), whereas ??? for one off-data (0.5,0.5) in
left figure and three off-data (0,0), (0.5,0.5), (1,1) in right figure. A solution to both problems may
be given by the two dotted lines with ? sol ? [0, 1, ?1; ?0.5, ?1, 1; 0.5, ?1, 1]T .
?2 E indefinite of full rank (since S is dominant): rank(S) = rank(?2 E) = 9 with rank(JT J) = 4; this
suggests a descend direction (other than the steepest descent) to follow from ? init to a solution ? sol .
Fig.3(right) presents one of them, an intuitive change of six hidden weights (with the other three
weights held fixed) from two solid lines to two dotted ones, indicated by two thick arrows given by
?? ? ? sol ? ? init = [0, 0, 0; 1, ?2, 0; 1, ?2, 0]T, is a descent direction of negative curvature down to ? sol
because ?? T ?2E(? init )?? < 0, where ?2E(? init ), the Hessian evaluated at ? init , was indefinite. Intriguingly enough, it is easy to confirm for the three-data case that the posed ?descent? direction of
negative curvature ?? is orthogonal to ??E, the steepest-descent direction.
Claim: Line search from ? init to ? sol monotonically decreases the squared error E (? init + ???) as
the step size ? (scalar) changes from 0 to 1; hence, no plateau.
Proof for the three-data case: (The five-data case can be proved in a similar fashion.) Using target
values ON=1 and off=0, let q(?) ? E(? init +???)?E(? init ). Then, we show below that q 0 (?) < 0 using
a property that ?(?x) = 1??(x):
q(?) = 12 {?(?0.5??)??(0.5??)?ON}2 + 12 {?(?0.5+?)??(0.5+?)?ON}2 ?{?(?0.5)??(0.5)?ON}2
= 12 {1??(0.5+?)??(0.5??)?ON}2 + 12 {1??(0.5??)??(0.5+?)?ON}2?{1??(0.5)??(0.5)?ON}2
= {1?ON??(0.5 + ?)??(0.5 ? ?)}2 ? {1 ? ON ? 2?(0.5)}2
= {?(0.5 + ?) + ?(0.5 ? ?)}2 ? 4 {?(0.5)}2 .
Differentiation leads to q 0 (?) = 2 {?(0.5+?)+?(0.5??)} {?0 (0.5+?)??0 (0.5??)} < 0 because
?(0.5+?) > 0, ?(0.5??) > 0, and ? > 0, which guarantees ?0 (0.5+?) < ?0 (0.5??). 2
4
Summary
In a general setting, we have proved that negative curvature can arise in MLP-learning. To make
it analytically tractable, we intentionally used noise-free small data sets but on ?noisy? data, the
conditions for Theorems 1 and 2 most likely hold in the vicinity of singularity regions; it then follows
that the Hessian ?2 E tends to be indefinite (of nearly full rank). Our numerical results confirm
that the negative-curvature information is of immense value for escaping from singularity plateaus
including some problems where no method was developed to alleviate plateaus. In simulation, we
employed the second-order stagewise backpropagation [12] (that can evaluate ? 2 E and JT J at the
essentially same cost; see proof therein) to obtain ?2 E explicitly and its eigen-directions so as to
exploit negative curvature. This approach is suitable for up to medium-scale problems, for which
our analysis suggests using existing trust-region globalization strategies whose theory has thrived
on negative curvature including indefinite dogleg [19]. For large-scale problems, one could resort
to matrix-free Krylov subspace methods: Among them, the truncated conjugate-gradient (Krylovdogleg) method tends to pick up an arbitrary negative curvature (hence, slowing down learning;
see [20] for numerical evidence); so, other trust-region Krylov subspace methods are of our great
interest such as a Lanczos type [21] and a parameterized eigenvalue approach [22].
Acknowledgments
The work is partially supported by the National Science Council, Taiwan (NSC-99-2221-E-011-097).
8
References
[1] Amari, S.-I., Park,H. & Fukumizu, K. Adaptive Method of Realizing Natural Gradient Learning for Multilayer Perceptrons. Neural Computation, 12:1399-1409, 2000.
[2] Amari, S.-I., Park, H. & Ozeki, T. Singularities affect dynamics of learning in neuro-manifolds. Neural
Computation, 18(5):1007-1065, 2006.
[3] Wei, H., Zhang, J., Cousseau, F., Ozeki, T., & Amari, S.-I. Dynamics of Learning Near Singularities in
Layered Networks. Neural Computation, 20(3):813-843, 2008.
[4] Fukumizu, K. & Amari, S.-I. Local Minima and Plateaus in Hierarchical Structures of Multilayer Perceptrons. Neural Networks, 13(3):317?327, 2000.
[5] Jennrich, R.I. & Sampson, P.F. Application of Stepwise Regression to Non-Linear Estimation. Technometrics, 10(1):63?72, 1968.
[6] Cousseau, F., Ozeki, T., & Amari, S.-I. Dynamics of Learning in Multilayer Perceptrons near Singularities
IEEE Trans. on Neural Networks, 19(8):1313-1328, 2008.
[7] Powell, M.J.D. A hybrid method for nonlinear equations. In Numerical Methods for Nonlinear Algebraic
Equations, Ed. by P.Rabinowitz, Gordon & Breach, London, pp.87?114, 1970.
[8] Dennis, J.E., Jr. Nonlinear least squares and equations. In The state of the art in numerical analysis, Ed.
by D. Jacobs, Academic Press, London, pp.269?312, 1977.
[9] Parlett, B.N. The Symmetric Eigenvalue Problem. SIAM, 1998.
[10] Blum, E.K. Approximation of Boolean Functions by Sigmoidal Networks: Part I: XOR and other twovariable functions. Neural Computation, 1:532-540, 1989.
[11] Sprinkhuizen-Kuyper, I.G. & Boers, E.J.W. A Local Minimum for the 2-3-1 XOR Network. IEEE
Transactions on Neural Networks, 10(4):968?971, 1999.
[12] Mizutani, E. & Dreyfus, S.E. Second-order stagewise backpropagation for Hessian-matrix analyses and
investigation of negative curvature. Neural Networks, vol.21 (issues 2?3):193-203, 2008. (See its Corrigendum in vol.21, issue 9, page 1418).
[13] Sprinkhuizen-Kuyper, I.G. & Boers, E.J.W. The error surface of the 2-2-1 XOR network: The finite
stationary points. Neural Networks, 11:683?690, 1998.
[14] Tsaih, R.-H. An Improved Back Propagation Neural Network Learning Algorithm. Ph.D thesis at
the Department of Industrial Engineering and Operations Research, University of California at Berkeley,
pp.67?70, 1991.
[15] Sprinkhuizen-Kuyper, I.G. & Boers, E.J.W. A comment on a paper of Blum: Blum?s local minima are
saddle points. Tech. Rep. No. 94-34, Leiden University, Department of Computer Science, Leiden, The
Netherlands, 1994.
[16] Gori, M. & Tesi, A. Some examples of local minima during learning with backpropagation. Third
Italian Workshop on Parallel Architectures and Neural Networks. (Ed. by E.R. Caianiello), World Scientific
Publishing Co., pp. 87?94, 1990.
[17] Gori, M. & Tesi, A. On the Problem of Local Minima in Backpropagation. IEEE Trans. on Pattern
Analysis and Machine Intelligence, 14(1):76-86, 1992.
[18] Nocedal, J & Wright, S.J. Numerical Optimization. Springer Verlag, 1999.
[19] Byrd, R.H., Schnabel, R.B. & Schultz, G.A. Approximate solution of the trust region problems by minimization over two-dimensional subspaces. Mathematical Programming, 40:247?263, 1988.
[20] Mizutani, E. & Demmel, J.W. Iterative scaled trust-region learning in Krylov subspaces via Pearlmutter?s
implicit sparse Hessian-vector multiply. In S. Thrun, L. Saul, and B. Sch o? lkopf, editors, Advances in
Neural Information Processing Systems, MIT Press, 16:209?216, 2004.
[21] Gould, N.I.M., Lucidi, S., Roma, M. & Toint, Ph.L. Solving the trust-region subproblem using the
Lanczos method. SIAM Journal on Optimization, 9(2):504?525, 1999.
[22] Rojas, M., Santos, S.A. & Sorensen, D.C. A New Matrix-Free Algorithm for the Large-Scale TrustRegion Subproblem. SIAM Journal on Optimization, 11(3):611?646, 2000.
9
| 4046 |@word middle:1 version:1 norm:2 d2:6 confirms:1 simulation:2 jacob:1 p0:5 attainable:1 pick:1 thereby:1 solid:7 initial:5 configuration:3 ev1:2 hereafter:1 existing:1 yet:2 must:2 readily:2 numerical:6 distant:1 shape:1 plot:2 update:2 stationary:14 alone:1 intelligence:1 slowing:1 plane:1 steepest:4 realizing:1 iterates:1 node:28 location:1 sigmoidal:3 c22:5 simpler:1 five:7 zhang:1 mathematical:1 along:3 become:2 prove:1 fitting:1 manner:1 tesi:5 x0:1 indeed:1 roughly:1 p1:9 frequently:1 behavior:1 multi:1 terminal:5 v1t:2 td:2 byrd:1 little:2 totally:4 becomes:9 confused:1 underlying:1 notation:1 medium:1 what:2 santos:1 developed:2 transformation:1 magnified:1 differentiation:1 guarantee:1 berkeley:3 duplicating:1 w8:1 p2j:1 ti:1 xd:2 scaled:1 appear:1 positive:13 engineering:2 local:13 tends:5 limit:3 v2t:1 subscript:2 path:3 bird:1 therein:1 suggests:2 co:1 range:2 acknowledgment:1 practice:1 block:12 definite:13 lost:1 backpropagation:4 procedure:3 powell:2 word:1 needle:1 layered:2 context:1 descending:3 go:1 attention:1 wit:1 hd:3 searching:1 variation:1 pt:9 target:8 suppose:2 programming:1 lucidi:1 origin:3 element:1 satisfying:1 ep:1 observed:1 subproblem:2 solved:2 descend:1 region:11 sol:5 decrease:1 substantial:1 mentioned:1 pd:1 dynamic:3 caianiello:1 solving:6 algebra:1 univ:1 london:2 demmel:1 gaussnewton:1 choosing:2 h0:1 outside:2 quite:2 whose:1 posed:7 say:1 amari:5 noisy:1 final:3 obviously:2 eigenvalue:15 net:5 p4:1 turned:1 degenerate:1 insensitivity:2 intuitive:1 exploiting:1 convergence:1 double:1 produce:2 incremental:1 develop:1 progress:1 eq:33 p2:8 implies:4 indicate:1 direction:11 thick:1 drawback:1 closely:1 alleviate:1 investigation:1 singularity:20 mathematically:1 clarify:1 hold:5 considered:1 wright:1 normal:1 great:2 scope:2 claim:4 omitted:1 estimation:1 realizes:1 tanh:1 council:1 interlace:4 ozeki:3 fukumizu:2 minimization:1 mit:1 clearly:2 always:9 modified:1 hj:11 conjunction:1 vk:7 rank:21 hk:9 industrial:3 rigorous:1 contrast:1 tech:1 realizable:1 sense:2 tooutput:1 mizutani:3 accumulated:1 typically:1 entire:2 hidden:40 italian:1 relation:2 expand:1 jennrich:2 issue:2 aforementioned:2 among:2 classification:3 denoted:4 augment:1 orientation:1 art:1 special:1 field:1 construct:2 equal:1 shaped:1 having:3 intriguingly:1 identical:3 stuart:1 look:1 park:2 nearly:4 alter:1 interchanging:2 gordon:1 escape:3 pathological:1 randomly:1 national:1 attractor:3 n1:3 attempt:1 technometrics:1 mlp:30 interest:1 highly:1 investigate:1 multiply:1 certainly:3 weakness:1 yielding:2 pc:4 tj:1 held:3 sorensen:1 immense:1 necessary:2 disprove:1 orthogonal:8 desired:2 fitted:1 instance:3 column:8 boolean:1 gn:3 lanczos:2 cost:1 entry:13 examining:1 characterize:1 unattainable:1 perturbed:1 teacher:1 siam:3 boer:3 stay:1 off:23 connecting:4 p2k:1 concrete:1 w1:3 squared:2 again:2 thesis:1 management:1 choose:1 ieor:1 corner:3 resort:1 derivative:2 leading:4 suggesting:1 c12:4 b2:9 wk:2 includes:1 sec:4 matter:2 explicitly:1 vi:4 depends:1 later:2 root:2 view:2 h1:2 break:1 parallel:1 minimize:1 square:5 hmin:1 xor:10 became:1 who:1 yield:3 correspond:1 landscape:1 weak:1 lkopf:1 produced:1 confirmed:1 plateau:9 sharing:1 ed:3 rdt:1 definition:1 failure:1 pp:5 intentionally:1 e2:5 associated:4 proof:9 proved:4 adjusting:2 ut:2 organized:1 obtainable:1 actually:1 back:2 globalization:1 appears:1 higher:1 dt:1 follow:2 response:1 wei:1 improved:1 arranged:1 evaluated:3 furthermore:1 just:1 implicit:1 sketch:1 dennis:2 trust:5 nonlinear:11 eig:4 propagation:1 
Feature Set Embedding for Incomplete Data
Iain Melvin
NEC Labs America
Princeton, NJ
[email protected]
David Grangier
NEC Labs America
Princeton, NJ
[email protected]
Abstract
We present a new learning strategy for classification problems in which train and/or
test data suffer from missing features. In previous work, instances are represented
as vectors from some feature space and one is forced to impute missing values or
to consider an instance-specific subspace. In contrast, our method considers instances as sets of (feature,value) pairs which naturally handle the missing value
case. Building onto this framework, we propose a classification strategy for sets.
Our proposal maps (feature,value) pairs into an embedding space and then nonlinearly combines the set of embedded vectors. The embedding and the combination parameters are learned jointly on the final classification objective. This simple
strategy allows great flexibility in encoding prior knowledge about the features in
the embedding step and yields advantageous results compared to alternative solutions over several datasets.
1 Introduction
Many applications require classification techniques dealing with train and/or test instances with missing features: e.g. a churn predictor might deal with incomplete log features for new customers,
a spam filter might be trained from data originating from servers storing different features, a face
detector might deal with images for which high resolution cues are corrupted.
In this work, we address a learning setting in which the missing features are either missing at random [6], i.e. deletion due to corruption or noise, or structurally missing [4], i.e. some features do not
make sense for some examples, e.g. activity history for new customers. We do not consider setups
in which the features are maliciously deleted to fool the classifier [5]. Techniques for dealing with
incomplete data fall mainly into two categories: techniques which impute the missing features and
techniques considering an instance-specific subspace.
Imputation-based techniques are the most common. In this case, the data instances are viewed as
feature vectors in a high-dimensional space and the classifier is a function from this space into the
discrete set of classes. Prior to classification, the missing vector components need to be imputed.
Early imputation approaches fill any missing value with a constant, zero or the average of the feature
over the observed cases [18]. This strategy neglects inter-feature correlation, and completion techniques based on k-nearest-neighbors (k-NN) have subsequently been proposed to circumvent this
limitation [1]. Along this line, more complex strategies based on generative models have been used
to fill missing features according to the most likely value given the observed features. In this case, the
Expectation-Maximization algorithm is typically adopted to estimate the data distribution over the
incomplete training data [9]. Building upon this generative model strategy, several approaches have
considered integrating out the missing values, either by integrating the loss [2] or the decision function [22]. Recently, [15] and [6] have proposed to avoid the initial maximum likelihood distribution
estimation. Instead, they proposed to learn jointly the generative model and the decision function to
optimize the final classification loss.
As an alternative to imputation-based approaches, [4] has proposed a different framework. In this
case, each instance is viewed as a vector from a subspace of the feature space determined by its
[Figure 1: schematic of the model. An input example with observed pairs (Feature A, 0.15), (Feature E, 0.28), (Feature F, 0.77) and Features B, C, D, G missing is mapped pair-by-pair into the set embedding p(A, 0.15), p(E, 0.28), p(F, 0.77); the embedded vectors are pooled by Φ(...) into a single vector and scored against Classes 1-5 through the linear map V.]
Figure 1: Feature Set Embedding: An example is given as a set of (feature, value) pairs. Each pair
is mapped into an embedding space, then the embedded vectors are combined into a single vector
(either linearly with mean or non-linearly with max). Linear classification is then applied. Our
learning procedure learns both the embedding space and the linear classifier jointly.
observed features. A decision function is learned for each specific subspace and parameter sharing
between the functions allows the method to achieve tractability and generalization. Compared to
imputation-based approaches, this strategy avoids choosing a generative model, i.e. making an assumption about the missing data. Other alternatives to imputation have been proposed in [10] and
[5]. These approaches focus on linear classifiers and propose learning procedures which avoid concentrating the weights on a small subset of the features, which helps achieve better robustness with
respect to feature deletion.
In this work, we propose a novel strategy called feature set embedding. Contrary to previous work,
we do not consider instances as vectors from a given feature space. Instead, we consider instances
as a set of (feature, value) pairs and propose to learn to classify sets directly. For that purpose, we
introduce a model which maps each (feature, value) pair onto an embedding space and combines the
embedded pairs into a single vector before applying a linear classifier, see Figure 1. The embedding
space mapping and the linear classifier are jointly learned to maximize the conditional probability
of the label given the observed input. Contrary to previous work, this set embedding framework
naturally handles incomplete data without modeling the missing feature distribution, or considering
an instance specific decision function. Compared to other work on learning from sets, our approach
is original as it proposes to learn to embed set elements and to classify sets as a single optimization
problem, while prior strategies learn their decision function considering a fixed mapping from sets
into a feature space [12, 3].
The rest of the paper is organized as follows: Section 2 presents the proposed approach, Section 3
describes our experiments and results. Section 4 concludes.
2 Feature Set Embedding

We denote an example as (X, y) where X = {(f_i, v_i)}_{i=1}^{|X|} is a set of (feature, value) pairs and y is a class label in Y = {1, . . . , k}. The set of features is discrete, i.e. ∀i, f_i ∈ {1, . . . , d}, while the feature values are either continuous or discrete, i.e. ∀i, v_i ∈ V_{f_i} where V_{f_i} = R or V_{f_i} = {1, . . . , c_{f_i}}. Given a labeled training dataset D_train = {(X_i, y_i)}_{i=1}^{n}, we propose to learn a classifier g which predicts a class from an input set X.

For that purpose, we combine two levels of modeling. At the lower level, (feature, value) pairs are individually mapped into an embedding space of dimension m: given an example X = {(f_i, v_i)}_{i=1}^{|X|}, a function p predicts an embedding vector p_i = p(f_i, v_i) ∈ R^m for each feature value pair (f_i, v_i). At the upper level, the embedded vectors are combined to make the class prediction: a function h takes the set of embedded vectors {p_i}_{i=1}^{|X|} and predicts a vector of confidence values h({p_i}_{i=1}^{|X|}) ∈ R^k in which the correct class should be assigned the highest value. Our classifier composes the two levels, i.e. g = h ∘ p. Intuitively, the first level extracts the information relevant to class prediction provided by each feature, while the second level combines this information over all observed features.
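This set representation is easy to make concrete. Below is a minimal Python sketch, ours rather than the paper's code, of how an example can be stored as a set of (feature, value) pairs; the class and field names are assumptions for illustration.

```python
# A minimal sketch of the data representation: an example is a set of
# (feature, value) pairs plus a class label, so missing features are
# simply absent from the set.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Example:
    pairs: List[Tuple[int, float]]  # [(feature_id, value), ...], observed features only
    label: int                      # y in {0, ..., k-1}

# A vector with missing components, e.g. [0.15, None, None, None, 0.28, 0.77, None],
# becomes the set {(0, 0.15), (4, 0.28), (5, 0.77)}:
x = Example(pairs=[(0, 0.15), (4, 0.28), (5, 0.77)], label=2)
```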
2.1 Feature Embedding

Feature embedding offers great flexibility. It can accommodate discrete and continuous data and allows encoding prior knowledge on characteristics shared between groups of features. For discrete features, the simplest embedding strategy learns a distinct parameter vector for each (f, v) pair, i.e.
\[ p(f, v) = L_{f,v} \quad \text{where} \quad L_{f,v} \in R^m. \]
For capacity control, rank regularization can be applied,
\[ p(f, v) = W L_{f,v} \quad \text{where} \quad L_{f,v} \in R^l \text{ and } W \in R^{m \times l}. \]
In this case, l < m is a hyperparameter bounding the rank of W L, where L denotes the matrix concatenating all L_{f,v} vectors. One can further indicate that two pairs (f, v) and (f, v') originate from the same feature by parameterizing L_{f,v} as
\[ L_{f,v} = \begin{bmatrix} L_f^{(a)} \\ L_{f,v}^{(b)} \end{bmatrix} \quad \text{where} \quad \begin{cases} L_f^{(a)} \in R^{l^{(a)}} \text{ and } L_{f,v}^{(b)} \in R^{l^{(b)}} \\ l^{(a)} + l^{(b)} = l \end{cases} \tag{1} \]
Similarly, one can indicate that two pairs (f, v) and (f', v) share the same value by parameterizing
\[ L_{f,v} = \begin{bmatrix} L_{f,v}^{(a)} \\ L_v^{(b)} \end{bmatrix} \quad \text{where} \quad \begin{cases} L_{f,v}^{(a)} \in R^{l^{(a)}} \text{ and } L_v^{(b)} \in R^{l^{(b)}} \\ l^{(a)} + l^{(b)} = l \end{cases} \tag{2} \]
This is useful when feature values share a common physical meaning, like gray levels at different pixel locations or temperatures measured by different sensors. Of course, the parameter sharing strategies (1) and (2) can be combined.

When the feature values are continuous, we adopt a similar strategy and define
\[ p(f, v) = W \begin{bmatrix} L_f^{(a)} \\ v\, L_f^{(b)} \end{bmatrix} \quad \text{where} \quad \begin{cases} L_f^{(a)} \in R^{l^{(a)}} \text{ and } L_f^{(b)} \in R^{l^{(b)}} \\ l^{(a)} + l^{(b)} = l \end{cases} \tag{3} \]
where L_f^{(a)} informs about the presence of feature f, while v L_f^{(b)} informs about its value. If the model is thought not to need presence information, L_f^{(a)} can be omitted, i.e. l^{(a)} = 0.
When the dataset contains a mix of continuous and discrete features, both embedding approaches can
be used jointly. Feature embedding is hence a versatile strategy; the practitioner defines the model
parameterization according to the nature of the features, and the learned parameters L and W encode
the correlation between features.
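As an illustration, here is a minimal NumPy sketch of the continuous-feature embedding of Eq. (3); the array names (La, Lb, W) and the dimensions are assumptions of ours, not code from the paper.

```python
# A sketch of the continuous-feature embedding of Eq. (3).
import numpy as np

d, l, m = 7, 4, 6          # number of features, low-rank dim l, embedding dim m
l_a, l_b = 2, 2            # split of l into presence / value parts, l_a + l_b = l
rng = np.random.default_rng(0)
La = rng.normal(size=(d, l_a))   # L_f^(a): presence information per feature
Lb = rng.normal(size=(d, l_b))   # L_f^(b): value information per feature
W = rng.normal(size=(m, l))      # projection bounding the rank of W L

def embed(f, v):
    """p(f, v) = W [L_f^(a); v * L_f^(b)] for a continuous feature value v."""
    Lfv = np.concatenate([La[f], v * Lb[f]])   # vector in R^l
    return W @ Lfv                              # vector in R^m

p = embed(0, 0.15)   # embedding of the pair (feature 0, value 0.15)
```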
2.2 Classifying from an Embedded Feature Set

The second level of our architecture, h, considers the set of embedded features and predicts a vector of confidence values. Given an example X = {(f_i, v_i)}_{i=1}^{|X|}, the function h takes the set P = {p(f_i, v_i)}_{i=1}^{|X|} as input, and outputs h(P) ∈ R^k according to
\[ h(P) = V\, \Phi(P), \]
where Φ is a function which takes a set of vectors of R^m and outputs a single vector of R^m, while V is a k-by-m matrix. This second level is hence related to kernel methods for sets, which first apply a fixed mapping Φ from sets to vectors, before learning a linear classifier in the feature space [12]. In our case, however, we make sure that Φ is a generalized differentiable function [19], so that h and p can be optimized jointly. In the following, we consider two alternatives for Φ: a linear function, the mean, and a non-linear function, the component-wise max.

Linear Model In this case, one can remark that
\[ h(P) = V\, \text{mean}(\{p(f_i, v_i)\}_{i=1}^{|X|}) = V\, \text{mean}(\{W L_{f_i,v_i}\}_{i=1}^{|X|}) = V W\, \text{mean}(\{L_{f_i,v_i}\}_{i=1}^{|X|}) \]
by linearity of the mean. Hence, in this case, the dimension of the embedding space m bounds the rank of the matrix V W. This also means that considering m > k is irrelevant in the linear case. In the specific case where features are continuous and no presence information is provided, i.e. L_{f,v} = v L_f^{(b)}, our model is equivalent to a classical linear classifier operating on feature vectors when all features are present, i.e. |X| = d:
\[ g(X) = V W\, \text{mean}(\{L_{f_i,v_i}\}_{i=1}^{d}) = \frac{1}{d}\, V W \sum_{i=1}^{d} v_i L_{f_i}^{(b)} = \frac{1}{d}\,(V W L)\, v, \]
where L denotes the matrix [L_{f_1}^{(b)}, . . . , L_{f_d}^{(b)}] and v denotes the vector [v_1, . . . , v_d]. Hence, in this case, our model corresponds to
\[ g(X) = M v \quad \text{where} \quad M \in R^{k \times d} \ \text{s.t.} \ \text{rank}(M) = \min\{k, l, m, d\}. \]
Non-linear Model In this case, we rely on the component-wise max. This strategy can model
more complex decision functions. In this case, selecting m > k, l is meaningful. Intuitively, each
dimension in the embedding space provides a meta-feature describing each (feature, value) pair,
the max operator then outputs the best meta-feature match over the set of (feature, value) pairs,
performing a kind of soft-OR, i.e. checking whether there is at least one pair for which the meta-feature is high. The final classification decision is then taken as a linear combination of the m
soft-ORs. One can relate our use of the max operator to its common use in fixed set mapping for
computer vision [3].
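The two combination choices are straightforward to express on top of the embedding sketch above; the following snippet (ours) scores a set with either the mean or the component-wise max, reusing `embed` from the previous sketch and introducing an assumed class matrix `V`.

```python
# A sketch of the second level: combine the embedded pairs with mean (linear
# model) or component-wise max (non-linear model), then score classes linearly.
import numpy as np

k = 5
V = np.random.default_rng(1).normal(size=(k, 6))   # k-by-m class matrix

def score(pairs, combine="max"):
    P = np.stack([embed(f, v) for f, v in pairs])  # |X| embedded vectors in R^m
    phi = P.max(axis=0) if combine == "max" else P.mean(axis=0)
    return V @ phi                                  # h(P) in R^k

g = score([(0, 0.15), (4, 0.28), (5, 0.77)])
predicted_class = int(np.argmax(g))
```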
2.3 Model Training

Model learning aims at selecting the parameter matrices L, W and V. For that purpose, we maximize the (log) posterior probability of the correct class over the training set D_train = {(X_i, y_i)}_{i=1}^{n}, i.e.
\[ C = \sum_{i=1}^{n} \log P(y_i | X_i), \]
where model outputs are mapped to probabilities through a softmax function, i.e.
\[ P(y|X) = \frac{\exp(g(X)_y)}{\sum_{y'=1}^{k} \exp(g(X)_{y'})}. \]
Capacity control is achieved by selecting the hyperparameters l and m. For linear models, the criterion C is referred to as the multiclass logistic regression objective and [16] has studied the relation
between C and margin maximization. In the binary case (k = 2), the criterion C is often referred to
as the cross entropy objective.
The maximization of C is conducted through stochastic gradient ascent for random initial parameters.
This algorithm enables the addressing of large training sets and has good properties for non-convex
problems [14], which is of interest for our non-linear model and for the linear model when rank
regularization is used. One can note that our non-linear model relies on the max function, which is
not differentiable everywhere. However, [8] has shown that gradient ascent can also be applied to
generalized differentiable functions, which is the case of our criterion.
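A minimal sketch of one such stochastic gradient ascent step is given below; for brevity it only updates the output matrix V (the gradients for W, La and Lb follow by backpropagating through the mean or max, the latter handled as a generalized gradient in the sense of [8]). It reuses the `Example` and `embed` sketches above; everything here is our illustration, not the authors' code.

```python
# One stochastic step of gradient ascent on C = log P(y|X), updating V only.
import numpy as np

def train_step(example, V, lr=0.1, combine="max"):
    P = np.stack([embed(f, v) for f, v in example.pairs])
    phi = P.max(axis=0) if combine == "max" else P.mean(axis=0)
    scores = V @ phi
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                      # softmax: P(y|X)
    grad_scores = -probs                      # d log P(y|X) / d scores
    grad_scores[example.label] += 1.0
    V += lr * np.outer(grad_scores, phi)      # ascent on the log posterior
    return -np.log(probs[example.label])      # negative log-likelihood, for monitoring
```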
3 Experiments

Our experiments consider different setups: features missing at train and test time, features missing only at train time, features missing only at test time. In each case, our model is compared to alternative solutions relying on experimental setups introduced in prior work. Finally, we study our model in various conditions over the larger MNIST dataset.

3.1 Missing Features at Train and Test Time
The setup in which features are missing at train and test time is relevant to applications suffering
sensor failure or communication errors. It is also relevant to applications in which some features are
Table 1: Dataset Statistics

Dataset            Train set  Test set  # eval.  Total #  Missing    Continuous
                   size       size      splits   feat.    feat. (%)  or discrete
UCI sick           2,530      633       5        25       90         c
pima               614        154       5        8        90         c
hepatitis          124        31        5        19       90         c
echo               104        27        5        7        90         c
hypo               2,530      633       5        25       90         c
MNIST-5-vs-6       1,000      200       2        784      25         d
Cars               177        45        5        1,900    62         d
USPS               1,000      6,291     100      256      85*        c
Physics            1,000      5,179     100      78       85*        c
Mine               500        213       100      41       26*        c
MNIST-miss-test†   12×100     12×300    20       784      0 to 99†   d
MNIST-full         60,000     10,000    1        784      0 to 87    d

* Features missing only at training time for USPS, Physics and Mine.
† Features missing only at test time for MNIST-miss-test. This set presents 12 binary problems, 4vs9, 3vs5, 7vs9, 5vs8, 3vs8, 2vs8, 2vs3, 8vs9, 5vs6, 2vs7, 4vs7 and 2vs6, each having 100 examples for training, 200 for validation and 300 for test.
structurally missing, i.e. the measurements are absent because they do not make sense (e.g. see the
car detection experiments).
We compare our model to alternative solutions over the experimental setup introduced in [4]. Three
sets of experiments are considered. The first set relies on binary classification problems from the
UCI repository. For each dataset, 90% of the features are removed at random. The second set of
experiments considers the task of discriminating between handwritten characters of 5 and 6 from
the MNIST dataset. Contrary to UCI, the deleted features have some structure; for each example, a
square area covering 25% of the image surface is removed at random. The third set of experiments
considers detecting cars in images. This task presents a problem where some features are structurally
missing. For each example, regions of interests corresponding to potential car parts are detected, and
features are extracted for each region. For each image, 19 types of region are considered and between
0 and 10 instances of each region have been extracted. Each region is then described by 10 features.
This region extraction process is described in [7]. Hence, at most 1900 = 19 × 10 × 10 features are
provided for each image. Data statistics are summarized in Table 1.
On these datasets, Feature Set Embedding (FSE) is compared to 7 baseline models. These baselines
are all variants of Support Vector Machines (SVMs), suitable for the missing feature problem. Zero,
Mean, GMM and kNN are imputation-based strategies: Zero sets the missing values to zero, Mean
sets the missing values to the average value of the features over the training set, GMM finds the
most likely missing values given the observed ones relying on a Gaussian Mixture learned over the
training set, kNN fills the missing values of an instance based on its k-nearest-neighbors, relying on
the Euclidean distance in the subspace relevant to each pair of examples. Flag relies on the Zero
imputation but complements the examples with binary features indicating whether each feature was
observed or imputed. Finally, geom is a subspace-based strategy [4]; for each example, a classifier
in the subspace corresponding to the observed features is considered. The instance-specific margin
is maximized but the instance-specific classifiers share common weights.
For each experiment, the hyperparameters of our model l, m and the number of training iterations are
validated by first training the model on 4/5 of the training data and assessing it on the remainder of
the training data. A similar strategy has been used for selecting the baseline parameters. The SVM
kernel has notably been validated between linear and polynomial up to order 3. Test performance is
then reported over the best validated parameters.
Table 2 reports the results of our experiments. Overall, FSE performs at least as well as the best alternative for all experiments, except for hepatitis where all models yield almost the same performance.
In the case of structurally missing features, the car experiment shows a substantial advantage for FSE
over the second best approach geom, which was specifically introduced for this kind of setup. During
validation (no validation results are reported due to space constraints), we noted that non-linear
Table 2: Error Rate (%) for Missing Features at Train & Test Time

                FSE  geom  zero  mean  flag  GMM  kNN
UCI sick        9    10    9     37    16    40   30
pima            34   34    34    35    35    35   41
hepatitis       23   22    22    22    22    22   23
echo            33   34    37    33    36    33   33
hypo            5    5     7     35    6     33   19
MNIST-5-vs-6    5    5     5     6     7     5    6
Cars            24   28    39    39    41    38   48

Table 3: Error rate (%) for missing features at train time only

         FSE   meanInput  GMM   meanFeat
USPS     11.7  13.6       9.0   13.2
Physics  23.8  29.2       31.2  29.6
Mines    9.8   11.7       10.5  10.6
models, i.e. the baseline SVM with a polynomial kernel of order 2 and FSE with Φ = max, outperformed
their linear counterparts. We therefore solely validate non-linear FSE in the following: For feature
embedding of continuous data, feature presence information has proven to be useful in all cases, i.e.
l^{(a)} > 0 in Eq. (3). For feature embedding of discrete data, sharing parameters across different
values of the same feature, i.e. Eq. (1), was also helpful in all cases. We also relied on sharing
parameters across different features with the same value, i.e. Eq. (2), for datasets where the feature
values shared a common meaning, i.e. gray levels for MNIST and region features for cars. For the
hyperparameters (l, m) of our model, we observed that the main control on our model capacity is
the embedding size m. Its selection is simple since varying this parameter consistently yields convex
validation curves. The rank regularizer l needed little tuning, yielding stable validation performance
for a wide range of values.
3.2 Missing Features at Train Time
The setup presenting missing features at training time is relevant to applications which rely on different sources for training. Each source might not collect the exact same set of features, or might
have introduced novel features during the data collection process. At test time however, the feature
detector can be designed to collect the complete feature set.
In this case, we compare our model to alternative solutions over the experimental setup introduced
in [6]. Three datasets are considered. The first set USPS considers the task of discriminating between
odd and even handwritten digits over the USPS dataset. The training set is degraded and 85% of the
features are missing. The second set considers the quantum physics data from the KDD Cup 2004 in
which two types of particles generated in high energy collider experiments should be distinguished.
Again, the training set is degraded and 85% of the features are missing. The third set considers
the problem of detecting land-mines from 4 types of sensors, each sensor provides a different set of
features or views. In this case, for each instance, whole views are considered missing during training.
Data statistics are summarized in Table 1 for the three sets.
For this set of experiments, we rely on infinite imputations as a baseline. Infinite imputation is a general technique proposed for the case where features are missing at train time. Instead of pretraining
the distribution governing the missing values with a generative objective, infinite imputations proposes to train the imputation model and the final classifier in a joint optimization framework [6]. In
this context, we consider an SVM with a RBF kernel as the classifier and three alternative imputation
models Mean, GMM and MeanFeat which corresponds to mean imputations in the feature space.
For each experiment, we follow the validation strategy defined in the previous section for FSE. The
validation strategy for tuning the parameters of the other models is described in [6].
Table 3 reports our results. FSE is the best model for the Physics and Mines dataset, and the second
best model for the USPS dataset. In this case, features are highly correlated and GMM imputation
yields a challenging baseline. On the other hand, Physics presents a challenging problem with higher error rates for all models. In this case, feature correlation is low and GMM imputation yields the worst performance, while our model brings a strong improvement.

[Figure 2: Results for MNIST-miss-test (12 binary problems with features missing at test time only); error rate (%) plotted against the number of missing features (0 to 750) for FSE, Dekel & Shamir, and Globerson & Roweis.]
3.3 Missing Features at Test Time
The setup presenting missing features at test time considers applications in which the training data
have been produced with more care than the test data. For example, in a face identification application, customers could provide clean photographs for training while, at test time, the system should
be required to work in the presence of occlusions or saturated pixels.
In this case, we compare our work to [10] and [5]. Both strategies propose to learn a classifier
which avoids assigning high weight to a small subset of features, hence limiting the impact of the
deletion of some features at test time. [10] formulates their strategy as a min-max problem, i.e.
identifying the best classifier under the worst deletion, while [5] relies on an L∞ regularizer to
avoid assigning high weights to few features. We compare our algorithm to these alternatives over
binary problems discriminating handwritten digits originating from MNIST. This experimental setup
has been introduced in [10] and Table 1 summarizes its statistics. In this setup, the data is split
into training, validation and test sets. For a fair comparison, the validation set is used solely to
select hyperparameters, i.e. we do not retrain the model over both training and validation sets after
hyperparameter selection.
Since no features are missing at train time, we adapt our training procedure to take into account
the mismatched conditions between train and test. Each time an example is considered during our
stochastic training procedure, we delete a random subset of its features. The size of this subset is
sampled uniformly between 0 and the total number of features minus 1.
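This training-time deletion is simple to implement; the following sketch (ours, not the authors' code) drops a uniformly-sized random subset of the observed pairs each time an example is visited.

```python
# Random feature deletion for training: the number of deleted features is
# uniform in {0, ..., n-1}, i.e. between 1 and n features survive.
import numpy as np

def randomly_delete(pairs, rng):
    if not pairs:
        return pairs
    keep = rng.integers(1, len(pairs) + 1)                 # kept count, uniform in {1,...,n}
    idx = rng.choice(len(pairs), size=keep, replace=False)
    return [pairs[i] for i in idx]

rng = np.random.default_rng(0)
degraded = randomly_delete([(0, 0.15), (4, 0.28), (5, 0.77)], rng)
```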
Figure 2 plots the error rate as a function of the number of missing features. FSE has a clear advantage
in most settings: it achieves a lower error rate than Globerson & Roweis [10] in all cases, while it
is better than Dekel & Shamir [5], as soon as the number of missing features is above 50, i.e. less
than 6% missing features. In fact, we observe that FSE is very robust to feature deletion; its error
rate remains below 20% for up to 700 missing features, i.e. 90% missing features. On the other hand,
the alternative strategies report performance close to random when the number of missing features
reaches 150, i.e. 20% missing features. Note that [10] and [5] further evaluate their models in an
adversarial setting, i.e. features are intentionally deleted to fool the classifier, that is beyond the scope
of this work.
3.4 MNIST-full experiments
The previous experiments compared our model to prior approaches relying on the experimental setups introduced to evaluate these approaches. These setups proposed small training sets motivated by
the training cost of the compared alternatives (see Table 1). In this section, we stress the scalability
of our learning procedure and study FSE on the whole MNIST dataset with 10 classes and 60, 000
training instances. All conditions are considered: features missing at training time, at testing time,
and at both times.
We train 4 models which have access to training sets with various numbers of available features,
i.e. 100, 200, 500 and 784 features which approximately correspond to 90, 60, 35 and 0% missing
Table 4: Error Rate (%) 10-class MNIST-full Experiments

                      # test features
# train f.    100    300    500    784
100           19.8   8.9    7.5    6.9
300           34.2   7.4    4.8    3.9
500           55.6   12.3   4.8    2.9
784           78.3   46.7   17.8   2.5
random        10.7   2.9    2.1    1.8
features. We train a 5th model referred to as random with the algorithm introduced in Section 3.3, i.e.
all training features are available but the training procedure randomly hides some features each time
it examines an example. All models are evaluated with 100, 200, 500 and 784 available features.
Table 4 reports the results of these experiments. Excluding the random model, the result matrix is
strongly diagonal, e.g. when facing a test problem with 300 available features, the model trained with
300 features is better than the models trained with 100, 500 or 784 features. This is not surprising as
the training distribution is closer to the testing distribution in that case. We also observe that models
facing less features at test time than at train time yield poor performance, while the models trained
with few features yield satisfying performance when facing more features. This seems to suggest that
training with missing features yields more robust models as it prevents the decision function from relying solely on a few specific features that might be corrupted. In other words, training with missing features seems to achieve a similar goal as L∞ regularization [5]. This observation is precisely what led
us to introduce the random training procedure. In this case, the model performs better than all other
models in all conditions, even when all features are present, confirming our regularization hypothesis.
In fact, the results obtained with no missing features (1.8% error) are comparable to the best non-convolutional methods, including traditional neural networks (1.6% error) [20]. Only recent work
on Deep Boltzmann Machines [17] achieved significantly better performance (0.95% error). The
regularization effect of missing training features could be related to noise injection techniques for
regularization [21, 11].
4 Conclusions
This paper introduces Feature Set Embedding for the problem of classification with missing features.
Our approach deviates from the standard classification paradigm: instead of considering examples
as feature vectors, we consider examples as sets of (feature, value) pairs which handle the missing
feature problem more naturally. In order to classify sets, we propose a new strategy relying on two
levels of modeling. At the first level, each (feature, value) is mapped onto an embedding space. At
the second level, the set of embedded vectors is compressed onto a single embedded vector over
which linear classification is applied. Our training algorithm then relies on stochastic gradient ascent
to jointly learn the embedding space and the final linear decision function.
This proposed strategy has several advantages compared to prior work. First, sets are conceptually
better suited than vectors for dealing with missing values. Second, embedding (feature, value) pairs
offers a flexible framework which easily allows encoding prior knowledge about the features. Third,
our experiments demonstrate the effectiveness and the scalability of our approach.
From a broader perspective, the flexible feature embedding framework could go beyond the missing
feature application. In particular, it allows using meta-features (attributes describing a feature) [13],
e.g. the embedding vector of the temperature features in a weather prediction system could be computed from the locations of their sensors. It also enables the designing of a system in which new
sensors are added without requiring full model re-training; in this case, the model could be quickly
adapted by only updating embedding vectors corresponding to the new sensors. Also, our approach
of relying on feature sets offers interesting opportunities for feature selection and adversarial feature
deletion. We plan to study these aspects in the future.
Acknowledgments The authors are grateful to Gal Chechik and Uwe Dick for sharing their data
and experimental setups.
References
[1] G. Batista and M. Monard. A study of k-nearest neighbour as an imputation method. In Hybrid Intelligent Systems (HIS), pages 251-260, 2002.
[2] C. Bhattacharyya, P. K. Shivaswamy, and A. Smola. A second order cone programming formulation for classifying missing data. In Neural Information Processing Systems (NIPS), pages 153-160, 2005.
[3] S. Boughhorbel, J-P. Tarel, and F. Fleuret. Non-mercer kernels for svm object recognition. In British Machine Vision Conference (BMVC), 2004.
[4] G. Chechik, G. Heitz, G. Elidan, P. Abbeel, and D. Koller. Max margin classification of data with absent features. Journal of Machine Learning Research (JMLR), 9:1-21, 2008.
[5] O. Dekel, O. Shamir, and L. Xiao. Learning to classify with missing and corrupted features. Machine Learning Journal, 2010 (to appear).
[6] U. Dick, P. Haider, and T. Scheffer. Learning from incomplete data with infinite imputations. In International Conference on Machine Learning (ICML), 2008.
[7] G. Elidan, G. Heitz, and D. Koller. Learning object shape: From drawings to images. In Conference on Computer Vision and Pattern Recognition (CVPR), pages 2064-2071, 2006.
[8] Y. M. Ermoliev and V. I. Norkin. Stochastic generalized gradient method with application to insurance risk management. Technical Report 21, International Institute for Applied Systems Analysis, 1997.
[9] Z. Ghahramani and M. I. Jordan. Supervised learning from incomplete data via an EM approach. In Neural Information Processing Systems (NIPS), pages 120-127, 1993.
[10] A. Globerson and S. Roweis. Nightmare at test time: robust learning by feature deletion. In International Conference on Machine Learning (ICML), pages 353-360, 2006.
[11] Y. Grandvalet, S. Canu, and S. Boucheron. Noise injection: Theoretical prospects. Neural Computation, 9(5):1093-1108, 1997.
[12] R. Kondor and T. Jebara. A kernel between sets of vectors. In International Conference on Machine Learning (ICML), 2003.
[13] E. Krupka, A. Navot, and N. Tishby. Learning to select features using their properties. Journal of Machine Learning Research (JMLR), 9:2349-2376, 2008.
[14] Y. LeCun, L. Bottou, G. B. Orr, and K. R. Mueller. Efficient backprop. In G. B. Orr and K. R. Mueller, editors, Neural Networks: Tricks of the Trade, chapter 1, pages 9-50. Springer, 1998.
[15] X. Liao, H. Li, and L. Carin. Quadratically gated mixture of experts for incomplete data classification. In International Conference on Machine Learning (ICML), pages 553-560, 2007.
[16] S. Rosset, J. Zhu, and T. Hastie. Margin maximizing loss functions. In Neural Information Processing Systems (NIPS), 2003.
[17] R. Salakhutdinov and H. Larochelle. Efficient learning of deep Boltzmann machines. In Artificial Intelligence and Statistics (AISTATS), 2010.
[18] J. L. Schafer. Analysis of Incomplete Multivariate Data. Chapman & Hall, London, UK, 1998.
[19] N. Z. Shor. Minimization Methods for Non-Differentiable Functions and Applications. Springer, Berlin, Germany, 1985.
[20] P. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. In International Conference on Document Analysis and Recognition (ICDAR), pages 958-962, 2003.
[21] P. Vincent, H. Larochelle, Y. Bengio, and P. A. Manzagol. Extracting and composing robust features with denoising autoencoders. In International Conference on Machine Learning (ICML), pages 1096-1103, 2008.
[22] D. Williams, X. Liao, Y. Xue, and L. Carin. Incomplete-data classification using logistic regression. In International Conference on Machine Learning (ICML), pages 972-979, 2005.
Online Markov Decision Processes under Bandit Feedback
Gergely Neu
Department of Computer Science and Information Theory, Budapest University of Technology and Economics, Hungary
[email protected]

András György
Machine Learning Research Group, MTA SZTAKI Institute for Computer Science and Control, Hungary
[email protected]

Csaba Szepesvári
Department of Computing Science, University of Alberta, Canada
[email protected]

András Antos
Machine Learning Research Group, MTA SZTAKI Institute for Computer Science and Control, Hungary
[email protected]
Abstract
We consider online learning in finite stochastic Markovian environments where in
each time step a new reward function is chosen by an oblivious adversary. The
goal of the learning agent is to compete with the best stationary policy in terms
of the total reward received. In each time step the agent observes the current state
and the reward associated with the last transition, however, the agent does not
observe the rewards associated with other state-action pairs. The agent is assumed
to know the transition probabilities. The state of the art result for this setting is
a no-regret algorithm. In this paper we propose a new learning algorithm and,
assuming that stationary policies mix uniformly fast, we show that after T time
steps, the expected regret of the new algorithm is O(T^{2/3} (ln T)^{1/3}), giving the
first rigorously proved regret bound for the problem.
1 Introduction
We consider online learning in finite Markov decision processes (MDPs) with fixed, known dynamics. The formal problem definition is as follows: An agent navigates in a finite stochastic environment by selecting actions based on the states and rewards experienced previously. At each time
instant the agent observes the reward associated with the last transition and the current state, that is,
at time t + 1 the agent observes rt (xt , at ), where xt is the state visited at time t and at is the action
chosen. The agent does not observe the rewards associated with other transitions, that is, the agent
faces a bandit situation. The goal of the agent is to maximize its total expected reward R̂_T in T steps. As opposed to the standard MDP setting, the reward function at each time step may be different. The only assumption about this sequence of reward functions r_t is that they are chosen ahead of
time, independently of how the agent acts. However, no statistical assumptions are made about the
choice of this sequence. As usual in such cases, a meaningful performance measure for the agent is
how well it can compete with a certain class of reference policies, in our case the set of all stationary
policies: If R_T^* denotes the expected total reward in T steps that can be collected by choosing the best stationary policy (this policy can be chosen based on the full knowledge of the sequence r_t), the goal of learning can be expressed as minimizing the total expected regret, L̂_T = R_T^* − R̂_T.
T
In this paper we propose a new algorithm for this setting. Assuming that the stationary distributions
underlying stationary policies exist, are unique and they are uniformly bounded away from zero and
that these policies mix uniformly fast, our main result shows that the total expected regret of our algorithm in T time steps is O(T^{2/3} (ln T)^{1/3}).
The first work that considered a similar online learning setting is due to Even-Dar et al. (2005,
2009). In fact, this is the work that provides the starting point for our algorithm and analysis. The
major difference between our work and that of Even-Dar et al. (2005, 2009) is that they assume
that the reward function is fully observed (i.e., in each time step the learning agent observes the
whole reward function rt ), whereas we consider the bandit setting. The main result in these works
is a bound on the total expected regret, which scales with the square root of the number of time
steps under mixing assumptions identical to our assumptions. Another work that considered the full
information problem is due to Yu et al. (2009)
who proposed new algorithms and proved a bound on
the expected regret of order O(T^{3/4+ε}) for arbitrary ε ∈ (0, 1/3). The advantage of the algorithm
of Yu et al. (2009) to that of Even-Dar et al. (2009) is that it is computationally less expensive, which,
however, comes at the price of an increased bound on the regret. Yu et al. (2009) introduced another
algorithm ("Q-FPL") and they have shown a sublinear (o(T)) almost sure bound on the regret.
All the works reviewed so far considered the full information case. The requirement that the full
reward function must be given to the agent at every time step significantly limits their applicability.
There are only three papers that we know of where the bandit situation was considered.
The first paper which falls into this category is due to Yu et al. (2009) who proposed an algorithm
("Exploratory FPL") for this setting and have shown an o(T) almost sure bound on the regret.
Recently, Neu et al. (2010) gave O(√T) regret bounds for a special bandit setting when the agent
interacts with a loop-free episodic environment. The algorithm and analysis in this work heavily
exploits the specifics of these environments (i.e., that in the same episode no state can be visited
twice) and so they do not generalize to our setting.
Another closely related work is due to Yu and Mannor (2009a,b) who considered the problem of
online learning in MDPs where the transition probabilities may also change arbitrarily after each
transition. This problem, however, is significantly different from ours and the algorithms studied
are unsuitable for our setting. Further, the analysis in these papers seems to have gaps (see Neu
et al., 2010). Thus, currently, the only result for the case considered in this paper is an asymptotic
"no-regret" result.
The rest of the paper is organized as follows: The problem is laid out in Section 2, which is followed
by a section about our assumptions (Section 3). The algorithm and the main result are given in
Section 4, while a proof sketch of the latter is presented in Section 5.
2 Problem definition

Formally, a finite Markov Decision Process (MDP) M is defined by a finite state space X, a finite action set A, a transition probability kernel P : X × A × X → [0, 1], and a reward function r : X × A → [0, 1]. In time step t ∈ {1, 2, . . .}, knowing the state x_t ∈ X, an agent acting in the MDP M chooses an action a_t ∈ A(x_t) to be executed based on (x_t, r(a_{t−1}, x_{t−1}), a_{t−1}, x_{t−1}, . . . , x_2, r(a_1, x_1), a_1, x_1).¹ Here A(x) ⊆ A is the set of admissible actions at state x. As a result of executing the chosen action the process moves to state x_{t+1} ∈ X with probability P(x_{t+1}|x_t, a_t) and the agent receives reward r(x_t, a_t). In the so-called average-reward problem, the goal of the agent is to maximize the average reward received over time. For a more detailed introduction the reader is referred to, for example, Puterman (1994).
2.1 Online learning in MDPs

In this paper we consider the online version of MDPs when the reward function is allowed to change arbitrarily. That is, instead of a single reward function r, a sequence of reward functions {r_t} is given. This sequence is assumed to be fixed ahead of time, and, for simplicity, we assume that r_t(x, a) ∈ [0, 1] for all (x, a) ∈ X × A and t ∈ {1, 2, . . .}. No other assumptions are made about this sequence.
¹ We follow the convention that boldface letters denote random variables.
The learning agent is assumed to know the transition probabilities P , but is not given the sequence
{rt }. The protocol of interaction with the environment is unchanged: At time step t the agent
receives xt and then selects an action at which is sent to the environment. In response, the reward
rt (xt , at ) and the next state xt+1 are communicated to the agent. The initial state x1 is generated
from a fixed distribution P0 .
The goal of the learning agent is to maximize its expected total reward
\[ \hat{R}_T = \mathbb{E}\left[\, \sum_{t=1}^{T} r_t(x_t, a_t) \right]. \]
An equivalent goal is to minimize the regret, that is, to minimize the difference between the expected
total reward received by the best algorithm within some reference class and the expected total reward
of the learning algorithm. In the case of MDPs a reasonable reference class, used by various previous
works (e.g., Even-Dar et al., 2005, 2009; Yu et al., 2009) is the class of stationary stochastic policies.2
A stationary stochastic policy, π, (or, in short: a policy) is a mapping π : A × X → [0, 1], where π(a|x) ≡ π(a, x) is the probability of taking action a in state x. We say that a policy π is followed in an MDP if the action at time t is drawn from π, independently of previous states and actions given the current state x'_t: a'_t ∼ π(·|x'_t). The expected total reward while following a policy π is defined as
\[ R_T^\pi = \mathbb{E}\left[\, \sum_{t=1}^{T} r_t(x'_t, a'_t) \right]. \]
Here {(x'_t, a'_t)} denotes the trajectory that results from following policy π from x'_1 ∼ P_0.

The expected regret (or expected relative loss) of the learning agent relative to the class of policies (in short, the regret) is defined as
\[ \hat{L}_T = \sup_{\pi} R_T^\pi - \hat{R}_T, \]
where the supremum is taken over all (stochastic stationary) policies. Note that the optimal policy is chosen in hindsight, depending acausally on the reward function. If the regret of an agent grows sublinearly with T then we can say that in the long run it acts as well as the best (stochastic stationary) policy (i.e., the average expected regret of the agent is asymptotically equal to that of the best policy).
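For a fixed reward function, the average reward of a stationary policy can be computed directly from its stationary distribution; the following NumPy sketch (ours, with assumed array shapes, not the authors' code) makes the definitions above concrete.

```python
# Compute the stationary distribution mu^pi of P^pi and the average reward per
# stage sum_x mu(x) sum_a pi(a|x) r(x,a) for a fixed reward function r.
import numpy as np

def stationary_average_reward(P, pi, r):
    """P: (nx, na, nx) transition kernel, pi: (nx, na) policy, r: (nx, na) rewards."""
    Ppi = np.einsum('xa,xay->xy', pi, P)        # P^pi(x'|x)
    evals, evecs = np.linalg.eig(Ppi.T)
    mu = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    mu /= mu.sum()                               # stationary distribution mu^pi
    return float(np.sum(mu[:, None] * pi * r))   # average reward per stage
```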
3 Assumptions

In this section we list the assumptions that we make throughout the paper about the transition probability kernel (hence, these assumptions will not be mentioned in the subsequent results). In addition, recall that we assume that the rewards are bounded to [0, 1].

Before describing the assumptions, a few more definitions are needed: Let π be a stationary policy. Define P^π(x'|x) = Σ_a π(a|x) P(x'|x, a). We will also view P^π as a matrix: (P^π)_{x,x'} = P^π(x'|x), where, without loss of generality, we assume that X = {1, 2, . . . , |X|}. In general, distributions will also be treated as row vectors. Hence, for a distribution µ, µP^π is the distribution over X that results from using policy π for one step from µ (i.e., the "next-state distribution" under π). Remember that the stationary distribution of a policy π is a distribution µ which satisfies µP^π = µ.

Assumption A1 Every policy π has a well-defined unique stationary distribution µ^π.

Assumption A2 The stationary distributions are uniformly bounded away from zero: inf_{π,x} µ^π(x) ≥ β for some β > 0.

Assumption A3 There exists some fixed positive τ such that for any two arbitrary distributions µ and µ' over X,
\[ \sup_{\pi} \|(\mu - \mu') P^\pi\|_1 \le e^{-1/\tau} \|\mu - \mu'\|_1, \]
where ‖·‖₁ is the 1-norm of vectors: ‖v‖₁ = Σ_i |v_i|.
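For a concrete finite MDP, Assumption A3 can be probed numerically; the sketch below (ours, a rough heuristic rather than a verification) estimates the 1-norm contraction coefficient of a given P^π over random pairs of distributions, from which a candidate τ can be read off via e^{−1/τ}.

```python
# Estimate sup ||(nu - nu')P^pi||_1 / ||nu - nu'||_1 over random distribution
# pairs; if the estimate is at most exp(-1/tau) for all policies of interest,
# Assumption A3 plausibly holds with that tau.
import numpy as np

def contraction_estimate(Ppi, trials=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    nx = Ppi.shape[0]
    worst = 0.0
    for _ in range(trials):
        nu, nu2 = rng.dirichlet(np.ones(nx)), rng.dirichlet(np.ones(nx))
        num = np.abs((nu - nu2) @ Ppi).sum()
        den = np.abs(nu - nu2).sum()
        worst = max(worst, num / den)
    return worst
```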
² This is a reasonable reference class because for a fixed reward function one can always find a member of it which maximizes the average reward per time step, see Puterman (1994).
Note that Assumption A3 implies Assumption A1. The quantity τ is called the mixing time underlying P by Even-Dar et al. (2009) who also assume A3.

4 Learning in online MDPs under bandit feedback
In this section we shall first introduce some additional, standard MDP definitions, which will be used later. That these are well-defined follows from our assumptions on P and from standard results
to be found, for example, in the book by Puterman (1994). After the definitions, we specify our
algorithm. The section is finished by the statement of our main result concerning the performance
of the proposed algorithm.
4.1 Preliminaries

Fix an arbitrary policy π and t ≥ 1. Let {(x'_s, a'_s)} be the random trajectory generated by π and the transition probability kernel P. Define ρ_t^π, the average reward per stage corresponding to π, P and r_t, by
\[ \rho_t^\pi = \lim_{S \to \infty} \frac{1}{S} \sum_{s=0}^{S} \mathbb{E}[r_t(x'_s, a'_s)]. \]
An alternative expression for ρ_t^π is ρ_t^π = Σ_x µ^π(x) Σ_a π(a|x) r_t(x, a), where µ^π is the stationary distribution underlying π. Let q_t^π be the action-value function of π, P and r_t and v_t^π be the corresponding state-value function. These can be uniquely defined as the solutions of the following Bellman equations:
\[ q_t^\pi(x, a) = r_t(x, a) - \rho_t^\pi + \sum_{x'} P(x'|x, a)\, v_t^\pi(x'), \qquad v_t^\pi(x) = \sum_{a} \pi(a|x)\, q_t^\pi(x, a). \]
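These Bellman (Poisson) equations can be solved with a single linear system once the stationary distribution is known; the following sketch (ours, not the authors' code) evaluates a fixed policy, fixing the additive constant in v by the normalization µ^π · v = 0.

```python
# Average-reward policy evaluation: solve v = r_pi - rho*1 + P^pi v with the
# normalization mu^pi . v = 0, then recover q(x,a) = r(x,a) - rho + sum_x' P v.
import numpy as np

def evaluate_policy(P, pi, r):
    nx, na = pi.shape
    Ppi = np.einsum('xa,xay->xy', pi, P)
    evals, evecs = np.linalg.eig(Ppi.T)
    mu = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    mu /= mu.sum()
    r_pi = (pi * r).sum(axis=1)                       # expected reward per state
    rho = float(mu @ r_pi)                            # average reward per stage
    v = np.linalg.solve(np.eye(nx) - Ppi + np.outer(np.ones(nx), mu),
                        r_pi - rho)                   # state values, mu . v = 0
    q = r - rho + np.einsum('xay,y->xa', P, v)        # action values
    return q, v, rho
```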
Now, consider the trajectory {(x_t, a_t)} underlying a learning agent, where x_1 is randomly chosen from P_0, and define
\[ u_t = (x_1, a_1, r_1(x_1, a_1), x_2, a_2, r_2(x_2, a_2), \dots, x_t, a_t, r_t(x_t, a_t)) \]
and π_t(a|x) = P[a_t = a | u_{t−1}, x_t = x]. That is, π_t denotes the policy followed by the agent at time step t (which is computed based on past information and is therefore random). We will use the following notation:
\[ q_t = q_t^{\pi_t}, \qquad v_t = v_t^{\pi_t}, \qquad \rho_t = \rho_t^{\pi_t}. \]
Note that q_t, v_t satisfy the Bellman equations underlying π_t, P and r_t.

For reasons to be made clear later in the paper, we shall need the state distribution at time step t given that we start from the state-action pair (x, a) at time t − N, conditioned on the policies used between time steps t − N and t:
\[ \mu^N_{t,x,a}(x') \stackrel{\text{def}}{=} \mathbb{P}\left[ x_t = x' \,\middle|\, x_{t-N} = x,\ a_{t-N} = a,\ \pi_{t-N+1}, \dots, \pi_{t-1} \right], \qquad x, x' \in X,\ a \in A. \]
It will be useful to view µ^N_t as a matrix of dimensions |X × A| × |X|. Thus, µ^N_{t,x,a}(·) will be viewed as one row of this matrix. To emphasize the conditional nature of this distribution, we will also use µ^N_t(·|x, a) instead of µ^N_{t,x,a}(·).
4.2 The algorithm

Our algorithm is similar to that of Even-Dar et al. (2009) in that we use an expert algorithm in each state. Since in our case the full reward function r_t is not observed, the agent uses an estimate of it. The main difficulty is to come up with an unbiased estimate of r_t with a controlled variance. Here we propose to use the following estimate:
\[ \hat{r}_t(x, a) = \begin{cases} \dfrac{r_t(x, a)}{\pi_t(a|x)\, \mu^N_t(x | x_{t-N}, a_{t-N})} & \text{if } (x, a) = (x_t, a_t), \\[4pt] 0 & \text{otherwise,} \end{cases} \tag{1} \]
where t ≥ N + 1. Define q̂_t, v̂_t and ρ̂_t as the solution to the Bellman equations underlying the average reward MDP defined by (P, π_t, r̂_t):
\[ \hat{q}_t(x, a) = \hat{r}_t(x, a) - \hat{\rho}_t + \sum_{x'} P(x'|x, a)\, \hat{v}_t(x'), \qquad \hat{v}_t(x) = \sum_{a} \pi_t(a|x)\, \hat{q}_t(x, a), \]
\[ \hat{\rho}_t = \sum_{x,a} \mu^{\pi_t}(x)\, \pi_t(a|x)\, \hat{r}_t(x, a). \tag{2} \]
Note that if N is sufficiently large and π_t changes sufficiently slowly then
\[ \mu^N_t(x | x_{t-N}, a_{t-N}) > 0, \tag{3} \]
almost surely, for arbitrary x ∈ X, t ≥ N + 1. This fact will be shown in Lemma 4. Now, assume that π_t is computed based on u_{t−N}, that is, π_t is measurable with respect to the σ-field σ(u_{t−N}) generated by the history u_{t−N}:
\[ \pi_t \in \sigma(u_{t-N}). \tag{4} \]
Then also π_{t−1}, . . . , π_{t−N} ∈ σ(u_{t−N}) and µ^N_t can be computed using
\[ \mu^N_{t,x,a} = e_x P^a P^{\pi_{t-N+1}} \cdots P^{\pi_{t-1}}, \tag{5} \]
where P^a is the transition probability matrix when in every state action a is used and e_x is the unit row vector corresponding to x (and we assumed that X = {1, . . . , |X|}). Moreover, a simple but tedious calculation shows that (3) and (4) ensure the conditional unbiasedness of our estimates, that is,
\[ \mathbb{E}[\hat{r}_t(x, a) \,|\, u_{t-N}] = r_t(x, a). \tag{6} \]
It then follows that
\[ \mathbb{E}[\hat{\rho}_t \,|\, u_{t-N}] = \rho_t, \]
and, hence, by the uniqueness of the solutions of the Bellman equations, we have, for all (x, a) ∈ X × A,
\[ \mathbb{E}[\hat{q}_t(x, a) \,|\, u_{t-N}] = q_t(x, a) \quad \text{and} \quad \mathbb{E}[\hat{v}_t(x) \,|\, u_{t-N}] = v_t(x). \tag{7} \]
As a consequence, we also have, for all (x, a) ∈ X × A, t ≥ N + 1,
\[ \mathbb{E}[\hat{\rho}_t] = \mathbb{E}[\rho_t], \quad \mathbb{E}[\hat{q}_t(x, a)] = \mathbb{E}[q_t(x, a)], \quad \text{and} \quad \mathbb{E}[\hat{v}_t(x)] = \mathbb{E}[v_t(x)]. \tag{8} \]
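The quantities in (1) and (5) translate directly into code; the sketch below (ours, not the authors' implementation) computes µ^N_t by the matrix product (5) and forms the importance-weighted estimate r̂_t, assuming the list of stored policies π_{t−N+1}, . . . , π_{t−1} is available.

```python
# mu_t^N via (5): start from e_x P^a, then multiply by the stored policies'
# transition matrices; then build the estimate (1), nonzero only at (x_t, a_t).
import numpy as np

def mu_N(P, policies, x_old, a_old):
    dist = P[x_old, a_old]                     # e_x P^a: one step with action a_old
    for pi in policies:                        # then N-1 steps with pi_{t-N+1..t-1}
        Ppi = np.einsum('xa,xay->xy', pi, P)
        dist = dist @ Ppi
    return dist                                # mu_t^N(. | x_{t-N}, a_{t-N})

def reward_estimate(r_t_obs, pi_t, P, policies, x_old, a_old, x_t, a_t, nx, na):
    r_hat = np.zeros((nx, na))
    denom = pi_t[x_t, a_t] * mu_N(P, policies, x_old, a_old)[x_t]
    r_hat[x_t, a_t] = r_t_obs / denom          # importance-weighted, unbiased given u_{t-N}
    return r_hat
```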
The bandit algorithm that we propose is shown as Algorithm 1. It follows the approach of Even-Dar et al. (2009) in that a bandit algorithm is used in each state, which together determine the policy to be used. These bandit algorithms are fed with estimates of action-values for the current policy and the current reward. In our case these action-value estimates are the q̂_t defined earlier, which are based on the reward estimates r̂_t. A major difference is that the policy computed based on the most recent action-value estimates is used only N steps later. This delay allows us to construct unbiased estimates of the rewards. Its price is that we need to store N policies (or weights, leading to the policies), thus, the memory needed by our algorithm scales with N|A||X|. The computational complexity of the algorithm is dominated by the cost of computing r̂_t (and, in particular, by the cost of computing µ^N_t(·|x_{t−N}, a_{t−N})). The cost of this is O(N|A||X|³). In addition to the need of dealing with the delay, we also need to deal with the fact that in our case q_t and q̂_t can both be negative, which must be taken into account in the proper tuning of the algorithm's parameters.
4.3 Main result

Our main result is the following bound concerning the performance of Algorithm 1.

Theorem 1. Let N = ⌈τ ln T⌉,
\[ \eta = T^{-2/3}\,(\ln|A|)^{2/3}\left(\frac{4\tau+8}{\beta}\Big((2\tau+4)^2|A|\ln T + (3\tau+1)^2\Big)\right)^{-1/3}, \]
\[ \gamma = T^{-1/3}\,(2\tau+4)^{-2/3}\left(\frac{2\ln|A|}{\beta}\Big((2\tau+4)^2|A|\ln T + (3\tau+1)^2\Big)\right)^{1/3}. \]
Algorithm 1 Algorithm for the online bandit MDP.
Set N ≥ 1, w_1(x, a) = w_2(x, a) = ··· = w_{2N}(x, a) = 1, γ ∈ (0, 1), η ∈ (0, γ].
For t = 1, 2, ..., T, repeat:
1. Set
   π_t(a|x) = (1 − γ) w_t(x, a) / Σ_b w_t(x, b) + γ / |A|
   for all (x, a) ∈ X × A.
2. Draw an action a_t randomly, according to the policy π_t(·|x_t).
3. Receive reward r_t(x_t, a_t) and observe x_{t+1}.
4. If t ≥ N + 1:
   (a) Compute μ^N_t(x | x_{t-N}, a_{t-N}) for all x ∈ X using (5).
   (b) Construct estimates r̂_t using (1) and compute q̂_t using (2).
   (c) Set w_{t+N}(x, a) = w_{t+N-1}(x, a) e^{η q̂_t(x, a)} for all (x, a) ∈ X × A.
Then the regret can be bounded as
L̂_T ≤ 3 T^{2/3} ( (4τ+8) ln|A| ( (2τ+4) τ |A| ln T + (3τ+1)² ) / α )^{1/3} + O(T^{1/3}) .
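Putting the pieces together, the following is a compact, assumption-laden sketch of Algorithm 1. It reuses the mu_N and bellman_solution helpers from the sketches above, and sample_reward stands in for the adversarially chosen reward sequence; none of these names come from the paper.

```python
import numpy as np

def run_bandit_mdp(P, sample_reward, T, N, gamma, eta, seed=0):
    """Sketch of Algorithm 1. P[x, a, y] are the known transition
    probabilities and sample_reward(t, x, a) returns r_t(x, a) in [0, 1]."""
    rng = np.random.default_rng(seed)
    n_states, n_actions = P.shape[0], P.shape[1]
    w = {t: np.ones((n_states, n_actions)) for t in range(1, 2 * N + 1)}
    policies, visits = {}, {}
    x = 0                                     # arbitrary initial state
    total = 0.0
    for t in range(1, T + 1):
        wt = w[t]
        pi = (1.0 - gamma) * wt / wt.sum(axis=1, keepdims=True) + gamma / n_actions
        policies[t] = pi                      # step 1
        a = rng.choice(n_actions, p=pi[x])    # step 2
        r = sample_reward(t, x, a)            # step 3
        total += r
        visits[t] = (x, a)
        if t >= N + 1:                        # step 4
            x_old, a_old = visits[t - N]
            dist = mu_N(P, [policies[s] for s in range(t - N + 1, t)],
                        x_old, a_old)         # (a): mu^N_t via equation (5)
            r_hat = np.zeros((n_states, n_actions))
            r_hat[x, a] = r / (pi[x, a] * dist[x])        # (b): equation (1)
            q_hat, _, _ = bellman_solution(P, pi, r_hat)  # (b): equation (2)
            w[t + N] = w[t + N - 1] * np.exp(eta * q_hat)  # (c): delayed update
        x = rng.choice(n_states, p=P[x, a])   # observe x_{t+1}
    return total
```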
It is interesting to note that, similarly to the regret bound of Even-Dar et al. (2009), the main term
of the regret bound does not directly depend on the size of the state space, but it depends on it only
through α and the mixing time τ, defined in Assumptions A2 and A3, respectively; however, we also
need to note that α ≤ 1/|X|. While the theorem provides the first rigorously proved finite sample
regret bound for the online bandit MDP problem, we suspect that the given convergence rate is not
sharp, in the sense that it may be possible, in agreement with the standard bandit lower bound of
Auer et al. (2002), to give an algorithm with an O(√T) regret (up to some logarithmic factors).
The proof of the theorem is similar to the proof of a similar bound done for the full-information case
by Even-Dar et al. (2009). Clearly, it suffices to bound R^π_T − R̂_T for an arbitrary fixed policy π. We
use the following decomposition of this difference (also used by Even-Dar et al., 2009):
R^π_T − R̂_T = ( R^π_T − Σ_{t=1}^T ρ^π_t ) + ( Σ_{t=1}^T ρ^π_t − Σ_{t=1}^T ρ_t ) + ( Σ_{t=1}^T ρ_t − R̂_T ).   (9)
The first term is bounded using the following standard MDP result.
Lemma 1 (Even-Dar et al., 2009). For any policy π and any T ≥ 1 it holds that
R^π_T − Σ_{t=1}^T ρ^π_t ≤ 2(τ + 1).
Hence, it remains to bound the expectation of the other terms, which is done in the following two
propositions.
Proposition 1. Let N ≥ ⌈τ ln T⌉. For any policy π and for all T large enough, we have
Σ_{t=1}^T E[ρ^π_t − ρ_t] ≤ (4τ+10) N + ln|A| / η + (2τ+4) T ( γ + (2η/α) ( |A| N (1/γ + 4τ + 6) + (e−2)(2τ+4) ) ).
Proposition 2. Let N ≥ ⌈τ ln T⌉. For any T large enough,
Σ_{t=1}^T E[ρ_t] − R̂_T ≤ T (2η/α) (1/γ + 4τ + 6) (3τ+1)² + 2T e^{−N/τ} + 2N.                    (10)
Note that the choice of N ensures that the second term in (10) becomes O(1).
The proofs are broken into a number of statements presented in the next section. Due to space
constraints we present proof sketches only; the full proofs are presented in the extended version of
the paper.
5 Analysis
5.1 General tools
First, we show that if the policies that we follow up to time step t change slowly, then μ^N_t is "close" to μ^{π_t}:
Lemma 2. Let 1 ≤ N < t ≤ T and c > 0 be such that max_x Σ_a |π_{s+1}(a|x) − π_s(a|x)| ≤ c
holds for 1 ≤ s ≤ t − 1. Then we have
max_{x,a} Σ_{x'} | μ^N_{t,x,a}(x') − μ^{π_t}(x') | ≤ c (3τ+1)² + 2 e^{−N/τ} .
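The frozen-policy (c = 0) case of the lemma is easy to probe numerically; a toy sketch with a random strictly positive chain (our construction, purely illustrative):

```python
import numpy as np

def stationary(P_pi):
    """Left Perron eigenvector of a transition matrix, normalized to sum to 1."""
    w, V = np.linalg.eig(P_pi.T)
    mu = np.real(V[:, np.argmax(np.real(w))])
    return mu / mu.sum()

# With a fixed policy (c = 0), the N-step distribution converges to mu^{pi}
# geometrically fast, matching the 2 e^{-N/tau} term of Lemma 2.
rng = np.random.default_rng(0)
P_pi = rng.random((4, 4)) + 0.1          # strictly positive chain, so it mixes
P_pi /= P_pi.sum(axis=1, keepdims=True)
mu = stationary(P_pi)
for N in (1, 5, 10, 20):
    gap = np.abs(np.linalg.matrix_power(P_pi, N)[0] - mu).sum()
    print(N, gap)                        # the L1 gap shrinks roughly like e^{-N/tau}
```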
In the next two lemmas we compute the rate of change of the policies produced by Exp3 and show
that for a large enough value of N, μ^N_{t,x,a} can be uniformly bounded from below by α/2.
Lemma 3. Assume that for some N+1 ≤ t ≤ T, μ^N_{t,x_{t-N},a_{t-N}}(x') ≥ α/2 holds for all states x'.
Let c = (2η/α) (1/γ + 4τ + 6). Then,
max_x Σ_a |π_{t+N-1}(a|x) − π_{t+N}(a|x)| ≤ c.                                                  (11)
The previous results yield the following lemma, which shows that by choosing the parameters
appropriately, the policies will change slowly and μ^N_t will be uniformly bounded away from zero.
Lemma 4. Let c be as in Lemma 3. Assume that c (3τ+1)² < α/2, and let
N ≥ τ ln ( 4 / (α − 2c (3τ+1)²) ).                                                              (12)
Then, for all N < t ≤ T, x, x' ∈ X and a ∈ A, we have μ^N_{t,x,a}(x') ≥ α/2 and
max_{x'} Σ_{a'} |π_{t+1}(a'|x') − π_t(a'|x')| ≤ c.
This result is proved by first ensuring that μ^N_t is uniformly lower bounded for t = N+1, ..., 2N,
which can be easily seen since the policies do not change in this period. For the rest of the time
instants, one can proceed by induction, using Lemmas 2 and 3 in the inductive step.
5.2 Proof of Proposition 1
The statement is trivial for T ≤ N. The following simple result is the first step in proving
Proposition 1 for T > N.
Lemma 5 (cf. Lemma 4.1 in Even-Dar et al., 2009). For any policy π and t ≥ 1,
ρ^π_t − ρ_t = Σ_{x,a} μ^π(x) π(a|x) [ q_t(x, a) − v_t(x) ].
For every x, a define Q_T(x, a) = Σ_{t=N+1}^T q_t(x, a) and V_T(x) = Σ_{t=N+1}^T v_t(x). The
preceding lemma shows that in order to prove Proposition 1, it suffices to prove an upper bound on
E[Q_T(x, a) − V_T(x)].
Lemma 6. Let c be as in Lemma 3. Assume that γ ∈ (0, 1), c (3τ+1)² < α/2,
N ≥ ⌈τ ln ( 4 / (α − 2c (3τ+1)²) )⌉, 0 < η ≤ α / ( 2 (1/γ + 2τ + 3) ), and T > N hold.
Then, for all (x, a) ∈ X × A,
E[Q_T(x, a) − V_T(x)] ≤ (4τ+8) N + ln|A| / η + (2τ+4) T ( γ + (2η/α) ( |A| N (1/γ + 4τ + 6) + (e−2)(2τ+4) ) ).
Proof sketch. The proof essentially follows the original proof of Auer et al. (2002) concerning the
regret bound of Exp3, although some details are more subtle in our case: our estimates have different
properties than the ones considered in the original proof, and we also have to deal with the N-step
delay.
Let
V̂^N_T(x) = Σ_{t=N+1}^{T-N+1} Σ_a π_{t+N-1}(a|x) q̂_t(x, a)   and   Q̂^N_T(x, b) = Σ_{t=N+1}^{T-N+1} q̂_t(x, b).
Observe that although q̂_t(x, a) is not necessarily positive (in contrast to the rewards in the Exp3
algorithm), one can prove that π_t(a|x) |q̂_t(x, a)| ≤ (4/γ)(τ + 2) and
E[ |q̂_t(x, a)| ] ≤ 2(τ + 2).                                                                    (13)
Similarly, it can be easily seen that the constraint on η ensures that η q̂_t(x, a) ≤ 1 for all x, a, t.
Then, following the proof of Auer et al. (2002), we can show that
V̂^N_T(x) ≥ (1 − γ) Q̂^N_T(x, b) − ln|A| / η − (4/γ)(τ + 2) η (e − 2) Σ_{t=N+1}^{T-N+1} Σ_a |q̂_t(x, a)| .   (14)
Next, since the policies satisfy max_x Σ_a |π_{s+1}(a|x) − π_s(a|x)| ≤ c by Lemma 4, we can prove,
using (8) and (13), that
E[ V̂^N_T(x) ] ≤ E[V_T(x)] + 2(τ + 2) N (c T |A| + 1).
Now, taking the expectation of both sides of (14) and using the bound on E[ V̂^N_T(x) ] we get
E[V_T(x)] ≥ (1 − γ) E[ Q^N_T(x, b) ] − ln|A| / η − (4/γ)(τ + 2) η (e − 2) Σ_{t=N+1}^{T-N+1} Σ_a E[ |q̂_t(x, a)| ]
            − 2(τ + 2) N (c T |A| + 1),
where we used that E[ Q̂^N_T(x, b) ] = E[ Q^N_T(x, b) ] by (8). Since q_t(x, b) ≤ 2(τ + 2),
E[ Q_T(x, b) ] ≤ E[ Q^N_T(x, b) ] + 2(τ + 2) N.
Combining the above results and using (13) again, then substituting the definition of c yields the
desired result.
Proof of Proposition 1. Under the conditions of the proposition, combining Lemmas 5-6 yields
Σ_{t=1}^T E[ρ^π_t − ρ_t] ≤ 2N + Σ_{x,a} μ^π(x) π(a|x) E[Q_T(x, a) − V_T(x)]
  ≤ (4τ+10) N + ln|A| / η + (2τ+4) T ( γ + (2η/α) ( |A| N (1/γ + 4τ + 6) + (e−2)(2τ+4) ) ),
proving Proposition 1.
Acknowledgments
This work was supported in part by the Hungarian Scientific Research Fund and the Hungarian
National Office for Research and Technology (OTKA-NKTH CNK 77782), the PASCAL2 Network
of Excellence under EC grant no. 216886, NSERC, AITF, the Alberta Ingenuity Centre for Machine
Learning, the DARPA GALE project (HR0011-08-C-0110) and iCore.
References
Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. (2002). The nonstochastic multiarmed
bandit problem. SIAM J. Comput., 32(1):48-77.
Even-Dar, E., Kakade, S. M., and Mansour, Y. (2005). Experts in a Markov decision process. In Saul,
L. K., Weiss, Y., and Bottou, L., editors, Advances in Neural Information Processing Systems 17,
pages 401-408.
Even-Dar, E., Kakade, S. M., and Mansour, Y. (2009). Online Markov decision processes. Mathematics of Operations Research, 34(3):726-736.
Neu, G., György, A., and Szepesvári, C. (2010). The online loop-free stochastic shortest-path problem. In COLT-10.
Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming.
Wiley-Interscience.
Yu, J. Y. and Mannor, S. (2009a). Arbitrarily modulated Markov decision processes. In Joint 48th
IEEE Conference on Decision and Control and 28th Chinese Control Conference. IEEE Press.
Yu, J. Y. and Mannor, S. (2009b). Online learning in Markov decision processes with arbitrarily
changing rewards and transitions. In GameNets'09: Proceedings of the First ICST international
conference on Game Theory for Networks, pages 314-322, Piscataway, NJ, USA. IEEE Press.
Yu, J. Y., Mannor, S., and Shimkin, N. (2009). Markov decision processes with arbitrary reward
processes. Mathematics of Operations Research, 34(3):737-757.
3,367 | 4,049 | Learning invariant features using
the Transformed Indian Buffet Process
Thomas L. Griffiths
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
Tom [email protected]
Joseph L. Austerweil
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Abstract
Identifying the features of objects becomes a challenge when those features can
change in their appearance. We introduce the Transformed Indian Buffet Process
(tIBP), and use it to define a nonparametric Bayesian model that infers features
that can transform across instantiations. We show that this model can identify
features that are location invariant by modeling a previous experiment on human
feature learning. However, allowing features to transform adds new kinds of ambiguity: Are two parts of an object the same feature with different transformations
or two unique features? What transformations can features undergo? We present
two new experiments in which we explore how people resolve these questions,
showing that the tIBP model demonstrates a similar sensitivity to context to that
shown by human learners when determining the invariant aspects of features.
1 Introduction
One way the human brain manages the massive amount of sensory information it receives is by
learning invariants: regularities in its input that do not change across many stimuli sharing some
property of interest. Learning and using invariants is essential to many aspects of cognition and
perception [1]. For example, the retinal image of an object1 changes with viewpoint and location, yet
people can still identify the object. One explanation for this capability is that the visual system recognizes
that the features of an object can occur differently across presentations, but will be transformed
in a few predictable ways. Representing objects in terms of invariant features poses a challenge
for models of feature learning. From a computational perspective, unsupervised feature learning
involves recognizing regularities that can be used to compactly encode the observed stimuli [2].
When features have the same appearance and location, techniques such as factorial learning [3] or
various extensions of the Indian Buffet Process (IBP) [4] have been successful at learning features,
and show some correspondence to human performance [5]. Unfortunately, invariant features do not
always have the same appearance or location, by definition. Despite this, people are able to identify
invariant features (e.g., [6]), meaning that new machine learning methods need to be explored to
fully understand human behavior.
We propose an extension to the IBP called the Transformed Indian Buffet Process (tIBP), which
infers features that vary across objects. Analogous to how the Transformed Dirichlet Process extends
the Dirichlet Process [7], the tIBP associates a parameter with each instantiation of a feature that
determines how the feature is transformed in the given image. This allows for unsupervised learning
of features that are invariant in location, size, or orientation. After defining the generative model for
the tIBP and presenting a Gibbs sampling inference algorithm, we show that this model can learn
visual features that are location invariant by modeling previous behavioral results (from [6]).
¹We talk about objects, images, and scenes having features depending on the context.
Figure 1: Ambiguous representations. (a) Does this object have one feature that contains two vertical
bars or two features that each contain one vertical bar? (b) Are these two shapes the same? The shape
on the left is typically perceived as a square and the shape on the right is typically perceived as a
diamond despite being objectively equivalent after a transformation (a 45 degree rotation).
One new issue that arises from inferring invariant features is that it can be ambiguous whether parts
of an image are the same feature with different transformations or different features. For example,
an object containing two vertical bars has (at least) two representations: a single feature containing
two vertical bars a fixed distance apart, or two features each of which is a vertical bar with its own
translational transformation (see Figure 1 (a)). The tIBP suggests an answer to this question: pick
the smallest feature representation that can encode all observed objects. By presenting objects that
are either the two vertical bars a fixed distance apart that vary in position or two vertical bars varying
independently in location, we confirm that people use sets of objects to infer invariant features in a
behavioral experiment and that the different feature representations lead to different decisions.
Introducing transformational invariance also raises the question of what kinds of transformations
a feature can undergo. A classic demonstration of the difficulty of defining a set of permissible
transformations is the Mach square/diamond [8]. Are the two shapes in Figure 1 (b) the same?
The shape on the right is typically perceived as a diamond while the shape on the left is seen as a
square, despite being identical except for a rotational transformation. We extend the tIBP to include
variables that select the transformations each feature is allowed to undergo. This raises the question
of whether people can infer the permissible transformations of a feature. We demonstrate that this is
the case by showing that people vary in their generalizations from a square to a diamond depending
on whether the square is shown in the context of other squares that vary in rotation. This provides an
interesting new explanation of the Mach square/diamond: People learn the allowed transformations
of features for a given shape, not what transformations of features are allowed over all shapes.
2 Unsupervised feature learning using nonparametric Bayesian statistics
One common approach to unsupervised learning is to explicitly define the generative process that
created the observed data. Latent structure can then be identified by inverting this process using
Bayesian inference. Nonparametric Bayesian models can be used in this way to infer latent structure of potentially unbounded dimensionality [9]. The Indian Buffet Process (IBP) [4] is a stochastic
process that can be used as a prior in nonparametric Bayesian models where each object is represented using an unknown but potentially infinite set of latent features.
2.1 Learning features using the Indian Buffet Process
The standard treatment of feature learning using nonparametric Bayesian models factors the observations into two latent structures: (1) a binary matrix Z that denotes which objects have each feature,
and (2) a matrix Y that represents how the features are instantiated. If there are N objects and K features, then Z is an N × K binary matrix (where object n has feature k if z_{nk} = 1) and Y is a K × D
matrix (where D is the dimensionality of the observed properties of each object, e.g., the number of
pixels in an image). The IBP defines a probability distribution over Z when K → ∞ such that only
a finite number of the columns are non-zero (with prob. 1 for finite N). This distribution is
P(Z) = ( α^{K_+} / Π_{h=1}^{2^N − 1} K_h! ) exp{−α H_N} Π_{k=1}^{K_+} (N − m_k)! (m_k − 1)! / N!   (1)
where α is a parameter affecting the number of non-zero entries in the matrix, K_h is the number
of features with history h (the history is the corresponding column of each feature, interpreted as
a binary number), K_+ is the number of columns with non-zero entries, H_N is the N-th harmonic
number, and m_k is the number of objects that have feature k. Typically, a simple parametric model is
used for Y (Gaussian for generating real-valued observations, or Bernoulli for binary observations).
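For intuition, Z can be drawn from this prior by the usual sequential construction, in which customer n takes existing dish k with probability m_k / n and then draws Poisson(α/n) new dishes; a sketch with our own names:

```python
import numpy as np

def sample_ibp(alpha, N, seed=0):
    """Sample Z from the IBP prior via its sequential 'buffet' construction;
    the induced distribution over equivalence classes of Z is equation (1)."""
    rng = np.random.default_rng(seed)
    rows, counts = [], []                 # counts[k] tracks m_k so far
    for n in range(1, N + 1):
        row = [int(rng.random() < m / n) for m in counts]  # old dishes w.p. m_k / n
        for k, z in enumerate(row):
            counts[k] += z
        k_new = rng.poisson(alpha / n)    # Poisson(alpha / n) brand-new dishes
        row += [1] * k_new
        counts += [1] * k_new
        rows.append(row)
    Z = np.zeros((N, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z
```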
The observed properties of objects can be summarized in an N × D matrix X. The vector x_n
representing the properties of object n is generated based on its features z_n and the matrix Y. This
can be done using a linear-Gaussian likelihood for real-valued properties [4], or a noisy-OR for
binary properties [10]. All of the modeling results in this paper use the noisy-OR, with
p(x_{nd} = 1 | Z, Y) = 1 − (1 − λ)^{z_n · y_d} (1 − ε)                                          (2)
where x_{nd} is the d-th observed property of the n-th object, and y_d is the corresponding column of Y.
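In code, the noisy-OR likelihood of (2) is a one-line computation over the whole data matrix; lam and eps below are our labels for λ and ε:

```python
import numpy as np

def noisy_or_prob(Z, Y, lam, eps):
    """Probability that each pixel is on under equation (2):
    p(x_nd = 1) = 1 - (1 - lam)^(z_n . y_d) * (1 - eps).
    Z is N x K binary (features per object); Y is K x D binary (pixels per feature)."""
    active = Z @ Y                         # z_n . y_d: active features per pixel
    return 1.0 - (1.0 - lam) ** active * (1.0 - eps)
```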
2.2 The Transformed Indian Buffet Process (tIBP)
Following Sudderth et al.'s [7] extension of the Dirichlet Process, the Transformed Indian Buffet
Process (tIBP) allows features to be transformed. The transformations are object-specific, so in
a sense, when an object takes a feature, the feature is transformed with respect to the object. Let
g(Y|β) be a prior probability distribution on Y parameterized by β, φ(η) be a distribution over a set
of transformations parameterized by η, r_n be a vector of transformations of the feature instantiations
for object n, and f(x_n | r_n(Y), z_n, θ) be the data distribution, with θ any other parameters used in
the data distribution. The following generative process defines the tIBP:
Z | α ~ IBP(α)              r_{nk} | η ~ iid φ(η)
Y | β ~ g(β)                x_n | r_n, z_n, Y, θ ~ f(x_n | r_n(Y), z_n, θ)
In this paper, we focus on binary images where the transformations are drawn uniformly at random
from a finite set (though Section 5.1 uses a slightly more complicated distribution). The reason for
this (instead of using a Dirichlet process over transformations) is that we are interested in modeling
invariances in translation, size, or rotation and to model images where a feature occurs in a novel
translation, size, or rotation effectively, it is necessary for them to have non-zero probability. In this
section, we focus on translations. Assuming our data are in {0, 1}^{D_1 × D_2}, a translation shifts the
starting place of its feature in each dimension by r_{nk} = (d_1, d_2). We assume a discrete uniform
prior on shifts: r_{nk} ~ U{0, ..., D_1 − 1} × U{0, ..., D_2 − 1}. Each transformation results in a
new interpretation of the feature, r_n(y_d). The likelihood p(x_{nd} = 1 | Z, Y, R) is then identical to
Equation 2, substituting the vector of transformed feature interpretations r_n(y_d) for y_d.
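A forward-sampling sketch of this translation-only tIBP follows. The wrap-around shifts via np.roll and the Bernoulli(0.3) stand-in for the feature prior g are our implementation choices, not the paper's:

```python
import numpy as np

def sample_tibp_images(Z, Y_shape, lam, eps, seed=0):
    """Forward-sample binary images: each (object, feature) pair gets an
    independent uniform 2-D shift, and pixels turn on via the noisy-OR."""
    rng = np.random.default_rng(seed)
    N, K = Z.shape
    D1, D2 = Y_shape
    Y = (rng.random((K, D1, D2)) < 0.3).astype(int)   # stand-in prior on features
    X = np.zeros((N, D1, D2), dtype=int)
    for n in range(N):
        on_prob = np.full((D1, D2), eps)              # baseline noise
        for k in range(K):
            if Z[n, k]:
                d1, d2 = rng.integers(D1), rng.integers(D2)   # r_nk ~ U x U
                shifted = np.roll(Y[k], shift=(d1, d2), axis=(0, 1))
                # noisy-OR accumulates each transformed feature's contribution
                on_prob = 1.0 - (1.0 - on_prob) * (1.0 - lam * shifted)
        X[n] = (rng.random((D1, D2)) < on_prob).astype(int)
    return X, Y
```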
2.3 Inference by Gibbs sampling
We sample from the posterior distribution on feature assignments Z, feature interpretations Y, and
transformations R given observed properties X using Gibbs sampling [11]. The algorithm consists
of iteratively drawing each variable conditioned on the current values of all other variables.
For features with mk > 0 (after removal of the current value of znk ), we draw znk by marginalizing
over transformations. This avoids a bottleneck in sampling, as otherwise we would have to get lucky
in drawing the right feature and transformation. The marginalization can be done directly, with
p(z_{nk} | Z_{−(nk)}, R_{−(nk)}, Y, X) = Σ_{r_{nk}} p(z_{nk} | Z_{−(nk)}, R, Y, X) p(r_{nk})    (3)
where the first term on the right hand side is proportional to p(x_n | z_n, Y, R) p(z_{nk} | Z_{−(nk)}) (provided by the likelihood and the IBP prior respectively, with Z_{−(nk)} being all of Z except z_{nk}), and
the second term is uniform over all r_{nk}. If z_{nk} = 1, we then sample r_{nk} from
p(r_{nk} | z_{nk} = 1, Z_{−(nk)}, R_{−(nk)}, Y, X) ∝ p(x_n | z_n, Y, R) p(r_{nk})               (4)
where the relevant probabilities are also used in computing Equation 3, and can thus be cached.
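The collapsed update for a single entry z_nk might look as follows; the sketch uses 1-D features and wrap-around shifts to stay short, and p_take stands in for the IBP prior term on z_nk = 1:

```python
import numpy as np

def gibbs_z_step(n, k, x_n, Z, Y, R, lam, eps, shifts, p_take, rng):
    """One Gibbs draw of z_nk using equations (3)-(4)."""
    def lik(znk, r):
        z = Z[n].copy(); z[k] = znk
        rs = list(R[n]); rs[k] = r
        act = np.zeros_like(x_n, dtype=float)
        for j, zj in enumerate(z):
            if zj:
                act += np.roll(Y[j], rs[j])     # transformed feature images
        p_on = 1.0 - (1.0 - lam) ** act * (1.0 - eps)
        return np.prod(np.where(x_n == 1, p_on, 1.0 - p_on))

    per_shift = np.array([lik(1, r) for r in shifts])
    p1 = p_take * per_shift.mean()              # equation (3): uniform p(r_nk)
    p0 = (1.0 - p_take) * lik(0, R[n][k])
    Z[n, k] = int(rng.random() < p1 / (p0 + p1))
    if Z[n, k] == 1:                            # equation (4), reusing cached values
        R[n][k] = shifts[rng.choice(len(shifts), p=per_shift / per_shift.sum())]
    return Z, R
```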
We follow Wood et al.'s [10] method for drawing new features (i.e., features for which currently
m_k = 0). First, we draw an auxiliary variable K_n^{new}, the number of "new" features, from
p(K_n^{new} | x_n, Z_{n,1:(K+K_n^{new})}, Y, R) ∝ p(x_n | Z^{new}, Y, K_n^{new}) P(K_n^{new})   (5)
where Z^{new} is Z augmented with K_n^{new} new columns containing ones in row n. From the IBP, we
know that K_n^{new} ~ Poisson(α/N) [4]. To compute the first term on the right hand side, we need
to marginalize over the possible new feature images and their transformations (Y_{(K+1):(K+K_n^{new})}
and R_{n,(K+1):(K+K_n^{new})}). We assume that the first object to take a feature takes it in its canonical
form and thus it is not transformed. Since the first transformation of a feature and its interpretation
in an image are not identifiable, this assumption is valid and necessary to aid in inference. With
no transformations, drawing the new features in the noisy-OR tIBP model is equivalent to drawing
the new features in the normal noisy-OR IBP model. Thus, we can use the same sampling step for
K_n^{new} as [10]. Let Z^{new} = Z_{n,1:(K+K_n^{new})}. Continuing the previous equation,
p(K_n^{new} | ...) ∝ p(K_n^{new}) Π_d p(x_{nd} | Z^{new}, Y, R, K_n^{new})                      (6)
  = ( α^{K_n^{new}} e^{−α} / K_n^{new}! ) Π_d ( 1 − (1 − λ)^{z_n · r_n(y_d)} (1 − ε) (1 − pλ)^{K_n^{new}} )   (7)
where r_n(y_d) is the vector of transformed feature interpretations along observed dimension d.
Finally, to complete each Gibbs sweep we resample the feature interpretations (Y) given the state
of the other variables. We sample each y_{kd} independently given the state of the other variables, with
p(y_{kd} | X, Z, R, Y_{−(kd)}) ∝ p(X | Y, Z, R) p(y_{kd})                                       (8)
where p(X|Y, Z, R) is the likelihood, given by the noisy-OR function.
2.4 Prediction
To compare the feature representations our model infers to behavioral results, we need to have judgments of the model for new test objects. This is a prediction problem: computing the probability of
a new object x_{N+1} given the set of N observed objects X. We can express this as
P(x_{N+1} | X) = Σ_{Z,Y,R} P(x_{N+1}, Z, Y, R | X) = Σ_{Z,Y,R} P(x_{N+1} | Z, Y, R) P(Z, Y, R | X).   (9)
The Gibbs sampling algorithm gives us samples from P(Z, Y, R | X) that can be used to approximate this sum. However, a further approximation is required to compute P(x_{N+1} | Z, Y, R). For
each sweep of Gibbs sampling, we sample a vector of features z_{N+1} and corresponding transformations r_{N+1} for a new object from their conditional distribution given the values of Z, Y, and
R in that sweep, under the constraint that no new features are generated. We use these samples to
approximate the calculation of P(x_{N+1} | Z, Y, R) by marginalizing over z_{N+1} and r_{N+1}.
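One hedged way to realize this approximation in code (1-D features, our names, and the shift for the new object drawn uniformly):

```python
import numpy as np

def predictive_prob(x_new, samples, lam, eps, shifts, rng):
    """Monte Carlo approximation of P(x_{N+1} | X) from equation (9):
    average the likelihood of the test object over posterior samples (Z, Y),
    drawing features (no new ones) and shifts for the new object."""
    vals = []
    for Z, Y in samples:                    # one (Z, Y) pair per Gibbs sweep
        m = Z.sum(axis=0)                   # feature popularity
        z_new = rng.random(len(m)) < m / (Z.shape[0] + 1.0)  # IBP conditional
        act = np.zeros_like(x_new, dtype=float)
        for k in np.flatnonzero(z_new):
            act += np.roll(Y[k], shifts[rng.integers(len(shifts))])
        p_on = 1.0 - (1.0 - lam) ** act * (1.0 - eps)
        vals.append(np.prod(np.where(x_new == 1, p_on, 1.0 - p_on)))
    return float(np.mean(vals))
```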
3 Demonstration: Learning Translation Invariant Features
In many situations learners need to form a feature representation of a set of objects, and the features
do not reoccur in the exact same location. A common strategy for dealing with this problem is to
pre-process data to build in the relevant invariances, or simply to tabulate the presence or absence of
features without trying to infer them from the data (e.g., [12]). The tIBP provides a way for a learner
to discover that features are translation invariant, and to infer them directly from the data.
Fiser and colleagues [6, 12] showed that when two parts of an image always occur together (forming
a "base pair"), people expect the two parts to occur together as if they had one feature representing
the pair. In Experiments 1 and 2 of [6], participants viewed 144 scenes, where each scene contained
three of the six base pairs in varied spatial location. Each base pair was two of twelve parts in
a particular spatial arrangement. Afterwards, participants chose which of two images was more
familiar: a base pair (in a never-before-seen location) and a pair of parts that occurred together at least
once (but were not a base pair). Participants strongly preferred the base pair. To demonstrate the
ability of the tIBP to infer translation invariant features that are made up of complex parts, we trained
the model on the scenes with the same structure as those shown to participants. The only difference
was to lower the dimensionality of the images by recoding each part to be a 3 by 3 pixel image (the
images from [6] were 1200 by 900 pixels). Figure 2 (a) shows the basic parts (grouped into their
base pairs), while 2 (b) shows one scene given to the model. Figure 2 (c) shows the features inferred
Figure 2: Learning translation invariant features. (a) Each of the parts used to form base pairs, with
base pairs grouped in rectangles. (b) One example scene. (c) Features inferred by the tIBP model
(one sample from the Gibbs sampler). The tIBP infers the base pairs as features.
by the tIBP model (one sample from the Gibbs sampler after 1000 iterations with a 50 iteration
burn-in), given the 144 scenes. The parameters were initialized to α = 0.8, ε = 0.05, λ = 0.99, and
p = 0.4. The model reconstructs the base pairs used to generate the images, and learns that the base
pairs can occur in any location. To compare the model to people's familiarity judgments, we calculated
the model's predictive probability for each base pair in a new location and for a part in that base pair
with another part that co-occurred with it at least once (but not in a base pair). Over all comparisons,
the tIBP model gave higher probability to the image containing the base pair.
4 Experiment 1: One feature or two features transformed?
A new problem arises out of learning features that can transform. Is an image composed of the same
feature multiple times with different instantiations or is it composed with different features that may
or may not be transformed? One way to decide between two possible feature representations for the
object is to pick the features that allow you to encode the object and the other objects it is associated
with. For example, the object from Figure 1 (a) is the first object (from the top left) in the two
sets of objects shown in Figure 3. Figure 3 (a) is the unitized object set. All of the objects in this
set can be represented as translations of one feature that is two vertical bars. Although this object
set can also be described in terms of two features (each of which are vertical bars that can each
translate independently), it is a surprising coincidence that the two vertical bars are always the same
distance apart over all of the objects in the set. Figure 3 (b) is the separate object set. This set is best
represented in terms of two features, where each is a vertical bar.
Using different feature representations leads to different predictions about what other objects should
be expected to be in the set. Representing the objects with a single feature containing two vertical
bars predicts new objects that have vertical bars where the two bars are the same distance apart (New
Unitized). These objects are also expected under the feature representation that is two features that
are each vertical bars; however, any object with two vertical bars is expected (New Separate) ? not
just those with a particular distance apart. Thus, interpreting objects with different feature representations has consequences for how to generalize set membership. In the following experiment,
we test these predictions by asking people after viewing either the unitized or separate object sets
to judge how likely the New Unitized or New Separate objects are to be part of the object set they
viewed. We then compare the behavioral results to the features inferred by the tIBP model and the
predictive probability of each of the test objects given each of the object sets.
Figure 3: Training sets for Experiment 1. (a) Objects made from spatial translations of the unitized
feature. (b) Objects made from spatial translations of two separate features. The number of times
each vertical bar is present is the same in the two object sets.
[Figure 4 shows two bar plots: (a) "Human Experiment 1 Results", human ratings (0-6) per test image, and (b) "Model Predictions for Experiment 1", model activation per test image; bars for the Unitized (Unit) and Separate (Sep) groups over test images Seen Both, Seen Unit, Seen Sep, New Unit, New Sep, 1 Bar, Unit + 1 Bar, 3 Sep Bars, and Diag.]
Figure 4: Results of Experiment 1. (a) Human judgments. The unitized group only rated those
images with two vertical bars close together highly. The separate group rated any image with two
vertical bars highly. (b) The predictions by the tIBP model.
4.1 Methods
A total of 40 participants were recruited online and compensated a small amount. Three participants
were removed for failing to complete the task leaving 19 and 18 participants in the separate and
unitized conditions respectively. There were two phases to the experiment: training and test. In the
training phase, participants read this cover story (adapted from [13]): "Recently a Mars rover found
a cave with a collection of different images on its walls. A team of scientists believes the images
could have been left by an alien civilization. The scientists are hoping to understand the images so
they can find out about the civilization." They then looked through the eight images (which were
either the unitized or separate object set in a random order) and scrolled down to the next section
once they were ready for the test phase. Once they scrolled down to the next section, they were
informed that there were many more images on the cave wall that the rover had not yet had a chance
to record. Their task for the test phase was to rate how likely on a scale from 0 to 6 they believed
the rover would see each image as it explored further through the cave. There were nine test images
presented in a random order: Seen Both (an image in both training sets), Seen Unit (an image that
only the unitized group saw), Seen Sep (an image only the separate group saw), New Unit (an image
valid under the unitized feature set), New Sep (an image valid under the separate feature set), and four
other images that acted as controls (the images are under the horizontal axes of Figure 4).
4.2 Results
Figure 4 (a) shows the average ratings made by participants in each group for the nine test images.
Over the nine test images, the separate group rated the Seen Sep (t(35) = 6.40, p < 0.001) and New
Sep (t(35) = 5.43, p < 0.001) objects higher than the unitized group, but otherwise did not rate any
of the other test images significantly different. As predicted by the above analysis, the unitized group
believed the Mars rover was likely to encounter the two images it observed and the New Unit image
(the unitized feature in a new horizontal position), but did not think it would encounter the other
objects. The separate group rated any image with two vertical bars highly. This indicates that they
represent the images using two features each containing a single vertical bar varying in horizontal
position. Thus, each group of participants infers a set of features invariant over the set of observed
objects (taking into account the different horizontal position of the features in each object).
Figure 4 (b) shows the predictions made by the tIBP model when given each object set. The predictive probabilities for the test objects were calculated using the procedure outlined above (with
the parameter values from Section 3), using 1000 iterations of Gibbs sampling and a 50 iteration
burn-in. A non-linear monotonic transformation of these probabilities was used for visualization,
Figure 5: Stimuli for investigating how different types of invariances are learned for different object
classes. (a) The rotation training set. (b) The size training set. (c) Two new objects for testing the
inferred type of invariance: a New Rotation and a New Size object.
raising the unnormalized probabilities to the power of 0.05 and renormalizing. The Spearman's rank
order correlation between the model's predictions and human judgments is 0.85. Qualitatively, the
model's predictions are good; however, it incorrectly predicts that the separate condition should rate
the 1 Bar test image highly. Unlike the participants in the separate condition, the model does not
infer that each object has two features and so having only one feature is not a good object. This suggests that while learning the feature representation for a set of objects, people also learn the number
of features each object typically has. Investigating how people infer expectations about the number
of features objects have is an interesting phenomenon that demands further study.
5 Experiment 2: Learning the type of invariance
A natural next step for improving the tIBP would be to make the set of transformations φ larger and
thus extend the number of possible invariants that can be learned. Although this may be appropriate
from a machine learning perspective, it is inappropriate for understanding human cognition. Recall the Mach square/diamond example in Figure 1 (b). Many shapes are equivalent when rotated;
however, rotational invariance does not hold for all shapes. This example teaches a counterintuitive
moral: The best approach is not to include as many transformations as possible into the model.
Though rotations are not valid transformations for what people commonly consider to be squares,
they are appropriate for many objects. This suggests that people infer the set of allowable transformations for different classes of objects. Given the three objects in Figure 5 (a) (the rotation set) it
seems clear that the New Rotation object in Figure 5 (c) belongs in the set, but not the New Size
object. The reverse holds for the three objects from the left of Figure 5 (b), the size set. To explore
this phenomenon, we first extend the tIBP to infer the appropriate set of transformations by introducing latent variables for each feature that indicate which transformations it is allowed to use. We
demonstrate that this extension to the tIBP predicts the New Rotation object when given the rotation set
and predicts the New Size object when given the size set, effectively learning the appropriate type
of invariance for a given object class. Finally, we confirm our introspective argument that people
infer the type of invariance appropriate to the observed class of objects.
5.1 Learning invariance type using the tIBP
It is straightforward to modify the tIBP such that the type of transformations allowed on a feature is
inferred as well. This is done by introducing a hidden variable for each feature that indicates the type
of transformation allowed for that feature. Then, the feature transformation is generated conditioned
on this hidden variable from a probability distribution specific to the transformation type.
The experiment in this section is learning whether or not the feature defining a set of objects is
either rotation or size invariant. Formally, we model this using a generative process that is the
same as the tIBP, but introduces the latent variable tk which determines the type of transformation
allowed by feature k. If t_k = 1, then rotational transformations are drawn from φ_rot (which is the
discrete uniform distribution ranging in multiples of fifteen degrees from zero to 45 degrees).
If t_k = 0, then size transformations are drawn from φ_size (which is the discrete uniform distribution
over [3/8, 3/7, 3/5, 5/7, 1, 7/5, 11/7, 5/3, 11/5, 7/3, 11/3]). We assume t_k ~ iid Bernoulli(ν).
The inference algorithm for this extension is the same as for the tIBP except we need to infer the
values of t_k. We draw t_k using a Gibbs sampling scheme while marginalizing over r_{1k}, ..., r_{Nk}:
p(t_k | X, Y, Z, R_{−k}, t_{−k}) ∝ Σ_{r_{nk}} p(x_n | r_{nk}, t_k, Y, Z, R_{−k}, t_{−k}) p(r_k | t_k) p(t_k).   (10)
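Given the two per-type likelihood sums, the Gibbs draw for t_k reduces to a Bernoulli choice; a sketch in which the caller supplies those sums and nu is our label for the Bernoulli prior parameter:

```python
import numpy as np

def gibbs_t_step(rot_loglik, size_loglik, nu, rng):
    """Draw the transformation-type indicator t_k via equation (10).
    rot_loglik and size_loglik are logs of the bracketed sums in (10),
    computed under each transformation set (stand-ins from the caller)."""
    log_p1 = np.log(nu) + rot_loglik            # t_k = 1: rotations
    log_p0 = np.log(1.0 - nu) + size_loglik     # t_k = 0: size changes
    p1 = 1.0 / (1.0 + np.exp(log_p0 - log_p1))
    return int(rng.random() < p1)               # 1 = rotation-invariant feature
```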
[Figure 6 shows two bar plots: (a) "Human Responses to Experiment 2", human ratings (0-6) per test image, and (b) "Model Predictions for Experiment 2", model activation per test image; bars for the Rotation (Rot) and Size conditions over test images Seen Both, Seen Rot, Seen Size, New Rot, and New Size.]
Figure 6: Results of Experiment 2. (a) Responses of human participants. (b) Model predictions.
Prediction is as above, except t_k gives the set of transformations each feature is allowed to take.
5.2 Methods
A total of 40 participants were recruited online and compensated a small amount, with 20 participants in both training conditions (rotation and size). The cover story from Experiment 1 was used.
Participants observed the three objects in their training set and then generalized on a scale from 0 to
6 to five test objects: Same Both (the object that is in both training sets), Same Rot (the last object of
the rotation set), Same Size (the last object of the size set), New Rot and New Size.
5.3 Results
Figure 6 (a) shows the average human judgments. As expected, participants in the rotation condition
generalize more to the New Rot object than the size condition (unpaired t(38) = 4.44, p < 0.001)
and vice versa for the New Size object (unpaired t(38) = 5.34, p < 0.001). This confirms our hypothesis; people infer the appropriate set of transformations (a subset of all transformations) features
are allowed to use for a class of objects. Figure 6 (b) shows the model predictions with parameters
set to α = 2, ε = 0.01, λ = 0.99, p = 0.5, and ν = 0.5, and using the same visualizing technique
as Experiment 1 (with T = 0.005), run for 1000 iterations (with a burn-in of 50 iterations) on the
sets of images (downsampled to 38 by 38 pixels). Qualitatively, the extended tIBP model has nearly
the same pattern of results as the participants in the experiment. The only issue is that it gives
high probability to the Same Size object when given the rotation set, an artifact of downsampling. The
Spearman's rank order correlation between the model's predictions and human judgments is 0.68.
Importantly, the model predicts that only when given the rotation set should participants generalize
to the New Rot object and only when given the size set should they generalize to the New Size object.
6 Conclusions and Future Directions
In this paper, we presented a solution to how people infer feature representations that are invariant
over transformations, and in two behavioral experiments confirmed two predictions of a new model
of human unsupervised feature learning. In addition to these contributions, we proposed a first
sketch of a new computational theory of shape representation: the features representing an object
are transformed relative to the object, and the set of transformations a feature is allowed to undergo
depends on the object's context. In the future, we would like to pursue this theory further, expanding
the account of learning the types of transformations and exploring how the transformations between
features in an object interact (we should expect some interaction due to real-world constraints on
the transformations, e.g., perspective geometry). Finally, we hope to include other facets of visual
perception into our model, like a perceptually realistic prior on feature instantiations and features
relations (e.g., the horizontal bar is always ON TOP OF the vertical bar).
Acknowledgements We thank Karen Schloss, Stephen Palmer, and the Computational Cognitive Science Lab
at Berkeley for discussions and AFOSR grant FA-9550-10-1-0232, and NSF grant IIS-0845410 for support.
References
[1] S. E. Palmer. Vision Science. MIT Press, Cambridge, MA, 1999.
[2] H. Barlow. Unsupervised learning. Neural Computation, 1:295-311, 1989.
[3] Z. Ghahramani. Factorial learning and the EM algorithm. In Advances in Neural Information
Processing Systems, volume 7, pages 617-624, Cambridge, MA, 1995. MIT Press.
[4] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process.
Technical Report 2005-001, Gatsby Computational Neuroscience Unit, 2005.
[5] J. L. Austerweil and T. L. Griffiths. Analyzing human feature learning as nonparametric
Bayesian inference. In Daphne Koller, Yoshua Bengio, Dale Schuurmans, and Léon Bottou,
editors, Advances in Neural Information Processing Systems, volume 21, Cambridge, MA,
2009. MIT Press.
[6] J. Fiser and R. N. Aslin. Unsupervised statistical learning of higher-order spatial structures
from visual scenes. Psychological Science, 12(6), 2001.
[7] E. Sudderth, A. Torralba, W. Freeman, and A. Willsky. Describing visual scenes using transformed Dirichlet processes. In Advances in Neural Information Processing Systems 18, Cambridge, MA, 2006. MIT Press.
[8] E. Mach. The analysis of sensations. Open Court, Chicago, 1914/1959.
[9] M. I. Jordan. Bayesian nonparametric learning: Expressive priors for intelligent systems. In
Heuristics, Probability and Causality: A Tribute to Judea Pearl. College Publications, 2010.
[10] F. Wood, T. L. Griffiths, and Z. Ghahramani. A non-parametric Bayesian method for inferring
hidden causes. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence,
2006.
[11] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721-741,
1984.
[12] G. Orban, J. Fiser, R. N. Aslin, and M. Lengyel. Bayesian learning of visual chunks by human
observers. Proceedings of the National Academy of Sciences, 105(7):2745-2750, 2008.
[13] J. L. Austerweil and T. L. Griffiths. The effect of distributional information on feature learning.
In Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society. 2009.
3,368 | 405 | SEXNET: A NEURAL NETWORK
IDENTIFIES SEX FROM HUMAN FACES
B.A. Golomb, D.T. Lawrence, and T.J. Sejnowski
The Salk Institute
10010 N. Torrey Pines Rd.
La Jolla, CA 92037
Abstract
Sex identification in animals has biological importance. Humans are good
at making this determination visually, but machines have not matched
this ability. A neural network was trained to discriminate sex in human
faces, and performed as well as humans on a set of 90 exemplars. Images
sampled at 30x30 were compressed using a 900x40x900 fully-connected
back-propagation network; activities of hidden units served as input to a
back-propagation "SexNet" trained to produce values of 1 for male and
0 for female faces. The network's average error rate of 8.1% compared
favorably to humans, who averaged 11.6%. Some SexNet errors mimicked
those of humans.
1 INTRODUCTION
People can capably tell if a human face is male or female. Recognizing the sex of
conspecifics is important. While some animals use pheromones to recognize sex, in
humans this task is primarily visual. How is sex recognized from faces? By and
large we are unable to say. Although certain features are nearly pathognomonic for
one sex or the other (facial hair for men, makeup or certain hairstyles for women),
even in the absence of these cues the determination is made; and even in their
presence, other cues may override.
Sex-recognition in faces is thus a prototypical pattern recognition task of the sort
at which humans excel, but which has vexed traditional AI. It appears to follow
no simple algorithm, and indeed is modifiable according to fashion (makeup, hair
etc). While ambiguous cases exist, for which we must appeal to other cues such as
physical build (if visible), voice patterns (if audible), and mannerisms, humans are
3,369 | 4,050 | Accounting for network effects in neuronal responses
using L1 regularized point process models
Ryan C. Kelly*
Computer Science Department
Center for the Neural Basis of Cognition
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Matthew A. Smith
University of Pittsburgh
Center for the Neural Basis of Cognition
Pittsburgh, PA 15213
[email protected]
Robert E. Kass
Department of Statistics
Center for the Neural Basis of Cognition
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Tai Sing Lee
Computer Science Department
Center for the Neural Basis of Cognition
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
Activity of a neuron, even in the early sensory areas, is not simply a function of
its local receptive field or tuning properties, but depends on global context of the
stimulus, as well as the neural context. This suggests the activity of the surrounding neurons and global brain states can exert considerable influence on the activity
of a neuron. In this paper we implemented an L1 regularized point process model
to assess the contribution of multiple factors to the firing rate of many individual units recorded simultaneously from V1 with a 96-electrode "Utah" array. We
found that the spikes of surrounding neurons indeed provide strong predictions of
a neuron's response, in addition to the neuron's receptive field transfer function.
We also found that the same spikes could be accounted for with the local field
potentials, a surrogate measure of global network states. This work shows that accounting for network fluctuations can improve estimates of single trial firing rate
and stimulus-response transfer functions.
1 Introduction
One of the most striking features of spike trains is their variability: the same visual stimulus
does not elicit the same spike pattern on repeated presentations. This variability is often considered
to be 'noise,' meaning that it is due to unknown factors. Identifying these unknowns should enable
better characterization of neural responses. In the retina, it has recently become possible to record
from a nearly complete population of certain types of ganglion cells in a region and identify the
* Data was collected by RCK, MAS and Adam Kohn in his laboratory as a part of a collaborative effort
between the Kohn laboratory at Albert Einstein College of Medicine and the Lee laboratory at Carnegie Mellon
University. This work was supported by a National Science Foundation (NSF) Integrative Graduate Education
and Research Traineeship to RCK (DGE-0549352), National Eye Institute (NEI) grant EY018894 to MAS,
NSF 0635257 and NSF CISE IIS 0713206 to TSL, NIMH grant MH064537 to REK, and NEI grant EY016774
to Adam Kohn. We thank Adam Kohn for collaboration, and we are also grateful to Amin Zandvakili, Xiaoxuan
Jia and Stephanie Wissig for assistance in data collection. We also thank Ben Poole for helpful comments.
correlation structure of this population [1]. However, in cerebral cortex, recording a full population
of individual neurons in a region is currently impossible, and large scale recordings in vivo have
been rare. Cross-trial variability is often removed in order to better reveal the effect of a signal of
interest. Classical methods attempt to explain the activity of neurons only in terms of stimulus filters
or kernels, ignoring sources unrelated to the stimulus.
An increasing number of groups have modeled spiking with point process models [2, 3, 4] to assess
the relative contributions of specific sources. Pillow et al.[3] used these methods to model retinal
ganglion cells, and they showed that the responses of cells could be predicted to a large extent
using the activity of nearby cells. We apply this technique to model spike trains in macaque V1 in
vivo using L1 regularized point process models, which for discrete time become Generalized Linear
Models (GLMs) [5]. In addition to incorporating the spike trains of nearby cells, we incorporated a
meaningful summary of local network activity, the local field potential (LFP), and show that it also
can explain an important part of the neuronal variability.
2
L1 regularized Poisson regression
Fitting an unregularized point process model or GLM is simple with any convex optimization
method, but the kind of neural data we have collected typically has a likelihood function that is
relatively flat near its minimum. This is a data constraint: there simply are not enough spikes to
locate the true parameters. To solve this over-fitting problem, we take the approach of regularizing
the GLMs with an L1 penalty (Lasso) on the log-likelihood function. Here we provide some details
of how we fit L1-regularized GLMs using a Poisson noise assumption on data with large dimensionality. In general, a point process may be represented in terms of a conditional intensity function
and, assuming the data (the spike times) are in sufficiently small time bins, the resulting likelihood
function may be approximated by a Poisson regression likelihood function. For ease of notation we
leave the spiking history and other covariates implicit and write the conditional intensity (firing rate)
at time t as λ(t). We then model the log of λ(t) as a linear summation of other factors:

$$\log \lambda(t) = \sum_{j}^{N} \beta_j\, v_j^{(t)} = \beta V^{(t)} \quad (1)$$

where v_j^(t) is a feature of the data and β_j is the corresponding parameter to be fit, and β = {β_1, ..., β_N}.
We define V to be an N × T matrix (N parameters, T time steps) of variables we believe can impact
the firing rate of a cell, where each column V^(t) of V is (v_1^(t), ..., v_N^(t)), the collection of
observables, including the input stimulus and measured neural responses.

We define y = y_1 ... y_T, with y_t ∈ {0, 1}, as the observed binary spike train for the cell being
modeled, and let μ_t = λ(t). The likelihood of the entire spike train is given by:

$$P(Y = y_1 \ldots y_T) = \prod_{t}^{T} \frac{(\mu_t)^{y_t} \exp(-\mu_t)}{y_t!} \quad (2)$$

We obtain the log-likelihood by substituting Equation 1 into Equation 2 and taking the log:

$$L(\beta) = \sum_{t}^{T} \left( y_t\, \beta V^{(t)} - \exp(\beta V^{(t)}) - \log y_t! \right) \quad (3)$$

Maximizing the likelihood with an L1 penalty is equivalent to finding the β that minimizes the following
cost function:

$$R = -L(\beta) + \sum_{j=1}^{N} \lambda_j |\beta_j| \quad (4)$$
An L1 penalty term drives many of the β_i coefficients to zero. Fitting this equation with an L1
constraint is computationally difficult, because many standard convex optimization algorithms are
only guaranteed to converge for differentiable functions. Friedman et al. [5] discuss how coordinate
descent can efficiently facilitate GLM fitting on functions with L1 penalties, and they provide a
derivation for the logistic regression case. Here we show a derivation for the Poisson regression
case.
We approximate L(β) with L_Q(β), a quadratic Taylor series expansion around the current estimate β̂. Then we proceed to minimize R_Q = −L_Q(β) + Σ_{j=1}^{N} λ_j|β_j|.

Given β̂, we can compute μ̂, the current estimate of μ. A coordinate descent step for coordinate j
amounts to the minimization of R_Q with respect to β_j, for j ∈ 1 ... N:

$$\text{For } \beta_j > 0, \quad \frac{dR_Q}{d\beta_j} = \gamma_j + \beta_j \sum_{t}^{T} \hat{\mu}_t \left(v_j^{(t)}\right)^2 + \lambda_j$$
$$\text{For } \beta_j < 0, \quad \frac{dR_Q}{d\beta_j} = \gamma_j + \beta_j \sum_{t}^{T} \hat{\mu}_t \left(v_j^{(t)}\right)^2 - \lambda_j$$
$$\text{where } \gamma_j = \sum_{t}^{T} v_j^{(t)} \left( -y_t + \hat{\mu}_t - \hat{\mu}_t\, v_j^{(t)} \hat{\beta}_j \right) \quad (5)$$

This is a linear function with positive slope, and a discontinuity at β_j = 0. If −λ_j < γ_j < λ_j,
dR_Q/dβ_j ≠ 0 and the minimum is at this discontinuity, β_j = 0. Otherwise, if |γ_j| ≥ λ_j, dR_Q/dβ_j = 0 when

$$\beta_j = -(\gamma_j - \lambda_j) \Big/ \sum_{t}^{T} \hat{\mu}_t \left(v_j^{(t)}\right)^2, \quad \text{for } \gamma_j \ge \lambda_j \quad (6)$$

$$\beta_j = -(\gamma_j + \lambda_j) \Big/ \sum_{t}^{T} \hat{\mu}_t \left(v_j^{(t)}\right)^2, \quad \text{for } \gamma_j \le -\lambda_j \quad (7)$$

We cyclically repeat these steps on all parameters until convergence.
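As a concrete illustration, the update in Eqs. 5-7 can be sketched in a few lines of numpy. This is a minimal sketch, not the authors' code: it assumes a dense (N, T) design matrix V, a binary spike vector y, and per-coordinate penalties lam, and it refreshes the Taylor expansion point once per cyclic pass.

```python
import numpy as np

def coordinate_descent_pass(beta, V, y, lam):
    """One cyclic pass of the L1 Poisson coordinate descent (Eqs. 5-7).
    beta: (N,) current estimate; V: (N, T) covariates; y: (T,) spikes;
    lam: (N,) per-coordinate L1 penalties."""
    beta_hat = beta.copy()                 # Taylor expansion point beta_hat
    mu_hat = np.exp(beta_hat @ V)          # current rate estimate mu_hat
    for j in range(len(beta)):
        w = np.sum(mu_hat * V[j] ** 2)     # curvature of the quadratic model
        gamma = np.sum(V[j] * (-y + mu_hat - mu_hat * V[j] * beta_hat[j]))
        if abs(gamma) <= lam[j]:
            beta[j] = 0.0                  # minimum sits at the discontinuity
        elif gamma > lam[j]:
            beta[j] = -(gamma - lam[j]) / w    # Eq. 6
        else:
            beta[j] = -(gamma + lam[j]) / w    # Eq. 7
    return beta
```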
2.1 Regularization path
To choose efficiently a penalty that avoids over-fitting, we implement a regularization path algorithm [6, 5]. The algorithm proceeds by computing a sequence of solutions β^(1), β^(2), ..., β^(L) for
penalties λ^(1), λ^(2), ..., λ^(L). We standardize V (i.e. make each observable have mean 0 and standard deviation 1) and include a constant term v_1, which is not penalized. With this normalization, we set all
λ_j equal to the same λ, except there is no penalty for v_1.

In the coordinate descent method, we start with λ^(1) = λ_max = max_j |γ_j|, which is large enough
so that all coefficients are dominated by the regularization, and hence all coefficients are 0 for this
heavy penalty. In determining λ_max, γ_j is computed based on the constant term v_1 only. Initially,
the active set A^(1) is empty, because λ > λ_max. The active set is the set of all coordinates with non-zero coefficients for which the coordinate descent is being performed. As λ is reduced and becomes
smaller than λ_max, more and more non-zero terms will be included in the active set. For step i,
we compute the solution β^(i) using penalty λ^(i) and β^(i−1) as a warm start. As the regularization
parameter λ is decreased, the fitted models begin by under-fitting the data (with large λ) and progress
through the regularization path to over-fitting (with small λ). The above algorithm works much faster
when the active set is smaller, and we can halt the algorithm before over-fitting occurs.
The purpose of this regularization path is to find the best λ. To quantitatively assess the model fits,
we employ an ROC procedure [7]. To compute the ROC curve based on the conditional intensity
function λ(t), we first create a thresholded version of λ(t) which serves as the prediction of spiking:

$$\hat{r}_c(t) = 1 \text{ if } \lambda(t) \ge c \quad (8)$$
$$\hat{r}_c(t) = 0 \text{ if } \lambda(t) < c \quad (9)$$

For each fixed threshold c, a point on the ROC curve is the true positive rate (TPR) versus the false
positive rate (FPR). At each λ in the regularization path, we compute the area under the ROC curve
(AUC) to assess the relative performance of models fit below using a 10-fold cross validation procedure. An alternative and natural metric is the likelihood value, and the peak of the regularization
path was very similar between AUC and likelihood. We focus on AUC results because it was easier
to relate the AUCs from different cells, some of which had very different likelihood values.
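A direct reading of Eqs. 8-9 in code: sweep the threshold c over the predicted rates, trace out (FPR, TPR), and integrate. A minimal sketch, with the endpoint pinning being our own convention:

```python
import numpy as np

def auc_from_rate(lam_t, y):
    """Area under the ROC curve for a predicted rate lam_t against spikes y."""
    thresholds = np.unique(lam_t)[::-1]        # sweep c from high to low
    tpr = np.array([np.mean(lam_t[y == 1] >= c) for c in thresholds])
    fpr = np.array([np.mean(lam_t[y == 0] >= c) for c in thresholds])
    tpr = np.concatenate(([0.0], tpr, [1.0]))  # pin the curve's endpoints
    fpr = np.concatenate(([0.0], fpr, [1.0]))
    return np.trapz(tpr, fpr)
```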
3 Modeling neural data
We report results from the application of Eq. (4) to neural data. The models here contain combinations of stimulus effects (spatio-temporal receptive fields), coupling effects (history terms and past
spikes from other cells), and network effects (given by the LFP). We find that cells had different
degrees of contributions from the different terms, ranging from entirely stimulus-dependent cells to
entirely network-dependent cells.
3.1 Methods
The details of the array insertion have been described elsewhere [8]. Briefly, we inserted the array 0.6 mm into cortex using a pneumatic insertion device [9], which led to recordings confined
mostly to layers 2-3 of parafoveal V1 (receptive fields within 5° of the fovea) in an anesthetized
and paralyzed macaque (sufentanil anesthesia). Signals from each microelectrode were amplified
and bandpass filtered (250 Hz to 7.5 kHz) to acquire spiking data. Waveform segments that exceeded a threshold (set as a multiple of the rms noise on each channel) were digitized (30 kHz) and
sorted off-line. We first performed a principal components analysis by waveform shape [10] and
then refined the output by hand with custom time-amplitude window discrimination software (written in MATLAB; MathWorks). We studied the responses of cells to visual stimuli, presented on a
computer screen. All stimuli were generated with custom software on a Silicon Graphics Octane2
Workstation and displayed at a resolution of 1024 × 768 pixels and frame rate of 100 Hz on a CRT
monitor (stimulus intensities were linearized in luminance). We presented Gaussian white noise
movies, with 8-pixel spatial blocks chosen independently from a Gaussian distribution. The movies
were 5° in width and height, 320 by 320 pixels. The stimuli were all surrounded by a gray field of
average luminance. Frames lasted 4 monitor refreshes, so the duration of each frame of noise was
40 ms. The average noise correlation between pairs of cells was 0.256.
The biggest obstacle for fitting models is the huge dimensionality in the number of parameters and
in the large number of observations. To reduce the problem size, we binned the spiking observations
at 10 ms instead of 1 ms. The procedures we used to reduce the parameter sizes are given in the
corresponding sections below. We used cross validation to estimate the performance of the models
on 10 different test sets. Each test set consisted of 12,000 test observations and 180,000 training
observations. The penalty in the regularization path with the largest average area across all the cross
validation runs was considered the optimal penalty.
The full model λ(t) = λ_STIM + λ_COUP + λ_LFP has the following form:

$$\log \lambda(t) = \sum_{x}\sum_{y}\sum_{\tau} k_{xy\tau}\, s_{xy}(t-\tau) \;+\; \sum_{i}^{M}\sum_{\tau=1}^{100} \alpha_i\, r_i(t-\tau) \;+\; \sum_{i}^{E} \eta_i\, x_i(t) \quad (10)$$

3.2 Stimulus effects
For modeling the stimulus alone we used the form

$$\log \lambda_{STIM}(t) = \sum_{x}\sum_{y}\sum_{\tau} k_{xy\tau}\, s_{xy}(t-\tau) \quad (11)$$

Here, s_xy(t−τ) is an individual feature of the stimulus τ ms before the current observation (time
t). If we were to use pixel intensities over the last 150 ms (15 observations), the 320 × 320 movie
would have 1,536,000 parameters, a number far too large for the fitting method and data. We took the
approach of first restricting the movie to a much smaller region (40 × 40 pixels) chosen using spike-triggered average (STA) maps of the neural responses. Then, we transformed the stimulus space
with overlapping Gaussian bump filters, which are very similar to basis functions. The separation
of the bump centers was 4 pixels spatially in the 40 × 40 pixel space, and 2 time points (20 ms). The
total number of parameters was 10 × 10 × 7 = 700, which is 100 parameters for each of 7 distinct
time points. Thus, s_xy(t−τ) corresponds to the convolution of a small Gaussian bump indexed by
x, y, τ with the recent stimulus frames. Figure 1 shows the regularization path for one example cell.
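One way to build the spatial part of this bump transformation is sketched below. The bump width is our assumption; the text gives only the 4-pixel center spacing:

```python
import numpy as np

def gaussian_bump_basis(size=40, spacing=4, width=4.0):
    """(n_bumps, size*size) matrix of overlapping spatial Gaussian bumps.
    spacing=4 over a 40x40 patch gives the 10x10 = 100 spatial bumps."""
    centers = np.arange(spacing // 2, size, spacing)
    yy, xx = np.mgrid[0:size, 0:size]
    basis = [np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * width ** 2)).ravel()
             for cy in centers for cx in centers]
    return np.array(basis)

# Feature vector for one 40x40 stimulus patch:
# features = gaussian_bump_basis() @ patch.ravel()   # shape (100,)
```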
For each model (11), we chose the λ corresponding to the peak of the regularization path. Figure 2A
shows the k parameters for some example cells transformed back to the original pixel space, with
the corresponding STAs alongside for comparison. The models produce cleaner receptive fields, a
consequence of the L1 regularization. Figure 2D shows the population results for these models. The
distribution of AUC values is generally low, with many cells near chance (.5), and a smaller portion
of cells climbing to 0.6 or higher. This suggests that a linear receptive field may not be appropriate
Figure 1: Example of fitting a GLM with stimulus terms for a single cell. A: For four L1 penalties
(λ), the corresponding {k_i} are shown, with the STA above for reference. For high λ, the model is
sparser. B: The regularization path for this same cell. λ = 172 is the peak of the AUC curve and is
thus the best model by this metric.
Figure 2: Different GLM types. A: 4 example stimulus models, with the STAs shown for reference.
These models correspond to the AUC peaks of their respective regularization paths. B: 3 example
cells fit with spike coupling models. The coefficients are shown with respect to the cell location on
the array. If multiple cells were isolated on the same electrode, the square is divided into 2 or 3 parts.
Nearby electrodes tend to have more strength in their fitted coefficients. C: 3 example cells fit with
LFP models. As in B, nearby electrodes carry more information about spiking. D-F: Population
results for A-C. These are plots of the AUCs for the 57 cells modeled.
for many of these cells. In addition, there is an effect of electrode location, with cells with the
highest AUC located on the left side of the array.
3.3 Spike coupling effects
For the coupling terms, we used the history of firing for the other cells recorded in the array as well
as the history for the cell being modeled. These take the form:

$$\log \lambda_{COUP}(t) = \sum_{i}^{M} \sum_{\tau=1}^{100} \alpha_i\, r_i(t-\tau) \quad (12)$$
with α_i being the coupling strength/coefficient, r_i(t−τ) being the activity of the ith neuron τ
ms earlier, and M being the number of neurons. Thus the influence from a surrounding neuron is
computed based on its spike count in the last 100 ms. As expected, nearby cells generally had the
largest coefficients (Figure 2B), indicating that cells in closer proximity tend to have more correlation in their spike trains. We observed a large range of AUC values for these fits (Figure 2E), from
near chance levels up to .9. There was a significant (p < 10⁻⁶) negative correlation between the
AUC and the number of nonzero coefficients used in the model. Thus, the units which were well
predicted by the other firing in the population also did not require a large number of parameters to
achieve the best AUC possible. Also apparent in the figure is that the relationship between spike
train predictability and array location had the opposite pattern of the stimulus model results, with
units toward the left side of the array generally having smaller AUCs based on the population activity
than units on the right side.

The models described above had one parameter per cell in the population, with each parameter
corresponding to the firing over a 100 ms past window. We also fit models with 3 parameters per
cell in the population, corresponding to the spikes in three non-overlapping temporal epochs (1-20 ms, 21-50 ms, 51-100 ms). These were considered to be independent parameters, and thus the
active set could contain none, some, or all of these 3 parameters for each cell. The mean AUC across
the population was .01 larger with this increased parameter set, but also the mean active set size was
100 elements larger. We did not attempt to model effects on very short timescales, since we binned
the spikes at 10 ms.
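A sketch of how the coupling covariates of Eq. 12 can be assembled from binned spike trains. For simplicity this sketch assumes 1 ms bins (the paper binned at 10 ms, so the window bounds there would scale accordingly); the single 1-100 ms window reproduces the one-parameter-per-cell model, and a three-epoch tuple gives the richer variant:

```python
import numpy as np

def coupling_covariates(spikes, windows=((1, 100),)):
    """Past spike-count covariates for Eq. 12.
    spikes: (M, T) spike counts in 1 ms bins for M surrounding cells;
    windows: past windows in ms, e.g. ((1, 20), (21, 50), (51, 100))."""
    M, T = spikes.shape
    feats = []
    for lo, hi in windows:
        f = np.zeros((M, T))
        for tau in range(lo, hi + 1):          # accumulate counts at each lag
            f[:, tau:] += spikes[:, :-tau]
        feats.append(f)
    return np.vstack(feats)                    # (M * len(windows), T)
```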
3.4 Network models
The spiking of cells in the population serves to help predict spiking very well for many cells, but the
cause of this relationship remains undetermined. The specific timing of spikes may play a large role
in predicting spikes, but alternatively the general network fluctuations could be the primary cause.
To disentangle these possibilities, we can model the network state using the LFP as an estimate:

$$\log \lambda_{LFP}(t) = \sum_{i}^{E} \eta_i\, x_i(t) \quad (13)$$
Here, E is the number of surrounding electrodes, x_i is the LFP value from electrode i, and η_i is the
coefficient of the LFP influence on the spiking activity of the neuron being considered. Figure 2C
shows the model coefficients of several cells when {x_i} are the LFP values at time t. The variance in
the coefficient values falls off with increasing distance, with distant electrodes providing relatively
less information about spiking. Across the population, the AUC values for the cells are almost the
same as in the spike coupling models (Figure 2F), and consequently the spatial pattern of AUC on
the array is almost identical. We also investigated models built using the LFP power in different
frequency bands, and we found that the LFP power in the gamma frequency range (30-80 Hz) produced similar results. With these models, the AUC distributions were remarkably similar to the
models built with spike coupling terms (Figure 2E). The LFP reflects activity over a very broad region, and thus for these data the connectivity between most pairs in the population does not generally
have much more predictive power than the broader network dynamics. This suggests that much
of the power of the spike coupling terms above is a direct result of both cells being driven by the
underlying network dynamics, rather than by a direct connection between the two cells unrelated
Figure 3: Scatter plots of the AUC values for the population under different models and conditions.
A, B: The full model improves upon the individual LFP or stimulus models. C: For most cells, trial
shuffling the spike trains destroys the effectiveness of the models. D: Taking the network state and
cell spikes into account generally yields a larger AUC than λ_PSTH.
to the more global dynamics. Models of spike coupling with more precise timing (< 10 ms) may
reflect information that these LFP terms would fail to capture.
4 Capturing variability and predicting the PSTH
Neuronal firing has long been accepted to have sources of noise that have typically been ignored or
removed. The simplest conception is that each of these cells has an independent source of intrinsic
noise, and to recover the underlying firing rate function we can simply repeat a stimulus many times.
We have shown above that for many cells, a portion of the noise is not independent from the rest
of the network and is related to other cells and the LFP. The population included a distribution of
cells, and the GLMs showed that some cells included mostly network terms, and other cells included
mostly stimulus terms. For most cells, the models included significant contributions from both types
of terms.
From Figure 3A and 3B we can see that the inclusion of network terms does indeed explain more of
the spikes than the stimulus model alone. It is theoretically possible that the LFP or spikes from other
cells are reflecting higher order terms of the stimulus-response relationship that the linear model fails
to capture, and the GLM is harnessing these effects to increase AUC. We performed an AUC analysis
on test data from the same neurons: 120 trials of the same 30 second noise movie. Since the stimulus
was repeated we were able to shuffle trials. Any stimulus information is present on every trial of this
repeated stimulus, and so if the AUC improvement is entirely due to the network terms capturing
stimulus information, there should be no decrease in AUC in the trial-shuffled condition. Figure 3C
shows that this is not the case: trial shuffling reduces AUC values greatly across the population. This
means that the network terms are not merely capturing extraneous stimulus effects.
Kelly et al. [11] show that when taking the network state into account with a very simple GLM, the
signal to noise in the stimulus-response relationship was improved. The PSTH is typically used as a
proxy for the stimulus effects. The idea is that any noise terms are averaged out after many trials to
the same repeated stimulus. For the data set of a single repeated noise movie, we made a comparison
of the AUC values computed from the PSTH to the AUC values due to the models. Recall that the
AUC is computed from an ROC analysis on the thresholded ? function. Here, we define ?PSTH to
be the estimated firing rate given by the PSTH. Thus, it is the same function for every trial to the
repeated stimulus. We compared the AUC values in the same manner as in the model procedure
above, building the ?PSTH function on 90% of the trials and holding out 10% of the trials for the
ROC computation. Figure 3D shows the comparison: for almost every cell the full model is better
at predicting the spikes than the PSTH itself, even though the stimulus component of the model is
merely a linear filter.
If the extra-stimulus variability has truly been averaged out of the PSTH, the stimulus-only model
should do equally well in modeling the PSTH as the full model. To compare the ability for different
models to reconstruct the PSTH, we computed the predicted firing rates (λ) for each of the 120 trials
of the same white noise movie, and the predicted PSTH is simply the average of these 120 temporal
functions. We computed these model predictions for the LFP-only model, stimulus-only model,
and full model. Figure 4A shows examples of these simulated PSTHs for these three conditions.
Figure 4B shows the overall results for the population. The stimulus model predicted the PSTH
Figure 4: A: For an example cell, the ability for different models to predict the PSTH. Taking the
network state into account yields a closer estimate to the PSTH, indicating that the PSTH contains
effects unrelated to the stimulus. B: Population histograms of the PSTH variance explained. Including all the terms yields a dramatic increase in the variance explained across the population.
well for some cells, but for most others the stimulus model alone cannot match the full model's
performance, indicating a corruption of the PSTH by network effects.
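The PSTH-reconstruction comparison can be summarized with a standard variance-explained statistic. The R² definition below is a conventional choice and our assumption, not a formula given in the paper:

```python
import numpy as np

def psth_variance_explained(rate_trials, spike_trials):
    """R^2 between the empirical PSTH and a model-predicted PSTH.
    rate_trials: (n_trials, T) model rates lambda for repeats of one movie;
    spike_trials: (n_trials, T) recorded spike counts for the same repeats."""
    psth = spike_trials.mean(axis=0)           # empirical PSTH
    pred = rate_trials.mean(axis=0)            # predicted PSTH (mean of lambdas)
    ss_res = np.sum((psth - pred) ** 2)
    ss_tot = np.sum((psth - psth.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```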
5 Conclusions
In this paper we have implemented an L1 regularized point process model to account for stimulus
effects, neuronal interactions and network state effects in explaining the spiking activity of V1 neurons. We showed the derivation of a form of L1 regularized Poisson regression, and identified
and implemented a number of computational approaches, including coordinate descent and the regularization path. These are crucial for solving the point process model for in vivo V1 data, and to our
knowledge have not been previously attempted on this scale.
Using this model, we have shown that activity of cells in the surrounding population can account
for a significant amount of the variance in the firing of many neurons. We found that the LFP,
a broad indicator of the synaptic activity of many cells across a large region (the network state),
can account for a large share of these influences from the surrounding cells. This suggests that
these spikes are due to the general network state rather than precise spike timing or individual true
synaptic connections between a pair of cells. This is consistent with earlier observations that the
spiking activity of a neuron is linked to ongoing population activity as measured with optical imaging
[12] and LFP [13]. This link to the state of the local population is an influential force affecting
the variability in a cell?s spiking behavior. Indeed, groups of neurons transition between ?Up?
(depolarized) and ?Down? (hyperpolarized) states, which leads to cycles of higher and lower than
normal firing rates (for review, see [14]). These state transitions occur in sleeping and anesthetized
animals, in cortical slices [15], as well as in awake animal [16, 17] and awake human patients [18,
19], and might be responsible for generating much of the slow time scale correlation. Our additional
experiments showed similar results are found in experiments with natural movie stimulation.
By directly modeling these sources of variability, this method begins to allow us to obtain better
encoding models and more accurately isolate the elements of the stimulus that are truly driving the
cells? responses. By attributing portions of firing to network state effects (as indicated by the LFP),
this approach can obtain more accurate estimates of the underlying connectivity among neurons in
cortical circuits.
References
[1] Jonathon Shlens, Greg D Field, Jeffrey L Gauthier, Martin Greschner, Alexander Sher, Alan M
Litke, and E J Chichilnisky. The structure of large-scale synchronized firing in primate retina.
J Neurosci, 29(15):5022-31, Apr 2009.
[2] Wilson Truccolo, Leigh R Hochberg, and John P Donoghue. Collective dynamics in human
and monkey sensorimotor cortex: predicting single neuron spikes. Nat Neurosci, 13(1):105-11, Jan 2010.
[3] Jonathan W Pillow, Jonathon Shlens, Liam Paninski, Alexander Sher, Alan M Litke, E J
Chichilnisky, and Eero P Simoncelli. Spatio-temporal correlations and visual signalling in
a complete neuronal population. Nature, 454(7207):995-9, Aug 2008.
[4] Robert E. Kass, Valerie Ventura, and Emory N. Brown. Statistical issues in the analysis of
neuronal data. J Neurophysiol, 94:8-25, 2005.
[5] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Regularization paths for generalized
linear models via coordinate descent. Department of Statistics, Jan 2008.
[6] Mee Young Park and Trevor Hastie. L1 regularization path algorithm for generalized linear models. Journal of the Royal Statistical Society: Series B (Statistical Methodology),
69(4):659-677, 2007.
[7] Nicholas G Hatsopoulos, Qingqing Xu, and Yali Amit. Encoding of movement fragments in
the motor cortex. J Neurosci, 27(19):5105-14, May 2007.
[8] Matthew A Smith and Adam Kohn. Spatial and temporal scales of neuronal correlation in
primary visual cortex. J Neurosci, 28(48):12591-603, Nov 2008.
[9] P J Rousche and Richard A Normann. A method for pneumatically inserting an array of
penetrating electrodes into cortical tissue. Ann Biomed Eng, 20(4):413-22, Jan 1992.
[10] Shy Shoham, Matthew R Fellows, and Richard A Normann. Robust, automatic spike sorting
using mixtures of multivariate t-distributions. J Neurosci Methods, 127(2):111-22, Aug 2003.
[11] Ryan C Kelly, Matthew A Smith, Jason M Samonds, Adam Kohn, A B Bonds, J Anthony
Movshon, and Tai Sing Lee. Comparison of recordings from microelectrode arrays and single
electrodes in the visual cortex. J Neurosci, 27(2):261-4, Jan 2007.
[12] M Tsodyks, Tal Kenet, Amiram Grinvald, and A Arieli. Linking spontaneous activity of single
cortical neurons and the underlying functional architecture. Science, 286(5446):1943-6, Dec
1999.
[13] Ian Nauhaus, Laura Busse, Matteo Carandini, and Dario L Ringach. Stimulus contrast modulates functional connectivity in visual cortex. Nat Neurosci, 12(1):70-6, Jan 2009.
[14] Alain Destexhe and Diego Contreras. Neuronal computations with stochastic network states.
Science, 314(5796):85-90, Oct 2006.
[15] Hope A Johnson and Dean V Buonomano. Development and plasticity of spontaneous activity
and up states in cortical organotypic slices. J Neurosci, 27(22):5915-25, May 2007.
[16] David A Leopold, Yusuke Murayama, and Nikos K Logothetis. Very slow activity fluctuations
in monkey visual cortex: implications for functional brain imaging. Cereb Cortex, 13(4):422-33, Apr 2003.
[17] Artur Luczak, Peter Barthó, Stephan L Marguet, György Buzsáki, and Kenneth D Harris.
Sequential structure of neocortical spontaneous activity in vivo. Proc Natl Acad Sci USA,
104(1):347-52, Jan 2007.
[18] Biyu J He, Abraham Z Snyder, John M Zempel, Matthew D Smyth, and Marcus E Raichle.
Electrophysiological correlates of the brain's intrinsic large-scale functional architecture. Proc
Natl Acad Sci USA, 105(41):16039-44, Oct 2008.
[19] Yuval Nir, Roy Mukamel, Ilan Dinstein, Eran Privman, Michal Harel, Lior Fisch, Hagar
Gelbard-Sagiv, Svetlana Kipervasser, Fani Andelman, Miri Y Neufeld, Uri Kramer, Amos
Arieli, Itzhak Fried, and Rafael Malach. Interhemispheric correlations of slow spontaneous
neuronal fluctuations revealed in human sensory cortex. Nat Neurosci, 11(9):1100-8, Sep
2008.
3,370 | 4,051 | Inferring Stimulus Selectivity from the Spatial
Structure of Neural Network Dynamics
Kanaka Rajan
Lewis-Sigler Institute for Integrative Genomics
Carl Icahn Laboratories # 262, Princeton University
Princeton NJ 08544 USA
[email protected]
L. F. Abbott
Department of Neuroscience
Department of Physiology and Cellular Biophysics
Columbia University College of Physicians and Surgeons
New York, NY 10032-2695 USA
[email protected]
Haim Sompolinsky
Racah Institute of Physics
Interdisciplinary Center for Neural Computation
Hebrew University
Jerusalem, Israel
and
Center for Brain Science
Harvard University
Cambridge, MA 02138 USA
[email protected]
Abstract
How are the spatial patterns of spontaneous and evoked population responses related? We study the impact of connectivity on the spatial pattern of fluctuations
in the input-generated response, by comparing the distribution of evoked and intrinsically generated activity across the different units of a neural network. We
develop a complementary approach to principal component analysis in which separate high-variance directions are derived for each input condition. We analyze
subspace angles to compute the difference between the shapes of trajectories corresponding to different network states, and the orientation of the low-dimensional
subspaces that driven trajectories occupy within the full space of neuronal activity.
In addition to revealing how the spatiotemporal structure of spontaneous activity
affects input-evoked responses, these methods can be used to infer input selectivity induced by network dynamics from experimentally accessible measures of
spontaneous activity (e.g. from voltage- or calcium-sensitive optical imaging experiments). We conclude that the absence of a detailed spatial map of afferent
inputs and cortical connectivity does not limit our ability to design spatially extended stimuli that evoke strong responses.
1
1 Motivation
the cortical surface [2, 3]. We argue that it is possible to work in the reverse order, and show that analyzing the distribution of spontaneous activity across the different units in the network can inform
us about the selectivity of evoked responses to stimulus features, even when no apparent sensory
map exists.
Sensory-evoked responses are typically divided into a signal component generated by the stimulus
and a noise component corresponding to ongoing activity that is not directly related to the stimulus.
Subsequent effort focuses on understanding how the signal depends on properties of the stimulus,
while the remaining, irregular part of the response is treated as additive noise. The distinction
between external stochastic processes and the noise generated deterministically as a function of
intrinsic recurrence has been previously studied in chaotic neural networks [4]. It has also been
suggested that internally generated noise is not additive and can be more sensitive to the frequency
and amplitude of the input, compared to the signal component of the response [5 - 8].
In this paper, we demonstrate that the interaction between deterministic intrinsic noise and the spatial
properties of the external stimulus is also complex and nonlinear. We study the impact of network
connectivity on the spatial pattern of input-driven responses by comparing the structure of evoked
and spontaneous activity, and show how the unique signature of these dynamics determines the
selectivity of networks to spatial features of the stimuli driving them.
2 Model description
In this section, we describe the network model and the methods we use to analyze its dynamics. Subsequent sections explore how the spatial patterns of spontaneous and evoked responses are related
in terms of the distribution of the activity across the network. Finally, we show how the stimulus
selectivity of the network can be inferred from its spontaneous activity patterns.
2.1 Network elements
We build a firing rate model of N interconnected units characterized by a statistical description of the
underlying circuitry (as N → ∞, the system 'self averages', making the description independent
of a specific network architecture; see also [11, 12]). Each unit is characterized by an activation
variable x_i ∀ i = 1, 2, ..., N, and a nonlinear response function r_i which relates to x_i through
r_i = R_0 + φ(x_i) where,

$$\phi(x) = \begin{cases} R_0 \tanh\!\left(\dfrac{x}{R_0}\right) & \text{for } x \le 0 \\[4pt] (R_{\max} - R_0)\,\tanh\!\left(\dfrac{x}{R_{\max} - R_0}\right) & \text{otherwise.} \end{cases} \quad (1)$$
Eq. 1 allows us to independently set the maximum firing rate Rmax and the background rate R0
to biologically reasonable values, while retaining a maximum gradient at x = 0 to guarantee the
smoothness of the transition to chaos [4].
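Eq. 1 transcribes directly into code; the R_0 and R_max values below are placeholders, not values from the paper:

```python
import numpy as np

def phi(x, r0=0.1, rmax=100.0):
    """Rate nonlinearity of Eq. 1, so that r = r0 + phi(x).
    Both branches have unit slope at x = 0, keeping the function smooth."""
    return np.where(x <= 0.0,
                    r0 * np.tanh(x / r0),
                    (rmax - r0) * np.tanh(x / (rmax - r0)))
```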
We introduce a recurrent weight matrix with element J_ij equivalent to the strength of the synapse
from unit j → unit i. The individual weights are chosen independently and
randomly from a Gaussian distribution with mean and variance given by [J_ij]_J = 0 and
[J_ij²]_J = g²/N, where square brackets are ensemble averages [9-11, 13]. The control parameter g, which sets the scale of
the synaptic weight variance, is particularly important in determining whether or not the network produces
spontaneous activity with non-trivial dynamics (specifically, g = 0 corresponds to a completely
uncoupled network and a network with g > 1 generates non-trivial spontaneous activity [4, 9, 10]).
The activation variable for each unit x_i is therefore determined by the relation,

$$\tau_r \frac{dx_i}{dt} = -x_i + g \sum_{j=1}^{N} J_{ij}\, r_j + I_i, \quad (2)$$

with the time scale of the network set by the single-neuron time constant τ_r of 10 ms.
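Eq. 2 can be integrated with a simple Euler scheme, reusing phi() from above. This is a sketch under one common normalization convention: we draw J with variance 1/N and apply the gain g once in the update, since the extracted text is ambiguous about where g enters; the paper's exact normalization may differ.

```python
import numpy as np

def simulate_network(N=1000, g=1.5, T=2.0, dt=1e-3, tau_r=0.01,
                     I_ext=None, r0=0.1, seed=0):
    """Euler integration of Eq. 2; I_ext is an optional (steps, N) input."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
    x = rng.normal(0.0, 1.0, size=N)
    steps = int(T / dt)
    rates = np.empty((steps, N))
    for t in range(steps):
        r = r0 + phi(x, r0=r0)                 # r_i = R0 + phi(x_i)
        drive = 0.0 if I_ext is None else I_ext[t]
        x += (dt / tau_r) * (-x + g * (J @ r) + drive)
        rates[t] = r
    return rates
```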
The amplitude I of an oscillatory external input of frequency f is always the same for each unit,
but in some examples shown in this paper, we introduce a neuron-specific phase factor θ_i, chosen
randomly from a uniform distribution between 0 and 2π, such that

$$I_i = I \cos(2\pi f t + \theta_i) \quad \forall\, i = 1, 2, \ldots N. \quad (3)$$
In visually responsive neurons, this mimics a population of simple cells driven by a drifting grating
of temporal frequency f , with the different phases arising from offsets in spatial receptive field
locations. The randomly assigned phases in our model ensure that the spatial pattern of input is not
correlated with the pattern of recurrent connectivity. In our selectivity analysis however (Fig. 3), we
replace the random phases with spatial input patterns that are aligned with network connectivity.
2.2 PCA redux
Principal component analysis (PCA) has been applied profitably to neuronal recordings (see for
example [14]) but these analyses often plot activity trajectories corresponding to different network
states using the fixed principal component coordinates derived from combined activities under all
stimulus conditions. Our analysis offers a complementary approach whereby separate principal
components are derived for each stimulus condition, and the resulting principal angles reveal not
only the difference between the shapes of trajectories corresponding to different network states,
but also the orientation of the low-dimensional subspaces these trajectories occupy within the full
N -dimensional space of neuronal activity.
The instantaneous network state can be described by a point in an N -dimensional space with coordinates equal to the firing rates of the N units. Over time, the network activity traverses a trajectory
in this N -dimensional space and PCA can be used to delineate the subspace in which this trajectory
lies. The analysis is done by diagonalizing the equal-time cross-correlation matrix of network firing
rates given by,
$$D_{ij} = \langle (r_i(t) - \langle r_i \rangle)(r_j(t) - \langle r_j \rangle) \rangle, \quad (4)$$

where ⟨·⟩ denotes a time average. The eigenvalues of this matrix, expressed as a fraction of their sum
(denoted by λ̃_a in this paper), indicate the distribution of variances across the different orthogonal
directions in the activity trajectory.
Spontaneous activity is a useful indicator of recurrent effects, because it is completely determined
by network feedback. We can therefore study the impact of network connectivity on the spatial
pattern of input-driven responses by comparing the spatial structure of evoked and spontaneous activity. In the spontaneous state, there are a number of significant contributors to the total variance.
For instance, for g = 1.5, the leading 10% of the components account for 90% of the total variance
with an exponential taper for the variance associated with higher components. In addition, projections of network activity onto components with smaller variances fluctuate at progressively higher
frequencies, as illustrated in Fig. 1b & d.
Other models of chaotic networks have shown a regime in which an input generates a non-chaotic
network response, even though the network returns to chaotic fluctuations when the external drive
is turned off [5, 16]. Although chaotic intrinsic activity can be completely suppressed by the input in this network state, its imprint can still be detected in the spatial pattern of the non-chaotic
activity. We determine that the perfectly entrained driven state is approximately two-dimensional
corresponding to a circular oscillatory orbit, the projections of which are oscillations ?/2 apart in
phase. (The residual variance in the higher dimensions reflects harmonics arising naturally from the
nonlinearity in the network model).
2.3 Dimensionality of spontaneous and evoked activity
To quantify the dimension of the subspace containing the chaotic trajectory in more detail, we introduce the quantity
$$N_{eff} = \left( \sum_{a=1}^{N} \tilde{\lambda}_a^2 \right)^{-1}, \quad (5)$$

which provides a measure of the effective number of principal components describing a trajectory.
For example, if n principal components share the total variance equally, and the remaining N − n
principal components have zero variance, N_eff = n.
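Eq. 5 in code, computed from the eigenvalue spectrum of the equal-time correlation matrix D of Eq. 4; a minimal sketch:

```python
import numpy as np

def effective_dimension(rates):
    """N_eff (Eq. 5) from a (T, N) array of firing rates."""
    X = rates - rates.mean(axis=0)
    D = (X.T @ X) / X.shape[0]                 # equal-time correlation, Eq. 4
    evals = np.clip(np.linalg.eigvalsh(D), 0.0, None)
    lam_frac = evals / evals.sum()             # eigenvalues as variance fractions
    return 1.0 / np.sum(lam_frac ** 2)
```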
Figure 1: PCA of the chaotic spontaneous state and non-chaotic driven state reached when an input
of sufficiently high amplitude has suppressed the chaotic fluctuations. a) % variance accounted for
by different PCs for chaotic spontaneous activity. b) Projections of the chaotic spontaneous activity
onto PC vectors 1, 10 and 50 (in decreasing order of variance). c) Same as panel a, but for non-chaotic driven activity. d) Projections of periodic driven activity onto PCs 1, 3, and 5. Projections
onto components 2, 4, and 6 are identical but phase shifted by π/2. For this figure, N = 1000,
g = 1.5, f = 5 Hz and I/I_{1/2} = 0.7 for b and d.
Figure 2: The effective dimension Neff of the trajectory of chaotic spontaneous activity as a function
of g for networks with 1000 (blue circles) or 2000 (red circles) neurons.
For the chaotic spontaneous state in the networks we build, Neff increases with g (Fig. 2), due to
the higher amplitude and frequency content of chaotic dynamics for large g. Note that Neff scales
approximately with N , which means that large networks have proportionally higher-dimensional
chaotic activity (compare the two traces within Fig. 2). The fact that the number of activated modes
is only 2% of the system?s total dimensionality even for g as high as 2.5, is another manifestation
of the deterministic nature of the autonomous fluctuations. For comparison, we calculated Neff for a
similar network driven by white noise, with g set below the chaotic transition at g = 1. In this case,
Neff only assumes such low values when g is within a few percent of the critical value of 1.
2.4 Subspace angles
The orbit describing the activity in the non-chaotic driven state consists of a circle in a two-dimensional subspace of the full N-dimensions of neuronal activities. How does this circle align
relative to the subspaces defined by different numbers of principal components that characterize the
spontaneous activity? To overcome the difficulty in visualizing this relationship due to the high dimensionality of both the chaotic subspace as well as the full space of network activities, we utilize
principal angles between subspaces [15].
The first principal angle is the angle between two unit vectors (called principal vectors), one in
each subspace, that have the maximum overlap (dot product). Higher principal angles are defined
recursively as the angles between pairs of unit vectors with the highest overlap that are orthogonal
to the previously defined principal vectors. For two subspaces of dimension d_1 and d_2 defined by
the orthogonal unit vectors V_1^a, for a = 1, 2, ... d_1, and V_2^b, for b = 1, 2, ... d_2, the cosines of the
principal angles are equal to the singular values of the d_1 × d_2 matrix V_1^a · V_2^b. The angle between the
two subspaces is given by,

$$\theta = \arccos\!\left(\min\,\text{singular value of}\; V_1^a \cdot V_2^b\right). \quad (6)$$

The resulting principal angles vary between 0 and π/2, depending on whether the two subspaces
overlap partially or are completely non-overlapping, respectively. The
angle between two subspaces is, by convention, the largest of their principal angles.
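Eq. 6 amounts to an SVD of the cross-projection between the two orthonormalized bases; a sketch:

```python
import numpy as np

def subspace_angle(A, B):
    """Largest principal angle (Eq. 6) between the column spans of A and B.
    A: (N, d1), B: (N, d2) with linearly independent columns."""
    Qa, _ = np.linalg.qr(A)                    # orthonormalize each basis
    Qb, _ = np.linalg.qr(B)
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    s = np.clip(s, -1.0, 1.0)                  # guard against rounding error
    return np.arccos(s.min())                  # convention: the largest angle
```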
2.5 Signal and noise from network responses
To characterize the activity of the entire network, we compute the average autocorrelation function
of each neuronal firing rate, averaged across all the network units, defined as,

$$C(\tau) = \frac{1}{N} \sum_{i=1}^{N} \langle (r_i(t) - \langle r_i \rangle)(r_i(t+\tau) - \langle r_i \rangle) \rangle. \quad (7)$$
The total variance in the fluctuations of the firing rates of the network neurons is denoted by C(0),
whereas C(τ) for non-zero τ provides information about the temporal structure of the network activity. To quantify signal and noise from this measure of network activity, we split the total variance
of the network activity (i.e., C(0)) into oscillatory and chaotic components,

$$C(0) = \sigma_{chaos}^2 + \sigma_{osc}^2. \quad (8)$$

As depicted in the function plotted in Fig. 4a, σ²_osc is defined as the amplitude of the oscillatory part
of the correlation function C(τ). The chaotic variance σ²_chaos is then equal to the difference between
the full variance C(0) and the variance σ²_osc induced by entrainment to the periodic drive. We call
σ_osc the signal amplitude and σ_chaos the noise amplitude, although it should be kept in mind that this
'noise' is generated by the network in a deterministic, not stochastic, manner [5-8].
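One heuristic reading of Eqs. 7-8 in code: compute the population-averaged autocorrelation, then take σ²_osc from the oscillation that C(τ) settles into at lags beyond the chaotic decorrelation time. The choice of tail region is our assumption, not a prescription from the text:

```python
import numpy as np

def signal_noise_split(rates, tail_fraction=0.5):
    """Average autocorrelation C(tau) (Eq. 7) and the split of Eq. 8.
    rates: (T, N) firing rates; returns (C, sigma2_osc, sigma2_chaos)."""
    X = rates - rates.mean(axis=0)
    T = X.shape[0]
    max_lag = T // 2
    C = np.array([np.mean(X[: T - tau] * X[tau:]) for tau in range(max_lag)])
    tail = C[int(tail_fraction * max_lag):]    # chaos assumed decorrelated here
    sigma2_osc = tail.max()                    # amplitude of residual oscillation
    sigma2_chaos = C[0] - sigma2_osc
    return C, sigma2_osc, sigma2_chaos
```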
3 Network effects on the spatial pattern of evoked activity
A mean-field-based study developed for chaotic neural networks has recently shown a phase transition in which chaotic background can be actively suppressed by inputs in a temporal frequency-dependent manner [5-8]. Similar effects have also been shown in discrete-time models and models
with white noise inputs [16, 17] but these models lack the rich dynamics of continuous time models.
In contrast, we show that external inputs do not exert nearly as strong control on the spatial structure of the network response. The phases of the firing-rate oscillations of network neurons are only
partially correlated with the phases of the inputs that drive them, instead appearing more strongly
influenced by the recurrent feedback.
We schematize the irregular trajectory of the chaotic spontaneous activity, described by its leading
principal components in red in Fig. 3a. The circular orbit of the periodic activity (schematically
in blue in 3a) has been rotated by the smaller of its two principal angles. The angle between these
two subspaces (the angle between n̂_chaos and n̂_osc) is then the remaining angle through which the
periodic orbit would have to be rotated to align it with the horizontal plane containing the two-dimensional projection of the chaotic trajectory. In other words, Fig. 3a depicts the angle between
the subspaces defined by the first two principal components of the orbit of periodic driven activity
and the first two principal components of the chaotic spontaneous activity. We ask how this circle is aligned relative to the subspaces defined by different numbers of principal components that
characterize the spontaneous activity.
Figure 3: Spatial pattern of network responses. a) Cartoon of the angle between the subspace defined by the first two components of the chaotic activity (red) and a two-dimensional description
of the periodic orbit (blue curve). b) Relationship between the orientation of periodic and chaotic
trajectories. Angles between the subspace defined by the two PCs of the non-chaotic driven state
and subspaces formed by PCs 1 through m of the chaotic spontaneous activity, where m appears on
the horizontal axis (red dots). Black dots show the analogous angles but with the two-dimensional
subspace defined by the random input phases replacing the subspace of the non-chaotic driven activity. c) Cartoon of the angle between the subspaces defined by two periodic driven trajectories. d)
Effect of input frequency on the orientation of the periodic orbit. Angle between the subspaces defined by the two leading principal components of non-chaotic driven activity at different frequencies
and these two vectors for a 5 Hz input frequency. The results in this figure come from a network
simulation with N = 1000 and I/I_{1/2} = 0.7 and f = 5 Hz for b, I/I_{1/2} = 1.0 for d.
Next, we compare the two-dimensional subspace of the periodic driven orbit to the subspaces defined
by the first m principal components of the chaotic spontaneous activity. This allows us to see how the
orbit lies in the full N -dimensional space of neuronal activities relative to the trajectory of the chaotic
spontaneous activity. The results (Fig. 3b, red dots) show that this angle is close to ?/2 for small m,
equivalent to the angle between two randomly chosen subspaces. However, the value drops quickly
for subspaces defined by progressively more of the leading principal components of the chaotic
activity. Ultimately, this angle approaches zero when all N of the chaotic principal component
vectors are considered, as it must, because these span the entire space of network activities.
In the periodic driven regime, the temporal phases of the different neurons determine the orientation
of the orbit in the space of neuronal activities. The rapidly falling angle between this orbit and the
subspaces defined by spatial patterns dominating the chaotic state (Fig. 3b, red dots) indicates that
these phases are strongly influenced by the recurrent connectivity, that in turn determines the spatial
pattern of the spontaneous activity. As an indication of the magnitude of this effect, we note that the
angles between the random phase sinusoidal trajectory of the input and the same chaotic subspaces
are much larger than those associated with the periodic driven activity (Fig. 3b, black dots).
4 Temporal frequency modulation of spatial patterns
Although recurrent feedback in the network plays an important role in the structure of driven network
responses, the spatial pattern of the activity is not fixed but rather is shaped by a complex interaction between the driving input and intrinsic network dynamics. It is therefore sensitive to both the
amplitude and the frequency of this drive. To see this, we examine how the orientation of the approximately two-dimensional periodic orbit of driven network activity in the non-chaotic regime depends
on input frequency. We use the technique of principal angles described above to examine how the orientation of the oscillatory orbit changes when the input frequency is varied (the angle between n̂_osc1 and n̂_osc2 in Fig. 3c). For comparison purposes, we choose the dominant two-dimensional subspace
of the network oscillatory responses to a driving input at 5 Hz as a reference. We then calculate
the principal angles between this subspace and the corresponding subspaces evoked by inputs with
different frequencies. The result shown in Fig. 3d indicates that the orientation of the orbit for these
driven states rotates as the input frequency changes.
The frequency dependence of the orientation of the evoked response is likely related to the effect
seen in Fig. 1b & d in which higher frequency activity is projected onto higher principal components
of the spontaneous activity. This causes the orbit of the driven activity to rotate in the direction of
higher-order principal components of the spontaneous activity as the input frequency increases. In
addition, we find that the larger the stimulus amplitude, the closer the response phases of the neurons
are to the random phases of their external inputs (results not shown).
5 Network selectivity
We have shown that the response of a network to random-phase input is strongly affected by the
spatial structure of spontaneous activity (Fig. 3b). We now ask if the spatial patterns that dominate
the spontaneous activity in a network correspond to the spatial input patterns to which the network
responds most robustly. In other words, can the spatial structure of an input be designed to maximize
its ability to suppress chaos?
Rather than using random-phase inputs, we align the inputs to our network along the directions
defined by the different principal components of its spontaneous activity. Specifically, the input to
neuron i is set to

I_i = I V_{ia} cos(2πf t),    (9)

where I is the amplitude factor and V_{ia} is the ith component of principal component vector a of the spontaneous activity. The index a is ordered so that a = 1 corresponds to the principal component with the largest variance and a = N, the least.
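As a concrete illustration of Eq. 9, the sketch below builds PC-aligned inputs from a matrix of spontaneous rates; the names rates_spont, amp, and freq, and the regular time grid, are our assumptions.

    import numpy as np

    def pc_aligned_input(rates_spont, a, amp, freq, t):
        """Input I_i(t) = I * V_{ia} * cos(2*pi*f*t) as in Eq. 9.

        rates_spont: (T, N) spontaneous firing rates
        a:           index of the aligned principal component (1-based)
        amp, freq:   input amplitude I and frequency f (Hz)
        t:           (T,) array of times (seconds)
        Returns a (T, N) array of inputs, one column per neuron.
        """
        X = rates_spont - rates_spont.mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        V_a = Vt[a - 1]                    # entries are V_{ia}, i = 1..N
        return amp * np.outer(np.cos(2 * np.pi * freq * t), V_a)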
The signal amplitude when the input is aligned with different leading eigenvectors shows no strong
dependence on a, but the noise amplitude exhibits a sharp transition from no chaotic component for
small a to partial chaos for larger a (Fig. 4b). The critical value of a depends on I, f and g but, in
general, inputs aligned with the directions along which the spontaneous network activity has large
projections are most effective at inducing transitions to the driven periodic state. The point a = 5
corresponds to a phase transition analogous to that seen in other network models [5, 16]. The noise
is therefore more sensitive to the spatial structure of the input compared to the signal.
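One plausible way to operationalize the signal and noise amplitudes (our reading of the Fig. 4a caption, not necessarily the authors' exact procedure): the autocorrelation of a periodically driven, partially chaotic rate contains a persistent oscillation plus a component that decays with lag, so the long-lag oscillation estimates the signal and the excess variance at zero lag estimates the noise.

    import numpy as np

    def signal_noise_amplitudes(rate):
        """Split a rate trace into signal and noise amplitudes via its
        autocorrelation; `rate` is a (T,) array on a regular time grid."""
        x = rate - rate.mean()
        C = np.correlate(x, x, mode='full')[x.size - 1:] / x.size
        tail = C[C.size // 2:]                  # lags long enough for decay
        signal_power = 0.5 * (tail.max() - tail.min())  # oscillation in C
        noise_power = max(C[0] - signal_power, 0.0)     # decaying excess at lag 0
        return np.sqrt(signal_power), np.sqrt(noise_power)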
Suppression of spontaneously generated noise in neural networks does not require stimuli so strong
that they simply overwhelm fluctuations through saturation. Near the onset of chaos, complete
noise suppression can be achieved with relatively low amplitude inputs (compared to the strength of
the internal feedback), especially if the input is aligned with the dominant principal components of
the spontaneous activity.
6 Discussion
Many models of selectivity in cortical circuits rely on knowledge of the spatial organization of afferent inputs as well as cortical connectivity. However, in many cortical areas, such information is not
available. This is analogous to the random character of connectivity in our network which precludes
[Figure 4 appears here. Panel a: an example autocorrelation plotted against τ (s), with the oscillatory ("osc") and chaotic components marked by horizontal lines. Panel b: signal and noise amplitudes (response amplitude / R_{1/2}) versus the principal component aligned to the input (PC 1-8), with an inset on a coarser scale.]
Figure 4: a) An example autocorrelation function. Horizontal lines indicate how we define the signal and noise amplitudes. Parameters used for this figure are I/I_{1/2} = 0.4, g = 1.8 and f = 20 Hz. b) Network selectivity to different spatial patterns of input. Signal and noise amplitudes in the input-evoked response aligned to the leading principal components of the spontaneous activity of the network. The inset shows a larger range on a coarser scale. The results in this figure come from a network simulation with N = 1000, I/I_{1/2} = 0.2 and f = 2 Hz for b.
a simple description of the spatial distribution of activity patterns in terms of topographically organized maps. Our analysis shows that even in cortical areas where the underlying connectivity does
not exhibit systematic topography, dissecting the spatial patterns of fluctuations in neuronal activity
can yield important insight about both intrinsic network dynamics and stimulus selectivity.
Analysis of the spatial pattern of network activity reveals that even though the network connectivity
matrix is full rank, the effective dimensionality of the chaotic fluctuations is much smaller than
network size. This suppression of spatial modes is much stronger than expected, for instance, from
a linear network that low-pass filters a spatiotemporal white noise input. Further, this study extends
a similar effect demonstrated in the temporal domain elsewhere [5 - 8] to show that active spatial
patterns exhibit strong nonlinear interaction between external driving inputs and intrinsic dynamics.
Surprisingly though, even when the input is strong enough to fully entrain the temporal pattern
of network activity, spatial organization of the activity remains strongly influenced by recurrent
dynamics.
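The effective dimensionality referred to here is commonly quantified by the participation ratio of the PCA eigenvalue spectrum; we adopt that standard convention for illustration, though it is not necessarily the paper's exact measure.

    import numpy as np

    def effective_dimension(rates):
        """Participation ratio N_eff = (sum lam_i)^2 / sum(lam_i^2) of
        the PCA eigenvalues lam_i of rate fluctuations (rates: (T, N))."""
        X = rates - rates.mean(axis=0)
        lam = np.linalg.svd(X, compute_uv=False) ** 2 / X.shape[0]
        return lam.sum() ** 2 / (lam ** 2).sum()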
Our results show that experimentally accessible spatial patterns of spontaneous activity (e.g. from
voltage- or calcium-sensitive optical imaging experiments) can be used to infer the stimulus selectivity induced by the network dynamics and to design spatially extended stimuli that evoke strong
responses. This is particularly true when selectivity is measured in terms of the ability of a stimulus
to entrain the neural dynamics. In general, our results indicate that the analysis of spontaneous
activity can provide valuable information about the computational implications of neuronal circuitry.
Acknowledgments
Research of KR and LFA supported by National Science Foundation grant IBN-0235463 and an NIH
Director's Pioneer Award, part of the NIH Roadmap for Medical Research, through grant number
5-DP1-OD114-02. HS was partially supported by grants from the Israel Science Foundation and
the McDonnell Foundation. This research was also supported by the Swartz Foundation through the
Swartz Centers at Columbia, Princeton and Harvard Universities.
References
[1] Hubel, D.H. & Wiesel, T.N. (1962) Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. J. Physiol. 160, 106-154.
[2] Arieli, A., Shoham, D., Hildesheim, R. & Grinvald, A. (1995) Coherent spatiotemporal patterns
of ongoing activity revealed by real-time optical imaging coupled with single-unit recording in the
cat visual cortex. J. Neurophysiol. 73, 2072-2093.
[3] Arieli, A., Sterkin, A., Grinvald, A. & Aertsen, A. (1996) Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses. Science 273, 1868-1871.
[4] Sompolinsky, H., Crisanti, A. & Sommers, H.J. (1988) Chaos in Random Neural Networks.
Phys. Rev. Lett. 61, 259-262.
[5] Rajan, K., Abbott, L.F. & Sompolinsky, H. (2010) Stimulus-dependent Suppression of Chaos in
Recurrent Neural Networks. Phys. Rev. E 82, 011903.
[6] Rajan, K. (2009) Nonchaotic Responses from Randomly Connected Networks of Model Neurons.
Ph.D. Dissertation, Columbia University in the City of New York.
[7] Rajan, K., Abbott, L. F., & Sompolinsky, H. (2010) Stimulus-dependent Suppression of Intrinsic
Variability in Recurrent Neural Networks. BMC Neuroscience, 11, O17: 11.
[8] Rajan, K. (2010) What do Random Matrices Tell us about the Brain? Grace Hopper Celebration
of Women in Computing, published by the Anita Borg Institute for Women & Technology and the
Association for Computing Machinery.
[9] van Vreeswijk, C. & Sompolinsky, H. (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274, 1724-1726.
[10] van Vreeswijk, C. & Sompolinsky, H. (1998) Chaotic balanced state in a model of cortical
circuits. Neural Comput. 10, 1321-1371.
[11] Shriki, O., Hansel, D. & Sompolinsky, H. (2003) Rate models for conductance-based cortical
neuronal networks. Neural Comput. 15, 1809-1841.
[12] Wong, K.-F. & Wang, X.-J. (2006) A Recurrent network mechanism of time integration in
perceptual decisions. J. Neurosci. 26, 1314-1328.
[13] Rajan, K. & Abbott, L.F. (2006) Eigenvalue spectra of random matrices for neural networks.
Phys. Rev. Lett. 97, 188104.
[14] Broome, B.M., Jayaraman, V. & Laurent, G. (2006) Encoding and decoding of overlapping
odor sequences. Neuron 51, 467-482.
[15] Ipsen, I.C.F. & Meyer, C.D. (1995) The angle between complementary subspaces. Amer. Math.
Monthly 102, 904-911.
[16] Bertschinger, N. & Natschläger, T. (2004) Real-time computation at the edge of chaos in recurrent neural networks. Neural Comput. 16, 1413-1436.
[17] Molgedey, L., Schuchhardt, J. & Schuster, H.G. (1992) Suppressing chaos in neural networks
by noise. Phys. Rev. Lett. 69, 3717-3719.
3,371 | 4,052 | A Computational Decision Theory
for Interactive Assistants
Prasad Tadepalli
School of EECS
Oregon State University
Corvallis, OR 97331
[email protected]
Alan Fern
School of EECS
Oregon State University
Corvallis, OR 97331
[email protected]
Abstract
We study several classes of interactive assistants from the points of view of decision theory and computational complexity. We first introduce a class of POMDPs
called hidden-goal MDPs (HGMDPs), which formalize the problem of interactively assisting an agent whose goal is hidden and whose actions are observable.
In spite of its restricted nature, we show that optimal action selection in finite horizon HGMDPs is PSPACE-complete even in domains with deterministic dynamics.
We then introduce a more restricted model called helper action MDPs (HAMDPs),
where the assistant's action is accepted by the agent when it is helpful, and can be
easily ignored by the agent otherwise. We show classes of HAMDPs that are complete for PSPACE and NP along with a polynomial time class. Furthermore, we
show that for general HAMDPs a simple myopic policy achieves a regret, compared to an omniscient assistant, that is bounded by the entropy of the initial goal
distribution. A variation of this policy is shown to achieve worst-case regret that
is logarithmic in the number of goals for any goal distribution.
1 Introduction
Integrating AI with Human Computer Interaction has received significant attention in recent years [8,
11, 13, 3, 2]. In most applications, e.g. travel scheduling, information retrieval, or computer desktop
navigation, the relevant state of the computer is fully observable, but the goal of the user is not, which
poses a difficult problem to the computer assistant. The assistant needs to correctly reason about the
relative merits of taking different actions in the presence of significant uncertainty about the goals of
the human agent. It might consider taking actions that directly reveal the goal of the agent, e.g. by
asking questions to the user. However, direct communication is often difficult due to the language
mismatch between the human and the computer. Another strategy is to take actions that help achieve
the most likely goals. Yet another strategy is to take actions that help with a large number of possible
goals. In this paper, we formulate and study several classes of interactive assistant problems from
the points of view of decision theory and computational complexity. Building on the framework
of decision-theoretic assistance (DTA) [5], we analyze the inherent computational complexity of
optimal assistance in a variety of settings and the sources of that complexity. Positively, we analyze
a simple myopic heuristic and show that it performs nearly optimally in a reasonably pervasive
assistance problem, thus explaining some of the positive empirical results of [5].
We formulate the problem of optimal assistance as solving a hidden-goal MDP (HGMDP), which
is a special case of a POMDP [6]. In a HGMDP, a (human) agent and a (computer) assistant take
actions in turns. The agent's goal is the only unobservable part of the state of the system and does not change throughout the episode. The objective for the assistant is to find a history-dependent policy that maximizes the expected reward of the agent given the agent's goal-based policy and its
goal distribution. Despite the restricted nature of HGMDPs, the complexity of determining if an
HGMDP has a finite-horizon policy of a given value is PSPACE-complete even in deterministic
environments. This motivates a more restricted model called Helper Action MDP (HAMDP), where
the assistant executes a helper action at each step. The agent is obliged to accept the helper action
if it is helpful for its goal and receives a reward bonus (or cost reduction) for doing so. Otherwise,
the agent can continue with its own preferred action without any reward or penalty to the assistant.
We show classes of this problem that are complete for PSPACE and NP. We also show that for the
class of HAMDPs with deterministic agents there are polynomial time algorithms for minimizing
the expected and worst-case regret relative to an omniscient assistant. Further, we show that the
optimal worst case regret can be characterized by a graph-theoretic property called the tree rank of
the corresponding all-goals policy tree and can be computed in linear time.
The main positive result of the paper is to give a simple myopic policy for general stochastic
HAMDPs that has a regret which is upper bounded by the entropy of the goal distribution. Furthermore we give a variant of this policy that is able to achieve worst-case and expected regret that
is logarithmic in the number of goals without any prior knowledge of the goal distribution.
To the best of our knowledge, this is the first formal study of the computational hardness of the problem of decision-theoretically optimal assistance and the performance of myopic heuristics. While
the current HAMDP results are confined to unobtrusively assisting a competent agent, they provide
a strong foundation for analyzing more complex classes of assistant problems, possibly including
direct communication, coordination, partial observability, and irrationality of users.
2 Hidden Goal MDPs
Throughout the paper we will refer to the entity that we are attempting to assist as the agent and
the assisting entity as the assistant. Our objective is to select actions for the assistant in order to
help the agent maximize its reward. The key complication is that the agent's goal is not directly
observable to the assistant, so reasoning about the likelihood of possible goals and how to help
maximize reward given those goals is required. In order to support this type of reasoning we will
model the agent-assistant process via hidden goal MDPs (HGMDPs).
General Model. An HGMDP describes the dynamics and reward structure of the environment
via a first-order Markov model, where it is assumed that the state is fully observable to both
the agent and assistant. In addition, an HGMDP describes the possible goals of the agent and
the behavior of the agent when pursuing those goals. More formally, an HGMDP is a tuple ⟨S, G, A, A′, T, R, π, I_S, I_G⟩, where S is a set of states, G is a finite set of possible agent goals, A is the set of agent actions, A′ is the set of assistant actions, T is the transition function such that T(s, g, a, s′) is the probability of a transition to state s′ from s after taking action a ∈ A ∪ A′ when the agent goal is g, R is the reward function which maps S × G × (A ∪ A′) to real-valued rewards, π is the agent's policy that maps S × G to distributions over A and need not be optimal in any sense, and I_S (I_G) is an initial state (goal) distribution. The dependence of the reward and policy on the goal allows the model to capture the agent's desires and behavior under each goal. The dependence of T on the goal is less intuitive, and in many cases there will be no dependence when T is used only to model the dynamics of the environment. However, we allow goal dependence of T for generality of modeling. For example, it can be convenient to model basic communication actions of the agent as changing aspects of the state, and the result of such actions will often be goal dependent.
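To make the tuple concrete, a minimal container for an HGMDP might look as follows. This is a sketch of our own; the field names and the callable signatures (dictionaries as distributions) are assumptions, not part of the paper.

    from dataclasses import dataclass
    from typing import Callable, Dict, Hashable, List

    State, Goal, Action = Hashable, Hashable, Hashable

    @dataclass
    class HGMDP:
        states: List[State]
        goals: List[Goal]
        agent_actions: List[Action]
        assistant_actions: List[Action]
        # T(s, g, a) -> distribution over next states {s': prob}
        T: Callable[[State, Goal, Action], Dict[State, float]]
        # R(s, g, a) -> immediate reward
        R: Callable[[State, Goal, Action], float]
        # pi(s, g) -> agent's action distribution {a: prob}
        pi: Callable[[State, Goal], Dict[Action, float]]
        I_S: Dict[State, float]   # initial state distribution
        I_G: Dict[Goal, float]    # initial goal distribution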
We consider a finite-horizon episodic problem setting where the agent begins each episode in a state
drawn from I_S with a goal drawn from I_G. The goal, for example, might correspond to a physical
location, a dish that the agent wants to cook, or a destination folder on a computer desktop. The
process then alternates between the agent and assistant executing actions (including noops) in the
environment until the horizon is reached. The agent is assumed to select actions according to ?. In
many domains, a terminal goal state will be reached within the horizon, though in general, goals can
have arbitrary impact on the reward function. The reward for the episode is equal to the sum of the
rewards of the actions executed by the agent and assistant during the episode. The objective of the
assistant is to reason about the HGMDP and observed state-action history in order to select actions
that maximize the expected (or worst-case) total reward of an episode.
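A sketch of the episode protocol just described, built on the HGMDP container above; the sampling helper and the history format (a list of (state, action) pairs visible to the assistant) are our assumptions.

    import random

    def sample(dist):
        """Draw a key from a {key: prob} dictionary."""
        r, acc = random.random(), 0.0
        for k, p in dist.items():
            acc += p
            if r <= acc:
                return k
        return k  # guard against floating-point round-off

    def run_episode(mdp, assistant_policy, horizon):
        """Alternate assistant and agent actions for `horizon` rounds.
        assistant_policy(s, history) -> assistant action."""
        g, s = sample(mdp.I_G), sample(mdp.I_S)   # goal hidden from assistant
        history, total = [], 0.0
        for _ in range(horizon):
            a_assist = assistant_policy(s, history)
            total += mdp.R(s, g, a_assist)
            s = sample(mdp.T(s, g, a_assist))
            history.append((s, a_assist))
            a_agent = sample(mdp.pi(s, g))        # observable to the assistant
            total += mdp.R(s, g, a_agent)
            s = sample(mdp.T(s, g, a_agent))
            history.append((s, a_agent))
        return total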
An example HGMDP from previous work [5] is the doorman domain, where an agent navigates a
grid world in order to arrive at certain goal locations. To move from one location to another the
agent must open a door and then walk through the door. The assistant can reduce the effort for the
agent by opening the relevant doors for the agent. Another example from [1] involves a computer
desktop where the agent wishes to navigate to certain folders using a mouse. The assistant can select
actions that offer the agent a small number of shortcuts through the folder structure.
Given knowledge of the agent's goal g in an HGMDP, the assistant's problem reduces to solving an MDP over assistant actions. The MDP transition function captures both the state change due to the assistant action and also the ensuing state change due to the agent action selected according to the policy π given g. Likewise the reward function on a transition captures the reward due to the
assistant action and the ensuing agent action conditioned on g. The optimal policy for this MDP
corresponds to an optimal assistant policy for g. However, since the real assistant will often have
uncertainty about the agent's goal, it is unlikely that this optimal performance will be achieved.
Computational Complexity. We can view an HGMDP as a collection of |G| MDPs that share the
same state space, where the assistant is placed in one of the MDPs at the beginning of each episode,
but cannot observe which one. Each MDP is the result of fixing the goal component of the HGMDP definition to one of the goals. This collection can be easily modeled as a restricted type of
partially observable MDP (POMDP) with a state space S × G. The S component is completely observable, while the G component is unobservable but only changes at the beginning of each episode (according to I_G) and remains constant throughout an episode. Furthermore, each POMDP transition provides observations of the agent action, which gives direct evidence about the unchanging
G component. From this perspective HGMDPs appear to be a significant restriction over general
POMDPs. However, our first result shows that despite this restriction the worst-case complexity is
not reduced even for deterministic dynamics.
Given an HGMDP M, a horizon m = O(|M|) where |M| is the size of the encoding of M, and a reward target r*, the short-term reward maximization problem asks whether there exists a history-dependent assistant policy that achieves an expected finite-horizon reward of at least r*. For general POMDPs this problem is PSPACE-complete [12, 10], and for POMDPs with deterministic dynamics, it is NP-complete [9]. However, we have the following result.
Theorem 1. Short-term reward maximization for HGMDPs with deterministic dynamics is
PSPACE-complete.
The proof is in the appendix. This result shows that any POMDP can be encoded as an HGMDP
with deterministic dynamics, where the stochastic dynamics of the POMDP are captured via the
stochastic agent policy in the HGMDP. However, the HGMDPs resulting from the PSPACE-hardness
reduction are quite pathological compared to those that are likely to arise in practice. Most importantly, the agent's actions provide practically no information about the agent's goal until the end of
an episode, when it is too late to exploit this knowledge. This suggests that we search for restricted
classes of HGMDPs that will allow for efficient solutions with performance guarantees.
3 Helper Action MDPs
The motivation for HAMDPs is to place restrictions on the agent and assistant that avoid the following three complexities that arise in general HGMDPs: 1) the agent can behave arbitrarily poorly
if left unassisted and as such the agent actions may not provide significant evidence about the goal;
2) the agent is free to effectively "ignore" the assistant's help and not exploit the results of assistive
action, even when doing so would be beneficial; and 3) the assistant actions have the possibility of
negatively impacting the agent compared to not having an assistant. HAMDPs will address the first
issue by assuming that the agent is competent at (approximately) maximizing reward without the
assistant. The last two issues will be addressed by assuming that the agent will always "detect and exploit" helpful actions and that the assistant actions do not hurt the agent.
Informally, the HAMDP provides the assistant with a helper action for each of the agent's actions.
Whenever a helper action h is executed directly before the corresponding agent action a, the agent
receives a bonus reward of 1. However, the agent will only accept the helper action h (by taking a)
and hence receive the bonus, if a is an action that the agent considers to be good for achieving the
goal without the assistant. Thus, the primary objective of the assistant in an HAMDP is to maximize
the number of helper actions that get accepted by the agent. While simple, this model captures much
of the essence of assistance domains where assistant actions cause minimal harm and the agent is
able to detect and accept good assistance when it arises.
An HAMDP is an HGMDP ⟨S, G, A, A′, T, R, π, I_S, I_G⟩ with the following constraints:
• The agent and assistant action sets are A = {a_1, ..., a_n} and A′ = {h_1, ..., h_n}, so that for each a_i there is a corresponding helper action h_i.
• The state space is S = W ∪ (W × A′), where W is a set of world states. States in W × A′ encode the current world state and the previous assistant action.
• The reward function R is 0 for all assistant actions. For agent actions the reward is zero unless the agent selects the action a_i in state (s, h_i), which gives a reward of 1. That is, the agent receives a bonus of 1 whenever its action corresponds to the preceding helper action.
• The assistant always acts from states in W, and T is such that taking h_i in s deterministically transitions to (s, h_i).
• The agent always acts from states in W × A′, resulting in states in W according to a transition function that does not depend on h_i, i.e., T((s, h_i), g, a_i, s′) = T′(s, g, a_i, s′) for some transition function T′.
• Finally, for the agent policy, let π(s, g) be a function that returns a set of actions and P(s, g) be a distribution over those actions. We will view π(s, g) as the set of actions that the agent considers acceptable (or equally good) in state s for goal g. The agent policy always selects a_i after the helper action h_i whenever a_i is acceptable. That is, π((s, h_i), g) = a_i whenever a_i ∈ π(s, g). Otherwise the agent draws an action according to P(s, g).
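The accept-or-ignore protocol in these constraints reduces to a few lines of code. In the sketch below, action_of maps each helper h_i to its paired agent action a_i, pi_set(s, g) is the set π(s, g), and P(s, g) is the fallback distribution; all three names are ours.

    import random

    def hamdp_agent_step(s, g, helper, action_of, pi_set, P):
        """One agent move after the assistant proposes `helper` in world
        state s under (hidden) goal g. Returns (agent_action, bonus)."""
        a = action_of(helper)            # the agent action a_i paired with h_i
        if a in pi_set(s, g):            # helper is acceptable for this goal:
            return a, 1.0                # agent accepts and the bonus is earned
        dist = P(s, g)                   # otherwise ignore the helper ...
        acts, probs = zip(*dist.items())
        return random.choices(acts, probs)[0], 0.0  # ... and draw from P(s, g)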
In a HAMDP, the primary impact of an assistant action is to influence the reward of the following
agent action. The only rewards in HAMDPs are the bonuses received whenever the agent accepts a
helper action. Any additional environmental reward is assumed to be already captured by the agent
policy via π(s, g), which contains actions that approximately optimize this reward.
The HAMDP model can be adapted to both the doorman domain in [5] and the folder prediction
domain from [1]. In the doorman domain, the helper actions correspond to opening doors for the
agent, which reduce the cost of navigating from one room to another. Importantly opening an
incorrect door has a fixed reward loss compared to an optimal assistant, which is a key property
of HAMDPs. In the folder prediction domain, the system proposes multiple folders to save a file,
potentially saving the user a few clicks every time the proposal is accepted.
Despite the apparent simplification of HAMDPs over HGMDPs, somewhat surprisingly the worst
case computational complexity is not reduced.
Theorem 2. Short-term reward maximization for HAMDPs is PSPACE-complete.
The proof is in the appendix. Unlike the case of HGMDPs, we will see that the stochastic dynamics
are essential for PSPACE-hardness. Despite this negative result, the following sections show the
utility of the HAMDP restriction by giving performance guarantees for simple policies and improved
complexity results in special cases. So far, there are no analogous results for HGMDPs.
4 Regret Analysis for HAMDPs
Given an assistant policy π′, the regret of a particular episode is the extra reward that an omniscient assistant with knowledge of the goal would achieve over π′. For HAMDPs the omniscient assistant can always achieve a reward equal to the finite horizon m, because it can always select a helper action that will be accepted by the agent. Thus, the regret of an execution of π′ in a HAMDP is equal to the number of helper actions that are not accepted by the agent, which we will call mispredictions.
From above we know that optimizing regret is PSPACE-hard and thus here we focus on bounding
the expected and worst-case regret of the assistant. We now show that a simple myopic policy is
able to achieve regret bounds that are logarithmic in the number of goals.
Myopic Policy. Intuitively, our myopic assistant policy π̃ will select an action that has the highest probability of being accepted with respect to a "coarsened" version of the posterior distribution over goals. The myopic policy in state s given history H is based on the consistent goal set C(H), which is the set of goals that have non-zero probability with respect to history H. It is straightforward to maintain C(H) after each observation. The myopic policy is defined as:

π̃(s, H) = arg max_a I_G(C(H) ∩ G(s, a))

where G(s, a) = {g | a ∈ π(s, g)} is the set of goals for which the agent considers a to be an acceptable action in state s. The expression I_G(C(H) ∩ G(s, a)) can be viewed as the probability mass of G(s, a) under a coarsened goal posterior which assigns goals outside of C(H) probability zero and otherwise weighs them proportionally to the prior.
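A direct transcription of the myopic rule, together with the bookkeeping for C(H). Here G_fn(s, a) returns G(s, a) = {g : a ∈ π(s, g)}, helper_of pairs a_i with h_i, and ties are broken arbitrarily; these helpers are assumptions for illustration.

    def myopic_helper(s, C, I_G, agent_actions, G_fn, helper_of):
        """Select the helper h_i whose paired agent action a_i
        maximizes I_G(C(H) & G(s, a_i)); C and G_fn(s, a) are sets."""
        def mass(a):
            return sum(I_G[g] for g in C & G_fn(s, a))
        return helper_of(max(agent_actions, key=mass))

    def update_C(C, s, helper, observed_a, action_of, pi_set, P):
        """Shrink the consistent goal set: keep exactly the goals under
        which the observed agent action has non-zero probability."""
        a_paired = action_of(helper)
        keep = set()
        for g in C:
            if a_paired in pi_set(s, g):
                ok = (observed_a == a_paired)   # agent must accept the helper
            else:
                ok = P(s, g).get(observed_a, 0.0) > 0.0
            if ok:
                keep.add(g)
        return keep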
Theorem 3. For any HAMDP the expected regret of the myopic policy is bounded above by the entropy of the goal distribution H(I_G).
Proof. The main idea of the proof is to show that after each misprediction of the myopic policy (i.e. the selected helper action is not accepted by the agent) the uncertainty about the goal is reduced by a constant factor, which will allow us to bound the total number of mispredictions on any trajectory. Consider a misprediction step where the myopic policy selects helper action h_i in state s given history H, but the agent does not accept the action and instead selects a* ≠ a_i. By the definition of the myopic policy we know that I_G(C(H) ∩ G(s, a_i)) ≥ I_G(C(H) ∩ G(s, a*)), since otherwise the assistant would not have chosen h_i. From this fact we now argue that I_G(C(H′)) ≤ I_G(C(H))/2, where H′ is the history after the misprediction. That is, the probability mass under I_G of the consistent goal set after the misprediction is less than half that of the consistent goal set before the misprediction. To show this we will consider two cases: 1) I_G(C(H) ∩ G(s, a_i)) < I_G(C(H))/2, and 2) I_G(C(H) ∩ G(s, a_i)) ≥ I_G(C(H))/2. In the first case, we immediately get that I_G(C(H) ∩ G(s, a*)) < I_G(C(H))/2. Combining this with the fact that C(H′) ⊆ C(H) ∩ G(s, a*), we get the desired result that I_G(C(H′)) ≤ I_G(C(H))/2. In the second case, note that

C(H′) ⊆ C(H) ∩ (G(s, a*) − G(s, a_i)) ⊆ C(H) − (C(H) ∩ G(s, a_i)).

Combining this with our assumption for the second case implies that I_G(C(H′)) ≤ I_G(C(H))/2.
This implies that for any episode, after n mispredictions resulting in a history H_n, I_G(C(H_n)) ≤ 2^{-n}. Now consider an arbitrary episode where the true goal is g. We know that I_G(g) is a lower bound on I_G(C(H_n)), which implies that I_G(g) ≤ 2^{-n}, or equivalently that n ≤ −log(I_G(g)). Thus for any episode with goal g the maximum number of mistakes is bounded by −log(I_G(g)). Using this fact we get that the expected number of mispredictions during an episode with respect to I_G is bounded above by −Σ_g I_G(g) log(I_G(g)) = H(I_G), which completes the proof.
Since H(I_G) ≤ log(|G|), this result implies that for HAMDPs the expected regret of the myopic policy is no more than logarithmic in the number of goals. Furthermore, as the uncertainty about the goal decreases (decreasing H(I_G)), the regret bound improves until we get a regret of 0 when I_G puts all mass on a single goal. This logarithmic bound is asymptotically tight in the worst case.
Theorem 4. There exists a HAMDP such that for any assistant policy the expected regret is at least
log(|G|)/2.
Proof. Consider a deterministic HAMDP such that the environment is structured as a binary tree
of depth log(|G|), where each leaf corresponds to one of the |G| goals. By considering a uniform
goal distribution it is easy to verify that at any node in the tree there is an equal chance that the true
goal is in the left or right sub-tree during any episode. Thus, any policy will have a 0.5 chance of
committing a misprediction at each step of an episode. Since each episode is of length log(|G|), the
expected regret of an episode for any policy is log(|G|)/2.
Resolving the gap between the myopic policy bound and this regret lower bound is an open problem.
Approximate Goal Distributions. Suppose that the assistant uses an approximate goal distribution I′_G instead of the true underlying goal distribution I_G when computing the myopic policy. That is, the assistant selects actions that maximize I′_G(C(H) ∩ G(s, a)), which we will refer to as the myopic policy relative to I′_G. The extra regret for using I′_G instead of I_G can be bounded in terms of the KL-divergence between these distributions, KL(I_G ∥ I′_G), which is zero when I′_G equals I_G.
Theorem 5. For any HAMDP with goal distribution I_G, the expected regret of the myopic policy with respect to distribution I′_G is bounded above by H(I_G) + KL(I_G ∥ I′_G).
The proof is in the appendix. Deriving similar results for other approximations is an open problem.
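The bound of Theorem 5 is easy to evaluate numerically. The sketch below computes H(I_G) + KL(I_G ∥ I′_G) for dictionary-valued distributions, using base-2 logarithms to match the 2^{-n} halving argument; the example distributions are invented.

    import math

    def regret_bound(I_G, I_G_approx):
        """H(I_G) + KL(I_G || I_G_approx), in bits."""
        H = -sum(p * math.log2(p) for p in I_G.values() if p > 0)
        KL = sum(p * math.log2(p / I_G_approx[g])
                 for g, p in I_G.items() if p > 0)
        return H + KL

    # Example: skewed true distribution, assistant assumes uniform over 4 goals.
    I_G = {'g1': 0.5, 'g2': 0.25, 'g3': 0.125, 'g4': 0.125}
    uniform = {g: 0.25 for g in I_G}
    print(regret_bound(I_G, uniform))   # 2.0 = log2(4) expected mispredictions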
A consequence of Theorem 5 is that the myopic policy with respect to the uniform goal distribution has expected regret bounded by log(|G|) for any HAMDP, showing that logarithmic regret can be achieved without knowledge of I_G. This can be strengthened to hold for worst case regret.
Theorem 6. For any HAMDP, the worst case and hence expected regret of the myopic policy with
respect to the uniform goal distribution is bounded above by log(|G|).
Proof. The proof of Theorem 5 shows that the number of mispredictions on any episode is bounded above by −log(I′_G(g)). In our case I′_G(g) = 1/|G|, which shows a worst case regret bound of log(|G|), which also bounds the expected regret of the uniform myopic policy.
5 Deterministic and Bounded Choice Policies
We now consider several special cases of HAMDPs. First, we restrict the agent's policy to be deterministic for each goal, i.e., π(s, g) has at most a single action for each state-goal pair (s, g).
Theorem 7. The myopic policy achieves the optimal expected reward for HAMDPs with deterministic agent policies.
The proof is given in the appendix. We now consider the case where both the agent policy and
the environment are deterministic, and attempt to minimize the worst possible regret compared to
an omniscient assistant who knows the agent's goal. As it happens, this "minimax policy" can be
captured by a graph-theoretic notion of tree rank that generalizes the rank of decision trees [4].
Definition 1. The rank of a rooted tree is the rank of its root node. If a node is a leaf node then rank(node) = 0; else if a node has at least two distinct children c_1 and c_2 with equal highest ranks among all children, then rank(node) = 1 + rank(c_1). Otherwise rank(node) = the rank of the highest-ranked child.
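Definition 1 translates directly into a linear-time recursion over the tree. The node representation below (a dict from node to list of children) is our own choice.

    def tree_rank(children):
        """Rank of a rooted tree per Definition 1. children maps each
        internal node to its list of child nodes; leaves may be absent.
        Assumes a non-empty tree with at least one internal node."""
        def rank(node):
            kids = children.get(node, [])
            if not kids:
                return 0                      # leaf node
            ranks = sorted((rank(c) for c in kids), reverse=True)
            if len(ranks) >= 2 and ranks[0] == ranks[1]:
                return 1 + ranks[0]           # two children tie for highest rank
            return ranks[0]                   # unique highest-ranked child
        all_kids = {c for ks in children.values() for c in ks}
        root = (set(children) - all_kids).pop()
        return rank(root)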
The optimal trajectory tree (OTT) of a HAMDP in deterministic environments is a tree where the
nodes represent the states of the HAMDP reached by the prefixes of optimal action sequences for
different goals starting from the initial state.1 Each node in the tree represents a state and a set of
goals for which it is on the optimal path from the initial state.
Since the agent policy and the environment are both deterministic, there is at most one trajectory per
goal in the tree. Hence the size of the optimal trajectory tree is bounded by the number of goals times
the maximum length of any trajectory, which is at most the size of the state space in deterministic
domains. The following Lemma follows by induction on the depth of the optimal trajectory tree.
Lemma 1. The minimum worst-case regret of any policy for an HAMDP for deterministic environments and deterministic agent policies is equal to the tree rank of its optimal trajectory tree.
Theorem 8. If the agent policy is deterministic, the problem of minimizing the maximum regret in
HAMDPs in deterministic environments is in P.
Proof. We first construct the optimal trajectory tree. We then compute its rank and the optimal
minimax policy using the recursive definition of tree rank in linear time.
The assumption of deterministic agent policy may be too restrictive in many domains. We now
consider HAMDPs in which the agent policies have a constant bound on the number of possible
actions in π(s, g) for each state-goal pair. We call them bounded choice HAMDPs.
Definition 2. The branching factor of a HAMDP is the largest number of possible actions in π(s, g) by the agent in any state, for any goal and any assistant action.
The doorman domain of [5] has a branching factor of 2 since there are at most two optimal actions
to reach any goal from any state.
Theorem 9. Minimizing the worst-case regret in finite horizon bounded choice HAMDPs of a constant branching factor k ≥ 2 in deterministic environments is NP-complete.
The proof is in the appendix. We can also show that minimizing the expected regret for a bounded
k is NP-hard. We conjecture that this problem is also in NP, but this question remains open.
[Footnote 1] If there are multiple initial states, we build an OTT for each initial state. Then the rank would be the maximum of the ranks of all trees.
6 Conclusions and Future Work
In this paper, we formulated the problem of optimal assistance and analyzed its complexity in multiple settings. We showed that the general problem of HGMDP is PSPACE-complete due to the lack
of constraints on the user, who can behave stochastically or even adversarially with respect to the
assistant, which makes the assistant's task very difficult. By suitably constraining the user's actions
through HAMDPs, we are able to reduce the complexity to NP-complete, but only in deterministic
environments with bounded choice agents. More encouragingly, we are able to show that HAMDPs
are amenable to a simple myopic heuristic which has a regret bounded by the entropy of the goal
distribution when compared to the omniscient assistant. This is a satisfying result since optimal
communication of the goal requires as much information to pass from the agent to the assistant. Importantly, this result applies to stochastic as well as deterministic environments and with no bound
on the number of the agent's action choices.
Although HAMDPs are somewhat restricted compared to possible assistantship scenarios one could
imagine, they in fact fit naturally to many domains where the user is on-line, knows which helper
actions are acceptable, and accepts help when it is appropriate to the goal. Indeed, in many domains,
it is reasonable to constrain the assistant so that the agent has the final say on approving the actions
proposed by the assistant. These scenarios range from the ubiquitous auto-complete functions and
Microsoft's infamous Paperclip to more sophisticated adaptive programs such as SmartEdit [7] and TaskTracer [3] that learn assistant policies from users' long-term behaviors. By analyzing
the complexity of these tasks in a more general framework than what is usually done, we shed
light on some of the sources of complexity such as the stochasticity of the environment and the
agent's policy. Many open problems remain, including generalization of these and other results to
more general assistant frameworks, including partially observable and adversarial settings, learning
assistants, and multi-agent assistance.
7 Appendix
Proof of Theorem 1. Membership in PSPACE follows from the fact that any HGMDP can be polynomially encoded as a POMDP, for which policy existence is in PSPACE. To show PSPACE-hardness, we reduce the QSAT problem to the problem of the existence of a history-dependent assistant policy of expected reward ≥ r*.
Let Φ be a quantified Boolean formula ∃x_1 ∀x_2 ∃x_3 ... ∀x_n {C_1(x_1, ..., x_n) ∧ ... ∧ C_m(x_1, ..., x_n)}, where each C_i is a disjunctive clause. For us, each goal g_i is a quantified clause, ∃x_1 ∀x_2 ∃x_3 ... ∀x_n {C_i(x_1, ..., x_n)}. The agent chooses a goal uniformly at random from the set of goals formed from Φ and hides it from the assistant. The states consist of pairs of the form (v, i), where v ∈ {0, 1} is the current value of the goal clause, and i is the next variable to set. The actions of the assistant are to set the existentially quantified variables. The agent simulates setting the universally quantified variables by choosing actions from the set {0, 1} with equal probability. The episode terminates when all the variables are set, and the assistant gets a reward of 1 if the value of the clause is 1 at the end and a reward of 0 otherwise.
Note that the assistant does not get any useful feedback from the agent until it is too late and it either makes a mistake or solves the goal. The best the assistant can do is to find an optimal history-dependent policy that maximizes the expected reward over the goals in Φ. If Φ is satisfiable, then there is an assistant policy that leads to a reward of 1 over all goals and all agent actions, and hence has an expected value of 1 over the goal distribution. If not, then at least one of the goals will not be satisfied for some setting of the universal quantifiers, leading to an expected value < 1.
Proof of Theorem 2. Membership in PSPACE follows easily since HAMDP is a specialization of
HGMDP. The proof of PSPACE-hardness is identical to that of Theorem 1, except that here, instead of the agent's actions, the stochastic environment models the universal quantifiers. The agent accepts
all actions until the last one and sets the variable as suggested by the assistant. After each of the
assistant's actions, the environment chooses a value for the universally quantified variable with equal
probability. The last action is accepted by the agent if the goal clause evaluates to 1, otherwise not.
There is a history-dependent policy whose expected reward is ≥ the number of existential variables if
and only if the quantified Boolean formula is satisfiable.
Proof of Theorem 5. The proof is similar to that of Theorem 3, except that since the myopic policy is with respect to I′_G rather than I_G, on any episode the maximum number of mispredictions n is bounded above by −log(I′_G(g)). Hence, the average number of mispredictions is given by

Σ_g I_G(g) log(1/I′_G(g)) = Σ_g I_G(g) [ log(1/I_G(g)) + log(I_G(g)) − log(I′_G(g)) ]
= −Σ_g I_G(g) log(I_G(g)) + Σ_g I_G(g) log(I_G(g)/I′_G(g)) = H(I_G) + KL(I_G ∥ I′_G).
Proof of Theorem 7. According to the theory of POMDPs, the optimal action in a POMDP maximizes the sum of the immediate expected reward and the value of the resulting belief state (of the assistant) [6]. When the agent policy is deterministic, the initial goal distribution I_G and the history of agent actions and states H fully capture the belief state of the agent. Let V(I_G, H) represent the optimal value of the current belief state. It satisfies the following Bellman equation, where H′ stands for the history after the assistant's action h_i and the agent's action a_j:

V(I_G, H) = max_{h_i} [ E(R((s, h_i), g, a_j)) + V(I_G, H′) ]

Since there is only one agent action a*(s, g) in π(s, g), the subsequent state s′ in H′ and its value do not depend on h_i. Hence the best helper action h* of the assistant is given by:

h*(I_G, H) = arg max_{h_i} E(R((s, h_i), g, a*(s, g))) = arg max_{h_i} Σ_{g ∈ C(H)} I_G(g) I(a_i ∈ π(s, g)) = arg max_{h_i} I_G(C(H) ∩ G(s, a_i))

where C(H) is the set of goals consistent with the current history H, and G(s, a_i) is the set of goals g for which a_i ∈ π(s, g). I(a_i ∈ π(s, g)) is an indicator function which is 1 if a_i ∈ π(s, g). Note that h* is exactly the myopic policy.
Proof of Theorem 9. We first show that the problem is in NP. We build a tree representation of an optimal history-dependent policy for each initial state, which acts as a polynomial-size certificate. Every node in the tree is represented by a pair (s_i, G_i), where s_i is a state and G_i is a set of goals for which the node is on a good path from the root node. We let h_i be the helper action selected in node i. The children of a node in the tree represent possible successor nodes (s_j, G_j) reached by the agent's response to h_i. Note that multiple children can result from the same action because the dynamics is a function of the agent's goal.
To verify that the optimal policy tree is of polynomial size, we note that the number of leaf nodes is upper bounded by |G| · max_g N(g), where N(g) is the number of leaf nodes generated by the goal g and G is the set of all goals. To estimate N(g), we note that by our protocol, for any node (s_i, G_i) where g ∈ G_i and the assistant's action is h_i, if a_i ∈ π(s, g), it will have a single successor that contains g. Otherwise, there is a misprediction, which leads to at most k successors for g. Hence, the number of nodes reached for g grows geometrically with the number of mispredictions. Since there are at most log |G| mispredictions in any such path, N(g) ≤ k^(log_2 |G|) = k^(log_k |G| · log_2 k) = |G|^(log_2 k). Hence the total number of leaf nodes of the tree is bounded by |G|^(1 + log_2 k), and the total number of nodes in the tree is bounded by m|G|^(1 + log_2 k), where m is the number of steps to the horizon. Since this is polynomial in the problem parameters, the problem is in NP.
To show NP-hardness, we reduce 3-SAT to the given problem. We consider each 3-literal clause C_i of a propositional formula Φ as a possible goal. The rest of the proof is identical to that of Theorem 1, except that all variables are set by the assistant. The agent accepts every setting, except possibly the last one, which it reverses if the clause evaluates to 0. Since the assistant does not get any useful information until it makes the clause true or fails to do so, its optimal policy is to choose the assignment that maximizes the number of satisfied clauses so that the mistakes are minimized. The assistant makes a single prediction mistake on the last literal of each clause that is not satisfied by the assignment. Hence, the worst regret on any goal is 0 iff the 3-SAT problem is satisfiable.
Acknowledgments
The authors gratefully acknowledge the support of NSF under grants IIS-0905678 and IIS-0964705.
References
[1] Xinlong Bao, Jonathan L. Herlocker, and Thomas G. Dietterich. Fewer clicks and less frustration: reducing the cost of reaching the right folder. In IUI, pages 178-185, 2006.
[2] J. Boger, P. Poupart, J. Hoey, C. Boutilier, G. Fernie, and A. Mihailidis. A decision-theoretic approach to task assistance for persons with dementia. In IJCAI, 2005.
[3] Anton N. Dragunov, Thomas G. Dietterich, Kevin Johnsrude, Matt McLaughlin, Lida Li, and Jon L. Herlocker. TaskTracer: A desktop environment to support multi-tasking knowledge workers. In Proceedings of IUI, 2005.
[4] Andrzej Ehrenfeucht and David Haussler. Learning decision trees from random examples. Information and Computation, 82(3):231-246, September 1989.
[5] A. Fern, S. Natarajan, K. Judah, and P. Tadepalli. A decision-theoretic model of assistance. In Proceedings of the International Joint Conference on AI, 2007.
[6] Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99-134, 1998.
[7] Tessa A. Lau, Steven A. Wolfman, Pedro Domingos, and Daniel S. Weld. Programming by demonstration using version space algebra. Machine Learning, 53(1-2):111-156, 2003.
[8] H. Lieberman. User interface goals, AI opportunities. AI Magazine, 30(2), 2009.
[9] M. L. Littman. Algorithms for Sequential Decision Making. PhD thesis, Brown University, Providence, RI, 1996.
[10] Martin Mundhenk. The Complexity of Planning with Partially-Observable Markov Decision Processes. PhD thesis, Friedrich-Schiller-Universität, 2001.
[11] K. Myers, P. Berry, J. Blythe, K. Conley, M. Gervasio, D. McGuinness, D. Morley, A. Pfeffer, M. Pollack, and M. Tambe. An intelligent personal assistant for task and time management. AI Magazine, 28(2):47-61, 2007.
[12] C. Papadimitriou and J. Tsitsiklis. The complexity of Markov Decision Processes. Mathematics of Operations Research, 12(3):441-450, 1987.
[13] M. Tambe. Electric Elves: What went wrong and why. AI Magazine, 29(2), 2008.
Probabilistic Belief Revision with Structural Constraints
Peter B. Jones
MIT Lincoln Laboratory
Lexington, MA 02420
[email protected]
Venkatesh Saligrama
Dept. of ECE
Boston University
Boston, MA 02215
[email protected]
Sanjoy K. Mitter
Dept. of EECS
MIT
Cambridge, MA 02139
[email protected]
Abstract
Experts (human or computer) are often required to assess the probability of uncertain events. When a collection of experts independently assess events that are
structurally interrelated, the resulting assessment may violate fundamental laws of
probability. Such an assessment is termed incoherent. In this work we investigate
how the problem of incoherence may be affected by allowing experts to specify
likelihood models and then update their assessments based on the realization of a
globally-observable random sequence.
Keywords: Bayesian Methods, Information Theory, consistency
1 Introduction
Coherence is perhaps the most fundamental property of probability estimation. Coherence will be
formally defined later, but in essence a coherent probability assessment is one that exhibits logical
consistency. Incoherent assessments are those that cannot be correct, that are at odds with the
underlying structure of the space, and so cannot be extended to a complete probability distribution [1,
2]. From a decision theoretic standpoint, treating assessments as odds, incoherent assessments result
in guaranteed losses to assessors. They are dominated strategies, meaning that for every incoherent
assessment there is a coherent assessment that uniformly improves the outcome for the assessors.
Despite this fact, expert assessments (human and machine) are vulnerable to incoherence [3].
Previous authors have used coherence as a tool for fusing distributed expert assessments [4, 5, 6].
The focus has been on static coherence in which experts are polled once about some set of events
and the responses are then fused through a geometric projection. Besides relying on arbitrary scoring functions to define the "right" projection, such analyses do not address dynamically evolving
assessments or forecasts. This paper is, to our knowledge, the first attempt to analyze the problem of
coherence under Bayesian belief dynamics. The importance of dynamic coherence is demonstrated
in the following example.
Consider two uncertain events A_1 and A_2 where A_1 ⊆ A_2 (e.g., A_2 = {NASDAQ ↑ tomorrow} and A_1 = {NASDAQ ↑ tomorrow ≥ 10 points}). To be coherent, a probability assessment must obey the relation P(A_1) ≤ P(A_2). For the purposes of the example, suppose the initial belief is P(A_1) = P(A_2) = 0.5, which is coherent. Next, suppose there is some binary random variable Z that is believed to correlate with the underlying event (e.g., Z = 1{Google ↑ today}, where 1{·} is an indicator function). The believed dependence between Z and A_i is captured by a likelihood model P(Z|A_i) that gives the probability of observing Z when event A_i does or does not occur. For the example, suppose Z = 0 and the believed likelihoods are P(Z=0|A_1) = 1 and P(Z=0|Ā_1) = 0.5, and P(Z=0|A_2) = P(Z=0|Ā_2) = 0.5, where Ā is the complement of A. There is nothing inherently irrational in this belief model, but when Bayes' rule is applied, it gives P(A_1|Z=0) = 0.67 > P(A_2|Z=0) = 0.5. The belief update has introduced incoherence!

*This work was sponsored by the U.S. Government under Air Force Contract FA8721-05-C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Government.
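The arithmetic above is easy to verify directly. The following minimal sketch (Python; the prior and likelihood values are the ones assumed in the example, and bayes_update is our own illustrative helper, not code from any referenced system) reproduces the incoherent posterior:

    def bayes_update(prior, lik_A, lik_not_A):
        """One step of Bayes' rule for a binary event A, given an observed z."""
        num = lik_A * prior
        return num / (num + lik_not_A * (1.0 - prior))

    # P(Z=0|A1) = 1, P(Z=0|A1_bar) = 0.5; P(Z=0|A2) = P(Z=0|A2_bar) = 0.5
    p_A1 = bayes_update(0.5, 1.0, 0.5)   # 0.666...
    p_A2 = bayes_update(0.5, 0.5, 0.5)   # 0.5
    # A1 is contained in A2, yet p_A1 > p_A2: the updated assessment is incoherent.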
1.1 Motivating Example
Concerned with their network security, BigCorps wants to purchase an Intrusion Detection and Prevention System (IDPS). They have two options, IDPS1 and IDPS2 . IDPS1 detects both distributed
denial of service (DDoS) attacks and port scan (PS) attacks, while IDPS2 detects only DDoS attacks.
While studying the NIST guide to IDPSs [7], BigCorps' CTO notes the recommendation that "organizations should consider using multiple types of IDPS technologies to achieve more comprehensive and accurate detection and prevention of malicious activity." Following the NIST recommendation,
BigCorps purchases both IDPSs and sets them to work monitoring network traffic.
One morning while reading the output reports of the two detectors, an intrepid security analyst
witnesses an interesting behavior. IDPS2 is registering an attack probability of 0.1 while detector
IDPS1 is reading an attack probability of 0.05. Since the threats detected by IDPS1 are a superset
of those detected by IDPS2 , the probability assigned by IDPS1 should always be larger than that
assigned by IDPS2 . The dilemma faced by our analyst is how to reconcile the logically incoherent
outputs of the two detectors. Particularly, how to ascribe probabilities in a way that is logically
consistent, but still retains as much as possible the expert assessments of the detectors.
1.2 Contributions of this Work
This work introduces the concept of dynamic coherence, one that has not been previously treated
in the literature. We suggest two possible forms of dynamic coherence and analyze the relationship
between them. They are implemented and compared in a simple network modeling simulation.
1.3 Previous Work
Previous authors have analyzed coherence with respect to contingent (or conditional) probability
assessments [8, 9, 10]. These developments attempt to determine conditions characterizing coherent
subjective posteriors. While likelihood models are a form of contingent probability assessment, this
paper goes further in analyzing the impact of these assessments on coherent belief dynamics.
In [11, 12] a different form of conditional coherence is suggested which derives from coherence of a
joint probability distribution over observations and states of nature. It is shown that for this stronger
form of conditional coherence, certain specially structured event sets and likelihood functions will
produce coherent posterior assessments.
Logical consistency under non-Bayesian belief dynamics has been previously analyzed. In [13]
conditions for invariance under permutations of the observational sequence under Jeffrey's rule are developed. A comparison of Jeffrey's rule and Pearl's virtual evidence method is made in [14] which shows that the virtual evidence method implicitly assumes the conditions of Jeffrey's update rule.
2 Model
Let Ω = {ω_1, ω_2, ...} be an event space and (Ω, F) a measurable space. Let Φ : Ω → Φ be a measurable random variable; consider Φ = {φ_1, φ_2, ..., φ_J} to be the set of all possible "states of the world." Also, let Z_i : Ω → Z be a sequence of measurable random variables; consider Z_i to be the sequence of observations, with Z = {z^1, z^2, ..., z^K} and K < ∞. Let Ω_Φ (resp. Ω_{Z_i}) be the pre-image of Φ (resp. Z_i). Since the random variables are assumed measurable, Ω_Φ and Ω_{Z_i} are measurable sets (i.e., elements of F), as are their countable intersections and unions.
For i = 1, 2, ..., N, let Ã_i be a subset of Φ, let A_i = ∪_{φ∈Ã_i} Ω_φ, and let A = {A_i}. We call elements of A events under assessment. The characteristic matrix Φ for the events under assessment is defined as
\[
\Phi_{ij} = \begin{cases} 1 & \phi_j \in \tilde A_i \\ 0 & \text{o.w.} \end{cases}
\]
An individual probability assessment P : A → [0,1] maps each event under assessment to the unit interval. In an abuse of notation, we will let P ≜ [P(A_1) P(A_2) ⋯ P(A_N)]^T be a (joint) probability assessment. A coherent assessment (i.e., one that is logically consistent) can be described geometrically as lying in the convex hull of the columns of Φ, meaning ∃λ ∈ [0,1]^J s.t. Σ_i λ_i = 1 and P = Φλ.
We now consider a sequence of probability assessments Pn defined as follows: Pn is the result of
a belief revision process based on an initial probability assessment P0 , a likelihood model pn (z|A),
and a sequence of observations Z1 , Z2 , . . . , Zn . A likelihood model pn (z|A) is a pair of probability
mass functions over the observations: one conditioned on A and the other conditioned on Ā (where Ā denotes the complement of A). We will make the simplifying assumption that the likelihood model is static, i.e., p_n(z|A) = p(z|A) and p_n(z|Ā) = p(z|Ā) for all n.
In this paper we assume belief revision dynamics governed by Bayes' rule, i.e.,
\[
P_{n+1} = \frac{p(z_{n+1}\,|\,A)\,P_n}{p(z_{n+1}\,|\,A)\,P_n + p(z_{n+1}\,|\,\bar A)\,(1-P_n)} = \frac{1}{1 + \frac{p(z_{n+1}\,|\,\bar A)}{p(z_{n+1}\,|\,A)} \cdot \frac{1-P_n}{P_n}}.
\]
To simplify development, denote p(z = z^i | A_j) = α_{ij} and p(z = z^i | Ā_j) = β_{ij}, and assume that for each j there exists some i s.t. α_{ij} ≠ β_{ij} (i.e., each event has at least one informative observation) and that α_{ij} ∈ (0,1), β_{ij} ∈ (0,1) for all i, j (i.e., no observation determines absolutely whether any event obtains). Then by induction the posterior probability of event A_j after n observations is:
\[
P_n(A_j) = \frac{1}{1 + \frac{1-P_0}{P_0} \prod_{i=1}^{K} \left(\frac{\beta_{ij}}{\alpha_{ij}}\right)^{n_i}} \qquad (1)
\]
where n_i is the number of observations z^i.
3 Probability convergence for single assessors
For a single assessor revising his estimate of the likelihood of event A, let the probability model be given by p(z = z^i | A) = α_i and p(z = z^i | Ā) = β_i. It is convenient to rewrite (1) in terms of the ratio ρ_i = n_i/n and, for simplicity, assuming P_0 = 0.5 (although the analysis holds for general P_0 ∈ (0,1)). Substituting yields
\[
P_n = \frac{1}{1 + \left[\prod_{i=1}^{K}\left(\frac{\beta_i}{\alpha_i}\right)^{\rho_i}\right]^n} \qquad (2)
\]
Note that 1) ρ is the empirical distribution over the observations, and so converges almost surely (a.s.) to the true generating distribution, and 2) the convergence properties of P_n are determined by the quantity between the square brackets in (2). Specifically, let
\[
L_\infty = \lim_{n\to\infty} \prod_{i=1}^{K}\left(\frac{\beta_i}{\alpha_i}\right)^{\rho_i}.
\]
L_∞ is commonly referred to as the likelihood ratio, familiar from classical binary hypothesis testing. Since ρ converges a.s. and the function is continuous, L_∞ exists a.s. If L_∞ < 1 then P_n → 1; if L_∞ > 1 then P_n → 0; if L_∞ = 1 then P_n → 1/2.
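As an illustration (not from the paper; the likelihood values below are assumed), the criterion can be checked numerically: compute the log of L_∞ for a candidate generating distribution and compare against a Monte Carlo run of the recursion (1).

    import numpy as np

    rng = np.random.default_rng(0)
    alpha = np.array([0.5, 0.3, 0.2])    # hypothetical p(z^i | A)
    beta  = np.array([0.2, 0.3, 0.5])    # hypothetical p(z^i | A_bar)

    def log_L_inf(rho):
        # log L_inf = sum_i rho_i log(beta_i / alpha_i); P_n -> 1 iff this is < 0
        return float(np.sum(rho * np.log(beta / alpha)))

    print(log_L_inf(alpha))              # = -D(alpha || beta) < 0 when A obtains

    # Monte Carlo check of the posterior recursion, with data drawn from alpha:
    P = 0.5
    for z in rng.choice(3, size=2000, p=alpha):
        P = alpha[z] * P / (alpha[z] * P + beta[z] * (1 - P))
    print(P)                             # close to 1, as predicted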
3.1 Matched likelihood functions
Assume that the likelihood model is both infinitely precise and infinitely accurate, meaning that when A (resp. Ā) obtains, observations are generated i.i.d. according to α (resp. β).

Assume that A obtains; then L_∞ = ∏_{i=1}^{K} (β_i/α_i)^{α_i} a.s. Let L̄_∞ = log L_∞, which in this case yields
\[
\bar L_\infty = \log \prod_{i=1}^{K}\left(\frac{\beta_i}{\alpha_i}\right)^{\alpha_i} = \sum_{i=1}^{K} \alpha_i \log\frac{\beta_i}{\alpha_i} = -D(\alpha\|\beta) < 0
\]
where all relations hold a.s., D(·‖·) is the relative entropy [15], and the last inequality follows since by assumption α ≠ β. Since L̄_∞ < 0 ⟺ L_∞ < 1, this implies that when the true generating distribution is α, P_n → 1 a.s.

Similarly, when Ā obtains, we have
\[
\bar L_\infty = \log \prod_{i=1}^{K}\left(\frac{\beta_i}{\alpha_i}\right)^{\beta_i} = \sum_{i=1}^{K} \beta_i \log\frac{\beta_i}{\alpha_i} = D(\beta\|\alpha) > 0
\]
and P_n → 0 a.s.
3.2 Mismatched likelihood functions
Now consider the situation when the expert-assessed likelihood model is incorrect. Assume the observation generating distribution is ρ = P(Z_i = z) where ρ ≠ α and ρ ≠ β. In this case, L̄_∞ = Σ_i ρ_i log(β_i/α_i). We define
\[
T(\rho) = -\bar L_\infty = \sum_i \rho_i \log\frac{\alpha_i}{\beta_i} \qquad (3)
\]
Then the probability simplex over the observation space Z can be partitioned into two sets: P_0 = {ρ | T(ρ) < 0} and P_1 = {ρ | T(ρ) > 0}. By the a.s. convergence of the empirical distribution, ρ ∈ P_i ⟹ P_n → i. (The boundary set {ρ | T(ρ) = 0} represents an unstable equilibrium in which P_n a.s. converges to 1/2.)
The problem of mismatched likelihood functions is similar to composite hypothesis testing (c.f.
[16] and references therein). Composite hypothesis testing attempts to design tests to determine the
truth or falsity of a hypothesis with some ambiguity in the underlying parameter space. Because of
this ambiguity, each hypothesis Hi corresponds not to a single distribution, but to a set of possible
distributions. In the mismatched likelihood function problem, composite spaces are formed due to
the properties of Bayes? rule for a specific likelihood model. A corollary of the above result is that if
Hi ? Pi then Bayes? rule (under the specific likelihood model) is an asymptotically perfect detector.
4 Multiple Assessors with Structural Constraints
In Section 3 we analyzed convergence properties of a single event under assessment. Considering
multiple events introduces the challenge of defining a dynamic concept of coherence for the assessment revision process. In this section we suggest two possible definitions of dynamic coherence and
consider some of the implications of these definitions.
4.1 Step-wise Coherence
We first introduce a step-wise definition of coherence, and derive equivalency conditions for the
special class of 2-expert likelihood models.
Definition 1 Under the Bayes' rule revision process, a likelihood model p(z|A) is step-wise coherent (SWC) if P_n ∈ convhull(Φ) ⟹ P_{n+1} ∈ convhull(Φ) for all z ∈ Z.
Essentially this definition says that if the posterior assessment process is coherent at any time, it
will remain coherent perpetually, independent of observation sequence. We derive necessary and
sufficient conditions for SWC for the characteristic matrix given by
\[
\Phi = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \end{bmatrix} \qquad (4)
\]
Generalizations of this development are possible for any Φ ∈ {0,1}^{2×|Φ|}.

Note that under the characteristic matrix given by (4) a model is SWC iff P_n(A_1) ≥ P_n(A_2) for all n and all coherent P_0. Proceeding inductively, assume P_n is marginally SWC, i.e., P_n(A_1) = P_n(A_2) = γ. Due to the continuity of the update rule, a model will be SWC iff it is coherent at the margins. For coherence, for any i we must have P_{n+1}(A_1) ≥ P_{n+1}(A_2). By substitution into (1),
\[
\frac{\alpha_{i1}\gamma}{\alpha_{i1}\gamma + \beta_{i1}(1-\gamma)} \ge \frac{\alpha_{i2}\gamma}{\alpha_{i2}\gamma + \beta_{i2}(1-\gamma)}
\]
or, equivalently, $\frac{\alpha_{i1}}{\alpha_{i1}\gamma+\beta_{i1}(1-\gamma)} \ge \frac{\alpha_{i2}}{\alpha_{i2}\gamma+\beta_{i2}(1-\gamma)}$. By monotonicity,
\[
\frac{\alpha_{i1}\gamma + \beta_{i1}(1-\gamma)}{\alpha_{i2}\gamma + \beta_{i2}(1-\gamma)} \in \left[ \min\left\{\frac{\alpha_{i1}}{\alpha_{i2}}, \frac{\beta_{i1}}{\beta_{i2}}\right\}, \max\left\{\frac{\alpha_{i1}}{\alpha_{i2}}, \frac{\beta_{i1}}{\beta_{i2}}\right\} \right],
\]
and since the inequality must hold for every γ ∈ (0,1), the interval degenerates: for Φ given by (4), the model will be SWC iff α_{i1}/α_{i2} ≥ β_{i1}/β_{i2} for all i, or (rearranging)
\[
\frac{\alpha_{i1}}{\beta_{i1}} \ge \frac{\alpha_{i2}}{\beta_{i2}} \qquad \forall i \qquad (5)
\]

4.2 Asymptotic coherence
While it is relatively simple to characterize coherent models in the two-assessor case, in general SWC is difficult to check. As such, we introduce a simpler condition:

Definition 2 A likelihood model p(z|A) is weakly asymptotically coherent (WAC) if for all observation generating distributions ρ s.t. lim_{n→∞} P_n ∈ {0,1}^N, ∃i s.t. lim_{n→∞} P_n = Φe_i a.s., where e_i is the ith unit vector.

Lemma 1 Step-wise coherence implies weak asymptotic coherence.

Assume that a model is SWC but not WAC. Since it is not WAC, there exists a ρ s.t. Z_i drawn i.i.d. from ρ a.s. results in P_n → P_∞ where P_∞ ∈ {0,1}^N is not a column of Φ and is therefore not coherent. Since this holds regardless of initial conditions, assume the process is initialized coherently. Then, by a separating hyperplane argument, there must exist some n (and therefore some z_n) s.t. P_n ∈ convhull(Φ) and P_{n+1} ∉ convhull(Φ). This contradicts the assumption that the likelihood model is SWC. Therefore any SWC model is also WAC. We demonstrate that the converse is not true by counterexample in Section 4.2.2.
4.2.1 WAC for static models
Analogous to (3), we define
\[
T_j(\rho) = \sum_i \rho_i \log\frac{\alpha_{ij}}{\beta_{ij}}. \qquad (6)
\]
For a given ρ, define the logical vector r(ρ) as
\[
r_j(\rho) = \begin{cases} 0 & T_j(\rho) < 0 \\ 1 & T_j(\rho) > 0 \\ \text{undet} & T_j(\rho) = 0 \end{cases} \qquad (7)
\]
Lemma 2 A likelihood model is WAC if ∀ρ s.t. lim_{n→∞} P_n ∈ {0,1}^N, ∃i s.t. r(ρ) = Φe_i.

Define the sets P_i = {ρ | r(ρ) = Φe_i}. Lemma 2 states that for a WAC likelihood model, {P_i} partitions the simplex (excluding unstable edge events) into sets of distributions s.t. ρ ∈ P_i ⟹ P_n → Φe_i. It is simple to show that the sets P_i are convex, and by definition the boundaries between sets are linear.
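A small sketch of (6)-(7) (illustrative only; the likelihood matrices below are assumed, with rows indexed by observations and columns by events) computes T_j(ρ) and the logical vector r(ρ):

    import numpy as np

    alpha = np.array([[0.7, 0.6],
                      [0.3, 0.4]])       # hypothetical p(z^i | A_j)
    beta  = np.array([[0.4, 0.5],
                      [0.6, 0.5]])       # hypothetical p(z^i | Aj_bar)

    def T(rho):
        return rho @ np.log(alpha / beta)          # vector of T_j(rho)

    def r(rho, tol=1e-12):
        t = T(rho)
        out = np.where(t > tol, 1.0, 0.0)
        out[np.abs(t) <= tol] = np.nan             # 'undet' boundary case
        return out

    print(r(np.array([0.8, 0.2])))                 # e.g. [1. 1.]

WAC can then be checked numerically by verifying that every attainable value of r(ρ) off the boundaries matches some column of the characteristic matrix.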
4.2.2 Motivating Example Revisited
Consider again the motivating example of the two IDPSs from Section 1.1. Recall that IDPS1 detects
a superset of the attacks detected by IDPS2 , and so this scenario conforms to the characteristic matrix
analyzed in Section 4.1. Therefore (5) gives necessary and sufficient conditions for SWC, while (7)
gives necessary and sufficient conditions for WAC.
Suppose that both the IDPSs use the interval between packet arrivals as their observation and assume the learned likelihood models for the two IDPSs happen to be geometrically distributed with
parameters x1 , x2 (when an attack is occurring) and y1 , y2 (when no attack is occurring), with the
index denoting the IDPS. We will analyze SWC and WAC for this class of models.
Plugging the given likelihood model into (5) implies that the model is SWC iff, for z = 0, 1, 2, ...
\[
\left(\frac{1-x_1}{1-y_1}\right)^{z} \frac{x_1}{y_1} \ge \left(\frac{1-x_2}{1-y_2}\right)^{z} \frac{x_2}{y_2} \qquad (8)
\]
Equation (8) will be satisfied iff $\frac{x_1}{y_1} \ge \frac{x_2}{y_2}$ and $\frac{1-x_1}{1-y_1} \ge \frac{1-x_2}{1-y_2}$, which is therefore a necessary and sufficient condition for SWC.
Now, we turn to WAC. Forming T as defined in (6), we see that
\[
T_j(\rho) = \sum_z \rho_z \left( z \log\frac{1-x_j}{1-y_j} + \log\frac{x_j}{y_j} \right) = \mu \log\frac{1-x_j}{1-y_j} + \log\frac{x_j}{y_j} \qquad (9)
\]
where μ = E_ρ[z]. By the structure of the characteristic matrix, the model will be WAC iff T_2(ρ) > 0 ⟹ T_1(ρ) > 0 for all μ ≥ 0. Assume for convenience that x_i > y_i. Then $\{\rho \mid T_i(\rho) > 0\} = \{\rho \mid \mu < \frac{\log(y_i/x_i)}{\log((1-x_i)/(1-y_i))}\}$, and therefore the model is WAC iff
\[
\frac{\log\frac{x_2}{y_2}}{\log\frac{1-x_2}{1-y_2}} \ge \frac{\log\frac{x_1}{y_1}}{\log\frac{1-x_1}{1-y_1}} \qquad (10)
\]
Comparing the conditions for SWC (8) to those for WAC (10), we see that any parameters satisfying (8) also satisfy (10) but not vice versa. For example, x_1 = 0.3, x_2 = 0.5, y_1 = 0.2, y_2 = 0.25 do not satisfy (8), but do satisfy (10). Thus WAC is truly a weaker sense of convergence than SWC.
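The two closed-form conditions are straightforward to test numerically; the sketch below (our own helpers, written to mirror (8) and (10)) confirms the counterexample parameters:

    import numpy as np

    def is_swc(x1, y1, x2, y2):
        # condition implied by (8): both ratio inequalities must hold
        return x1 / y1 >= x2 / y2 and (1 - x1) / (1 - y1) >= (1 - x2) / (1 - y2)

    def is_wac(x1, y1, x2, y2):
        # condition (10)
        lhs = np.log(x2 / y2) / np.log((1 - x2) / (1 - y2))
        rhs = np.log(x1 / y1) / np.log((1 - x1) / (1 - y1))
        return lhs >= rhs

    print(is_swc(0.3, 0.2, 0.5, 0.25))   # False
    print(is_wac(0.3, 0.2, 0.5, 0.25))   # True: WAC but not SWC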
5 Coherence with only finitely many observations
As shown in Sections 3 and 4, a WAC likelihood model generates a partition {P_i} over the observation probability simplex such that ρ ∈ P_i ⟹ P_n → Φe_i. The question we now address is, given a WAC likelihood model and finitely many observations (with empirical distribution ρ̂_n), how to revise an incoherent posterior probability assessment P_n so that it is both coherent and consistent with the observed data.

Principle of Conserving Predictive Uncertainty: Given ρ̂_n, choose λ such that λ_i = Pr[lim_{n→∞} ρ̂_n ∈ P_i] for each i (where ρ ∈ P_i iff P_n → Φe_i).

The principle of conserving predictive uncertainty states that in revising an incoherent assessment P_n to a coherent one P̂_n, the weight vector over the columns of Φ should reflect the uncertainty in whether the observations are being generated by a distribution in the corresponding element of the partition {P_i} (and therefore whether P_n is converging to Φe_i).
Given a uniform prior over generating distributions ρ and assuming Lebesgue measure λ over the parameters of the generating distribution, we can write
\[
P(\rho \in \mathcal{P}_i \mid \hat\rho_n) = \int_{\rho \in \mathcal{P}_i} P(\rho \mid \hat\rho_n)\, d\rho = \int_{\rho \in \mathcal{P}_i} \frac{P(\hat\rho_n \mid \rho) P(\rho)}{\int_{\rho'} P(\hat\rho_n \mid \rho') P(\rho')\, d\rho'}\, d\rho = \frac{1}{\int_{\rho'} P(\hat\rho_n \mid \rho')\, d\rho'} \int_{\rho \in \mathcal{P}_i} P(\hat\rho_n \mid \rho)\, d\rho.
\]
In the limit of large n, P(ρ̂_n | ρ) ≐ e^{−nD(ρ̂_n‖ρ)} (where ≐ denotes equality to the first degree in the exponent; c.f. [15]). This implies that as n gets large, Pr[lim_{n→∞} ρ̂_n ∈ P_i] is dominated by the point ρ_i* = argmin_{ρ∈P_i} D(ρ̂_n‖ρ) (i.e., the reverse i-projection, or Maximum Likelihood estimate). This suggests the following approximation method for determining a coherent projection of P_n:
\[
\lambda_i = \frac{P(\hat\rho_n \mid \rho_i^{*})}{\sum_{j \le |\{\mathcal{P}_i\}|} P(\hat\rho_n \mid \rho_j^{*})} \qquad (11)
\]
The relationship between the ML estimates (ρ_i*) and the probability over the columns of the characteristic matrix is represented graphically in Figure 1. As will be shown in Section 6, the principle of conserving predictive uncertainty can even be effectively applied to non-WAC models.
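A rough numerical stand-in for (11) is sketched below (not the authors' implementation; it assumes each partition element P_i is supplied as inequality constraints, which is reasonable since each T_j is linear in ρ, and it approximates each reverse i-projection with a generic constrained optimizer):

    import numpy as np
    from scipy.optimize import minimize

    def kl(p, q):
        return float(np.sum(p * np.log(p / q)))

    def reverse_projection(rho_hat, constraint):
        # constraint: callable returning values >= 0 exactly on P_i
        cons = ({'type': 'eq',   'fun': lambda r: np.sum(r) - 1.0},
                {'type': 'ineq', 'fun': constraint})
        res = minimize(lambda r: kl(rho_hat, r), rho_hat,
                       bounds=[(1e-9, 1.0)] * len(rho_hat), constraints=cons)
        return res.x

    def partition_weights(rho_hat, n, constraints):
        # weight each P_i by the large-deviations factor exp(-n D(rho_hat || rho_i*))
        d = np.array([kl(rho_hat, reverse_projection(rho_hat, c))
                      for c in constraints])
        w = np.exp(-n * (d - d.min()))   # subtract the minimum for stability
        return w / w.sum()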
[Figure 1: The relationship between observation and outcome simplices. Left: the observation simplex, partitioned into regions P_1, ..., P_4, with the empirical distribution ρ̂_n and the reverse projections ρ_2*, ρ_3*, ρ_4* marked on the region boundaries. Right: the outcome simplex, with vertices Φe_1, ..., Φe_4.]
5.1 Sparse coherent approximation
In general |λ| (the length of the vector λ) can be of order 2^N (where N is the number of assessors), so solving for λ directly using (11) may be computationally infeasible. The following result suggests that to generate the optimal (in the sense of capturing the most possible weight) O(N) sparse approximation of λ we need only calculate the O(N^2) reverse i-projections.

Let λ be determined according to (11) and let {P_i} be as defined in Section 4. Assume w.l.o.g. that λ_i ≤ λ_j for all i > j. Define the neighborhood of P_i as N(P_i) = {P_j : |r(P_i) − r(P_j)| = 1}, where r(P_i) is defined as in (7). The neighborhood of P_i is the set of partition elements such that the limit of one (and only one) assessor's probability assessment has changed. The size of the neighborhood is thus less than or equal to N.

By the assumed ordering of λ and (11), it is immediately evident that ρ̂_n = ρ_1*, i.e., the maximally weighted partition element is the one that contains the empirical distribution. It can be shown that ρ_2* ∈ N(P_1), and thus recursively that ρ_i* ∈ ∪_{j<i} N(P_j). Therefore the total number of projections in calculating the i = N largest weights is bounded by
\[
\Big|\bigcup_{j<i} N(\mathcal{P}_j)\Big| \le \sum_{j<i} |N(\mathcal{P}_j)| \le \max_j |N(\mathcal{P}_j)| \cdot N = N^2.
\]

6 Experimental Results
Consider a three-assessor situation with an identity characteristic matrix, i.e., each of three assessors estimates the probability that his unique outcome has occurred, knowing exactly one has occurred. Suppose each event is a priori equally likely, and a sequence of i.i.d. observations is generated with conditional probability p(z^i | A_i) = 0.4 and p(z^i | Ā_i) = 0.3 (thus observation z^i is evidence that event A_i has occurred). Optimal joint estimation results in the posterior distribution convergence regions shown in Figure 2(a). Marginal estimation introduces incoherent convergence regions (Figure 2(b)); but for well-calibrated models, the empirical distribution is exponentially unlikely to lie in an incoherent region. However, miscalibrated models (Figure 2(c)) may lead to the true distribution lying in an incoherence region. WAC-approximation can ameliorate such miscalibration.
[Figure 2: (a) Decision boundaries for optimal joint estimator; (b) decision boundaries for marginal estimator; (c) decision boundaries for miscalibrated observation model; (d) WAC approximation.]
The results of a Monte Carlo implementation of this miscalibrated estimation are shown in Figure 3. The top line
(blue) shows the average error for accepting the posterior assessments generated by the miscalibrated observation models. The next line (green) corresponds to renormalization at each time step,
equivalent to projecting the posterior into the coherent set with a divergence-based objective function. Next (red) shows the error generated by standard (L2) projection of the miscalibrated posterior
into the coherent set. Finally, in cyan is shown the WAC approximation.
[Figure 3: Comparison of mean-squared errors as a function of the number of observations under four different estimation techniques (Straight Bayes, Rescaling, L2 Projection, WAC); y-axis: estimate mean squared error, x-axis: number of observations (200 to 2000).]
7 Conclusions
This paper has introduced the problem of dynamic coherence and analyzed it when the dynamics
are induced by Bayes' rule. First, we demonstrated how under subjective event likelihood models (potentially unmatched to the true underlying distributions) Bayes' rule results in a partition over the observation probability simplex. Then we introduced two concepts of dynamic coherence: step-wise coherence and weak asymptotic coherence. Next we suggested a principle of conservation of
predictive uncertainty, by which observation-based incoherence can be mitigated (even in incoherent
models). Finally, we briefly analyzed the computational impact of coherent approximation.
References
[1] V.S. Borkar, V.R. Konda, and S.K. Mitter. On De Finetti coherence and Kolmogorov probability. Statistics and Probability Letters, 66(4):417–421, March 2004.
[2] Bruno de Finetti. Theory of Probability, volumes 1-2. Wiley, New York, 1974.
[3] Daniel Kahneman, Paul Slovic, and Amos Tversky, editors. Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, 1982.
[4] J.B. Predd et al. Aggregating forecasts of chance from incoherent and abstaining experts. Decision Analysis, 5:177–189, 2008.
[5] D.N. Osherson and M.Y. Vardi. Aggregating disparate estimates of chance. Games and Economic Behavior, 56(1):148–173, July 2006.
[6] P. Jones, S. Mitter, and V. Saligrama. Revision of marginal probability assessments. In the 13th International Conference on Information Fusion, Edinburgh, UK, 2010.
[7] K. Scarfone and P. Mell. Guide to intrusion detection and prevention systems (IDPS). Technical Report 800-94, National Institute of Standards and Technology, Technology Administration, US Dept. of Commerce.
[8] D.A. Freedman and R.A. Purves. Bayes' method for bookies. The Annals of Mathematical Statistics, 40(4):1177–1186, August 1969.
[9] D. Heath and W. Sudderth. On finitely additive priors, coherence, and extended admissibility. The Annals of Statistics, 6(2):333–345, March 1978.
[10] D.A. Lane and W. Sudderth. Coherent and continuous inference. The Annals of Statistics, 11(1):114–120, March 1983.
[11] E. Regazzini. De Finetti's coherence and statistical inference. The Annals of Statistics, 15(2):845–864, June 1987.
[12] E. Regazzini. Coherent statistical inference and Bayes' theorem. The Annals of Statistics, 19(1):366–381, March 1991.
[13] P. Diaconis and S.L. Zabell. Updating subjective probability. Journal of the American Statistical Association, 77(380):822–830, December 1982.
[14] H. Chan and A. Darwiche. On the revision of probabilistic beliefs using uncertain evidence. Artificial Intelligence, 40(4):67–90, August 2005.
[15] T.M. Cover and J.A. Thomas. Elements of Information Theory. Wiley-Interscience, 2nd edition, 2006.
[16] M. Feder and N. Merhav. Universal composite hypothesis testing: A competitive minimax approach. IEEE Transactions on Information Theory, 48(6):1504–1517, June 2002.
Energy Disaggregation via Discriminative Sparse Coding
J. Zico Kolter
Computer Science and
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Siddarth Batra, Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
{sidbatra,ang}@cs.stanford.edu
Abstract
Energy disaggregation is the task of taking a whole-home energy signal and separating it into its component appliances. Studies have shown that having device-level energy information can cause users to conserve significant amounts of energy, but current electricity meters only report whole-home data. Thus, developing
algorithmic methods for disaggregation presents a key technical challenge in the
effort to maximize energy conservation. In this paper, we examine a large scale
energy disaggregation task, and apply a novel extension of sparse coding to this
problem. In particular, we develop a method, based upon structured prediction,
for discriminatively training sparse coding algorithms specifically to maximize
disaggregation performance. We show that this significantly improves the performance of sparse coding algorithms on the energy task and illustrate how these
disaggregation results can provide useful information about energy usage.
1 Introduction
Energy issues present one of the largest challenges facing our society. The world currently consumes
an average of 16 terawatts of power, 86% of which comes from fossil fuels [28]; without any effort
to curb energy consumption or use different sources of energy, most climate models predict that the
earth's temperature will increase by at least 5 degrees Fahrenheit in the next 90 years [1], a change
that could cause ecological disasters on a global scale. While there are of course numerous facets to
the energy problem, there is a growing consensus that many energy and sustainability problems are
fundamentally informatics problems, areas where machine learning can play a significant role.
This paper looks specifically at the task of energy disaggregation, an informatics task relating to
energy efficiency. Energy disaggregation, also called non-intrusive load monitoring [11], involves
taking an aggregated energy signal, for example the total power consumption of a house as read by
an electricity meter, and separating it into the different electrical appliances being used. Numerous
studies have shown that receiving information about one's energy usage can automatically induce energy-conserving behaviors [6, 19], and these studies also clearly indicate that receiving appliance-specific information leads to much larger gains than whole-home data alone ([19] estimates that
appliance-level data could reduce consumption by an average of 12% in the residential sector). In
the United States, electricity constitutes 38% of all energy used, and residential and commercial
buildings together use 75% of this electricity [28]; thus, this 12% figure accounts for a sizable
amount of energy that could potentially be saved. However, the widely-available sensors that provide
electricity consumption information, namely the so-called "Smart Meters" that are already becoming
ubiquitous, collect energy information only at the whole-home level and at a very low resolution
(typically every hour or 15 minutes). Thus, energy disaggregation methods that can take this whole-home data and use it to predict individual appliance usage present an algorithmic challenge where
advances can have a significant impact on large-scale energy efficiency issues.
Energy disaggregation methods do have a long history in the engineering community, including
some of which have applied machine learning techniques; early algorithms [11, 26] typically looked for "edges" in the power signal to indicate whether a known device was turned on or off; later work
focused on computing harmonics of steady-state power or current draw to determine more complex
device signatures [16, 14, 25, 2]; recently, researchers have analyzed the transient noise of an electrical circuit that occurs when a device changes state [15, 21]. However, these and all other studies
we are aware of were either conducted in artificial laboratory environments, contained a relatively
small number of devices, trained and tested on the same set of devices in a house, and/or used custom hardware for very high-frequency electrical monitoring with an algorithmic focus on "event detection" (detecting when different appliances were turned on and off). In contrast, in this paper
we focus on disaggregating electricity using low-resolution, hourly data of the type that is readily
available via smart meters (but where most single-device "events" are not apparent); we specifically
look at the generalization ability of our algorithms for devices and homes unseen at training time;
and we consider a data set that is substantially larger than those previously considered, with 590
homes, 10,165 unique devices, and energy usage spanning a time period of over two years.
The algorithmic approach we present in this paper builds upon sparse coding methods and recent
work in single-channel source separation [24, 23, 22]. Specifically, we use a sparse coding algorithm
to learn a model of each device's power consumption over a typical week, then combine these
learned models to predict the power consumption of different devices in previously unseen homes,
using their aggregate signal alone. While energy disaggregation can naturally be formulated as such
a single-channel source separation problem, we know of no previous application of these methods
to the energy disaggregation task. Indeed, the most common application of such algorithms is audio
signal separation, which typically has very high temporal resolution; thus, the low-resolution energy
disaggregation task we consider here poses a new set of challenges for such methods, and existing
approaches alone perform quite poorly.
As a second major contribution of the paper, we develop a novel approach for discriminatively training sparse coding dictionaries for disaggregation tasks, and show that this significantly improves
performance on our energy domain. Specifically, we formulate the task of maximizing disaggregation performance as a structured prediction problem, which leads to a simple and effective algorithm
for discriminatively training such sparse representations for disaggregation tasks. The algorithm is similar in spirit to a number of recent approaches to discriminative training of sparse representations [12, 17, 18]. However, these past works were interested in discriminatively training sparse coding representations specifically for classification tasks, whereas we focus here on discriminatively
training the representation for disaggregation tasks, which naturally leads to substantially different
algorithmic approaches.
2 Discriminative Disaggregation via Sparse Coding
We begin by reviewing sparse coding methods and their application to disaggregation tasks. For concreteness we use the terminology of our energy disaggregation domain throughout this description,
but the algorithms can apply equally to other domains. Formally, assume we are given k different classes, which in our setting corresponds to device categories such as televisions, refrigerators,
heaters, etc. For every i = 1, ..., k, we have a matrix X_i ∈ R^{T×m} where each column of X_i contains a week of energy usage (measured every hour) for a particular house and for this particular type of device. Thus, for example, the jth column of X_1, which we denote x_1^{(j)}, may contain weekly energy consumption for a refrigerator (for a single week in a single house) and x_2^{(j)} could contain weekly energy consumption of a heater (for this same week in the same house). We denote the aggregate power consumption over all device types as X̄ ≡ Σ_{i=1}^{k} X_i, so that the jth column of X̄, x̄^{(j)}, contains a week of aggregated energy consumption for all devices in a given house. At training time, we assume we have access to the individual device energy readings X_1, ..., X_k (obtained for example from plug-level monitors in a small number of instrumented homes). At test time, however, we assume that we have access only to the aggregate signal of a new set of data points X̄′ (as would be reported by a smart meter), and the goal is to separate this signal into its components X′_1, ..., X′_k.

The sparse coding approach to source separation (e.g., [24, 23]), which forms the basis for our disaggregation approach, is to train separate models for each individual class X_i, then use these models to separate an aggregate signal. Formally, sparse coding models the ith data matrix using the approximation X_i ≈ B_i A_i, where the columns of B_i ∈ R^{T×n} contain a set of n basis functions, also called the dictionary, and the columns of A_i ∈ R^{n×m} contain the activations of these basis functions [20]. Sparse coding additionally imposes the constraint that the activations A_i be sparse, i.e., that they contain mostly zero entries, which allows us to learn overcomplete representations of the data (more basis functions than the dimensionality of the data). A common approach for achieving this sparsity is to add an ℓ_1 regularization penalty to the activations.

Since energy usage is an inherently non-negative quantity, we impose the further constraint that the activations and bases be non-negative, an extension known as non-negative sparse coding [13, 7]. Specifically, in this paper we will consider the non-negative sparse coding objective
\[
\min_{A_i \ge 0,\, B_i \ge 0}\; \frac{1}{2}\|X_i - B_i A_i\|_F^2 + \lambda \sum_{p,q} (A_i)_{pq} \quad \text{subject to } \|b_i^{(j)}\|_2 \le 1,\; j = 1, \ldots, n \qquad (1)
\]
where X_i, A_i, and B_i are defined as above, λ ∈ R_+ is a regularization parameter, ‖Y‖_F ≡ (Σ_{p,q} Y_{pq}^2)^{1/2} is the Frobenius norm, and ‖y‖_2 ≡ (Σ_p y_p^2)^{1/2} is the ℓ_2 norm. This optimization problem is not jointly convex in A_i and B_i, but it is convex in each optimization variable when holding the other fixed, so a common strategy for optimizing (1) is to alternate between minimizing the objective over A_i and B_i.
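A minimal sketch of this alternating scheme is given below (NumPy; this is not the authors' solver: it uses a fixed-step proximal gradient step for A_i, which requires the step size to be small relative to the spectral norm of B_i^T B_i, and a multiplicative NMF-style update for B_i followed by projection onto the unit-norm constraint):

    import numpy as np

    def nn_sparse_coding(X, n_bases=50, lam=0.1, iters=200, step=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        T, m = X.shape
        B = np.abs(rng.standard_normal((T, n_bases)))
        B /= np.linalg.norm(B, axis=0, keepdims=True)
        A = np.abs(rng.standard_normal((n_bases, m)))
        for _ in range(iters):
            # A-step: proximal gradient on 0.5*||X - BA||_F^2 + lam*sum(A), A >= 0
            G = B.T @ (B @ A - X)
            A = np.maximum(A - step * (G + lam), 0.0)
            # B-step: multiplicative update, then project columns onto ||b||_2 <= 1
            B *= (X @ A.T) / (B @ A @ A.T + 1e-9)
            B /= np.maximum(np.linalg.norm(B, axis=0, keepdims=True), 1.0)
        return B, A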
After using the above procedure to find representations A_i and B_i for each of the classes i = 1, ..., k, we can disaggregate a new aggregate signal X̄ ∈ R^{T×m′} (without providing the algorithm its individual components), using the following procedure (used by, e.g., [23], amongst others). We concatenate the bases to form a single joint set of basis functions and solve the optimization problem
\[
\hat A_{1:k} = \arg\min_{A_{1:k} \ge 0}\; \frac{1}{2}\left\| \bar X - [B_1 \cdots B_k] \begin{bmatrix} A_1 \\ \vdots \\ A_k \end{bmatrix} \right\|_F^2 + \lambda \sum_{i,p,q} (A_i)_{pq} \;\equiv\; \arg\min_{A_{1:k} \ge 0} F(\bar X, B_{1:k}, A_{1:k}) \qquad (2)
\]
where for ease of notation we use A_{1:k} as shorthand for A_1, ..., A_k, and we abbreviate the optimization objective as F(X̄, B_{1:k}, A_{1:k}). We then predict the ith component of the signal to be
\[
\hat X_i = B_i \hat A_i. \qquad (3)
\]
The intuition behind this approach is that if B_i is trained to reconstruct the ith class with small activations, then it should be better at reconstructing the ith portion of the aggregate signal (i.e., require smaller activations) than all other bases B_j for j ≠ i. We can evaluate the quality of the resulting disaggregation by what we refer to as the disaggregation error,
\[
E(X_{1:k}, B_{1:k}) \equiv \sum_{i=1}^{k} \frac{1}{2}\left\| X_i - B_i \hat A_i \right\|_F^2 \quad \text{subject to} \quad \hat A_{1:k} = \arg\min_{A_{1:k} \ge 0} F\!\left( \sum_{i=1}^{k} X_i,\; B_{1:k},\; A_{1:k} \right), \qquad (4)
\]
which quantifies how accurately we reconstruct each individual class when using the activations obtained only via the aggregated signal.
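A corresponding sketch of the disaggregation step (2)-(3) and the error (4) follows (again a simplified stand-in with our own function names; it reuses the same fixed-step proximal iteration for the joint activation problem, with the step size assumed small):

    import numpy as np

    def disaggregate(X_bar, B_list, lam=0.1, iters=500, step=1e-3):
        B = np.hstack(B_list)                          # [B_1 ... B_k]
        A = np.zeros((B.shape[1], X_bar.shape[1]))
        for _ in range(iters):                         # solve (2) approximately
            G = B.T @ (B @ A - X_bar)
            A = np.maximum(A - step * (G + lam), 0.0)
        splits = np.cumsum([b.shape[1] for b in B_list])[:-1]
        A_list = np.split(A, splits, axis=0)
        return [Bi @ Ai for Bi, Ai in zip(B_list, A_list)]   # (3)

    def disaggregation_error(X_list, X_hat_list):      # (4)
        return 0.5 * sum(np.sum((Xi - Xh) ** 2)
                         for Xi, Xh in zip(X_list, X_hat_list))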
2.1 Structured Prediction for Discriminative Disaggregation Sparse Coding
An issue with using sparse coding alone for disaggregation tasks is that the bases are not trained to minimize the disaggregation error. Instead, the method relies on the hope that learning basis functions for each class individually will produce bases that are distinct enough to also produce small disaggregation error. Furthermore, it is very difficult to optimize the disaggregation error directly over B_{1:k}, due to the non-differentiability (and discontinuity) of the argmin operator with a non-negativity constraint. One could imagine an alternating procedure where we iteratively optimize over B_{1:k}, ignoring the dependence of Â_{1:k} on B_{1:k}, then re-solve for the activations Â_{1:k}; but ignoring how Â_{1:k} depends on B_{1:k} loses much of the problem's structure, and this approach performs very poorly in practice. Alternatively, other methods (though in a different context from disaggregation) have been proposed that use a differentiable objective function and implicit differentiation to explicitly model the derivative of the activations with respect to the basis functions [4]; however, this formulation loses some of the benefits of the standard sparse coding formulation, and computing these derivatives is a computationally expensive procedure.
Instead, we propose in this paper a method for optimizing disaggregation performance based upon structured prediction methods [27]. To describe our approach, we first define the regularized disaggregation error, which is simply the disaggregation error plus a regularization penalty on Â_{1:k},
\[
E_{reg}(X_{1:k}, B_{1:k}) \equiv E(X_{1:k}, B_{1:k}) + \lambda \sum_{i,p,q} (\hat A_i)_{pq} \qquad (5)
\]
where Â is defined as in (2). This criterion provides a better optimization objective for our algorithm, as we wish to obtain a sparse set of coefficients that can achieve low disaggregation error. Clearly, the best possible value of Â_i for this objective function is given by
\[
A_i^{*} = \arg\min_{A_i \ge 0}\; \frac{1}{2}\|X_i - B_i A_i\|_F^2 + \lambda \sum_{p,q} (A_i)_{pq}, \qquad (6)
\]
which is precisely the activations obtained after an iteration of sparse coding on the data matrix X_i.

Motivated by this fact, the first intuition of our algorithm is that in order to minimize disaggregation error, we can discriminatively optimize the bases B_{1:k} such that performing the optimization (2) produces activations that are as close to A*_{1:k} as possible. Of course, changing the bases B_{1:k} to optimize this criterion would also change the resulting optimal coefficients A*_{1:k}. Thus, the second intuition of our method is that the bases used in the optimization (2) need not be the same as the bases used to reconstruct the signals. We define an augmented regularized disaggregation error objective
\[
\tilde E_{reg}(X_{1:k}, B_{1:k}, \tilde B_{1:k}) \equiv \sum_{i=1}^{k}\left( \frac{1}{2}\left\| X_i - B_i \hat A_i \right\|_F^2 + \lambda \sum_{p,q} (\hat A_i)_{pq} \right) \quad \text{subject to} \quad \hat A_{1:k} = \arg\min_{A_{1:k} \ge 0} F\!\left( \sum_{i=1}^{k} X_i,\; \tilde B_{1:k},\; A_{1:k} \right), \qquad (7)
\]
where the B_{1:k} bases (referred to as the reconstruction bases) are the same as those learned from sparse coding while the B̃_{1:k} bases (referred to as the disaggregation bases) are discriminatively optimized in order to move Â_{1:k} closer to A*_{1:k}, without changing these targets.

Discriminatively training the disaggregation bases B̃_{1:k} is naturally framed as a structured prediction task: the input is X̄, the multi-variate desired output is A*_{1:k}, the model parameters are B̃_{1:k}, and the discriminant function is F(X̄, B̃_{1:k}, A_{1:k}).¹ In other words, we seek bases B̃_{1:k} such that (ideally)
\[
A^{*}_{1:k} = \arg\min_{A_{1:k} \ge 0} F(\bar X, \tilde B_{1:k}, A_{1:k}). \qquad (8)
\]
While there are many potential methods for optimizing such a prediction task, we use a simple method based on the structured perceptron algorithm [5]. Given some value of the parameters B̃_{1:k}, we first compute Â using (2). We then perform the perceptron update with a step size α,
\[
\tilde B_{1:k} \leftarrow \tilde B_{1:k} - \alpha \left( \nabla_{\tilde B_{1:k}} F(\bar X, \tilde B_{1:k}, A^{*}_{1:k}) - \nabla_{\tilde B_{1:k}} F(\bar X, \tilde B_{1:k}, \hat A_{1:k}) \right) \qquad (9)
\]
or more explicitly, defining B̃ = [B̃_1 ⋯ B̃_k] and A* = [A_1^{*T} ⋯ A_k^{*T}]^T (and similarly for Â),
\[
\tilde B \leftarrow \tilde B - \alpha \left( (\bar X - \tilde B \hat A)\hat A^{T} - (\bar X - \tilde B A^{*})A^{*T} \right). \qquad (10)
\]
To keep B̃_{1:k} in a similar form to B_{1:k}, we keep only the positive part of B̃_{1:k} and we re-normalize each column to have unit norm. One item to note is that, unlike typical structured prediction where the discriminant is a linear function in the parameters (which guarantees convexity of the problem), here our discriminant is a quadratic function of the parameters, and so we no longer expect to necessarily reach a global optimum of the prediction problem; however, since sparse coding itself is a non-convex problem, this is not overly concerning for our setting. Our complete method for discriminative disaggregation sparse coding, which we call DDSC, is shown in Algorithm 1.

¹The structured prediction task actually involves m examples (where m is the number of columns of X̄), and the goal is to output the desired activations (a*_{1:k})^{(j)} for the jth example x̄^{(j)}. However, since the function F decomposes across the columns of X̄ and A, the above notation is equivalent to the more explicit formulation.
Algorithm 1 Discriminative disaggregation sparse coding

Input: data points for each individual source X_i ∈ R^{T×m}, i = 1, ..., k; regularization parameter λ ∈ R_+; gradient step size α ∈ R_+.

Sparse coding pre-training:
1. Initialize B_i and A_i with positive values and scale columns of B_i such that ‖b_i^{(j)}‖_2 = 1.
2. For each i = 1, ..., k, iterate until convergence:
   (a) A_i ← arg min_{A ≥ 0} ‖X_i − B_i A‖_F^2 + λ Σ_{p,q} A_{pq}
   (b) B_i ← arg min_{B ≥ 0, ‖b^{(j)}‖_2 ≤ 1} ‖X_i − B A_i‖_F^2

Discriminative disaggregation training:
3. Set A*_{1:k} ← A_{1:k}, B̃_{1:k} ← B_{1:k}.
4. Iterate until convergence:
   (a) Â_{1:k} ← arg min_{A_{1:k} ≥ 0} F(X̄, B̃_{1:k}, A_{1:k})
   (b) B̃ ← [B̃ − α((X̄ − B̃Â)Â^T − (X̄ − B̃A*)(A*)^T)]_+
   (c) For all i, j: b̃_i^{(j)} ← b̃_i^{(j)} / ‖b̃_i^{(j)}‖_2.

Given aggregated test examples X̄′:
5. Â′_{1:k} ← arg min_{A_{1:k} ≥ 0} F(X̄′, B̃_{1:k}, A_{1:k})
6. Predict X̂′_i = B_i Â′_i.
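For illustration, one discriminative iteration (steps 4a-4c) might look as follows; solve_activations is a placeholder for whatever routine computes the arg min in (2), and the step size alpha is assumed small:

    import numpy as np

    def ddsc_step(X_bar, B_tilde, A_star, solve_activations, alpha=1e-4):
        A_hat = solve_activations(X_bar, B_tilde)            # step 4a
        grad = (X_bar - B_tilde @ A_hat) @ A_hat.T \
             - (X_bar - B_tilde @ A_star) @ A_star.T         # direction from (10)
        B_tilde = np.maximum(B_tilde - alpha * grad, 0.0)    # step 4b: positive part
        norms = np.maximum(np.linalg.norm(B_tilde, axis=0, keepdims=True), 1e-9)
        return B_tilde / norms                               # step 4c: unit columns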
2.2 Extensions
Although, as we show shortly, the discriminative training procedure has made the largest difference
in terms of improving disaggregation performance in our domain, a number of other modifications
to the standard sparse coding formulation have also proven useful. Since these are typically trivial
extensions or well-known algorithms, we mention them only briefly here.
Total Energy Priors. One deficiency of the sparse coding framework for energy disaggregation
is that the optimization objective does not take into consideration the size of an energy signal for
determining which class it belongs to, just its shape. Since total energy used is obviously a discriminating factor for different device types, we consider an extension that penalizes the ℓ_2 deviation
between a device and its mean total energy. Formally, we augment the objective F with the penalty
\[
F_{TEP}(\bar X, B_{1:k}, A_{1:k}) = F(\bar X, B_{1:k}, A_{1:k}) + \lambda_{TEP} \sum_{i=1}^{k} \left\| \mu_i \mathbf{1}^T - \mathbf{1}^T B_i A_i \right\|_2^2 \qquad (11)
\]
where 1 denotes a vector of ones of the appropriate size, and μ_i = (1/m) 1^T X_i 1 denotes the average total energy of device class i.
Group Lasso. Since the data set we consider exhibits some amount of sparsity at the device level
(i.e., several examples have zero energy consumed by certain device types, as there is either no such
device in the home or it was not being monitored), we also would like to encourage a grouping effect
to the activations. That is, we would like a certain coefficient being active for a particular class to
encourage other coefficients to also be active in that class. To achieve this, we employ the group
Lasso algorithm [29], which adds an ℓ_2 norm penalty to the activations of each device
\[
F_{GL}(\bar X, B_{1:k}, A_{1:k}) = F(\bar X, B_{1:k}, A_{1:k}) + \lambda_{GL} \sum_{i=1}^{k} \sum_{j=1}^{m} \left\| a_i^{(j)} \right\|_2. \qquad (12)
\]
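Inside a proximal or block coordinate descent solver, this penalty acts on each activation column through the standard block soft-thresholding operator, sketched below (t denotes an assumed step size; the operator itself is standard, not specific to this paper):

    import numpy as np

    def group_soft_threshold(a, t_lam_gl):
        # prox of t * lam_GL * ||a||_2: shrink the whole block toward zero
        nrm = np.linalg.norm(a)
        if nrm <= t_lam_gl:
            return np.zeros_like(a)
        return (1.0 - t_lam_gl / nrm) * a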
Shift Invariant Sparse Coding. Shift invariant, or convolutional sparse coding is an extension
to the standard sparse coding framework where each basis is convolved over the input data, with
a separate activation for each shift position [3, 10]. Such a scheme may intuitively seem to be
beneficial for the energy disaggregation task, where a given device might exhibit the same energy
signature at different times. However, as we will show in the next section, this extension actually
performs worse in our domain; this is likely due to the fact that, since we have ample training data
5
and a relatively low-dimensional domain (each energy signal has 168 dimensions, 24 hours per
day times 7 days in the week), the standard sparse coding bases are able to cover all possible shift
positions for typical device usage. However, pure shift invariant bases cannot capture information
about when in the week or day each device is typically used, and such information has proven crucial
for disaggregation performance.
2.3 Implementation
Space constraints preclude a full discussion of the implementation details of our algorithms, but for
the most part we rely on standard methods for solving the optimization problems. In particular,
most of the time spent by the algorithm involves solving sparse optimization problems to find the
activation coefficients, namely steps 2a and 4a in Algorithm 1. We use a coordinate descent approach
here, both for the standard and group Lasso version of the optimization problems, as these have been
recently shown to be efficient algorithms for ℓ1-type optimization problems [8, 9], and have the
added benefit that we can warm-start the optimization with the solution from previous iterations. To
solve the optimization over Bi in step 2b, we use the multiplicative non-negative matrix factorization
update from [7].
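As a rough illustration of the pre-training phase (steps 2a and 2b of Algorithm 1), the sketch below uses simple multiplicative non-negative updates for both factors, in the spirit of [7] and of non-negative sparse coding [13]. Note that the paper instead uses coordinate descent for the activation step, so this is an assumption-laden stand-in rather than the authors' implementation; the iteration count, seed, and epsilon are ours.

```python
import numpy as np

def sparse_coding_pretrain(X, n_bases, lam, iters=200, eps=1e-9):
    """Approximate steps 2a/2b of Algorithm 1 with multiplicative updates.

    X       : (T, m) non-negative data matrix for one device class
    n_bases : number of basis functions
    lam     : l1 regularization weight on the activations
    """
    T, m = X.shape
    rng = np.random.default_rng(0)
    B = rng.random((T, n_bases))
    B /= np.linalg.norm(B, axis=0, keepdims=True)   # unit-norm columns
    A = rng.random((n_bases, m))
    for _ in range(iters):
        # (2a) decrease ||X - BA||_F^2 + lam*sum(A) subject to A >= 0
        A *= (B.T @ X) / (B.T @ B @ A + lam + eps)
        # (2b) decrease ||X - BA||_F^2 over B >= 0, then renormalize columns
        B *= (X @ A.T) / (B @ A @ A.T + eps)
        B /= np.maximum(np.linalg.norm(B, axis=0, keepdims=True), eps)
    return B, A
```

The column renormalization here is a cheap projection onto the unit-norm constraint rather than the exact constrained solve of step 2b, which is one of the simplifications this sketch makes.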
3 Experimental Results
3.1 The Plugwise Energy Data Set and Experimental Setup
We conducted this work using a data set provided by Plugwise, a European manufacturer of pluglevel monitoring devices. The data set contains hourly energy readings from 10,165 different devices
in 590 homes, collected over more than two years. Each device is labeled with one of 52 device
types, which we further reduce to ten broad categories of electrical devices: lighting, TV, computer,
other electronics, kitchen appliances, washing machine and dryer, refrigerator and freezer, dishwasher, heating/cooling, and a miscellaneous category. We look at time periods in blocks of one
week, and try to predict the individual device consumption over this week given only the wholehome signal (since the data set does not currently contain true whole-home energy readings, we
approximate the home's overall energy usage by aggregating the individual devices). Crucially, we
focus on disaggregating data from homes that are absent from the training set (we assigned 70% of
the homes to the training set, and 30% to the test set, resulting in 17,133 total training weeks and
6846 testing weeks); thus, we are attempting to generalize over the basic category of devices, not
just over different uses of the same device in a single house. We fit the hyper-parameters of the
algorithms (number of bases and regularization parameters) using grid search over each parameter
independently on a cross validation set consisting of 20% of the training homes.
3.2 Qualitative Evaluation of the Disaggregation Algorithms
We first look qualitatively at the results obtained by the method. Figure 1 shows the true energy consumed by two different houses in the test set for two different weeks, along with the energy consumption predicted by our algorithms. The figure shows both the predicted energy of several devices over the whole week, as well as a pie chart that shows the relative energy consumption of different device types over the whole week (a more intuitive display of energy consumed over the week). In many cases, certain devices like the refrigerator, washer/dryer, and computer are predicted quite accurately, both in terms of the total predicted percentage and in terms of the signals themselves. There are also cases where certain devices are not predicted well, such as underestimating the heating component in the example on the left, and predicting a spike in computer usage in the example on the right when it was in fact a dishwasher. Nonetheless, despite some poor predictions at the hourly device level, the breakdown of electric consumption is still quite informative, determining the approximate percentage of many device types and demonstrating the promise of such feedback.
In addition to the disaggregation results themselves, sparse coding representations of the different device types are interesting in their own right, as they give a good intuition about how the different devices are typically used. Figure 2 shows a graphical representation of the learned basis functions. In each plot, the grayscale image on the right shows an intensity map of all basis functions learned for that device category, where each column in the image corresponds to a learned basis. The plot on the left shows examples of seven basis functions for the different device types. Notice, for example, that the bases learned for the washer/dryer devices are nearly all heavily peaked, while the refrigerator bases are much lower in maximum magnitude. Additionally, in the basis images, devices like lighting demonstrate a clear "band" pattern, indicating that these devices are likely to
be on and off during certain times of the day (each basis covers a week of energy usage, so the seven bands represent the seven days). The plots also suggest why the standard implementation of shift invariance is not helpful here. There is sufficient training data such that, for devices like washers and dryers, we learn a separate basis for all possible shifts. In contrast, for devices like lighting, where the time of usage is an important factor, simple shift-invariant bases miss key information.

[Figure 1 (plot data omitted): Example predicted energy profiles and total energy percentages (best viewed in color). Blue lines show the true energy usage, and red the predicted usage, both in units of kWh. Panels: actual vs. predicted weekly energy for the whole home and for individual devices (computer, dishwasher, refrigerator, washer/dryer, heating/cooling), plus pie charts of true and predicted usage by device category.]

[Figure 2 (plot data omitted): Example basis functions learned from three device categories (lighting, refrigerator, washer/dryer; best viewed in color). The plot on the left shows seven example bases, while the image on the right shows all learned basis functions (one basis per column).]
3.3 Quantitative Evaluation of the Disaggregation Methods
There are a number of components to the final algorithm we have proposed, and in this section
we present quantitative results that evaluate the performance of each of these different components.
While many of the algorithmic elements improve the disaggregation performance, the results in this
section show that the discriminative training in particular is crucial for optimizing disaggregation
performance. The most natural metric for evaluating disaggregation performance is the disaggregation error in (4). However, average disaggregation error is not a particularly intuitive metric, and so
we also evaluate a total-week accuracy of the prediction system, defined formally as
$$ \text{Accuracy} \;\equiv\; \frac{\sum_{i,q} \min\Big\{ \sum_p (X_i)_{pq}, \;\; \sum_p (B_i \hat{A}_i)_{pq} \Big\}}{\sum_{p,q} \bar{X}_{p,q}} . \qquad (13) $$
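The accuracy of Eq. (13) can be computed directly from the per-class true and predicted energy matrices. The function below is a sketch with hypothetical argument names; it exploits the fact that the denominator, the total aggregate energy, equals the sum of the true per-class energies.

```python
import numpy as np

def total_week_accuracy(X_true_list, X_pred_list):
    """Fraction of aggregate energy assigned to the correct device class,
    as in Eq. (13). X_true_list[i] and X_pred_list[i] are the true and
    predicted (T, m) energy matrices for device class i."""
    overlap = 0.0
    for Xt, Xp in zip(X_true_list, X_pred_list):
        # per-example total energies, truncated by the true amount
        overlap += np.minimum(Xt.sum(axis=0), Xp.sum(axis=0)).sum()
    total = sum(Xt.sum() for Xt in X_true_list)  # equals sum of the aggregate signal
    return overlap / total
```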
Method                    | Training Set           | Test Set
                          | Disagg. Err.   Acc.    | Disagg. Err.   Acc.
--------------------------|------------------------|-----------------------
Predict Mean Energy       | 20.98          45.78%  | 21.72          47.41%
SISC                      | 20.84          41.87%  | 24.08          41.79%
Sparse Coding             | 10.54          56.96%  | 18.69          48.00%
Sparse Coding + TEP       | 11.27          55.52%  | 16.86          50.62%
Sparse Coding + GL        | 10.55          54.98%  | 17.18          46.46%
Sparse Coding + TEP + GL  |  9.24          58.03%  | 14.05          52.52%
DDSC                      |  7.20          64.42%  | 15.59          53.70%
DDSC + TEP                |  8.99          59.61%  | 15.61          53.23%
DDSC + GL                 |  7.59          63.09%  | 14.58          52.20%
DDSC + TEP + GL           |  7.92          61.64%  | 13.20          55.05%

Table 1: Disaggregation results of algorithms (TEP = Total Energy Prior, GL = Group Lasso, SISC = Shift Invariant Sparse Coding, DDSC = Discriminative Disaggregation Sparse Coding).
[Figure 3 (plot data omitted): Evolution of training and testing errors for iterations of the discriminative DDSC updates. On both the training set and the test set, disaggregation error decreases and accuracy increases over roughly 100 DDSC iterations.]
Despite the complex definition, this quantity simply captures the average amount of energy predicted
correctly over the week (i.e., the overlap between the true and predicted energy pie charts).
Table 1 shows the disaggregation performance obtained by many different prediction methods. The
advantage of the discriminative training procedure is clear: all the methods employing discriminative training perform nearly as well or better than all the methods without discriminative training;
furthermore, the system with all the extensions, discriminative training, a total energy prior, and
the group Lasso, outperforms all competing methods on both metrics. To put these accuracies in
context, we note that, separately from the results presented here, we trained an SVM, using a variety
of hand-engineered features, to classify individual energy signals into their device category, and
were able to achieve at most 59% classification accuracy. It therefore seems unlikely that we could
disaggregate a signal to above this accuracy and so, informally speaking, we expect the achievable
performance on this particular data set to range between 47% for the baseline of predicting mean energy (which in fact is a very reasonable method, as devices often follow their average usage patterns)
and 59% for the individual classification accuracy. It is clear, then, that the discriminative training
is crucial to improving the performance of the sparse coding disaggregation procedure within this
range, and does provide a significant improvement over the baseline. Finally, as shown in Figure 3,
both the training and testing error decrease reliably with iterations of DDSC, and we have found that
this result holds for a wide range of parameter choices and step sizes (though, as with all gradient methods, some care must be taken to choose a step size that is not prohibitively large).
4 Conclusion
Energy disaggregation is a domain where advances in machine learning can have a significant impact
on energy use. In this paper we presented an application of sparse coding algorithms to this task,
focusing on a large data set that contains the type of low-resolution data readily available from smart
meters. We developed the discriminative disaggregation sparse coding (DDSC) algorithm, a novel
discriminative training procedure, and showed that this algorithm significantly improves the accuracy
of sparse coding for the energy disaggregation task.
Acknowledgments This work was supported by ARPA-E (Advanced Research Projects Agency-
Energy) under grant number DE-AR0000018. We are very grateful to Plugwise for providing us
with their plug-level energy data set, and in particular we thank Willem Houck for his assistance
with this data. We also thank Carrie Armel and Adrian Albert for helpful discussions.
References
[1] D. Archer. Global Warming: Understanding the Forecast. Blackwell Publishing, 2008.
[2] M. Berges, E. Goldman, H. S. Matthews, and L. Soibelman. Learning systems for electric consumption of buildings. In ASCE International Workshop on Computing in Civil Engineering, 2009.
[3] T. Blumensath and M. Davies. On shift-invariant sparse coding. Lecture Notes in Computer Science, 3195(1):1205–1212, 2004.
[4] D. Bradley and J. A. Bagnell. Differentiable sparse coding. In Advances in Neural Information Processing Systems, 2008.
[5] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 2002.
[6] S. Darby. The effectiveness of feedback on energy consumption. Technical report, Environmental Change Institute, University of Oxford, 2006.
[7] J. Eggert and E. Korner. Sparse coding and NMF. In IEEE International Joint Conference on Neural Networks, 2004.
[8] J. Friedman, T. Hastie, H. Hoefling, and R. Tibshirani. Pathwise coordinate optimization. The Annals of Applied Statistics, 2(1):302–332, 2007.
[9] J. Friedman, T. Hastie, and R. Tibshirani. A note on the group lasso and a sparse group lasso. Technical report, Stanford University, 2010.
[10] R. Grosse, R. Raina, H. Kwong, and A. Y. Ng. Shift-invariant sparse coding for audio classification. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2007.
[11] G. Hart. Nonintrusive appliance load monitoring. Proceedings of the IEEE, 80(12), 1992.
[12] S. Hasler, H. Wersing, and E. Korner. Combining reconstruction and discrimination with class-specific sparse coding. Neural Computation, 19(7):1897–1918, 2007.
[13] P. O. Hoyer. Non-negative sparse coding. In IEEE Workshop on Neural Networks for Signal Processing, 2002.
[14] C. Laughman, K. Lee, R. Cox, S. Shaw, S. Leeb, L. Norford, and P. Armstrong. Power signature analysis. IEEE Power & Energy Magazine, 2003.
[15] C. Laughman, S. Leeb, and Lee. Advanced non-intrusive monitoring of electric loads. IEEE Power and Energy, 2003.
[16] W. Lee, G. Fung, H. Lam, F. Chan, and M. Lucente. Exploration on load signatures. International Conference on Electrical Engineering (ICEE), 2004.
[17] J. Mairal, F. Bach, J. Ponce, G. Sapiro, and A. Zisserman. Supervised dictionary learning. In Advances in Neural Information Processing Systems, 2008.
[18] J. Mairal, M. Leordeanu, F. Bach, M. Hebert, and J. Ponce. Discriminative sparse image models for class-specific edge detection and image interpretation. In European Conference on Computer Vision, 2008.
[19] B. Neenan and J. Robinson. Residential electricity use feedback: A research synthesis and economic framework. Technical report, Electric Power Research Institute, 2009.
[20] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[21] S. N. Patel, T. Robertson, J. A. Kientz, M. S. Reynolds, and G. D. Abowd. At the flick of a switch: Detecting and classifying unique electrical events on the residential power line. In 9th International Conference on Ubiquitous Computing (UbiComp 2007), 2007.
[22] S. T. Roweis. One microphone source separation. In Advances in Neural Information Processing Systems, 2000.
[23] M. N. Schmidt, J. Larsen, and F. Hsiao. Wind noise reduction using non-negative sparse coding. In IEEE Workshop on Machine Learning for Signal Processing, 2007.
[24] M. N. Schmidt and R. K. Olsson. Single-channel speech separation using sparse non-negative matrix factorization. In International Conference on Spoken Language Processing, 2006.
[25] S. R. Shaw, C. B. Abler, R. F. Lepard, D. Luo, S. B. Leeb, and L. K. Norford. Instrumentation for high performance nonintrusive electrical load monitoring. ASME, 120(224), 1998.
[26] F. Sultanem. Using appliance signatures for monitoring residential loads at meter panel level. IEEE Transactions on Power Delivery, 6(4), 1991.
[27] B. Taskar, V. Chatalbashev, D. Koller, and C. Guestrin. Learning structured prediction models: A large margin approach. In International Conference on Machine Learning, 2005.
[28] Various. Annual Energy Review 2009. U.S. Energy Information Administration, 2009.
[29] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68(1):49–67, 2007.
Learning Networks of
Stochastic Differential Equations
Morteza Ibrahimi
Department of Electrical Engineering
Stanford University
Stanford, CA 94305
[email protected]
José Bento
Department of Electrical Engineering
Stanford University
Stanford, CA 94305
[email protected]
Andrea Montanari
Department of Electrical Engineering and Statistics
Stanford University
Stanford, CA 94305
[email protected]
Abstract
We consider linear models for stochastic dynamics. To any such model can be associated a network (namely a directed graph) describing which degrees of freedom interact under the dynamics. We tackle the problem of learning such a network from observation of the system trajectory over a time interval T.
We analyze the ℓ1-regularized least squares algorithm and, in the setting in which the underlying network is sparse, we prove performance guarantees that are uniform in the sampling rate as long as this is sufficiently high. This result substantiates the notion of a well defined "time complexity" for the network inference problem.
keywords: Gaussian processes, model selection and structure learning, graphical models, sparsity
and feature selection.
1 Introduction and main results
Let G = (V, E) be a directed graph with weight A^0_{ij} ∈ R associated to the directed edge (j, i) from j ∈ V to i ∈ V. To each node i ∈ V in this network is associated an independent standard Brownian motion b_i and a variable x_i taking values in R and evolving according to
$$ dx_i(t) = \sum_{j \in \partial_+ i} A^0_{ij}\, x_j(t)\, dt + db_i(t), $$
where ∂₊i = {j ∈ V : (j, i) ∈ E} is the set of "parents" of i. Without loss of generality we shall take V = [p] ≡ {1, ..., p}. In words, the rate of change of x_i is given by a weighted sum of the current values of its neighbors, corrupted by white noise. In matrix notation, the same system is then represented by
$$ dx(t) = A^0 x(t)\, dt + db(t), \qquad (1) $$
with x(t) ∈ R^p, b(t) a p-dimensional standard Brownian motion and A^0 ∈ R^{p×p} a matrix with entries {A^0_{ij}}_{i,j∈[p]} whose sparsity pattern is given by the graph G. We assume that the linear system ẋ(t) = A^0 x(t) is stable (i.e. that the spectrum of A^0 is contained in {z ∈ C : Re(z) < 0}). Further, we assume that x(t = 0) is in its stationary state. More precisely, x(0) is a Gaussian random variable
independent of b(t), distributed according to the invariant measure. Under the stability assumption, this is a mild restriction, since the system converges exponentially to stationarity.
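A trajectory of model (1) can be simulated with a simple Euler-Maruyama scheme. The sketch below is our own illustration: the step size dt and the zero initial condition are assumptions, and to approximate the stationary start assumed in the text one would discard an initial burn-in portion of the returned trajectory.

```python
import numpy as np

def simulate_sde(A0, T, dt, rng=np.random.default_rng(0)):
    """Euler-Maruyama discretization of dx = A0 x dt + db on [0, T].
    Returns an (n_steps, p) array of sampled states."""
    p = A0.shape[0]
    n_steps = int(T / dt)
    x = np.zeros(p)                      # assumption: start at the origin
    traj = np.empty((n_steps, p))
    for t in range(n_steps):
        x = x + dt * (A0 @ x) + np.sqrt(dt) * rng.standard_normal(p)
        traj[t] = x
    return traj
```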
A portion of time length T of the system trajectory {x(t)}_{t∈[0,T]} is observed and we ask under which conditions these data are sufficient to reconstruct the graph G (i.e., the sparsity pattern of A^0). We are particularly interested in computationally efficient procedures, and in characterizing the scaling of the learning time for large networks. Can the network structure be learnt in a time scaling linearly with the number of its degrees of freedom?
As an example application, chemical reactions can be conveniently modeled by systems of nonlinear stochastic differential equations, whose variables encode the densities of various chemical
species [1, 2]. Complex biological networks might involve hundreds of such species [3], and learning stochastic models from data is an important (and challenging) computational task [4]. Considering one such chemical reaction network in proximity of an equilibrium point, the model (1) can be
used to trace fluctuations of the species counts with respect to the equilibrium values. The network
G would represent in this case the interactions between different chemical factors. Work in this area
focused so far on low-dimensional networks, i.e. on methods that are guaranteed to be correct for fixed p, as T → ∞, while we will tackle here the regime in which both p and T diverge.
Before stating our results, it is useful to stress a few important differences with respect to classical
graphical model learning problems:
(i) Samples are not independent. This can (and does) increase the sample complexity.
(ii) On the other hand, infinitely many samples are given as data (in fact a collection indexed by the continuous parameter t ∈ [0, T]). Of course one can select a finite subsample, for instance at regularly spaced times {x(iη)}_{i=0,1,...}. This raises the question as to whether the learning performance depends on the choice of the spacing η.
(iii) In particular, one expects that choosing η sufficiently large as to make the configurations in the subsample approximately independent can be harmful. Indeed, the matrix A^0 contains more information than the stationary distribution of the above process (1), and only the latter can be learned from independent samples.
(iv) On the other hand, letting η → 0, one can produce an arbitrarily large number of distinct samples. However, samples become more dependent, and intuitively one expects that there is limited information to be harnessed from a given time interval T.
Our results confirm in a detailed and quantitative way these intuitions.
1.1 Results: Regularized least squares
Regularized least squares is an efficient and well-studied method for support recovery. We will
discuss relations with existing literature in Section 1.3.
In the present case, the algorithm reconstructs independently each row of the matrix A^0. The r-th row, A^0_r, is estimated by solving the following convex optimization problem for A_r ∈ R^p:
$$ \text{minimize} \;\; L(A_r; \{x(t)\}_{t\in[0,T]}) + \lambda \|A_r\|_1 , \qquad (2) $$
where the likelihood function L is defined by
$$ L(A_r; \{x(t)\}_{t\in[0,T]}) = \frac{1}{2T}\int_0^T (A_r^* x(t))^2\, dt - \frac{1}{T}\int_0^T (A_r^* x(t))\, dx_r(t) . \qquad (3) $$
(Here and below M^* denotes the transpose of matrix/vector M.) To see that this likelihood function is indeed related to least squares, one can formally write ẋ_r(t) = dx_r(t)/dt and complete the square for the right hand side of Eq. (3), thus getting the integral ∫(A_r^* x(t) − ẋ_r(t))^2 dt − ∫ ẋ_r(t)^2 dt. The first term is a sum of square residuals, and the second is independent of A. Finally the ℓ1 regularization term in Eq. (2) has the role of shrinking to 0 a subset of the entries A_{ij}, thus effectively selecting the structure.
Let S^0 be the support of row A^0_r, and assume |S^0| ≤ k. We will refer to the vector sign(A^0_r) as to the signed support of A^0_r (where sign(0) = 0 by convention). Let λ_max(M) and λ_min(M) stand for the maximum and minimum eigenvalue of a square matrix M respectively. Further, denote by A_min the smallest absolute value among the non-zero entries of row A^0_r.
When stable, the diffusion process (1) has a unique stationary measure which is Gaussian with covariance Q^0 ∈ R^{p×p} given by the solution of Lyapunov's equation [5]
$$ A^0 Q^0 + Q^0 (A^0)^* + I = 0 . \qquad (4) $$
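Since Eq. (4) is a standard continuous Lyapunov equation, the stationary covariance can be obtained with an off-the-shelf solver; for instance, in Python (a sketch, assuming A^0 is stable so that a solution exists):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def stationary_covariance(A0):
    """Solve A0 Q + Q A0^T = -I for the stationary covariance Q0."""
    p = A0.shape[0]
    return solve_continuous_lyapunov(A0, -np.eye(p))
```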
Our guarantee for regularized least squares is stated in terms of two properties of the covariance Q^0 and one assumption on λ_min(A^0) (given a matrix M, we denote by M_{L,R} its submatrix M_{L,R} ≡ (M_{ij})_{i∈L, j∈R}):
(a) We denote by C_min ≡ λ_min(Q^0_{S^0,S^0}) the minimum eigenvalue of the restriction of Q^0 to the support S^0 and assume C_min > 0.
(b) We define the incoherence parameter α by letting |||Q^0_{(S^0)^C,S^0} (Q^0_{S^0,S^0})^{-1}|||_∞ = 1 − α, and assume α > 0. (Here |||·|||_∞ is the operator sup norm.)
(c) We define λ_min(A^0) = −λ_max((A^0 + (A^0)^*)/2) and assume λ_min(A^0) > 0. Note this is a stronger form of stability assumption.
Our main result is to show that there exists a well defined time complexity, i.e. a minimum time interval T such that observing the system for time T enables us to reconstruct the network with high probability. This result is stated in the following theorem.
Theorem 1.1. Consider the problem of learning the support S^0 of row A^0_r of the matrix A^0 from a sample trajectory {x(t)}_{t∈[0,T]} distributed according to the model (1). If
$$ T > \frac{10^4\, k^2\, \big(k\, \lambda_{\min}(A^0)^{-2} + A_{\min}^{-2}\big)}{\alpha^2\, \lambda_{\min}(A^0)\, C_{\min}^2}\; \log\Big(\frac{4pk}{\delta}\Big), \qquad (5) $$
then there exists λ such that ℓ1-regularized least squares recovers the signed support of A^0_r with probability larger than 1 − δ. This is achieved by taking λ = \sqrt{36 \log(4p/\delta)/(T \alpha^2 \lambda_{\min}(A^0))}.
The time complexity is logarithmic in the number of variables and polynomial in the support size. Further, it is roughly inversely proportional to λ_min(A^0), which is quite satisfying conceptually, since λ_min(A^0)^{-1} controls the relaxation time of the dynamics.
1.2 Overview of other results
So far we focused on continuous-time dynamics. While this is useful in order to obtain elegant statements, much of the paper is in fact devoted to the analysis of the following discrete-time dynamics, with parameter η > 0:
$$ x(t) = x(t-1) + \eta A^0 x(t-1) + w(t), \qquad t \in \mathbb{N}_0 . \qquad (6) $$
Here x(t) ∈ R^p is the vector collecting the dynamical variables, A^0 ∈ R^{p×p} specifies the dynamics as above, and {w(t)}_{t≥0} is a sequence of i.i.d. normal vectors with covariance ηI_{p×p} (i.e. with independent components of variance η). We assume that consecutive samples {x(t)}_{0≤t≤n} are given and will ask under which conditions regularized least squares reconstructs the support of A^0.
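Model (6) is straightforward to simulate directly; the sketch below is our own illustration and, as with the continuous case, starts from x(0) = 0 rather than the stationary state assumed in the text.

```python
import numpy as np

def simulate_discrete(A0, eta, n, rng=np.random.default_rng(0)):
    """Generate x(0), ..., x(n) from model (6):
       x(t) = x(t-1) + eta * A0 x(t-1) + w(t),  w(t) ~ N(0, eta * I).
    Returns an (n+1, p) array with row t equal to x(t)."""
    p = A0.shape[0]
    X = np.zeros((n + 1, p))
    for t in range(1, n + 1):
        w = np.sqrt(eta) * rng.standard_normal(p)
        X[t] = X[t - 1] + eta * (A0 @ X[t - 1]) + w
    return X
```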
The parameter η has the meaning of a time-step size. The continuous-time model (1) is recovered, in a sense made precise below, by letting η → 0. Indeed we will prove reconstruction guarantees that are uniform in this limit as long as the product nη (which corresponds to the time interval T in the previous section) is kept constant. For a formal statement we refer to Theorem 3.1. Theorem 1.1 is indeed proved by carefully controlling this limit. The mathematical challenge in this problem is related to the fundamental fact that the samples {x(t)}_{0≤t≤n} are dependent (and strongly dependent as η → 0).
Discrete time models of the form (6) can arise either because the system under study evolves by discrete steps, or because we are subsampling a continuous time system modeled as in Eq. (1). Notice that in the latter case the matrices A^0 appearing in Eq. (6) and (1) coincide only to the zeroth order in η. Neglecting this technical complication, the uniformity of our reconstruction guarantees as η → 0 has an appealing interpretation already mentioned above. Whenever the sample spacing is not too large, the time complexity (i.e. the product nη) is roughly independent of the spacing itself.
1.3 Related work
A substantial amount of work has been devoted to the analysis of ℓ1-regularized least squares and its variants [6, 7, 8, 9, 10]. The most closely related results are the ones concerning high-dimensional consistency for support recovery [11, 12]. Our proof indeed follows the line of work developed in these papers, with two important challenges. First, the design matrix is in our case produced by a stochastic diffusion, and it does not necessarily satisfy the irrepresentability conditions used by these works. Second, the observations are not corrupted by i.i.d. noise (since successive configurations are correlated) and therefore elementary concentration inequalities are not sufficient.
Learning sparse graphical models via ℓ1 regularization is also a topic with significant literature. In the Gaussian case, the graphical LASSO was proposed to reconstruct the model from i.i.d. samples [13]. In the context of binary pairwise graphical models, Ref. [11] proves high-dimensional consistency of regularized logistic regression for structural learning, under suitable irrepresentability conditions on a modified covariance. Also this paper focuses on i.i.d. samples.
Most of these proofs build on the technique of [12]. A naive adaptation to the present case allows one to prove some performance guarantee for the discrete-time setting. However, the resulting bounds are not uniform as η → 0 for nη = T fixed. In particular, they do not allow one to prove an analogue of our continuous time result, Theorem 1.1. A large part of our effort is devoted to producing more accurate probability estimates that capture the correct scaling for small η.
Similar issues were explored in the study of stochastic differential equations, whereby one is often interested in tracking some slow degrees of freedom while "averaging out" the fast ones [14]. The relevance of this time-scale separation for learning was addressed in [15]. Let us however emphasize that these works focus once more on systems with a fixed (small) number of dimensions p.
Finally, the related topic of learning graphical models for autoregressive processes was studied recently in [16, 17]. The convex relaxation proposed in these papers is different from the one developed here. Further, no model selection guarantee was proved in [16, 17].
2 Illustration of the main results
It might be difficult to get a clear intuition of Theorem 1.1, mainly because of conditions (a) and (b), which introduce parameters C_min and α. The same difficulty arises with analogous results on the high-dimensional consistency of the LASSO [11, 12]. In this section we provide concrete illustrations, both via numerical simulations and by checking the conditions on specific classes of graphs.
2.1 Learning the laplacian of graphs with bounded degree
Given a simple graph G = (V, E) on vertex set V = [p], its laplacian Δ_G is the symmetric p × p matrix which is equal to the adjacency matrix of G outside the diagonal, and with entries Δ_{G,ii} = −deg(i) on the diagonal [18]. (Here deg(i) denotes the degree of vertex i.)
It is well known that Δ_G is negative semidefinite, with one eigenvalue equal to 0, whose multiplicity is equal to the number of connected components of G. The matrix A^0 = −mI + Δ_G fits into the setting of Theorem 1.1 for m > 0. The corresponding model (1) describes the over-damped dynamics of a network of masses connected by springs of unit strength, and connected by a spring of strength m to the origin. We obtain the following result.
Theorem 2.1. Let G be a simple connected graph of maximum vertex degree k and consider the model (1) with A^0 = −mI + Δ_G, where Δ_G is the laplacian of G and m > 0. If
$$ T \ge 2\times 10^5\, k^2 \Big(\frac{k+m}{m}\Big)^5 (k + m^2)\, \log\Big(\frac{4pk}{\delta}\Big), \qquad (7) $$
then there exists λ such that ℓ1-regularized least squares recovers the signed support of A^0_r with probability larger than 1 − δ. This is achieved by taking λ = \sqrt{36 (k+m)^2 \log(4p/\delta)/(T m^3)}.
In other words, for m bounded away from 0 and ∞, regularized least squares regression correctly reconstructs the graph G from a trajectory of time length which is polynomial in the degree and logarithmic in the system size. Notice that once the graph is known, the laplacian Δ_G is uniquely determined. Also, the proof technique used for this example is generalizable to other graphs as well.
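For concreteness, the matrix A^0 = −mI + Δ_G of this section can be assembled from a 0/1 adjacency matrix as follows (an illustrative sketch):

```python
import numpy as np

def spring_network_matrix(adj, m):
    """Build A0 = -m*I + Delta_G from a symmetric 0/1 adjacency matrix.
    Delta_G equals the adjacency matrix off the diagonal and -deg(i) on
    the diagonal, so A0 is negative definite whenever m > 0."""
    deg = adj.sum(axis=1)
    laplacian = adj - np.diag(deg)
    return -m * np.eye(adj.shape[0]) + laplacian
```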
[Figure 1 (plot data omitted): (left) Probability of success vs. length of the observation interval nη, for p = 16, 32, 64, 128, 256, 512. (right) Sample complexity (minimum number of samples for success probability 0.9) for 90% probability of success vs. p.]
2.2 Numerical illustrations
In this section we present numerical validation of the proposed method on synthetic data. The results confirm our observations in Theorems 1.1 and 3.1, below, namely that the time complexity scales logarithmically with the number of nodes in the network p, given a constant maximum degree. Also, the time complexity is roughly independent of the sampling rate. In Fig. 1 and 2 we consider the discrete-time setting, generating data as follows. We draw A^0 as a random sparse matrix in {0, 1}^{p×p} with elements chosen independently at random with P(A^0_{ij} = 1) = k/p, k = 5. The process x_0^n ≡ {x(t)}_{0≤t≤n} is then generated according to Eq. (6). We solve the regularized least squares problem (the cost function is given explicitly in Eq. (8) for the discrete-time case) for different values of n, the number of observations, and record if the correct support is recovered for a random row r using the optimum value of the parameter λ. An estimate of the probability of successful recovery is obtained by repeating this experiment. Note that we are estimating here an average probability of success over randomly generated matrices.
The left plot in Fig. 1 depicts the probability of success vs. nη for η = 0.1 and different values of p. Each curve is obtained using 211 instances, and each instance is generated using a new random matrix A^0. The right plot in Fig. 1 is the corresponding curve of the sample complexity vs. p, where sample complexity is defined as the minimum value of nη with probability of success of 90%. As predicted by Theorem 2.1 the curve shows the logarithmic scaling of the sample complexity with p.
In Fig. 2 we turn to the continuous-time model (1). Trajectories are generated by discretizing this stochastic differential equation with step δ much smaller than the sampling rate η. We draw random matrices A^0 as above and plot the probability of success for p = 16, k = 4 and different values of η, as a function of T. We used 211 instances for each curve. As predicted by Theorem 1.1, for a fixed observation interval T, the probability of success converges to some limiting value as η → 0.
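The random dynamics used in these experiments can be drawn as below. Note that a {0, 1}-valued A^0 does not by itself guarantee stability of the resulting system; this sketch leaves any such check to the caller, which is our simplification.

```python
import numpy as np

def random_sparse_dynamics(p, k, rng=np.random.default_rng(0)):
    """Draw A0 as in the experiments: each entry is 1 independently
    with probability k/p, and 0 otherwise."""
    return (rng.random((p, p)) < k / p).astype(float)
```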
3 Discrete-time model: Statement of the results
Consider a system evolving in discrete time according to the model (6), and let x_0^n ≡ {x(t)}_{0≤t≤n} be the observed portion of the trajectory. The r-th row A^0_r is estimated by solving the following convex optimization problem for A_r ∈ R^p:
$$ \text{minimize} \;\; L(A_r; x_0^n) + \lambda \|A_r\|_1 , \qquad (8) $$
where
$$ L(A_r; x_0^n) \equiv \frac{1}{2\eta^2 n} \sum_{t=0}^{n-1} \big\{ x_r(t+1) - x_r(t) - \eta\, A_r^* x(t) \big\}^2 . \qquad (9) $$
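Because dividing the increments in (9) by η turns the objective into an ordinary least-squares loss plus an ℓ1 penalty, the estimator (8) for a single row can be computed with a standard Lasso solver. The sketch below uses scikit-learn; the identification alpha = λ assumes scikit-learn's 1/(2n) loss normalization, which matches the 1/(2η²n) prefactor of (9) after the rescaling.

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_row(X_traj, r, eta, lam):
    """Solve the l1-regularized problem (8)-(9) for row r.
    X_traj has shape (n+1, p) with X_traj[t] = x(t)."""
    X = X_traj[:-1]                              # design: x(0), ..., x(n-1)
    y = (X_traj[1:, r] - X_traj[:-1, r]) / eta   # normalized increments
    model = Lasso(alpha=lam, fit_intercept=False, max_iter=10000)
    model.fit(X, y)
    return model.coef_                           # estimate of A0_r
```

The nonzero pattern of the returned coefficients is then the estimated (signed) support of row r.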
Apart from an additive constant, the η → 0 limit of this cost function can be shown to coincide with the cost function in the continuous time case, cf. Eq. (3). Indeed the proof of Theorem 1.1 will amount to a more precise version of this statement. Furthermore, L(A_r; x_0^n) is easily seen to be the log-likelihood of A_r within model (6).
[Figure 2 (plot data omitted): (right) Probability of success vs. length of the observation interval nη for different values of η (from 0.04 to 0.22). (left) Probability of success vs. η for a fixed length of the observation interval (nη = 150). The process is generated for a small value of δ and sampled at different rates.]
As before, we let S^0 be the support of row A^0_r, and assume |S^0| ≤ k. Under the model (6), x(t) has a Gaussian stationary state distribution with covariance Q^0 determined by the following modified Lyapunov equation:
$$ A^0 Q^0 + Q^0 (A^0)^* + \eta A^0 Q^0 (A^0)^* + I = 0 . \qquad (10) $$
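Equation (10) is linear in Q^0 and can be solved by vectorization; the dense sketch below costs O(p^6) and is only meant for the small p of the experiments (for symmetric Q the row-major and column-major vectorizations coincide, so numpy's default reshape is safe here).

```python
import numpy as np

def stationary_covariance_discrete(A0, eta):
    """Solve A0 Q + Q A0^T + eta * A0 Q A0^T + I = 0 via
    (I (x) A0 + A0 (x) I + eta * A0 (x) A0) vec(Q) = -vec(I)."""
    p = A0.shape[0]
    I = np.eye(p)
    M = np.kron(I, A0) + np.kron(A0, I) + eta * np.kron(A0, A0)
    q = np.linalg.solve(M, -I.reshape(-1))
    return q.reshape(p, p)
```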
It will be clear from the context whether A^0/Q^0 refers to the dynamics/stationary matrix from the continuous or discrete time system. We assume conditions (a) and (b) introduced in Section 1.1, and adopt the notations already introduced there. We use as a shorthand notation σ_max ≡ σ_max(I + ηA^0), where σ_max(·) is the maximum singular value. Also define D ≡ (1 − σ_max)/η. We will assume D > 0. As in the previous section, we assume the model (6) is initiated in the stationary state.
Theorem 3.1. Consider the problem of learning the support S^0 of row A^0_r from the discrete-time trajectory {x(t)}_{0≤t≤n}. If
$$ n\eta > \frac{10^4\, k^2\, (k D^{-2} + A_{\min}^{-2})}{\alpha^2\, D\, C_{\min}^2}\; \log\Big(\frac{4pk}{\delta}\Big), \qquad (11) $$
then there exists λ such that ℓ1-regularized least squares recovers the signed support of A^0_r with probability larger than 1 − δ. This is achieved by taking λ = \sqrt{(36 \log(4p/\delta))/(D \alpha^2 n\eta)}.
In other words, the discrete-time sample complexity, n, is logarithmic in the model dimension, polynomial in the maximum network degree and inversely proportional to the time spacing between samples. The last point is particularly important. It enables us to derive the bound on the continuous-time sample complexity as the limit η → 0 of the discrete-time sample complexity. It also confirms our intuition mentioned in the Introduction: although one can produce an arbitrarily large number of samples by sampling the continuous process with finer resolutions, there is a limited amount of information that can be harnessed from a given time interval [0, T].
4 Proofs
In the following we denote by X ∈ R^{p×n} the matrix whose (t+1)-th column corresponds to the configuration x(t), i.e. X = [x(0), x(1), ..., x(n−1)]. Further, ΔX ∈ R^{p×n} is the matrix containing configuration changes, namely ΔX = [x(1) − x(0), ..., x(n) − x(n−1)]. Finally, we write W = [w(1), ..., w(n)] for the matrix containing the Gaussian noise realization; the r-th row of W is denoted by W_r. Equivalently,
$$ W = \Delta X - \eta A^0 X . $$
In order to lighten the notation, we will omit the reference to x_0^n in the likelihood function (9) and simply write L(A_r). We define its normalized gradient and Hessian by
$$ \widehat{G} = -\nabla L(A^0_r) = \frac{1}{n\eta} X W_r^* , \qquad \widehat{Q} = \nabla^2 L(A^0_r) = \frac{1}{n} X X^* . \qquad (12) $$
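For reference, Ĝ and Q̂ of Eq. (12) can be formed from a sampled trajectory as follows. This is a sketch: W is recovered from the trajectory using the true A^0, which is of course only possible in simulation.

```python
import numpy as np

def gradient_and_hessian(X_traj, A0, r, eta):
    """Compute G_hat and Q_hat of Eq. (12) from an (n+1, p) trajectory.
    X holds x(0..n-1) as columns; W = DeltaX - eta * A0 X."""
    X = X_traj[:-1].T                      # p x n
    dX = (X_traj[1:] - X_traj[:-1]).T      # p x n
    W = dX - eta * (A0 @ X)
    n = X.shape[1]
    G_hat = (X @ W[r]) / (n * eta)         # normalized gradient at A0_r
    Q_hat = (X @ X.T) / n                  # normalized Hessian
    return G_hat, Q_hat
```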
4.1 Discrete time
In this Section we outline our proof of our main result for discrete-time dynamics, i.e., Theorem 3.1. We start by stating a set of sufficient conditions for regularized least squares to work. Then we present a series of concentration lemmas to be used to prove the validity of these conditions, and finally we sketch the outline of the proof.
As mentioned, the proof strategy, and in particular the following proposition which provides a compact set of sufficient conditions for the support to be recovered correctly, is analogous to the one in [12]. A proof of this proposition can be found in the supplementary material.
Proposition 4.1. Let α, C_min > 0 be defined by
$$ \lambda_{\min}(Q^0_{S^0,S^0}) \ge C_{\min} , \qquad |||Q^0_{(S^0)^C,S^0}\, (Q^0_{S^0,S^0})^{-1}|||_\infty \le 1 - \alpha . \qquad (13) $$
If the following conditions hold, then the regularized least squares solution (8) correctly recovers the signed support sign(A^0_r):
$$ \|\widehat{G}\|_\infty \le \lambda\alpha/3 , \qquad \|\widehat{G}_{S^0}\|_\infty \le \frac{A_{\min} C_{\min}}{4k} - \lambda , \qquad (14) $$
$$ |||\widehat{Q}_{(S^0)^C,S^0} - Q^0_{(S^0)^C,S^0}|||_\infty \le \frac{\alpha\, C_{\min}}{12\sqrt{k}} , \qquad |||\widehat{Q}_{S^0,S^0} - Q^0_{S^0,S^0}|||_\infty \le \frac{\alpha\, C_{\min}}{12\sqrt{k}} . \qquad (15) $$
Further, the same statement holds for the continuous model (3), provided Ĝ and Q̂ are the gradient and the Hessian of the likelihood (3).
The proof of Theorem 3.1 consists in checking that, under the hypothesis (11) on the number of consecutive configurations, conditions (14) to (15) will hold with high probability. Checking these conditions can be regarded in turn as concentration-of-measure statements. Indeed, if expectation is taken with respect to a stationary trajectory, we have E{Ĝ} = 0, E{Q̂} = Q^0.
4.1.1 Technical lemmas
In this section we will state the necessary concentration lemmas for proving Theorem 3.1. These are non-trivial because Ĝ and Q̂ are quadratic functions of the dependent random samples {x(t)}_{0≤t≤n}. The proofs of Proposition 4.2, of Proposition 4.3, and of Corollary 4.4 can be found in the supplementary material provided.
Our first Proposition implies concentration of Ĝ around 0.
Proposition 4.2. Let S ⊆ [p] be any set of vertices and ε < 1/2. If σ_max ≡ σ_max(I + ηA^0) < 1, then
$$ P\big( \|\widehat{G}_S\|_\infty > \epsilon \big) \le 2|S|\, e^{-n(1-\sigma_{\max})\epsilon^2/4} . \qquad (16) $$
We furthermore need to bound the matrix norms as per (15) in Proposition 4.1. First we relate bounds on |||Q̂_{JS} − Q^0_{JS}|||_∞ with bounds on |Q̂_{ij} − Q^0_{ij}|, (i ∈ J, j ∈ S), where J and S are any subsets of {1, ..., p}. We have
$$ P\big( |||\widehat{Q}_{JS} - Q^0_{JS}|||_\infty > \epsilon \big) \le |J||S| \max_{i\in J,\, j\in S} P\big( |\widehat{Q}_{ij} - Q^0_{ij}| > \epsilon/|S| \big) . \qquad (17) $$
Then, we bound |Q̂_{ij} − Q^0_{ij}| using the following proposition.
Proposition 4.3. Let i, j ∈ {1, ..., p}, σ_max ≡ σ_max(I + ηA^0) < 1, T = ηn > 3/D and 0 < ε < 2/D, where D = (1 − σ_max)/η. Then
$$ P\big( |\widehat{Q}_{ij} - Q^0_{ij}| > \epsilon \big) \le 2\, e^{-\frac{n}{32\eta^2}(1-\sigma_{\max})^3 \epsilon^2} . \qquad (18) $$
Finally, the next corollary follows from Proposition 4.3 and Eq. (17).
Corollary 4.4. Let J, S (with |S| ≤ k) be any two subsets of {1, ..., p} and σ_max ≡ σ_max(I + ηA^0) < 1, ε < 2k/D and nη > 3/D (where D = (1 − σ_max)/η). Then
$$ P\big( |||\widehat{Q}_{JS} - Q^0_{JS}|||_\infty > \epsilon \big) \le 2|J|k\, e^{-\frac{n}{32 k^2 \eta^2}(1-\sigma_{\max})^3 \epsilon^2} . \qquad (19) $$
4.1.2 Outline of the proof of Theorem 3.1
With these concentration bounds we can now easily prove Theorem 3.1. All we need to do is to compute the probability that the conditions given by Proposition 4.1 hold. From the statement of the theorem we have that the first two conditions (α, C_min > 0) of Proposition 4.1 hold. In order to make the first condition on Ĝ imply the second condition on Ĝ, we assume that λα/3 ≤ (A_min C_min)/(4k) − λ, which is guaranteed to hold if
$$ \lambda \le A_{\min} C_{\min}/(8k) . \qquad (20) $$
We also combine the two last conditions on Q̂, thus obtaining the following:
$$ |||\widehat{Q}_{[p],S^0} - Q^0_{[p],S^0}|||_\infty \le \frac{\alpha\, C_{\min}}{12\sqrt{k}} , \qquad (21) $$
since [p] = S^0 ∪ (S^0)^C. We then impose that both the probability of the condition on Q̂ failing and the probability of the condition on Ĝ failing are upper bounded by δ/2, using Proposition 4.2 and Corollary 4.4. It is shown in the supplementary material that this is satisfied if condition (11) holds.
4.2 Outline of the proof of Theorem 1.1
To prove Theorem 1.1 we recall that Proposition 4.1 holds provided the appropriate continuous time expressions are used for Ĝ and Q̂, namely
$$ \widehat{G} = -\nabla L(A^0_r) = \frac{1}{T}\int_0^T x(t)\, db_r(t) , \qquad \widehat{Q} = \nabla^2 L(A^0_r) = \frac{1}{T}\int_0^T x(t)x(t)^*\, dt . \qquad (22) $$
These are of course random variables. In order to distinguish these from the discrete time version, we will adopt the notation Ĝ^n, Q̂^n for the latter. We claim that these random variables can be coupled (i.e. defined on the same probability space) in such a way that Ĝ^n → Ĝ and Q̂^n → Q̂ almost surely as n → ∞ for fixed T. Under assumption (5), it is easy to show that (11) holds for all n > n_0, with n_0 a sufficiently large constant (for a proof see the provided supplementary material). Therefore, by the proof of Theorem 3.1, the conditions in Proposition 4.1 hold for gradient Ĝ^n and Hessian Q̂^n for any n ≥ n_0, with probability larger than 1 − δ. But by the claimed convergence Ĝ^n → Ĝ and Q̂^n → Q̂, they hold also for Ĝ and Q̂ with probability at least 1 − δ, which proves the theorem.
We are left with the task of showing that the discrete and continuous time processes can be coupled in such a way that Ĝ^n → Ĝ and Q̂^n → Q̂. With slight abuse of notation, the state of the discrete time system (6) will be denoted by x(i) where i ∈ N, and the state of the continuous time system (1) by x(t) where t ∈ R. We denote by Q^0 the solution of (4) and by Q^0(η) the solution of (10). It is easy to check that Q^0(η) → Q^0 as η → 0, by the uniqueness of the stationary state distribution.
The initial state of the continuous time system x(t = 0) is a N(0, Q^0) random variable independent of b(t), and the initial state of the discrete time system is defined to be x(i = 0) = (Q^0(η))^{1/2} (Q^0)^{-1/2} x(t = 0). At subsequent times, x(i) and x(t) are generated by the respective dynamical systems using the same matrix A^0, with common randomness provided by the standard Brownian motion {b(t)}_{0≤t≤T} in R^p. In order to couple x(t) and x(i), we construct w(i), the noise driving the discrete time system, by letting w(i) ≡ b(Ti/n) − b(T(i−1)/n).
The almost sure convergence Ĝ^n → Ĝ and Q̂^n → Q̂ then follows from the standard convergence of random walk to Brownian motion.
Acknowledgments
This work was partially supported by a Terman fellowship, the NSF CAREER award CCF-0743978
and the NSF grant DMS-0806211 and by a Portuguese Doctoral FCT fellowship.
References
[1] D. T. Gillespie. Stochastic simulation of chemical kinetics. Annual Review of Physical Chemistry, 58:35–55, 2007.
[2] D. Higham. Modeling and Simulating Chemical Reactions. SIAM Review, 50:347–368, 2008.
[3] N. D. Lawrence et al., editors. Learning and Inference in Computational Systems Biology. MIT Press, 2010.
[4] T. Toni, D. Welch, N. Strelkova, A. Ipsen, and M. P. H. Stumpf. Modeling and Simulating Chemical Reactions. J. R. Soc. Interface, 6:187–202, 2009.
[5] K. Zhou, J. C. Doyle, and K. Glover. Robust and Optimal Control. Prentice Hall, 1996.
[6] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), 58(1):267–288, 1996.
[7] D. L. Donoho. For most large underdetermined systems of equations, the minimal l1-norm near-solution approximates the sparsest near-solution. Communications on Pure and Applied Mathematics, 59(7):907–934, 2006.
[8] D. L. Donoho. For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Communications on Pure and Applied Mathematics, 59(6):797–829, 2006.
[9] T. Zhang. Some sharp performance bounds for least squares regression with L1 regularization. Annals of Statistics, 37:2109–2144, 2009.
[10] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming (Lasso). IEEE Trans. Information Theory, 55:2183–2202, 2009.
[11] M. J. Wainwright, P. Ravikumar, and J. D. Lafferty. High-dimensional graphical model selection using l1-regularized logistic regression. Advances in Neural Information Processing Systems, 19:1465, 2007.
[12] P. Zhao and B. Yu. On model selection consistency of Lasso. The Journal of Machine Learning Research, 7:2541–2563, 2006.
[13] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432, 2008.
[14] K. Ball, T. G. Kurtz, L. Popovic, and G. Rempala. Modeling and Simulating Chemical Reactions. Ann. Appl. Prob., 16:1925–1961, 2006.
[15] G. A. Pavliotis and A. M. Stuart. Parameter estimation for multiscale diffusions. J. Stat. Phys., 127:741–781, 2007.
[16] J. Songsiri, J. Dahl, and L. Vandenberghe. Graphical models of autoregressive processes. pages 89–116, 2010.
[17] J. Songsiri and L. Vandenberghe. Topology selection in graphical models of autoregressive processes. Journal of Machine Learning Research, 2010. Submitted.
[18] F. R. K. Chung. Spectral Graph Theory. CBMS Regional Conference Series in Mathematics, 1997.
[19] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using l1-regularized logistic regression. Annals of Statistics, 2008.
| 4055 |@word mild:1 kgk:1 version:2 polynomial:3 norm:3 stronger:1 confirms:1 simulation:2 bn:6 covariance:6 initial:2 configuration:5 contains:1 series:3 selecting:1 reaction:5 existing:1 current:1 recovered:3 dx:1 portuguese:1 numerical:3 additive:1 subsequent:1 enables:2 plot:3 n0:4 v:6 stationary:8 record:1 provides:1 node:2 complication:1 successive:1 zhang:1 mathematical:1 glover:1 differential:4 become:1 prove:8 shorthand:1 consists:1 combine:1 introduce:1 pairwise:1 indeed:7 roughly:3 andrea:1 considering:1 provided:5 estimating:1 underlying:1 notation:6 bounded:3 mass:1 xx:1 biostatistics:1 kg:2 developed:2 generalizable:1 a0ij:4 guarantee:6 quantitative:1 collecting:1 tackle:2 control:2 unit:1 grant:1 omit:1 producing:1 kurtz:1 before:2 engineering:3 limit:4 initiated:1 fluctuation:1 incoherence:1 approximately:1 abuse:1 might:2 signed:5 zeroth:1 doctoral:1 studied:2 substantiates:1 challenging:1 appl:1 limited:2 bi:1 directed:3 unique:1 acknowledgment:1 xr:2 procedure:1 area:1 evolving:2 word:3 refers:1 get:1 selection:8 operator:1 prentice:1 context:2 jbento:1 restriction:2 independently:2 convex:3 focused:2 resolution:1 ke:1 welch:1 recovery:4 pure:2 m2:1 dbi:1 regarded:1 vandenberghe:2 stability:2 proving:1 notion:1 analogous:3 limiting:1 annals:2 controlling:1 dxr:2 programming:1 hypothesis:1 origin:1 logarithmically:1 element:1 satisfying:1 particularly:2 ising:1 ibrahimi:2 observed:2 role:1 electrical:3 capture:1 connected:4 mentioned:3 intuition:3 substantial:1 complexity:14 dynamic:8 raise:1 solving:2 depend:1 uniformity:1 easily:2 represented:1 various:1 distinct:1 fast:1 choosing:1 outside:1 whose:4 quite:1 stanford:9 larger:4 solve:1 supplementary:4 reconstruct:3 statistic:3 itself:1 noisy:1 bento:1 ip:1 sequence:1 eigenvalue:3 reconstruction:2 interaction:1 product:2 adaptation:1 realization:1 amin:4 getting:1 parent:1 convergence:3 optimum:1 produce:2 generating:1 converges:2 derive:1 stating:2 stat:1 ij:6 keywords:1 eq:8 soc:1 predicted:2 implies:1 convention:1 lyapunov:2 closely:1 correct:3 stochastic:8 cmin:11 material:4 adjacency:1 proposition:16 biological:1 elementary:1 underdetermined:2 kinetics:1 hold:11 proximity:1 sufficiently:3 around:1 hall:1 normal:1 equilibrium:2 lawrence:1 claim:1 driving:1 consecutive:2 smallest:1 adopt:2 failing:2 uniqueness:1 estimation:2 weighted:1 mit:1 gaussian:6 modified:2 zhou:1 shrinkage:1 corollary:4 encode:1 focus:2 methodological:1 likelihood:5 mainly:1 check:1 sense:1 inference:2 dependent:4 a0:39 relation:1 interested:2 issue:1 among:1 denoted:2 equal:3 once:2 construct:1 sampling:4 biology:1 stuart:1 yu:1 lighten:1 terman:1 few:1 randomly:1 doyle:1 continuoustime:1 friedman:1 freedom:3 stationarity:1 semidefinite:1 devoted:3 damped:1 accurate:1 edge:1 integral:1 neglecting:1 necessary:1 respective:1 indexed:1 iv:1 harmful:1 walk:1 re:1 minimal:2 instance:4 column:1 modeling:3 ar:9 cost:3 vertex:4 entry:4 expects:2 subset:3 uniform:3 hundred:1 successful:1 too:1 corrupted:2 learnt:1 synthetic:1 density:1 fundamental:1 siam:1 jos:1 diverge:1 concrete:1 satisfied:1 reconstructs:3 containing:2 zhao:1 chung:1 chemistry:1 explicitly:1 analyze:1 sup:1 portion:2 observing:1 start:1 recover:1 minimize:2 square:18 variance:1 spaced:1 conceptually:1 produced:1 trajectory:8 finer:1 randomness:1 submitted:1 phys:1 whenever:1 dm:1 dbr:1 associated:3 dxi:1 recovers:3 proof:14 couple:1 sampled:1 proved:2 ask:2 recall:1 carefully:1 cbms:1 dt:7 stumpf:1 strongly:1 generality:1 furthermore:2 hand:3 sketch:1 nonlinear:1 multiscale:1 kn2:1 
l1norm:1 logistic:3 validity:1 normalized:1 ccf:1 regularization:3 chemical:8 q0:27 symmetric:1 white:1 uniquely:1 whereby:1 stress:1 outline:4 complete:1 motion:4 interface:1 l1:3 meaning:1 recently:1 common:1 physical:1 overview:1 harnessed:2 exponentially:1 interpretation:1 slight:1 rth:2 approximates:1 refer:2 significant:1 consistency:4 mathematics:3 stable:2 j:4 brownian:4 irrepresentability:2 apart:1 claimed:1 kar:2 inequality:1 arbitrarily:1 binary:1 success:13 discretizing:1 seen:1 minimum:4 impose:1 surely:1 ii:1 mix:1 technical:2 long:2 concerning:1 ravikumar:2 award:1 laplacian:4 variant:1 regression:6 expectation:1 represent:1 achieved:3 fellowship:2 spacing:4 interval:9 addressed:1 singular:1 regional:1 sure:1 elegant:1 db:1 gii:1 regularly:1 lafferty:2 structural:1 near:2 iii:1 easy:2 xj:1 fit:1 hastie:1 lasso:6 topology:1 whether:2 expression:1 effort:1 hessian:3 useful:2 detailed:1 involve:1 clear:2 pavliotis:1 amount:3 repeating:1 specifies:1 nsf:2 notice:2 sign:3 estimated:2 correctly:3 wr:1 per:1 tibshirani:2 write:3 discrete:19 shall:1 threshold:1 dahl:1 diffusion:3 kept:1 graph:12 relaxation:2 sum:2 prob:2 inverse:1 almost:2 separation:1 draw:2 scaling:4 submatrix:1 bound:8 guaranteed:2 distinguish:1 quadratic:2 annual:1 strength:2 precisely:1 min:16 spring:2 fct:1 department:3 according:5 ball:1 kd:1 describes:1 smaller:1 appealing:1 a0r:15 evolves:1 intuitively:1 invariant:1 multiplicity:1 taken:1 computationally:1 equation:8 describing:1 count:1 discus:1 turn:2 letting:4 away:1 appropriate:1 spectral:1 simulating:3 appearing:1 rp:6 denotes:2 subsampling:1 cf:1 graphical:10 xw:1 k1:2 prof:2 build:1 classical:1 society:1 question:1 already:2 strategy:1 concentration:6 diagonal:2 gradient:3 topic:2 trivial:1 length:5 modeled:2 illustration:3 equivalently:1 difficult:1 ipsen:1 statement:7 relate:1 trace:1 stated:2 negative:1 design:1 upper:1 observation:8 finite:1 communication:2 precise:2 rn:2 arbitrary:1 sharp:2 introduced:2 namely:4 learned:1 trans:1 below:3 pattern:2 dynamical:2 regime:1 sparsity:4 challenge:2 max:18 royal:1 wainwright:3 gillespie:1 suitable:1 difficulty:1 regularized:16 residual:1 inversely:2 imply:1 naive:1 coupled:2 review:2 literature:2 checking:3 loss:1 proportional:2 validation:1 degree:9 sufficient:4 editor:1 row:10 course:2 supported:1 last:2 transpose:1 aij:1 side:1 formal:1 allow:1 neighbor:1 taking:4 characterizing:1 absolute:1 sparse:4 distributed:2 curve:4 dimension:2 stand:1 autoregressive:3 collection:1 made:1 coincide:2 far:2 emphasize:1 compact:1 confirm:2 ml:2 deg:2 assumed:1 popovic:1 xi:2 spectrum:1 continuous:14 robust:1 ca:3 career:1 obtaining:1 interact:1 complex:1 necessarily:1 pk:3 main:4 montanari:2 linearly:1 noise:4 subsample:2 arise:1 toni:1 ref:1 fig:4 depicts:1 slow:1 shrinking:1 sparsest:2 theorem:23 specific:1 showing:1 explored:1 exists:4 effectively:1 higham:1 morteza:1 logarithmic:4 simply:1 infinitely:1 conveniently:1 contained:1 tracking:1 partially:1 mij:1 corresponds:2 satisfies:1 donoho:2 ann:1 change:2 determined:2 averaging:1 lemma:3 specie:3 m3:1 xn0:6 select:1 formally:1 support:16 latter:3 arises:1 relevance:1 correlated:1 |
3,375 | 4,056 | Phoneme Recognition with Large Hierarchical Reservoirs
Fabian Triefenbach
Azarakhsh Jalalvand
Benjamin Schrauwen
Jean-Pierre Martens
Department of Electronics and Information Systems
Ghent University
Sint-Pietersnieuwstraat 41, 9000 Gent, Belgium
[email protected]
Abstract
Automatic speech recognition has gradually improved over the years, but the reliable recognition of unconstrained speech is still not within reach. In order to
achieve a breakthrough, many research groups are now investigating new methodologies that have potential to outperform the Hidden Markov Model technology
that is at the core of all present commercial systems. In this paper, it is shown
that the recently introduced concept of Reservoir Computing might form the basis
of such a methodology. In a limited amount of time, a reservoir system that can
recognize the elementary sounds of continuous speech has been built. The system already achieves a state-of-the-art performance, and there is evidence that the
margin for further improvements is still significant.
1 Introduction
Thanks to a sustained world-wide effort, modern automatic speech recognition technology has now
reached a level of performance that makes it suitable as an enabling technology for novel applications such as automated dictation, speech based car navigation, multimedia information retrieval,
etc. Basically all state-of-the-art systems utilize Hidden Markov Models (HMMs) to compose an
acoustic model that captures the relations between the acoustic signal and the phonemes, defined
as the basic contrastive units of the sound system of a spoken language. The HMM theory has not
changed that much over the years, and the performance growth is slow and for a large part owed to
the availability of more training data and computing resources.
Many researchers advocate the need for alternative learning methodologies that can supplement or
even totally replace the present HMM methodology. In the nineties for instance, very promising
results were obtained with Recurrent Neural Networks (RNNs) [1] and hybrid systems both comprising neural networks and HMMs [2], but these systems were more or less abandoned since then.
More recently, there was a renewed interest in applying new results originating from the Machine
Learning community. Two techniques, namely Deep Belief Networks (DBNs) [3, 4] and Long ShortTerm Memory (LSTM) recurrent neural networks [5], have already been used with great success for
phoneme recognition. In this paper we present the first (to our knowledge) phoneme recognizer that
employs Reservoir Computing (RC) [6, 7, 8] as its core technology.
The basic idea of Reservoir Computing (RC) is that complex classifications can be performed by
means of a set of simple linear units that ?read-out? the outputs of a pool of fixed (not trained)
nonlinear interacting neurons. The RC concept has already been successfully applied to time series generation [6], robot navigation [9], signal classification [8], audio prediction [10] and isolated
spoken digit recognition [11, 12, 13]. In this contribution we envisage a RC system that can recognize the English phonemes in continuous speech. In a short period (a couple of months) we have
been able to design a hierarchical system of large reservoirs that can already compete with many
state-of-the-art HMMs that have only emerged after several decades of research.
The rest of this paper is organized as follows: in Section 2 we describe the speech corpus we are
going to work on, in Section 3 we recall the basic principles of Reservoir Computing, in Section 4 we
discuss the architecture of the reservoir system which we propose for performing Large Vocabulary
Continuous Speech Recognition (LVCSR), and in Section 5 we demonstrate the potential of this
architecture for phoneme recognition.
2 The speech corpus
Since the main aim of this paper is to demonstrate that reservoir computing can yield a good acoustic
model, we will conduct experiments on TIMIT, an internationally renowned corpus [14] that was
specifically designed to support the development and evaluation of such a model.
The TIMIT corpus contains 5040 English sentences spoken by 630 different speakers representing
eight dialect groups. About 70% of the speakers are male, the others are female. The corpus documentation defines a training set of 462 speakers and a test set of 168 different speakers: a main test
set of 144 speakers and a core test set of 24 speakers. Each speaker has uttered 10 sentences: two
SA sentences which are the same for all speakers, 5 SX-sentences from a list of 450 sentences (each
one thus appearing 7 times in the corpus) and 3 SI-sentences from a set of 1890 sentences (each one
thus appearing only once in the corpus). To avoid a biased result, the SA sentences will be excluded
from training and testing.
For each utterance there is a manual acoustic-phonetic segmentation. It indicates where the phones,
defined as the atomic units of the acoustic realizations of the phonemes, begin and end. There are 61
distinct phones, which, for evaluation purposes, are usually reduced to an inventory of 39 symbols,
as proposed by [15]. Two types of error rates can be reported for the TIMIT corpus. One is the
Classification Error Rate (CER), defined as the percentage of the time the top hypothesis of the tested
acoustic model is correct. The second one is the Recognition Error Rate (RER), defined as the ratio
between the number of edit operations needed to convert the recognized symbol sequence into the
reference sequence, and the number of symbols in that reference sequence. The edit operations are
symbol deletions, insertions and substitutions. Both classification and recognition can be performed
at the phone and the phoneme level.
3 The basics of Reservoir Computing
In this paper, a Reservoir Computing network (see Figure 1) is an Echo State Network [6, 7, 8]
consisting of a fixed dynamical system (the reservoir) composed of nonlinear recurrently connected
neurons which are left untrained, and a set of linear output nodes (read-out nodes). Each output node
is trained to recognize one class (one-vs-all classification). The number of connections between and
within layers can be varied from sparsely connected to fully connected. The reservoir neurons have
an activation function f(x) = logistic(x).
trained output connections
random recurrent connections
random input connections
input nodes
reservoir
output nodes
Figure 1: A reservoir computing network consists of a reservoir of fixed recurrently connected
nonlinear neurons which are stimulated by the inputs, and an output layer of trainable linear units.
The RC approach avoids the back-propagation through time learning which can be very time consuming and which suffers from the problem of vanishing gradients [6]. Instead, it employs a simple
and efficient linear regression learning of the output weights. The latter tries to minimize the mean
squared error between the computed and the desired outputs at all time steps.
Based on its recurrent connections, the reservoir can capture the long-term dynamics of the human
articulatory system to perform speech sound classification. This property should give it an advantage
over HMMs that rely on the assumption that subsequent acoustical input vectors are conditionally
independent.
Besides the ?memory? introduced through the recurrent connections, the neurons themselves can
also integrate information over time. Typical neurons that can accomplish this are Leaky Integrator
Neurons (LINs) [16]. With such neurons the reservoir state at time k+1 can be computed as follows:
x[k + 1] = (1 − λ) x[k] + λ f(W_res x[k] + W_in u[k])    (1)
with u[k] and x[k] representing the inputs and the reservoir state at time k. The W matrices contain
the input and recurrent connection weights. It is common to include a constant bias in u[k]. As
long as the leak rate λ < 1, the integration function provides an additional fading memory of the
reservoir state.
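For concreteness, the update of Equation (1) can be sketched in a few lines of NumPy; the matrix shapes, the zero initial state and the omission of the constant bias term are assumptions made for this illustration, not details taken from the toolkit used in this paper.

```python
import numpy as np

def run_reservoir(U, W_in, W_res, leak_rate=0.4):
    """Leaky-integrator reservoir of Eq. (1) with f(x) = logistic(x).

    U: (n_frames, n_inputs) input frames, one per row.
    Returns an (n_frames, n_reservoir) matrix of reservoir states.
    """
    n_frames = U.shape[0]
    n_res = W_res.shape[0]
    X = np.zeros((n_frames, n_res))
    x = np.zeros(n_res)                     # assumed zero initial state
    for k in range(n_frames):
        pre = W_res @ x + W_in @ U[k]       # recurrent drive + input drive
        x = (1 - leak_rate) * x + leak_rate * (1.0 / (1.0 + np.exp(-pre)))
        X[k] = x
    return X
```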
To perform a classification task, the RC network computes the outputs at time k by means of the
following linear equation:
y[k] = Wout x[k]    (2)
The reservoir state in this equation is augmented with a constant bias. If the reservoir states at the
different time instants form the rows of a large state matrix X and if the corresponding desired outputs form the rows of a matrix D, the optimal Wout emerges from the following equations:
Wout = arg min_W (1/N) ||X W − D||^2 + ε ||W||^2    (3)

Wout = (X^T X + ε I)^(−1) (X^T D)    (4)
with N being the number of frames. The regularization constant ε aims to limit the norm of the output weights (this is the so-called Tikhonov or ridge regression). For large training sets, as common in speech processing, the matrices X^T X and X^T D are updated on-line in order to suppress the need for huge storage capacity. In this paper, the regularization parameter was fixed to ε = 10^(−8). This regularization is equivalent to adding Gaussian noise with a variance of 10^(−8) to the reservoir state variables.
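A minimal sketch of this training procedure is given below, assuming a chunked interface (e.g., one utterance per chunk) so that only X^T X and X^T D ever need to be stored; this is an illustration of Equations (3)-(4), not the toolkit's actual API, and the bias augmentation of the states is omitted.

```python
import numpy as np

def train_readout(state_chunks, target_chunks, eps=1e-8):
    """Ridge-regression readout of Eqs. (3)-(4) with on-line accumulation.

    Each chunk pairs a state matrix X (one reservoir state per row) with
    the desired outputs D (one target vector per row).
    """
    XtX = XtD = None
    for X, D in zip(state_chunks, target_chunks):
        if XtX is None:
            XtX = np.zeros((X.shape[1], X.shape[1]))
            XtD = np.zeros((X.shape[1], D.shape[1]))
        XtX += X.T @ X                      # accumulate X^T X
        XtD += X.T @ D                      # accumulate X^T D
    M = XtX + eps * np.eye(XtX.shape[0])    # Tikhonov regularization
    return np.linalg.solve(M, XtD)          # W_out = (X^T X + eps I)^-1 X^T D
```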
4 System architecture
The main objective of our research is to build an RC-based LVCSR system that can retrieve the
words from a spoken utterance. The general architecture we propose for such a system is depicted
in Figure 2. The preprocessing stage converts the speech waveform into a sequence of acoustic
Figure 2: Hierarchical reservoir architecture with multiple layers.
feature vectors representing the acoustic properties in subsequent speech frames. This sequence is
supplied to a hierarchical system of RC networks. Each reservoir is composed of LINs which are
fully connected to the inputs and to the 41 outputs. The latter represent the distinct phonemes of
the language. The outputs of the last RC network are supplied to a decoder which retrieves the
most likely linguistic interpretation of the speech input, given the information computed by the RC
networks and given some prior knowledge of the spoken language. In this paper, the decoder is
a phoneme recognizer just accommodating a bigram phoneme language model. In a later stage it
will be extended with other components: (1) a phonetic dictionary comprising all the words of the
system?s vocabulary and their common pronunciations, expressed as phoneme sequences, and (2) a
n-gram language model describing the probabilities of each word, given the preceding (n-1) words.
We conjecture that the integration time of the LINs in the first reservoir should ideally be long
enough to capture the co-articulations between successive phonemes emerging from the dynamical
constraints of the articulatory system. On the other hand, it has to remain short enough to avoid that
information pointing to the presence of a short phoneme is too much blurred by the left phonetic
context. Furthermore, we argue that additional reservoirs can correct some of the errors made by
the first reservoir. Indeed, such an error correcting reservoir can guess the correct labels from its
inputs, and take the past phonetic context into account in an implicit way to refine the decision. This
is in contrast to an HMM system which adopts an explicit approach, involving separate models for
several thousands of context-dependent phonemes.
In the next subsections we provide more details about the different parts of our recognizer, and we
also discuss the tuning of some of its control parameters.
4.1 Preprocessing
The preprocessor utilizes the standard Mel Frequency Cepstral Coefficient (MFCC) analysis [17] encountered in most state-of-the-art LVCSR systems. The analysis is performed on 25 ms Hamming-windowed speech frames, and subsequent speech frames are shifted over 10 ms with respect to each
other. Every 10 ms a 39-dimensional feature vector is generated. It consists of 13 static parameters,
namely the log-energy and the first 12 MFCC coefficients, their first order derivatives (the velocity
or ? parameters), and their second order derivatives (the acceleration or ?? parameters).
In HMM systems, the training is insensitive to a linear rescaling of the individual features. In
RC systems however, the input and recurrent weights are not trained and drawn from predefined
statistical distributions. Consequently, by rescaling the features, the impact of the inputs on the
activations of the reservoir neurons is changed as well, which makes it compulsory to employ an
appropriate input scaling [8].
To establish a proper input scaling the acoustic feature vector is split into six sub-vectors according
to the dimensions (energy, cepstrum) and (static, velocity, acceleration). Then, each feature a_i (i = 1, .., 39) is normalized to z_i = γ_s (a_i − ā_i), with ā_i being the mean of a_i and s (s = 1, .., 6) referring to the sub-vector (group) the feature belongs to. The aim of γ_s is to ensure that the norm of each sub-vector is one. If the z_i were supplied to the reservoir, each sub-vector would on average have the same impact on the reservoir neuron activations. Therefore, in a second stage, the z_i are rescaled to u_i = β_s z_i, with β_s representing the relative importance of sub-vector s in the reservoir neuron activations. The normalization constants γ_s follow directly from a statistical analysis of the
Table 1: Different types of acoustic information in the input features and their optimal scale factors.

                       Energy features                        Cepstral features
group name        log(E)   Δlog(E)   ΔΔlog(E)       c1...12   Δc1...12   ΔΔc1...12
norm factor γ      0.27     1.77       4.97           0.10      0.61       1.75
scale factor β     1.75     1.25       1.00           1.25      0.50       0.25
acoustic feature vectors. The factors β_s are free parameters that were selected such that the phoneme
classification error of a single reservoir system of 1000 neurons is minimized on the validation
set. The obtained factors (see Table 1) confirm that the static features are more important than the
velocity and the acceleration features.
The proposed rescaling has the following advantages: it preserves the relative importance of the
individual features within a sub-vector, it is fully defined by six scaling parameters γ_s β_s, it takes
only a minimal computational effort, and it is actually supposed to work well for any speech corpus.
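The following sketch illustrates the two-stage scaling; the index layout of the 39-dimensional feature vector and the estimation of γ_s from the average sub-vector norm are our own assumptions for the example, and the beta dictionary would hold the Table 1 factors.

```python
import numpy as np

# Hypothetical index layout of the 39-dim feature vector; the actual
# ordering depends on the MFCC front-end.
GROUPS = {
    "logE": [0],      "ceps":    list(range(1, 13)),
    "d_logE": [13],   "d_ceps":  list(range(14, 26)),
    "dd_logE": [26],  "dd_ceps": list(range(27, 39)),
}

def scale_features(A, beta):
    """Two-stage scaling: per-group normalization (gamma_s) followed by
    rescaling with the tuned importance factors (beta_s)."""
    U = np.empty_like(A, dtype=float)
    for name, idx in GROUPS.items():
        Z = A[:, idx] - A[:, idx].mean(axis=0)          # remove the mean
        gamma = 1.0 / np.linalg.norm(Z, axis=1).mean()  # unit average norm
        U[:, idx] = beta[name] * gamma * Z              # apply beta_s
    return U
```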
4.2 Sequence decoding
The decoder in our present system performs a Viterbi search for the most likely phonemic sequence
given the acoustic inputs and a bigram phoneme language model. The search is driven by a simple
model for the conditional likelihood p(y|m) that the reservoir output vector y is observed during the
acoustical realization of phoneme m. The model is based on the cosine similarity between y + 1
and a template vector tm = [0, .., 0, 1, 0, .., 0], with its nonzero element appearing at position m.
Since the template vector is a unity vector, we compute p(y|m) as

p(y|m) = max[0, ⟨y + 1, t_m⟩ / √⟨y + 1, y + 1⟩]^η ,    (5)

with ⟨x, y⟩ denoting the dot product of vectors x and y. Due to the offset, we can ensure that
the components of y + 1 are between 0 and 1 most of the time. The maximum operator prevents
the likelihoods from becoming negative occasionally. The exponent η is a free parameter that will be tuned experimentally. It controls the relative importance of the acoustic model and the bigram
phoneme language model.
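Since t_m contains a single nonzero entry, the cosine similarity in Equation (5) reduces to the m-th component of y + 1 divided by its norm, so all phoneme likelihoods can be computed at once; a hedged sketch (the value of η is an assumed example):

```python
import numpy as np

def phoneme_likelihoods(y, eta=0.7):
    """p(y|m) of Eq. (5) for all phonemes m at once.

    y: vector of reservoir outputs, one per phoneme class;
    eta: the tuned exponent balancing acoustic and language model.
    """
    v = y + 1.0                        # offset the outputs
    cos = v / np.sqrt(v @ v)           # <v, t_m> / sqrt(<v, v>) for every m
    return np.maximum(0.0, cos) ** eta
```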
4.3 Reservoir optimization
The training of the reservoir output nodes is based on Equations (3) and (4) and the desired phoneme
labels emerge from a time synchronized phonemic transcription. The latter was derived from the
available acoustic-phonetic segmentation of TIMIT. For all experiments reported in this paper, we
have used the modular RC toolkit OGER1 developed at Ghent University.
The recurrent weights of the reservoir are not trained but randomly drawn from statistical distributions. The input weights emerge from a uniform distribution between −U and +U, the recurrent
weights from a zero-mean Gaussian distribution with a variance V . The value of U controls the relative importance of the inputs in the activation of the reservoir neurons and is often called the input
scale factor (ISF). The variance V directly determines the spectral radius (SR), defined as the largest
absolute eigenvalue of the recurrent weight matrix. The SR describes the dynamical excitability of
the reservoir [6, 8]. The SR and the ISF must be jointly optimized. To do so, we used 1000 neuron
reservoirs, supplied with inputs that were normalized according to the procedure reviewed in the
previous section. We found that SR = 0.4 and ISF = 0.4 yield the best performance, but for SR
∈ (0.3...0.8) and for ISF ∈ (0.2...1.0), the performance is quite stable.
Another parameter that must be optimized is the leak rate, denoted as λ. It determines the integration time of the neurons. If the nonlinear function is ignored and the time between frames is Tf, the reservoir neurons represent a first-order leaky integrator with a time constant τ that is related to λ by λ = 1 − e^(−Tf/τ). As stated before, the integration time should be long enough to capture the relevant co-articulation effects and short enough to constrain the information blurring over
subsequent phonemes. This is confirmed by Figure 3 showing how the phoneme CER of a single
reservoir system changes as a function of the integrator time constant. The optimal value is 40 ms,
Figure 3: The phoneme Classification Error Rate (CER) as a function of the integration time (in ms).
and completely in line with psychophysical data concerning the post and pre-masking properties of
the human auditory system. In [18] for instance, it is shown that these properties can be explained
by means of a second order low-pass filter with real poles corresponding to time constants of 8 and
40 ms respectively (it is the largest constant that determines the integration time here).
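Under this relation the conversion from a desired time constant to a leak rate is immediate; a small sketch (the 10 ms frame shift and the 40 ms constant are taken from the text):

```python
import numpy as np

def leak_rate(tau_ms, frame_shift_ms=10.0):
    """Leak rate of a first-order leaky integrator: lam = 1 - exp(-Tf/tau)."""
    return 1.0 - np.exp(-frame_shift_ms / tau_ms)

print(leak_rate(40.0))  # ~0.221 for the optimal 40 ms time constant
```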
1 http://reservoir-computing.org/organic/engine
It has been reported [19] that one can easily reduce the number of recurrent connections in an RC network without much affecting its performance. We have found that limiting the number of connections to 50 per neuron does not harm the performance while it dramatically reduces the required
computational resources (memory and computation time).
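Pulling the tuned values together, a reservoir could be initialized as sketched below; the eigenvalue-based rescaling to the target spectral radius is one common recipe and an assumption of this example (for 20000-node reservoirs a power-iteration estimate of the radius would be used instead of a full eigendecomposition).

```python
import numpy as np

def make_reservoir(n_res, n_in, sr=0.4, isf=0.4, fan_in=50, seed=0):
    """Sparse reservoir with spectral radius sr, input scale factor isf
    and fan_in incoming recurrent connections per neuron."""
    rng = np.random.default_rng(seed)
    W_res = np.zeros((n_res, n_res))
    for i in range(n_res):
        idx = rng.choice(n_res, size=fan_in, replace=False)
        W_res[i, idx] = rng.normal(size=fan_in)         # Gaussian weights
    radius = np.max(np.abs(np.linalg.eigvals(W_res)))   # current SR
    W_res *= sr / radius                                # rescale to target SR
    W_in = rng.uniform(-isf, isf, size=(n_res, n_in))   # uniform inputs
    return W_in, W_res
```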
5 Experiments
Since our ultimate goal is to perform LVCSR, and since LVCSR systems work with a dictionary of
phonemic transcriptions, we have worked with phonemes rather than with phones. As in [20] we
consider the 41 phoneme symbols one encounters in a typical phonetic dictionary like COMLEX
[21]. The 41 symbols are very similar to the 39 symbols of the reduced phone set proposed by [15],
but with one major difference, namely, that a phoneme string does not contain any silences referring
to closures of plosive sounds (e.g. the closure /kcl/ of phoneme /k/ ). By ignoring confusions between
/sh/ and /zh/ and between /ao/ and /aa/ we finally measure phoneme error rates for 39 classes, in
order to make them more compliant with the phone error rates for 39 classes reported in other
papers. Nevertheless, we will see later that phoneme recognition is harder to accomplish than phone
recognition. This is because the closures are easy to recognize and contribute to a low phone error
rate. In phoneme recognition there are no closures anymore.
In what follows, all parameter tuning is performed on the TIMIT training set (divided into independent training and development sets), and all error rates are measured on the main test set. The
bigram phoneme language model used for the sequence decoding step is created from the phonemic
transcriptions of the training utterances.
5.1 Single reservoir systems
In a first experiment we assess the performance of a single reservoir system as a function of the reservoir size, defined as the number of neurons in the reservoir. The phoneme targets during training are
derived from the manual acoustic-phonetic segmentation, as explained in Section 4.3. We increase
the number of neurons from 500 to 20000. The corresponding number of trainable parameters then
changes from 20K to 800K. The latter figure corresponds to the number of trainable parameters in
an HMM system comprising 1200 independent Gaussian mixture distributions of 8 mixtures each.
Figure 4 shows that the phoneme CER on the training set drops by about 4% every time the reservoir
size is doubled. The phoneme CER on the test set shows a similar trend, but the slope is decreasing
from 4% at low reservoir sizes to 2% at 20000 neurons (nodes). At that point the CER on the test
Figure 4: The Classification Error Rate (CER) at the phoneme level for the training and test set as a
function of the reservoir size.
set is 30.6% and the corresponding RER (not shown) is 31.4%. The difference between the test and
the training error is about 8%.
Although the figures show that an even larger reservoir will perform better, we stopped at 20000
nodes because the storage and the inversion of the large matrix XT X are getting problematic. Before
starting to investigate even larger reservoirs, we first want to verify our hypothesis that adding a
second (equally large) layer can lead to a better performance.
5.2 Multilayer reservoir systems
Usually, a single reservoir system produces a number of competing outputs at all time steps, and
this hampers the identification of the correct phoneme sequence. The left panel of Figure 5 shows
the outputs of a reservoir of 8000 nodes in a time interval of 350 ms. Our hypothesis was that the
observed confusions are not arbitrary, and that a second reservoir operating on the outputs of the first
reservoir system may be able to discover regularities in the error patterns. And indeed, the outputs
of this second reservoir happen to exhibit a larger margin between the winner and the competition,
as illustrated in the right panel of Figure 5.
Figure 5: The outputs of the first (left) and the second (right) layer of a two-layer system composed
of two 8000 node reservoirs. The shown interval is 350 ms long.
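In code, the stacking amounts to feeding the phoneme outputs of the first layer into a second reservoir; the sketch below reuses the helper functions sketched earlier, so the tuple layout of the layer parameters and the readout convention are our illustrative assumptions.

```python
def run_two_layer_system(U, layer1, layer2, W_out1, W_out2):
    """Two stacked reservoirs: layer 2 reads the phoneme outputs of layer 1."""
    W_in1, W_res1 = layer1
    W_in2, W_res2 = layer2
    X1 = run_reservoir(U, W_in1, W_res1)     # first-layer states
    Y1 = X1 @ W_out1                         # first-layer phoneme outputs
    X2 = run_reservoir(Y1, W_in2, W_res2)    # error-correcting second layer
    return X2 @ W_out2                       # refined phoneme outputs
```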
In Figure 6, we have plotted the phoneme CER and RER as a function of the number of reservoirs
(layers) and the size of these reservoirs. We have thus far only tested systems with equally large
reservoirs at every layer. For the exponent η, we have just tried η = 0.5, 0.7 and 1, and we have
selected the value yielding the best balance between insertions and deletions.
Figure 6: The phoneme CERs and RERs for different combinations of number of nodes and layers
For all reservoir sizes, the second layer induces a significant improvement of the CER by 3-4%
absolute. The corresponding improvements of the recognition error rates are a little bit less but
still significant. The best RER obtained with a two-layer system comprising reservoirs of 20000
nodes is 29.1%. Both plots demonstrate that a third layer does not cause any additional gain when
the reservoir size is large enough. However, this might also be caused by the fact that we did not
systematically optimize the parameters (SR, leak rate, regularization parameter, etc.) for each large
system configuration we investigated. We just chose sensible values which were retrieved from tests
with smaller systems.
5.3 Comparison with the state-of-the-art
In Table 2 we have listed some published results obtained on TIMIT with state-of-the-art HMM
systems and other recently proposed research systems. We have also included the results of own experiments we conducted with SPRAAK2 [22], a recently launched HMM-based speech recognition
toolkit. In order to provide an easier comparison, we also build a phone recognition system based
on the same design parameters that were optimized for phoneme recognition. All phone RERs are
2 http://www.spraak.org
calculated on the core test set, while the phoneme RERs were measured on the main test set. We
do this because most figures in speech community papers apply to these experimental settings. Our
final results were obtained with systems that were trained on the full training data (including the
development set). Before discussing our figures in detail we emphasize that the two figures for
SPRAAK confirm our earlier statement that phoneme recognition is harder than phone recognition.
Table 2: Phoneme and Phone Recognition Error Rates (in %) obtained with state-of-the-art systems.

System description                     Phone RER (core test)   Phoneme RER (main test)
Reservoir Computing (this paper)              26.8                     29.1
CD-HMM (SPRAAK Toolkit)                       25.6                     28.1
CD-HMM [20]                                   28.7                      -
Recurrent Neural Networks [1]                 26.1                      -
LSTM+CTC [5]                                 (24.6)                     -
Bayesian Triphone HMM [23]                    24.4                      -
Deep Belief Networks [4]                      23.0                      -
Hierarchical HMM + MLPs [20]                 (23.4)                     -
Given the fact that SPRAAK seems to achieve state-of-the-art performance, it is fair to conclude
from the figures in Table 2 that our present system is already competitive with other modern HMM
systems. It is also fair to say that better systems do exist, like the Deep Belief Network system
[4] and the hierarchical HMM system with multiple Multi-Layer Perceptrons (MLPs) on top of an
HMM system [20]. Note however that the latter system also employs complex temporal patterns
(TRAPs) as input features. These patterns are much more powerful than the simple MFCC vectors
used in all other systems we cite. Furthermore, the LSTM+CTC [5] results too must be considered
with some care since they were obtained with a bidirectional system. Such a system is impractical
in many application since it has to wait until the end of a speech utterance to start the recognition.
We therefore put the results of the latter two systems between brackets in Table 2.
To conclude this discussion, we also want to mention some training and execution times. The
training of our two-layer 20K reservoir systems takes about 100 hours on a single core 3.0 GHz PC,
while recognition takes about two seconds of decoding per second of speech.
6 Conclusion and future work
In this paper we showed for the first time that good phoneme recognition on TIMIT can be achieved
with a system based on Reservoir Computing. We demonstrated that in order to achieve this, we
need large reservoirs (at least 20000 nodes) which are configured in a hierarchical way. By stacking
two reservoir layers, we were able to achieve error rates that are competitive with what is attainable
using state-of-the-art HMM technology. Our results support the idea that reservoirs can exploit
long-term dynamic properties of the articulatory system in continuous speech recognition.
It is acknowledged though that other techniques such as Deep Belief Networks are still outperforming our present system, but the plots and the discussions presented in the course of this paper clearly
show a significant margin for further improvement of our system in the near future.
To achieve this improvement we will investigate even larger reservoirs with 50000 and more nodes
and we will more thoroughly optimize the parameters of the different reservoirs. Furthermore, we
will explore the use of sparsely connected outputs and multi-frame inputs in combination with PCAbased dimensionality reduction. Finally, we will develop an embedded training scheme that permits
the training of reservoirs on much larger speech corpora for which only orthographic representations
are distributed together with the speech data.
Acknowledgement
The work presented in this paper is funded by the EC FP7 project ORGANIC (FP7-231267).
References
[1] A. Robinson. An application of recurrent neural nets to phone probability estimation. IEEE Trans. on Neural Networks, 5:298–305, 1994.
[2] H. Bourlard and N. Morgan. Continuous speech recognition by connectionist statistical methods. IEEE Trans. on Neural Networks, 4:893–909, 1993.
[3] G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18:1527–1554, 2006.
[4] A. Mohamed, G. Dahl, and G. Hinton. Deep belief networks for phone recognition. In NIPS Workshop on Deep Learning for Speech Recognition and Related Applications, 2009.
[5] A. Graves and J. Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18:602–610, 2005.
[6] H. Jaeger. Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the echo state network approach (48 pp). Technical report, German National Research Center for Information Technology, 2002.
[7] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531–2560, 2002.
[8] D. Verstraeten, B. Schrauwen, M. D'Haene, and D. Stroobandt. An experimental unification of reservoir computing methods. Neural Networks, 20:391–403, 2007.
[9] E. Antonelo, B. Schrauwen, and J. Van Campenhout. Generative modeling of autonomous robots and their environments using reservoir computing. Neural Processing Letters, 26(3):233–249, 2007.
[10] G. Holzmann and H. Hauser. Echo state networks with filter neurons and a delay & sum readout. Neural Networks, 23:244–256, 2010.
[11] D. Verstraeten, B. Schrauwen, and D. Stroobandt. Isolated word recognition using a liquid state machine. In Proceedings of the 13th European Symposium on Artificial Neural Networks (ESANN), pages 435–440, 2005.
[12] M. Skowronski and J. Harris. Automatic speech recognition using a predictive echo state network classifier. Neural Networks, 20(3):414–423, 2007.
[13] B. Schrauwen. A hierarchy of recurrent networks for speech recognition. In NIPS Workshop on Deep Learning for Speech Recognition and Related Applications, 2009.
[14] J. Garofolo, L. Lamel, W. Fisher, J. Fiscus, D. Pallett, and N. Dahlgren. The DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. Technical report, National Institute of Standards and Technology, 1993.
[15] K.F. Lee and H-W. Hon. Speaker-independent phone recognition using hidden Markov models. In IEEE Trans. on Acoustics, Speech and Signal Processing, ASSP, volume 37, pages 1641–1648, 1989.
[16] H. Jaeger, M. Lukosevicius, D. Popovici, and U. Siewert. Optimization and applications of echo state networks with leaky-integrator neurons. Neural Networks, 20:335–352, 2007.
[17] S. Davis and P. Mermelstein. Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Trans. on Acoustics, Speech & Signal Processing, 28:357–366, 1980.
[18] L. Van Immerseel and J.P. Martens. Pitch and voiced/unvoiced determination with an auditory model. Acoustical Society of America, 91(6):3511–3526, June 1992.
[19] B. Schrauwen, L. Buesing, and R. Legenstein. Computational power and the order-chaos phase transition in reservoir computing. In Proc. Advances in Neural Information Processing Systems (NIPS), volume 21, pages 1425–1432, 2008.
[20] P. Schwarz, P. Matejka, and J. Cernocky. Hierarchical structures of neural networks for phoneme recognition. In Proc. International Conference on Acoustics, Speech and Signal Processing, pages 325–328, 2006.
[21] Linguistic Data Consortium. COMLEX English pronunciation lexicon, 2009.
[22] K. Demuynck, J. Roelens, D. Van Compernolle, and P. Wambacq. SPRAAK: An open source speech recognition and automatic annotation kit. In Procs. Interspeech 2008, page 495, 2008.
[23] J. Ming and F.J. Smith. Improved phone recognition using Bayesian triphone models. IEEE Trans. on Acoustics, Speech and Signal Processing, ASSP, 1:409–412, 1998.
3,376 | 4,057 | Infinite Relational Modeling of Functional Connectivity in Resting State fMRI
Morten M?rup
Section for Cognitive Systems
DTU Informatics
Technical University of Denmark
[email protected]
Kristoffer Hougaard Madsen
Danish Research Centre for Magnetic Resonance
Copenhagen University Hospital Hvidovre
[email protected]
Anne Marie Dogonowski
Danish Research Centre for Magnetic Resonance
Copenhagen University Hospital Hvidovre
[email protected]
Hartwig Siebner
Danish Research Centre for Magnetic Resonance
Copenhagen University Hospital Hvidovre
[email protected]
Lars Kai Hansen
Section for Cognitive Systems
DTU Informatics
Technical University of Denmark
[email protected]
Abstract
Functional magnetic resonance imaging (fMRI) can be applied to study the functional connectivity of the neural elements which form complex network at a whole
brain level. Most analyses of functional resting state networks (RSN) have been
based on the analysis of correlation between the temporal dynamics of various
regions of the brain. While these models can identify coherently behaving groups
in terms of correlation they give little insight into how these groups interact. In
this paper we take a different view on the analysis of functional resting state networks. Starting from the definition of resting state as functional coherent groups
we search for functional units of the brain that communicate with other parts of
the brain in a coherent manner as measured by mutual information. We use the
infinite relational model (IRM) to quantify functional coherent groups of resting
state networks and demonstrate how the extracted component interactions can be
used to discriminate between functional resting state activity in multiple sclerosis
and normal subjects.
1 Introduction
Neuronal elements of the brain constitute an intriguing complex network [4]. Functional magnetic
resonance imaging (fMRI) can be applied to study the functional connectivity of the neural elements
which form this complex network at a whole brain level. It has been suggested that fluctuations in
the blood oxygenation level-dependent (BOLD) signal during rest reflecting the neuronal baseline
activity of the brain correspond to functionally relevant networks [9, 3, 19].
Most analysis of functional resting state networks (RSN) have been based on the analysis of correlation between the temporal dynamics of various regions of the brain either assessed by how well
voxels correlate with the signal from predefined regions (so-called) seeds [3, 24] or through unsupervised multivariate approaches such as independent component analysis (ICA) [10, 9]. While
Figure 1: The proposed framework. All pairwise mutual information (MI) values are calculated between the 2×2×2 groups of voxels for each subject's resting state fMRI activity. The graph of pairwise mutual information is thresholded such that the top 100,000 un-directed links are kept. The graphs are analyzed by the infinite relational model (IRM) assuming the functional units Z are the same for all subjects but their interactions ρ^(n) are individual. We will use these extracted interactions to characterize the individuals.
these models identify coherently behaving groups in terms of correlation they give limited insight
into how these groups interact. Furthermore, while correlation is optimal for extracting second order
statistics it easily fails in establishing higher order interactions between regions of the brain [22, 7].
In this paper we take a different view on the analysis of functional resting state networks. Starting
from the definition of resting state as functional coherent groups we search for functional units of the
brain that communicate with other parts of the brain in a coherent manner. Consequently, what defines functional units is the way in which they interact with the remaining parts of the network. We
will consider functional connectivity between regions as measured by mutual information. Mutual
information (MI) is well rooted in information theory and given enough data MI can detect functional relations between regions regardless of the order of the interaction [22, 7]. Thereby, resting
state fMRI can be represented as a mutual information graph of pairwise relations between voxels
constituting a complex network. Numerous studies have analyzed these graphs borrowing on ideas
from the study of complex networks [4]. Here common procedures have been to extract various
summary statistics of the networks and compare them to those of random networks and these analyses have demonstrated that fMRI derived graphs behave far from random [11, 1, 4]. In this paper
we propose to use relational modeling [17, 16, 27] in order to quantify functional coherent groups
of resting state networks. In particular, we investigate how this line of modeling can be used to
discriminate patients with multiple sclerosis from healthy individuals.
Multiple Sclerosis (MS) is an inflammatory disease resulting in widespread demyelinization of the
subcortical and spinal white matter. Focal axonal demyelinization and secondary axonal degeneration results in variable delays or even in disruption of signal transmission along cortico-cortical and
cortico-subcortical connections [21, 26]. In addition to the characteristic macroscopic white-matter
lesions seen on structural magnetic resonance imaging (MRI), pathology- and advanced MRI-studies
have shown demyelinated lesions in cortical gray-matter as well as in white-matter that appear normal on structural MRI [18, 12]. These findings show that demyelination is disseminated throughout
the brain affecting brain functional connectivity. Structural MRI gives information about the extent
of white-matter lesions, but provides no information on the impact on functional brain connectivity. Given the widespread demyelinization in the brain (i.e., affecting the brain?s anatomical and
functional ?wiring?) MS represents a disease state which is particular suited for relational modeling.
Here, relational modeling is able to provide a global view of the communication in the functional
network between the extracted functional units. Furthermore, the method facilitates the examination of all brain networks simultaneously in a completely data driven manner. An illustration of the
proposed analysis is given in figure 1.
2 Methods
Data: 42 clinically stable patients with relapsing-remitting (RR) and secondary progressive multiple
sclerosis (27 RR; 22 females; mean age: 43.5 years; range 25-64 years) and 30 healthy individuals
(15 females; mean age: 42.6 years; range 22-69 years) participated in this cross-sectional study.
Patients were neurologically examined and assigned a score according to the EDSS which ranged
from 0 to 7 (median EDSS: 4.25; mean disease duration: 14.3 years; range 3-43 years). rs-fMRI
was performed with the subjects being at rest and having their eyes closed (3 Tesla Magnetom Trio,
Siemens, Erlangen, Germany). We used a gradient echo T2*-weighted echo planar imaging sequence with whole-brain coverage (repetition time: 2490 ms; 3 mm isotropic voxels). The rs-fMRI
session lasted 20 min (482 brain volumes). During the scan session the cardiac and respiratory cycles were monitored using a pulse oximeter and a pneumatic belt.
Preprocessing: After exclusion of 2 pre-saturation volumes each remaining volume was realigned
to the mean volume using a rigid body transformation. The realigned images were then normalized
to the MNI template. In order to remove nuisance effects related to residual movement or physiological effects a linear filter comprised of 24 motion related and a total of 60 physiological effects
(cardiac, respiratory and respiration volume over time) was constructed [14]. After filtering, the
voxels were masked [23] and divided into 5039 voxel groups consisting of 2 × 2 × 2 voxels for the
estimation of pairwise MI.
2.1 Mutual Information Graphs
The mutual information between voxel groups i and j is given by I(i, j) = Σ_{u,v} P_ij(u, v) log [ P_ij(u, v) / (P_i(u) P_j(v)) ]. Thus, the mutual information hinges on the estimation of
the joint density P_ij(u, v). Several approaches exist for the estimation of mutual information [25]
ranging from parametric to non-parametric methods such as nearest neighbor density estimators [7]
and histogram methods. The accuracy of both approaches relies on the number of observations
present. We used the histogram approach. We used equiprobable rather than equidistant bins [25]
based on 10 percentiles derived from the individual distribution of each voxel group, i.e. P_i(u) = P_j(v) = 1/10. P_ij(u, v) counts the number of co-occurrences of observations from voxels in voxel
group i that are at bin u while the corresponding voxels from group j are at bin v at time t. As such,
we had a total of 8 × 480 = 3840 samples to populate the 100 bins in the joint histogram. To generate the mutual information graphs for each subject a total of 72 × 5039 × (5039 − 1)/2 ≈ 1 billion pairwise MI were evaluated. We thresholded each graph keeping the top 100,000 pairwise MI as links in the graph. As such, each graph had size 5039 × 5039 with a total of 200,000 directed links (i.e. 100,000 undirected links), which resulted in each graph having link density 100,000 / (5039 × (5039 − 1)/2) = 0.0079, while the total number of links was 72 × 100,000 = 7.2 million links (when counting links only in one direction).
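A compact sketch of this estimator is given below; the loop over all ~10^9 voxel-group pairs and any vectorization are omitted for clarity, and the function is an illustration rather than the code used in the study.

```python
import numpy as np

def mutual_information(u, v, n_bins=10):
    """Histogram MI between two voxel-group signals with equiprobable bins.

    Each signal is discretized at its own percentiles, so the marginals
    are P_i(u) = P_j(v) = 1/n_bins by construction.
    """
    cut = np.linspace(0, 100, n_bins + 1)[1:-1]   # interior percentiles
    du = np.digitize(u, np.percentile(u, cut))
    dv = np.digitize(v, np.percentile(v, cut))
    P = np.zeros((n_bins, n_bins))
    np.add.at(P, (du, dv), 1.0)                   # joint histogram
    P /= P.sum()
    Pi = P.sum(axis=1, keepdims=True)
    Pj = P.sum(axis=0, keepdims=True)
    nz = P > 0
    return float(np.sum(P[nz] * np.log(P[nz] / (Pi * Pj)[nz])))
```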
2.2 Infinite Relational Modeling (IRM)
The importance of modeling brain connectivity and interactions is widely recognized in the literature on fMRI [13, 28, 20]. Approaches such as dynamic causal modeling [13], structural equation
models [20] and dynamic Bayes nets [28] are normally limited to analysis of a few interactions between known brain regions or predefined regions of interest. The benefits of the current relational
modeling approach are that regions are defined in a completely data driven manner while the method
establishes interaction at a low computational complexity admitting the analysis of large scale brain
networks. Functional connectivity graphs have previously been considered in [6] for the discrimination of schizophrenia. In [24] resting state networks were defined based on normalized graph
cuts in order to derive functional units. While normalized cuts are well suited for the separation of
voxels into groups of disconnected components the method lacks the ability to consider coherent
interaction between groups. In [17] the stochastic block model, also denoted the relational model (RM), was proposed for the identification of coherent groups of nodes in complex networks. Here, each node i belongs to a class indicated by z_i, the i-th row of a clustering assignment matrix Z, and the probability ρ_ij of a link between node i and j is determined by the class assignments z_i and z_j as ρ_ij = z_i ρ z_j^T. Here, ρ_kl ∈ [0, 1] denotes the probability of generating a link between a node in class k and a node in class l. Using the Dirichlet process (DP), [16, 27] propose a nonparametric generalization of the model with a potentially infinite number of classes, i.e. the infinite
relational model (IRM). Inference in IRM jointly determines the number of latent classes as well as
class assignments and class link probabilities. To our knowledge this is the first attempt to explore
the IRM model for fMRI data.
Following [16] we have the following generative model for the infinite relational model
Z | α ~ DP(α)
ρ^(n)(a, b) | β+(a, b), β−(a, b) ~ Beta(β+(a, b), β−(a, b))
A^(n)(i, j) | Z, ρ^(n) ~ Bernoulli(z_i ρ^(n) z_j^T)
As such an entity's tendency to participate in relations is determined solely by its cluster assignment in Z. Since the prior on the elements of ρ is conjugate, the resulting integral P(A^(n) | Z, β+, β−) = ∫ P(A^(n) | ρ^(n), Z) P(ρ^(n) | β+, β−) dρ^(n) has an analytical solution such that
P(A^(n) | Z, β+, β−) = Π_{a≤b} Beta(M+^(n)(a, b) + β+(a, b), M−^(n)(a, b) + β−(a, b)) / Beta(β+(a, b), β−(a, b)) ,

where

M+^(n)(a, b) = (1 − ½ δ_{a,b}) z_a^T (A^(n) + A^(n)^T) z_b ,
M−^(n)(a, b) = (1 − ½ δ_{a,b}) z_a^T (e e^T − I) z_b − M+^(n)(a, b) .

M+^(n)(a, b) is the number of links between functional units a and b whereas M−^(n)(a, b) is the number of non-links between functional unit a and b when disregarding links between a node and
We will assume that the graphs are independent over subjects such that
P (A(1) , . . . , A(N ) |Z, ? + , ? ? ) =
(n)
(n)
Y Y Beta(M +
(a, b) + ? + (a, b), M ? (a, b) + ? ? (a, b))
.
Beta(? + (a, b), ? ? (a, b))
n
a?b
As a result, the posterior likelihood is given by
P(Z | A^(1), . . . , A^(N), β+, β−, α) ∝ [ Π_n P(A^(n) | Z, β+, β−) ] P(Z | α)

= [ Π_n Π_{a≤b} Beta(M+^(n)(a, b) + β+(a, b), M−^(n)(a, b) + β−(a, b)) / Beta(β+(a, b), β−(a, b)) ] · α^D [ Γ(α) / Γ(J + α) ] Π_a Γ(n_a) .
where D is the number of expressed functional units and n_a the number of voxel groups assigned to functional unit a. The expected value of ρ^(n) is given by

⟨ρ^(n)(a, b)⟩ = (M+^(n)(a, b) + β+(a, b)) / (M+^(n)(a, b) + M−^(n)(a, b) + β+(a, b) + β−(a, b)) .
MCMC Sampling the IRM model: As proposed in [16] we use a Gibbs sampling scheme in
combination with split-merge sampling [15] for the clustering assignment matrix Z. We used the
split-merge sampling procedure proposed in [15] with three restricted Gibbs sampling sweeps. We
initialized the restricted Gibbs sampler by the sequential allocation procedure proposed in [8]. For
the MCMC sampling, the posterior likelihood for a node assignment given the assignment of the
remaining nodes is needed both for the Gibbs sampler as well as for calculating the split-merge
acceptance ratios [15].
P(z_ia = 1 | Z\z_i, A^(1), ..., A^(N)) ∝

  m_a Π_n Π_b Beta(M+^(n)(a, b) + β+(a, b), M−^(n)(a, b) + β−(a, b)) / Beta(β+(a, b), β−(a, b))   if m_a > 0,
  α Π_n Π_b Beta(M+^(n)(a, b) + β+(a, b), M−^(n)(a, b) + β−(a, b)) / Beta(β+(a, b), β−(a, b))    otherwise,

where m_a = Σ_{j≠i} z_{j,a} is the size of the a-th functional unit disregarding the assignment of the i-th node. We note that this posterior likelihood can be efficiently calculated by only considering the parts of the computation of M+^(n)(a, b) and M−^(n)(a, b) as well as evaluation of the Beta function that are affected by the considered assignment change.
4
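The sketch below implements a naive version of this Gibbs step: it recomputes all counts for every candidate unit instead of only the affected parts, omits the split-merge moves, and assumes scalar $\beta$ hyperparameters plus the `pair_counts` and `log_marginal_likelihood` helpers sketched earlier.

```python
# A simplified, unoptimized collapsed Gibbs update of one node's assignment.
import numpy as np

def gibbs_step_node(i, Z, graphs, beta_plus, beta_minus, alpha, rng):
    J, K = Z.shape
    Z = Z.copy()
    Z[i, :] = 0
    m = Z.sum(axis=0)                          # unit sizes without node i
    candidates = list(np.flatnonzero(m > 0)) + [K]   # existing units + new one
    log_p = []
    for a in candidates:
        Za = np.hstack([Z, np.zeros((J, 1))]) if a == K else Z.copy()
        Za[i, a] = 1
        ll = sum(log_marginal_likelihood(*pair_counts(A, Za),
                                         beta_plus, beta_minus) for A in graphs)
        log_p.append((np.log(alpha) if a == K else np.log(m[a])) + ll)
    p = np.exp(np.array(log_p) - max(log_p))
    a = candidates[rng.choice(len(candidates), p=p / p.sum())]
    if a == K:                                 # open a new functional unit
        Z = np.hstack([Z, np.zeros((J, 1))])
    Z[i, a] = 1
    return Z[:, Z.sum(axis=0) > 0]             # drop emptied units
```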
Scoring the functional units in terms of stability: By sampling we obtain a large number of
potential solutions; however, for visualization and interpretation it is difficult to average across all
samples, as this requires that the extracted groups in different samples and runs can be related to
each other. For visualization we instead selected the single best extracted sample $r^\star$ (i.e., the MAP
estimate) across 10 separate randomly initialized runs, each of 500 iterations.
To facilitate interpretation we displayed the top 20 extracted functional units most reproducible
across the separate runs. To identify these functional units we analyzed how often nodes co-occurred
in the same cluster across the extracted samples from the other random starts $r$, according to
$C = \sum_{r \neq r^\star} \big(Z^{(r)} Z^{(r)\top} - I\big)$, using the following score $\rho_c$:
$$\rho_c = \frac{s_c}{s_c^{tot}}, \qquad s_c = \tfrac{1}{2}\, z_c^{(r^\star)\top} C z_c^{(r^\star)}, \qquad s_c^{tot} = z_c^{(r^\star)\top} C e - s_c.$$
$s_c$ counts the number of times the voxels in group $c$ co-occurred with other voxels in the group,
whereas $s_c^{tot}$ gives the total number of times voxels in group $c$ co-occurred with other voxels in the
graph. As such, $0 \leq \rho_c \leq 1$, where 1 indicates that all voxels in the $c$th group were in the same
cluster across all samples, whereas 0 indicates that the voxels never co-occurred in any of the other
samples.
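A direct transcription of the co-occurrence matrix and stability score; `Zs` is assumed to be a list of binary assignment matrices, one per run.

```python
# Co-occurrence matrix C and stability scores rho_c from the formulas above.
import numpy as np

def stability_scores(Zs, r_star):
    J = Zs[0].shape[0]
    C = sum(Z @ Z.T - np.eye(J) for r, Z in enumerate(Zs) if r != r_star)
    Z_star, e = Zs[r_star], np.ones(J)
    rho = []
    for c in range(Z_star.shape[1]):
        zc = Z_star[:, c]
        s_c = 0.5 * zc @ C @ zc                # within-group co-occurrences
        s_tot = zc @ C @ e - s_c               # all co-occurrences of group c
        rho.append(s_c / s_tot if s_tot > 0 else 0.0)
    return np.array(rho)
```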
3 Results and Discussion
Following [11] we calculated the average shortest path length $\langle L \rangle$, average clustering coefficient
$\langle C \rangle$, degree distribution exponent $\gamma$, and largest connected component (i.e., giant component) $G$ for each
subject-specific graph, as well as the MI threshold value $t_c$ used to define the top 100,000 links. In
Table 1 it can be seen that the derived graphs are far from Erdős–Rényi random graphs. The
clustering coefficient, the degree distribution parameter $\gamma$, and the giant component $G$ all differ significantly
from the random graphs. However, there are no significant differences between the Normal and MS
groups, indicating that these global features do not appear to be affected by the disease.
For each run, we initialized the IRM model with $D = 50$ randomly generated functional units.
We set the priors
$$\beta^+(a, b) = \begin{cases} 5 & a = b \\ 1 & \text{otherwise} \end{cases} \qquad \text{and} \qquad \beta^-(a, b) = \begin{cases} 1 & a = b \\ 5 & \text{otherwise,} \end{cases}$$
favoring a priori higher within-functional-unit link density relative to between-unit link density. We set $\alpha = \log J$
(where $J$ is the number of voxel groups). In the model estimation we treated 2.5% of the links
and an equivalent number of non-links as missing at random in the graphs. When treating entries
as missing at random, these can be ignored, maintaining counts only over the observed values [16].
The estimated models are very stable, as they on average extracted $D = 72.6 \pm 0.6$ functional units.
In Figure 2, the area under the curve (AUC) scores of the receiver operating characteristic for predicting
links are given for each subject, where the prediction of links was based on averaging over the final
100 samples. While these AUC scores are above random for all subjects, we see a high degree of
variability across subjects in terms of the model's ability to account for links and non-links in
the graphs.
Table 1: Median threshold values $t_c$, average shortest path $\langle L \rangle$, average clustering coefficient $\langle C \rangle$,
degree distribution exponent $\gamma$ (i.e. $p(k) \propto k^{-\gamma}$), and giant component $G$ (i.e. the largest connected
component in the graphs relative to the complete graph) for the Normal and multiple-sclerosis groups,
as well as a non-parametric test of difference in medians between the two groups. The random graph
is an Erdős–Rényi random graph with the same density as the constructed graphs.

                                     t_c      ⟨L⟩      ⟨C⟩        γ          G
Normal                               0.0164   2.77     0.1116     1.40       0.8587
MS                                   0.0163   2.70     0.0898     1.36       0.8810
Random                               0.9964   2.73     0.0079     0.88       1
P-value (Normal vs. MS)              -        0.4509   0.9954     0.7448     0.7928
P-value (Normal and MS vs. Random)   -        0.6764   p ≤ 0.001  p ≤ 0.001  p ≤ 0.001
Figure 2: AUC score across the 10 different runs for each subject in the Normal group (top) and the
MS group (bottom). At the top right, the distribution of the AUC scores is given for the two groups
(Normal: blue, MS: red). No significant difference between the median values of the two distributions
was found (p = 0.34).
We found no significant difference between the Normal and MS groups in terms of the model's
ability to account for the network dynamics. Thus, there seems to be no difference in how well
the IRM model is able to account for structure in the networks of MS and Normal subjects. Finally,
we see that the link prediction is surprisingly stable for each subject, both across runs and across
which links and non-links are treated as missing. This indicates that there is a high degree of variability
between subjects in the graphs extracted from resting-state fMRI, relative to the variability within
each subject.
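A minimal sketch of this evaluation, assuming the posterior samples and held-out entry indices are available; all names are illustrative.

```python
# Link-prediction AUC by averaging P(link) over posterior samples.
import numpy as np
from sklearn.metrics import roc_auc_score

def link_prediction_auc(samples, heldout, A_true):
    i, j = heldout                             # arrays of row/column indices
    probs = np.mean([(Z @ eta @ Z.T)[i, j] for Z, eta in samples], axis=0)
    return roc_auc_score(A_true[i, j], probs)
```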
Considering the inference as a stochastic optimization procedure, we have visualized the sample with
the highest likelihood (i.e. the MAP estimate) over the runs in Figure 3. We display the top 20 most
reproducible extracted voxel groups (i.e., functional units) across the 10 runs. Fifteen of the 20
functional units are easily identified as functionally relevant networks. These selected functional
units are similar to the networks previously identified in resting-state fMRI data using ICA [9].
The sensori-motor network is represented by functional units 2, 3, 13 and 20; the posterior part
of the default-mode network [19] by functional units 6, 14, 16 and 19; a fronto-parietal network by
functional units 7, 10 and 12; and the visual system by functional units 5, 11, 15 and 18. Note
the striking similarity to the sensori-motor component ICA1, the posterior part of the default network ICA2,
the fronto-parietal network ICA3, and the visual component ICA4. Contrary to ICA, the current approach is
also able to model interactions between components, and a consistent pattern is revealed in which the
functional units with the highest within-unit connectivity also show the strongest between-unit connectivity.
Furthermore, the functional units appear to have symmetric connectivity profiles: e.g. functional unit
2 is strongly connected to functional unit 3 (sensori-motor system), and these both connect strongly
to the same other functional units, in this case 6 and 16 (default-mode network). Functional units
1, 4, 8, 9 and 17 we attribute to vascular noise, and these units appear to be less connected with the
remaining functional units.
In panel C of Figure 3 we tested the difference between medians in the connectivity of the extracted
functional units. Shown are connections that are significant at $p \leq 0.05$. Healthy individuals show
stronger connectivity among selected functional units relative to patients. The functional units involved are distributed throughout the brain and comprise the visual system (functional units 5 and 11),
the sensori-motor network (functional unit 2), and the fronto-parietal network (functional unit 10).
This is expected, since MS affects the brain globally through white-matter changes disseminated throughout the brain [12]. Patients with MS show stronger connectivity relative to healthy individuals
between selected parts of the sensori-motor (functional unit 13) and fronto-parietal networks (functional units 7 and 12). An interpretation of this finding could be that the communication between the
fronto-parietal and sensori-motor networks increases either as a maladaptive consequence of
the disease or as part of a beneficial compensatory mechanism to maintain motor function.
Figure 3: Panel A: Visualization of the MAP model over the 10 restarts. Shown are the functional
units, indicated in red, while circles indicate median within-unit link density and lines median between-unit
link density. Gray scale and line width code the link density between and within the
functional units using a logarithmic scale. Panel B: Selected resting-state components extracted
from a group independent component analysis (ICA). After temporal concatenation over
subjects, the Infomax ICA algorithm [2] was used to identify 20 spatially independent components.
Subsequently, the individual component time series were used in a regression model to obtain subject-specific
component maps [5]. The displayed ICA maps are based on one-sample t-tests corrected
for multiple comparisons at $p \leq 0.05$ using Gaussian random field theory. Panel C: AUC score for
relations between the extracted groups, thresholded at a significance level of $\alpha = 5\%$ based on a two-sided
rank-sum test. Blue indicates that the link density is larger for Normal than MS, yellow that
MS is larger than Normal. (A high-resolution version of the figure can be found in the supplementary
material.)
Table 2: Leave-one-out classification performance based on a support vector machine (SVM) with
a linear kernel, linear discriminant analysis (LDA), and K-nearest neighbor (KNN). Significance
levels were estimated by comparing to the classification performance of the corresponding classifiers with
randomly permuted class labels; bold indicates significant classification at $p \leq 0.05$.

       Raw data   PCA     ICA                Degree   IRM
SVM    51.39      55.56   63.89 (p ≤ 0.04)   59.72    72.22 (p ≤ 0.002)
LDA    59.72      51.39   63.89 (p ≤ 0.05)   51.39    75.00 (p ≤ 0.001)
KNN    38.89      58.33   56.94              51.39    66.67 (p ≤ 0.01)
Discriminating Normal subjects from MS: We evaluated the classification performance of the
subject-specific group link densities $\eta^{(n)}$ based on leave-one-out cross-validation. We considered
three standard classifiers: a soft-margin support vector machine (SVM) with a linear kernel ($C = 1$);
linear discriminant analysis (LDA) based on the pooled variance estimate (features projected by principal
component analysis to a 20-dimensional feature space prior to analysis); and K-nearest
neighbor (KNN) with $K = 3$. We compared the classifier performances to classifying the normalized
raw subject-specific voxel × time series, i.e. the matrix given by subject × voxel × time, as well as
the data projected to the most dominant 20-dimensional subspace (denoted PCA). For comparison
we also included a group ICA [5] analysis, as well as the performance using node degree (Degree)
as features, which has previously been very successful for classification of schizophrenia [6]. For
the IRM model we used the Bayesian average over predictions, which was dominated by the MAP
estimate given in Figure 3. For all the classification analyses we normalized each feature. The
classification results are given in Table 2. Group ICA as well as the proposed IRM model classify significantly
above random. The IRM model has a higher classification rate, and this rate is significant across all
the classifiers.
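The sketch below mirrors the described setup with scikit-learn, under the assumption that `X` holds one feature vector per subject (e.g. the vectorized $\eta^{(n)}$) and `y` the group labels; per-fold feature scaling is our reading of "we normalized each feature".

```python
# Leave-one-out comparison of the three classifiers described above.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def loo_accuracy(clf, X, y):
    hits = []
    for train, test in LeaveOneOut().split(X):
        model = make_pipeline(StandardScaler(), clf)
        model.fit(X[train], y[train])
        hits.append(model.predict(X[test])[0] == y[test][0])
    return 100.0 * np.mean(hits)                # percent correct

classifiers = {
    "SVM": SVC(kernel="linear", C=1.0),
    "LDA": make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis()),
    "KNN": KNeighborsClassifier(n_neighbors=3),
}
```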
Finally, we note that contrary to analyses based on temporal correlation, such as the ICA and PCA
approaches used for the classification, the benefit of mutual information is that it can take into account
higher-order dependencies that are not necessarily reflected by correlation. As such, a brain
region driven by the variance of another brain region can be captured by mutual information, whereas
this is not necessarily captured by correlation.
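A simple histogram (plug-in) MI estimator illustrates the point: in the toy example below, `y` depends on `x` only through its variance, so the correlation is near zero while the mutual information is clearly positive. The bin count is an arbitrary illustrative choice; the paper's own MI estimator is not specified here.

```python
# A plug-in MI estimator and a variance-driven dependence example.
import numpy as np

def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
y = rng.normal(scale=np.abs(x) + 0.1)          # y depends on the variance of x
print(np.corrcoef(x, y)[0, 1], mutual_information(x, y))
```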
4 Conclusion
The functional units extracted using the IRM model correspond well to previously described RSNs
[19, 9]. Whereas conventional models for assessing functional connectivity in rs-fMRI data often
aim to divide the brain into segregated networks, the IRM explicitly models relations between functional units, enabling visualization and analysis of interactions. Using classification models to predict
the subjects' disease state revealed that the IRM model had a higher prediction rate than discrimination based on the components extracted from a conventional group ICA approach [5]. The IRM readily
extends to directed graphs and to networks derived from task-related functional activation. As such,
we believe the proposed method constitutes a promising framework for the analysis of functionally
derived brain networks in general.
References
[1] S. Achard, R. Salvador, B. Whitcher, J. Suckling, and E. Bullmore. A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. The Journal of Neuroscience, 26(1):63–72, 2006.
[2] A. J. Bell and T. J. Sejnowski. An information maximization approach to blind source separation and blind deconvolution. Neural Computation, 7:1129–1159, 1995.
[3] B. Biswal, F. Z. Yetkin, V. M. Haughton, and J. S. Hyde. Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magnetic Resonance in Medicine, 34(4):537–541, 1995.
[4] E. Bullmore and O. Sporns. Complex brain networks: graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3):186–198, 2009.
[5] V. D. Calhoun, T. Adali, G. D. Pearlson, and J. J. Pekar. A method for making group inferences from functional MRI data using independent component analysis. Human Brain Mapping, 14:140–151, 2001.
[6] G. Cecchi, I. Rish, B. Thyreau, B. Thirion, M. Plaze, M.-L. Paillere-Martinot, C. Martelli, J.-L. Martinot, and J.-B. Poline. Discriminative network models of schizophrenia. Advances in Neural Information Processing Systems, 22:252–260, 2009.
[7] B. Chai, D. Walther, D. Beck, and L. Fei-Fei. Exploring functional connectivities of the human brain using multivariate information analysis. Advances in Neural Information Processing Systems, 22:270–278, 2009.
[8] D. B. Dahl. Sequentially-allocated merge-split sampler for conjugate and nonconjugate Dirichlet process mixture models. Technical report, Texas A&M University, 2005.
[9] J. S. Damoiseaux, S. A. R. B. Rombouts, F. Barkhof, P. Scheltens, C. J. Stam, S. M. Smith, and C. F. Beckmann. Consistent resting-state networks across healthy subjects. Proceedings of the National Academy of Sciences of the United States of America, 103(37):13848–13853, 2006.
[10] M. De Luca, S. Smith, N. De Stefano, A. Federico, and P. M. Matthews. Blood oxygenation level dependent contrast resting state networks are relevant to functional activity in the neocortical sensorimotor system. Experimental Brain Research, 167(4):587–594, 2005.
[11] V. M. Eguiluz, D. R. Chialvo, G. A. Cecchi, M. Baliki, and A. V. Apkarian. Scale-free brain functional networks. Physical Review Letters, 94(1):018102, 2005.
[12] M. Filippi and M. A. Rocca. MRI evidence for multiple sclerosis as a diffuse disease of the central nervous system. Journal of Neurology, 252(Suppl 5):v16–v24, 2005.
[13] K. J. Friston, L. Harrison, and W. D. Penny. Dynamic causal modelling. NeuroImage, 19(4):1273–1302, 2003.
[14] G. H. Glover, T. Q. Li, and D. Ress. Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR. Magnetic Resonance in Medicine, 44:162–167, 2000.
[15] S. Jain and R. M. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Journal of Computational and Graphical Statistics, 13(1):158–182, 2004.
[16] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of the 21st National AAAI Conference on Artificial Intelligence, 1:381–388, 2006.
[17] K. Nowicki and T. A. B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077–1087, 2001.
[18] J. W. Peterson, L. Bø, S. Mørk, A. Chang, and B. D. Trapp. Transected neurites, apoptotic neurons, and reduced inflammation in cortical multiple sclerosis lesions. Annals of Neurology, 50:389–400, 2001.
[19] M. E. Raichle, A. M. MacLeod, A. Z. Snyder, W. J. Powers, D. A. Gusnard, and G. L. Shulman. A default mode of brain function. Proceedings of the National Academy of Sciences of the United States of America, 98(2):676–682, 2001.
[20] A. J. Storkey, E. Simonotto, H. Whalley, S. Lawrie, L. Murray, and D. McGonigle. Learning structural equation models for fMRI. Advances in Neural Information Processing Systems, 19:1329–1336, 2007.
[21] B. D. Trapp, J. Peterson, R. M. Ransohoff, R. Rudick, S. Mørk, and L. Bø. Axonal transection in the lesions of multiple sclerosis. The New England Journal of Medicine, 338(5):278–285, 1998.
[22] A. Tsai, J. W. Fisher, III, C. Wible, W. M. Wells, III, J. Kim, and A. S. Willsky. Analysis of functional MRI data using mutual information. In MICCAI '99: Proc. of the Second Intern. Conf. on Medical Image Computing and Computer-Assisted Intervention, Lecture Notes in Computer Science, 1679:473–480, 1999.
[23] N. Tzourio-Mazoyer, B. Landeau, D. Papathanassiou, F. Crivello, O. Etard, N. Delcroix, B. Mazoyer, and M. Joliot. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage, 15(1):273–289, 2002.
[24] M. van den Heuvel, R. Mandl, and H. Hulshoff Pol. Normalized cut group clustering of resting-state fMRI data. PLoS ONE, 3(4), 2008.
[25] J. Walters-Williams and Y. Li. Estimation of mutual information: a survey. Lecture Notes in Computer Science, 5589:389–396, 2009.
[26] S. G. Waxman. Axonal conduction and injury in multiple sclerosis: the role of sodium channels. Nature Reviews Neuroscience, 7(12):932–941, 2006.
[27] Z. Xu, V. Tresp, K. Yu, and H. P. Kriegel. Infinite hidden relational models. In Proceedings of the 22nd International Conference on Uncertainty in Artificial Intelligence, 2006.
[28] L. Zhang, D. Samaras, N. Alia-Klein, N. Volkow, and R. Goldstein. Modeling neuronal interactivity using dynamic Bayesian networks. Advances in Neural Information Processing Systems, 18:1593–1600, 2006.
A Bayesian Framework for Figure-Ground Interpretation∗
Vicky Froyen
Center for Cognitive Science
Rutgers University, Piscataway, NJ 08854
Laboratory of Experimental Psychology
University of Leuven (K.U. Leuven), Belgium
[email protected]
Jacob Feldman
Center for Cognitive Science
Rutgers University, Piscataway, NJ 08854
[email protected]
Manish Singh
Center for Cognitive Science
Rutgers University, Piscataway, NJ 08854
[email protected]
Abstract
Figure/ground assignment, in which the visual image is divided into nearer (figural) and farther (ground) surfaces, is an essential step in visual processing, but its
underlying computational mechanisms are poorly understood. Figural assignment
(often referred to as border ownership) can vary along a contour, suggesting a
spatially distributed process whereby local and global cues are combined to yield
local estimates of border ownership. In this paper we model figure/ground estimation in a Bayesian belief network, attempting to capture the propagation of border
ownership across the image as local cues (contour curvature and T-junctions) interact with more global cues to yield a figure/ground assignment. Our network
includes as a nonlocal factor skeletal (medial axis) structure, under the hypothesis
that medial structure "draws" border ownership so that borders are owned by the
skeletal hypothesis that best explains them. We also briefly present a psychophysical experiment in which we measured local border ownership along a contour at
various distances from an inducing cue (a T-junction). Both the human subjects
and the network show similar patterns of performance, converging rapidly to a
similar pattern of spatial variation in border ownership along contours.
Figure/ground assignment (further referred to as f/g), in which the visual image is divided into nearer
(figural) and farther (ground) surfaces, is an essential step in visual processing. A number of factors are known to affect f/g assignment, including region size [9], convexity [7, 16], and symmetry
[1, 7, 11]. Figural assignment (often referred to as border ownership, under the assumption that the
figural side "owns" the border) is usually studied globally, meaning that entire surfaces and their
enclosing boundaries are assumed to receive a globally consistent figural status. But recent psychophysical findings [8] have suggested that border ownership can vary locally along a boundary,
even leading to a globally inconsistent figure/ground assignment, broadly consistent with electrophysiological evidence showing local coding for border ownership in area V2 as early as 68 msec
after image onset [20]. This suggests a spatially distributed and potentially competitive process of
figural assignment [15], in which adjacent surfaces compete to own their common boundary, with
figural status propagating across the image as this competition proceeds. But both the principles and
computational mechanisms underlying this process are poorly understood.
∗
V.F. was supported by a Fulbright Honorary Fellowship and by the Rutgers NSF IGERT program in Perceptual Science, NSF DGE 0549115; J.F. by NIH R01 EY15888; and M.S. by NSF CCF-0541185.
In this paper we consider how border ownership might propagate over both space and time, that is,
across the image as well as over the progression of computation. Following Weiss et al. [18] we
adopt a Bayesian belief network architecture, with nodes along boundaries representing estimated
border ownership, and connections arranged so that both neighboring nodes and nonlocal integrating
nodes combine to influence local estimates of border ownership. Our model is novel in two particular
respects: (a) we combine both local and global influences on border ownership in an integrated and
principled way; and (b) we include as a nonlocal factor skeletal (medial axis) influences on f/g
assignment. Skeletal structure has not been previously considered as a factor on border ownership,
but its relevance follows from a model [4] in which shapes are conceived of as generated by or
"grown" from an internal skeleton, with the consequence that their boundaries are perceptually
"owned" by the skeletal side.
We also briefly present a psychophysical experiment in which we measured local border ownership
along a contour, at several distances from a strong local f/g inducing cue, and at several time delays
after the onset of the cue. The results show measurable spatial differences in judged border ownership, with judgments varying with distance from the inducer; but no temporal effect, with essentially
asymptotic judgments even after very brief exposures. Both results are consistent with the behavior
of the network, which converges quickly to an asymptotic but spatially nonuniform f/g assignment.
1 The Model
The Network. For simplicity, we take an edge map as input for the model, assuming that edges
and T-junctions have already been detected. From this edge map we then create a Bayesian belief
network consisting of four hierarchical levels. At the input level the model receives evidence E
from the image, consisting of local contour curvature and T-junctions. The nodes for this level are
placed at equidistant locations along the contour. At the first level the model estimates local border
ownership. The border ownership, or B-nodes at this level are at the same locations as the E-nodes,
but are connected to their nearest neighbors, and are the parent of the E-node at their location. (As a
simplifying assumption, such connections are broken at T-junctions in such a way that the occluded
contour is disconnected from the occluder.) The highest level has skeletal nodes, S, whose positions
are defined by the circumcenters of the Delaunay triangulation on all the E-nodes, creating a coarse
medial axis skeleton [13]. Because of the structure of the Delaunay, each S-node is connected to
exactly three E-nodes from which they receive information about the position and the local tangent
of the contour. In the current state of the model the S-nodes are "passive", meaning their posteriors
are computed before the model is initiated. Between the S nodes and the B nodes are the grouping
nodes G. They have the same positions as the S-nodes and the same Delaunay connections, but to
B-nodes that have the same image positions as the E-nodes. They will integrate information from
distant B-nodes, applying an interiority cue that is influenced by the local strength of skeletal axes
as computed by the S-nodes (Fig. 1). Although this is a multiply connected network, we have found
that given reasonable parameters the model converges to intuitive posteriors for a variety of shapes
(see below).
Updating. Our goal is to compute the posterior p(Bi |I), where I is the whole image. Bi is a
binary variable coding for the local direction of border ownership, that is, the side that owns the
border. In order for border ownership estimates to be influenced by image structure elsewhere in
the image, information has to propagate throughout the network. To achieve this propagation, we
use standard equations for node updating [14, 12]. However, while all other connections are
directed, connections at the B-node level are undirected, causing each node to be both child and parent
node at the same time. Considering only the B-node level, a node $B_i$ is only separated from the
rest of the network by its two neighbors. Hence the Markovian property applies, in that Bi only
needs to get iterative information from its neighbors to eventually compute $p(B_i \mid I)$. So considering the whole network, at each iteration $t$, $B_i$ receives information from its child $E_i$, from
its parents, that is, the neighboring nodes ($B_{i+1}$ and $B_{i-1}$), as well as from all grouping nodes connected to it ($G_j, \ldots, G_m$). The latter encode for interiority versus exteriority, interiority meaning that
the B-node's estimated figural direction points towards the G-node in question, exteriority meaning
that it points away. Integrating all this information creates a multidimensional likelihood function:
$p(B_i \mid B_{i-1}, B_{i+1}, G_j, \ldots, G_m)$. Because of its complexity we choose to approximate it (assuming
all nodes are marginally independent of each other when conditioned on Bi ) by
Figure 1: Basic network structure of the model. Both skeletal (S-nodes) and border-ownership nodes
(B-nodes) get evidence from E-nodes, though of different types. S-nodes receive mere positional information, while B-nodes receive information about local curvature and the presence of T-junctions.
Because of the structure of the Delaunay triangulation, S-nodes and G-nodes (grouping nodes) always get input from exactly three nodes, respectively E- and B-nodes. The gray color depicts the
fact that this part of the network is computed before the model is initiated and does not thereafter
interact with the dynamics of the model.
$$p(B_i \mid P_j, \ldots, P_m) \approx \prod_j^m p(B_i \mid P_j) \qquad (1)$$

where the $P_j$'s are the parents of $B_i$. Given this, at each iteration, each node $B_i$ performs the
following computation:

$$\mathrm{Bel}(B_i) \leftarrow c\, \lambda(B_i)\, \pi(B_i)\, \kappa(B_i)\, \nu(B_i) \qquad (2)$$

where conceptually $\lambda$ stands for bottom-up information, $\pi$ for top-down information, and $\kappa$ and $\nu$
for information received from within the same level. More formally,

$$\lambda(B_i) \propto p(E \mid B_i) \qquad (3)$$
$$\pi(B_i) \propto \prod_j^m \sum_{G_j} p(B_i \mid G_j)\, \pi_{G_j}(B_i) \qquad (4)$$

and analogously to equation (4) for $\kappa(B_i)$ and $\nu(B_i)$, which compute information coming from $B_{i-1}$
and $B_{i+1}$ respectively. For these, the messages $\pi_{B_{i-1}}(B_i)$, $\pi_{B_{i+1}}(B_i)$, and $\pi_{G_j}(B_i)$ are

$$\pi_{G_j}(B_i) \propto c'\, \pi(G_j) \prod_{k \neq i} \lambda_{B_k}(G_j) \qquad (5)$$
$$\pi_{B_{i-1}}(B_i) \propto c'\, \lambda(B_{i-1})\, \pi(B_{i-1})\, \kappa(B_{i-1}) \qquad (6)$$

and $\pi_{B_{i+1}}(B_i)$ is analogous to $\pi_{B_{i-1}}(B_i)$, with $c'$ and $c$ being normalization constants. Finally, for
the G-nodes:
$$\mathrm{Bel}(G_i) \leftarrow c\, \lambda(G_i)\, \pi(G_i) \qquad (7)$$
$$\lambda(G_i) \propto \prod_j \lambda_{B_j}(G_i) \qquad (8)$$
$$\lambda_{B_j}(G_i) \propto \sum_{B_j} \lambda(B_j)\, p(B_j \mid G_i) \Big[ \kappa(B_j)\, \nu(B_j) \prod_{k \neq i}^{m} \sum_{G_k} p(B_j \mid G_k)\, \pi_{G_k}(B_j) \Big] \qquad (9)$$
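As a rough illustration of equations (2)-(6), the sketch below performs one synchronous update sweep over an open chain of binary B-nodes, assuming precomputed top-down messages from the G-nodes and the 2x2 conditional tables described below; the message symbols follow our reconstruction of the notation, and the sweep must be iterated until the beliefs stop changing.

```python
# One update sweep over the B-node level of the network.
import numpy as np

def normalize(v):
    return v / v.sum()

def sweep(lam, P_BB, P_BG, G_links, pi_G):
    """lam: (N,2) bottom-up evidence; P_BB, P_BG: 2x2 conditionals (rows B_i);
    G_links[i]: indices of G-nodes connected to B_i; pi_G: (M,2) messages."""
    N = lam.shape[0]
    pi = np.ones((N, 2))
    for i in range(N):                          # top-down, equation (4)
        for j in G_links[i]:
            pi[i] *= P_BG @ pi_G[j]
    kappa = np.ones((N, 2))
    for i in range(1, N):                       # left-to-right lateral messages
        kappa[i] = P_BB @ normalize(lam[i-1] * pi[i-1] * kappa[i-1])
    nu = np.ones((N, 2))
    for i in range(N - 2, -1, -1):              # right-to-left lateral messages
        nu[i] = P_BB @ normalize(lam[i+1] * pi[i+1] * nu[i+1])
    return np.array([normalize(lam[i] * pi[i] * kappa[i] * nu[i])
                     for i in range(N)])        # Bel(B_i), equation (2)
```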
The posteriors of the S-nodes are used to compute the $\pi(G_i)$. This posterior computes how well
the S-node at each position explains the contour, that is, how well it accounts for the cues flowing
from the E-nodes it is connected to. Each Delaunay connection between S- and E-nodes can be
seen as a rib that sprouts from the skeleton. More specifically, each rib sprouts in a direction that is
normal (perpendicular) to the tangent of the contour at the E-node, plus a random error $\phi_i$ chosen
independently for each rib from a von Mises distribution centered on zero, i.e. $\phi_i \sim V(0, \kappa_S)$ with
spread parameter $\kappa_S$ [4]. The rib lengths $\rho_i$ are drawn from an exponentially decreasing density function
$p(\rho_i) \propto e^{-\lambda_S \rho_i}$ [4]. We can now express how well this node "explains" the three E-nodes it is
connected to via the probability that this S-node deserves to be a skeletal node or not,
$$p(S = \text{true} \mid E_1, E_2, E_3) \propto \prod_i p(\phi_i)\, p(\rho_i) \qquad (10)$$
with $S = \text{true}$ depicting that this S-node deserves to be a skeletal node. From this we then compute
the prior $\pi(G_i)$ in such a way that good (high-posterior) skeletal nodes induce a high interiority bias,
and hence a stronger tendency to induce figural status. Conversely, bad (low-posterior) skeletal nodes
create a prior close to indifferent (uniform) and thus have less (or no) influence on figural status.
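A sketch of the (unnormalized) score of equation (10), with the von Mises and exponential densities taken from scipy; the parameter names follow our notation above, and the default values are the hand-picked ones reported later in the paper.

```python
# The unnormalized skeletal-node score of equation (10).
import numpy as np
from scipy.stats import vonmises, expon

def skeletal_score(rib_angle_errors, rib_lengths, kappa_S=16.0, lambda_S=0.025):
    """p(S = true | E1, E2, E3) up to normalization, for the three ribs."""
    p_angles = vonmises.pdf(rib_angle_errors, kappa_S)      # centered on zero
    p_lengths = expon.pdf(rib_lengths, scale=1.0 / lambda_S)
    return float(np.prod(p_angles) * np.prod(p_lengths))
```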
Likelihood functions: Finally we need to express the likelihood functions necessary for the updating rules described above. The first two likelihood functions are part of $p(E_i \mid B_i)$, one for each of
the local cues. The first one, reflecting local curvature, gives the probability of the orientations of
the two vectors inherent to $E_i$ ($\theta_1$ and $\theta_2$) given the direction of figure ($\phi$) encoded in $B_i$, as a von
Mises density centered on $\phi$, i.e. $\theta_i \sim V(\phi, \kappa_{EB})$. The second likelihood function, reflecting the
presence of a T-junction, simply assumes a fixed likelihood when a T-junction is present, that is,
$p(\text{T-junction} = \text{true} \mid B_i) = \alpha_T$, where $B_i$ places the direction of figure in the direction of the occluder. This likelihood function is only in effect when a T-junction is present, replacing the curvature
cue at that node.
The third likelihood function serves to keep consistency between nodes of the first level. This function, $p(B_i \mid B_{i-1})$ or $p(B_i \mid B_{i+1})$, is used to compute $\kappa(B_i)$ and $\nu(B_i)$ and is defined as a $2 \times 2$ conditional
probability matrix with a single free parameter, $\alpha_{BB}$ (the probability that the figural direction at both
B-nodes is the same). A fourth and final likelihood function, $p(B_i \mid G_j)$, serves to propagate information between levels one and two. This likelihood function is a $2 \times 2$ conditional probability matrix
with one free parameter, $\alpha_{BG}$. In this case $\alpha_{BG}$ encodes the probability that the figural direction of the B-node is in the direction of the exterior or interior preference of the G-node. In total this
brings us to six free parameters in the model: $\kappa_S$, $\lambda_S$, $\kappa_{EB}$, $\alpha_T$, $\alpha_{BB}$, and $\alpha_{BG}$.
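For concreteness, the two 2x2 conditional tables can be built directly from their single free parameters; rows index the value of $B_i$ and columns the conditioning node's value. This construction is our reading of the description above.

```python
# The 2x2 conditional probability tables of the model.
import numpy as np

def consistency_table(alpha_BB):
    """p(B_i | B_neighbor): same figural direction with probability alpha_BB."""
    return np.array([[alpha_BB, 1 - alpha_BB],
                     [1 - alpha_BB, alpha_BB]])

def grouping_table(alpha_BG):
    """p(B_i | G_j): agreement with the G-node's interiority/exteriority
    preference with probability alpha_BG."""
    return np.array([[alpha_BG, 1 - alpha_BG],
                     [1 - alpha_BG, alpha_BG]])
```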
2 Basic Simulations
To evaluate the performance of the model, we first tested it on several basic stimulus configurations
in which the desired outcome is intuitively clear: a convex shape, a concave shape, a pair of overlapping shapes, and a pair of non-overlapping shapes (Fig. 2,3). The convex shape is the simplest
in that curvature never changes sign. The concave shape includes a region with oppositely signed
curvature. (The shape is naturally described as predominantly positively curved with a region of negative curvature, i.e. a concavity. But note that it can also be interpreted as predominantly negatively
curved "window" with a region of positive curvature, although this is not the intuitive interpretation.)
The overlapping pair of shapes consists of two convex shapes with one partly occluding the other,
creating a competition between the two shapes for the ownership of the common borderline. Finally
the non-overlapping shapes comprise two simple convex shapes that do not touch?again setting up
a competition for ownership of the two inner boundaries (i.e. between each shape and the ground
space between them). Fig. 2 shows the network structures for each of these four cases.
Figure 2: Network structure for the four shape categories (left to right: convex, concave, overlapping, non-overlapping shapes). Blue depicts the locations of the B-nodes (and also the E-nodes),
the red connections are the connections between B-nodes, the green connections are connections
between B-nodes and G-nodes, and the G-nodes (and also the S-nodes) range from orange to dark red.
This colour code depicts low (orange) to high (dark red) probability that a node is a skeletal node, and
hence the strength of the interiority cue.
Running our model with hand-estimated parameter values yields highly intuitive posteriors (Fig. 3),
an essential "sanity check" to ensure that the network approximates human judgments in simple
cases. For the convex shape the model assigns figure to the interior just as one would expect even
based solely on local curvature (Fig. 3A). In the concave figure (Fig. 3B), estimated border ownership begins to reverse inside the deep concavity. This may seem surprising, but actually closely
matches empirical results obtained when local border ownership is probed psychophysically inside
a similarly deep concavity, i.e. a ?negative part? in which f/g seems to partly reverse [8]. For the
overlapping shapes posteriors were also intuitive, with the occluding shape interpreted as in front
and owning the common border (Fig. 3C). Finally, for the two non-overlapping shapes the model
computed border-ownership just as one would expect if each shape were run separately, with each
shape treated as figural along its entire boundary (Fig. 3D). That is, even though there is skeletal
structure in the ground-region between the two shapes (see Fig. 2D), its posterior is weak compared
to the skeletal structure inside the shapes, which thus loses the competition to own the boundary
between them.
For all these configurations, the model not only converged to intuitive estimates but did so rapidly
(Fig. 4), always in fewer cycles than would be expected by pure lateral propagation, niterations <
Nnodes [18] (with these parameters, typically about five times faster).
Figure 3: Posteriors after convergence for the four shape categories (left to right: convex, concave,
overlapping, non-overlapping). Arrows indicate estimated border ownership, with direction pointing
to the perceived figural side, and length proportional to the magnitude of the posterior. All four
simulations used the same parameters.
Figure 4: Convergence of the model for the basic shape categories. The vertical lines represent the
point of convergence for each of the three shape categories. The posterior change is calculated as
$\sum_i |p(B_i = 1 \mid I)_t - p(B_i = 1 \mid I)_{t-1}|$ at each iteration.
3 Comparison to human data
Beyond the simple cases reviewed above, we wished to submit our network to a more fine-grained
comparison with human data. To this end we compared its performance to that of human subjects
in an experiment we conducted (to be presented in more detail in a future paper). Briefly, our
experiment involved finding evidence for propagation of f/g signals across the image. Subjects were
first shown a stimulus in which the f/g configuration was globally and locally unambiguous and
consistent: a smaller rectangle partly occluding a larger one (Fig. 5A), meaning that the smaller
(front) one owns the common border. Then this configuration was perturbed by adding two bars,
of which one induced a local f/g reversal, making it now appear locally that the larger rectangle
owned the border (Fig. 5B). (The other bar in the display does not alter f/g interpretation, but was
included to control for the attentional affects of introducing a bar in the image.) The inducing bar
creates T-junctions that serve as strong local f/g cues, in this case tending to reverse the prior global
interpretation of the figure. We then measured subjective border ownership along the central contour
at various distances from the inducing bar, and at different times after the onset of the bar (25ms,
100ms and 250ms). We measured border ownership locally using a method introduced in [8] in
which a local motion probe is introduced at a point on the boundary between two color regions of
different colors, and the subject is asked which color appeared to move. Because the figural side
"owns" the border, the response reflects perceived figural status.
The goal of the experiment was to actually measure the progression of the influence of the inducing
T-junction as it (hypothetically) propagated along the boundary. Briefly, we found no evidence of
temporal differences, meaning that f/g judgments were essentially constant over time, suggesting
rapid convergence of local f/g assignment. (This is consistent with the very rapid convergence
of our network, which would suggest a lack of measurable temporal differences except at much
shorter time scales than we measured.) But we did find a progressive reduction of f/g reversal with
increasing distance from the inducer?that is, the influence of the T-junction decayed with distance.
Mean responses aggregated over subjects (shortest delay only) are shown in Fig. 6.
In order to run our model on this stimulus (which has a much more complex structure than the simple
figures tested above) we had to make some adjustments. We removed the bars from the edge map,
leaving only the T-junctions as underlying cues. This was a necessary first step because our model is
not yet able to cope with skeletons that are split up by occluders. (The larger rectangle's skeleton has
been split up by the lower bar.) In this way all contours except those created by the bars were used to
create the network (Fig. 7). Given this network we ran the model using hand-picked parameters that
Figure 5: Stimuli used in the experiment. A. Initial stimulus with locally and globally consistent and
unambiguous f/g. B. Subsequently bars were added of which one (the top bar in this case) created a
local reversal of f/g. C. Positions at which local f/g judgments of subjects were probed.
Figure 6: Results from our experiment aggregated for all 7 subjects (shortest delay only) are shown
in red. The x-axis shows distance from the inducing bar at which f/g judgment was probed. The
y-axis shows the proportion of trials on which subjects judged the smaller rectangle to own the
boundary. As can be seen, the further from the T-junction, the lower the f/g reversal. The fitted
model (green curve) shows very similar pattern. Horizontal black line indicates chance performance
(ambiguous f/g).
gave us the best possible qualitative similarity to the human data. The parameters used never entailed
total elimination of the influence of any likelihood function ($\kappa_S = 16$, $\lambda_S = 0.025$, $\kappa_{EB} = 0.5$,
$\alpha_T = 0.9$, $\alpha_{BB} = 0.9$, and $\alpha_{BG} = 0.6$). As can be seen in Fig. 6, the border-ownership estimates at
the locations where we had data show compelling similarities to human judgments. Furthermore
along the entire contour the model converged to intuitive border-ownership estimates (Fig. 7) very
rapidly (within 36 iterations). The fact that our model yielded intuitive estimates for the current
network in which not all contours were completed shows another strength of our model. Because
our model included grouping nodes, it did not require contours to be amodally completed [6] in
order for information to propagate.
4
Conclusion
In this paper we proposed a model rooted in Bayesian belief networks to compute figure/ground.
The model uses both local and global cues, combined in a principled way, to achieve a stable and
apparently psychologically reasonable estimate of border ownership. Local cues included local
curvature and T-junctions, both well-established cues to f/g. Global cues included skeletal structure,
Figure 7: (left) Node structure for the experimental stimulus. (right) The model's local border-ownership estimates after convergence.
a novel cue motivated by the idea that strongly axial shapes tend to be figural and thus own their
boundaries. We successfully tested this model on both simple displays, in which it gave intuitive
results, and on a more complex experimental stimulus, in which it gave a close match to the pattern
of f/g propagation found in our subjects. Specifically, the model, like the human subjects rapidly
converged to a stable local f/g interpretation.
Our model's structure shows several interesting parallels to properties of neural coding of border
ownership in visual cortex. Some cortical cells (end-stopped cells) appear to code for local curvature
[3] and T-junctions [5]. The B-nodes in our model could be seen as corresponding to cells that code
for border ownership [20]. Furthermore, some authors [2] have suggested that recurrent feedback
loops between border ownership cells in V2 and cells in V4 (corresponding to G-nodes in our model)
play a role in the rapid computation of border ownership. The very rapid convergence we observed
in our model likewise appears to be due to the connections between B-nodes and G-nodes. Finally
scale-invariant shape representations (such as, speculatively, those based on skeletons) are thought
to be present in higher cortical regions such as IT [17], which project down to earlier areas in ways
that are not yet understood.
A number of parallels to past models of f/g should be mentioned. Weiss [18] pioneered the application of belief networks to the f/g problem, though their network only considered a more restricted
set of local cues and no global ones, such that information only propagated along the contour. Furthermore it has not been systematically compared to human judgments. Kogo et al. [10] proposed
an exponential decay of f/g signals as they spread throughout the image. Our model has a similar
decay for information going through the G-nodes, though it is also influenced by an angular factor
defined by the position of the skeletal node. Like the model by Li Zhaoping [19], our model includes
horizontal propagation between B-nodes, analogous to border-ownership cells in her model. A neurophysiological model by Craft et al. [2] defines grouping cells coding for an interiority preference
that decays with the size of the receptive fields of these grouping cells. Our model takes this a step
further by including shape (skeletal) structure as a factor in interiority estimates, rather than simply
size of receptive fields (which is similar to the rib lengths in our model).
Currently, our use of skeletons as shape representations is still limited to medial axis skeletons and
surfaces that are not split up by occluders. Our future goals including integrating skeletons in a more
robust way following the probabilistic account suggested by Feldman and Singh [4]. Eventually, we
hope to fully integrate skeleton computation with f/g computation so that the more general problem
of shape and surface estimation can be approached in a coherent and unified fashion.
References
[1] P. Bahnsen. Eine Untersuchung über Symmetrie und Asymmetrie bei visuellen Wahrnehmungen. Zeitschrift für Psychologie, 108:129–154, 1928.
[2] E. Craft, H. Schütze, E. Niebur, and R. von der Heydt. A neural model of figure-ground organization. Journal of Neurophysiology, 97:4310–4326, 2007.
[3] A. Dobbins, S. W. Zucker, and M. S. Cynader. Endstopping and curvature. Vision Research, 29:1371–1387, 1989.
[4] J. Feldman and M. Singh. Bayesian estimation of the shape skeleton. Proceedings of the National Academy of Sciences, 103:18014–18019, 2006.
[5] B. Heider, V. Meskenaite, and E. Peterhans. Anatomy and physiology of a neural mechanism defining depth order and contrast polarity at illusory contours. European Journal of Neuroscience, 12:4117–4130, 2000.
[6] G. Kanizsa. Organization in Vision. New York: Praeger, 1979.
[7] G. Kanizsa and W. Gerbino. Vision and Artifact, chapter Convexity and symmetry in figure-ground organisation, pages 25–32. New York: Springer, 1976.
[8] S. Kim and J. Feldman. Globally inconsistent figure/ground relations induced by a negative part. Journal of Vision, 9:1534–7362, 2009.
[9] K. Koffka. Principles of Gestalt Psychology. Lund Humphries, London, 1935.
[10] N. Kogo, C. Strecha, L. Van Gool, and J. Wagemans. Surface construction by a 2-D differentiation-integration process: a neurocomputational model for perceived border ownership, depth, and lightness in Kanizsa figures. Psychological Review, 117:406–439, 2010.
[11] B. Machielsen, M. Pauwels, and J. Wagemans. The role of vertical mirror-symmetry in visual shape detection. Journal of Vision, 9:1–11, 2009.
[12] K. Murphy, Y. Weiss, and M. I. Jordan. Loopy belief propagation for approximate inference: an empirical study. Proceedings of Uncertainty in AI, pages 467–475, 1999.
[13] R. L. Ogniewicz and O. Kübler. Hierarchic Voronoi skeletons. Pattern Recognition, 28:343–359, 1995.
[14] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
[15] M. A. Peterson and E. Skow. Inhibitory competition between shape properties in figure-ground perception. Journal of Experimental Psychology: Human Perception and Performance, 34:251–267, 2008.
[16] K. A. Stevens and A. Brookes. The concave cusp as a determiner of figure-ground. Perception, 17:35–42, 1988.
[17] K. Tanaka, H. Saito, Y. Fukada, and M. Moriya. Coding visual images of objects in the inferotemporal cortex of the macaque monkey. Journal of Neurophysiology, 66:170–189, 1991.
[18] Y. Weiss. Interpreting images by propagating Bayesian beliefs. Advances in Neural Information Processing Systems, 9:908–915, 1997.
[19] L. Zhaoping. Border ownership from intracortical interactions in visual area V2. Neuron, 47(1):143–153, 2005.
[20] H. Zhou, H. S. Friedman, and R. von der Heydt. Coding of border ownership in monkey visual cortex. The Journal of Neuroscience, 20:6594–6611, 2000.
change:2 included:4 specifically:2 except:2 total:2 partly:3 experimental:4 tendency:1 uber:1 craft:2 occluding:3 formally:1 hypothetically:1 internal:1 latter:1 relevance:1 heider:1 evaluate:1 tested:3 |
3,378 | 4,059 | Functional Geometry Alignment
and Localization of Brain Areas
Georg Langs, Polina Golland
Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology
Cambridge, MA 02139, USA
[email protected], [email protected]
Yanmei Tie, Laura Rigolo, Alexandra J. Golby
Department of Neurosurgery, Brigham and Women's Hospital, Harvard Medical School
Boston, MA 02115, USA
[email protected], [email protected]
[email protected]
Abstract
Matching functional brain regions across individuals is a challenging task, largely
due to the variability in their location and extent. It is particularly difficult, but
highly relevant, for patients with pathologies such as brain tumors, which can
cause substantial reorganization of functional systems. In such cases spatial registration based on anatomical data is only of limited value if the goal is to establish
correspondences of functional areas among different individuals, or to localize
potentially displaced active regions. Rather than rely on spatial alignment, we
propose to perform registration in an alternative space whose geometry is governed by the functional interaction patterns in the brain. We first embed each
brain into a functional map that reflects connectivity patterns during a fMRI experiment. The resulting functional maps are then registered, and the obtained
correspondences are propagated back to the two brains. In application to a language fMRI experiment, our preliminary results suggest that the proposed method
yields improved functional correspondences across subjects. This advantage is
pronounced for subjects with tumors that affect the language areas and thus cause
spatial reorganization of the functional regions.
1 Introduction
Alignment of functional neuroanatomy across individuals forms the basis for the study of the functional organization of the brain. It is important for localization of specific functional regions and
characterization of functional systems in a population. Furthermore, the characterization of variability in location of specific functional areas is itself informative of the mechanisms of brain formation
and reorganization. In this paper we propose to align neuroanatomy based on the functional geometry of fMRI signals during specific cognitive processes. For each subject, we construct a map based
on spectral embedding of the functional connectivity of the fMRI signals and register those maps to
establish correspondence between functional areas in different subjects.
Standard registration methods that match brain anatomy, such as the Talairach normalization [21]
or non-rigid registration techniques like [10, 20], accurately match the anatomical structures across
individuals. However the variability of the functional locations relative to anatomy can be substantial
[8, 22, 23], which limits the usefulness of such alignment in functional experiments. The relationship
between anatomy and function becomes even less consistent in the presence of pathological changes
in the brain, caused by brain tumors, epilepsy or other diseases [2, 3].
Registration based on anatomical data
brain 1
registration
brain 2
Registration based on the functional geometry
brain 1
embedding
registration
embedding
brain 2
Figure 1: Standard anatomical registration and the proposed functional geometry alignment. Functional geometry alignment matches the diffusion maps of fMRI signals of two subjects.
Integrating functional features into the registration process promises to alleviate this challenge. Recently proposed methods match the centers of activated cortical areas [22, 26], or estimate dense
correspondences of cortical surfaces [18]. The fMRI signals at the surface points serve as a feature
vector, and registration is performed by maximizing the inter-subject fMRI correlation of matched
points, while at the same time regularizing the surface warp to preserve cortical topology and penalizing cortex folding and metric distortion, similar to [9]. In [7] registration of a population of
subjects is accomplished by using the functional connectivity pattern of cortical points as a descriptor of the cortical surface. It is warped so that the Frobenius norm of the difference between the
connectivity matrices of the reference subject and the matched subject is minimized, while at the
same time a topology-preserving deformation of the cortex is enforced.
All the methods described above rely on a spatial reference frame for registration, and use the functional characteristics as a feature vector of individual cortical surface points or the entire surface.
This might have limitations in the case of severe pathological changes that cause a substantial reorganization of the functional structures. Examples include the migration to the other hemisphere,
changes in topology of the functional maps, or substitution of functional roles played by one damaged region with another area. In contrast, our approach to functional registration does not rely on
spatial consistency.
Spectral embedding [27] represents data points in a map that reflects a large set of measured pairwise affinity values in a Euclidean space. Previously we used spectral methods to map voxels of
a fMRI sequence into a space that captures joint functional characteristics of brain regions [14].
This approach represents the level of interaction by the density in the embedding. In [24], different
embedding methods were compared in a study of parceled resting-state fMRI data. Functionally homogeneous units formed clusters in the embedding. In [11] multidimensional scaling was employed
to retrieve a low dimensional representation of positron emission tomography (PET) signals after
selecting sets of voxels by the standard activation detection technique [12].
Here we propose and demonstrate a functional registration method that operates in a space that
reflects functional connectivity patterns of the brain. In this space, the connectivity structure is captured by a structured distribution of points, or functional geometry. Each point in the distribution
represents a location in the brain and the relation of its fMRI signal to signals at other locations.
Fig. 1 illustrates the method. To register functional regions among two individuals, we first embed
both fMRI volumes independently, and then obtain correspondences by matching the two point distributions in the functional geometry. We argue that such a representation offers a more natural view
of the co-activation patterns than the spatial structure augmented with functional feature vectors.
The functional geometry can handle long-range reorganizations and topological variability in the
functional organization of different individuals. Furthermore, by translating connectivity strength
to distances we are able to regularize the registration effectively. Strong connections are preserved
during registration in the map by penalizing high frequencies in the map deformation field.
The clinical goal of our work is to reliably localize language areas in tumor patients. The functional
connectivity pattern for a specific area provides a refined representation of its activity that can augment the individual activation pattern. Our approach is to utilize connectivity information to improve
localization of the functional areas in tumor patients. Our method transfers the connectivity patterns
from healthy subjects to tumor patients. The transferred patterns then serve as a patient-specific
prior for functional localization, improving the accuracy of detection. The functional geometry we
use is largely independent of the underlying anatomical organization. As a consequence, our method
handles substantial changes in spatial arrangement of the functional areas that typically present
significant challenges for anatomical registration methods. Such functional priors promise to improve
detection accuracy in the challenging case of language area localization. The mapping of healthy
networks to patients provides additional evidence for the location of the language areas. It promises
to enhance accuracy and robustness of localization.
In addition to localization, studies of reconfiguration mechanisms in the presence of lesions aim
to understand how specific sub-areas are redistributed (e.g., do they migrate to a compact area, or
to other intact language areas). While standard detection identifies the regions whose activation
is correlated with the experimental protocol, we seek a more detailed description of the functional
roles of the detected regions, based on the functional connectivity patterns.
We evaluate the method on healthy control subjects and brain tumor patients who perform language
mapping tasks. The language system is highly distributed across the cortex. Tumor growth sometimes causes a reorganization that sustains language ability of the patient, even though the anatomy
is severely changed. Our initial experimental results indicate that the proposed functional alignment
outperforms anatomical registration in predicting activation in target subjects (both healthy controls
and patients). Furthermore functional alignment can handle substantial reorganizations and is much
less affected by the tumor presence than anatomical registration.
2 Embedding the brain in a functional geometry
We first review the representation of the functional geometry that captures the co-activation patterns
in a diffusion map defined on the fMRI voxels [6, 14]. Given an fMRI sequence I ∈ R^(T×N) that
contains N voxels, each characterized by an fMRI signal over T time points, we calculate a matrix
C ∈ R^(N×N) that assigns each pair of voxels ⟨k, l⟩ with corresponding time courses I_k and I_l a
non-negative symmetric weight

c(k, l) = e^(corr(I_k, I_l)/ε),    (1)

where corr is the correlation coefficient of the two signals I_k and I_l, and ε is the speed of weight
decay. We define a graph whose vertices correspond to voxels and whose edge weights are determined by C. In practice, we discard all edges that have a weight below a chosen threshold if they
connect nodes with a large distance in the anatomical space. This construction yields a sparse graph
which is then transformed into a Markov chain. Note that in contrast to methods like multidimensional scaling, this sparsity reflects the intuition that meaningful information about the connectivity
structure is encoded by the high correlation values.
We transform the graph into a Markov chain on the set of nodes by the normalized graph Laplacian
construction [5]. The degree of each node, g(k) = Σ_l c(k, l), is used to define the directed edge
weights of the Markov chain as

p(k, l) = c(k, l) / g(k),    (2)

which can be interpreted as transition probabilities along the graph edges. This set of probabilities
defines a diffusion operator Pf(x) = Σ_y p(x, y) f(y) on the graph vertices (voxels). The diffusion
operator integrates all pairwise relations in the graph and defines a geometry on the entire set of
fMRI signals.
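As a concrete reading of Eqs. (1)-(2), the following Python/NumPy sketch builds the affinity and transition matrices from an fMRI data matrix. It is illustrative, not the authors' code: the decay parameter eps, the correlation threshold used for sparsification, the assumption that every voxel retains at least one edge, and the omission of the anatomical-distance criterion for edge pruning are all simplifications.

    import numpy as np

    def markov_from_fmri(I, eps=0.5, corr_threshold=0.3):
        # I: T x N array of fMRI signals (T time points, N voxels).
        R = np.corrcoef(I, rowvar=False)     # N x N matrix of corr(I_k, I_l)
        C = np.exp(R / eps)                  # Eq. (1): symmetric affinity weights
        C[R < corr_threshold] = 0.0          # sparsify: keep only strong edges
        np.fill_diagonal(C, 0.0)
        g = C.sum(axis=1)                    # node degrees g(k)
        return C / g[:, None], g             # Eq. (2): row-stochastic transitions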
We embed the graph in a Euclidean space via an eigenvalue decomposition of P [6]. The eigenvalue
decomposition of the operator P results in a sequence of decreasing eigenvalues λ_1, λ_2, . . . and
corresponding eigenvectors ψ_1, ψ_2, . . . that satisfy Pψ_i = λ_i ψ_i and constitute the so-called
diffusion map:

Ψ_t = ⟨λ_1^t ψ_1, . . . , λ_w^t ψ_w⟩,    (3)

where w ≤ T is the dimensionality of the representation, and t is a parameter that controls scaling
of the axes in this newly defined space. Ψ_t(k) ∈ R^w is the representation of voxel k in the functional
geometry; it comprises the kth components of the first w eigenvectors. We will refer to R^w as
the functional space. The global structure of the functional connectivity is reflected in the point
distribution Ψ_t. The axes of the eigenspace are the directions that capture the highest amount of
structure in the connectivity landscape of the graph.
This functional geometry is governed by the diffusion distance Dt on the graph: Dt (k, l) is defined
through the probability of traveling between two vertices k and l by taking all paths of at most t steps
Figure 2: Maps of two subjects in the process of registration: (a) Left and right: the axial and
sagittal views of the points in the two brains. The two central columns show plots of the first
three dimensions of the embedding in the functional geometry after coarse rotational alignment. (b)
During alignment, a map is represented as a Gaussian mixture model. The colors in both plots
indicate clusters which are only used for visualization.
into account. The transition probabilities are based on the functional connectivity of pairs of nodes.
Thus the diffusion distance integrates the connectivity values over possible paths that connect two
points and defines a geometry that captures the entirety of the connectivity structure. It corresponds
to the operator P^t parameterized by the diffusion time t:

D_t(k, l) = Σ_{i=1,...,N} (p_t(k, i) − p_t(l, i))^2 / φ(i),  where φ(i) = g(i) / Σ_u g(u).    (4)
The distance Dt is low if there is a large number of paths of length t with high transition probabilities
between the nodes k and l.
The diffusion distance corresponds to the Euclidean distance in the embedding space: ‖Ψ_t(k) −
Ψ_t(l)‖ = D_t(k, l). The functional relations between fMRI signals are translated into spatial distances in the functional geometry [14]. This particular embedding method is closely related to other
spectral embedding approaches [17]; the parameter t controls the range of graph nodes that influence
a certain local configuration.
To facilitate notation, we assume the diffusion time t is fixed in the remainder of the paper, and omit
it from the equations. The resulting maps are the basis for the functional registration of the fMRI
volumes.
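The embedding itself reduces to an eigendecomposition. The sketch below continues the one above and follows Eq. (3); taking real parts of the eigenpairs of the non-symmetric P and dropping the trivial constant eigenvector (λ = 1) are simplifications one would refine in practice.

    import numpy as np

    def diffusion_map(P, w=10, t=2):
        lam, psi = np.linalg.eig(P)              # eigenpairs of the diffusion operator
        order = np.argsort(-lam.real)            # sort by decreasing eigenvalue
        lam, psi = lam.real[order], psi.real[:, order]
        lam, psi = lam[1:w + 1], psi[:, 1:w + 1] # drop the trivial lambda = 1 mode
        return (lam ** t) * psi                  # row k is Psi_t(k) in R^w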
3 Functional geometry alignment
Let Ψ^0 and Ψ^1 be the functional maps of two subjects. Ψ^0 and Ψ^1 are point clouds embedded in
a w-dimensional Euclidean space. The points in the maps correspond to voxels and registration of
the maps establishes correspondences between brain regions of the subjects. Our goal is to estimate
correspondences of points in the two maps based on the structure in the two distributions determined
by the functional connectivity structure in the data. We perform registration in the functional space
by non-rigidly deforming the distributions until their overlap is maximized. At the same time, we
regularize the deformation so that high frequency displacements of individual points, which would
correspond to a change in the relations with strong connectivity, are penalized.
We note that the embedding is defined up to rotation, order and sign of individual coordinate axes.
However, for successful alignment it is essential that the embedding is consistent between the subjects, and we have to match the nuisance parameters of the embedding during alignment. In [19]
a greedy method for sign matching was proposed. In our data the following procedure produces
satisfying results. When computing the embedding, we set the sign of each individual coordinate
axis j so that mean({ψ_j(k)}) − median({ψ_j(k)}) > 0, ∀j = 1, . . . , w. Since the distributions
typically have a long tail, and are centered at the origin, this step disambiguates the coordinate axis
directions well.
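This sign-fixing heuristic is a two-liner; the sketch below assumes the embedding is stored as an N x w array, one row per voxel.

    import numpy as np

    def fix_axis_signs(Psi):
        # Flip axis j unless mean - median of its coordinates is positive.
        skew = Psi.mean(axis=0) - np.median(Psi, axis=0)
        return Psi * np.where(skew >= 0, 1.0, -1.0)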
Fig. 2 illustrates the level of consistency of the maps across two subjects. It shows the first three
dimensions of maps for two different control subjects. The colors illustrate clusters in the map and
their corresponding positions in the brain. For illustration purposes the colors are matched based on
the spatial location of the clusters. The two maps indicate that there is some degree of consistency
of the mappings for different subjects.
Eigenvectors may switch if the corresponding eigenvalues are similar [13]. We initialise the registration using Procrustes analysis [4] so that the distance between a randomly chosen subset of vertices
from the same anatomical part of the brain is minimised in functional space. This typically resolves
ambiguity in the embedding with respect to rotation and the order of eigenvectors in the functional
space.
We employ the Coherent Point Drift algorithm for the subsequent non-linear registration of the
functional maps [16]. We consider the points in Ψ^0 to be centroids of a Gaussian mixture model
that is fitted to the points in Ψ^1 to minimize the energy

E(φ) = − Σ_{l=1}^{N_1} log ( Σ_{k=1}^{N_0} exp( −‖x_k^0 − φ(x_l^1)‖^2 / (2σ^2) ) ) + λρ(φ),    (5)

where x_k^0 and x_l^1 are the points in the maps Ψ^0 and Ψ^1 during matching and ρ is a function that
regularizes the deformation φ of the point set. The minimization of E(φ) involves a trade-off between
its two terms controlled by λ. The first term is a Gaussian kernel that generates a continuous
distribution for the entire map Ψ^0 in the functional space R^w. By deforming x_k^0 we increase the
likelihood of the points in Ψ^1 with respect to the distribution defined by x_k^0. At the same time
ρ(φ) encourages a smooth deformation field by penalizing high-frequency local deformations by,
e.g., a radial basis function [15]. The first term in Eq. 5 moves the two point distributions so that
their overlap is maximized. That is, regions that exhibit similar global connectivity characteristics
are moved closer to each other. The regularization term induces a high penalty on changing strong
functional connectivity relationships among voxels (which correspond to small distances or clusters
in the map). At the same time, the regularization allows more changes between regions with weak
connectivity (which correspond to large distances). In other words, it preserves the connectivity
structure of strong networks, while being flexible with respect to weak connectivity between distant
clusters.
Once the registration of the two distributions in the functional geometry is completed, we assign
correspondences between points in Ψ^0 and Ψ^1 by a simple matching algorithm that for any point in
one map chooses the closest point in the other map.
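This correspondence step amounts to a nearest-neighbor query in the functional space; a brute-force sketch follows (a spatial index such as a k-d tree would replace the O(N_0·N_1) distance table at realistic sizes).

    import numpy as np

    def match_points(map0, map1):
        # map0: N0 x w, map1: N1 x w point sets after registration.
        d2 = ((map0[:, None, :] - map1[None, :, :]) ** 2).sum(axis=-1)
        return d2.argmin(axis=1)  # for each map0 point, index of its closest map1 point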
4 Validation of Alignment
To validate the functional registration quantitatively we align pairs of subjects via (i) the proposed
functional geometry alignment, and (ii) the anatomical non-rigid demons registration [25, 28]. We
restrict the functional evaluation to the grey matter. Functional geometry embedding is performed on
a random sampling of 8000 points excluding those that exhibit no activation (with a liberal threshold
of p = 0.15 in the General Linear Model (GLM) analysis [12]). After alignment we evaluate the
quality of the fit by (1) the accuracy of predicting the location of the active areas in the target subject
and (2) the inter-subject correlation of BOLD signals after alignment. The first criterion is directly
related to the clinical aim of localization of active areas.
A. Predictive power: We evaluate if it is possible to establish correspondences, so that the activation
in one subject lets us predict the activation in another subject after alignment. That is, we examine
if the correspondences identify regions that exhibit a relationship with the task (in our experiment, a
Figure 3: Mapping a region by functional geometry alignment: a reference subject (first column)
aligned to a tumor patient (second and third columns, the tumor is shown in blue). The green
region in the healthy subject is mapped to the red region by the proposed functional registration
and to the yellow region by anatomical registration. Note that the functional alignment places the
region narrowly around the tumor location, while the anatomical registration result intersects with
the tumor. The slice of the anatomical scan with the tumor and the zoomed visualization of the
registration results (fourth column) are also shown.
language task) even if they are ambiguous or not detected based on the standard single-subject GLM
analysis. That is, can we transfer evidence for the activation of specific regions between subjects,
for example between healthy controls and tumor patients? In the following we refer to regions
detected by the standard single subject fMRI analysis with an activation threshold of p = 0.05 (false
discovery rate (FDR) corrected [1]) as above-threshold regions.
We validate the accuracy of localizing activated regions in a target volume by measuring the average
correlation of the t-maps (based on the standard GLM) between the source and the corresponding
target regions after registration. A t-map indicates activation - i.e., a significant correlation with the
task the subject is performing during fMRI acquisition - for each voxel in the fMRI volume. A high
inter-subject correlation of the t-maps indicates that the aligned source t-maps are highly predictive
of the t-map in the target fMRI data. Additionally, we measure the overlap between regions in
the target image to which the above-threshold source regions are mapped, and the above-threshold
regions in the target image. Note that for the registration itself neither the inter-subject correlation of
fMRI signals, nor the correlation of t-maps is used. In other words, although we enforce homology
in the pattern of correlations between two subjects, the correlations across subjects per se are not
matched.
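Given such correspondences, criterion A is a single correlation; the sketch below (variable names are illustrative, not from the paper) correlates the source t-map with the target t-values at the matched voxels.

    import numpy as np

    def tmap_correlation(t_source, t_target, matches):
        # t_source: length-N0 t-values; matches: source-to-target indices as above.
        return np.corrcoef(t_source, t_target[matches])[0, 1]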
B. Correlation of BOLD signal across subjects: To assess the relationship between the source
and registered target regions relative to the fMRI activation, we measure the correlation between the
fMRI signals in the above-threshold regions of the source volume and the fMRI signals at the corresponding locations in the target volume. Across-subject correlation of the fMRI signals indicates a
relationship between the underlying functional processes. We are interested in two specific scenarios: (i) above-threshold regions in the target image that were matched to above-threshold regions in
the source image, and (ii) below-threshold regions in the target image that were matched to abovethreshold regions in the source image. This second group includes candidates for activation, even
though they do not pass detection threshold in the particular volume. We do not expect correlation
of signals for non-activated regions.
5 Experimental Results
We demonstrate the method on a set of 6 control subjects and 3 patients with low-grade tumors in
one of the regions associated with language processing. For all 9 subjects fMRI data was acquired
Figure 4: Validation: A. Correlation distribution of corresponding t-values after functional geometry
alignment (FGA) and anatomical registration (AR) for control-control and control-tumor matches.
B. Correlation of the BOLD signals for activated regions mapped to activated regions (left) and
activated regions mapped to sub-threshold regions (right).
using a 3T GE Signa system (TR=2s, TE=40ms, flip angle=90°, slice gap=0mm, FOV=25.6cm,
dimension 128 × 128 × 27 voxels, voxel size of 2 × 2 × 4 mm³). The language task (antonym
generation) block design was 5min 10s, starting with a 10s pre-stimulus period. Eight task and
seven rest blocks each 20s long alternated in the design. For each subject, anatomical T1 MRI data
was acquired and registered to the functional data. We perform pair-wise registration in all 36 image
pairs, 21 of which include at least one patient.
Fig.3 illustrates the effect of a tumor in a language region, and the corresponding registration results.
An area of the brain associated with language is registered from a control subject to a tumor patient.
The location of the tumor is shown in blue; the regions resulting from functional and anatomical registration are indicated in red (FGA), and yellow (AR), respectively. While anatomical registration
creates a large overlap between the mapped region and the tumor, functional geometry alignment
maps the region to a plausible area narrowly surrounding the tumor. Fig. 4 reports quantitative comparison of functional alignment vs. anatomical registration for the entire set of subjects. Functional
geometry alignment achieves significantly higher correlation of t-values than anatomical registration
(0.14 vs. 0.07, p < 10^-17, paired t-test, all image pairs). Anatomical registration performance drops
significantly when registering a control subject and a tumor patient, compared to a pair of control
subjects (0.08 vs. 0.06, p = 0.007). For functional geometry alignment this drop is not significant
(0.15 vs. 0.14, p = 0.17). Functional geometry alignment predicts 50% of the above-threshold regions in
the target brain, while anatomical registration predicts 29% (Fig. 4 (A)).
These findings indicate that the functional alignment of language regions among source and target subjects is less affected by the presence of a tumor and the associated reorganization than the
matching of functional regions by anatomical registration. Furthermore the functional alignment
has better predictive power for the activated regions in the target subject for both control-control and
control-patient pairs. In our experiments this predictive power is affected only to a small degree by
a tumor presence in the target. In contrast and as expected, the matching of functional regions by
anatomical alignment is affected by the tumor.
Activated source regions mapped to a target subject exhibit the following characteristics. If both
source region and corresponding target region are above-threshold the average correlation between
the source and target signals is significantly higher for functional geometry alignment (0.108 vs.
0.097, p = 0.004 paired t-test). For above-threshold regions mapped to below-threshold regions the
same significant difference exists (0.020 vs. 0.016, p = 0.003), but correlations are significantly
lower. This significant difference between functional geometry alignment and anatomical registration vanishes for regions mapped from below-threshold regions in the source subject. The baseline
of below-threshold region pairs exhibits very low correlation (? 0.003) and no difference between
the two methods.
The fMRI signal correlation in the source and the target region is higher for functional alignment
if the source region is activated. This suggests that even if the target region does not exhibit task
specific behavior detectable by standard analysis, its fMRI signal still correlates with the activated
source fMRI signal to a higher degree than non-activated region pairs. The functional connectivity
structure is sufficiently consistent to support an alignment of the functional geometry between subjects. It identifies experimental correspondences between regions, even if their individual relationship to the task is ambiguous. We demonstrate that our alignment improves inter-subject correlation
for activated source regions and their target regions, but not for the non-active source regions. This
suggest that we enable localization of regions that would not be detected by standard analysis, but
whose activations are similar to the source regions in the normal subjects.
6 Conclusion
In this paper we propose and demonstrate a method for registering neuroanatomy based on the
functional geometry of fMRI signals. The method offers an alternative to anatomical registration;
it relies on matching a spectral embedding of the functional connectivity patterns of two fMRI
volumes. Initial results indicate that the structure in the diffusion map that reflects functional connectivity enables accurate matching of functional regions. When used to predict the activation in a
target fMRI volume the proposed functional registration exhibits higher predictive power than the
anatomical registration. Moreover it is more robust to pathologies and the associated changes in
the spatial organization of functional areas. The method offers advantages for the localization of
activated but displaced regions in cases where tumor-induced changes of the hemodynamics make
direct localization difficult. Functional alignment contributes evidence from healthy control subjects. Further research is necessary to evaluate the predictive power of the method for localization
of specific functional areas.
Acknowledgements This work was funded in part by the NSF IIS/CRCNS 0904625 grant, the
NSF CAREER 0642971 grant, the NIH NCRR NAC P41-RR13218, NIH NIBIB NAMIC U54EB005149, NIH U41RR019703, and NIH P01CA067165 grants, the Brain Science Foundation, and
the Klarman Family Foundation.
References
[1] Y. Benjamini and Y. Hochberg. Controlling the false discovery rate: a practical and powerful
approach to multiple testing. Journal of the Royal Statistical Society. Series B (Methodological), pages 289–300, 1995.
[2] S.B. Bonelli, R.H.W. Powell, M. Yogarajah, R.S. Samson, M.R. Symms, P.J. Thompson, M.J.
Koepp, and J.S. Duncan. Imaging memory in temporal lobe epilepsy: predicting the effects of
temporal lobe resection. Brain, 2010.
[3] S. Bookheimer. Pre-surgical language mapping with functional magnetic resonance imaging.
Neuropsychology Review, 17(2):145–155, 2007.
[4] F.L. Bookstein. Two shape metrics for biomedical outline data: Bending energy, procrustes
distance, and the biometrical modeling of shape phenomena. In Proceedings International
Conference on Shape Modeling and Applications, pages 110–120, 1997.
[5] Fan R.K. Chung. Spectral Graph Theory. American Mathematical Society, 1997.
[6] Ronald R. Coifman and Stéphane Lafon. Diffusion maps. App. Comp. Harm. An., 21:5–30,
2006.
[7] Bryan Conroy, Ben Singer, James Haxby, and Peter Ramadge. fmri-based inter-subject cortical
alignment using functional connectivity. In Adv. in Neural Information Proc. Systems, pages
378–386, 2009.
[8] E. Fedorenko and N. Kanwisher. Neuroimaging of Language: Why Hasn't a Clearer Picture
Emerged? Language and Linguistics Compass, 3(4):839–865, 2009.
[9] B. Fischl, M.I. Sereno, and A.M. Dale. Cortical surface-based analysis II: Inflation, flattening,
and a surface-based coordinate system. Neuroimage, 9(2):195–207, 1999.
[10] B. Fischl, M.I. Sereno, R.B.H. Tootell, and A.M. Dale. High-resolution intersubject averaging
and a coordinate system for the cortical surface. HBM, 8(4):272–284, 1999.
[11] KJ Friston, CD Frith, P. Fletcher, PF Liddle, and RSJ Frackowiak. Functional topography:
multidimensional scaling and functional connectivity in the brain. Cerebral Cortex, 6(2):156,
1996.
[12] KJ Friston, AP Holmes, KJ Worsley, JB Poline, CD Frith, RSJ Frackowiak, et al. Statistical
parametric maps in functional imaging: a general linear approach. Hum Brain Mapp, 2(4):189–210, 1995.
[13] V. Jain and H. Zhang. Robust 3D shape correspondence in the spectral domain. In Shape
Modeling and Applications, 2006. SMI 2006. IEEE International Conference on, page 19.
IEEE, 2006.
[14] Georg Langs, Dimitris Samaras, Nikos Paragios, Jean Honorio, Nelly Alia-Klein, Dardo
Tomasi, Nora D Volkow, and Rita Z Goldstein. Task-specific functional brain geometry from
model maps. In Proc. of MICCAI, volume 11, pages 925–933, 2008.
[15] A. Myronenko and X. Song. Point set registration: Coherent point drift. IEEE Transactions
on Pattern Analysis and Machine Intelligence, 2010.
[16] A. Myronenko, X. Song, and M.A. Carreira-Perpiñán. Non-rigid point set registration: Coherent Point Drift. Adv. in Neural Information Proc. Systems, 19:1009, 2007.
[17] H.J. Qiu and E.R. Hancock. Clustering and Embedding Using Commute Times. IEEE TPAMI,
29(11):1873–1890, 2007.
[18] M.R. Sabuncu, B.D. Singer, B. Conroy, R.E. Bryan, P.J. Ramadge, and J.V. Haxby. Function-based intersubject alignment of human cortical anatomy. Cerebral Cortex, 20(1):130–140,
2010.
[19] L.S. Shapiro and J. Michael Brady. Feature-based correspondence: an eigenvector approach.
Image and vision computing, 10(5):283–288, 1992.
[20] D. Shen and C. Davatzikos. HAMMER: hierarchical attribute matching mechanism for elastic
registration. IEEE Trans. Med. Imaging, 21(11):1421–1439, 2002.
[21] J. Talairach and P. Tournoux. Co-planar stereotaxic atlas of the human brain. Thieme New
York, 1988.
[22] B. Thirion, G. Flandin, P. Pinel, A. Roche, P. Ciuciu, and J.B. Poline. Dealing with the shortcomings of spatial normalization: Multi-subject parcellation of fMRI datasets. Human brain
mapping, 27(8):678–693, 2006.
[23] B. Thirion, P. Pinel, S. Mériaux, A. Roche, S. Dehaene, and J.B. Poline. Analysis of a
large fMRI cohort: Statistical and methodological issues for group analyses. Neuroimage,
35(1):105–120, 2007.
[24] Bertrand Thirion, Silke Dodel, and Jean-Baptiste Poline. Detection of signal synchronizations
in resting-state fMRI datasets. Neuroimage, 29(1):321–327, 2006.
[25] J.P. Thirion. Image matching as a diffusion process: an analogy with Maxwell's demons.
Medical Image Analysis, 2(3):243–260, 1998.
[26] D.C. Van Essen, H.A. Drury, J. Dickson, J. Harwell, D. Hanlon, and C.H. Anderson. An
integrated software suite for surface-based analyses of cerebral cortex. Journal of the American
Medical Informatics Association, 8(5):443, 2001.
[27] U. Von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416,
2007.
[28] H. Wang, L. Dong, J. O'Daniel, R. Mohan, A.S. Garden, K.K. Ang, D.A. Kuban, M. Bonnen,
J.Y. Chang, and R. Cheung. Validation of an accelerated 'demons' algorithm for deformable
image registration in radiation therapy. Physics in Medicine and Biology, 50(12):2887–2906,
2005.
3,379 | 406 | Distributed Recursive Structure Processing
Geraldine Legendre
Yoshiro Miyata
Department of
Optoelectronic
Linguistics
Computing Systems Center
University of Colorado
Boulder, CO 80309-0430*
Paul Smolensky
Department of
Computer Science
Abstract
Harmonic grammar (Legendre, et al., 1990) is a connectionist theory of linguistic well-formedness based on the assumption that the well-formedness
of a sentence can be measured by the harmony (negative energy) of the
corresponding connectionist state. Assuming a lower-level connectionist
network that obeys a few general connectionist principles but is otherwise
unspecified, we construct a higher-level network with an equivalent harmony function that captures the most linguistically relevant global aspects
of the lower level network. In this paper, we extend the tensor product
representation (Smolensky 1990) to fully recursive representations of recursively structured objects like sentences in the lower-level network. We
show theoretically and with an example the power of the new technique
for parallel distributed structure processing.
1 Introduction
A new technique is presented for representing recursive structures in connectionist
networks. It has been developed in the context of the framework of Harmonic
Grammar (Legendre et al. 1990a, 1990b), a formalism for theories of linguistic
well-formedness which involves two basic levels: At the lower level, elements of the
problem domain are represented as distributed patterns of activity in a network; At
the higher level, the elements in the domain are represented locally and connection
weights are interpreted as soft rules involving these elements. There are two aspects
that are central to the framework.
*The authors are listed in alphabetical order.
First, the connectionist well-formedness measure harmony (or negative "energy"),
which we use to model linguistic well-formedness, has the properties that it is preserved between the lower and the higher levels and that it is maximized in the
network processing. Our previous work developed techniques for deriving harmonies
at the higher level from linguistic data, which allowed us to make contact with existing higher-level analyses of a given linguistic phenomenon.
This paper concentrates on the second aspect of the framework: how particular
linguistic structures such as sentences can be efficiently represented and processed
at the lower level. The next section describes a new method for representing tree
structures in a network which is an extension of the tensor product representation
proposed in (Smolensky 1990) that allows recursive tree structures to be represented
and various tree operations to be performed in parallel.
2 Recursive tensor product representations
A tensor product representation of a set of structures S assigns to each 8 E S a
vector built up by superposing role-sensitive representations of its constituents. A
role decomposition of S specifies the constituent structure of s by assigning to it
an unordered set of filler-role bindings. For example, if S is the set of strings from
the alphabet {a, b, c} and s = cba, then we might choose a role decomposition in
which the roles are absolute positions in the string (r1 = first, r2 = second, ...)
and the constituents are the filler/role bindings {b/r2, a/r3, c/r1}.1
In a tensor product representation a constituent - i.e., a filler/role binding - is
represented by the tensor (or generalized outer) product of vectors representing the
filler and role in isolation: f/r is represented by the vector v = f⊗r, which is in fact
a second-rank tensor whose elements are conveniently labelled by two subscripts and
defined simply by v_{φρ} = f_φ r_ρ.
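In NumPy terms (an illustration of the definition, not notation from the paper), a single binding is just an outer product:

    import numpy as np

    f = np.array([1.0, 0.0, 1.0])   # a filler vector, indexed by phi
    r = np.array([1.0, 0.0])        # a role vector, indexed by rho
    v = np.outer(f, r)              # binding f/r: v[phi, rho] = f[phi] * r[rho]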
Where do the filler and role vectors f and r come from? In the most straightforward
case, each filler is a member of a simple set F (e.g. an alphabet) and each role is
a member of a simple set R and the designer of the representation simply specifies
vectors representing all the elements of F and R. In more complex cases, one or
both of the sets F and R might be sets of structures which in turn can be viewed as
having constituents, and which in turn can be represented using a tensor product
representation. This recursive construction of the tensor product representations
leads to tensor products of three or more vectors, creating tensors of rank three and
higher, with elements conveniently labelled by three or more subscripts.
The recursive structure of trees leads naturally to such a recursive construction of
a tensor product representation. (The following analysis builds on Section 3.7.2 of
(Smolensky 1990.? We consider binary trees (in which every node has at most two
children) since the techniques developed below generalize immediately to trees with
higher branching factor, and since the power of binary trees is well attested, e.g.,
by the success of Lisp, whose basic datastructure is the binary tree. Adopting the
conventions and notations of Lisp, we assume for simplicity that the terminal nodes
1The other major kind of role decomposition considered in (Smolensky 1990) is contextual roles; under one such decomposition, one constituent of cba is "b in the role 'preceded by c and followed by a'".
of the tree (those with no children), and only the terminal nodes, are labelled by
symbols or atoms. The set of structures S we want to represent is the union of a set
of atoms and the set of binary trees with terminal nodes labelled by these atoms.
One way to view a binary tree, by analogy with how we viewed strings above, is as
having a large number of positions with various locations relative to the root: we
adopt positional roles r_x labelled by binary strings (or bit vectors) such as x = 0110,
which is the position in a tree accessed by "caddar = car(cdr(cdr(car)))", that
is, the left child (0; car) of the right child (1; cdr) of the right child of the left child
of the root of the tree. Using this role decomposition, each constituent of a tree is
an atom (the filler) bound to some role r_x specifying its location; so if a tree s has
a set of atoms {f_i} at respective locations {x_i}, then the vector representing s is
s = Σ_i f_i ⊗ r_{x_i}.
A more recursive view of a binary tree sees it as having only two constituents: the
atoms or subtrees which are the left and right children of the root. In this fully
recursive role decomposition, fillers may either be atoms or trees: the set of possible
fillers F is the same as the original set of structures S.
The fully recursive role decomposition can be incorporated into the tensor product
framework by making the vector spaces and operations a little more complex than
in (Smolensky 1990). The goal is a representation obeying, ∀s, p, q ∈ S:

s = cons(p, q)  ⇒  s = p⊗r_0 + q⊗r_1    (1)

Here, s = cons(p, q) is the tree with left subtree p and right subtree q, while
s, p and q are the vectors representing s, p and q. The only two roles in this
recursive decomposition are r_0, r_1: the left and right children of root. These roles
are represented by two vectors r_0 and r_1.
A fully recursive representation obeying Equation 1 can actually be constructed
from the positional representation, by assuming that the (many) positional role
vectors are constructed recursively from the (two) fully recursive role vectors according to:
r_{x0} = r_x ⊗ r_0,    r_{x1} = r_x ⊗ r_1.

For example, r_{0110} = r_0 ⊗ r_1 ⊗ r_1 ⊗ r_0.2 Thus the vectors representing positions
at depth d in the tree are tensors of rank d (taking the root to be depth 0). As
an example, the tree s = cons(A, cons(B, C)) = cons(p, q), where p = A and q =
cons(B, C), is represented by

s = A⊗r_0 + B⊗r_{01} + C⊗r_{11} = A⊗r_0 + B⊗r_0⊗r_1 + C⊗r_1⊗r_1
  = A⊗r_0 + (B⊗r_0 + C⊗r_1)⊗r_1 = p⊗r_0 + q⊗r_1,
in accordance with Equation 1.
The complication in the vector spaces needed to accomplish this recursive analysis
is one that allows us to add together the tensors of different ranks representing
different depths in the tree. All we need do is take the direct sum of the spaces of
tensors of different rank; in effect, concatenating into a long vector all the elements
2By adopting this definition of r_x we are essentially taking the recursive structure that
is implicit in the subscripts x labelling the positional role vectors, and mapping it into the
structure of the vectors themselves.
of the tensors. For example, in s = cons(A, cons(B, C)), depth 0 is 0, since s isn't an
atom; depth 1 contains A, represented by the tensor S^(1)_{φρ1} = A_φ r_{0ρ1}, and depth 2
contains B and C, represented by

S^(2)_{φρ1ρ2} = B_φ r_{0ρ1} r_{1ρ2} + C_φ r_{1ρ1} r_{1ρ2}.

The tree as a whole is then represented by the sequence s = {S^(0)_φ, S^(1)_{φρ1}, S^(2)_{φρ1ρ2}, ...}, where
the tensor for depth 0, S^(0)_φ, and the tensors for depths d > 2, S^(d)_{φρ1...ρd}, are all zero.
We let V denote the vector space of such sequences of tensors of rank 0, rank 1,
... , up to some maximum depth D which may be infinite. Two elements of V are
added (or "superimposed") simply by adding together the tensors of corresponding
rank. This is our vector space for representing trees.3
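The construction can be sketched directly in Python/NumPy. The code below is an illustration, not the original simulation: it assumes three-dimensional fillers and the orthonormal role vectors used in the simulation reported later, and it stores a tree as a list of tensors, one per depth.

    import numpy as np

    r0, r1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # orthonormal role vectors

    def atom(f):
        # An atom occupies depth 0 only.
        return [np.asarray(f, dtype=float)]

    def cons(p, q):
        # Equation 1, depth by depth: bind p to r0 and q to r1 via a new last index.
        out = [np.zeros(3)]          # the root of a cons cell carries no atom
        for d in range(max(len(p), len(q))):
            left = np.multiply.outer(p[d], r0) if d < len(p) else 0.0
            right = np.multiply.outer(q[d], r1) if d < len(q) else 0.0
            out.append(left + right)
        return out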
The vector operation cons for building the representation of a tree from that of its
two subtrees is given by Equation 1. As an operation on V this can be written:
cons: ({P^(0)_φ, P^(1)_{φρ1}, P^(2)_{φρ1ρ2}, ...}, {Q^(0)_φ, Q^(1)_{φρ1}, Q^(2)_{φρ1ρ2}, ...}) ↦

{0, P^(0)_φ r_{0ρ1} + Q^(0)_φ r_{1ρ1}, P^(1)_{φρ1} r_{0ρ2} + Q^(1)_{φρ1} r_{1ρ2}, ...}

(Here, 0 denotes the zero vector in the space representing atoms.) In terms of
matrices multiplying vectors in V, this can be written

cons(p, q) = W_cons0 p + W_cons1 q

(parallel to Equation 1), where the non-zero elements of the matrix W_cons0 are

(W_cons0)_{φρ1ρ2...ρd ρ(d+1), φρ1ρ2...ρd} = r_{0ρ(d+1)}

and W_cons1 is gotten by replacing r_0 with r_1.
Taking the car or cdr of a tree - extracting the left or right child - in the recursive
decomposition is equivalent to unbinding either r_0 or r_1. As shown in (Smolensky
1990, Section 3.1), if the role vectors are linearly independent, this unbinding can be
performed accurately, via a linear operation, specifically, a generalized inner product
(tensor contraction) of the vector representing the tree with an unbinding vector
u_0 or u_1. In general, the unbinding vectors are the dual basis to the role vectors;
equivalently, they are the vectors comprising the inverse matrix to the matrix of
all role vectors. If the role vectors are orthonormal (as in the simulation discussed
below), the unbinding vectors are the same as the role vectors. The car operation
can be written explicitly as an operation on V:
(O)
(1)
(2)
}
car: {S1;" SI;'P' SI;'P1P1' . .. .-
{E p1 S~~l UO P1 ' E p2 S~~IP2 UOp2 ' E p, S~~lP2P' UOp,' .. -}
$^3$In the connectionist implementation simulated below, there is one unit for each element
of each tensor in the sequence. In the simulation we report, seven atoms are represented
by (binary) vectors in a three-dimensional space, so $\varphi = 0, 1, 2$; $r_0$ and $r_1$ are vectors in
a two-dimensional space, so $\rho = 0, 1$. The number of units representing the portion of V
for depth d is thus $3 \cdot 2^d$ and the total number of units representing depths up to D is
$3(2^{D+1} - 1)$. In tensor product representations, exact representation of deeply embedded
structure does not come cheap.
(Replacing $u_0$ by $u_1$ gives cdr.) The operation car can be realized as a matrix
$W_{\mathrm{car}}$ mapping V to V with non-zero elements:

$[W_{\mathrm{car}}]_{\varphi\rho_1\rho_2\cdots\rho_k,\;\varphi\rho_1\rho_2\cdots\rho_k\rho_{k+1}} = u_{0\rho_{k+1}}.$

$W_{\mathrm{cdr}}$ is the same matrix, with $u_0$ replaced by $u_1$.$^4$
One of the main points of developing this connectionist representation of trees
is to enable massively parallel processing. Whereas in the traditional sequential
implementation of Lisp, symbol processing consists of a long sequence of car, cdr,
and cons operations, here we can compose together the corresponding sequence of
$W_{\mathrm{car}}$, $W_{\mathrm{cdr}}$, $W_{\mathrm{cons0}}$ and $W_{\mathrm{cons1}}$ operations into a single matrix operation.
Adding some minimal nonlinearity allows us to compose more complex operations
incorporating the equivalent of conditional branching. We now illustrate this with
a simple linguistically motivated example.
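To make this concrete, here is a minimal NumPy sketch of the representation and of the cons/car/cdr operations. It is our own illustration, not code from the paper: it assumes a fixed maximum depth, a 3-dimensional filler space, and orthonormal canonical role vectors, so the unbinding vectors coincide with the role vectors.

import numpy as np

D = 4                                        # maximum represented depth (assumption)
r0, r1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # orthonormal roles

def zero_tree():
    # An element of V: one tensor per depth d, with one filler index
    # (size 3) followed by d role indices (size 2 each).
    return [np.zeros((3,) + (2,) * d) for d in range(D + 1)]

def atom(f):
    t = zero_tree()
    t[0] = np.asarray(f, dtype=float)        # depth-0 tensor holds the atom
    return t

def cons(p, q):
    # Equation 1: shift every tensor one depth down, appending r0 or r1.
    t = zero_tree()
    for d in range(D):
        t[d + 1] = np.multiply.outer(p[d], r0) + np.multiply.outer(q[d], r1)
    return t

def unbind(s, u):
    # Tensor contraction of the deepest role index with unbinding vector u.
    return [s[d + 1] @ u for d in range(D)] + [np.zeros((3,) + (2,) * D)]

car = lambda s: unbind(s, r0)                # u0 = r0 for orthonormal roles
cdr = lambda s: unbind(s, r1)

A, B, C = np.eye(3)                          # three atoms as basis vectors
s = cons(atom(A), cons(atom(B), atom(C)))
print(np.allclose(car(s)[0], A))             # True: left child recovered
print(np.allclose(car(cdr(s))[0], B))        # True
print(np.allclose(cdr(cdr(s))[0], C))        # True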
3 An example
The symbol manipulation problem we consider is that of transforming a tree representation of a syntactic parse of an English sentence into a tree representation of
a predicate-calculus expression for the meaning of the sentence. We considered two
possible syntactic structures: simple active sentences of the form (A.(V.P)) and passive
sentences of the form (P.((Aux.V).(by.A))). Each was to be transformed into a tree
representing V(A,P), namely (V.(A.P)). Here, the agent A and patient P of the verb V are
both arbitrarily complex noun phrase trees. (Actually, the network could handle
arbitrarily complex V's as well.) Aux is a marker for passive (e.g. is in is feared.)
The network was presented with an input tree of either type, represented as an
activation vector using the fully recursive tensor product representation developed in
the preceding section. The seven non-zero binary vectors of length three coded seven
atoms; the role vectors were constructed by the technique described above. The desired output
was the same tensorial representation of the tree representing V(A, P). The filler
vectors for the verb and for the constituent words of the two noun phrases should
be unbound from their roles in the input tree and then bound to the appropriate
roles in the output tree.
Such transformation was performed, for an active sentence, by the operation
cons(cadr(s), cons(car(s), cddr(s))) on the input tree s, and for a passive sentence,
by cons(cdadr(s), cons(cdddr(s), car(s))). These operations were implemented in
the network as two weight matrices, $W_a$ and $W_p$,$^5$ connecting the input units to
the output units as shown in Figure 1. In addition, the network had a circuit for
determining whether the input sentence was active or passive. In this example, it
simply computed, by a weight matrix, the caadr of the input tree (where a passive
sentence should have an Aux), and if it was the marker Aux, gated (with sigma-pi
connections) $W_p$, and otherwise gated $W_a$.

$^4$Note that in the case when the $\{r_0, r_1\}$ are orthonormal, and therefore $u_0 = r_0$,
$W_{\mathrm{car}} = W_{\mathrm{cons0}}^{\top}$; similarly, $W_{\mathrm{cdr}} = W_{\mathrm{cons1}}^{\top}$.

$^5$The two weight matrices were constructed from the four basic matrices as
$W_a = W_{\mathrm{cons0}} W_{\mathrm{car}} W_{\mathrm{cdr}} + W_{\mathrm{cons1}} (W_{\mathrm{cons0}} W_{\mathrm{car}} + W_{\mathrm{cons1}} W_{\mathrm{cdr}} W_{\mathrm{cdr}})$ and
$W_p = W_{\mathrm{cons0}} W_{\mathrm{cdr}} W_{\mathrm{car}} W_{\mathrm{cdr}} + W_{\mathrm{cons1}} (W_{\mathrm{cons0}} W_{\mathrm{cdr}} W_{\mathrm{cdr}} W_{\mathrm{cdr}} + W_{\mathrm{cons1}} W_{\mathrm{car}})$.

Figure 1: Recursive tensor product network processing a passive sentence.
Input = cons(cons(A,B), cons(cons(Aux,V), cons(by,C)));
Output = cons(V, cons(C, cons(A,B))).
Given this setting, the network was able to properly process arbitrary input sentences of
either type, up to a certain depth (4 in this example) limited by the size of the
network, and generated correct case role assignments. Figure 1 shows the
network processing the passive sentence ((A.B).((Aux.V).(by.C))), as in All connectionists are feared by Minsky, and generating (V.(C.(A.B))) as output.
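As a quick sanity check on these two operations (again our own illustration, applied to plain Python pairs rather than to the tensor representation), the composed accessors indeed produce V(A, P) for both sentence forms:

cons = lambda a, b: (a, b)
car = lambda s: s[0]
cdr = lambda s: s[1]
cadr = lambda s: car(cdr(s))
cddr = lambda s: cdr(cdr(s))
cdadr = lambda s: cdr(car(cdr(s)))
cdddr = lambda s: cdr(cdr(cdr(s)))

active = lambda s: cons(cadr(s), cons(car(s), cddr(s)))
passive = lambda s: cons(cdadr(s), cons(cdddr(s), car(s)))

print(active(('A', ('V', 'P'))))
# ('V', ('A', 'P'))
s = cons(('A', 'B'), cons(('Aux', 'V'), ('by', 'C')))
print(passive(s))
# ('V', ('C', ('A', 'B')))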
4 Discussion
The formalism developed here for the recursive representation of trees generates
quite different representations depending on the choice of the two fundamental role
vectors $r_0$ and $r_1$ and the vectors for representing the atoms. At one extreme is
the trivial fully local representation in which one connectionist unit is dedicated
to each possible atom in each possible position: this is the special case in which
$r_0$ and $r_1$ are chosen to be the canonical basis vectors (1 0) and (0 1), and the
vectors representing the n atoms are also chosen to be the canonical basis vectors
of n-space. The example of the previous section illustrated the case of (a) linearly
dependent vectors for atoms and (b) orthonormal vectors for the roles that were
"distributed" in that both elements of both vectors were non-zero. Property (a)
permits the representation of many more than n atoms with n-dimensional vectors,
and could be used to enrich the usual notions of symbolic computation by letting
"similar atoms" be represented by vectors that are closer to each other than are
"dissimilar atoms." Property (b) contributes no savings in units of the purely
local case, amounting to a literal rotation in role space. But it does allow us
Distributed Recursive Structure Processing
to demonstrate that fully distributed representations are as capable as fully local
ones at supporting massively parallel structure processing. This point has been
denied (often rather loudly) by advocates oflocal representations and by such critics
as (Fodor & Pylyshyn 1988) and (Fodor & McLaughlin 1990) who have claimed
that only connectionist implementations that preserve the concatenative structure
of language-like representations of symbolic structures could be capable of true
structure-sensitive processing.
The case illustrated in our example is distributed in the sense that all units corresponding to depth d in the tree are involved in the representation of all the atoms
at that depth. But different depths are kept separate in the formalism and in the
network. We can go further by allowing the role vectors to be linearly dependent,
sacrificing full accuracy and generality in structure processing for representation of
greater depth in fewer units. This case is the subject of current research, but space
limitations have prevented us from describing our preliminary results here.
Returning to Harmonic Grammar, the next question is, having developed a fully
recursive tensor product representation for lower-level representation of embedded
structures such as those ubiquitous in syntax, what are the implications for well-formedness as measured by the harmony function? A first approximation to the
natural language case is captured by context-free grammars, in which the well-formedness of a subtree is independent of its level of embedding. It turns out that
such depth-independent well-formedness is captured by a simple equation governing
the harmony function (or weight matrix). At the higher level where grammatical
"rules" of Harmonic Grammar reside, this has the consequence that the numerical
constant appearing in each soft constraint that constitutes a "rule" applies at all
levels of embedding. This greatly constrains the parameters in the grammar.
References
[1] J. A. Fodor and B. P. McLaughlin. Connectionism and the problem of systematicity: Why Smolensky's solution doesn't work. Cognition, 35:183-204, 1990.
[2] J. A. Fodor and Z. W. Pylyshyn. Connectionism and cognitive architecture: A
critical analysis. Cognition, 28:3-71, 1988.
[3] G. Legendre, Y. Miyata, and P. Smolensky. Harmonic grammar - a formal
multi-level connectionist theory of linguistic well-formedness: Theoretical foundations. In Proceedings of the Twelfth Meeting of the Cognitive Science
Society, 1990a.
[4] G. Legendre, Y. Miyata, and P. Smolensky. Harmonic grammar - a formal multilevel connectionist theory of linguistic well-formedness: An application. In
Proceedings of the Twelfth Meeting of the Cognitive Science Society, 1990b.
[5] P. Smolensky. Tensor product variable binding and the representation of symbolic structures in connectionist networks. Artificial Intelligence, 46:159-216,
1990.
Regularized estimation of image statistics
by Score Matching
Diederik P. Kingma
Department of Information and Computing Sciences
Universiteit Utrecht
[email protected]
Yann LeCun
Courant Institute of Mathematical Sciences
New York University
[email protected]
Abstract
Score Matching is a recently-proposed criterion for training high-dimensional
density models for which maximum likelihood training is intractable. It has been
applied to learning natural image statistics but has so-far been limited to simple
models due to the difficulty of differentiating the loss with respect to the model
parameters. We show how this differentiation can be automated with an extended
version of the double-backpropagation algorithm. In addition, we introduce a regularization term for the Score Matching loss that enables its use for a broader
range of problem by suppressing instabilities that occur with finite training sample sizes and quantized input values. Results are reported for image denoising and
super-resolution.
1 Introduction
Consider the subject of density estimation for high-dimensional continuous random variables, like
images. Approaches for normalized density estimation, like mixture models, often suffer from the
curse of dimensionality. An alternative approach is Product-of-Experts (PoE) [7], where we model
the density as a product, rather than a sum, of component (expert) densities. The multiplicative
nature of PoE models make them able to form complex densities: in contrast to mixture models, each
expert has the ability to have a strongly negative influence on the density at any point by assigning
it a very low component density. However, Maximum Likelihood Estimation (MLE) of the model
requires differentiation of a normalizing term, which is infeasible even for low data dimensionality.
A recently introduced estimation method is Score Matching [10], which involves minimizing the
square distance between the model log-density slope (score) and data log-density slope, which is
independent of the normalizing term. Unfortunately, applications of SM estimation have thus far
been limited. Besides ICA models, SM has been applied to Markov Random Fields [14] and
a multi-layer model [13], but reported results on real-world data have been of qualitative, rather
than quantitative nature. Differentiating the SM loss with respect to the parameters can be very
challenging, which somewhat complicates the use of SM in many situations. Furthermore, the proof
of the SM estimator [10] requires certain conditions that are often violated, like a smooth underlying
density or an infinite number of samples.
Other estimation methods are Contrastive Divergence [8] (CD), Basis Rotation [23] and Noise-Contrastive Estimation [6] (NCE). CD is an MCMC method that has been successfully applied
to Restricted Boltzmann Machines (RBMs) [8], overcomplete Independent Component Analysis
(ICA) [9], and convolutional variants of ICA and RBMs [21, 19]. Basis Rotation [23] works by restricting weight updates such that they are probability mass-neutral. SM and NCE are consistent
estimators [10, 6], while CD estimation has been shown to be generally asymptotically biased [4].
No consistency results are known for Basis Rotation, to our knowledge. NCE is a promising method,
but unfortunately too new to be included in experiments. CD and Basis Rotation estimation will be
used as a basis for comparison.
In section 2 a regularizer is proposed that makes Score Matching applicable to a much broader
class of problems. In section 3 we show how computation and differentiation of the SM loss can
be performed in automated fashion. In section 4 we report encouraging quantitative experimental
results.
2 Regularized Score Matching
Consider an energy-based [17] model E(x; w), where "energy" is the unnormalized negative log-density such that the pdf is $p(x; w) = e^{-E(x;w)} / Z(w)$, where Z(w) is the normalizing constant.
In other words, low energies correspond to high probability density, and high energies correspond
to low probability density.
Score Matching works by fitting the slope (score) of the model density to the slope of the true,
underlying density at the data points, which is obviously independent of the vertical offset of the log-density (the normalizing constant). Hyvärinen [10] shows that under some conditions, this objective
is equivalent to minimizing the following expression, which involves only first and second partial
derivatives of the model density:
$$J(w) = \int_{x \in \mathbb{R}^N} p_x(x) \sum_{i=1}^{N} \left( \frac{1}{2} \left( \frac{\partial E(x; w)}{\partial x_i} \right)^{\!2} - \frac{\partial^2 E(x; w)}{(\partial x_i)^2} \right) dx + \mathrm{const} \qquad (1)$$
with N-dimensional data vector x, weight vector w and true, underlying pdf $p_x(x)$. Among the
conditions$^1$ are (1) that $p_x(x)$ is differentiable, and (2) that the log-density is finite everywhere. In
practice, the true pdf is unknown, and we have a finite sample of T discrete data points. The sample
version of the SM loss function is:
$$J^S(w) = \frac{1}{T} \sum_{t=1}^{T} \sum_{i=1}^{N} \left( \frac{1}{2} \left( \frac{\partial E(x^{(t)}; w)}{\partial x_i} \right)^{\!2} - \frac{\partial^2 E(x^{(t)}; w)}{(\partial x_i)^2} \right) \qquad (2)$$
which is asymptotically equivalent to equation (1) as T approaches infinity, due to the law of
large numbers. This loss function was used in previous publications on SM [10, 12, 13, 15].
2.1 Issues
Should these conditions be violated, then (theoretically) the pdf cannot be estimated using equation
(1). Only some specific special-case solutions exist, e.g. for non-negative data [11]. Unfortunately,
situations where the mentioned conditions are violated are not rare. The distribution for quantized
data (like images) is discontinuous, hence not differentiable, since the data points are concentrated
at a finite number of discrete positions. Moreover, the fact that equation (2) is only equivalent to
equation (1) as T approaches infinity may cause problems: the distribution of any finite training
set of discrete data points is discrete, hence not differentiable. For proper estimation with SM, data
can be smoothened by whitening; however, common whitening methods (such as PCA or SVD) are
computational infeasible for large data dimensionality, and generally destroy the local structure of
spatial and temporal data such as image and audio. Some previous publications on Score Matching
apply zero-phase whitening (ZCA) [13] which computes a weighed sum over an input patch which
removes some of the original quantization, and can potentially be applied convolutionally. However,
$^1$The conditions are: the true (underlying) pdf $p_x(x)$ is differentiable, the expectations
$E\!\left[\|\partial \log p_x(x)/\partial x\|^2\right]$ and $E\!\left[\|\partial E(x; w)/\partial x\|^2\right]$ w.r.t. x are finite for any w, and $p_x(x)\, \partial E(x; w)/\partial x$
goes to zero for any w when $\|x\| \to \infty$.
the amount of information removed from the input by such whitening is not parameterized and
potentially large.
2.2 Proposed solution
Our proposed solution is the addition of a regularization term to the loss, approximately equivalent
to replacing each data point x with a Gaussian cloud of virtual datapoints $(x + \epsilon)$ with i.i.d. Gaussian
noise $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$. By this replacement, the sample pdf becomes smooth and the conditions for
proper SM estimation become satisfied. The expected value of the sample loss is:
"
2 #! X
N 2
N
S
1X
? E(x + ?; w)
?E(x + ?; w)
E J (x + ?; w) =
E
?
E
(3)
2 i=1
?(xi + ?i )
(?(xi + ?i ))2
i=1
We approximate the first and second term with a simple first-order Taylor expansion. Recall
that since the noise is i.i.d. Gaussian, $E[\epsilon_i] = 0$, $E[\epsilon_i \epsilon_j] = E[\epsilon_i] E[\epsilon_j] = 0$ if $i \neq j$, and $E[\epsilon_i^2] = \sigma^2$.
The expected value of the first term is:
$$\frac{1}{2} \sum_{i=1}^{N} E\!\left[\left(\frac{\partial E(x + \epsilon; w)}{\partial (x_i + \epsilon_i)}\right)^{\!2}\right] = \frac{1}{2} \sum_{i=1}^{N} E\!\left[\left(\frac{\partial E(x; w)}{\partial x_i} + \sum_{j=1}^{N} \frac{\partial^2 E(x; w)}{\partial x_i \partial x_j} \epsilon_j + O(\epsilon_i^2)\right)^{\!2}\right]$$
$$= \frac{1}{2} \sum_{i=1}^{N} \left( \left(\frac{\partial E(x; w)}{\partial x_i}\right)^{\!2} + \sigma^2 \sum_{j=1}^{N} \left(\frac{\partial^2 E(x; w)}{\partial x_i \partial x_j}\right)^{\!2} \right) + O(\epsilon_i^2) \qquad (4)$$
The expected value of the second term is:
"
#!
X
N 2
N
N
X
? E(x + ?; w)
? 2 E(x; w) X ? 3 E(x; w)
2
E
E
=
?j + O(?i )
+
(?(xi + ?i ))2
(?xi )2
?xi ?xi ?xj
i=1
i=1
i=1
N 2
X
? E(x; w)
=
+ O(?2i )
2
(?x
i)
i=1
(5)
Putting the terms back together, we have:
$$E\!\left[J^S(x + \epsilon; w)\right] = \frac{1}{2} \sum_{i=1}^{N} \left(\frac{\partial E}{\partial x_i}\right)^{\!2} - \sum_{i=1}^{N} \frac{\partial^2 E}{(\partial x_i)^2} + \frac{1}{2} \sigma^2 \sum_{i=1}^{N} \sum_{j=1}^{N} \left(\frac{\partial^2 E}{\partial x_i \partial x_j}\right)^{\!2} + O(\epsilon^2) \qquad (6)$$
where $E = E(x; w)$. This is the full regularized Score Matching loss. While minimization of the above
loss may be feasible in some situations, in general it requires differentiation of the full Hessian w.r.t.
x which scales like $O(W^2)$. However, the off-diagonal elements of the Hessian are often dominated
by the diagonal. Therefore, we will use the diagonal approximation:
$$J_{\mathrm{reg}}(x; w; \lambda) = J^S(x; w) + \lambda \sum_{i=1}^{N} \left(\frac{\partial^2 E}{(\partial x_i)^2}\right)^{\!2} \qquad (7)$$
where $\lambda$ sets the regularization strength and is related to (but not exactly equal to) $\frac{1}{2}\sigma^2$ in equation (6).
This regularized loss is computationally convenient: the added complexity is almost negligible since
differentiation of the second derivative terms $(\partial^2 E/(\partial x_i)^2)$ w.r.t. the weights is already required
for unregularized Score Matching. The regularizer is related to Tikhonov regularization [22] and
curvature-driven smoothing [2] where the square of the curvature of the energy surface at the data
points is also penalized. However, its application has been limited since (contrary to our case) in
the general case it adds considerable computational cost.
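As a sanity check on this derivation, consider the one-dimensional quadratic energy $E(x) = ax^2/2$, for which the Taylor expansion is exact: the noise-averaged loss should coincide with the regularized loss for $\lambda = \sigma^2/2$. A small Monte Carlo verification (our own toy example, with arbitrary constants):

import numpy as np

a, x, sigma = 1.7, 0.8, 0.3                   # toy constants (assumed)

def J(x):                                     # Eq. (2) for E(x) = a x^2 / 2
    dE, d2E = a * x, a
    return 0.5 * dE ** 2 - d2E

rng = np.random.default_rng(0)
eps = rng.normal(0.0, sigma, size=1_000_000)
noisy = np.mean(J(x + eps))                   # E[J(x + eps)]
reg = J(x) + 0.5 * sigma ** 2 * a ** 2        # Eq. (7) with lambda = sigma^2 / 2
print(noisy, reg)                             # nearly identical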
Figure 1: Illustration of local computational flow around some node j. Black lines: computation of
quantities $\gamma_j = \partial E/\partial g_j$, $\gamma_j^{\circ} = \partial^2 E/(\partial g_j)^2$ and the SM loss J(x; w). Red lines indicate computational flow for differentiation of the Score Matching loss: computation of e.g. $\partial J/\partial \gamma_j$ and $\partial J/\partial g_j$.
The influence of weights is not shown, for which the derivatives are computed in the last step.
3 Automatic differentiation of J(x; w)
In most optimization methods for energy-based models [17], the sample loss is defined in readily
obtainable quantities obtained by forward inference in the model. In such situations, the required
derivatives w.r.t. the weights can be obtained in a straightforward and efficient fashion by standard
application of the backpropagation algorithm.
For Score Matching, the situation is more complex since the (regularized) loss (equations 2, 7) is
defined in terms of $\{\partial E/\partial x_i\}$ and $\{\partial^2 E/(\partial x_i)^2\}$, each term being some function of x and w.
In earlier publications on Score Matching for continuous variables [10, 12, 13, 15], the authors
rewrote $\{\partial E/\partial x_i\}$ and $\{\partial^2 E/(\partial x_i)^2\}$ to their explicit forms in terms of x and w by manually
differentiating the energy$^2$. Subsequently, derivatives of the loss w.r.t. the weights can be found.
This manual differentiation was repeated for different models, and is arguably a rather inflexible
approach. A procedure that could automatically (1) compute and (2) differentiate the loss would
make SM estimation more accessible and flexible in practice.
A large class of models (e.g. ICA, Product-of-Experts and Fields-of-Experts) can be interpreted as
a form of feed-forward neural network. Consequently, the terms $\{\partial E/\partial x_i\}$ and $\{\partial^2 E/(\partial x_i)^2\}$ can
be efficiently computed using a forward and backward pass: the first pass performs forward inference
(computation of E(x; w)) and the second pass applies the backpropagation algorithm [3] to obtain
the derivatives of the energy w.r.t. the data point ($\{\partial E/\partial x_i\}$ and $\{\partial^2 E/(\partial x_i)^2\}$). However, only
the loss J(x; w) is obtained by these two steps. For differentiation of this loss, one must perform an
additional forward and backward pass.
3.1 Obtaining the loss
Consider a feed-forward neural network with input vector x and weights w and an ordered set of
nodes indexed $1 \ldots N$, each node j with child nodes $i \in \mathrm{children}(j)$ with $j < i$ and parent nodes
$k \in \mathrm{parents}(j)$ with $k < j$. The first $D < N$ nodes are input nodes, for which the activation value
is $g_j = x_j$. For the other nodes (hidden units and output unit), the activation value is determined by
a differentiable scalar function $g_j(\{g_i\}_{i \in \mathrm{parents}(j)}, w)$. The network's "output" (energy) is determined as the activation of the last node: $E(x; w) = g_N(.)$. The values $\gamma_j = \partial E/\partial g_j$ are efficiently
computed by backpropagation. However, backpropagation of the full Hessian scales like $O(W^2)$,
where W is the number of model weights. Here, we limit backpropagation to the diagonal approximation which scales like $O(W)$ [1]. This will still result in the correct gradients $\partial^2 E/(\partial x_j)^2$ for
one-layer models and the models considered in this paper. Rewriting the equations for the full Hessian is a straightforward exercise. For brevity, we write $\gamma_j^{\circ} = \partial^2 E/(\partial g_j)^2$. The SM loss is split in
two terms: $J(x; w) = K + L$ with $K = \frac{1}{2} \sum_{j=1}^{D} \gamma_j^2$ and $L = \sum_{j=1}^{D} \left(-\gamma_j^{\circ} + \lambda (\gamma_j^{\circ})^2\right)$. The equations
for inference and backpropagation are given as the first two for-loops in Algorithm 1.
2
Most previous publications do not express unnormalized neg. log-density as ?energy?
4
Input: x, w (data and weight vectors)
for j ? D + 1 to N do
compute gj (.)
for i ? parents(j) do
?2g
?g
compute ?gji , (?gi )j2 ,
// Forward propagation
? 3 gj
(?gi )3
?
?N ? 1, ?N
?0
for j ? N ? 1 to 1 do
P
k
?j ? k?children(j) ?k ?g
?gj
2
P
? 2 gk
?gk
?
?j? ? k?children(j) ?k (?g
2 + ?k
)
?g
j
j
// Backpropagation
for j ? 1 to D do
?K
?L
?L
?
??j ? ?j ; ??j ? 0; ?? ? ? ?1 + 2??j
j
for j ? D + 1 to N do
P
?K
i?parents(j)
??j ?
P
?L
?
i?parents(j)
??j
P
?L
i?parents(j)
?? ? ?
j
// SM Forward propagation
?K
??i
?L
??i?
?gj
?gi
? 2 gj
(?gi )2
?L
??i?
for j ? N to D + 1 do
P
?K
k?children(j)
?gj ?
P
?L
k?children(j)
?gj ?
for w ? w do
PN
?J
j=D+1
?w ?
?K
?gk
?L
?gk
?K ?gj
?gj ?w
?gj
?gi
?gk
?gj
?gk
?gj
+
2
+
+
?L ?gj
??i ?gi
// SM Backward propagation
? 2 gk
?K
??j ?k (?gj )2
? 2 gk
?L
??j ?k (?gj )2
?L ? ?gk
+ 2 ??
? ?k ?g
j
j
? 2 gk
(?gj )2
+
? 3 gk
?L
??j? ?k (?gj )3
// Derivatives wrt weights
+
?L ?gj
?gj ?w
+
?K ??j
??j ?w
+
?L ??j
??j ?w
+
?
?L ??j
??j? ?w
Algorithm 1: Compute ?w J. See sections 3.1 and 3.2 for context.
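To illustrate the first two loops of Algorithm 1, the following toy check (our own construction, not from the paper) runs the γ recursions on a two-edge chain x → tanh → linear energy and compares them against finite differences of E:

import numpy as np

w1, w2, x = 0.7, -1.3, 0.4               # toy weights and input (assumed)

def energy(x):
    return w2 * np.tanh(w1 * x)          # chain: g1 = x, g2 = tanh(w1 g1), E = w2 g2

# forward pass: activation and local edge derivatives
g2 = np.tanh(w1 * x)
dg2_dg1 = w1 * (1 - g2 ** 2)                      # dg2/dg1
d2g2_dg1 = -2 * w1 ** 2 * g2 * (1 - g2 ** 2)      # d^2 g2 / (dg1)^2
dg3_dg2, d2g3_dg2 = w2, 0.0                       # E is linear in g2

# backpropagation loop of Algorithm 1
gamma3, gamma3_o = 1.0, 0.0                       # gamma_N = 1, gamma_N deg = 0
gamma2 = gamma3 * dg3_dg2
gamma2_o = gamma3_o * dg3_dg2 ** 2 + gamma3 * d2g3_dg2
gamma1 = gamma2 * dg2_dg1                         # should equal dE/dx
gamma1_o = gamma2_o * dg2_dg1 ** 2 + gamma2 * d2g2_dg1   # d^2E/dx^2

h = 1e-4                                          # finite-difference check
fd1 = (energy(x + h) - energy(x - h)) / (2 * h)
fd2 = (energy(x + h) - 2 * energy(x) + energy(x - h)) / h ** 2
print(np.allclose([gamma1, gamma1_o], [fd1, fd2], atol=1e-5))   # True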
3.2 Differentiating the loss
Since the computation of the loss J(x; w) is performed by a deterministic forward-backward mechanism, this two-step computation can be interpreted as a combination of two networks: the original
network for computing $\{g_j\}$ and E(x; w), and an appended network for computing $\{\gamma_j\}$, $\{\gamma_j^{\circ}\}$ and
eventually J(x; w). See figure 1. The combined network can be differentiated by an extended
version of the double-backpropagation procedure [5], with the main difference that the appended
network not only computes $\{\gamma_j\}$, but also $\{\gamma_j^{\circ}\}$. Automatic differentiation of the combined network
consists of two phases, corresponding to reverse traversal of the appended and original network
respectively: (1) obtaining $\partial K/\partial \gamma_j$, $\partial L/\partial \gamma_j$ and $\partial L/\partial \gamma_j^{\circ}$ for each node j in order 1 to N; (2) obtaining $\partial J/\partial g_j$ for each node j in order N to D+1. These procedures are given as the last two
for-loops in Algorithm 1. The complete algorithm scales like $O(W)$.
4 Experiments
Consider the following Product-of-Experts (PoE) model:
$$E(x; W, \alpha) = \sum_{i=1}^{M} \alpha_i\, g(w_i^{\top} x) \qquad (8)$$
where M is the number of experts, $w_i$ is an image filter forming the i-th row of W, and $\alpha_i$ are scaling
parameters. Like in [10], the filters are L2-normalized to prevent a large portion from vanishing. We
use a slightly modified Student's t-distribution ($g(z) = \log((cz)^2/2 + 1)$) for latent space, so this is
also a Product of Student's t-distribution model [24]. The parameter c is a non-learnable horizontal
scaling parameter, set to $e^{1.5}$. The vertical scaling parameters $\alpha_i$ are restricted to positive, by setting
$\alpha_i = \exp \theta_i$ where $\theta_i$ is the actual weight.
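For this one-layer model, the data derivatives of the energy have simple closed forms, so the regularized SM loss of equation (7) can be evaluated directly. The following NumPy sketch is our own illustration; the dimensions, the random data and the value of λ are placeholders rather than the paper's settings:

import numpy as np

def sm_loss_poe(X, W, alpha, lam, c=np.e ** 1.5):
    """Regularized SM loss of Eq. (7) for the PoE energy of Eq. (8).
    X: (T, N) data, W: (M, N) filters, alpha: (M,) positive scales."""
    Z = X @ W.T                                        # (T, M) filter responses
    u = (c * Z) ** 2 / 2 + 1
    g1 = c ** 2 * Z / u                                # g'(z)
    g2 = c ** 2 * (1 - c ** 2 * Z ** 2 / 2) / u ** 2   # g''(z)
    dE = (alpha * g1) @ W                              # (T, N): dE/dx_i
    d2E = (alpha * g2) @ W ** 2                        # (T, N): d^2E/(dx_i)^2
    return np.mean(np.sum(0.5 * dE ** 2 - d2E + lam * d2E ** 2, axis=1))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))                         # toy "image patches"
W = rng.normal(size=(16, 16))
W /= np.linalg.norm(W, axis=1, keepdims=True)          # L2-normalized filters
print(sm_loss_poe(X, W, alpha=np.ones(16), lam=1e-2))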
4.1 MNIST
The first task is to estimate a density model of the MNIST handwritten digits [16]. Since a large
number of models need to be learned, a 2× downsampled version of MNIST was used. The MNIST
dataset is highly non-smooth: for each pixel, the extreme values (0 and 1) are highly frequent, leading to sharp discontinuities in the data density at these points. It is well known that for models
with square weight matrix W, normalized g(.) (meaning $\int_{-\infty}^{\infty} \exp(-g(x))\, dx = 1$) and $\alpha_i = 1$, the
normalizing constant can be computed [10]: $Z(w) = |\det W|$. For this special case, models can be
compared by computing the log-likelihood for the training and test set. Unregularized and regularized models for different choices of $\lambda$ were estimated and log-likelihood values were computed.
Subsequently, these models were compared on a classification task. For each MNIST digit class, a
small sample of 100 data points was converted to internal features by different models. These features, combined with the original class label, were subsequently used to train a logistic regression
classifier for each model. For the PoE model, the "activations" $g(w_i^{\top} x)$ were used as features. Classification error on the test set was compared against reported results for optimal RBM and SESM
models [20].
Results. As expected, unregularized estimation did not result in an accurate model. Figure 2 shows
how the log-likelihood of the train and test set is optimal at $\lambda^* \approx 0.01$, and decreases for smaller
$\lambda$. Coincidentally, the classification performance is optimal for the same choice of $\lambda$.
4.2 Denoising
Consider grayscale natural image data from the Berkeley dataset [18]. The data is quantized and
therefore non-smooth, so regularization is potentially beneficial. In order to estimate the correct
regularization magnitude, we again estimated a PoE model as in equation (8) with square W, such
that $Z(w) = |\det W|$, and computed the log-likelihood of 10,000 random patches under different
regularization levels. We found that $\lambda^* \approx 10^{-5}$ for maximum likelihood (see figure 2d). This
value is lower than for MNIST data since natural image data is "less unsmooth". Subsequently, a
convolutional PoE model known as Fields-of-Experts [21] (FoE) was estimated using regularized
SM:
$$E(x; W, \alpha) = \sum_{p} \sum_{i=1}^{M} \alpha_i\, g(w_i^{\top} x^{(p)}) \qquad (9)$$
where p runs over image positions, and $x^{(p)}$ is a square image patch at p. The first model has the
same architecture as the CD-1 trained model in [21]: 5 × 5 receptive fields, 24 experts (M = 24),
and $\alpha_i$ and g(.) as in our PoE model. Note that qualitative results of a similar model estimated
with SM have been reported earlier [15]. We found that for best performance, the model is learned
on images "whitened" with a 5 × 5 Laplacian kernel. This is approximately equivalent to ZCA
whitening used in [15].
Models are evaluated by means of Bayesian denoising using maximum a posteriori (MAP) estimation. As in a general Bayesian image restoration framework, the goal is to estimate the original
input x given a noisy image y using the Bayesian proportionality $p(x|y) \propto p(y|x)\, p(x)$. The assumption is white Gaussian noise such that the likelihood is $p(y|x) \sim \mathcal{N}(0, \sigma^2 I)$. The model
$E(x; w) = -\log p(x; w) - \log Z(w)$ is our prior. The gradient of the log-posterior is:
$$\nabla_x \log p(x|y) = -\nabla_x E(x; w) - \nabla_x \sum_{i=1}^{N} \frac{(y_i - x_i)^2}{2\sigma^2} \qquad (10)$$
Denoising is performed by initializing x to the noisy image, and 300 subsequent steps of steepest
descent according to $x' \leftarrow x + \eta \nabla_x \log p(x|y)$, with $\eta$ annealed from $2 \cdot 10^{-2}$ to $5 \cdot 10^{-4}$. For comparison, we ran the same denoising procedure with models estimated by CD-1 and Basis Rotation,
from [21] and [23] respectively. Note that the CD-1 model is trained using PCA whitening. The
CD-1 model has been extensively applied to denoising before [21] and shown to compare favourably
to specialized denoising methods.
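A schematic version of this denoising loop in NumPy. The gradient step follows equation (10); the geometric annealing schedule, the initialization at the noisy image and the quadratic placeholder prior are our own assumptions for illustration:

import numpy as np

def denoise(y, grad_energy, sigma=5/256, steps=300,
            eta_start=2e-2, eta_end=5e-4):
    x = y.copy()                                    # start from the noisy image
    for eta in np.geomspace(eta_start, eta_end, steps):
        grad_logp = -grad_energy(x) + (y - x) / sigma ** 2   # Eq. (10)
        x = x + eta * grad_logp
    return x

# toy usage with a quadratic stand-in prior E(x) = 0.5 * ||x||^2
y = np.random.default_rng(0).normal(size=(32, 32))
x_hat = denoise(y, grad_energy=lambda x: x)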
Results. Training of the convolutional model took about 1 hour on a 2 GHz machine. Regularization
turns out to be important for optimal denoising (see figure 2[e-g]). See table 1 for denoising performance of the optimal model for specific standard images. Our model performed significantly better
[Figure 2, panels (a)-(q): log-likelihood (avg.) and classification error (%) versus the regularization parameter (with Reg. SM, RBM and SESM curves); log-likelihood of image patches and denoised PSNR versus $\log_{10} \lambda$ for noise levels $\sigma_{\mathrm{noise}} = 1/256$, $5/256$ and $15/256$; followed by image panels.]
Figure 2: (a) Top: selection of downsampled MNIST datapoints. Middle and bottom: random
sample of filters from unregularized and regularized ($\lambda = 0.01$) models, respectively. (b) Average
log-likelihood of MNIST digits in training and test sets for choices of $\lambda$. Note that $\lambda^* \approx 0.01$, both
for maximum likelihood and optimal classification. (c) Test set error of a logistic regression classifier
learned on top of features, with only 100 samples per class, for different choices of $\lambda$. Optimal error
rates of SESM and RBM (figure 1a in [20]) are shown for comparison. (d) Log-likelihood of 10,000
random natural image patches for complete model, for different choices of $\lambda$. (e-g) PSNR of 500
denoised images, for different levels of noise and choices of $\lambda$. Note that $\lambda^* \approx 10^{-5}$, both for
maximum likelihood and best denoising performance. (h) Some natural images from the Berkeley
dataset. (i) Filters of model with 5 × 5 × 24 weights learned with CD-1 [21], (j) filters of our model
with 5 × 5 × 24 weights, (k) random selection of filters from the Basis Rotation [23] model with
15 × 15 × 25 weights, (l) random selection of filters from our model with 8 × 8 × 64 weights. (m)
Detail of original Lena image. (n) Detail with noise added ($\sigma_{\mathrm{noise}} = 5/256$). (o) Denoised with
model learned with CD-1 [21], (p) Basis Rotation [23], (q) and Score Matching with (near) optimal
regularization.
than the Basis Rotation model and slightly better than the CD-1 model. As reported earlier in [15],
we can verify that the filters are completely intuitive (Gabor filters with different phase, orientation
and scale) unlike the filters of CD-1 and Basis Rotation models (see figure 2[i-l]).
Table 1: Peak signal-to-noise ratio (PSNR) of denoised images with $\sigma_{\mathrm{noise}} = 5/256$. Shown errors
are aggregated over different noisy images.
Image      CD-1 (5×5)×24    Basis Rotation (15×15)×25    Our model (5×5)×24
Barbara    37.30 ± 0.01     37.08 ± 0.02                 37.31 ± 0.01
Peppers    37.63 ± 0.01     37.09 ± 0.02                 37.41 ± 0.03
House      37.85 ± 0.02     37.73 ± 0.03                 38.03 ± 0.04
Lena       38.16 ± 0.02     37.97 ± 0.01                 38.19 ± 0.01
Boat       36.33 ± 0.01     36.21 ± 0.01                 36.53 ± 0.01
4.3 Super-resolution
In addition, models are compared with respect to their performance on a simple version of super-resolution as follows. An original image $x_{\mathrm{orig}}$ is sampled down to image $x_{\mathrm{small}}$ by averaging blocks
of 2 × 2 pixels into a single pixel. A first approximation x is computed by linearly scaling up $x_{\mathrm{small}}$
and subsequent application of a low-pass filter to remove false high-frequency information. The
image is then fine-tuned by 200 repetitions of two subsequent steps: (1) refining the image slightly
using $x' \leftarrow x - \eta \nabla_x E(x; w)$ with $\eta$ annealed from $2 \cdot 10^{-2}$ to $5 \cdot 10^{-4}$; (2) updating each $k \times k$
block of pixels such that their average corresponds to the down-sampled value. Note: the simple
block-downsampling results in serious aliasing artifacts in the Barbara image, so the Castle image
is used instead.
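The following sketch shows one possible reading of this refine-and-constrain procedure (not the authors' code; the nearest-neighbor initial upscaling and the geometric annealing schedule are simplifications):

import numpy as np

def superresolve(x, x_small, grad_energy, k=2, steps=200,
                 eta_start=2e-2, eta_end=5e-4):
    for eta in np.geomspace(eta_start, eta_end, steps):
        x = x - eta * grad_energy(x)          # (1) refine with the prior
        # (2) shift each k x k block so its mean matches the small image
        H, W = x.shape
        blocks = x.reshape(H // k, k, W // k, k)
        means = blocks.mean(axis=(1, 3))
        blocks += (x_small - means)[:, None, :, None]
        x = blocks.reshape(H, W)
    return x

rng = np.random.default_rng(0)
x_small = rng.normal(size=(16, 16))
x0 = np.repeat(np.repeat(x_small, 2, axis=0), 2, axis=1)   # naive upscaling
x_hat = superresolve(x0, x_small, grad_energy=lambda x: x)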
Results. PSNR values for standard images are shown in table 2. The considered models give
slight improvements in terms of PSNR over the initial solution with low-pass filter. Still, our model
did slightly better than the CD-1 and Basis Rotation models.
Table 2: Peak signal-to-noise ratio (PSNR) of super-resolved images for different models.
Image      Low-pass filter    CD-1 (5×5)×24    Basis Rotation (15×15)×25    Our model (5×5)×24
Peppers    27.54              29.11            27.69                        29.76
House      33.15              33.53            33.41                        33.48
Lena       32.39              33.31            33.07                        33.46
Boat       29.20              30.81            30.77                        30.82
Castle     24.19              24.15            24.26                        24.31
5 Conclusion
We have shown how the addition of a principled regularization term to the expression of the Score
Matching loss lifts continuity assumptions on the data density, such that the estimation method
becomes more generally applicable. The effectiveness of the regularizer was verified with the discontinuous MNIST and Berkeley datasets, with respect to likelihood of test data in the model. For
both datasets, the optimal regularization parameter is approximately equal for both likelihood and
subsequent classification and denoising tasks. In addition, we showed how computation and differentiation of the Score Matching loss can be automated using an efficient algorithm.
8
References
[1] S. Becker and Y. LeCun. Improving the convergence of back-propagation learning with second-order
methods. In D. Touretzky, G. Hinton, and T. Sejnowski, editors, Proc. of the 1988 Connectionist Models
Summer School, pages 29-37, San Mateo, 1989. Morgan Kaufmann.
[2] C. M. Bishop. Neural networks for pattern recognition. Oxford University Press, Oxford, UK, 1996.
[3] A. E. Bryson and Y. C. Ho. Applied optimal control; optimization, estimation, and control. Blaisdell Pub.
Co., Waltham, Massachusetts, 1969.
[4] M. A. Carreira-Perpinan and G. E. Hinton. On contrastive divergence learning. In Artificial Intelligence
and Statistics, 2005.
[5] H. Drucker and Y. LeCun. Improving generalization performance using double backpropagation. IEEE
Transactions on Neural Networks, 3(6):991-997, 1992.
[6] M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proc. Int. Conf. on Artificial Intelligence and Statistics (AISTATS 2010),
2010.
[7] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation,
14(8):1771-1800, 2002.
[8] G. E. Hinton, S. Osindero, and Y. W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527-1554, 2006.
[9] G. E. Hinton, S. Osindero, M. Welling, and Y. W. Teh. Unsupervised discovery of non-linear structure
using contrastive backpropagation. Cognitive Science, 30(4):725-731, 2006.
[10] A. Hyvärinen. Estimation of non-normalized statistical models by score matching. Journal of Machine
Learning Research, 6:695-709, 2005.
[11] A. Hyvärinen. Some extensions of score matching. Computational Statistics & Data Analysis,
51(5):2499-2512, 2007.
[12] A. Hyvärinen. Optimal approximation of signal priors. Neural Computation, 20:3087-3110, 2008.
[13] U. Köster and A. Hyvärinen. A two-layer ICA-like model estimated by score matching. In J. M. de Sá,
L. A. Alexandre, W. Duch, and D. P. Mandic, editors, ICANN (2), volume 4669 of Lecture Notes in
Computer Science, pages 798-807. Springer, 2007.
[14] U. Köster, J. T. Lindgren, and A. Hyvärinen. Estimating Markov random field potentials for natural
images. Proc. Int. Conf. on Independent Component Analysis and Blind Source Separation (ICA 2009),
2009.
[15] U. Köster, J. T. Lindgren, and A. Hyvärinen. Estimating Markov random field potentials for natural
images. In T. Adali, C. Jutten, J. M. T. Romano, and A. K. Barros, editors, ICA, volume 5441 of Lecture
Notes in Computer Science, pages 515-522. Springer, 2009.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
In Proceedings of the IEEE, pages 2278-2324, 1998.
[17] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F. Huang. A tutorial on energy-based learning. In
G. Bakir, T. Hofman, B. Schölkopf, A. Smola, and B. Taskar, editors, Predicting Structured Data. MIT
Press, 2006.
[18] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its
application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. 8th Int'l
Conf. Computer Vision, volume 2, pages 416-423, July 2001.
[19] S. Osindero and G. E. Hinton. Modeling image patches with a directed hierarchy of Markov random
fields. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing
Systems 20, pages 1121-1128. MIT Press, Cambridge, MA, 2008.
[20] M. Ranzato, Y. Boureau, and Y. LeCun. Sparse feature learning for deep belief networks. In Advances in
Neural Information Processing Systems (NIPS 2007), 2007.
[21] S. Roth and M. J. Black. Fields of experts. International Journal of Computer Vision, 82(2):205-229,
2009.
[22] A. N. Tikhonov. On the stability of inverse problems. Dokl. Akad. Nauk SSSR, (39):176-179, 1943.
[23] Y. Weiss and W. T. Freeman. What makes a good model of natural images. In CVPR 2007: Proceedings
of the 2007 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE
Computer Society, pages 1-8, 2007.
[24] M. Welling, G. E. Hinton, and S. Osindero. Learning sparse topographic representations with products
of Student-t distributions. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information
Processing Systems 15, pages 1359-1366. MIT Press, Cambridge, MA, 2003.
Layer-wise analysis of deep networks with Gaussian
kernels
Grégoire Montavon
Machine Learning Group
TU Berlin
Mikio L. Braun
Machine Learning Group
TU Berlin
Klaus-Robert Müller
Machine Learning Group
TU Berlin
[email protected]
[email protected]
[email protected]
Abstract
Deep networks can potentially express a learning problem more efficiently than local learning machines. While deep networks outperform local learning machines
on some problems, it is still unclear how their nice representation emerges from
their complex structure. We present an analysis based on Gaussian kernels that
measures how the representation of the learning problem evolves layer after layer
as the deep network builds higher-level abstract representations of the input. We
use this analysis to show empirically that deep networks build progressively better representations of the learning problem and that the best representations are
obtained when the deep network discriminates only in the last layers.
1 Introduction
Local learning machines such as nearest neighbors classifiers, radial basis function (RBF) kernel
machines or linear classifiers predict the class of new data points from their neighbors in the input
space. A limitation of local learning machines is that they cannot generalize beyond the notion
of continuity in the input space. This limitation becomes detrimental when the Bayes classifier
has more variations (ups and downs) than the number of labeled samples available. This situation
typically occurs on problems where an instance, let's say a handwritten digit, can take various
forms due to irrelevant variation factors such as its position, its size, its thickness and more complex
deformations. These multiple factors of variation can greatly increase the complexity of the learning
problem (Bengio, 2009).
This limitation motivates the creation of learning machines that can map the input space into a
higher-level representation where regularities of higher order than simple continuity in the input
space can be expressed. Engineered feature extractors, nonlocal kernel machines (Zien et al., 2000)
or deep networks (Rumelhart et al., 1986; LeCun et al., 1998; Hinton et al., 2006; Bengio et al., 2007)
can implement these more complex regularities. Deep networks implement them by distorting the
input space so that initially distant points in the input space appear closer. Also, their multilayered
nature acts as a regularizer, allowing them to share at a given layer features computed at the previous
layer (Bengio, 2009). Understanding how the representation is built in a deep network and how to
train it efficiently received a lot of attention (Goodfellow et al., 2009; Larochelle et al., 2009; Erhan
et al., 2010). However, it is still unclear how their nice representation emerges from their complex
structure, in particular, how the representation evolves from layer to layer.
The main contribution of this paper is to introduce an analysis based on RBF kernels and on the
kernel principal component analysis (kPCA, Schölkopf et al., 1998) that can capture and quantify the
layer-wise evolution of the representation in a deep network. In practice, for each layer $1 \leq l \leq L$
of the deep network, we take a small labeled dataset D, compute its image $D^{(l)}$ at the layer l of the
deep network and measure what dimensionality the local model built on top of $D^{(l)}$ must have in
order to solve the learning problem with a certain accuracy.
[Figure 1: a deep network x → f1 → f2 → f3 with outputs y read at layers l = 0, 1, 2, 3 (input to output), and plots of the error e(d) versus the dimensionality d of the local model for each layer l, with the error e(d_o) marked at a fixed dimensionality d_o.]
Figure 1: As we move from the input to the output of the deep network, better representations of
the learning problem are built. We measure this improvement with the layer-wise RBF analysis
presented in Section 2 and Section 3.2. This analysis relates the prediction error e(d) to the dimensionality d of a local model built at each layer of the deep network. As the data is propagated
through the deep network, lower errors are obtained with lower-dimensional local models. The plots
on the right illustrate this dynamic where the thick gray arrows indicate the forward path of the deep
network and where $d_o$ is a fixed number of dimensions.
We apply this novel analysis to a multilayer perceptron (MLP), a pretrained multilayer perceptron
(PMLP) and a convolutional neural network (CNN). We observe in each case that the error and the
dimensionality of the local model decrease as we propagate the dataset through the deep network.
This reveals that the deep network improves the representation of the learning problem layer after
layer. This progressive layer-wise simplification is illustrated in Figure 1. In addition, we observe
that the CNN and the PMLP tend to postpone the discrimination to the last layers, leading to more
transferable features and better-generalizing representations than for the simple MLP. This result
suggests that the structure of a deep network, by enforcing a separation of concerns between low-level generic features and high-level task-specific features, has an important role to play in order to
build good representations.
2 RBF analysis of a learning problem
We would like to quantify the complexity of a learning problem p(y | x) where samples are drawn
independently from a probability distribution p(x, y). A simple way to do it is to measure how many
degrees of freedom (or dimensionality d) a local model must have in order to solve the learning
problem with a certain error e. This analysis relates the dimensionality d of the local model to its
prediction error e(d).
In practice, there are many ways to define the dimensionality of a model, for example, (1) the
number of samples given to the learning machine, (2) the number of required hidden nodes of a
neural network (Murata et al., 1994), (3) the number of support vectors of a SVM or (4) the number
of leading kPCA components of the input distribution p(x) used in the model. The last option is
chosen for the following two reasons:
First, the kPCA components are added cumulatively to the prediction model as the dimensionality of
the model increases, thus offering stability, while in the case of support vector machines, previously
chosen support vectors might be dropped in favor of other support vectors in higher-dimensional
models.
Second, the leading kPCA components obtained with a finite and typically small number of samples
n are similar to those that would be obtained in the asymptotic case where p(x, y) is fully observed
$(n \to \infty)$. This property is shown by Braun (2006) and Braun et al. (2008) in the case of a single
kernel, and by extension, in the case of a finite set of kernels.
This last property is particularly useful since p(x, y) is unknown and only a finite number of observations are available. The analysis presented here is strongly inspired by the relevant dimensionality
estimation (RDE) method of Braun et al. (2008) and is illustrated in Figure 2 for a small two-dimensional toy example.
[Figure 2 panels: d = 1, e(d) = 0.5; d = 2, e(d) = 0.25; d = 3, e(d) = 0.25; d = 4, 5, 6, e(d) = 0.]
Figure 2: Illustration of the RBF analysis on a toy dataset of 12 samples. As we add more and more
leading kPCA components, the model becomes more flexible, creating a better decision boundary.
Note that with four leading kPCA components out of the 12 kPCA components, all the samples are
already classified perfectly.
In the next lines, we present the computation steps required to estimate
the error as a function of the dimensionality.
Let {(x1 , y1 ), . . . , (xn , yn )} be a dataset of n points drawn independently from p(x, y) where yi is
an indicator vector having value 1 at the index corresponding to the class of xi and 0 elsewhere. Let
X = (x1 , . . . , xn ) and Y = (y1 , . . . , yn ) be the matrices associated to the inputs and labels of the
dataset. We compute the kernel matrix K associated to the dataset:
$$[K]_{ij} = k(x_i, x_j) \quad \text{where} \quad k(x, x') = \exp\left(-\frac{\|x - x'\|^2}{2\sigma^2}\right).$$
The kPCA components u1 , . . . , un are obtained by performing an eigendecomposition of K where
eigenvectors u_1, . . . , u_n have unit length and eigenvalues λ_1, . . . , λ_n are sorted by decreasing magnitude:

K = (u_1 | . . . | u_n) · diag(λ_1, . . . , λ_n) · (u_1 | . . . | u_n)ᵀ

Let Û = (u_1 | . . . | u_d) and Λ̂ = diag(λ_1, . . . , λ_d) be a d-dimensional approximation of the eigendecomposition. We fit a linear model β̂ that maps the projection on the d leading components of the training data to the log-likelihood of the classes:

β̂ = argmin_β ‖ exp(Û Λ̂ β) − Y ‖²_F
where β is a matrix of the same size as Y and where the exponential function is applied element-wise. The predicted class log-probability log(ŷ) of a test point (x, y) is computed as

log(ŷ) = k(x, X) Û Λ̂⁻¹ β̂ + C

where k(x, X) is a matrix of size 1 × n computing the similarities between the new point and each training point and where C is a normalization constant. The test error is defined as:

e(d) = Pr( argmax ŷ ≠ argmax y )
The training and test error can be used as an approximation bound for the asymptotic case n → ∞
where the data would be projected on the real eigenvectors of the input distribution. In the next
sections, the training and test error are depicted respectively as dotted and solid lines in Figure 3 and
as the bottom and the top of error bars in Figure 4. For each dimension, the kernel scale parameter ?
that minimizes e(d) is retained, leading to a different kernel for each dimensionality. The rationale
for taking a different kernel for each model is that the optimal scale parameter typically shrinks as
more leading components of the input distribution are observed.
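For concreteness, the analysis above can be condensed into a short script. The sketch below is a simplified variant, not the authors' code: it fits the class-indicator matrix with plain least-squares on the leading kPCA components (in place of the exponential-link model above) and uses the standard Nyström formula for out-of-sample projections; all names are ours.

    import numpy as np

    def rbf_error_curve(X, Y, Xte, Yte, sigma, dims):
        """Estimate e(d) for each d in `dims`. Y, Yte are one-hot indicator
        matrices; a least-squares fit on the d leading kPCA components stands
        in for the exponential-link model of the text."""
        sq = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq(X, X) / (2 * sigma ** 2))           # training kernel matrix
        lam, U = np.linalg.eigh(K)
        lam, U = lam[::-1], U[:, ::-1]                     # decreasing magnitude
        Kte = np.exp(-sq(Xte, X) / (2 * sigma ** 2))       # test-vs-train similarities
        errs = []
        for d in dims:
            Ud, ld = U[:, :d], np.maximum(lam[:d], 1e-12)
            beta, *_ = np.linalg.lstsq(Ud, Y, rcond=None)  # fit on training projections
            scores = (Kte @ Ud / ld) @ beta                # Nystroem out-of-sample projection
            errs.append(np.mean(scores.argmax(1) != Yte.argmax(1)))
        return errs

As in the text, e(d) would then be minimized over a grid of kernel widths σ for each dimensionality.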
3 Methodology
In order to test our two hypotheses (the progressive emergence of good representations in deep
networks and the role of the structure for postponing discrimination), we consider three deep networks of interest, namely a convolutional neural network (CNN), a multilayer perceptron (MLP)
and a variant of the multilayer perceptron pretrained in an unsupervised fashion with a deep belief
network (PMLP). These three deep networks are chosen in order to evaluate how the two types of
regularizers implemented respectively by the CNN and the PMLP impact on the evolution of the
representation layer after layer. We describe how they are built, how they are trained and how they
are analyzed layer-wise with the RBF analysis described in Section 2.
The multilayer perceptron (MLP) is a deep network obtained by alternating linear transformations
and element-wise nonlinearities. Each layer maps an input vector of size m into an output vector
of size n and consists of (1) a linear transformation linear_{m→n}(x) = w · x + b, where w is a
weight matrix of size n × m learned from the data, and (2) a nonlinearity applied element-wise
to the output of the linear transformation. Our implementation of the MLP maps two-dimensional
images of 28 × 28 pixels into a vector of size 10 (the 10 possible digits) by applying successively
the following functions:
f_1(x) = tanh(linear_{28×28→784}(x))
f_2(x) = tanh(linear_{784→784}(x))
f_3(x) = tanh(linear_{784→784}(x))
f_4(x) = softmax(linear_{784→10}(x))
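A minimal forward pass for this architecture can be sketched as follows (plain NumPy; the weight shapes follow the layer sizes above, the names are ours, and this is an illustration rather than the authors' implementation):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def mlp_forward(x, params):
        """x: a 28x28 image; params: [(w1, b1), ..., (w4, b4)] with w1, w2, w3
        of shape (784, 784) and w4 of shape (10, 784)."""
        (w1, b1), (w2, b2), (w3, b3), (w4, b4) = params
        h = np.tanh(w1 @ x.reshape(-1) + b1)   # f1: 28x28 -> 784
        h = np.tanh(w2 @ h + b2)               # f2: 784 -> 784
        h = np.tanh(w3 @ h + b3)               # f3: 784 -> 784
        return softmax(w4 @ h + b4)            # f4: 784 -> 10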
The pretrained multilayer perceptron (Hinton et al., 2006) that we abbreviate PMLP in this paper
is a variant of the MLP where weights are initialized with a deep belief network (DBN, Hinton
et al., 2006) using an unsupervised greedy layer-wise pretraining procedure. This particular weight
initialization acts as a regularizer, allowing the network to learn better-generalizing representations of the learning problem than the simple MLP.
The convolutional neural network (CNN, LeCun et al., 1998) is a deep network obtained by alternating convolution filters y = convolve^{a×b}_{m→n}(x), transforming a set of m input feature maps {x_1, . . . , x_m} into a set of n output feature maps {y_i = Σ_{j=1}^{m} w_ij ∗ x_j + b_i, i = 1, . . . , n}, where the convolution filters w_ij of size a × b are learned from data, and pooling units subsampling each feature map by a factor two. Our implementation maps images of 32 × 32 pixels into a vector of
size 10 (the 10 possible digits) by applying successively the following functions:
f_1(x) = tanh(pool(convolve^{5×5}_{1→36}(x)))
f_2(x) = tanh(pool(convolve^{5×5}_{36→36}(x)))
f_3(x) = tanh(linear_{5×5×36→400}(x))
f_4(x) = softmax(linear_{400→10}(x))
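The two building blocks can be sketched as below. We assume 'valid' convolutions and max pooling over 2 × 2 blocks (the text does not specify the pooling operation, so max pooling is our assumption); with these choices the sizes match the architecture above: 32 × 32 → 28 × 28 → 14 × 14 → 10 × 10 → 5 × 5.

    import numpy as np
    from scipy.signal import correlate2d

    def convolve_maps(xs, w, b):
        """convolve filters: xs has shape (m, h, w); w has shape (n, m, a, b);
        b has shape (n,). Returns the n output feature maps ('valid' mode)."""
        n, m = w.shape[0], w.shape[1]
        return np.stack([
            sum(correlate2d(xs[j], w[i, j], mode="valid") for j in range(m)) + b[i]
            for i in range(n)
        ])

    def pool(xs):
        """Subsample each feature map by a factor two (max over 2x2 blocks)."""
        n, h, w = xs.shape
        xs = xs[:, : h - h % 2, : w - w % 2]
        return xs.reshape(n, h // 2, 2, w // 2, 2).max(axis=(2, 4))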
The CNN is inspired by the structure of biological visual systems (Hubel and Wiesel, 1962). It
combines three ideas into a single architecture: (1) only local connections between neighboring
pixels are allowed, (2) the convolution operator applies the same filter over the whole feature map
and (3) a pooling mechanism at the top of each convolution filter adds robustness to input distortion.
These mechanisms act as a regularizer on images and other types of sequential data, and allow learning well-generalizing models from few data points.
3.1 Training the deep networks
Each deep network is trained on the MNIST handwriting digit recognition dataset (LeCun et al.,
1998). The MNIST dataset consists of predicting the digit 0–9 from scanned handwritten digits of 28 × 28 pixels. We partition randomly the MNIST training set in three subsets of 45000, 5000 and
10000 samples that are respectively used for training the deep network, selecting the parameters of
the deep network and performing the RBF analysis.
We consider three training procedures:
1. No training: the weights of the deep network are left at their initial value. If the deep network hasn't received unsupervised pretraining, the weights are set randomly according to a normal distribution N(0, ν⁻¹), where ν denotes, for a given layer, the number of input nodes that are connected to a single output node.
2. Training on an alternate task: the deep network is trained on a binary classification task that
consists of determining whether the digit is original (positive example) or whether it has
been transformed by one of the 11 possible rotation/flip combinations that differ from the
original (negative example). This problem has therefore 540000 labeled samples (45000
positives and 495000 negatives). The goal of training a deep network on an alternate task
is to learn features on a problem where the number of labeled samples is abundant and then
reuse these features to learn the target task that has typically few labels. In the alternate task
described earlier, negative examples form a cloud around the manifold of positive examples
and learning this manifold potentially allows the deep network to learn features that can be
transferred to the digit recognition task.
3. Training on the target task: the deep network is trained on the digit recognition task using
the 45000 labeled training samples.
These procedures are chosen in order to assess the forming of good representations in deep networks
and to test the role of the structure of deep networks on different aspects of learning, such as the
effectiveness of random projections, the transferability of features from one task to another and the
generalization to new samples of the same distribution.
3.2 Applying the RBF analysis to deep networks
In this section, we explain how the RBF analysis described in Section 2 is applied to analyze layer-wise the deep networks presented in Section 3.
Let f = f_L ∘ · · · ∘ f_1 be the trained deep network of depth L. Let D be the analysis dataset containing the 10000 samples of the MNIST dataset on which the deep network hasn't been trained. For each layer, we build a new dataset D^(l) corresponding to the mapping of the original dataset D to the l first layers of the deep network. Note that by definition, the index zero corresponds to the raw input data (mapped through zero layers):
D^(l) = D,   if l = 0,
D^(l) = {(f_l ∘ · · · ∘ f_1(x), t) | (x, t) ∈ D},   if 1 ≤ l ≤ L.
Then, for each dataset D^(0), . . . , D^(L) we perform the RBF analysis described in Section 2. We use n = 2500 samples for computing the eigenvectors and the remaining 7500 samples to estimate the prediction error of the model. This analysis yields for each dataset D^(l) the error as a function of the
dimensionality of the model e(d). A typical evolution of e(d) is depicted in Figure 1.
The goal of this analysis is to observe the evolution of e(d) layer after layer for the deep networks
and training procedures presented in Section 3 and to test the two hypotheses formulated in Section 1
(the progressive emergence of good representations in deep networks and the role of the structure
for postponing discrimination). The interest of using a local model to solve the learning problem
is that the local models are blind with respect to possibly better representations that could be obtained in previous or subsequent layers. This local scoping property allows for fine isolation of the
representations in the deep network. The need for local scoping also arises when "debugging" deep architectures. Sometimes, deep architectures perform reasonably well even when the first layers do something wrong. This analysis is therefore able to detect these "bugs".
The size n of the dataset is selected so that it is large enough to approximate well the asymptotic case (n → ∞) but also small enough so that computing the eigendecomposition of the kernel matrix of size n × n is fast. We choose a set of scale parameters for the RBF kernel corresponding
to the 0.01, 0.05, 0.10, 0.25, 0.5, 0.75, 0.9, 0.95 and 0.99 quantiles of the distribution of distances
between pairs of data points.
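The grid of scale parameters can be read off the pairwise-distance distribution directly; a minimal sketch (our naming):

    import numpy as np

    def sigma_grid(X, qs=(0.01, 0.05, 0.10, 0.25, 0.5, 0.75, 0.9, 0.95, 0.99)):
        """Scale parameters as quantiles of the pairwise-distance distribution."""
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        d = D[np.triu_indices_from(D, k=1)]   # distinct pairs only
        return np.quantile(d, qs)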
4 Results
Layer-wise evolution of the error e(d) is plotted in Figure 3 in the supervised training case. The
layer-wise evolution of the error when d is fixed to 16 dimensions is plotted in Figure 4. Both figures
capture the simultaneous reduction of error and dimensionality performed by the deep network when
trained on the target task. In particular, they illustrate that in the last layers, a small number of dimensions is sufficient to build a good model of the target task.
Figure 3: Layer-wise evolution of the error e(d) when the deep network has been trained on the
target task. The solid line and the dotted line represent respectively the test error and the training
error. As the data distribution is mapped through more and more layers, more accurate and lowerdimensional models of the learning problem can be obtained.
From these results, we first demonstrate some properties of deep networks trained on an "asymptotically" large number of samples. Then, we demonstrate the important role of structure in deep
networks.
4.1 Asymptotic properties of deep networks
When the deep network is trained on the target task with an "asymptotically" large number of samples (45000 samples) compared to the number of dimensions of the local model, the deep network
builds representations layer after layer in which a low number of dimensions can create more accurate models of the learning problem.
This asymptotic property of deep networks should not be thought of as a statistical superiority of
deep networks over local models. Indeed, it is still possible that a higher-dimensional local model
applied directly on the raw data performs as well as a local model applied at the output of the deep
network. Instead, this asymptotic property has the following consequence:
Despite the internal complexity of deep networks, a local interpretation of the representation is possible at each stage of the processing. This means that deep networks do not explode the original data
distribution into a statistically intractable distribution before recombining everything at the output,
but instead, apply controlled distortions and reductions of the input space that preserve the statistical
tractability of the data distribution at every layer.
4.2 Role of the structure of deep networks
We can observe in Figure 4 (left) that even when the convolutional neural network (CNN) and the
pretrained MLP (PMLP) have not received supervised training, the first layers slightly improve the
representation with respect to the target task. On the other hand, the representation built by a simple
MLP with random weights degrades layer after layer. This observation highlights the structural
prior encoded by the CNN: by convolving the input with several random convolution filters and
subsampling subsequent feature maps by a factor two, we obtain a random projection of the input
data that outperforms the implicit projection performed by an RBF kernel in terms of task relevance.
This observation closely relates to results obtained in (Ranzato et al., 2007; Jarrett et al., 2009) where
it is observed that training the deep network while keeping random weights in the first layers still
allows for good predictions by the subsequent layers. In the case of the PMLP, the successive layers
progressively disentangle the factors of variation (Hinton and Salakhutdinov, 2006; Bengio, 2009)
and simplify the learning problem.
We can observe in Figure 4 (middle) that the phenomenon is even clearer when the CNN and the
PMLP are trained on an alternate task: they are able to create generic features in the first layers
that transfer well to the target task. This observation suggests that the structure embedded in the
CNN and the PMLP enforces a separation of concerns between the first layers that encode low-level features, for example, edge detectors, and the last layers that encode high-level task-specific features.

Figure 4: Evolution of the error e(d_0) as a function of the layer l when d_0 has been fixed to 16 dimensions. The top and the bottom of the error bars represent respectively the test error and the training error of the local model.

Figure 5: Leading components of the weights (receptive fields) obtained in the first layer of each architecture (MLP, PMLP and CNN, each trained on the alternate task and on the target task). The filters learned by the CNN and the pretrained MLP are richer than the filters learned by the MLP. The first component of the MLP trained on the alternate task dominates all other components and prevents good transfer on the target task.

On the other hand, the standard MLP trained on the alternate task leads to a degradation of
representations. This degradation is even higher than in the case of random weights, despite all the
prior knowledge on pixel neighborhood contained implicitly in the alternate task.
Figure 5 shows that the MLP builds receptive fields that are spatially informative but dissimilar
between the two tasks. The fact that receptive fields are different for each task indicates that the
MLP tries to discriminate already in the first layers. The absence of a built-in separation of concerns
between low-level and high-level feature extractors seems to be a reason for the inability to learn
transferable features. It indicates that end-to-end transfer learning on unstructured learning machines is in general not appropriate and supports the recent success of transfer learning on restricted
portions of the deep network (Collobert and Weston, 2008; Weston et al., 2008) or on structured
deep networks (Mobahi et al., 2009).
When the deep networks are trained on the target task, the CNN and the PMLP solve the problem
differently than the MLP. In Figure 4 (right), we can observe that the CNN and the PMLP tend to
postpone the discrimination to the last layers while the MLP starts to discriminate already in the first
layers. This result suggests that again, the structure contained in the CNN and the PMLP enforces
a separation of concerns between the first layers encoding low-level generic features and the last
layers encoding high-level task-specific features. This separation of concerns might explain the
better generalization of the CNN and PMLP observed respectively in (LeCun et al., 1998; Hinton
et al., 2006). It also agrees with the findings of Larochelle et al. (2009) showing that the pretraining of the
PMLP must be unsupervised and not supervised in order to build well-generalizing representations.
5 Conclusion
We present a layer-wise analysis of deep networks based on RBF kernels. This analysis estimates
for each layer of the deep network the number of dimensions that is necessary in order to model well
a learning problem based on the representation obtained at the output of this layer.
We observe that a properly trained deep network creates representations layer after layer in which a
more accurate and lower-dimensional local model of the learning problem can be built.
We also observe that despite a steady improvement of representations for each architecture of interest
(the CNN, the MLP and the pretrained MLP), they do not solve the problem in the same way: the
CNN and the pretrained MLP seem to separate concerns by building low-level generic features in
the first layers and high-level task-specific features in the last layers while the MLP does not enforce
this separation. This observation emphasizes the limitations of black box transfer learning and, more
generally, of black box training of deep architectures.
References
Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems 19, pages 153–160. MIT Press, 2007.
Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
Mikio L. Braun. Accurate bounds for the eigenvalues of the kernel matrix. Journal of Machine Learning Research, 7:2303–2328, Nov 2006.
Mikio L. Braun, Joachim Buhmann, and Klaus-Robert Müller. On relevant dimensions in kernel feature spaces. Journal of Machine Learning Research, 9:1875–1908, Aug 2008.
R. Collobert and J. Weston. A unified architecture for natural language processing: Deep neural
networks with multitask learning. In International Conference on Machine Learning, ICML,
2008.
Dumitru Erhan, Yoshua Bengio, Aaron C. Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11:625–660, 2010.
Ian Goodfellow, Quoc Le, Andrew Saxe, and Andrew Y. Ng. Measuring invariances in deep networks. In Advances in Neural Information Processing Systems 22, pages 646–654, 2009.
G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, July 2006.
Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Comput., 18(7):1527–1554, 2006.
D. H. Hubel and T. N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology, 160:106–154, January 1962.
Kevin Jarrett, Koray Kavukcuoglu, Marc'Aurelio Ranzato, and Yann LeCun. What is the best multistage architecture for object recognition? In Proc. International Conference on Computer Vision (ICCV'09). IEEE, 2009.
Hugo Larochelle, Yoshua Bengio, Jérôme Louradour, and Pascal Lamblin. Exploring strategies for training deep neural networks. J. Mach. Learn. Res., 10:1–40, 2009. ISSN 1532-4435.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998.
Hossein Mobahi, Ronan Collobert, and Jason Weston. Deep learning from temporal coherence in video. In Léon Bottou and Michael Littman, editors, Proceedings of the 26th International Conference on Machine Learning, pages 737–744, Montreal, June 2009. Omnipress.
Noboru Murata, Shuji Yoshizawa, and Shun-ichi Amari. Network information criterion - determining the number of hidden units for an artificial neural network model. IEEE Transactions on Neural Networks, 5:865–872, 1994.
Genevieve B. Orr and Klaus-Robert Müller, editors. Neural Networks: Tricks of the Trade (this book is an outgrowth of a 1996 NIPS workshop), volume 1524 of Lecture Notes in Computer Science, 1998. Springer.
M. A. Ranzato, Fu J. Huang, Y. L. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Computer Vision and Pattern Recognition, 2007. CVPR '07. IEEE Conference on, pages 1–8, 2007.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986.
Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput., 10(5):1299–1319, 1998.
Jason Weston, Frédéric Ratle, and Ronan Collobert. Deep learning via semi-supervised embedding. In ICML '08: Proceedings of the 25th International Conference on Machine Learning, pages 1168–1175, 2008.
Alexander Zien, Gunnar Rätsch, Sebastian Mika, Bernhard Schölkopf, Thomas Lengauer, and Klaus-Robert Müller. Engineering support vector machine kernels that recognize translation initiation sites. Bioinformatics, 16(9):799–807, 2000.
Spectral Regularization for Support Estimation
Ernesto De Vito
DSA, Univ. di Genova, and
INFN, Sezione di Genova, Italy
Lorenzo Rosasco
CBCL - MIT, USA, and
IIT, Italy
[email protected]
[email protected]
Alessandro Toigo
Politec. di Milano, Dept. of Math., and
INFN, Sezione di Milano, Italy
[email protected]
Abstract
In this paper we consider the problem of learning from data the support of a probability distribution when the distribution does not have a density (with respect to
some reference measure). We propose a new class of regularized spectral estimators based on a new notion of reproducing kernel Hilbert space, which we call
?completely regular?. Completely regular kernels allow to capture the relevant
geometric and topological properties of an arbitrary probability space. In particular, they are the key ingredient to prove the universal consistency of the spectral
estimators and in this respect they are the analogue of universal kernels for supervised problems. Numerical experiments show that spectral estimators compare
favorably to state of the art machine learning algorithms for density support estimation.
1 Introduction
In this paper we consider the problem of estimating the support of an arbitrary probability distribution and we are more broadly motivated by the problem of learning from complex high dimensional
data. The general intuition that allows to tackle these problems is that, though the initial representation of the data is often very high dimensional, in most situations the data are not uniformly
distributed, but are in fact confined to a small (possibly low dimensional) region. Making such an
intuition rigorous is the key towards designing effective algorithms for high dimensional learning.
The problem of estimating the support of a probability distribution is of interest in a variety of applications such as anomaly/novelty detection [8], or surface modeling [16]. From a theoretical point
of view the problem has been usually considered in the setting where the probability distribution has
a density with respect to a known measure (for example the Lebesgue measure in Rd or the volume
measure on a manifold). Among others we mention [22, 5] and references therein. Algorithms inspired by Support Vector Machines (SVM), often called one-class SVM, have been proposed; see
[17, 20] and references therein. Another kernel method, related to the one we discuss in this paper, is
presented in [11]. More generally, one of the main approaches to learning from high dimensional data is
the one considered in manifold learning. In this context the data are assumed to lie on a low dimensional Riemannian sub-manifold embedded (that is represented) in a high dimensional Euclidean
space. This framework inspired algorithms to solve a variety of problems such as: semisupervised
learning [3], clustering [23], data parameterization/dimensionality reduction [15, 21], to name a few.
The basic assumption underlying manifold learning is often too restrictive to describe real data and
this motivates considering other models, such as the setting where the data are assumed to be essentially concentrated around a low dimensional manifold as in [12], or can be modeled as samples
from a metric space as in [10].
In this paper we consider a general scenario (see [18]) where the underlying model is a probability space (X, ρ) and we are given a (similarity) function K which is a reproducing kernel. The available training set is an i.i.d. sample x_1, . . . , x_n ∼ ρ. The geometry (and topology) in (X, ρ) is defined by the kernel K. While this framework is abstract and poses new challenges, by assuming the similarity
function to be a reproducing kernel we can make full use of the good computational properties of
kernel methods and the powerful theory of reproducing kernel Hilbert spaces (RKHS) [2]. Interestingly, the idea of using a reproducing kernel K to construct a metric on a set X is originally due to
Schoenberg (see for example [4]).
Broadly speaking, in this setting we consider the problem of finding a model of the smallest region X_ρ containing all the data. A rigorous formalization of this problem requires: 1) defining the region X_ρ, 2) specifying the sense in which we model X_ρ. This can be easily done if the probability distribution has a density p with respect to a known measure, in which case X_ρ = {x ∈ X : p(x) > 0}, but it is otherwise a challenging question for a general distribution. Intuitively, X_ρ can be thought of as the region where the distribution is concentrated, that is ρ(X_ρ) = 1. However, there are many different sets having this property. If X is R^d (in fact any topological space), a natural candidate to define the region of interest is the notion of support of a probability distribution, defined as the intersection of the closed subsets C of X such that ρ(C) = 1. In an arbitrary probability space the support of the measure is not well defined since no topology is given.
The reproducing kernel K provides a way to solve this problem and also suggests a possible approach to model X_ρ. The first idea is to use the fact that under mild assumptions the kernel defines a metric on X [18], so that the concept of closed set, hence that of support, is well defined. The second idea is to use the kernel to construct a function F_ρ such that the level set corresponding to one is exactly the support X_ρ; in this case we say that the RKHS associated to K separates the support X_ρ. By doing this we are in fact imposing an assumption on X_ρ: given a kernel K, we can only separate certain sets. More precisely, our contribution is two-fold.
- We prove that F_ρ is uniquely defined by the null space of the integral operator associated to K. Given that the integral operator (and its spectral properties) can be approximated by studying the kernel matrix on a sample, this result suggests a way to estimate the support empirically. However, a further complication arises from the fact that in general zero is not an isolated point of the spectrum, so that the estimation of a null space is an ill-posed problem (see for example [9]). Then, a regularization approach is needed in order to find a stable (hence generalizing) estimator. In this paper, we consider a spectral estimator based on a spectral regularization strategy, replacing the kernel matrix with its regularized version (Tikhonov regularization being one example).
- We introduce the notion of completely regular RKHS, which answers positively the question of whether there exist kernels that can separate the support of any distribution. Examples of completely regular kernels are presented and results suggesting how they can be constructed are given. The concept of completely regular RKHS plays a role similar to that of universal kernels in supervised learning, see for example [19].
Finally, given the above results, we show that the regularized spectral estimator enjoys a universal
consistency property: the correct support can be asymptotically recovered for any problem (that is
any probability distribution).
The plan of the paper is as follows. In Section 2 we introduce the notion of completely regular
kernels and their basic properties. In Section 3 we present the proposed regularized algorithms. In
Section 4 and 5 we provide a theoretical and empirical analysis, respectively. Proofs and further
development can be found in the supplementary material.
2 Completely regular reproducing kernel Hilbert spaces
In this section we introduce the notion of a completely regular reproducing kernel Hilbert space.
Such a space defines a geometry on a measurable space X which is compatible with the measurable
structure. Furthermore it shows how to define a function F such that the one level set is the support
of the probability distribution. The function is determined by the spectral projection associated with
the null eigenvalue of the integral operator defined by the reproducing kernel. All the proofs of this
section are reported in the supplementary material.
We assume X to be a measurable space with a probability measure ρ. We fix a complex¹ reproducing kernel Hilbert space H on X with a reproducing kernel K : X × X → C [2]. The scalar product and the norm are denoted by ⟨·, ·⟩, linear in the first argument, and ‖·‖, respectively. For all x ∈ X, K_x ∈ H denotes the function K(·, x). For each function f ∈ H, the reproducing property f(x) = ⟨f, K_x⟩ holds for all x ∈ X. When different reproducing kernel Hilbert spaces are considered, we denote by H_K the reproducing kernel Hilbert space with reproducing kernel K. Before giving the definition of completely regular RKHS, which is the key concept presented in this section, we need some preliminary definitions and results.
Definition 1. A subset C ⊆ X is separated by H if, for any x_0 ∉ C, there exists f ∈ H such that

f(x_0) ≠ 0   and   f(x) = 0   ∀x ∈ C.   (1)
For example, if X = R^d and H is the reproducing kernel Hilbert space with linear kernel K(x, t) = x · t, the sets separated by H are precisely the hyperplanes containing the origin. In Eq. (1) the function f depends on x_0 and C, but Proposition 1 below will show that there is a function, possibly not in H, whose one level set is precisely C (if K(x, x) = 1). Note that in [19] a different notion of separating property is given.
We need some further notation. For any set C, let P_C : H → H be the orthogonal projection onto the closure of the linear space generated by {K_x | x ∈ C}, so that P_C² = P_C, P_C* = P_C and

ker P_C = {K_x | x ∈ C}^⊥ = {f ∈ H | f(x) = 0, ∀x ∈ C}.

Moreover let F_C : X → C be defined by F_C(x) = ⟨P_C K_x, K_x⟩.
Proposition 1. For any subset C ⊆ X, the following facts are equivalent:

(i) the set C is separated by H;
(ii) for all x ∉ C, K_x ∉ Ran P_C;
(iii) C = {x ∈ X | F_C(x) = K(x, x)}.

If one of the above conditions is satisfied, then K(x, x) ≠ 0 for all x ∉ C.
A natural and minimal requirement on H is that it separates any pair of distinct points; this implies that K_x ≠ K_t if x ≠ t and K(x, x) ≠ 0. The first condition ensures that the metric given by

d_K(x, t) = ‖K_x − K_t‖,   x, t ∈ X,   (2)

is well defined. Then (X, d_K) is a metric space and the sets separated by H are always d_K-closed, see Prop. 2 below. This last property is not enough to ensure that we can evaluate ρ on the sets separated by the RKHS H. In fact the σ-algebra generated by the metric d_K might not be contained in the σ-algebra on X. The next result shows that assuming the kernel to be measurable is enough to
solve this problem.
Proposition 2. Assume that K_x ≠ K_t if x ≠ t; then the sets separated by H are closed with respect to d_K. Moreover, if H is separable and the kernel is measurable, then the sets separated by H are measurable.
Given the above premises, the following is the key definition that characterizes the reproducing
kernel Hilbert spaces which are able to separate the largest family of subsets of X.
Definition 2 (Completely Regular RKHS). A reproducing kernel Hilbert space H with reproducing kernel K such that K_x ≠ K_t if x ≠ t is called completely regular if H separates all the subsets C ⊆ X which are closed with respect to the metric (2).
The term completely regular is borrowed from topology, where a topological space is called completely regular if, for any closed subset C and any point x_0 ∉ C, there exists a continuous function f such that f(x_0) ≠ 0 and f(x) = 0 for all x ∈ C. In the supplementary material, several examples of
completely regular reproducing kernel Hilbert spaces are given, as well as a discussion on how such
spaces can be constructed. A particular case is when X is already a metric space with a distance
¹ Considering complex-valued RKHS allows one to use the theory of the Fourier transform; for practical problems we can simply consider real-valued kernels.
function d_X. If K is continuous with respect to d_X, the assumption of complete regularity forces the metrics d_K and d_X to have the same closed subsets. Then, the supports defined by d_K and d_X are the same. Furthermore, since the closed sets of X are independent of H, the complete regularity of H can be proved by showing that a suitable family of bump² functions is contained in H.
Corollary 1. Let X be a separable metric space with respect to a metric d_X. Assume that the kernel K is a continuous function with respect to d_X and that the space H separates every subset C which is closed with respect to d_X. Then

(i) The space H is separable and K is measurable with respect to the Borel σ-algebra generated by d_X.
(ii) The metric d_K defined by (2) is equivalent to d_X, that is, a set is closed with respect to d_K if and only if it is closed with respect to d_X.
(iii) The space H is completely regular.
As a consequence of the above result, many classical reproducing kernel Hilbert spaces are completely regular. For example, if X = R^d and H is the Sobolev space of order s with s > d/2, then H is completely regular. This is due to the fact that the space of smooth compactly supported functions is contained in H. In fact, a standard result of analysis ensures that, for any closed set C and any x_0 ∉ C, there exists a smooth bump function f such that f(x_0) = 1 and its support is contained in the complement of C. Interestingly enough, if H is the reproducing kernel Hilbert space with the Gaussian kernel, it is known that the elements of H are analytic functions, see Cor. 4.44 in [19]. Clearly H cannot be completely regular. Indeed, if C is a closed subset of R^d with non-empty interior and f ∈ H is such that f(x) = 0 for all x ∈ C, a standard result of complex analysis implies that f(x) = 0 for every x ∈ R^d. Finally, the next result shows that the reproducing kernel can be normalized to one on the diagonal under the mild assumption that K(x, x) ≠ 0 for all x ∈ X.
Lemma 1. Assume that K(x, x) > 0 for all x ∈ X. Then the reproducing kernel Hilbert space with the normalized kernel

K′(x, t) = K(x, t) / √( K(x, x) K(t, t) )

separates the same sets as H.
Finally we briefly mention some examples and refer to the supplementary material for further developments. In particular, we prove that both the Laplacian kernel K(x, y) = exp( −‖x − y‖_2 / (2σ) ) and the ℓ1-exponential kernel K(x, y) = exp( −‖x − y‖_1 / (2σ) ), defined on R^d, are completely regular for any σ > 0 and d ∈ N.
3 Spectral Algorithms for Learning the Support
In this section, we first discuss our framework and our main assumptions. Then we present the
proposed regularized spectral algorithms.
Motivated by the results in the previous section, we describe our framework, which is given by a triple (X, ρ, K). We consider a probability space (X, ρ) and a training set x = (x_1, . . . , x_n) sampled i.i.d. with respect to ρ. Moreover we consider a reproducing kernel K satisfying the following assumption.

Assumption 1. The reproducing kernel K is measurable and K(x, x) = 1 for all x ∈ X. Moreover K defines a completely regular and separable RKHS H.
We endow X with the metric d_K defined in (2), so that X becomes a separable metric space. The assumption of complete regularity ensures that any closed subset is separated by H and, hence, is measurable by Prop. 2. Then we can define the support X_ρ of the measure ρ as the intersection of all the closed sets C ⊆ X such that ρ(C) = 1. Clearly X_ρ is closed and ρ(X_ρ) = 1 (note that this last property depends on the separability of X, hence of H).

Summarizing the key result of the previous section, under the above assumptions, X_ρ is the one level set of the function F_ρ : X → [0, 1],

F_ρ(x) = ⟨P_ρ K_x, K_x⟩,

where P_ρ is a short notation for P_{X_ρ}. Since F_ρ depends on the unknown measure ρ, in practice it cannot be explicitly calculated. To design an effective empirical estimator we develop a novel characterization of the support of an arbitrary distribution that we describe in the next section.

² Given an open subset U and a compact subset C ⊆ U, a bump function is a continuous compactly supported function which is one on C and whose support is contained in U.
3.1 A New Characterization of the Support
The key observation towards defining a learning algorithm to estimate X_ρ is that the projection P_ρ can be expressed in terms of the integral operator defined by the kernel K.
To see this, for all x ∈ X, let K_x ⊗ K_x denote the rank-one positive operator on H given by

(K_x ⊗ K_x)(f) = ⟨f, K_x⟩ K_x = f(x) K_x,   f ∈ H.

Moreover, let T : H → H be the linear operator defined as

T = ∫_X K_x ⊗ K_x dρ(x),

where the integral converges in the Hilbert space of Hilbert-Schmidt operators on H (see for example [7] for the proof). Using the reproducing property in H [2], it is straightforward to see that T is simply the integral operator with kernel K with domain and range in H.
Then, one can easily see that the null space of T is precisely (I − P_ρ)H, so that

P_ρ = T† T,   (3)

where T† is the pseudo-inverse of T (see for example [9]). Hence

F_ρ(x) = ⟨T† T K_x, K_x⟩.

Observe that in general K_x does not belong to the domain of T† and, if θ denotes the Heaviside function with θ(0) = 0, then spectral theory gives that P_ρ = T† T = θ(T). The above observation is crucial as it gives a new characterization of the support of ρ in terms of the null space of T, and the latter can be estimated from data.
3.2 Spectral Regularization Algorithms
Finally, in this section, we describe how to construct an estimator F_n of F_ρ. As we mentioned above, Eq. (3) suggests a possible way to learn the projection from finite data. In fact, we can consider the empirical version of the integral operator associated to K, which is simply defined by

T_n = (1/n) Σ_{i=1}^{n} K_{x_i} ⊗ K_{x_i}.
The latter operator is an unbiased estimator of T. Indeed, since K_x ⊗ K_x is a bounded random variable with values in the separable Hilbert space of Hilbert-Schmidt operators, one can use concentration inequalities for random variables in Hilbert spaces to prove that

lim_{n→+∞} ( √n / log n ) ‖T − T_n‖_HS = 0   almost surely,   (4)

where ‖·‖_HS is the Hilbert-Schmidt norm (see for example [14] for a short proof). However, in general T_n† T_n does not converge to T† T, since 0 is an accumulation point of the spectrum of T or, equivalently, since T† is not a bounded operator. Hence, a regularization approach is needed.
In this paper we study a spectral filtering approach which replaces T_n† with an approximation g_λ(T_n), obtained by filtering out the components corresponding to the small eigenvalues of T_n. The function g_λ is defined by spectral calculus. More precisely, if T_n = Σ_j σ_j v_j ⊗ v_j is a spectral decomposition of T_n, then g_λ(T_n) = Σ_j g_λ(σ_j) v_j ⊗ v_j. Spectral regularization defined by linear filters is classical in the theory of inverse problems [9]. Intuitively, g_λ(T_n) is an approximation of the generalized inverse T_n†, and it is such that the approximation gets better, but the condition number of g_λ(T_n) gets worse, as λ decreases.
Assumption 2. For σ ∈ [0, 1], let r_λ(σ) := σ g_λ(σ); then

- r_λ(σ) ∈ [0, 1], ∀σ > 0,
- lim_{λ→0} r_λ(σ) = 1, ∀σ > 0,
- |r_λ(σ) − r_λ(σ′)| ≤ L_λ |σ − σ′|, ∀σ > 0, where L_λ is a positive constant depending on λ.
Examples of algorithms that fall into the above class include iterative methods, akin to boosting, g_λ(σ) = Σ_{k=0}^{m_λ} (1 − σ)^k; spectral cut-off, g_λ(σ) = σ⁻¹ 1_{σ>λ}(σ) + λ⁻¹ 1_{σ≤λ}(σ); and Tikhonov regularization, g_λ(σ) = 1/(σ + λ). We refer the reader to [9] for more details and examples, and, given the space constraints, will focus mostly on Tikhonov regularization in the following.
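The three filters can be written down directly. The sketch below (our naming) acts on a vector s of eigenvalues and returns g_λ(s); running the iterative method for m_λ ≈ 1/λ terms is our choice of a common convention.

    import numpy as np

    def g_tikhonov(s, lam):
        return 1.0 / (s + lam)                 # g_lam(s) = 1 / (s + lam)

    def g_cutoff(s, lam):
        return 1.0 / np.maximum(s, lam)        # 1/s above lam, 1/lam below

    def g_iterative(s, lam):
        m = int(np.ceil(1.0 / lam))            # m_lam ~ 1/lam iterations
        g = np.zeros_like(s)
        p = np.ones_like(s)
        for _ in range(m + 1):                 # sum_{k=0}^{m} (1 - s)^k
            g += p
            p *= 1.0 - s
        return g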
For a chosen filter, the regularized empirical estimator of F_ρ can be defined by

F_n(x) = ⟨g_λ(T_n) T_n K_x, K_x⟩.   (5)
One can see that the computation of F_n reduces to solving a simple finite-dimensional problem involving the empirical kernel matrix defined by the training data. Towards this end, it is useful to introduce the sampling operator S_n : H → C^n defined by S_n f = (f(x_1), . . . , f(x_n)), f ∈ H, which can be interpreted as the restriction operator that evaluates functions in H on the training set points. The adjoint S_n* : C^n → H of S_n is given by S_n* α = Σ_{i=1}^{n} α_i K_{x_i}, α = (α_1, . . . , α_n) ∈ C^n, and can be interpreted as the out-of-sample extension operator. A simple computation shows that T_n = (1/n) S_n* S_n and S_n S_n* = K_n is the n by n kernel matrix, whose (i, j)-entry is K(x_i, x_j). Then it is easy to see that g_λ(T_n) T_n = g_λ(S_n* S_n / n) S_n* S_n / n = (1/n) S_n* g_λ(K_n / n) S_n, so that

F_n(x) = (1/n) k_xᵀ g_λ(K_n / n) k_x,   (6)

where k_x is the n-dimensional column vector k_x = S_n K_x = (K(x_1, x), . . . , K(x_n, x))ᵀ. Note that
Equation (6) plays the role of a representer theorem for the spectral estimator, in the sense that it
reduces the problem of finding an estimator in an infinite dimensional space to a finite dimensional
problem.
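Putting the pieces together, the Tikhonov instance of (6) takes only a few lines. The sketch below (our naming) uses the Laplacian kernel of Section 2, so that K(x, x) = 1 as required by Assumption 1; a point is then assigned to the estimated support when F_n(x) ≥ 1 − τ (see Theorem 2 and Section 5).

    import numpy as np

    def laplacian_kernel(A, B, sigma):
        """K(x, y) = exp(-||x - y||_2 / (2 sigma)); K(x, x) = 1 on the diagonal."""
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
        return np.exp(-d / (2.0 * sigma))

    def spectral_estimator(X_train, X_test, sigma, lam):
        """F_n(x) = (1/n) k_x^T (K_n/n + lam I)^{-1} k_x at each test point."""
        n = X_train.shape[0]
        Kn = laplacian_kernel(X_train, X_train, sigma)
        Kx = laplacian_kernel(X_test, X_train, sigma)        # rows are k_x^T
        G = np.linalg.solve(Kn / n + lam * np.eye(n), Kx.T)  # Tikhonov filter applied to k_x
        return np.einsum("ij,ji->i", Kx, G) / n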
4 Theoretical Analysis: Universal Consistency
In this section we study the consistency property of spectral estimators. All the proofs of this section
are reported in the supplementary material. We prove the results only for the filter corresponding to
the classical Tikhonov regularization though the same results hold for the class of spectral filters described by Assumption 2. To study the consistency of the methods we need to choose an appropriate
performance measure to compare F_n and F_ρ. Note that there is no natural notion of risk, since we have to compute the function on and off the support. Also note that the standard metrics used for support estimation (see for example [22, 5]) cannot be used in our analysis since they rely on the existence of a reference measure μ (usually the Lebesgue measure) and the assumption that ρ is absolutely continuous with respect to μ.
The following preliminary result shows that we can control the convergence of the Tikhonov estimator F_n, defined by g_{λ_n}(T_n) = (T_n + λ_n I)⁻¹, to F_ρ, uniformly on any compact set of X, provided a suitable sequence λ_n.

Theorem 1. Let F_n be the estimator defined by Tikhonov regularization and choose a sequence λ_n so that

lim_{n→∞} λ_n = 0   and   limsup_{n→∞} (log n) / (λ_n √n) < +∞,   (7)

then

lim_{n→+∞} sup_{x∈C} |F_n(x) − F_ρ(x)| = 0,   almost surely,   (8)

for every compact subset C of X.
We add three comments. First, we note that, as we mentioned before, Tikhonov regularization can be replaced by a large class of filters. Second, we observe that a natural choice would be the regularization defined by kernel PCA [11], which corresponds to truncating the generalized inverse of the kernel matrix at some cutoff parameter λ. However, one can show that, in general, in this case it is not possible to choose λ so that the sample error goes to zero. In fact, for KPCA the sample error depends on the gap between the M-th and the (M+1)-th eigenvalue of T [1], where the M-th and (M+1)-th are the eigenvalues around the cutoff parameter. Such a gap can go to zero with an
arbitrary rate so that there exists no choice of the cut-off parameter ensuring convergence to zero
of the sample error. Third, we note that the uniform convergence of F_n to F_ρ on compact subsets does not imply the convergence of the level sets of F_n to the corresponding level sets of F_ρ, for
example with respect to the standard Hausdorff distance among closed subsets. In practice, to have an effective decision rule, an offset parameter τ_n can be introduced and the level set is replaced by X_n = {x ∈ X | F_n(x) ≥ 1 − τ_n}; recall that F_n takes values in [0, 1]. The following result will show that for a suitable choice of τ_n the Hausdorff distance between X_n ∩ C and X_ρ ∩ C goes to zero for all compact sets C. We recall that the Hausdorff distance between two subsets A, B ⊆ X is

d_H(A, B) = max{ sup_{a∈A} d_K(a, B), sup_{b∈B} d_K(b, A) }.
Theorem 2. If the sequence (τ_n)_{n∈N} converges to zero in such a way that

limsup_{n→∞} ( sup_{x∈C} |F_n(x) − F_ρ(x)| ) / τ_n ≤ 1,   almost surely,   (9)

then,

lim_{n→+∞} d_H(X_n ∩ C, X_ρ ∩ C) = 0   almost surely,

for any compact subset C.
We add two comments. First, it is possible to show that, if the (normalized) kernel K is such that lim_{x′→∞} K_x(x′) = 0 for any x ∈ X (as happens for the Laplacian kernel), then Theorems 1 and 2 also hold by choosing C = X. Second, note that the choice of τ_n depends on the rate of convergence of F_n to F_ρ, which will itself depend on some a-priori assumption on ρ. Developing learning rates and finite sample bounds is a key question that we will tackle in future work.
5 Empirical Analysis
In this section we describe some preliminary experiments aimed at testing the properties and the performance of the proposed methods both on simulated and real data. Again, for space constraints, we will only discuss spectral algorithms induced by Tikhonov regularization. Note that while computations can be made efficient in several ways, we consider a simple algorithmic protocol and leave a more refined computational study for future work. Following the discussion in the last section, Tikhonov regularization defines an estimator F_n(x) = k_xᵀ (K_n + nλI)⁻¹ k_x, and a point is labeled as belonging to the support if F_n(x) ≥ 1 − τ. The computational cost for the algorithm is, in the
worst case, of order n³, like standard regularized least squares, for training, and of order N n² if we have to predict the value of F_n at N test points. In practice, one has to choose a good value for the regularization parameter λ, and this requires computing multiple solutions, a so-called regularization
path. As noted in [13], if we form the inverse using the eigendecomposition of the kernel matrix, the price of computing the full regularization path is essentially the same as that of computing a single solution (note that the cost of the eigendecomposition of K_n is also of order n³, though the constant is worse). This is the strategy that we consider in the following.
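A sketch of this strategy (our naming): a single eigendecomposition of K_n/n yields F_n for every value of λ, since F_n(x) = (1/n) Σ_i ⟨v_i, k_x⟩² / (σ_i + λ) for the Tikhonov filter.

    import numpy as np

    def tikhonov_path(Kn, Kx, lams):
        """Kn: n x n kernel matrix; Kx: rows are the vectors k_x for the test
        points; returns F_n for every lam from one eigendecomposition."""
        n = Kn.shape[0]
        s, V = np.linalg.eigh(Kn / n)       # eigenvalues s, eigenvectors V
        P = Kx @ V                          # projections of each k_x on the v_i
        return np.stack([(P ** 2 / (s + lam)).sum(axis=1) / n for lam in lams])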
In our experiments we considered two data-sets: the MNIST data-set and the CBCL face database. For the digits we considered
a reduced set consisting of a training set of 5000 images and a test set of 1000 images. In the first
experiment we trained on 500 images for the digit 3 and tested on 200 images of digits 3 and 8. Each
experiment consists of training on one class and testing on two different classes and was repeated
for 20 trials over different training set choices. The performance is evaluated computing the ROC curve (and the corresponding AUC value) for varying τ, τ′, τ″. For all our experiments we considered the
Laplacian kernel. Note that, in this case, the algorithm requires choosing 3 parameters: the regularization parameter λ, the kernel width σ and the threshold τ. In supervised learning cross-validation is typically used for parameter tuning, but it cannot be used in our setting since support estimation is an unsupervised problem. We therefore considered the following heuristics. The kernel width is chosen as the median of the distribution of distances to the K-th nearest neighbor of each training set point, for K = 10. Having fixed the kernel width, we choose the regularization parameter in correspondence of the maximum curvature in the eigenvalue decay (see Figure 1), the rationale being that after this value the eigenvalues are relatively small.

Figure 1: Decay of the eigenvalues of the kernel matrix ordered in decreasing magnitude and the corresponding regularization parameter (left), and a detail of the first 50 eigenvalues (right).
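The width heuristic amounts to a few lines (our naming; K = 10 as stated above):

    import numpy as np

    def knn_width(X, k=10):
        """Median of the distances to the k-th nearest neighbour."""
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        D.sort(axis=1)                      # column 0 is the distance to itself
        return np.median(D[:, k])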
For comparison we considered a Parzen window density estimator and a one-class SVM (1CSVM) as implemented by [6]. For the Parzen window estimator we used the same kernel as in the spectral algorithm, that is the Laplacian kernel, with the
same width used in our estimator. Given a kernel width, an estimate of the probability distribution
is computed and can be used to estimate the support by fixing a threshold τ′. For the one-class SVM we considered the Gaussian kernel, so that we have to fix the kernel width and a regularization parameter ν. We fix the kernel width to be the same used by our estimator and fixed ν = 0.9. For the sake of comparison, also for the one-class SVM we considered a varying offset τ″. The ROC curves
on the different tasks are reported (for one of the trials) in Figure 2, left. The mean and standard deviation of the AUC for the 3 methods are reported in Table 1. Similar experiments were repeated considering other pairs of digits, see Table 1. Also in the case of the CBCL data set we considered a reduced data-set consisting of 472 images for training and another 472 for testing. On the different tests performed on the MNIST data the spectral algorithm always achieves results which are better, and often substantially better, than those of the other methods. On the CBCL dataset SVM provides the
best result, but the spectral algorithm still provides a competitive performance.
6 Conclusions
In this paper we presented a new approach to estimating the support of an arbitrary probability distribution. Unlike previous work, we drop the assumption that the distribution has a density with respect to a (known) reference measure and consider a general probability space. To overcome this problem we introduce a new notion of RKHS, which we call completely regular, that captures the relevant geometric properties of the probability distribution. The support of the distribution can then be characterized as the null space of the integral operator defined by the kernel and can be estimated using a spectral filtering approach. The proposed estimators are proven to be universally consistent and show good empirical performance on some benchmark data sets. Future work will be devoted to deriving finite-sample bounds, to developing strategies for scaling the algorithms up to massive data sets, and to a more extensive experimental analysis.
[Figure 2 plots: ROC curves (true-positive rate versus false-positive rate) for the Spectral, Parzen, and OneClassSVM estimators on three tasks: MNIST 9 vs. 4, MNIST 1 vs. 7, and CBCL.]
Figure 2: ROC curves for the different estimators on three tasks: digit 9 vs. 4 (Left), digit 1 vs. 7 (Center), CBCL (Right).
           3 vs 8            8 vs 3            1 vs 7                9 vs 4            CBCL
Spectral   0.8371 ± 0.0056   0.7830 ± 0.0026   0.9921 ± 4.7283e-04   0.8651 ± 0.0024   0.8682 ± 0.0023
Parzen     0.7841 ± 0.0069   0.7656 ± 0.0029   0.9811 ± 3.4158e-04   0.7244 ± 0.0030   0.8778 ± 0.0023
1CSVM      0.7896 ± 0.0061   0.7642 ± 0.0032   0.9889 ± 1.8479e-04   0.7535 ± 0.0041   0.8824 ± 0.0020
Table 1: Average and standard deviation of the AUC for the different estimators on the considered
tasks.
References
[1] P. M. Anselone. Collectively Compact Operator Approximation Theory and Applications to Integral Equations. Prentice-Hall, Englewood Cliffs, NJ, 1971.
[2] N. Aronszajn. Theory of reproducing kernels. Trans. Amer. Math. Soc., 68:337–404, 1950.
[3] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. J. Mach. Learn. Res., 7:2399–2434, 2006.
[4] C. Berg, J. Christensen, and P. Ressel. Harmonic Analysis on Semigroups, volume 100 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1984.
[5] G. Biau, B. Cadre, D. Mason, and B. Pelletier. Asymptotic normality in density support estimation. Electron. J. Probab., 14(91):2617–2635, 2009.
[6] S. Canu, Y. Grandvalet, V. Guigue, and A. Rakotomamonjy. SVM and kernel methods Matlab toolbox. Perception Systèmes et Information, INSA de Rouen, Rouen, France, 2005.
[7] C. Carmeli, E. De Vito, and A. Toigo. Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem. Anal. Appl. (Singap.), 4(4):377–408, 2006.
[8] V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection: A survey. ACM Comput. Surv., 41(3):1–58, 2009.
[9] H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems, volume 375 of Mathematics and its Applications. Kluwer Academic Publishers Group, Dordrecht, 1996.
[10] M. Hein, O. Bousquet, and B. Schölkopf. Maximal margin classification for metric spaces. Journal of Computer and System Sciences, 71(3):333–359, 2005.
[11] H. Hoffmann. Kernel PCA for novelty detection. Pattern Recogn., 40(3):863–874, 2007.
[12] P. Niyogi, S. Smale, and S. Weinberger. A topological view of unsupervised learning from noisy data. Preprint, 2008.
[13] R. Rifkin and R. Lippert. Notes on regularized least squares. Technical report, Massachusetts Institute of Technology, 2007.
[14] L. Rosasco, M. Belkin, and E. De Vito. On learning with integral operators. J. Mach. Learn. Res., 11:905–934, 2010.
[15] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[16] B. Schölkopf, J. Giesen, and S. Spalinger. Kernel methods for implicit surface modeling. In Advances in Neural Information Processing Systems 17, pages 1193–1200, Cambridge, MA, 2005. MIT Press.
[17] B. Schölkopf, J. Platt, J. Shawe-Taylor, A. Smola, and R. Williamson. Estimating the support of a high-dimensional distribution. Neural Comput., 13(7):1443–1471, 2001.
[18] S. Smale and D. X. Zhou. Geometry of probability spaces. Constr. Approx., 30(3):311–323, 2009.
[19] I. Steinwart and A. Christmann. Support Vector Machines. Information Science and Statistics. Springer, New York, 2008.
[20] I. Steinwart, D. Hush, and C. Scovel. A classification framework for anomaly detection. J. Mach. Learn. Res., 6:211–232, 2005.
[21] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[22] A. B. Tsybakov. On nonparametric estimation of density level sets. Ann. Statist., 25(3):948–969, 1997.
[23] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4), 2007.
| 4062 |@word mild:2 trial:2 version:2 briefly:1 norm:2 open:1 closure:1 calculus:1 decomposition:2 mention:2 reduction:3 initial:1 rkhs:10 interestingly:2 recovered:1 scovel:1 dx:10 fn:17 numerical:1 analytic:1 drop:1 v:6 parameterization:1 short:2 provides:3 math:2 complication:1 characterization:3 boosting:1 hyperplanes:1 constructed:2 prove:5 consists:1 introduce:5 x0:7 indeed:2 behavior:1 inspired:2 decreasing:1 window:2 considering:3 becomes:1 provided:1 estimating:3 underlying:2 notation:2 moreover:5 bounded:2 null:6 interpreted:2 substantially:1 finding:2 pseudo:1 every:3 tackle:2 exactly:1 dima:1 control:1 platt:1 before:2 positive:2 insa:1 consequence:1 anselone:1 mach:3 cliff:1 path:2 might:1 therein:2 specifying:1 challenging:1 suggests:3 appl:1 range:1 graduate:1 practical:1 testing:2 practice:3 digit:6 cadre:1 ker:1 jan:3 universal:5 empirical:7 thought:1 projection:4 regular:21 get:2 onto:1 interior:1 cannot:3 operator:18 unlabeled:1 prentice:1 context:1 risk:1 accumulation:1 measurable:9 equivalent:2 restriction:1 center:1 straightforward:1 go:3 truncating:1 survey:1 estimator:23 rule:1 csvm:2 embedding:1 notion:8 schoenberg:1 play:2 massive:1 anomaly:3 designing:1 origin:1 surv:1 element:1 approximated:1 satisfying:1 cut:2 yk1:1 labeled:2 database:1 role:2 preprint:1 capture:2 worst:1 region:5 ensures:3 decrease:1 ran:1 alessandro:1 intuition:2 mentioned:2 vito:3 trained:1 depend:1 solving:1 algebra:3 completely:21 compactly:2 easily:2 iit:1 represented:1 recogn:1 univ:1 separated:8 distinct:1 effective:3 describe:5 choosing:1 refined:1 dordrecht:1 whose:1 heuristic:1 posed:1 solve:3 supplementary:5 say:1 valued:3 otherwise:1 niyogi:2 statistic:2 transform:1 itself:1 noisy:1 sequence:3 eigenvalue:16 propose:1 product:1 maximal:1 relevant:2 rifkin:1 roweis:1 adjoint:1 olkopf:2 convergence:5 regularity:3 requirement:1 empty:1 converges:2 leave:1 depending:1 develop:2 derive:1 fixing:1 pose:1 nearest:1 borrowed:1 eq:2 soc:1 implemented:1 christmann:1 implies:2 correct:1 filter:5 milano:2 material:5 premise:1 fix:3 preliminary:3 proposition:3 extension:1 hold:3 around:2 considered:12 hall:1 cbcl:7 algorithmic:1 predict:1 bump:2 electron:1 achieves:1 smallest:1 giesen:1 estimation:7 largest:1 hpc:1 mit:3 clearly:2 always:2 gaussian:2 zhou:1 varying:2 corollary:1 endow:1 focus:1 rank:1 hk:1 rigorous:2 sense:2 summarizing:1 typically:1 france:1 among:2 ill:1 classification:2 denoted:1 priori:1 development:2 plan:1 art:1 construct:3 ernesto:1 having:1 sampling:1 unsupervised:2 representer:1 future:3 others:1 report:1 few:1 belkin:2 replaced:2 geometry:3 consisting:2 lebesgue:2 semigroups:1 n1:2 detection:4 interest:2 limx:1 englewood:1 pc:6 hg:1 devoted:1 kt:5 integral:10 orthogonal:1 euclidean:1 taylor:1 re:3 isolated:1 hein:1 theoretical:3 minimal:1 column:1 modeling:2 engl:1 kpca:1 cost:2 rakotomamonjy:1 deviation:2 subset:17 entry:1 uniform:1 too:1 reported:4 kn:5 answer:1 supx:1 kxi:1 density:8 off:4 parzen:6 infn:3 again:1 von:1 satisfied:1 containing:2 rosasco:2 possibly:2 choose:6 worse:2 suggesting:1 de:4 rouen:2 chandola:1 inc:1 explicitly:1 depends:5 performed:1 view:2 closed:17 doing:1 characterizes:1 sup:4 competitive:1 hf:2 hanke:1 contribution:1 square:2 ni:1 biau:1 schlkopf:1 definition:5 evaluates:1 associated:4 di:4 riemannian:1 proof:5 sampled:1 rational:1 proved:1 dataset:1 massachusetts:1 recall:2 lim:6 dimensionality:3 hilbert:21 originally:1 supervised:3 amer:1 done:1 though:3 evaluated:1 furthermore:2 implicit:1 smola:1 langford:1 steinwart:2 replacing:1 
aronszajn:1 nonlinear:2 banerjee:1 defines:4 semisupervised:1 usa:1 name:1 concept:4 normalized:3 unbiased:1 ization:1 hausdorff:3 regularization:25 hence:6 width:7 uniquely:1 auc:3 noted:1 generalized:2 complete:3 tn:19 silva:1 image:5 harmonic:1 novel:1 empirically:1 volume:3 belong:1 kluwer:1 refer:2 cambridge:1 imposing:1 rd:6 tuning:1 consistency:5 mathematics:2 hp:1 canu:1 approx:1 bruno:1 shawe:1 stable:1 similarity:2 surface:2 yk2:1 add:2 curvature:1 italy:3 lrosasco:1 scenario:1 tikhonov:9 certain:1 verlag:1 inequality:1 integrable:1 captured:1 surely:4 novelty:2 converge:1 ii:2 full:2 multiple:1 reduces:2 smooth:2 technical:1 characterized:1 academic:1 cross:1 dept:1 laplacian:4 ensuring:1 involving:1 basic:2 essentially:2 metric:16 kernel:74 confined:1 median:1 crucial:1 publisher:1 sch:2 unlike:1 comment:2 induced:1 call:2 iii:2 enough:3 easy:1 variety:2 xj:1 topology:3 idea:3 cn:3 whether:1 motivated:2 pca:2 akin:1 speaking:1 york:2 matlab:1 generally:1 useful:1 aimed:1 nonparametric:1 tsybakov:1 locally:1 tenenbaum:1 concentrated:2 statist:1 reduced:2 exist:1 tutorial:1 estimated:2 broadly:2 group:1 key:7 threshold:2 cutoff:2 asymptotically:1 luxburg:1 inverse:6 powerful:1 family:2 almost:4 reader:1 electronic:1 sobolev:1 decision:1 pc2:1 genova:2 bound:2 correspondence:1 fold:1 topological:4 replaces:1 precisely:5 constraint:2 n3:2 sake:1 bousquet:1 fourier:1 argument:1 kumar:1 separable:6 px:1 relatively:1 pelletier:1 developing:1 carmeli:1 belonging:1 separability:1 constr:1 making:1 happens:1 christensen:1 intuitively:2 neubauer:1 equation:2 discus:3 needed:2 ge:1 toigo:3 cor:1 end:1 studying:1 available:1 observe:2 spectral:31 appropriate:1 vs7:1 schmidt:3 weinberger:1 eigen:1 sezione:2 existence:1 denotes:2 clustering:2 ensure:1 include:1 giving:1 restrictive:1 classical:3 lippert:1 question:3 already:1 hoffmann:1 strategy:3 concentration:1 diagonal:1 distance:5 separate:8 separating:1 manifold:6 ressel:1 assuming:2 modeled:1 index:2 equivalently:1 mostly:1 favorably:1 smale:2 design:1 anal:1 motivates:1 unknown:1 observation:2 benchmark:1 finite:5 situation:1 defining:2 reproducing:29 arbitrary:6 introduced:1 complement:1 pair:2 toolbox:1 extensive:1 kkx:1 hush:1 trans:1 able:2 usually:2 below:2 perception:1 pattern:1 challenge:1 max:1 analogue:1 suitable:3 natural:4 force:1 regularized:8 rely:1 normality:1 technology:1 lorenzo:1 imply:1 sn:16 text:1 probab:1 geometric:4 asymptotic:1 embedded:1 filtering:3 proven:1 ingredient:1 triple:1 validation:1 eigendecomposition:1 consistent:1 mercer:1 grandvalet:1 compatible:1 supported:2 last:3 enjoys:1 allow:1 institute:1 fall:1 neighbor:1 face:1 saul:1 distributed:1 curve:3 calculated:1 xn:7 overcome:1 made:1 universally:1 compact:7 global:1 assumed:2 xi:3 spectrum:2 continuous:5 iterative:1 table:3 learn:4 williamson:1 complex:3 domain:2 vj:4 protocol:1 main:2 n2:1 repeated:2 x1:4 positively:1 borel:1 roc:3 formalization:1 sub:1 exponential:1 comput:2 candidate:1 lie:1 third:1 theorem:5 showing:1 offset:1 dk:11 dsa:1 svm:7 decay:5 mason:1 exists:4 mnist:4 magnitude:1 kx:38 margin:1 gap:2 intersection:2 generalizing:1 fc:3 simply:3 expressed:1 contained:5 ordered:1 scalar:1 sindhwani:1 collectively:1 springer:2 corresponds:1 khs:2 dh:2 acm:1 prop:2 ma:1 ann:1 towards:3 price:1 determined:1 infinite:1 uniformly:2 lemma:1 vs4:1 called:4 experimental:1 formally:1 berg:1 support:33 latter:2 arises:1 absolutely:1 evaluate:1 heaviside:1 tested:1 |
3,383 | 4,063 | Global Analytic Solution
for Variational Bayesian Matrix Factorization
Shinichi Nakajima
Nikon Corporation
Tokyo, 140-8601, Japan
[email protected]
Masashi Sugiyama
Tokyo Institute of Technology
Tokyo 152-8552, Japan
[email protected]
Ryota Tomioka
The University of Tokyo
Tokyo 113-8685, Japan
[email protected]
Abstract
Bayesian methods of matrix factorization (MF) have been actively explored recently as promising alternatives to classical singular value decomposition. In this
paper, we show that, despite the fact that the optimization problem is non-convex,
the global optimal solution of variational Bayesian (VB) MF can be computed
analytically by solving a quartic equation. This is highly advantageous over a
popular VBMF algorithm based on iterated conditional modes since it can only
find a local optimal solution after iterations. We further show that the global optimal solution of empirical VBMF (hyperparameters are also learned from data) can
also be analytically computed. We illustrate the usefulness of our results through
experiments.
1 Introduction
The problem of finding a low-rank approximation of a target matrix through matrix factorization (MF) has recently attracted considerable attention since it can be used for various purposes such as reduced rank regression [19], canonical correlation analysis [8], partial least-squares [27, 21], multi-class classification [1], and multi-task learning [7, 29].
Singular value decomposition (SVD) is a classical method for MF, which gives the optimal low-rank approximation to the target matrix in terms of the squared error. Regularized variants of SVD have been studied for the Frobenius-norm penalty (i.e., singular values are regularized by the $\ell_2$-penalty) [17] or the trace-norm penalty (i.e., singular values are regularized by the $\ell_1$-penalty) [23].
Since the Frobenius-norm penalty does not automatically produce a low-rank solution, it should be
combined with an explicit low-rank constraint, which is non-convex. In contrast, the trace-norm
penalty tends to produce sparse solutions, so a low-rank solution can be obtained without explicit
rank constraints. This implies that the optimization problem of trace-norm MF is still convex, and
thus the global optimal solution can be obtained. Recently, optimization techniques for trace-norm
MF have been extensively studied [20, 6, 12, 25].
Bayesian approaches to MF have also been actively explored. A maximum a posteriori (MAP)
estimation, which computes the mode of the posterior distributions, was shown [23] to correspond to
the $\ell_1$-MF when Gaussian priors are imposed on factorized matrices [22]. The variational Bayesian
(VB) method [3, 5], which approximates the posterior distributions by factorized distributions, has
also been applied to MF [13, 18]. The VB-based MF method (VBMF) was shown to perform well
in experiments, and its theoretical properties have been investigated [15].
[Figure 1 diagram: the $L \times M$ matrix $U$ is factorized as $U = B A^\top$, with $B$ of size $L \times H$ and $A$ of size $M \times H$.]
Figure 1: Matrix factorization model. $H \le L \le M$. $A = (a_1, \ldots, a_H)$ and $B = (b_1, \ldots, b_H)$.
However, the optimization problem of VBMF is non-convex. In practice, the VBMF solution is
computed by the iterated conditional modes (ICM) [4, 5], where the mean and the covariance of the
posterior distributions are iteratively updated until convergence [13, 18]. One may obtain a local
optimal solution by the ICM algorithm, but many restarts would be necessary to find a good local
optimum.
In this paper, we first show that, although the optimization problem is non-convex, the global optimal solution of VBMF can be computed analytically by solving a quartic equation. This is highly
advantageous over the standard ICM algorithm since the global optimum can be found without any
iterations and restarts. We next consider an empirical VB (EVB) scenario where the hyperparameters (prior variances) are also learned from data. Again, the optimization problem of EVBMF is
non-convex, but we still show that the global optimal solution of EVBMF can be computed analytically. The usefulness of our results is demonstrated through experiments.
Recently, the global optimal solution of VBMF when the target matrix is square has been obtained
in [15]. Thus, our contribution to VBMF can be regarded as an extension of the previous result to
general rectangular matrices. On the other hand, for EVBMF, this is the first paper that gives the
analytic global solution, to the best of our knowledge. The global analytic solution for EVBMF is
shown to be highly useful in experiments.
2
Bayesian Matrix Factorization
In this section, we formulate the MF problem and review a variational Bayesian MF algorithm.
2.1
Formulation
The goal of MF is to approximate an unknown target matrix $U$ ($\in \mathbb{R}^{L \times M}$) from its $n$ observations
$$\mathcal{V}^n = \{ V^{(i)} \in \mathbb{R}^{L \times M} \}_{i=1}^{n}.$$
We assume that $L \le M$. If $L > M$, we may simply re-define the transpose $U^\top$ as $U$ so that $L \le M$ holds. Thus this does not impose any restriction.
A key assumption of MF is that $U$ is a low-rank matrix. Let $H$ ($\le L$) be the rank of $U$. Then the matrix $U$ can be decomposed into the product of $A \in \mathbb{R}^{M \times H}$ and $B \in \mathbb{R}^{L \times H}$ as follows (see Figure 1):
$$U = B A^\top.$$
Assume that the observed matrix $V$ is subject to the following additive-noise model:
$$V = U + E,$$
where $E$ ($\in \mathbb{R}^{L \times M}$) is a noise matrix. Each entry of $E$ is assumed to independently follow the Gaussian distribution with mean zero and variance $\sigma^2$. Then the likelihood $p(\mathcal{V}^n | A, B)$ is given by
$$p(\mathcal{V}^n | A, B) \propto \exp\left( -\frac{1}{2\sigma^2} \sum_{i=1}^{n} \left\| V^{(i)} - B A^\top \right\|_{\mathrm{Fro}}^2 \right),$$
where $\|\cdot\|_{\mathrm{Fro}}$ denotes the Frobenius norm of a matrix.
2.2
Variational Bayesian Matrix Factorization
We use Gaussian priors on the parameters $A = (a_1, \ldots, a_H)$ and $B = (b_1, \ldots, b_H)$:
$$\phi(U) = \phi_A(A)\,\phi_B(B), \quad\text{where}\quad \phi_A(A) \propto \exp\left( -\sum_{h=1}^{H} \frac{\|a_h\|^2}{2 c_{a_h}^2} \right) \;\text{and}\; \phi_B(B) \propto \exp\left( -\sum_{h=1}^{H} \frac{\|b_h\|^2}{2 c_{b_h}^2} \right).$$
$c_{a_h}^2$ and $c_{b_h}^2$ are hyperparameters corresponding to the prior variance. Without loss of generality, we assume that the product $c_{a_h} c_{b_h}$ is non-increasing with respect to $h$.
Let $r(A, B | \mathcal{V}^n)$ be a trial distribution for $A$ and $B$, and let $F_{\mathrm{VB}}$ be the variational Bayes (VB) free energy with respect to $r(A, B | \mathcal{V}^n)$:
$$F_{\mathrm{VB}}(r | \mathcal{V}^n) = \left\langle \log \frac{r(A, B | \mathcal{V}^n)}{p(\mathcal{V}^n, A, B)} \right\rangle_{r(A, B | \mathcal{V}^n)},$$
where $\langle \cdot \rangle_p$ denotes the expectation over $p$.
The VB approach minimizes the VB free energy $F_{\mathrm{VB}}(r | \mathcal{V}^n)$ with respect to the trial distribution $r(A, B | \mathcal{V}^n)$, by restricting the search space of $r(A, B | \mathcal{V}^n)$ so that the minimization is computationally tractable. Typically, dissolution of probabilistic dependency between entangled parameters ($A$ and $B$ in the case of MF) makes the calculation feasible:¹
$$r(A, B | \mathcal{V}^n) = \prod_{h=1}^{H} r_{a_h}(a_h | \mathcal{V}^n)\, r_{b_h}(b_h | \mathcal{V}^n). \qquad (1)$$
The resulting distribution is called the VB posterior. The VB solution $\widehat{U}^{\mathrm{VB}}$ is given by the VB posterior mean:
$$\widehat{U}^{\mathrm{VB}} = \left\langle B A^\top \right\rangle_{r(A, B | \mathcal{V}^n)}.$$
By applying the variational method to the VB free energy, we see that the VB posterior can be expressed as follows:
$$r(A, B | \mathcal{V}^n) = \prod_{h=1}^{H} \mathcal{N}_M(a_h; \mu_{a_h}, \Sigma_{a_h})\, \mathcal{N}_L(b_h; \mu_{b_h}, \Sigma_{b_h}),$$
where $\mathcal{N}_d(\cdot; \mu, \Sigma)$ denotes the $d$-dimensional Gaussian density with mean $\mu$ and covariance matrix $\Sigma$. $\mu_{a_h}$, $\mu_{b_h}$, $\Sigma_{a_h}$, and $\Sigma_{b_h}$ satisfy
$$\mu_{a_h} = \Sigma_{a_h} \bar{Z}_h^\top \mu_{b_h}, \quad \Sigma_{a_h} = \left( \frac{n \eta_h}{\sigma^2} + c_{a_h}^{-2} \right)^{-1} I_M, \quad \mu_{b_h} = \Sigma_{b_h} \bar{Z}_h \mu_{a_h}, \quad \Sigma_{b_h} = \left( \frac{n \zeta_h}{\sigma^2} + c_{b_h}^{-2} \right)^{-1} I_L, \qquad (2)$$
where $I_d$ denotes the $d$-dimensional identity matrix, and
$$\zeta_h = \|\mu_{a_h}\|^2 + \mathrm{tr}(\Sigma_{a_h}), \qquad \eta_h = \|\mu_{b_h}\|^2 + \mathrm{tr}(\Sigma_{b_h}),$$
$$\bar{Z}_h = \frac{n}{\sigma^2} \left( \bar{V} - \sum_{h' \neq h} \mu_{b_{h'}} \mu_{a_{h'}}^\top \right), \qquad \bar{V} = \frac{1}{n} \sum_{i=1}^{n} V^{(i)}.$$
The iterated conditional modes (ICM) algorithm [4, 5] for VBMF (VB-ICM) iteratively updates $\mu_{a_h}$, $\mu_{b_h}$, $\Sigma_{a_h}$, and $\Sigma_{b_h}$ by Eq.(2) from some initial values until convergence [13, 18], allowing one to obtain a local optimal solution. Finally, an estimator of $U$ is computed as
$$\widehat{U}^{\mathrm{VB\text{-}ICM}} = \sum_{h=1}^{H} \mu_{b_h} \mu_{a_h}^\top.$$
When the noise variance $\sigma^2$ is unknown, it may be estimated by the following re-estimation formula:
$$\hat{\sigma}^2 = \frac{1}{LMn} \left\{ \sum_{i=1}^{n} \left\| V^{(i)} - \sum_{h=1}^{H} \mu_{b_h} \mu_{a_h}^\top \right\|_{\mathrm{Fro}}^2 + n \sum_{h=1}^{H} \left( \zeta_h \eta_h - \|\mu_{a_h}\|^2 \|\mu_{b_h}\|^2 \right) \right\},$$
which corresponds to the derivative of the VB free energy with respect to $\sigma^2$ set to zero (see Eq.(4) in Section 3). This can be incorporated in the ICM algorithm by updating $\sigma^2$ from some initial value by the above formula in every iteration of the ICM algorithm.
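To make the VB-ICM iteration concrete, here is a minimal NumPy sketch of the updates of Eq.(2) as reconstructed above, with $V$ playing the role of the mean observation $\bar{V}$; the initialization, sweep order, and iteration count are illustrative choices, not prescribed by the paper.

```python
import numpy as np

def vb_icm(V, H, ca2, cb2, sigma2, n=1, n_iter=100, seed=0):
    """One run of the VB-ICM updates of Eq. (2), as reconstructed; V plays the
    role of the mean observation V_bar. Sigma_{a_h} and Sigma_{b_h} are isotropic,
    so they are stored as scalar variances s_a[h] and s_b[h]."""
    rng = np.random.default_rng(seed)
    L, M = V.shape
    mu_a = rng.standard_normal((M, H))      # columns are mu_{a_h}
    mu_b = rng.standard_normal((L, H))
    s_a, s_b = np.ones(H), np.ones(H)       # Sigma_{a_h} = s_a[h] * I_M, etc.
    for _ in range(n_iter):
        for h in range(H):
            # Z_bar_h: rescaled residual excluding component h
            R = V - mu_b @ mu_a.T + np.outer(mu_b[:, h], mu_a[:, h])
            Zbar_h = (n / sigma2) * R
            eta = mu_b[:, h] @ mu_b[:, h] + L * s_b[h]
            s_a[h] = 1.0 / (n * eta / sigma2 + 1.0 / ca2[h])
            mu_a[:, h] = s_a[h] * (Zbar_h.T @ mu_b[:, h])
            zeta = mu_a[:, h] @ mu_a[:, h] + M * s_a[h]
            s_b[h] = 1.0 / (n * zeta / sigma2 + 1.0 / cb2[h])
            mu_b[:, h] = s_b[h] * (Zbar_h @ mu_a[:, h])
    return mu_b @ mu_a.T                     # U_hat^{VB-ICM}
```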
¹ Although a weaker constraint, $r(A, B | \mathcal{V}^n) = r_A(A | \mathcal{V}^n)\, r_B(B | \mathcal{V}^n)$, is sufficient to derive a tractable iterative algorithm [13], we assume the stronger one (1) used in [18], which makes our theoretical analysis tractable.
2.3
Empirical Variational Bayesian Matrix Factorization
In the VB framework, hyperparameters ($c_{a_h}^2$ and $c_{b_h}^2$ in the current setup) can also be learned from data by minimizing the VB free energy, which is called the empirical VB (EVB) method [5].
By setting the derivatives of the VB free energy with respect to $c_{a_h}^2$ and $c_{b_h}^2$ to zero, the following optimality condition can be obtained (see also Eq.(4) in Section 3):
$$c_{a_h}^2 = \zeta_h / M \quad\text{and}\quad c_{b_h}^2 = \eta_h / L. \qquad (3)$$
The ICM algorithm for EVBMF (EVB-ICM) iteratively updates $c_{a_h}^2$ and $c_{b_h}^2$ by Eq.(3), in addition to $\mu_{a_h}$, $\mu_{b_h}$, $\Sigma_{a_h}$, and $\Sigma_{b_h}$ by Eq.(2). Again, one may obtain a local optimal solution by this algorithm.
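The EVB-ICM variant only adds the closed-form hyperparameter update of Eq.(3) to each sweep; a minimal sketch of that extra step, using the same variable naming as the VB-ICM sketch above:

```python
def evb_update_hyperparams(mu_a, mu_b, s_a, s_b, ca2, cb2):
    """Eq. (3): closed-form re-estimation of the prior variances given the
    current posterior moments (same variable naming as the VB-ICM sketch)."""
    M, H = mu_a.shape
    L = mu_b.shape[0]
    for h in range(H):
        ca2[h] = (mu_a[:, h] @ mu_a[:, h] + M * s_a[h]) / M   # zeta_h / M
        cb2[h] = (mu_b[:, h] @ mu_b[:, h] + L * s_b[h]) / L   # eta_h / L
    return ca2, cb2
```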
3
Analytic-form Expression of Global Optimal Solution of VBMF
In this section, we derive an analytic-form expression of the VBMF global solution.
The VB free energy can be explicitly expressed as follows:
$$F_{\mathrm{VB}}(r | \mathcal{V}^n) = \frac{nLM}{2} \log \sigma^2 + \sum_{h=1}^{H} \left( \frac{M}{2} \log c_{a_h}^2 - \frac{1}{2} \log |\Sigma_{a_h}| + \frac{\zeta_h}{2 c_{a_h}^2} + \frac{L}{2} \log c_{b_h}^2 - \frac{1}{2} \log |\Sigma_{b_h}| + \frac{\eta_h}{2 c_{b_h}^2} \right)$$
$$\qquad + \frac{1}{2\sigma^2} \sum_{i=1}^{n} \left\| V^{(i)} - \sum_{h=1}^{H} \mu_{b_h} \mu_{a_h}^\top \right\|_{\mathrm{Fro}}^2 + \frac{n}{2\sigma^2} \sum_{h=1}^{H} \left( \zeta_h \eta_h - \|\mu_{a_h}\|^2 \|\mu_{b_h}\|^2 \right), \qquad (4)$$
where $|\cdot|$ denotes the determinant of a matrix. We solve the following problem:
$$\text{Given } (c_{a_h}^2, c_{b_h}^2) \in \mathbb{R}_{++}^2 \;(\forall h = 1, \ldots, H), \; \sigma^2 \in \mathbb{R}_{++},$$
$$\min F_{\mathrm{VB}}(\{\mu_{a_h}, \mu_{b_h}, \Sigma_{a_h}, \Sigma_{b_h};\; h = 1, \ldots, H\})$$
$$\text{s.t. } \mu_{a_h} \in \mathbb{R}^M, \; \mu_{b_h} \in \mathbb{R}^L, \; \Sigma_{a_h} \in \mathcal{S}_{++}^M, \; \Sigma_{b_h} \in \mathcal{S}_{++}^L \;(\forall h = 1, \ldots, H),$$
where $\mathcal{S}_{++}^d$ denotes the set of $d \times d$ symmetric positive-definite matrices. This is a non-convex optimization problem, but still we show that the global optimal solution can be analytically obtained.
Let $\gamma_h$ ($\ge 0$) be the $h$-th largest singular value of $V$, and let $\omega_{a_h}$ and $\omega_{b_h}$ be the associated right and left singular vectors:²
$$V = \sum_{h=1}^{L} \gamma_h\, \omega_{b_h} \omega_{a_h}^\top.$$
Let $\hat{\gamma}_h$ be the second largest real solution of the following quartic equation with respect to $t$:
$$f_h(t) := t^4 + \xi_3 t^3 + \xi_2 t^2 + \xi_1 t + \xi_0 = 0, \qquad (5)$$
where the coefficients are defined by
$$\xi_3 = \frac{(L - M)^2 \gamma_h}{LM}, \qquad \xi_2 = -\left( \xi_3 \gamma_h + \frac{(L^2 + M^2)\, \breve{\gamma}_h^2}{LM} + \frac{2\sigma^4}{n^2 c_{a_h}^2 c_{b_h}^2} \right),$$
$$\xi_1 = \xi_3 \sqrt{\xi_0}, \qquad \xi_0 = \left( \breve{\gamma}_h^2 - \frac{\sigma^4}{n^2 c_{a_h}^2 c_{b_h}^2} \right)^2, \qquad \breve{\gamma}_h^2 = \left( 1 - \frac{L\sigma^2}{n\gamma_h^2} \right) \left( 1 - \frac{M\sigma^2}{n\gamma_h^2} \right) \gamma_h^2.$$
Let
$$\tilde{\gamma}_h = \sqrt{ \frac{(L + M)\sigma^2}{2n} + \frac{\sigma^4}{2 n^2 c_{a_h}^2 c_{b_h}^2} + \sqrt{ \left( \frac{(L + M)\sigma^2}{2n} + \frac{\sigma^4}{2 n^2 c_{a_h}^2 c_{b_h}^2} \right)^2 - \frac{LM\sigma^4}{n^2} } }. \qquad (6)$$
Then we can analytically express the VBMF solution $\widehat{U}^{\mathrm{VB}}$ as in the following theorem.
² In our analysis, we assume that $V$ has no missing entry, and its singular value decomposition (SVD) is easily obtained. Therefore, our results cannot be directly applied to missing entry prediction.
Theorem 1 The global VB solution can be expressed as
$$\widehat{U}^{\mathrm{VB}} = \sum_{h=1}^{H} \hat{\gamma}_h^{\mathrm{VB}}\, \omega_{b_h} \omega_{a_h}^\top, \quad\text{where}\quad \hat{\gamma}_h^{\mathrm{VB}} = \begin{cases} \hat{\gamma}_h & \text{if } \gamma_h > \tilde{\gamma}_h, \\ 0 & \text{otherwise.} \end{cases}$$
Sketch of proof: We first show that minimizing (4) amounts to a reweighted SVD and that any minimizer is a stationary point. Then, by analyzing the stationary condition (2), we obtain an equation with respect to $\hat{\gamma}_h$ as a necessary and sufficient condition to be a stationary point (note that its quadratic approximation gives bounds on the solution [15]). Its rigorous evaluation results in the quartic equation (5). Finally, we show that only the second largest solution of the quartic equation (5) lies within the bounds, which completes the proof.
The coefficients of the quartic equation (5) are analytic, so $\hat{\gamma}_h$ can also be obtained analytically³, e.g., by Ferrari's method [9] (we omit the details due to lack of space). Therefore, the global VB solution can be analytically computed. This is a strong advantage over the standard ICM algorithm, since many iterations and restarts would be necessary to find a good solution by ICM.
Based on the above result, the complete VB posterior can also be obtained analytically as follows.
Corollary 2 The VB posteriors are given by
$$r_A(A | \mathcal{V}^n) = \prod_{h=1}^{H} \mathcal{N}_M(a_h; \mu_{a_h}, \Sigma_{a_h}), \qquad r_B(B | \mathcal{V}^n) = \prod_{h=1}^{H} \mathcal{N}_L(b_h; \mu_{b_h}, \Sigma_{b_h}),$$
where, for $\hat{\gamma}_h^{\mathrm{VB}}$ being the solution given by Theorem 1,
$$\mu_{a_h} = \pm \sqrt{ \hat{\gamma}_h^{\mathrm{VB}}\, \hat{\delta}_h }\; \omega_{a_h}, \qquad \mu_{b_h} = \pm \sqrt{ \hat{\gamma}_h^{\mathrm{VB}}\, \hat{\delta}_h^{-1} }\; \omega_{b_h},$$
$$\Sigma_{a_h} = \frac{ -n \bar{\gamma}_h^2 - \sigma^2 (M - L) + \sqrt{ \left( n \bar{\gamma}_h^2 - \sigma^2 (M - L) \right)^2 + 4 M n \sigma^2 \bar{\gamma}_h^2 } }{ 2 n M \left( \hat{\gamma}_h^{\mathrm{VB}} \hat{\delta}_h^{-1} + n^{-1} \sigma^2 c_{a_h}^{-2} \right) }\, I_M,$$
$$\Sigma_{b_h} = \frac{ -n \bar{\gamma}_h^2 + \sigma^2 (M - L) + \sqrt{ \left( n \bar{\gamma}_h^2 + \sigma^2 (M - L) \right)^2 + 4 L n \sigma^2 \bar{\gamma}_h^2 } }{ 2 n L \left( \hat{\gamma}_h^{\mathrm{VB}} \hat{\delta}_h + n^{-1} \sigma^2 c_{b_h}^{-2} \right) }\, I_L,$$
$$\hat{\delta}_h = \frac{ n (M - L) (\gamma_h - \hat{\gamma}_h^{\mathrm{VB}}) + \sqrt{ n^2 (M - L)^2 (\gamma_h - \hat{\gamma}_h^{\mathrm{VB}})^2 + \dfrac{4 L M \sigma^4}{c_{a_h}^2 c_{b_h}^2} } }{ 2 \sigma^2 M c_{a_h}^{-2} },$$
$$\bar{\gamma}_h^2 = \begin{cases} \gamma_h^2 & \text{if } \gamma_h > \tilde{\gamma}_h, \\ \dfrac{\sigma^2}{n\, c_{a_h} c_{b_h}} & \text{otherwise.} \end{cases}$$
When the noise variance $\sigma^2$ is unknown, one may use the minimizer of the VB free energy with respect to $\sigma^2$ as its estimate. In practice, this single-parameter minimization may be carried out numerically based on Eq.(4) and Corollary 2.
4
Analytic-form Expression of Global Optimal Solution of Empirical VBMF
In this section, we solve the following problem to obtain the EVBMF global solution:
$$\text{Given } \sigma^2 \in \mathbb{R}_{++},$$
$$\min F_{\mathrm{VB}}(\{\mu_{a_h}, \mu_{b_h}, \Sigma_{a_h}, \Sigma_{b_h}, c_{a_h}^2, c_{b_h}^2;\; h = 1, \ldots, H\})$$
$$\text{s.t. } \mu_{a_h} \in \mathbb{R}^M, \; \mu_{b_h} \in \mathbb{R}^L, \; \Sigma_{a_h} \in \mathcal{S}_{++}^M, \; \Sigma_{b_h} \in \mathcal{S}_{++}^L, \; (c_{a_h}^2, c_{b_h}^2) \in \mathbb{R}_{++}^2 \;(\forall h = 1, \ldots, H),$$
where $\mathbb{R}_{++}^d$ denotes the set of $d$-dimensional vectors with positive elements. We show that, although this is again a non-convex optimization problem, the global optimal solution can be obtained analytically. We can observe the invariance of the VB free energy (4) under the transform
$$(\mu_{a_h}, \mu_{b_h}, \Sigma_{a_h}, \Sigma_{b_h}, c_{a_h}^2, c_{b_h}^2) \;\to\; \left(s_h \mu_{a_h},\; s_h^{-1} \mu_{b_h},\; s_h^2 \Sigma_{a_h},\; s_h^{-2} \Sigma_{b_h},\; s_h^2 c_{a_h}^2,\; s_h^{-2} c_{b_h}^2\right)$$
³ In practice, one may solve the quartic equation numerically, e.g., by the "roots" function in MATLAB®.
In practice, one may solve the quartic equation numerically, e.g., by the ?roots? function in MATLAB?
5
3
5
3.5
Global solution
2.5
4.5
3.25
2
0
Global solution
1
2
4
Global solution
3
0
1
2
ch
ch
(a) V = 1.5
(b) V = 2.1
3
0
1
2
3
ch
(c) V = 2.7
Figure 2: Profiles of the VB free energy (4) when $L = M = H = 1$, $n = 1$, and $\sigma^2 = 1$ for observations $V = 1.5$, $2.1$, and $2.7$. (a) When $V = 1.5 < 2 = \underline{\gamma}_h$, the VB free energy is monotone increasing and thus the global solution is given by $c_h \to 0$. (b) When $V = 2.1 > 2 = \underline{\gamma}_h$, a local minimum exists at $c_h = \breve{c}_h \approx 1.37$, but $\Delta_h \approx 0.12 > 0$, so $c_h \to 0$ is still the global solution. (c) When $V = 2.7 > 2 = \underline{\gamma}_h$, $\Delta_h \approx -0.74 \le 0$, and thus the minimizer at $c_h = \breve{c}_h \approx 2.26$ is the global solution.
for any $\{s_h \neq 0;\; h = 1, \ldots, H\}$. Accordingly, we fix the ratios to $c_{a_h}/c_{b_h} = S > 0$, and refer to $c_h := c_{a_h} c_{b_h}$ also as a hyperparameter.
Let
$$\breve{c}_h^2 = \frac{1}{2LM} \left( \gamma_h^2 - \frac{(L + M)\sigma^2}{n} + \sqrt{ \left( \gamma_h^2 - \frac{(L + M)\sigma^2}{n} \right)^2 - \frac{4LM\sigma^4}{n^2} } \right),$$
$$\underline{\gamma}_h = \left(\sqrt{L} + \sqrt{M}\right) \sigma / \sqrt{n}. \qquad (7)$$
Then, we have the following lemma:
Lemma 3 If $\gamma_h \ge \underline{\gamma}_h$, the VB free energy function (4) can have two local minima, namely, $c_h \to 0$ and $c_h = \breve{c}_h$. Otherwise, $c_h \to 0$ is the only local minimum of the VB free energy.
Sketch of proof: Analyzing the region where $c_h$ is so small that the VB solution given $c_h$ is $\hat{\gamma}_h = 0$, we find a local minimum $c_h \to 0$. Combining the stationary conditions (2) and (3), we derive a quadratic equation with respect to $c_h^2$ whose larger solution is given by Eq.(7). Showing that the smaller solution corresponds to saddle points completes the proof.
Figure 2 shows the profiles of the VB free energy (4) when $L = M = H = 1$, $n = 1$, and $\sigma^2 = 1$ for observations $V = 1.5$, $2.1$, and $2.7$. As illustrated, depending on the value of $V$, either $c_h \to 0$ or $c_h = \breve{c}_h$ is the global solution.
Let
$$\Delta_h := M \log\left( \frac{n \gamma_h}{M \sigma^2}\, \breve{\gamma}_h^{\mathrm{VB}} + 1 \right) + L \log\left( \frac{n \gamma_h}{L \sigma^2}\, \breve{\gamma}_h^{\mathrm{VB}} + 1 \right) + \frac{n}{\sigma^2} \left( -2 \gamma_h \breve{\gamma}_h^{\mathrm{VB}} + LM \breve{c}_h^2 \right), \qquad (8)$$
where $\breve{\gamma}_h^{\mathrm{VB}}$ is the VB solution for $c_h = \breve{c}_h$. We can show that the sign of $\Delta_h$ corresponds to that of the difference of the VB free energy at $c_h = \breve{c}_h$ and $c_h \to 0$. Then, we have the following theorem and corollary.
Theorem 4 The hyperparameter $\hat{c}_h$ that globally minimizes the VB free energy function (4) is given by $\hat{c}_h = \breve{c}_h$ if $\gamma_h > \underline{\gamma}_h$ and $\Delta_h \le 0$. Otherwise $\hat{c}_h \to 0$.
ch ? 0.
Corollary 5 The global EVB solution can be expressed as
(
H
X
??hVB
b EVB =
U
?
bhEVB ? bh ? ?
bhEVB :=
ah , where ?
0
h=1
if ?h > ? h and ?h ? 0,
otherwise.
Since the optimal hyperparameter value b
ch can be expressed in a closed-form, the global EVB
solution can also be computed analytically using the result given in Section 3. This is again a strong
advantage over the standard ICM algorithm since ICM would require many iterations and restarts to
?nd a good solution.
6
5
Experiments
In this section, we experimentally evaluate the usefulness of our analytic-form solutions using artiR
?cial and benchmark datasets. The MATLAB?
code will be available at [14].
5.1
Arti?cial Dataset
PH ?
?
We randomly created a true matrix V ? = h=1 b?h a??
h with L = 30, M = 100, and H = 10,
where every element of {ah , bh } was drawn independently from the standard Gaussian distribution.
We set n = 1, and an observation matrix V was created by adding independent Gaussian noise with
variance ? 2 = 1 to each element. We used the full-rank model, i.e., H = L = 30. The noise
variance ? 2 was assumed to be unknown, and estimated from data (see Section 2.2 and Section 3).
We ?rst investigate the learning curve of the VB free energy over EVB-ICM iterations. We created
the initial values of the EVB-ICM algorithm as follows: ?ah and ?bh were set to randomly created
orthonormal vectors, ?ah and ?bh were set to identity matrices multiplied by scalars ?a2h and ?b2h ,
respectively. ?a2h and ?b2h as well as the noise variance ? 2 were drawn from the ?2 -distribution with
degree-of-freedom one. 10 learning curves of the VB free energy were plotted in Figures 3(a). The
value of the VB free energy of the global solution computed by our analytic-form solution was also
plotted in the graph by the dashed line. The graph shows that the EVB-ICM algorithm reduces the
VB free energy reasonably well over iterations. However, for this arti?cial dataset, the convergence
speed was quite slow once in 10 runs, which was actually trapped in a local minimum.
Next, we compare the computation time. Figure 3(b) shows the computation time of EVB-ICM over
iterations and our analytic form-solution. The computation time of EVB-ICM grows almost linearly
with respect to the number of iterations, and it took 86.6 [sec] for 100 iterations on average. On the
other hand, the computation of our analytic-form solution took only 0.055 [sec] on average, including the single-parameter search for ? 2 . Thus, our method provides the reduction of computation
time in 4 orders of magnitude, with better accuracy as a minimizer of the VB free energy.
Next, we investigate the generalization error of the global analytic solutions of VB and EVB, meab ? V ? ?2 /(LM ). Figure 3(c) shows the mean and error bars (min and max)
sured by G = ?U
Fro
over 10 runs for VB with various hyperparameter values and EVB. A single hyperparameter value
was commonly used (i.e., c1 = ? ? ? = cH ) in VB, while each hyperparameter ch was separately
optimized in EVB. The result shows that EVB gives slightly lower generalization errors than VB
with the best common hyperparameter. Thus, automatic hyperparameter selection of EVB works
quite well.
Figure 3(d) shows the hyperparameter values chosen in EVB sorted in the decreasing order. This
shows that, for all 10 runs, ch is positive for h ? H ? (= 10) and zero for h > H ? . This implies that
the effect of automatic relevance determination [16, 5] works excellently for this arti?cial dataset.
5.2
Benchmark Dataset
MF can be used for canonical correlation analysis (CCA) [8] and reduced rank regression (RRR)
[19] with appropriately pre-whitened data. Here, we solve these tasks by VBMF and evaluate the
performance using the concrete slump test dataset [28] available from the UCI repository [2].
The experimental results are depicted in Figure 4, which is in the same format as Figure 3. The
results showed that similar trends to the arti?cial dataset can still be observed for the CCA task with
the benchmark dataset (the RRR results are similar and thus omitted from the ?gure). Overall, the
proposed global analytic solution is shown to be a useful alternative to the popular ICM algorithm.
6
Discussion and Conclusion
Overcoming the non-convexity of VB methods has been one of the important challenges in the
Bayesian machine learning community, since it sometimes prevented us from applying the VB methods to highly complex real-world problems. In this paper, we focused on the MF problem with no
missing entry, and showed that this weakness could be overcome by computing the global optimal
solution analytically. We further derived the global optimal solution analytically for the EVBMF
7
0.3
120
EVB-Analytic
EVB-ICM
1.95
EVB-Analytic
EVB-ICM
100
0.28
80
0.26
VB-Analytic
EVB-Analytic
1.6
1.4
1.2
0.24
1.91
40
0.22
1.9
20
0.2
ch
60
1
^
1.92
G
Time(sec)
F VB /(LM)
1.93
EVB-Analytic
1.8
1.94
0.8
0.6
0.4
0.2
1.89
0
50
Iteration
0.18
0
0
100
(a) VB free energy
50
Iteration
0
(b) Computation time
0
0
1
10
? ch
100
10
10
20
30
h
(c) Generalization error
(d) Hyperparameter value
Figure 3: Experimental results for the artificial dataset.
[Figure 4 plots, panels (a)-(d), in the same format as Figure 3: VB free energy, computation time, generalization error, and chosen hyperparameter values for the CCA task.]
Figure 4: Experimental results of CCA for the concrete slump test dataset.
When $c_{a_h} c_{b_h} \to \infty$, the priors get (almost) flat and the quartic equation (5) is factorized as
$$\lim_{c_{a_h} c_{b_h} \to \infty} f_h(t) = \left( t - \left( 1 - \frac{L\sigma^2}{n\gamma_h^2} \right) \gamma_h \right) \left( t - \left( 1 - \frac{M\sigma^2}{n\gamma_h^2} \right) \gamma_h \right) \left( t + \frac{M}{L} \left( 1 - \frac{L\sigma^2}{n\gamma_h^2} \right) \gamma_h \right) \left( t + \frac{L}{M} \left( 1 - \frac{M\sigma^2}{n\gamma_h^2} \right) \gamma_h \right) = 0.$$
Theorem 1 states that its second largest solution gives the VB estimator for $\gamma_h > \lim_{c_{a_h} c_{b_h} \to \infty} \tilde{\gamma}_h = \sqrt{M\sigma^2 / n}$. Thus we have
$$\lim_{c_{a_h} c_{b_h} \to \infty} \hat{\gamma}_h^{\mathrm{VB}} = \max\left( 0,\; 1 - \frac{M\sigma^2}{n\gamma_h^2} \right) \gamma_h.$$
This is the positive-part James-Stein (PJS) shrinkage estimator [10], operated on each singular component separately, and this coincides with the upper-bound derived in [15] for arbitrary cah cbh > 0.
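A sketch of this limiting estimator, applied independently to each singular component:

```python
import numpy as np

def pjs_shrinkage(gam, M, sigma2, n=1):
    """Flat-prior limit of the VB estimator: positive-part James-Stein shrinkage
    applied independently to each singular value in gam."""
    return np.maximum(0.0, 1.0 - M * sigma2 / (n * gam**2)) * gam
```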
The counter-intuitive fact that shrinkage is observed even in the limit of flat priors can be explained by the strong non-uniformity of the volume element of the Fisher metric, i.e., the Jeffreys prior [11], in the parameter space. We call this effect model-induced regularization (MIR), because it is induced not by priors but by the structure of model likelihood functions. MIR was shown to generally appear in Bayesian estimation when the model is non-identifiable (i.e., the mapping between parameters and distribution functions is not one-to-one) and the parameters are integrated out at least partially [26]. Thus, it never appears in MAP estimation [15]. Probabilistic PCA can be seen as an example of MF, where $A$ and $B$ correspond to latent variables and principal axes, respectively [24]. The MIR effect is observed in its analytic solution when $A$ is integrated out and $B$ is estimated to be the maximizer of the marginal likelihood.
Our results fully made use of the assumptions that the likelihood and priors are both spherical Gaussian, that the VB posterior is column-wise independent, and that there are no missing entries. These assumptions were necessary to solve the free energy minimization problem as a reweighted SVD. An important future direction is to obtain the analytic global solution under milder assumptions. This will enable us to handle more challenging problems such as missing entry prediction [23, 20, 6, 13, 18, 22, 12, 25].
Acknowledgments
The authors appreciate comments by anonymous reviewers, which helped improve our earlier
manuscript and suggested promising directions for future work. MS thanks the support from the
FIRST program. RT was partially supported by MEXT Kakenhi 22700138.
References
[1] Y. Amit, M. Fink, N. Srebro, and S. Ullman. Uncovering shared structures in multiclass classification. In Proceedings of the International Conference on Machine Learning, pages 17–24, 2007.
[2] A. Asuncion and D. J. Newman. UCI machine learning repository, 2007.
[3] H. Attias. Inferring parameters and structure of latent variable models by variational Bayes. In Proceedings of the Fifteenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-99), pages 21–30, San Francisco, CA, 1999. Morgan Kaufmann.
[4] J. Besag. On the statistical analysis of dirty pictures. J. Royal Stat. Soc. B, 48:259–302, 1986.
[5] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, NY, USA, 2006.
[6] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2008.
[7] O. Chapelle and Z. Harchaoui. A machine learning approach to conjoint analysis. In Advances in Neural Information Processing Systems, volume 17, pages 257–264, 2005.
[8] D. R. Hardoon, S. R. Szedmak, and J. R. Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12):2639–2664, 2004.
[9] M. Hazewinkel, editor. Encyclopaedia of Mathematics. Springer, 2002.
[10] W. James and C. Stein. Estimation with quadratic loss. In Proceedings of the 4th Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 361–379. University of California Press, 1961.
[11] H. Jeffreys. An invariant form for the prior probability in estimation problems. In Proceedings of the Royal Society of London, Series A, volume 186, pages 453–461, 1946.
[12] S. Ji and J. Ye. An accelerated gradient method for trace norm minimization. In Proceedings of the International Conference on Machine Learning, pages 457–464, 2009.
[13] Y. J. Lim and Y. W. Teh. Variational Bayesian approach to movie rating prediction. In Proceedings of KDD Cup and Workshop, 2007.
[14] S. Nakajima. Matlab code for VBMF. http://sites.google.com/site/shinnkj23/, 2010.
[15] S. Nakajima and M. Sugiyama. Implicit regularization in variational Bayesian matrix factorization. In Proceedings of the 27th International Conference on Machine Learning (ICML 2010), 2010.
[16] R. M. Neal. Bayesian Learning for Neural Networks. Springer, 1996.
[17] A. Paterek. Improving regularized singular value decomposition for collaborative filtering. In Proceedings of KDD Cup and Workshop, 2007.
[18] T. Raiko, A. Ilin, and J. Karhunen. Principal component analysis for large scale problems with lots of missing values. In Proceedings of ECML, volume 4701, pages 691–698, 2007.
[19] G. R. Reinsel and R. P. Velu. Multivariate Reduced-Rank Regression: Theory and Applications. Springer, New York, 1998.
[20] J. D. M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd International Conference on Machine Learning, pages 713–719, 2005.
[21] R. Rosipal and N. Krämer. Overview and recent advances in partial least squares. In Subspace, Latent Structure and Feature Selection Techniques, volume 3940, pages 34–51. Springer, 2006.
[22] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1257–1264, 2008.
[23] N. Srebro, J. Rennie, and T. Jaakkola. Maximum margin matrix factorization. In Advances in Neural Information Processing Systems, volume 17, 2005.
[24] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B, 61(3):611–622, 1999.
[25] R. Tomioka, T. Suzuki, M. Sugiyama, and H. Kashima. An efficient and general augmented Lagrangian algorithm for learning low-rank matrices. In Proceedings of the International Conference on Machine Learning, 2010.
[26] S. Watanabe. Algebraic Geometry and Statistical Learning Theory. Cambridge University Press, Cambridge, UK, 2009.
[27] K. J. Worsley, J.-B. Poline, K. J. Friston, and A. C. Evans. Characterizing the response of PET and fMRI data using multivariate linear models. NeuroImage, 6(4):305–319, 1997.
[28] I-C. Yeh. Modeling slump flow of concrete using second-order regressions and artificial neural networks. Cement and Concrete Composites, 29(6):474–480, 2007.
[29] K. Yu, V. Tresp, and A. Schwaighofer. Learning Gaussian processes from multiple tasks. In Proceedings of ICML, page 1019, 2005.
| 4063 |@word trial:2 determinant:1 repository:2 advantageous:2 norm:8 nd:7 cah:15 stronger:1 decomposition:4 covariance:2 arti:7 tr:2 reduction:1 initial:3 series:2 current:1 com:1 attracted:1 evans:1 numerical:1 additive:1 kdd:2 analytic:26 update:2 stationary:4 intelligence:1 accordingly:1 gure:1 provides:1 mathematical:1 symposium:1 ilin:1 ra:2 multi:2 salakhutdinov:1 globally:1 decomposed:1 decreasing:1 automatically:1 spherical:1 increasing:2 hardoon:1 factorized:3 minimizes:2 corporation:1 cial:8 berkeley:1 masashi:1 every:2 fink:1 rm:3 platt:1 sale:1 uk:1 omit:1 appear:1 positive:4 local:10 tends:1 sd:1 limit:1 despite:1 id:1 analyzing:2 bhevb:2 studied:2 challenging:1 co:1 factorization:11 nca:1 acknowledgment:1 practice:3 nite:1 vbmf:15 empirical:5 composite:1 pre:1 get:1 cannot:1 selection:2 bh:67 nb:4 applying:2 restriction:1 c2ah:9 map:2 imposed:1 demonstrated:1 missing:6 reviewer:1 attention:1 lagrangian:1 independently:2 convex:8 rectangular:1 formulate:1 focused:1 shen:1 estimator:3 regarded:1 orthonormal:1 handle:1 ferrari:1 updated:1 target:4 element:4 trend:1 recognition:1 updating:1 observed:4 region:1 counter:1 convexity:1 uniformity:1 solving:2 easily:1 various:2 fast:1 london:1 newman:1 whose:1 quite:2 larger:1 solve:5 rennie:2 otherwise:5 statistic:1 transform:1 advantage:2 bhvb:6 cai:1 took:2 product:2 cients:2 combining:1 uci:2 roweis:1 intuitive:1 frobenius:3 rst:4 convergence:3 optimum:2 produce:2 encyclopaedia:1 illustrate:1 derive:3 ac:2 stat:1 depending:1 completion:1 lowrank:1 eq:7 strong:3 soc:1 c:1 implies:2 direction:1 tokyo:6 nlm:1 enable:1 hvb:4 require:1 generalization:4 anonymous:1 im:2 extension:1 hold:1 practically:1 exp:3 cb:2 mapping:1 lm:10 omitted:1 fh:2 purpose:1 estimation:6 proc:2 largest:4 minimization:4 gaussian:8 reinsel:1 shrinkage:2 jaakkola:1 corollary:4 derived:2 ax:1 kakenhi:1 rank:12 likelihood:4 contrast:1 rigorous:1 besag:1 posteriori:1 milder:1 typically:1 integrated:2 koller:1 overall:1 uncovering:1 marginal:1 once:1 never:1 yu:1 icml:1 future:2 fmri:1 t2:1 randomly:2 geometry:1 freedom:1 highly:5 investigate:2 mnih:1 evaluation:1 weakness:1 sh:6 nl:2 operated:1 partial:2 necessary:4 cbh:15 taylor:1 re:2 plotted:2 theoretical:2 column:1 earlier:1 modeling:1 evb:29 entry:6 c2b:1 usefulness:3 dependency:1 combined:1 thanks:1 density:1 international:5 siam:1 probabilistic:4 concrete:4 squared:1 again:4 nm:4 derivative:2 velu:1 worsley:1 ullman:1 actively:2 japan:3 de:3 sec:4 satisfy:1 cement:1 explicitly:1 root:1 helped:1 closed:1 lot:1 bayes:2 slump:3 candes:1 asuncion:1 contribution:1 collaborative:2 square:3 ni:1 il:2 accuracy:1 variance:8 kaufmann:1 correspond:2 t3:1 bayesian:14 iterated:3 cation:2 ah:54 coef:2 sured:1 energy:24 sugi:1 rah:1 james:2 associated:1 proof:4 dataset:9 popular:2 knowledge:1 lim:3 actually:1 appears:1 manuscript:1 tipping:1 restarts:4 follow:1 response:1 a2h:2 formulation:1 amer:1 generality:1 implicit:1 correlation:3 until:2 hand:3 sketch:2 maximizer:1 lack:1 google:1 bh2:2 mode:4 grows:1 usa:1 effect:3 ye:1 true:1 analytically:12 regularization:2 symmetric:1 iteratively:3 neal:1 illustrated:1 reweighted:1 c2bh:9 coincides:1 m:1 mist:1 complete:1 b2h:2 pro:2 variational:11 wise:1 ef:2 recently:4 common:1 rl:6 ji:1 overview:2 jp:3 volume:7 approximates:1 numerically:2 refer:1 cup:2 cambridge:2 rd:1 automatic:2 tuning:1 mathematics:1 sugiyama:3 shawe:1 chapelle:1 posterior:9 multivariate:2 reweighed:1 showed:3 quartic:8 recent:1 scenario:1 seen:1 minimum:5 morgan:1 impose:1 dashed:1 full:1 harchaoui:1 
multiple:1 reduces:1 determination:1 calculation:1 prevented:1 a1:2 prediction:4 variant:1 regression:4 whitened:1 titech:1 expectation:1 metric:1 fifteenth:1 iteration:14 nakajima:4 sometimes:1 c1:1 addition:1 separately:2 entangled:1 completes:2 singular:10 appropriately:1 mir:3 subject:1 induced:2 comment:1 call:1 multiclass:1 attias:1 expression:3 pca:1 penalty:6 algebraic:1 york:2 matlab:3 useful:3 generally:1 amount:1 stein:2 extensively:1 ph:1 excellently:1 reduced:3 http:1 canonical:3 sign:1 estimated:3 trapped:1 rb:2 hyperparameter:11 express:1 key:1 drawn:2 nikon:2 graph:2 monotone:1 run:3 uncertainty:1 almost:2 vb:62 bound:3 cca:3 cheng:1 quadratic:3 annual:1 constraint:3 speed:1 optimality:1 min:3 format:1 ned:1 smaller:1 slightly:1 rrr:2 jeffreys:2 explained:1 invariant:1 computationally:2 equation:10 ln:1 remains:1 singer:1 tractable:3 available:2 multiplied:1 observe:1 kashima:1 alternative:2 denotes:7 dirty:1 amit:1 classical:2 society:2 appreciate:1 rt:1 gradient:1 ow:1 subspace:1 pet:1 code:2 ratio:1 minimizing:2 dissolution:1 setup:1 ryota:1 trace:5 ba:3 unknown:4 perform:1 allowing:1 upper:1 teh:1 observation:4 datasets:1 sm:2 benchmark:3 ecml:1 incorporated:1 shinichi:1 arbitrary:1 community:1 overcoming:1 rating:1 namely:1 optimized:2 identi:1 california:1 learned:3 nip:1 able:1 bar:1 suggested:1 pattern:1 challenge:1 program:1 rosipal:1 including:1 max:2 royal:3 friston:1 eh:4 regularized:4 improve:1 movie:1 technology:1 ne:1 picture:1 nding:1 created:4 carried:1 fro:5 raiko:1 tresp:1 szedmak:1 prior:10 review:1 l2:3 yeh:1 loss:2 fully:1 icml2010:1 suf:2 filtering:1 srebro:3 conjoint:1 h2:12 degree:1 thresholding:1 editor:2 poline:1 supported:1 free:24 transpose:1 hazewinkel:1 weaker:1 institute:1 characterizing:1 sparse:1 curve:2 overcome:1 world:1 computes:1 author:1 commonly:1 made:1 san:1 suzuki:1 approximate:1 global:33 uai:1 b1:2 assumed:2 francisco:1 search:2 iterative:1 latent:3 promising:3 reasonably:1 ca:1 improving:1 investigated:1 complex:1 linearly:1 noise:7 hyperparameters:5 n2:3 icm:24 augmented:1 site:2 cient:4 c2a:1 slow:1 ny:1 tomioka:3 neuroimage:1 inferring:1 watanabe:1 explicit:2 lie:1 formula:2 theorem:6 bishop:2 showing:1 explored:2 r2:1 exists:2 workshop:2 restricting:1 adding:1 kr:1 magnitude:1 karhunen:1 t4:1 margin:2 mf:18 depicted:1 simply:1 saddle:1 expressed:5 schwaighofer:1 partially:2 scalar:1 springer:5 ch:30 corresponds:3 minimizer:4 conditional:3 goal:1 identity:2 sorted:1 shared:1 fisher:1 considerable:1 feasible:1 experimentally:1 classi:2 lemma:2 principal:3 called:2 invariance:1 svd:5 experimental:3 support:1 mext:1 relevance:1 accelerated:1 evaluate:2 |
3,384 | 4,064 | Exploiting weakly-labeled Web images to improve
object classification: a domain adaptation approach
Alessandro Bergamo
Lorenzo Torresani
Computer Science Department
Dartmouth College
Hanover, NH 03755, U.S.A.
{aleb, lorenzo}@cs.dartmouth.edu
Abstract
Most current image categorization methods require large collections of manually annotated training examples to learn accurate visual recognition models.
The time-consuming human labeling effort effectively limits these approaches to
recognition problems involving a small number of different object classes. In order to address this shortcoming, in recent years several authors have proposed to
learn object classifiers from weakly-labeled Internet images, such as photos retrieved by keyword-based image search engines. While this strategy eliminates
the need for human supervision, the recognition accuracies of these methods are
considerably lower than those obtained with fully-supervised approaches, because
of the noisy nature of the labels associated to Web data.
In this paper we investigate and compare methods that learn image classifiers by
combining very few manually annotated examples (e.g., 1-10 images per class)
and a large number of weakly-labeled Web photos retrieved using keyword-based
image search. We cast this as a domain adaptation problem: given a few stronglylabeled examples in a target domain (the manually annotated examples) and many
source domain examples (the weakly-labeled Web photos), learn classifiers yielding small generalization error on the target domain. Our experiments demonstrate
that, for the same number of strongly-labeled examples, our domain adaptation
approach produces significant recognition rate improvements over the best published results (e.g., 65% better when using 5 labeled training examples per class)
and that our classifiers are one order of magnitude faster to learn and to evaluate
than the best competing method, despite our use of large weakly-labeled data sets.
1 Introduction
The last few years have seen a proliferation of human efforts to collect labeled image data sets
for the purpose of training and evaluating visual recognition systems. Label information in these
collections comes in different forms, ranging from simple object category labels to detailed semantic
pixel-level segmentations. Examples include Caltech256 [14], and the Pascal VOC2010 data set [7].
In order to increase the variety and the number of labeled object classes, a few authors have designed
online games and appealing software tools encouraging common users to participate in these image
annotation efforts [23, 30]. Despite the tremendous research contribution brought by such attempts,
even the largest labeled image collections today [6] are limited to a number of classes that is at least
one order of magnitude smaller than the number of object categories that humans can recognize [3].
In order to overcome this limitation and in an attempt to build classifiers for arbitrary object classes,
several authors have proposed systems that learn from weakly-labeled Internet photos [10, 9, 29, 20].
Most of these approaches rely on keyword-based image search engines to retrieve image examples
of specified object classes. Unfortunately, while image search engines provide training examples
without the need of any human intervention, it is sufficient to type a few example keywords in
Google or Bing image search to verify that often the majority of the retrieved images are only
loosely related with the query concept. Most prior work has attempted to address this problem
by means of outlier rejection mechanisms discarding irrelevant images from the retrieved results.
However, despite the dynamic research activity in this area, weakly-supervised approaches today
still yield significantly lower recognition accuracy than fully supervised object classifiers trained on
clean data (see, e.g., results reported in [9, 29]).
In this paper we argue that the poor performance of models learned from weakly-labeled Internet
data is not only due to undetected outliers contaminating the training data, but it is also a consequence of the statistical differences often present between Web images and the test data. Figure 1
shows sample images for some of the Caltech256 object categories versus the top six images retrieved by Bing using the class names as keywords.¹ Although a couple of outliers are indeed
present in the Bing sets, the striking difference between the two collections is that even the relevant
results in the Bing groups appear to be visually less homogeneous. For example, in the case of the
classes shown in figure 1(a,b), while the Caltech256 groups contain only real photographs, the Bing
counterparts include several cartoon drawings. In figure 1(c,d), each Caltech256 image contains
only the object of interest while the pictures retrieved by Bing include extraneous items, such as
people or faces, which act as distractors in the learning (this is particularly true when evaluating the
classifiers on Caltech256, given that ?faces? and ?people? are separate categories in the data set).
Furthermore, even when ?irrelevant? results do occur in the retrieved images, they are rarely outliers
detectable via simple coherence tests as there is often some consistency even among such photos.
For example, polysemy ? the capacity of one word to have multiple meanings ? causes multiple
visual clusters (as opposed to individual outliers) to appear in the Bing sets of figure 1(e,f) (the two
clusters in (e) are due to the fact that the word ?hawksbill? denotes both a crag in Arkansas as well as
a type of sea turtle, while in the case of (f) the keyword ?tricycle? retrieves images of both bicycles
as well as motorcycles with three wheels; note, again, that Caltech256 contains for both classes only
images corresponding to one of the words meanings and that ?motorcycle? appears as a separate
additional category). Finally, in some situations, different shooting distances or angles may produce
completely unrelated views of the same object or scene: for example, the Bing set in 1(g) includes
both aerial and ground views of Mars, which have very little in common visually.
Note that for most of the classes in figure 1 it is not clear a priori which are the ?relevant? Internet
images to be used for training until we compare them to the photos in the corresponding Caltech256
categories. In this paper we show that a few strongly-labeled examples from the test domain (e.g. a
few Caltech256 images for the class of interest) are indeed sufficient to disambiguate this relevancy
problem and to model the distribution differences between the weakly-labeled Internet data and the
test application data, so as to significantly improve recognition performance on the test set.
The situation where the test data is drawn from a distribution that is related, but not identical, to
the distribution of the training data has been widely studied in the field of machine learning and it
is traditionally addressed using so-called "domain adaptation" methods. These techniques exploit ample availability of training data from a source domain to learn a model that works effectively in a related target domain for which only few training examples are available. More formally, let $p_t(X, Y)$ and $p_s(X, Y)$ be the distributions generating the target and the source data, respectively. Here, $X$ denotes the input (a random feature vector) and $Y$ the class (a discrete random variable). The domain adaptation problem arises whenever $p_t(X, Y)$ differs from $p_s(X, Y)$. In covariance shift, it is assumed that only the distributions of the input features differ in the two domains, i.e., $p_t(Y|X) = p_s(Y|X)$ but $p_t(X) \neq p_s(X)$. Note that, without adaptation, this may lead to poor classification in the target domain, since a model learned from a large source training set will be trained to perform well in the dense source regions of $X$, which, under the covariance shift assumption, will generally be different from the dense regions of the target domain. Typically, covariance shift algorithms (e.g., [16]) address this problem by modeling the ratio $p_t(X)/p_s(X)$. Unfortunately, the much more common and challenging case is when the conditional distributions are different, i.e., $p_t(Y|X) \neq p_s(Y|X)$. When such differences are relatively small, however, knowledge gained by analyzing data in the source domain may still yield valuable information to perform prediction for test target data. This is precisely the scenario considered in this paper.
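As a concrete illustration of the ratio-modeling idea mentioned above (e.g., [16]), one standard estimator, not the method evaluated in this paper, trains a probabilistic source-versus-target classifier and reweights source examples by the implied density ratio. The sketch below assumes scikit-learn is available.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_src, X_tgt, clip=10.0):
    """Estimate p_t(x)/p_s(x) via a probabilistic source-vs-target classifier:
    the ratio is proportional to P(target | x) / P(source | x)."""
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])
    dom = LogisticRegression(max_iter=1000).fit(X, y)
    p_tgt = dom.predict_proba(X_src)[:, 1]
    w = p_tgt / np.clip(1.0 - p_tgt, 1e-6, None)
    return np.minimum(w, clip)               # clipping controls weight variance

# The weights can be passed to any classifier supporting per-sample weights, e.g.:
# clf = LogisticRegression().fit(X_src, y_src,
#                                sample_weight=importance_weights(X_src, X_tgt))
```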
¹ Note that image search results may have changed since these examples were captured.
[Figure 1 grid: example images from Caltech256 and the top Bing image-search results for seven categories, panels (a)-(g).]
Figure 1: Images in Caltech256 for several categories and top results retrieved by Bing image search
for the corresponding keywords. The Bing sets are both semantically and visually less coherent:
presence of multiple objects in the same image, polysemy, caricaturization, as well as variations in
viewpoints are some of the visual effects present in Internet images which cause significant data
distribution differences between the Bing sets and the corresponding Caltech256 groups.
2 Relationship to other methods
Most of the prior work on learning visual models from image search has focused on the task of
?cleaning up? Internet photos. For example, in the pioneering work of Fergus et al. [10], visual filters
learned from image search were used to rerank photos on the basis of visual consistency. Subsequent
approaches [2, 25, 20] have employed similar outlier rejection schemes to automatically construct
clean(er) data sets of images for training and testing object classifiers. Even techniques aimed at
learning explicit object classifiers from image search [9, 29] have identified outlier removal as the
key-ingredient to improve recognition. In our paper we focus on another fundamental, yet largely
ignored, aspect of the problem: we argue that the current poor performance of classification models
learned from the Web is due to the distribution differences between Internet photos and image test
examples. To the best of our knowledge we propose the first systematic empirical analysis of domain
adaptation methods to address sample distribution differences in object categorization due to the use
of weakly-labeled Web images as training data. We note that in work concurrent to our own, Saenko
et al. [24] have also analyzed cross-domain adaptation of object classifiers. However, their work
focuses on the statistical differences caused by varying lighting conditions (uncontrolled versus
studio setups) and by images taken with different camera types (a digital SLR versus a webcam).
Transfer learning, also known as multi-task learning, is related to domain adaptation. In computer
vision, transfer learning has been applied to a wide range of problems including object categorization
(see, e.g., [21, 8, 22]). However, transfer learning addresses a different problem. In transfer learning
there is a single distribution of the inputs p(X) but there are multiple output variables Y_1, ..., Y_T, associated with T distinct tasks (e.g., learning classifiers for different object classes). Typically, it is assumed that some relations exist among the tasks; for example, some common structure when learning the classifiers p(Y_1|X, θ_1), ..., p(Y_T|X, θ_T) can be enforced by assuming that the parameters θ_1, ..., θ_T are generated from a shared prior p(θ). The fundamental difference is that in domain adaptation we have a single task but different domains, i.e., different sources of data.
As our approach relies on a mix of labeled and weakly-labeled images, it is loosely related to semi-supervised methods for object classification [15, 19]. Within this genre, the algorithm described in [11] is perhaps the closest to our work, as it also relies on weakly-labeled Internet images. However, unlike our approach, these semi-supervised methods are designed to work in cases where the test examples and the training data are generated from the same distribution.
3 Approach overview
3.1 Experimental setup
Our objective is to evaluate domain adaptations methods on the task of object classification, using
photos from a human-labeled data set as target domain examples and images retrieved by a keyword-based image search engine as examples of the source domain.
We used Caltech256 as the data set for the target domain since it is an established benchmark for
object categorization and it contains a large number of classes (256) thus allowing us to average out
performance variations due to especially easy or difficult categories. From each class, we randomly
sampled nT images as target training examples, and other mT images as target test examples.
We formed the weakly-labeled source data by collecting the top n_s images retrieved by Bing image search for each of the Caltech256 category text labels. Although it may have been possible to
improve the relevancy of the image results for some of the classes by manually selecting less ambiguous search keywords, we chose to issue queries on the unchanged Caltech256 text class labels
to avoid subjective alteration of the results. However, in order to ensure valid testing, we removed
near duplicates of Caltech256 images from the source training set by a human-supervised process.
3.2 Feature representation and classification model
In order to study the effect of large weakly-labeled training sets on object recognition performance,
we need a baseline system that achieves good performance on object categorization and that supports
efficient learning and test evaluation. The current best published results on Caltech256 were obtained
by a kernel combination classifier using 39 different feature kernels, one for each feature type [13].
However, since both training as well as testing are computationally very expensive with this classifier,
this model is unsuitable for our needs.
Instead, in this work we use as image representation the classeme features recently proposed by
Torresani et al. [28]. This descriptor is particularly suitable for our task as it has been shown to
yield near state-of-the-art results with simple linear support vector machines, which can be learned
very efficiently even for large training sets. The descriptor measures the closeness of an image
to a basis set of classes and can be used as an intermediate representation to learn classifiers for
new classes. The basis classifiers of the classeme descriptor are learned from weakly-labeled data
collected for a large and semantically broad set of attributes (the final descriptor contains 2659
attributes). To eliminate the risk of the test classes being already explicitly represented in the feature
vector, in this work we removed from the descriptor 34 attributes, corresponding to categories related
to Caltech256 classes. We use a binarized version of this descriptor obtained by thresholding to 0
the output of the attribute classifiers: this yields for each image a 2625-dimensional binary vector
describing the predicted presence/absence of visual attributes in the photo. This binarization has
been shown to yield very little degradation in recognition performance (see [28] for further details).
We denote with f(x) ∈ {0, 1}^F the binary attribute vector extracted from image x, with F = 2625.
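A minimal sketch of the binarization step just described, assuming a hypothetical attribute_scores(image) callable standing in for the 2,625 basis classifiers of [28]; only the thresholding to {0, 1} is shown.

# Sketch: forming the binarized classeme descriptor f(x) in {0,1}^F.
import numpy as np

def classeme_features(image, attribute_scores):
    scores = attribute_scores(image)       # real-valued outputs, shape (2625,)
    return (scores > 0).astype(np.uint8)   # predicted presence/absence bits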
Object class recognition is traditionally formulated as a multiclass classification problem: given a test image x, predict the class label y ∈ {1, ..., K} of the object present in it, where K is the number of possible classes (in the case of Caltech256, K = 256). In this paper we implement multi-class classification using K binary classifiers trained using the one-versus-the-rest scheme
and perform prediction according to the winner-take-all strategy. The k-th binary classifier (distinguishing between class k and the other classes) is trained on a target training set D_k^t and a collection D_k^s of weakly-labeled source training examples. D_k^t is formed by aggregating the Caltech256 training images of all classes, using the data from the k-th class as positive examples and the data from the remaining classes as negative examples, i.e., D_k^t = \{(\mathbf{f}_i^t, y_{i,k}^t)\}_{i=1}^{N_t}, where \mathbf{f}_i^t = f(x_i) denotes the feature vector of the i-th image, N_t = K \cdot n_t is the total number of images in the strongly-labeled data set, and y_{i,k}^t ∈ \{-1, 1\} is 1 iff example i belongs to class k. The source training set D_k^s = \{\mathbf{f}_{i,k}^s\}_{i=1}^{n_s} is the collection of n_s images retrieved by Bing using the category name of the k-th class as keyword. As discussed in the next section, different methods will make different assumptions on the labels of the source examples.
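As a sketch of this one-versus-the-rest setup, the snippet below builds the ±1 labels of D_k^t together with the class-size-normalized cost entries used later in Section 4; feats and labels are assumed to be NumPy arrays.

# Sketch: labels and per-example costs for the k-th binary problem.
import numpy as np

def build_target_set(labels, k):
    # labels: array of class ids in {1, ..., K} for the N_t target images.
    y = np.where(labels == k, 1, -1)           # +1 for class k, -1 for the rest
    n_pos, n_neg = np.sum(y == 1), np.sum(y == -1)
    c = np.where(y == 1, 1.0 / n_pos, 1.0 / n_neg)   # cost normalization
    return y, c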
We adopt a linear SVM as the model for the binary one-vs-the-rest classifiers. This choice is primarily motivated by the availability of several simple yet effective domain adaptation variants of
SVM [5, 26], in addition to the aforementioned reasons of good performance and efficiency.
4 Methods
We now present the specific domain adaptation SVM algorithms. For brevity, we drop the subscript
k indicating dependence on the specific class. The hyperparameters C of all classifiers are selected
so as to minimize the multiclass cross validation error on the target training data. For all algorithms,
we cope with the largely unequal number of positive and negative examples by normalizing the cost
entries in the loss function by the respective class sizes.
4.1 Baselines: SVM_s, SVM_t, SVM_{s∪t}
We include in our evaluation three algorithms not based on domain adaptation and use them as
comparative baselines. We indicate with SVM_t a linear SVM learned exclusively from the target examples. SVM_s denotes an SVM learned from the source examples using the one-versus-the-rest scheme and assuming no outliers are present in the image search results. SVM_{s∪t} is a linear SVM trained on the union of the target and source examples. Specifically, for each class k, we train a binary SVM on the data obtained by merging D_k^t with D_k^s, where the data in the latter set is assumed to contain only positive examples, i.e., no outliers. The hyperparameter C is kept the same for all K binary classifiers but tuned distinctly for each of the three methods by selecting the hyperparameter value yielding the best multiclass performance on the target training set (we used hold-out validation on D_k^t for SVM_s and 5-fold cross validation for both SVM_t as well as SVM_{s∪t}).
4.2 Mixture of source and target hypotheses: MIXSVM
One of the simplest possible strategies for domain adaptation consists of using as final classifier a
convex combination of the two SVM hypotheses learned independently from the source and target
data. Despite its simplicity, this classifier has been shown to yield good empirical results [26].
Let us represent the source and target multiclass hypotheses as vector-valued functions h_s(f) ∈ R^K, h_t(f) ∈ R^K, where the k-th outputs are the respective SVM scores for class k. MIXSVM computes a convex combination h(f) = α h_s(f) + (1 − α) h_t(f) and predicts the class k* associated to the largest output, i.e., k* = argmax_{k ∈ {1,...,K}} h_k(f). The parameter α ∈ [0, 1] is determined via grid search by optimizing the multiclass error on the target training set. We avoid biased estimates resulting from learning the hypothesis h_t and α on the same training set by applying a two-stage procedure: we learn 5 distinct hypotheses h_t using 5-fold cross validation (with the hyperparameter value found for SVM_t) and compute the prediction h_t(f_i^t) at each training sample f_i^t using the cross-validation hypothesis that was not trained on that example; we then use these predicted outputs to determine the optimal α. Last, we learn the final hypothesis h_t using the entire target training set.
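A minimal sketch of the MIXSVM combination and the grid search over α, assuming h_s is a callable returning the K source SVM scores and h_t_cv[i] is the cross-validation target hypothesis not trained on example i (both hypothetical stand-ins).

# Sketch: MIXSVM prediction and selection of alpha on target training data.
import numpy as np

def mixsvm_predict(f, h_s, h_t, alpha):
    h = alpha * h_s(f) + (1.0 - alpha) * h_t(f)
    return int(np.argmax(h)) + 1                 # classes indexed 1..K

def select_alpha(feats, labels, h_s, h_t_cv, grid=np.linspace(0, 1, 21)):
    best, best_err = 0.0, np.inf
    for a in grid:
        preds = [int(np.argmax(a * h_s(f) + (1 - a) * h_t_cv[i](f))) + 1
                 for i, f in enumerate(feats)]
        err = np.mean(np.array(preds) != labels)
        if err < best_err:
            best, best_err = a, err
    return best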
4.3 Domain weighting: DWSVM
Another straightforward yet popular domain adaptation approach is to train a classifier using both the source and the target examples by weighting the two domains differently in the learning objective [5, 12, 4]. We follow the implementation proposed in [26] and weight the loss function values differently for the source and target examples by using two distinct SVM hyperparameters, C_s and C_t, encoding the relative importance of the two domains. The values of these hyperparameters are selected by minimizing the multiclass 5-fold cross-validation error on the target training set.
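The domain-weighting idea can be sketched with per-example costs in a linear SVM; the snippet below assumes scikit-learn's LinearSVC with sample weights and mimics, rather than reproduces, the implementation of [26].

# Sketch: domain weighting via per-example costs (C_t and C_s per domain).
import numpy as np
from sklearn.svm import LinearSVC

def train_dwsvm(X_t, y_t, X_s, y_s, C_t, C_s):
    X = np.vstack([X_t, X_s])
    y = np.concatenate([y_t, y_s])
    w = np.concatenate([np.full(len(y_t), C_t), np.full(len(y_s), C_s)])
    return LinearSVC(C=1.0).fit(X, y, sample_weight=w)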
4.4 Feature augmentation: AUGSVM
We denote with AUGSVM the domain adaptation method described in [5]. The key idea of this approach is to create a feature-augmented version of each individual example f, where distinct feature-augmentation mappings φ_s, φ_t are used for the source and target data, respectively:

\varphi_s(\mathbf{f}) = \left[\mathbf{f}^\top\ \mathbf{f}^\top\ \mathbf{0}^\top\right]^\top \quad \text{and} \quad \varphi_t(\mathbf{f}) = \left[\mathbf{f}^\top\ \mathbf{0}^\top\ \mathbf{f}^\top\right]^\top, \qquad (1)

where 0 indicates an F-dimensional vector of zeros. A linear SVM is then trained on the union of
the feature-augmented source and target examples (using a single hyperparameter). The principle
behind this mapping is that the SVM trained in the feature-augmented space has the ability to distinguish features having common behavior in the two domains (associated to the first F SVM weights)
from features having different properties in the two domains.
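The two augmentation maps of Eq. (1) are straightforward to write down; a minimal sketch, with f a 1-D NumPy array of length F:

# Sketch: the feature-augmentation maps of Eq. (1), tripling the dimension.
import numpy as np

def phi_s(f):
    return np.concatenate([f, f, np.zeros_like(f)])

def phi_t(f):
    return np.concatenate([f, np.zeros_like(f), f])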
4.5 Transductive learning: TSVM
The previous methods implement different strategies to adjust the relative importance of the source and the target examples in the learning process. However, all these techniques assume that the
source data is fully and correctly labeled. Unfortunately, in our practical problem this assumption
is violated due to outliers and irrelevant results being present in the images retrieved by keyword
search. To tackle this problem we propose to perform transductive inference on the label of the
source data during the learning: the key-idea is to exploit the availability of strongly-labeled target
training data to simultaneously determine the correct labels of the source training examples and
incorporate this labeling information to improve the classifier. To address this task we employ the
transductive SVM model introduced in [17]. Although this method is traditionally used to infer
the labels of unlabeled data available at learning time, it outputs a proper inductive hypothesis and
therefore can be used also to predict labels of unseen test examples. The problem of learning a
transductive SVM in our context can be formulated as follows:

\min_{\mathbf{w},\, \mathbf{y}^s} \; \frac{1}{2}\|\mathbf{w}\|^2 \;+\; C^t \sum_{i=1}^{N_t} c_i^t\, \ell\!\left(y_i^t\, \mathbf{w}^\top \mathbf{f}_i^t\right) \;+\; \frac{C^s}{n^s} \sum_{j=1}^{n^s} \ell\!\left(y_j^s\, \mathbf{w}^\top \mathbf{f}_j^s\right)

\text{subject to} \quad \frac{1}{n^s} \sum_{j=1}^{n^s} \max\!\left[0, \operatorname{sign}\!\left(\mathbf{w}^\top \mathbf{f}_j^s\right)\right] = \beta \qquad (2)
where ℓ(·) denotes the loss function, w is the vector of SVM weights, y^s contains the labels of the source examples, and the c_i^t are scalar coefficients used to counterbalance the effect of the unequal number of positive and negative examples: we set c_i^t = 1/n_t if y_i^t = 1, and c_i^t = 1/((K − 1)n_t) otherwise. The scalar parameter β defines the fraction of source examples that we expect to be positive and is tuned via cross validation.
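For concreteness, the sketch below evaluates the TSVM objective of Eq. (2) and its balance constraint for a candidate (w, y^s), using the quadratic soft-margin loss adopted in the paper; it is an illustration only, not the primal solver of [27].

# Sketch: evaluating the objective and constraint of Eq. (2).
import numpy as np

def sq_hinge(z):
    return np.maximum(0.0, 1.0 - z) ** 2

def tsvm_objective(w, F_t, y_t, c_t, F_s, y_s, C_t, C_s):
    # F_t: N_t x F target features; F_s: n_s x F source features.
    reg = 0.5 * np.dot(w, w)
    target_term = C_t * np.sum(c_t * sq_hinge(y_t * (F_t @ w)))
    source_term = (C_s / len(F_s)) * np.sum(sq_hinge(y_s * (F_s @ w)))
    return reg + target_term + source_term

def balance_constraint(w, F_s):
    # Fraction of source points predicted positive; must equal beta.
    return np.mean(np.sign(F_s @ w) > 0)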
Note that TSVM solves jointly for the separating hyperplane and the labels of the source examples by trading off maximization of the margin and minimization of the prediction errors on both source and target data. This optimization can be interpreted as implementing the cluster assumption, i.e., the expectation that points in a data cluster have the same label. We solve the optimization problem in Eq. (2) for a quadratic soft-margin loss function ℓ (i.e., ℓ is chosen to be the square of the hinge loss) using the minimization algorithm proposed in [27], which computes an efficient primal solution using the modified finite Newton method of [18]. This minimization approach is ideally suited to large-scale sparse data sets such as ours (about 70% of our features are zero). We used the same values of the hyperparameters (C^t, C^s, and β) for all classes k = 1, ..., K and selected them by minimizing the multiclass cross-validation error. We also tried letting β vary for each individual class, but that led to slightly inferior results, possibly due to overfitting.

Figure 2: Recognition accuracy obtained with n_s = 300 Web photos and a varying number of Caltech256 target training examples.

Figure 3: Manual annotation saving: the plot shows, for a varying number of labeled examples given to TSVM, the number of additional labeled images that would be needed by SVM_t to achieve the same accuracy.
5 Experimental results
We now present the experimental results. Figure 2 shows the accuracy achieved by the different algorithms when using n_s = 300 and a varying number of training target examples (n_t). The accuracy is measured as the average of the mean recognition rate per class, using m_t = 25 test examples for each class. The best accuracy is achieved by the domain adaptation methods TSVM and DWSVM, which produce significant improvements over the SVM trained using only target examples (SVM_t), particularly for small values of n_t. For n_t = 5, TSVM yields a 65% improvement over the best published results on this benchmark (for the same number of examples, an accuracy of 16.7% is reported in [13]). Our method achieves this performance by analyzing additional images, the Internet photos, but since these are collected automatically and do not require any human supervision, the gain we achieve is effectively "human-cost free". It is interesting to note that while using solely source training images yields very low accuracy (14.5% for SVM_s), adding even just a single labeled target image produces a significant improvement (TSVM achieves 18.5% accuracy with n_t = 1, and 27.1% with n_t = 5): this indicates that the method can indeed adapt the classifier to work effectively on the target domain given a small amount of strongly-labeled data. It is also interesting to note that while TSVM implements a form of outlier rejection, as it solves for the labels of the source examples, DWSVM assumes that all source images in D_k^s are positive examples for class k. Yet, DWSVM achieves results similar to those of TSVM: this suggests that domain adaptation, rather than outlier rejection, is the key factor contributing to the improvement with respect to the baselines.
By analyzing the performance of the baselines in Figure 2, we observe that training exclusively with Web images (SVM_s) yields much lower accuracy than using strongly-labeled data (SVM_t): this is consistent with prior work [9, 29]. Furthermore, the poor accuracy of SVM_{s∪t} compared to SVM_t suggests that naïvely adding a large number of source examples to the target training set without consideration of the domain differences not only does not help but actually worsens the recognition. Figure 3 illustrates the significant manual annotation saving produced by our approach: the x-axis is the number of target labeled images provided to TSVM, while the y-axis shows the number of additional labeled examples that would be needed by SVM_t to achieve the same accuracy.
Figure 4: Classification accuracy of the different methods using n_t = 10 target training images and a varying number of source examples.

Figure 5: Training time: time needed to learn a multiclass classifier for Caltech256 using TSVM.
The setting n_s = 300 in the results above was chosen by studying the recognition accuracy as a function of the number of source examples: we carried out an experiment where we fixed the number n_t of target training examples for each category to an intermediate value (n_t = 10), and varied the number n_s of top image results used as source training examples for each class. Figure 4 summarizes the results. We notice that the performance of the SVM trained only on source images (SVM_s) peaks at n_s = 100 and decreases monotonically after this value. This result can be explained by observing that image search engines provide images sorted according to estimated relevancy with respect to the keyword. It is conceivable to assume that images far down in the ranking list will often tend to be outliers, which may lead to degradation of recognition, particularly for non-robust models. Despite this, we see that the domain adaptation methods TSVM and DWSVM exhibit a monotonically non-decreasing accuracy as n_s grows: this indicates that these methods are highly robust to outliers and can make effective use of source data even when increasing n_s causes a likely decrease of the fraction of inliers and relevant results. Contrast these robust performances with the accuracy of SVM_{s∪t}, which grows as we begin adding source examples but then decays rapidly after n_s = 10 and approaches the poor recognition of SVM_s for large values of n_s.
Our approach compares very favorably with competing algorithms also in terms of computational complexity: training TSVM (without cross validation) on Caltech256 with n_t = 5 and n_s = 300 takes 84 minutes on an AMD Opteron Processor 280 (2.4 GHz); training the multiclass method of [13] using 5 labeled examples per class takes about 23 hours on the same machine (for fairness of comparison, we excluded cross validation even for this method). A detailed analysis of training time as a function of the number of labeled training examples is reported in Figure 5. Evaluation of our model on a test example takes 0.18 ms, while the method of [13] requires 37 ms.
6 Discussion and future work
In this work we have investigated the application of domain adaptation methods to object categorization using Web photos as source data. Our analysis indicates that, while object classifiers learned exclusively from Web data are inferior to fully-supervised models, the use of domain adaptation methods to combine Web photos with small amounts of strongly-labeled data leads to state-of-the-art results. The proposed strategy should be particularly useful in scenarios where labeled data is scarce or expensive to acquire. Future work will include application of our approach to combining data from multiple source domains (e.g., images obtained from different search engines or photo sharing sites) and different media (e.g., text and video). Additional material, including software and our source training data, may be obtained from [1].
Acknowledgments
We are grateful to Andrew Fitzgibbon and Martin Szummer for discussion. We thank Vikas Sindhwani for providing code. This research was funded in part by NSF CAREER award IIS-0952943.
References
[1] http://vlg.cs.dartmouth.edu/projects/domainadapt.
[2] T. L. Berg and D. A. Forsyth. Animals on the web. In CVPR, pages 1463–1470, 2006.
[3] I. Biederman. Recognition-by-components: A theory of human image understanding. Psychological Review, 94(2):115–147, 1987.
[4] J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman. Learning bounds for domain adaptation. In NIPS, 2007.
[5] H. Daumé III. Frustratingly easy domain adaptation. In ACL, 2007.
[6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[7] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2010 (VOC2010) Results.
[8] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell., 28(4):594–611, 2006.
[9] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from Google's image search. In ICCV, pages 1816–1823, 2005.
[10] R. Fergus, P. Perona, and A. Zisserman. A visual category filter for Google images. In ECCV, 2004.
[11] R. Fergus, Y. Weiss, and A. Torralba. Semi-supervised learning in gigantic image collections. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, NIPS 22, 2009.
[12] J. R. Finkel and C. D. Manning. Hierarchical Bayesian domain adaptation. In Proceedings of the North American Association of Computational Linguistics (NAACL 2009), 2009.
[13] P. V. Gehler and S. Nowozin. On feature combination for multiclass object classification. In IEEE International Conference on Computer Vision (ICCV), 2009.
[14] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology, 2007.
[15] A. Holub, M. Welling, and P. Perona. Exploiting unlabelled data for hybrid object classification. In NIPS, Interclass Transfer Workshop, 2005.
[16] J. Huang, A. J. Smola, A. Gretton, K. M. Borgwardt, and B. Schölkopf. Correcting sample selection bias by unlabeled data. In NIPS, pages 601–608, 2006.
[17] T. Joachims. Transductive inference for text classification using support vector machines. In ICML, pages 200–209, 1999.
[18] S. S. Keerthi and D. DeCoste. A modified finite Newton method for fast solution of large scale linear SVMs. Journal of Machine Learning Research, 6:341–361, 2005.
[19] C. Leistner, H. Grabner, and H. Bischof. Semi-supervised boosting using visual similarity learning. In CVPR, 2008.
[20] L. Li and L. Fei-Fei. OPTIMOL: Automatic online picture collection via incremental model learning. Intl. Jrnl. of Computer Vision, 88(2):147–168, 2010.
[21] E. G. Miller, N. E. Matsakis, and P. A. Viola. Learning from one example through shared densities on transforms. In CVPR, 2000.
[22] A. Quattoni, M. Collins, and T. Darrell. Transfer learning for image classification with sparse prototype representations. In CVPR, 2008.
[23] B. C. Russell, A. B. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: A database and web-based tool for image annotation. International Journal of Computer Vision, 77(1-3):157–173, 2008.
[24] K. Saenko, B. Kulis, M. Fritz, and T. Darrell. Adapting visual category models to new domains. In European Conference on Computer Vision (ECCV), Sept. 2010.
[25] F. Schroff, A. Criminisi, and A. Zisserman. Harvesting image databases from the web. In ICCV, 2007.
[26] G. Schweikert, C. Widmer, B. Schölkopf, and G. Rätsch. An empirical analysis of domain adaptation algorithms for genomic sequence analysis. In NIPS, pages 1433–1440, 2008.
[27] V. Sindhwani and S. S. Keerthi. Large scale semi-supervised linear SVMs. In SIGIR, pages 477–484, 2006.
[28] L. Torresani, M. Szummer, and A. Fitzgibbon. Efficient object category recognition using classemes. In European Conference on Computer Vision (ECCV), pages 776–789, Sept. 2010.
[29] S. Vijayanarasimhan and K. Grauman. Keywords to visual categories: Multiple-instance learning for weakly supervised object categorization. In CVPR, 2008.
[30] L. von Ahn. Games with a purpose. IEEE Computer, 39(6):92–94, 2006.
3,385 | 4,065 | Divisive Normalization: Justification and
Effectiveness as Efficient Coding Transform
Siwei Lyu ?
Computer Science Department
University at Albany, State University of New York
Albany, NY 12222, USA
Abstract
Divisive normalization (DN) has been advocated as an effective nonlinear efficient coding transform for natural sensory signals with applications in biology
and engineering. In this work, we aim to establish a connection between the DN
transform and the statistical properties of natural sensory signals. Our analysis
is based on the use of multivariate t model to capture some important statistical
properties of natural sensory signals. The multivariate t model justifies DN as
an approximation to the transform that completely eliminates its statistical dependency. Furthermore, using the multivariate t model and measuring statistical
dependency with multi-information, we can precisely quantify the statistical dependency that is reduced by the DN transform. We compare this with the actual
performance of the DN transform in reducing statistical dependencies of natural
sensory signals. Our theoretical analysis and quantitative evaluations confirm
DN as an effective efficient coding transform for natural sensory signals. On the
other hand, we also observe a previously unreported phenomenon that DN may
increase statistical dependencies when the size of pooling is small.
1 Introduction
It has been widely accepted that biological sensory systems are adapted to match the statistical
properties of the signals in the natural environments. Among different ways such may be achieved,
the efficient coding hypothesis [2, 3] asserts that a sensory system might be understood as a transform
that reduces redundancies in its responses to the input sensory stimuli (e.g., odor, sounds, and time
varying images). Such signal transforms, termed as efficient coding transforms, are also important
to applications in engineering ? with the reduced statistical dependencies, sensory signals can be
more efficiently stored, transmitted and processed. Over the years, many works, most notably the
ICA methodology, have aimed to find linear efficient coding transforms for natural sensory signals
[20, 4, 15]. These efforts were widely regarded as a confirmation of the efficient coding hypothesis,
as they lead to localized linear basis that are similar to receptive fields found physiologically in the
cortex. Nonetheless, it has also been noted that there are statistical dependencies in natural images
or sounds, to which linear transforms are not effective to reduce or eliminate [5, 17]. This motivates
the study of nonlinear efficient coding transforms.
Divisive normalization (DN) is perhaps the most simple nonlinear efficient coding transform that
has been extensively studied recently. The output of the DN transform is obtained from the response
of a linear basis function divided by the square root of a biased and weighted sum of the squared
responses of neighboring basis functions of adjacent spatial locations, orientations and scales. In
biology, initial interests in DN focused on its ability to model dynamic gain control in retina [24]
and the "masking" behavior in perception [11, 33], and to fit neural recordings from the mammalian
* This work is supported by an NSF CAREER Award (IIS-0953373).
Figure 1: Statistical properties of natural images in a band-pass domain and their representations with the multivariate t model. (a): Marginal densities in the log domain (images: red solid curve, t model: blue dashed curve). (b): Contour plot of the joint density, p(x_1, x_2), of adjacent pairs of band-pass filter responses. (c): Contour plot of the optimally fitted multivariate t model of p(x_1, x_2). (d): Each column of the image corresponds to a conditional density p(x_1|x_2) for a different value of x_2. (e): The three red solid curves correspond to E(x_1|x_2) and E(x_1|x_2) ± std(x_1|x_2). Blue dashed curves correspond to E(x_1|x_2) and E(x_1|x_2) ± std(x_1|x_2) from the optimally fitted multivariate t model of p(x_1, x_2).
visual cortex [12, 19]. In image processing, nonlinear image representations based on DN have been
applied to image compression and contrast enhancement [18, 16] showing improved performance
over linear representations.
As an important nonlinear transform with such a ubiquity, it has been of great interest to find the
underlying principle from which DN originates. Based on empirical observations, Schwartz and
Simoncelli [23] suggested that DN can reduce statistical dependencies in natural sensory signals
and is thus justified by the efficient coding hypothesis. More recent works on statistical models
and efficient coding transforms of natural sensory signals (e.g., [17, 26]) have also hinted that DN
may be an approximation to the optimal efficient coding transform. However, this claim needs to
be rigorously validated based on statistical properties of natural sensory signals, and quantitatively
evaluated with DN?s performance in reducing statistical dependencies of natural sensory signals.
In this work, we aim to establish a connection between the DN transform and the statistical properties of natural sensory signals. Our analysis is based on the use of multivariate t model to capture
some important statistical properties of natural sensory signals. The multivariate t model justifies DN
as an approximation to the transform that completely eliminates its statistical dependency. Furthermore, using the multivariate t model and measuring statistical dependency with multi-information,
we can precisely quantify the statistical dependency that is reduced by the DN transform. We compare this with the actual performance of the DN transform in reducing statistical dependencies of
natural sensory signals. Our theoretical analysis and quantitative evaluations confirm DN as an effective efficient coding transform for natural sensory signals. On the other hand, we also observe a
previously unreported phenomenon that DN may increase statistical dependencies when the size of
pooling is small.
2 Statistical Properties of Natural Sensory Signals and Multivariate t Model
Sensory signals in natural environments are highly structured and non-random. Their regularities
exhibit as statistical properties that distinguish them from the rest of the ensemble of all possible
signals. Over the years, many distinct statistical properties of natural sensory signals have been
observed. Particularly, in band-pass filtered domains where local means are removed, three statistical
characteristics have been commonly observed across different signal ensembles¹:
- symmetric and sparse non-Gaussian marginal distributions with high kurtosis [7, 10], Fig.1(a);
- joint densities of neighboring responses that have elliptically symmetric (spherically symmetric
after whitening) contours of equal probability [34, 32]; Fig.1(b);
- conditional distributions of one response given neighboring responses that exhibit a "bow-tie" shape when visualized as an image [25, 6], Fig.1(d).
It has been noted that higher order statistical dependencies in the joint and conditional densities
(Fig.1 (b) and (d)) cannot be effectively reduced with linear transforms [17].
¹ The results in Fig.1 are obtained with spatial neighbors in images. Similar behaviors have also been observed for orientation and scale neighbors [6], as well as for other types of sensory signals such as audio [23, 17].
A compact mathematical form that can capture all three aforementioned statistical properties is the multivariate Student's t model. Formally, the probability density function of a d-dimensional t random vector x is defined as²:

p_t(\mathbf{x}; \alpha, \beta) = \frac{\beta^{\alpha}\, \Gamma(\alpha + d/2)}{\Gamma(\alpha)\sqrt{\det(\pi\beta\Sigma)}} \left(\beta + \mathbf{x}^\top \Sigma^{-1} \mathbf{x}\right)^{-\alpha - d/2}, \qquad (1)
where β > 0 is the scale parameter and α > 1 is the shape parameter, Σ is a symmetric and positive definite matrix, and Γ(·) is the Gamma function. From data of neighboring responses of natural sensory signals in the band-pass domain, the parameters (α, β) of the multivariate t model can be obtained numerically with maximum likelihood, the details of which are given in the supplementary material. The joint density of the fitted multivariate t model has elliptically symmetric level curves of equal probability, and its marginals are 1D Student's t densities that are non-Gaussian and kurtotic [14], all resembling those of the natural sensory signals, Fig.1(a) and (c). It is due to this heavy-tail property that the multivariate t model has been used as a model of natural images [35, 22].
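As a sketch, the log-density of Eq. (1) can be evaluated directly from the reconstructed formula above; alpha, beta, and Sigma are the shape, scale, and covariance parameters, and this is an illustration rather than the maximum-likelihood fitting procedure of the supplementary material.

# Sketch: log-density of the multivariate t model in Eq. (1).
import numpy as np
from scipy.special import gammaln

def log_pt(x, alpha, beta, Sigma):
    d = len(x)
    q = x @ np.linalg.solve(Sigma, x)                   # x' Sigma^{-1} x
    _, logdet = np.linalg.slogdet(np.pi * beta * Sigma)
    return (alpha * np.log(beta) + gammaln(alpha + d / 2) - gammaln(alpha)
            - 0.5 * logdet - (alpha + d / 2) * np.log(beta + q))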
Furthermore, we provide another property of the multivariate t model that captures the bow-tie
dependency exhibited by the conditional distributions of natural sensory signals.
Lemma 1 Denote x_{\setminus i} as the vector formed by excluding the i-th element from x. For a d-dimensional isotropic t vector x (i.e., Σ = I), we have

E(x_i \,|\, \mathbf{x}_{\setminus i}) = 0, \quad \text{and} \quad \operatorname{var}(x_i \,|\, \mathbf{x}_{\setminus i}) = \frac{\beta + \mathbf{x}_{\setminus i}^\top \mathbf{x}_{\setminus i}}{2\alpha + d - 3},

where E(·) and var(·) denote expectation and variance, respectively.
This is proved in the supplementary material. Lemma 1 can be extended to anisotropic t models by incorporating a non-diagonal Σ using a linear "un-whitening" procedure, the result of which is demonstrated in Fig.1(e). The three red solid curves correspond to E(x_i|x_{\setminus i}) and E(x_i|x_{\setminus i}) ± √var(x_i|x_{\setminus i}) for pairs of adjacent band-pass filtered responses of a natural image, and the three blue dashed curves are the same quantities of the optimally fitted t model. The bow-tie phenomenon comes directly from the dependencies in the conditional variances, which is precisely captured by the fitted multivariate t model³.
3 DN as Efficient Coding Transform for Multivariate t Model
Using the multivariate t model as a compact representation of statistical properties of natural sensory
signals in linear band-pass domains, our aim is to find an efficient coding transform that can effectively reduce its statistical dependencies. This is based on an important property of the multivariate
t model: it is a special case of the Gaussian scale mixture (GSM) [1]. More specifically, the joint density p_t(x; α, β) can be written as an infinite mixture of Gaussians with zero mean and covariance matrix Σ, as

p_t(\mathbf{x}; \alpha, \beta) = \int_0^{\infty} \frac{1}{\sqrt{\det(2\pi z \Sigma)}} \exp\!\left(-\frac{\mathbf{x}^\top \Sigma^{-1} \mathbf{x}}{2z}\right) p_{\gamma^{-1}}(z; \alpha, \beta)\, dz,

where p_{\gamma^{-1}}(z) = \frac{\beta^{\alpha}}{2^{\alpha}\,\Gamma(\alpha)}\, z^{-\alpha - 1} \exp\!\left(-\frac{\beta}{2z}\right) is the inverse Gamma distribution. Equivalently, for a d-dimensional t vector x, we can decompose it into the product of two independent variables u and z, as x = u√z, where u is a d-dimensional Gaussian vector with zero mean and covariance matrix Σ, and z > 0 is a scalar variable following an inverse Gamma law with parameters (α, β). To simplify the discussion, hereafter we will assume that the signals have been whitened so that there are no second-order dependencies in x. Correspondingly, the Gaussian vector u has covariance Σ = I.
According to the GSM equivalence of the multivariate t model, we have u = x/√z. As an isotropic Gaussian vector has mutually independent components, there is no statistical dependency among the elements of u. In other words, x/√z amounts to a transform that completely eliminates all statistical dependencies in x. Unfortunately, this optimal efficient coding transform is not realizable, because z is a latent variable that we do not have direct access to.
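The GSM decomposition suggests a simple simulation, sketched below: draw z from the inverse Gamma law, u from an isotropic Gaussian, and form x = u√z; dividing x by the (here known) latent √z recovers the independent Gaussian components. The parameter values are arbitrary illustrations.

# Sketch: sampling the multivariate t model through its GSM form.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, d, n = 1.5, 1.0, 8, 10000
z = 1.0 / rng.gamma(shape=alpha, scale=2.0 / beta, size=n)   # inverse Gamma
u = rng.standard_normal((n, d))                              # N(0, I)
x = u * np.sqrt(z)[:, None]                                  # t-distributed rows
u_rec = x / np.sqrt(z)[:, None]                              # independent again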
² Eq.(1) can be shown to be equivalent to the standard definition of the multivariate t density in [14].
³ The dependencies illustrated are nonlinear because we use conditional standard deviations.

To overcome this difficulty, we can use an estimator ẑ of z, based on the visible data vector x, to approximate the true value of z, and obtain an approximation to the optimal efficient coding transform as x/√ẑ. For the multivariate t model, it turns out that the two most common choices for the estimators of z, namely, the maximum a posteriori (MAP) and the Bayesian least squares (BLS) estimators, and a third estimator all have similar forms, a result formally stated in the following lemma (a proof is given in the supplementary material).

Lemma 2 For the d-dimensional isotropic t vector x with parameters (α, β), we consider three estimators of z: (i) the MAP estimator, ẑ₁ = argmax_z p(z|x), which is the mode of the posterior density; (ii) the BLS estimator, which is the mean of the posterior density, ẑ₂ = E_{z|x}(z|x); and (iii) the inverse of the conditional mean of 1/z, ẑ₃ = (E_{z|x}(1/z|x))^{-1}. These are:

\hat{z}_1 = \frac{\beta + \mathbf{x}^\top \mathbf{x}}{2\alpha + d + 2}, \qquad \hat{z}_2 = \frac{\beta + \mathbf{x}^\top \mathbf{x}}{2\alpha + d - 2}, \qquad \hat{z}_3 = \left(E_{z|\mathbf{x}}(1/z \,|\, \mathbf{x})\right)^{-1} = \frac{\beta + \mathbf{x}^\top \mathbf{x}}{2\alpha + d}.

If we drop the irrelevant scaling factors from each of these estimators and plug them into x/√ẑ, we obtain a nonlinear transform of x:

\mathbf{y} = \varphi(\mathbf{x}), \quad \text{where} \quad \varphi(\mathbf{x}) \triangleq \frac{\mathbf{x}}{\sqrt{\beta + \mathbf{x}^\top \mathbf{x}}} = \frac{\|\mathbf{x}\|}{\sqrt{\beta + \|\mathbf{x}\|^2}} \cdot \frac{\mathbf{x}}{\|\mathbf{x}\|}. \qquad (2)

This is the standard form of divisive normalization that will be used throughout this paper. Lemma 2 shows that the DN transform is justified as an approximation to the optimal efficient coding transform given a multivariate t model of natural sensory signals. Our result also shows that the DN transform approximately "gaussianizes" the input data, a phenomenon that has been empirically observed by several authors (e.g., [6, 23]).

3.1 Properties of DN Transform
The standard DN transform given by Eq.(2) has some nice and important properties. Particularly,
the following Lemma shows that it is invertible and its Jacobian determinant has closed form.
Lemma 3 For the standard DN transform given in Eq. (2), its inverse for y ∈ R^d with ‖y‖ < 1 is

\varphi^{-1}(\mathbf{y}) = \frac{\sqrt{\beta}\, \mathbf{y}}{\sqrt{1 - \|\mathbf{y}\|^2}} = \frac{\sqrt{\beta}\, \|\mathbf{y}\|}{\sqrt{1 - \|\mathbf{y}\|^2}} \cdot \frac{\mathbf{y}}{\|\mathbf{y}\|}.

The determinant of its Jacobian matrix is also in closed form, which is given by

\det \frac{\partial \varphi(\mathbf{x})}{\partial \mathbf{x}} = \beta\, \left(\beta + \mathbf{x}^\top \mathbf{x}\right)^{-(d/2 + 1)}.
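A short numerical sketch of Eq. (2) and Lemma 3: applying the DN transform, inverting it, and checking the closed-form Jacobian determinant against the explicit Jacobian matrix (test values are arbitrary).

# Sketch: DN transform, its inverse, and the Jacobian determinant.
import numpy as np

def dn(x, beta):
    return x / np.sqrt(beta + x @ x)

def dn_inverse(y, beta):
    return np.sqrt(beta) * y / np.sqrt(1.0 - y @ y)     # requires ||y|| < 1

beta, x = 0.5, np.array([0.3, -1.2, 0.7])
y = dn(x, beta)
assert np.allclose(dn_inverse(y, beta), x)

d = len(x)
J = (np.eye(d) - np.outer(x, x) / (beta + x @ x)) / np.sqrt(beta + x @ x)
assert np.isclose(np.linalg.det(J), beta * (beta + x @ x) ** -(d / 2 + 1))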
Further, the DN transform of a multivariate t vector also has a closed form density function.
Lemma 4 If x ∈ R^d has an isotropic t density with parameters (α, β), then its DN transform, y = φ(x), follows an isotropic r model, whose probability density function is

p_r(\mathbf{y}) = \begin{cases} \dfrac{\Gamma(\alpha + d/2)}{\pi^{d/2}\, \Gamma(\alpha)} \left(1 - \mathbf{y}^\top \mathbf{y}\right)^{\alpha - 1} & \|\mathbf{y}\| < 1 \\ 0 & \|\mathbf{y}\| \ge 1 \end{cases} \qquad (3)
Lemma 4 suggests a duality between the t and r models with regard to the DN transform. Proofs of Lemma 3 and Lemma 4 can be found in [8]. For completeness, we also provide our proofs in the supplementary material.
3.2 Equivalent Forms of DN Transform
In the current literature, the DN transform has been defined in many different forms other than
Eq.(2). However, if we are merely interested in their ability to reduce statistical dependencies, many
of the different forms of the DN transform based on the ℓ₂ norm of the input vector x become equivalent.
To be more specific, we quantify the statistical dependency of a random vector x using the multi-information (MI) [27], defined as

I(\mathbf{x}) = \int_{\mathbf{x}} p(\mathbf{x}) \log\!\left( p(\mathbf{x}) \Big/ \prod_{k=1}^{d} p(x_k) \right) d\mathbf{x} = \sum_{k=1}^{d} H(x_k) - H(\mathbf{x}), \qquad (4)

where H(·) denotes the Shannon differential entropy. MI is non-negative, and is zero if and only if the components of x are mutually independent. MI is a generalization of mutual information, and the two become identical when measuring dependency for two-dimensional x. Furthermore, MI is invariant to any operation that operates on individual components of x (e.g., element-wise rescaling), since such operations produce an equal effect on the two terms \sum_{k=1}^{d} H(x_k) and H(x) (see [27]).
Now consider four different definitions of the DN transform, expressed in terms of the individual elements of the output vector as

y_i = \frac{x_i}{\sqrt{\beta + \mathbf{x}^\top \mathbf{x}}}, \quad s_i = \frac{x_i^2}{\beta + \mathbf{x}^\top \mathbf{x}}, \quad v_i = \frac{x_i}{\sqrt{\beta + \mathbf{x}_{\setminus i}^\top \mathbf{x}_{\setminus i}}}, \quad t_i = \frac{x_i^2}{\beta + \mathbf{x}_{\setminus i}^\top \mathbf{x}_{\setminus i}}.

Here x_{\setminus i} denotes the vector formed from x without its i-th component. Specifically, y_i is the output of Eq. (2). s_i is the output of the original DN transform used by Heeger [12]. v_i corresponds to the DN transform used by Schwartz and Simoncelli [23]; the main difference with Eq. (2) is that the denominator is formed without the element x_i. Last, t_i is the output of the DN transform used in [31]. These forms of DN⁴ are related to each other by element-wise operations, as we have

s_i = y_i^2, \quad v_i = \frac{x_i}{\sqrt{\beta + \mathbf{x}^\top \mathbf{x} - x_i^2}} = \frac{y_i}{\sqrt{1 - y_i^2}}, \quad \text{and} \quad t_i = v_i^2 = \frac{y_i^2}{1 - y_i^2}.

As element-wise operations do not affect MI, the transforms s, v, and t are equivalent to the standard form in terms of reducing statistical dependencies. Therefore, the subsequent analysis applies to all these equivalent forms of the DN transform.
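The element-wise relations among the four variants can be verified numerically; a minimal sketch with arbitrary test values:

# Sketch: numerical check of the relations among the four DN variants.
import numpy as np

beta, x = 0.5, np.array([0.3, -1.2, 0.7, 2.1])
den = beta + x @ x
y = x / np.sqrt(den)                      # standard form, Eq. (2)
s = x**2 / den                            # Heeger [12]
v = x / np.sqrt(den - x**2)               # Schwartz & Simoncelli [23]
t = x**2 / (den - x**2)                   # form used in [31]

assert np.allclose(s, y**2)
assert np.allclose(v, y / np.sqrt(1 - y**2))
assert np.allclose(t, y**2 / (1 - y**2))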
4 Quantifying DN Transform as Efficient Coding Transform
We have set up a relation between the DN transform and the statistical properties of natural sensory signals through the multivariate t model. However, its effectiveness as an efficient coding transform for natural sensory signals needs yet to be quantified, for two reasons. First, DN is only an approximation to the optimal transform that eliminates statistical dependencies in a multivariate t model. Further, the multivariate t model itself is a surrogate for the true statistical model of natural sensory signals. It is our goal in this section to quantify the effectiveness of the DN transform in reducing statistical dependencies. We start with a study of applying DN to the multivariate t model, whose closed-form density permits a theoretical analysis of DN's performance in dependency reduction. We then apply DN to real natural sensory signal data, and compare its effectiveness as an efficient coding transform with the theoretical prediction obtained with the multivariate t model.
4.1 Results with Multivariate t Model
For simplicity, we consider isotropic models whose second order dependencies are removed with
whitening. The density functions of multivariate t and r models lead to closed form solutions for
MI, as formally stated in the following lemma (proved in the supplementary material).
Lemma 5 The MI of a d-dimensional isotropic t vector x is
I(x) = (d ? 1) log ?(?) ? d log ?(? + 1/2) + log ?(? + d/2) ? (d ? 1)??(?)
+ d(? + 1/2)?(? + 1/2) ? (? + d/2)?(? + d/2).
Similarly, the MI of a d-dimensional r vector y = ?(x), which is the DN transform of x, is
I(y) = d log ?(? + (d ? 1)/2) ? log ?(?) ? (d ? 1) log ?(? + d/2) + (? ? 1)?(?)
+ (d ? 1)(? + d/2 ? 1)?(? + d/2) ? d(? + (d ? 3)/2)?(? + (d ? 1)/2).
d
In both cases, ?(?) denotes the Digamma function which is defined as ?(?) = d?
log ?(?).
Note that β does not appear in these formulas, as it can be removed by re-scaling the data and has no effect on MI. Using Lemma 5, for a d-dimensional t vector, if we have I(x) > I(y), the DN transform reduces its statistical dependency; conversely, if I(x) < I(y), it increases dependency. As both the Gamma function and the Digamma function can be computed to high numerical precision, we can evaluate ΔI = I(x) − I(y) corresponding to different shape parameters α and data dimensionalities d. The left panel of Fig.2 illustrates the surface of ΔI/I(x), which measures the relative change in MI between an isotropic t vector and its DN transform. The right panel of Fig.2 shows one-dimensional curves of ΔI/I(x) corresponding to different d values with varying α.
⁴ There are usually weights on each x_i² in the denominator, but re-scaling the data can remove the different weights and leads to no change in terms of MI.

Figure 2: Left: surface plot of [I(x) − I(φ(x))]/I(x), measuring MI changes after applying the DN transform φ(·) to an isotropic t vector x; I(x) and I(φ(x)) are computed numerically using Lemma 5. The two coordinates correspond to the data dimensionality (d) and the shape parameter of the multivariate t model (α). Right: one-dimensional curves of ΔI/I(x) corresponding to different d values with varying α.

These plots illustrate several interesting aspects of the DN transform as an approximate efficient coding transform for multivariate t models. First, with data dimensionality d > 4, using DN
leads to a significant reduction of statistical dependency, but such reductions become weaker as α increases. On the other hand, our experiment also showed an unexpected behavior that has not been reported before: for d ≤ 4, the change of MI caused by the use of DN is negative, i.e., DN increases statistical dependency in such cases. Therefore, though effective for high dimensional models, DN is not an efficient coding transform for low dimensional multivariate t models.
4.2 Results with Natural Sensory Signals
As mentioned previously, the multivariate t model is an approximation to the source model of natural
sensory signals. Therefore, we would like to compare our analysis in the previous section with the
actual dependency reduction performance of the DN transform on real natural sensory signal data.
4.2.1 Non-parametric Estimation of MI Changes
To this end, we need to evaluate MI changes after applying DN without relying on any specific parametric density model. This has been achieved previously for two-dimensional data using straightforward nonparametric estimation of MI based on histograms [28]. However, the estimations obtained this way are prone to strong bias due to the binning scheme used in generating the histograms [21], and cannot be generalized to higher data dimensions due to the "curse of dimensionality", as the number of bins increases exponentially with the data dimension.
Instead, in this work, we directly compute the difference of MI after DN is applied without explicitly binning data. To see how this is possible, we first express the computation of the MI change as

I(\mathbf{x}) - I(\mathbf{y}) = \sum_{k=1}^{d} H(x_k) - \sum_{k=1}^{d} H(y_k) - H(\mathbf{x}) + H(\mathbf{y}). \qquad (5)

Next, the entropy of y = φ(x) is related to the entropy of x, as H(\mathbf{y}) = H(\mathbf{x}) + \int_{\mathbf{x}} p(\mathbf{x}) \log \det\frac{\partial \varphi(\mathbf{x})}{\partial \mathbf{x}}\, d\mathbf{x} [9]. For DN, \det\frac{\partial \varphi(\mathbf{x})}{\partial \mathbf{x}} has closed form (Lemma 3), and replacing it in Eq.(5) yields

I(\mathbf{y}) - I(\mathbf{x}) = \sum_{k=1}^{d} H(y_k) - \sum_{k=1}^{d} H(x_k) - \log\beta + \left(\frac{d}{2} + 1\right) \int_{\mathbf{x}} p(\mathbf{x}) \log\!\left(\beta + \mathbf{x}^\top \mathbf{x}\right) d\mathbf{x}. \qquad (6)
Once we determine β, the last term in Eq.(6) can be approximated with the average of the function log(β + xᵀx) over the input data. The first two terms require direct estimation of the differential entropies of scalar random variables, H(y_k) and H(x_k). For more reliable estimation, we use the nonparametric "bin-less" m-spacing estimator [30]. As a simple sanity check, Fig.3(a) shows the theoretical evaluation of (I(y) − I(x))/d obtained with Lemma 5 for isotropic t models with α = 1.10 and varying d (blue solid curve). The red dashed curve shows the same quantity computed using Eq.(6) with 10,000 random samples drawn from the same multivariate t models. The small difference between the two curves in this plot confirms the quality of the non-parametric estimation.
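The sketch below illustrates this estimation pipeline under stated assumptions: a simple, uncorrected m-spacing entropy estimate (the estimator of [30] adds bias corrections) combined with the sample average of the last term of Eq.(6).

# Sketch: estimating I(y) - I(x) of Eq. (6) from whitened data X (n x d).
import numpy as np

def m_spacing_entropy(samples, m=None):
    # Simple uncorrected m-spacing estimate of differential entropy.
    s = np.sort(np.asarray(samples, dtype=float))
    n = len(s)
    m = m or int(round(np.sqrt(n)))
    spacings = np.maximum(s[m:] - s[:-m], 1e-12)   # guard against ties
    return np.mean(np.log((n + 1) / m * spacings))

def mi_change(X, beta):
    d = X.shape[1]
    Y = X / np.sqrt(beta + np.sum(X**2, axis=1, keepdims=True))
    marg = sum(m_spacing_entropy(Y[:, k]) - m_spacing_entropy(X[:, k])
               for k in range(d))
    return (marg - np.log(beta)
            + (d / 2 + 1) * np.mean(np.log(beta + np.sum(X**2, axis=1))))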
Figure 3: (a) Comparison of the theoretical prediction of MI reduction for an isotropic t model with α = 1.1 and different dimensions (blue solid curve) with the non-parametric estimation using Eq.(6) and the m-spacing estimator [30] on 10,000 random samples drawn from the corresponding multivariate t models (red dashed curve). (b) Top row: the mean and standard deviation of the estimated shape parameter α for natural audio data of different local window sizes. Bottom row: comparison of MI changes (ΔI/d); the blue solid curve corresponds to the prediction with Lemma 5, the red dashed curve is the non-parametric estimation of Eq.(6). (c) Same results as (b) for natural image data with different local block sizes.
4.2.2
Experimental Evaluation and Comparison
We next experiment with natural audio and image data. For audio, we used 20 sound clips of
animal vocalization and recordings in natural environments, which have a sampling frequency of
44.1 kHz and typical length of 15 ? 20 seconds. These sound clips were filtered with a bandpass
gamma-tone filter of 3 kHz center frequency [13]. For image data, we used eight images in the
van Hateren database [29]. These images have contents of natural scenes such as woods and greens
with linearized intensity values. Each image was first cropped to the central 1024 ? 1024 region
and then subject to a log transform. The log pixel intensities are further adjusted to have a zero
mean. We further processed the log transformed pixel intensities by convolving with an isotropic
bandpass filter that captures an annulus of frequencies in the Fourier domain ranging from ?/4 to ?
radians/pixel. Finally, data used in our experiments are obtained by extracting adjacent samples in
localized 1D temporal (for audios) or 2D spatial (for images) windows of different sizes. We further
whiten the data to remove second order dependencies.
With these data, we first fit multivariate t models using maximum likelihood (detailed procedure
given in the supplementary material), from which we compute the theoretical prediction of MI difference using Lemma 5. Shown in the top row of Fig.3 (b) and (c) are the means and standard
deviations of the estimated shape parameters of different sizes of local windows for audio and image data, respectively. These plots suggest two properties of the fitted multivariate t model. First,
the estimated ? values are typically close to one due to the high kurtosis of these signal ensembles.
Second, the shape parameter in general decreases as the data dimension increases.
Using the same data, we obtain the optimal DN transform by searching for optimal ? in Eq.(2) that
maximizes the change in MI given by Eq.(6). However, as entropy is estimated non-parametrically,
we cannot use gradient based optimization for ?. Instead, with a range of possible ? values, we
perform a binary search, at each step of which we evaluate Eq.(6) using the current ? and the nonparametric estimation of entropy based on the data set.
In the bottom rows of Fig.3 (b) (for audios) and (c) (for images), we show MI changes of using
DN on natural sensory data that are predicted by the optimally fitted t model (blue solid curves) and
that obtained with optimized DN parameters using nonparametric estimation of Eq.(6) (red dashed
curve). For robustness, these results are the averages over data sets from the 20 audio signals and
8 images, respectively. In general, changes in statistical dependencies obtained with the optimal
DN transforms are in accordance with those predicted by the multivariate t model. The model7
based predictions also tend to be upper-bounds of the actual DN performance. Some discrepancies
between the two start to show as dimensionality increases, as the dependency reductions achieved
with DN become smaller even though the model-based predictions tend to keep increasing. This
may be caused by the approximation nature of the multivariate t model to natural sensory data. As
such, more complex structures in the natural sensory signals, especially with larger local windows,
cannot be effectively captured by the multivariate t models, which renders DN less effective.
On the other hand, our observation based on the multivariate t model that the DN transform tends to
increase statistical dependency for small pooling size also holds to real data. Indeed, the increment
of MI becomes more severe for d ? 4. On the surface, our finding seems to be in contradiction
with [23], where it was empirically shown that applying an equivalent form of the DN transform as
Eq.(2) (see Section 3.2) over a pair of input neurons can reduce statistical dependencies. However,
one key yet subtle difference is that statistical dependency is defined as the correlations in the conditional variances in [23], i.e., the bow-tie behavior as in Fig.1(d). The observation made in [23]
is then based on the empirical observations that after applying DN transform, such dependencies
in the transformed variables become weaker, while our results show that the statistical dependency
measured by MI in that case actually increases.
5
Conclusion
In this work, based on the use of the multivariate t model of natural sensory signals, we have presented a theoretical analysis showing that DN emerges as an approximate efficient coding transform.
Furthermore, we provide a quantitative analysis of the effectiveness of DN as an efficient coding
transform for the multivariate t model and natural sensory signal data. These analyses confirm the
ability of DN in reducing statistical dependency of natural sensory signals. More interestingly, we
observe a previously unreported result that DN can actually increase statistical dependency when the
size of pooling is small. As a future direction, we would like to extend this study to a generalized
DN transform where the denominator and numerator can have different degrees.
Acknowledgement The author would like to thank Eero Simoncelli for helpful discussions, and the
three anonymous reviewers for their constructive comments.
References
[1] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical
Society. Series B (Methodological), 36(1):99?102, 1974.
[2] F Attneave. Some informational aspects of visual perception. Psych. Rev., 61:183?193, 1954.
[3] H B Barlow. Possible principles underlying the transformation of sensory messages. In W A Rosenblith,
editor, Sensory Communication, pages 217?234. MIT Press, Cambridge, MA, 1961.
[4] A J Bell and T J Sejnowski.
37(23):3327?3338, 1997.
The ?independent components? of natural scenes are edge filters.
[5] Matthias Bethge. Factorial coding of natural images: how effective are linear models in removing higherorder dependencies? J. Opt. Soc. Am. A, 23(6):1253?1268, 2006.
[6] R. W. Buccigrossi and E. P. Simoncelli. Image compression via joint statistical characterization in the
wavelet domain. 8(12):1688?1701, 1999.
[7] P.J. Burt and E.H. Adelson. The Laplacian pyramid as a compact image code. IEEE Transactions on
Communication, 31(4):532?540, 1981.
[8] J. Costa, A. Hero, and C. Vignat. On solutions to multivariate maximum ?-entropy problems. In EMMCVPR, 2003.
[9] T. Cover and J. Thomas. Elements of Information Theory. Wiley-Interscience, 2nd edition, 2006.
[10] D J Field. Relations between the statistics of natural images and the response properties of cortical cells.
4(12):2379?2394, 1987.
[11] J. Foley. Human luminence pattern mechanisims: Masking experimants require a new model. J. of Opt.
Soc. of Amer. A, 11(6):1710?1719, 1994.
[12] D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual neural science, 9:181?198,
1992.
8
[13] P. Johannesma. The pre-response stimulus ensemble of neurons in the cochlear nucleus. In Symposium
on Hearing Theory, pages 58?69, Eindhoven, Holland, 1972.
[14] Samuel Kotz and Saralees Nadarajah. Multivariate t Distributions and Their Applications. Cambridge
University Press, 2004.
[15] M S Lewicki. Efficient coding of natural sounds. Nature Neuroscience, 5(4):356?363, 2002.
[16] S. Lyu and E. P. Simoncelli. Nonlinear image representation using divisive normalization. In IEEE
Conference on Computer Vision and Patten Recognition (CVPR), Anchorage, AK, June 2008.
[17] S Lyu and E P Simoncelli. Nonlinear extraction of ?independent components? of natural images using
radial Gaussianization. Neural Computation, 18(6):1?35, 2009.
[18] J. Malo, I. Epifanio, R. Navarro, and E. P. Simoncelli. Non-linear image representation for efficient
perceptual coding. 15(1):68?80, January 2006.
[19] V. Mante, V. Bonin, and M. Carandini. Functional mechanisms shaping lateral geniculate responses to
artificial and natural stimuli. Neuron, 58:625?638, May 2008.
[20] B A Olshausen and D J Field. Emergence of simple-cell receptive field properties by learning a sparse
code for natural images. Nature, 381:607?609, 1996.
[21] Liam Paninski. Estimation of entropy and mutual information. Neural Comput., 15(6):1191?1253, 2003.
[22] S. Roth and M. Black. Fields of experts: A framework for learning image priors. volume 2, pages
860?867, 2005.
[23] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience,
4(8):819?825, August 2001.
[24] R Shapley and C Enroth-Cugell. Visual adaptation and retinal gain control. Progress in Retinal Research,
3:263?346, 1984.
[25] E P Simoncelli and R W Buccigrossi. Embedded wavelet image compression based on a joint probability
model. In Proc 4th IEEE Int?l Conf on Image Proc, volume I, pages 640?643, Santa Barbara, October
26-29 1997. IEEE Sig Proc Society.
[26] Fabian H. Sinz and Matthias Bethge. The conjoint effect of divisive normalization and orientation selectivity on redundancy reduction. In NIPS. 2009.
[27] M. Studeny and J. Vejnarova. The multiinformation function as a tool for measuring stochastic dependence. In M. I. Jordan, editor, Learning in Graphical Models, pages 261?297. Dordrecht: Kluwer., 1998.
[28] Roberto Valerio and Rafael Navarro. Input?output statistical independence in divisive normalization
models of v1 neurons. Network: Computation in Neural Systems, 14(4):733?745, 2003.
[29] A van der Schaaf and J H van Hateren. Modelling the power spectra of natural images: Statistics and
information. Vision Research, 28(17):2759?2770, 1996.
[30] Oldrich Vasicek. A test for normality based on sample entropy. Journal of the Royal Statistical Society,
Series B, 38(1):54?59, 1976.
[31] M. J. Wainwright, O. Schwartz, and E. P. Simoncelli. Natural image statistics and divisive normalization: Modeling nonlinearity and adaptation in cortical neurons. In Probabilistic Models of the Brain:
Perception and Neural Function, pages 203?222. MIT Press, 2002.
[32] M J Wainwright and E P Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In S. A. Solla, T. K. Leen, and K.-R. M?uller, editors, Adv. Neural Information Processing Systems
(NIPS*99), volume 12, pages 855?861, Cambridge, MA, May 2000. MIT Press.
[33] A. Watson and J. Solomon. A model of visual contrast gain control and pattern masking. J. Opt. Soc.
Amer. A, pages 2379?2391, 1997.
[34] B Wegmann and C Zetzsche. Statistical dependence between orientation filter outputs used in an human
vision based image code. In Proc Visual Comm. and Image Processing, volume 1360, pages 909?922,
Lausanne, Switzerland, 1990.
[35] M. Welling, G. E. Hinton, and S. Osindero. Learning sparse topographic representations with products of
Student-t distributions. pages 1359?1366, 2002.
9
| 4065 |@word determinant:3 inversion:1 compression:3 norm:1 seems:1 nd:1 confirms:1 linearized:1 covariance:3 solid:7 reduction:8 initial:1 series:2 hereafter:1 interestingly:1 current:2 si:2 yet:2 dx:3 written:1 visible:1 subsequent:1 numerical:1 shape:7 remove:2 plot:6 drop:1 kyk:7 tone:1 isotropic:12 xk:6 ith:2 filtered:3 completeness:1 characterization:1 location:1 mathematical:1 dn:73 anchorage:1 direct:2 become:5 differential:2 symposium:1 shapley:1 interscience:1 x0:14 notably:1 indeed:1 ica:1 behavior:4 multi:3 brain:1 relying:1 informational:1 actual:4 curse:1 window:4 increasing:1 becomes:1 xx:1 underlying:2 estimating:1 panel:2 maximizes:1 psych:1 finding:1 transformation:1 sinz:1 temporal:1 quantitative:3 ti:1 tie:4 schwartz:4 control:4 originates:1 appear:1 positive:1 before:1 engineering:2 understood:1 local:5 accordance:1 tends:1 ak:1 approximately:1 might:1 black:1 studied:1 quantified:1 equivalence:1 suggests:1 conversely:1 multiinformation:1 lausanne:1 liam:1 range:1 mallow:1 block:1 definite:1 procedure:2 empirical:2 bell:1 johannesma:1 word:1 pre:1 radial:1 suggest:1 cannot:4 close:1 applying:5 equivalent:6 map:2 demonstrated:1 dz:1 center:1 resembling:1 straightforward:1 reviewer:1 roth:1 focused:1 simplicity:1 contradiction:1 estimator:10 regarded:1 searching:1 coordinate:1 justification:1 increment:1 pt:3 hypothesis:3 sig:1 element:8 approximated:1 particularly:2 recognition:1 mammalian:1 std:2 database:1 binning:2 observed:4 bottom:2 capture:5 region:1 adv:1 solla:1 decrease:1 removed:3 yk:3 mentioned:1 environment:3 pd:1 comm:1 rigorously:1 dynamic:1 completely:3 basis:3 joint:7 cat:1 distinct:1 effective:7 sejnowski:1 artificial:1 sanity:1 whose:2 dordrecht:1 widely:2 supplementary:6 larger:1 cvpr:1 ability:3 statistic:5 topographic:1 transform:64 itself:1 emergence:1 vocalization:1 kurtosis:2 matthias:2 product:2 adaptation:2 neighboring:4 bow:4 asserts:1 enhancement:1 regularity:1 produce:1 generating:1 illustrate:1 andrew:1 measured:1 advocated:1 progress:1 eq:16 strong:1 soc:3 predicted:2 come:1 quantify:4 direction:1 switzerland:1 gaussianization:1 filter:5 stochastic:1 human:2 material:6 bin:2 require:1 generalization:1 decompose:1 anonymous:1 opt:3 biological:1 hinted:1 eindhoven:1 adjusted:1 hold:1 normal:1 exp:2 great:1 lyu:3 claim:1 estimation:13 albany:2 geniculate:1 proc:4 tool:1 weighted:1 uller:1 mit:3 gaussian:6 aim:3 varying:4 validated:1 june:1 methodological:1 modelling:1 likelihood:2 dn4:1 check:1 contrast:2 digamma:2 realizable:1 am:1 dim:1 helpful:1 wegmann:1 eliminate:1 typically:1 relation:2 transformed:2 interested:1 pixel:3 among:2 orientation:4 aforementioned:1 animal:1 spatial:3 special:1 schaaf:1 mutual:2 marginal:2 field:5 equal:4 once:1 extraction:1 sampling:1 biology:2 identical:1 adelson:1 patten:1 discrepancy:1 future:1 stimulus:3 quantitatively:1 simplify:1 retina:1 gamma:5 individual:2 interest:2 message:1 highly:1 evaluation:4 severe:1 mixture:4 zetzsche:1 edge:1 bonin:1 re:2 vasicek:1 theoretical:8 fitted:7 column:1 modeling:1 kurtotic:1 cover:1 measuring:4 hearing:1 deviation:3 parametrically:1 osindero:1 optimally:4 stored:1 reported:1 dependency:47 density:18 probabilistic:1 invertible:1 bethge:2 squared:1 central:1 solomon:1 conf:1 convolving:1 expert:1 rescaling:1 retinal:2 coding:28 student:3 int:1 cugell:1 caused:2 explicitly:1 vi:2 root:1 closed:6 red:7 start:2 masking:3 square:2 formed:3 variance:3 characteristic:1 efficiently:1 ensemble:3 correspond:5 yield:1 bayesian:1 studeny:1 annulus:1 gsm:2 siwei:1 rosenblith:1 
definition:2 nonetheless:1 frequency:3 attneave:1 proof:3 mi:25 radian:1 gain:4 costa:1 proved:2 carandini:1 emerges:1 dimensionality:5 subtle:1 shaping:1 actually:2 higher:2 methodology:1 response:13 improved:1 amer:2 evaluated:1 though:2 leen:1 furthermore:5 correlation:1 hand:4 replacing:1 nonlinear:9 mode:1 quality:1 perhaps:1 olshausen:1 usa:1 effect:3 true:2 barlow:1 symmetric:5 spherically:1 nadarajah:1 illustrated:1 adjacent:4 numerator:1 noted:2 whiten:1 samuel:1 generalized:2 image:39 wise:3 ranging:1 recently:1 common:1 functional:1 empirically:2 khz:2 exponentially:1 volume:4 anisotropic:1 tail:1 extend:1 kluwer:1 numerically:2 marginals:1 significant:1 cambridge:3 rd:2 similarly:1 nonlinearity:1 unreported:3 access:1 cortex:3 surface:3 whitening:3 multivariate:47 posterior:3 recent:1 showed:1 irrelevant:1 barbara:1 termed:1 selectivity:1 binary:1 watson:1 yi:2 der:1 transmitted:1 captured:2 determine:1 signal:43 ii:2 dashed:7 sound:5 simoncelli:11 reduces:2 valerio:1 match:1 plug:1 divided:1 award:1 laplacian:1 prediction:9 whitened:1 denominator:3 expectation:1 emmcvpr:1 vision:3 histogram:2 normalization:9 pyramid:1 achieved:3 cell:3 justified:2 cropped:1 spacing:2 source:1 biased:1 eliminates:4 rest:1 exhibited:1 comment:1 pooling:4 recording:2 subject:1 tend:2 navarro:2 effectiveness:5 jordan:1 extracting:1 iii:1 affect:1 fit:2 independence:1 reduce:5 det:6 effort:1 render:1 enroth:1 york:1 elliptically:2 detailed:1 aimed:1 santa:1 factorial:1 transforms:8 nonparametric:4 extensively:1 band:6 clip:2 processed:2 visualized:1 vejnarova:1 reduced:4 nsf:1 estimated:4 neuroscience:2 blue:7 bls:2 express:1 redundancy:2 four:1 key:1 drawn:2 v1:1 merely:1 year:2 sum:1 wood:1 inverse:3 throughout:1 kotz:1 scaling:3 bound:1 distinguish:1 mante:1 adapted:1 precisely:3 as2:1 x2:11 scene:2 aspect:2 fourier:1 department:1 structured:1 according:1 across:1 smaller:1 y0:1 rev:1 invariant:1 mutually:2 previously:5 turn:1 mechanism:1 hero:1 end:1 gaussians:2 operation:4 permit:1 eight:1 observe:3 ubiquity:1 odor:1 robustness:1 original:1 thomas:1 denotes:3 top:2 graphical:1 especially:1 establish:2 society:3 quantity:2 receptive:2 parametric:5 dependence:2 striate:1 diagonal:1 surrogate:1 exhibit:2 gradient:1 thank:1 higherorder:1 lateral:1 cochlear:1 reason:1 length:1 code:3 equivalently:1 unfortunately:1 october:1 stated:2 negative:2 motivates:1 perform:1 upper:1 observation:4 neuron:5 fabian:1 january:1 extended:1 excluding:1 communication:2 hinton:1 august:1 intensity:3 burt:1 pair:3 namely:1 connection:2 optimized:1 nip:2 suggested:1 nonparam:3 usually:1 perception:3 pattern:2 reliable:1 green:1 royal:2 wainwright:2 power:1 natural:56 difficulty:1 normality:1 scheme:1 x2i:4 foley:1 roberto:1 nice:1 literature:1 l2:1 acknowledgement:1 prior:1 relative:1 law:1 embedded:1 interesting:1 var:3 localized:2 conjoint:1 nucleus:1 degree:1 principle:2 editor:3 heavy:1 row:4 prone:1 supported:1 last:2 buccigrossi:2 bias:1 weaker:2 neighbor:2 correspondingly:1 sparse:3 van:3 regard:2 curve:18 overcome:1 dimension:4 cortical:2 contour:3 sensory:43 author:2 commonly:1 made:1 welling:1 transaction:1 approximate:4 compact:3 rafael:1 keep:1 confirm:3 eero:1 xi:10 spectrum:1 physiologically:1 un:1 latent:1 search:1 nature:4 confirmation:1 career:1 model3:1 epifanio:1 complex:1 domain:7 main:1 yi2:4 malo:1 edition:1 x1:10 fig:13 ny:1 wiley:1 precision:1 heeger:2 bandpass:2 comput:1 perceptual:1 third:1 jacobian:3 wavelet:2 formula:1 removing:1 specific:2 showing:2 incorporating:1 effectively:3 
justifies:2 argmaxz:1 illustrates:1 entropy:9 paninski:1 ez:3 visual:6 kxk:3 expressed:1 unexpected:1 scalar:2 lewicki:1 holland:1 applies:1 corresponds:2 ma:2 conditional:8 goal:1 quantifying:1 content:1 change:11 specifically:2 infinite:1 reducing:6 operates:1 typical:1 lemma:19 pas:6 accepted:1 divisive:8 duality:1 experimental:1 shannon:1 formally:3 hateren:2 constructive:1 evaluate:3 audio:9 phenomenon:4 |
3,386 | 4,066 | Evaluating neuronal codes for inference using Fisher
information
Ralf M. Haefner? and Matthias Bethge
Centre for Integrative Neuroscience, University of T?ubingen,
Bernstein Center for Computational Neuroscience, T?ubingen,
Max Planck Institute for Biological Cybernetics
Spemannstr. 41, 72076 T?ubingen, Germany
Abstract
Many studies have explored the impact of response variability on the quality of
sensory codes. The source of this variability is almost always assumed to be intrinsic to the brain. However, when inferring a particular stimulus property, variability associated with other stimulus attributes also effectively act as noise. Here
we study the impact of such stimulus-induced response variability for the case of
binocular disparity inference. We characterize the response distribution for the
binocular energy model in response to random dot stereograms and find it to be
very different from the Poisson-like noise usually assumed. We then compute the
Fisher information with respect to binocular disparity, present in the monocular
inputs to the standard model of early binocular processing, and thereby obtain an
upper bound on how much information a model could theoretically extract from
them. Then we analyze the information loss incurred by the different ways of
combining those inputs to produce a scalar single-neuron response. We find that
in the case of depth inference, monocular stimulus variability places a greater
limit on the extractable information than intrinsic neuronal noise for typical spike
counts. Furthermore, the largest loss of information is incurred by the standard
model for position disparity neurons (tuned-excitatory), that are the most ubiquitous in monkey primary visual cortex, while more information from the inputs is
preserved in phase-disparity neurons (tuned-near or tuned-far) primarily found in
higher cortical regions.
1
Introduction
Understanding how the brain performs statistical inference is one of the main problems of theoretical neuroscience. In this paper, we propose to apply the tools developed to evaluate the information
content of neuronal codes corrupted by noise to address the question of how well they support statistical inference. At the core of our approach lies the interpretation of neuronal response variability
due to nuisance stimulus variability as noise.
Many theoretical and experimental studies have probed the impact of intrinsic response variability on
the quality of sensory codes ([1, 12] and references therein). However, most neurons are responsive
to more than one stimulus attribute. So when trying to infer a particular stimulus property, the
brain needs to be able to ignore the effect of confounding attributes that also influence the neuron?s
response. We propose to evaluate the usefulness of a population code for inference over a particular
parameter by treating the neuronal response variability due to nuisance stimulus attributes as noise
equivalent to intrinsic noise (e.g. Poisson spiking).
We explore the implications of this new approach for the model system of stereo vision where the
inference task is to extract depth from binocular images. We compute the Fisher information present
?
Corresponding author ([email protected])
1
Right image
Left RF Right RF
Tuning curve
response
Left image
response
disparity
disparity
Figure 1: Left: Example random dot stereogram (RDS). Right: Illustration of bincular energy model
without (top) and with (bottom) phase disparity.
in the monocular inputs to the standard model of early binocular processing and thereby obtain an
upper bound on how precisely a model could theoretically extract depth. We compare this with the
amount of information that remains after early visual processing. We distinguish the two principal
model flavors that have been proposed to explain the physiological findings. We find that one of the
two models appears superior to the other one for inferring depth.
We start by giving a brief introduction to the two principal flavors of the binocular energy model. We
then retrace the processing steps and compute the Fisher information with respect to depth inference
that is present: first in the monocular inputs, then after binocular combination, and finally for the
resulting tuning curves.
2
Binocular disparity as a model system
Stereo vision has the advantage of a clear separation between the relevant stimulus dimension ?
binocular disparity ? and the confounding or nuisance stimulus attributes ? monocular image structure ([9]). The challenge in inferring disparity in image pairs consists in distinguishing true from
false matches, regardless of the monocular structures in the two images. The stimulus that tests this
system in the most general way are random dot stereograms (RDS) that consist of nearly identical
dot patterns in either eye (see Figure 1). The fact that parts of the images are displaced horizontally
with respect to each other has been shown to be sufficient to give rise to a sensation of depth in
humans and monkeys ([5, 4]). Since RDS do not contain any monocular depth cues (e.g. size or
perspective) the brain needs to correctly match the monocular image features across eyes to compute
disparity.
The standard model for binocular processing in primary visual cortex (V1) is the binocular energy
model ([5, 10]). It explains the response of disparity-selective V1 neurons by linearly combining the
output of monocular simple cells and passing the sum through a squaring nonlinearity (illustrated in
Figure 1).
e2
o2
e
o
e 2
o 2
reven = (?Le + ?R
) + (?Lo + ?R
) = ?Le 2 + ?Lo 2 + ?R
+ ?R
+ 2(?Le ?R
+ ?Lo ?R
).
(1)
e
o
where ?L is the output of an even-symmetric receptive field (RF) applied to the left image, ?R is
the output of an odd-symmetric receptive field (RF) applied to the right image, etc. By pairing
an even and an odd-symmetric RF in each eye1 , the monocular part of the response of the cell
e2
o2
?Le 2 + ?Lo 2 + ?R
+ ?R
becomes invariant to the monocular phase of a grating stimulus (since
2
2
sin + cos = 1) and the binocular part is modulated only by the difference (or disparity) between
the phases in left and right grating ? as observed for complex cells in V1. The disparity tuning curve
resulting from the combination in equation (1) is even-symmetric (illustrated in Figure 1 in blue)
and is one of two primary types of tuning curves found in cortex ([5]). In order to model the other,
odd-symmetric type of tuning curves (Figure 1 in red), the filter outputs are combined such that the
output of an even-symmetric filter is always combined with that of an odd-symmetric one in the
other eye:
o 2
e 2
e2
o2
o
e
rodd = (?Le + ?R
) + (?Lo + ?R
) = ?Le 2 + ?Lo 2 + ?R
+ ?R
+ 2(?Le ?R
+ ?Lo ?R
).
1
WLOG we assume the quadrature pair to consist of a purely even and a purely odd RF.
2
(2)
Note that the two models are identical in their monocular inputs and the monocular part of their
output (the first four terms in equations 1 and 2) and only vary in their binocular interaction terms
(in brackets). The only way in which the first model can implement preferred disparities other than
zero is by a positional displacement of the RFs in the two eyes with respect to each other (the
disparity tuning curve achieves its maximum when the disparity in the image matches the disparity
between the RFs). The second model, on the other hand achieves non-zero preferred disparities by
employing a phase shift between the left and right RF (90 deg in our case). It is therefore considered
to be phase-disparity model, while the first one is called a position disparity one.2
3
Results
How much information the response of a neuron carries about a particular stimulus attribute depends
both on the sensitivity of the response to changes in that attribute and to the variability (or uncertainty) in the response across all stimuli while keeping that attribute fixed. Fisher information is the
standard way to quantify this intuition in the context of intrinsic noise ([6], but also see [2]) and we
will use it to evaluate the binocular energy model mechanisms with regard to their ability to extract
the disparity information contained in the monocular inputs arriving at the eyes.
3.1
Response variability
response
Figure 2 shows the mean of the binocular response of the two models.
The variation of the response around the mean due to the variation in
monocular image structure in the RDS is shown in Figure 3 (top row)
for four exemplary disparities: ?1, 0, 1 and uncorrelated (??), indicated in Figure 2. Unlike in the commonly assumed case of intrinsic
noise, pbinoc (r|d) ? the stimulus-conditioned response distribution ?
d
is far from Poisson or Gaussian. Interestingly, its mode is always at
zero ? the average response to uncorrelated stimuli ? and the fact that
the mean depends on the stimulus disparity is primarily due to the
-1 0 1
disparity-dependence of the skew of the response distribution (Figure
3).3 The skew in turn depends on the disparity through the disparity- Figure 2: Binocular redependent correlation between the RF outputs as illustrated in Figure sponses for even (blue) and
3 (bottom row). Of particular interest are the response distributions odd (red) model.
at the zero disparity 4 , the disparities at which rodd takes its minimum and maximum, respectively,
and the uncorrelated case (infinite disparity). In the case of infinite disparity, the images in the two
eyes are completely independent of each other and hence the outputs of the left and right RFs are
independent Gaussians. Therefore, ?L ?R ? pbinoc (r|d = ?) is symmetric around 0. In the case of
zero disparity (identical images in left and right eye), the correlation is 1 between the outputs of left
and right RFs (both even, or both odd). It follows that ?L ?R ? ?21 and hence has a mean of 1. What
is also apparent is that the binocular energy model with phase disparity (where each even-symmetric
RF is paired with an odd-symmetric one) never achieves perfect correlation between the left and
right eye and only covers smaller values.
3.2
Fisher information
3.2.1
Fisher information contained in monocular inputs
First, we quantify the information contained in the inputs to the energy model by using Fisher information. Consider the 4D space spanned by the outputs of the four RFs in left and right eye:
e
o
(?Le , ?Lo , ?R
, ?R
). Since the ? are drawn from identical Gaussians5 , the mean responses of the
2
We use position disparity model and even-symmetric tuning interchangeably, as well as phase disparity
model and odd-symmetric tuning. Unfortunately, the term disparity is used for both disparities between the
RFs, and for disparities between left and right images (in the stimulus). If not indicated otherwise, we will
always refer to stimulus disparity for the rest of the paper.
3
The RF outputs are Normally distributed in the limit of infinitely many dots (RFs act as linear filters +
central limit theorem). Therefore the disparity-conditioned responses p(r|d) correspond to the off-diagonal
terms in a Wishart distribution, marginalized over all the other matrix elements.
4
WLOG we assume the displacement between the RF centers in the left and right eye to be zero.
5
The model RFs have been normalized by their variance, such that var[?] = 1 and ? ? N (0, 1).
3
1
1
0
?1
0
1
1
0
?1
0
0
1
1
?1
0
1
0
1
1
1
1
0
0
0
0
?1
?1
?1
?1
?1
0
1
?1
0
1
?1
0
1
?1
0
1
?1
0
1
Figure 3: Response distributions p(r|d) for varying d. Top row: histograms for values of interaction
o
e
(red). Bottom row: distribution of corresponding RF outputs ?L vs
(blue) and ?Le ?R
terms ?Le ?R
o
e
) colors refer
) and red (?Le vs ?R
?R . 1? curves are shown to indicate correlations. Blue (?Le vs ?R
to the model with even-symmetric tuning curve and odd-symmetric tuning curve, respectively. The
disparity value for each column is ??, ?1, 0 and 1 corresponding to those highlighted in Figure 2.
monocular inputs do not depend on the stimulus and hence, the Fisher information is given by
o
e
):
, ?R
I(d) = 12 tr(C ?1 C 0 C ?1 C 0 ) where C is the covariance matrix belonging to (?Le , ?Lo , ?R
?
?
1
0
a(d) c(d)
1
c(?d) a(d) ?
? 0
C=?
a(d) c(?d)
1
0 ?
c(d) a(d)
0
1
o
o
e
i as Gabor
i and c(d) := h?Le ?R
i = h?Lo ?R
where we model the interaction terms a(d) := h?Le ?R
functions6 since Gabors functions have been shown to provide a good fit to the range of RF shapes
and disparity tuning curves that are empirically observed in early sensory cortex ([5]).7 a(d) and
c(d) are illustrated by the blue and red curves in Figure 2, respectively. Because the binocular part
of the energy model response, or disparity tuning curve, is the convolution of the left and right RFs,
the phase of the Gabor describing the disparity tuning curve is given by the difference between the
phases of the corresponding RFs. Therefore c(d) is odd-symmetric and c(?d) = ?c(d). We obtain
Iinputs (d) =
2
(1 ?
a2
?
c2 ) 2
(1 + a2 ? c2 )a02 + (1 + c2 ? a2 )c02 + 4aca0 c0
(3)
where we omitted the stimulus dependence of a(d) and c(d) for clarity of exposition and where 0
denotes the 1st derivative with respect to the stimulus d. The denominator of equation (3)) is given
by det C and corresponds to the Gaussian envelope of the Gabor functions for a(d) and c(d):
det C = 1 ? a2 ? c2 = 1 ? exp(?
s2
).
?2
In Figure 4B (black) we plot the Fisher information as a function of disparity. We find that the Fisher
information available in the inputs diverges at zero disparity (at the difference between the centers
of the left and right RFs in general). This means that the ability to discriminate zero disparity from
2
0)
A Gabor function is defined as cos(2?f d ? ?) exp[? (d?d
] were f is spatial frequency, d is disparity,
2? 2
? is the Gabor phase, do is the envelope center (set to zero here, WLOG) and ? the envelope bandwidth.
7
The assumption that the binocular interaction can be modeled by a Gabor is not important for the principal
results of this paper. In fact, the formulas for the Fisher information in the monocular inputs and in the disparity
tuning curves derived below hold for other (reasonable) choices for a(d) and c(d) as well.
6
4
2
B
A
1.5
1
C
100
100
50
50
0.4
0.3
0.2
0.5
0
?4
D
0.1
?2
0
2
disparity d
4
0
?4
?2
0
2
0
?4
4
disparity d
?2
0
2
disparity d
4
0
?4
?2
0
2
disparity d
4
Figure 4: A: Disparity tuning curves for the model using position disparity (even) and phase disparity
(odd) in blue and red, respectively. B: Black: Fisher information contained in the monocular inputs.
Blue: Fisher information left after combining inputs from left and right eye according to position
disparity model. Red: Fisher information after combining inputs using phase disparity model. Note
that the black and red curves diverge at zero disparity. C: Fisher information for the final model
output/neuronal response. Same color code as previously. Solid lines correspond to complex, dashed
lines to simple cells. D: Same as C but with added Gaussian noise in the monocular inputs.
nearby disparities is arbitrarily good. In reality, intrinsic neuronal variability will limit the Fisher
information at zero.8
3.2.2
Combination of left and right inputs
Next we analyze the information that remains after linearly combining the monocular inputs in the
energy model. It follows that the 4-dimensional monocular input space is reduced to a 2-dimensional
e
o
o
e
), respectively.
, ?Lo + ?R
) and (?Le + ?R
, ?Lo + ?R
binocular one for each model, sampled by (?Le + ?R
Again, the marginal distributions are Gaussians with zero mean independent of stimulus disparity.
This means that we can compute the Fisher information for the position disparity model from the
covariance matrix C as above:
e 2
e
o
h(?Le + ?R
) i
h(?Le + ?R
)(?Lo + ?R
)i
Ceven =
o 2
o
e
) i
)i
h(?Lo + ?R
)(?Lo + ?R
h(?Le + ?R
2 + 2a
0
=
0
2 + 2a
e o
Here we exploited that h?Le ?Lo i = h?R
?R i = 0 since the even and odd RFs are orthogonal and that
e o
o e
h?L ?R i = ?h?R ?L i. The Fisher information follows as
Ieven (d) =
a0 (d)2
.
[1 + a(d)]2
(4)
The dependence of Fisher information on d is shown in Figure 4B (blue). The total information
(as measured by integrating Fisher information over all disparities) communicated by the positiondisparity model is greatly reduced compared to the total Fisher information present in the inputs.
a(d) is an even-symmetric Gabor (illustrated in Figure 2) and hence the Fisher information is greatest on either side of the maximum where the slopes of a(d) are steepest, and zero at the center
where a(d) has its peak. We note here that the Fisher information for the final tuning curve for
the position-disparity model is the same as in equation (4) and therefore we will postpone a more
detailed discussion of it until section 3.2.3.
2
E.g. additive Gaussian noise with variance ? N on the monocular filter outputs eliminates the singularity:
2
2
det C = 1 + ? N ? a2 ? c2 ? ? N .
8
5
On the other hand, when combining the monocular inputs according to the phase disparity model,
we find:
o 2
o
e
h(?Le + ?R
) i
h(?Le + ?R
)(?Lo + ?R
)i
Codd =
o
e
e 2
h(?Le + ?R
)(?Lo + ?R
)i
h(?Lo + ?R
) i
2 + 2c
2a
=
2a
2 ? 2c
e o
o
o e
since again h?Le ?Lo i = h?R
?R i = 0 and h?Le ?R
i = ?h?R
?L i = c. The Fisher information in this case
follows as
1
Iodd (d) =
(1 + a2 ? c2 )a02 + (1 + c2 ? a2 )c02 + 4aca0 c0
(1 ? a2 ? c2 )2
1
=
Iinputs (d)
2
Iodd (d) is shown in Figure 4B (red). While loosing 50% of the Fisher information present in the
inputs, the Fisher information after combining left and right RF outputs is much larger in this case
than for the position disparity model explored above. How can that be? Why are the two ways of
combining the monocular outputs not symmetric? Insight into this question can be gained by looking
at the binocular interaction terms in the quadratic expansion of the feature space for the two models.9
e
o
o
e
For the position disparity model we obtain the 3-dimensional space (?Le ?R
, ?Lo ?R
, ?Le ?R
+ ?Lo ?R
) of
o
e
which the third dimension cannot contribute to the Fisher information since ?Le ?R
+ ?Lo ?R
= 0. In
o
e
e
o
the phase-disparity model, however, the quadratic expansion yields (?Le ?R
, ?Lo ?R
, ?Le ?R
+ ?Lo ?R
).
Here, all three dimensions are linearly independent (although correlated), each contributing to the
Fisher information. This can also explain why Iodd (d) is symmetric around zero, and independent
of the Gabor phase of c(d). While this is not a rigorous analysis yet of the differences between the
models at the stage of binocular combination, it serves as a starting point for a future investigation.
3.2.3
Disparity tuning curves
In order to collapse the 2-dimensional binocular inputs into a scalar output that can be coded in the
spike rate of a neuron, the energy model postulates a squaring output nonlinearity after each linear
combination and summing the results. Since the (?L + ?R )2 are not Normally distributed and their
means depend on the stimulus disparity, we cannot employ the above approach to calculate Fisher
information but instead use the more general
"
2 # Z ?
2
?
?
ln p(r; d)
=
p(r; d)
ln p(r; d)
dr
(5)
I(d) = E
?d
?d
0
where p(r; d) is the response distribution for stimulus disparity d. Because the ? are drawn from a
o
e
are drawn from N [0, 2(1 + a(d))] since we defined
and ?Lo + ?R
Gaussian with variance 1, ?Le + ?R
e 2
o 2
o o
e e
) are independent and it
) and (?Lo + ?R
a(d) = h?L ?R i = h?L ?R i. Conditioned on d, (?Le + ?R
follows for the model with an even-symmetric tuning function that
e
1
e 2
o 2
(?L + ?R
) + (?Lo + ?R
)
? ?22 and
2[1 + a(d)]
1
r
H(r) (6)
peven (r; d) =
exp ?
4[1 + a(d)]
4[1 + a(d)]
where H(r) is the Heaviside step function.10 Substituting equation (6) into equation (5) we find11
2
Z ?
a0 (d)2
r
r
complex
Ieven (d) =
? 1 exp ?
dr
4[1 + a(d)]3 0
4[1 + a(d)]
4[1 + a(d)]
0
2
a (d)
=
(7)
[1 + a(d)]2
9
By quadratic expansion of the feature space we refer to expanding a 2-dimensional feature space (f1 , f2 )
to a 3-dimensional one (f12 , f22 , f1 f2 ) by considering the binocular interaction terms in all quadratic forms.
10
We see that hripeven (r;d) = 4[1 + a(d)] and hence we recover the Gabor-shaped tuning function that we
introduced in section 3.2.1 to model the empirically observed relationship between disparity d and mean spike
rate r.
R
11 ?
dx (x/? ? 1)2 exp(?x/?) = ? for ? > 0.
0
6
Remarkably, this is exactly the same amount of information that is available after summing left and
right RFs (see equation 4), so none is lost after squaring and combining the quadrature pair. We show
Ieven (d) in Figure 4C (blue). It is also interesting to note that the general form for Ieven (d) differs
from the Fisher information based on the Poisson noise model (and ignoring stimulus variability as
considered here) only by the exponent of 2 in the denominator. Since 1 + a(d) ? 0 this means
that the qualitative dependence of I on d is the same, the main difference being that the Fisher
information favors small over large spike rates even more. Conversely, it follows that when Fisher
information only takes the neuronal noise into consideration, it greatly overestimates the information
that the neuron carries with respect to the to-be-inferred stimulus parameter for realistic spike counts
(of greater than two). Furthermore, unlike in the Poisson case, a scaling up of the tuning function
1 + a(d) does not translate into greater Fisher information. Fisher information with respect to
stimulus variability as considered here is invariant to the absolute height of the tuning curve.12
o 2
e 2
Considering the phase-disparity model, (?Le +?R
) and (?Lo +?R
) are drawn from N [0, 2(1+c(d))]
e o
e
o
and N [0, 2(1 + c(d))], respectively, since c(d) = h?L ?R i = ?h?Lo ?R
i. Unfortunately, since ?Le + ?R
o
e
and ?L + ?R have different variances depending on d, and are usually not independent of each other,
the sum cannot be modeled by a ?2 ?distribution. However, we can compute the Fisher information
for the two implied binocular simple cells instead.13 It follows that
e
1
o 2
(?L + ?R
)
? ?21 and
2[1 + c(d)]
psimple
odd (r; d)
=
1
r
1
p
? exp ?
H(r).
4[1 + c(d)]
2?(1/2) 1 + c(d) r
and14
simple
Iodd
(d)
=
1
c0 (d)2
p
2?(1/2) 1 + c(d)5
0
=
Z
0
?
2
1
r
r
1
?
exp ?
dr ?
4[1 + c(d)]
r 4[1 + c(d)] 2
2
1 c (d)
2 [1 + c(d)]2
simple
The dependence of Iodd
on disparity is shown in Figure 4C (red dashed). Most of the Fisher
information is located in the primary slope (compare Figure 4A) followed by secondary slope to
its left. The reason for this is the strong boost Fisher information gets when responses are lowest.
We also see that the total Fisher information carried by a phase-disparity simple cell is significantly
higher than that carried by a position-disparity simple cell (compare dashed red and blue lines)
raising the question of what other advantages or trade-offs there are that make it beneficial for the
primate brain to employ so many position-disparity ones. Intrinsic neuronal variability may provide
part of the answer since the difference in Fisher information between both models decreases as
intrinsic variability increases. Figure 4D shows the Fisher information after Gaussian noise has been
added to the monocular inputs. However, even in this high intrinsic noise regime (noise variance of
the same order as tuning curve amplitude) the model with phase disparity carries significantly more
total Fisher information.
15
12
What is outside of the scope of this paper but obvious from equation (7) is that Fisher information is
maximized when the denominator, or the tuning function is minimal. Within the context of the energy model,
this occurs for neither the position-disparity model, nor the classic phase-disparity one, but for a model where
the left and right RFs that are linearly combined, are inverted with respect to each other (i.e. phase-shifted by
?). In that case a(d) is a Gabor function with phase ? and becomes zero at zero disparity such that the Fisher
information diverges. Such neurons, called tuned-inhibitory (TI, [11]) make up a small minority of neurons in
monkey V1.
13
The energy model as presented thus far models the responses of binocular complex cells. Disparityo 2
selective simple cells are typically modeled by just one combination of left and right RFs (?Le + ?R
) or
e 2
(?Lo +
?
)
,
and
not
the
entire
quadrature
pair.
R
R
?
?
?
?1
14 ?
dx x (x/? ? 1/2)2 exp(?x/?) = ? ?/2 for ? > 0.
0
15
This derivation equally applies to the Fisher information of simple cells with position disparity by suba0 (d)2
simple
stituting a(d) for c(d) and we obtain Ieven
(d) = 12 [1+a(d)]
2 . This function is shown in Figure 4C (blue
dashed).
7
4
Discussion
The central idea of our paper is to evaluate the quality of a sensory code with respect to an inference
task by taking stimulus variability into account, in particular that induced by irrelevant stimulus
attributes. By framing stimulus-induced nuisance variability as noise, we were able to employ the
existing framework of Fisher information for evaluating the standard model of early binocular processing with respect to inferring disparity from random dot stereograms.
We started by investigating the disparity-conditioned variability of the binocular response in the absence of intrinsic neuronal noise. We found that the response distributions are far from Poisson or
Gaussian and ? independent of stimulus disparity ? are always peaked at zero (the mean response
to uncorrelated images). The information contained in the correlations between left and right RF
outputs are translated into a modulation of the neuron?s mean firing rate primarily by altering the
skew of the response distribution. This is quite different from the case of intrinsic noise and has
implications for comparing different codes. It is noteworthy that these response distributions are
entirely imposed by the sensory system ? the combination of the structure of the external world with
the internal processing model. Unlike the case of intrinsic noise which is usually added ad-hoc after
the neuronal computation has been performed, in our case the computational model impacts the usefulness of the code beyond the traditionally reported tuning functions. This property extends to the
case of population codes, the next step for future work. Of great importance for the performance of
population codes are interneuronal correlations. Again, the noise correlations due to nuisance stimulus parameters are a direct consequence of the processing model and the structure of the external
input.
Next we compared the Fisher information available for our inference task at various stages of binocular processing. We computed the Fisher information available in the monocular inputs to binocular
neurons in V1, after binocular combination and after the squaring nonlinearity required to translate
binocular correlations into mean firing rate modulation. We find that despite the great stimulus variability, the total Fisher information available in the inputs diverges and is only bounded by intrinsic
neuronal variability. The same is still true after binocular combination for one flavor of the model
considered here ? that employing phase disparity (or pairing unlike RFs in either eye), not the other
one (position disparity), which has lost most information after the initial combination. At this point,
our new approach allows us to ask a normative question: In what way should the monocular inputs
be combined so as to lose a minimal amount of information about the relevant stimulus dimension?
Is the combination proposed by the standard model to obtain even-symmetric tuning curves the only
one to do so or are they others that produce a different tuning curve, with a different response distribution that is more suited to inferring depth? Conversely, we can compare our results for the
model stages leading from simple to complex cells and compare them with the corresponding Fisher
information computed from empirically observed distributions, to test our model assumptions.
Recently, Fisher information has been criticized as a tool for comparing population codes ([3, 2]).
We note that our approach can be readily adapted to other measures like mutual information or
their framework of neurometric function analysis to compare the performance of different codes in
a disparity discrimination task.
Another potentially promising avenue of future research would to investigate the effect of thresholding on inference performance. One reason that odd-symmetric tuning curves had higher Fisher
information in the case we investigated was that odd-symmetric cells produce near-zero responses
more often in the context of the energy model. However, it is known from empirical observations
that fitting even-symmetric disparity tuning curves requires an additional thresholding output nonlinearity. It is unclear at this point to what extend such a change to the response distribution helps or
hinders inference.
And finally, we suggest that considering the different shapes of response distributions induced by
the specifics of the sensory modality might have an impact on the discussion about probabilistic
population codes ([7, 8] and references therein). Cue-integration, for instance, has usually been
studied under the assumption of Poisson-like response distributions, assumptions that do not appear
to hold in the case of combining disparity cues from different parts of the visual field.
Acknowledgments
This work has been supported by the Bernstein award to MB (BMBF; FKZ: 01GQ0601).
8
References
[1] LF Abbott and P Dayan. The effect of correlated variability on the accuracy of a population code. Neural
Comput, 11(1):91?101, 1999.
[2] P Berens, S Gerwinn, A Ecker, and M Bethge. Neurometric function analysis of population codes. In
Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural
Information Processing Systems 22, pages 90?98. 2009.
[3] M Bethge, D Rotermund, and K Pawelzik. Optimal short-term population coding: when fisher information fails. Neural Comput, 14(10):2317?2351, 2002.
[4] C Blakemore and B Julesz. Stereoscopic depth aftereffect produced without monocular cues. Science,
171(968):286?288, 1971.
[5] BG Cumming and GC DeAngelis. The physiology of stereopsis. Annu Rev Neurosci, 24:203?238, 2001.
[6] P Dayan and LF Abbott. Theoretical neuroscience: Computational and mathematical modeling of neural
systems. MIT Press, 2001.
[7] J Fiser, P Berkes, G Orban, and M Lengyel. Statistically optimal perception and learning: from behavior
to neural representations. Trends Cogn Sci, 14(3):119?130, 2010.
[8] WJ Ma, JM Beck, PE Latham, and A Pouget. Bayesian inference with probabilistic population codes.
Nat Neurosci, 9(11):1432?1438, 2006.
[9] David Marr. Vision: A Computational Investigation into the Human Representation and Processing of
Visual Information. Henry Holt and Co., Inc., New York, NY, USA, 1982.
[10] I Ohzawa, GC DeAngelis, and RD Freeman. Stereoscopic depth discrimination in the visual cortex:
neurons ideally suited as disparity detectors. Science, 249(4972):1037?1041, 1990.
[11] GF Poggio and B Fischer. Binocular interaction and depth sensitivity in striate and prestriate cortex of
behaving rhesus monkey. J Neurophysiol, 40(6):1392?1405, 1977.
[12] F. Rieke, D. Warland, R.R. van, Steveninck, and W. Bialek. Spikes: exploring the neural code. MIT Press,
Cambridge, MA, 1997.
9
| 4066 |@word c0:3 a02:2 integrative:1 rhesus:1 covariance:2 thereby:2 tr:1 solid:1 carry:3 initial:1 disparity:91 tuned:4 interestingly:1 o2:3 existing:1 com:1 comparing:2 gmail:1 yet:1 dx:2 readily:1 additive:1 realistic:1 shape:2 treating:1 plot:1 v:3 discrimination:2 cue:4 steepest:1 core:1 short:1 contribute:1 height:1 mathematical:1 c2:8 direct:1 pairing:2 qualitative:1 consists:1 fitting:1 theoretically:2 behavior:1 nor:1 brain:5 freeman:1 pawelzik:1 jm:1 considering:3 becomes:2 bounded:1 lowest:1 what:5 monkey:4 developed:1 finding:1 act:2 ti:1 exactly:1 interneuronal:1 normally:2 appear:1 planck:1 overestimate:1 limit:4 consequence:1 despite:1 firing:2 modulation:2 noteworthy:1 black:3 might:1 therein:2 studied:1 conversely:2 co:3 blakemore:1 collapse:1 range:1 statistically:1 steveninck:1 acknowledgment:1 lost:2 implement:1 postpone:1 communicated:1 differs:1 lf:2 cogn:1 displacement:2 empirical:1 gabor:11 significantly:2 physiology:1 integrating:1 holt:1 suggest:1 get:1 cannot:3 context:3 influence:1 equivalent:1 imposed:1 ecker:1 center:5 williams:1 regardless:1 starting:1 pouget:1 insight:1 spanned:1 marr:1 ralf:2 population:9 classic:1 rieke:1 variation:2 traditionally:1 distinguishing:1 element:1 trend:1 located:1 bottom:3 observed:4 calculate:1 region:1 culotta:1 wj:1 hinders:1 trade:1 decrease:1 intuition:1 stereograms:3 ideally:1 depend:2 purely:2 f2:2 completely:1 neurophysiol:1 translated:1 various:1 derivation:1 deangelis:2 rds:4 outside:1 apparent:1 quite:1 larger:1 otherwise:1 ability:2 favor:1 fischer:1 highlighted:1 final:2 hoc:1 advantage:2 matthias:1 exemplary:1 propose:2 interaction:7 mb:1 relevant:2 combining:10 translate:2 diverges:3 produce:3 perfect:1 help:1 depending:1 measured:1 odd:16 strong:1 grating:2 indicate:1 quantify:2 sensation:1 attribute:9 filter:4 human:2 explains:1 f1:2 investigation:2 biological:1 singularity:1 exploring:1 hold:2 around:3 considered:4 exp:8 great:2 scope:1 substituting:1 vary:1 early:5 achieves:3 a2:8 omitted:1 gq0601:1 lose:1 largest:1 tool:2 offs:1 mit:2 always:5 gaussian:7 varying:1 derived:1 greatly:2 rigorous:1 inference:13 dayan:2 squaring:4 typically:1 entire:1 a0:2 selective:2 germany:1 exponent:1 spatial:1 integration:1 mutual:1 marginal:1 field:3 never:1 shaped:1 identical:4 nearly:1 peaked:1 future:3 others:1 stimulus:35 primarily:3 employ:3 beck:1 phase:23 interest:1 investigate:1 bracket:1 implication:2 poggio:1 retrace:1 orthogonal:1 theoretical:3 minimal:2 criticized:1 instance:1 column:1 modeling:1 cover:1 altering:1 usefulness:2 characterize:1 reported:1 answer:1 corrupted:1 combined:4 st:1 peak:1 sensitivity:2 probabilistic:2 off:1 diverge:1 bethge:3 again:3 central:2 postulate:1 f22:1 wishart:1 dr:3 external:2 derivative:1 leading:1 account:1 coding:1 inc:1 depends:3 ad:1 bg:1 performed:1 analyze:2 red:11 start:1 recover:1 slope:3 f12:1 accuracy:1 variance:5 maximized:1 correspond:2 yield:1 bayesian:1 produced:1 none:1 cybernetics:1 lengyel:1 explain:2 detector:1 energy:13 frequency:1 obvious:1 e2:3 associated:1 sampled:1 ask:1 color:2 ubiquitous:1 amplitude:1 appears:1 higher:3 response:42 furthermore:2 just:1 binocular:35 stage:3 fiser:1 correlation:8 until:1 hand:2 mode:1 quality:3 indicated:2 usa:1 effect:3 ohzawa:1 contain:1 true:2 normalized:1 hence:5 symmetric:23 illustrated:5 sin:1 interchangeably:1 nuisance:5 trying:1 latham:1 performs:1 image:16 consideration:1 recently:1 superior:1 spiking:1 empirically:3 extend:1 interpretation:1 refer:3 cambridge:1 tuning:28 rd:1 centre:1 nonlinearity:4 had:1 
dot:6 henry:1 cortex:6 behaving:1 etc:1 berkes:1 confounding:2 perspective:1 irrelevant:1 ubingen:3 gerwinn:1 arbitrarily:1 exploited:1 inverted:1 minimum:1 greater:3 additional:1 dashed:4 stereogram:1 infer:1 match:3 equally:1 award:1 coded:1 paired:1 impact:5 denominator:3 vision:3 poisson:7 histogram:1 cell:12 preserved:1 remarkably:1 source:1 modality:1 envelope:3 rest:1 unlike:4 eliminates:1 induced:4 spemannstr:1 lafferty:1 near:2 bernstein:2 bengio:1 fit:1 bandwidth:1 fkz:1 idea:1 avenue:1 det:3 shift:1 stereo:2 passing:1 york:1 clear:1 detailed:1 julesz:1 amount:3 reduced:2 inhibitory:1 shifted:1 stereoscopic:2 neuroscience:4 correctly:1 blue:11 probed:1 four:3 prestriate:1 drawn:4 clarity:1 neither:1 abbott:2 v1:5 sum:2 uncertainty:1 place:1 almost:1 c02:2 reasonable:1 extends:1 separation:1 scaling:1 rotermund:1 entirely:1 bound:2 followed:1 distinguish:1 quadratic:4 adapted:1 precisely:1 nearby:1 orban:1 extractable:1 according:2 combination:11 belonging:1 across:2 smaller:1 beneficial:1 rev:1 primate:1 invariant:2 ln:2 equation:8 monocular:29 remains:2 previously:1 skew:3 count:2 mechanism:1 turn:1 describing:1 aftereffect:1 serf:1 available:5 gaussians:2 apply:1 responsive:1 top:3 denotes:1 marginalized:1 warland:1 giving:1 implied:1 question:4 added:3 spike:6 occurs:1 receptive:2 primary:4 dependence:5 striate:1 diagonal:1 bialek:1 unclear:1 sci:1 reason:2 minority:1 neurometric:2 code:18 modeled:3 relationship:1 illustration:1 unfortunately:2 potentially:1 rise:1 upper:2 neuron:14 displaced:1 convolution:1 observation:1 variability:22 looking:1 gc:2 inferred:1 introduced:1 david:1 pair:4 required:1 raising:1 framing:1 boost:1 address:1 able:2 beyond:1 usually:4 pattern:1 below:1 perception:1 regime:1 challenge:1 rf:31 max:1 greatest:1 brief:1 eye:12 started:1 carried:2 extract:4 gf:1 understanding:1 contributing:1 loss:2 interesting:1 var:1 incurred:2 sufficient:1 thresholding:2 editor:1 uncorrelated:4 lo:31 row:4 excitatory:1 supported:1 keeping:1 arriving:1 side:1 institute:1 taking:1 absolute:1 distributed:2 regard:1 curve:24 depth:11 cortical:1 evaluating:2 dimension:4 world:1 van:1 sensory:6 author:1 commonly:1 far:4 employing:2 ignore:1 preferred:2 deg:1 investigating:1 summing:2 assumed:3 stereopsis:1 why:2 reality:1 promising:1 expanding:1 ignoring:1 schuurmans:1 expansion:3 investigated:1 complex:5 berens:1 main:2 linearly:4 neurosci:2 s2:1 noise:21 quadrature:3 neuronal:12 ny:1 wlog:3 bmbf:1 fails:1 inferring:5 position:14 cumming:1 comput:2 lie:1 pe:1 third:1 theorem:1 formula:1 annu:1 specific:1 normative:1 explored:2 physiological:1 intrinsic:14 consist:2 false:1 effectively:1 sponses:1 gained:1 importance:1 nat:1 conditioned:4 flavor:3 suited:2 explore:1 infinitely:1 visual:6 positional:1 horizontally:1 contained:5 scalar:2 applies:1 corresponds:1 ma:2 haefner:2 loosing:1 exposition:1 fisher:53 content:1 change:2 absence:1 typical:1 infinite:2 principal:3 called:2 total:5 discriminate:1 secondary:1 experimental:1 internal:1 support:1 modulated:1 evaluate:4 heaviside:1 correlated:2 |
3,387 | 4,067 | Lifted Inference Seen from the Other Side : The
Tractable Features
Abhay Jha Vibhav Gogate Alexandra Meliou Dan Suciu
Computer Science & Engineering
University of Washington
Washington, WA 98195
{abhaykj,vgogate,ameli,suciu}@cs.washington.edu
Abstract
Lifted Inference algorithms for representations that combine first-order logic and
graphical models have been the focus of much recent research. All lifted algorithms developed to date are based on the same underlying idea: take a standard
probabilistic inference algorithm (e.g., variable elimination, belief propagation
etc.) and improve its efficiency by exploiting repeated structure in the first-order
model. In this paper, we propose an approach from the other side in that we use
techniques from logic for probabilistic inference. In particular, we define a set of
rules that look only at the logical representation to identify models for which exact
efficient inference is possible. Our rules yield new tractable classes that could not
be solved efficiently by any of the existing techniques.
1
Introduction
Recently, there has been a push towards combining logical and probabilistic approaches in Artificial
Intelligence. It is motivated in large part by the representation and reasoning challenges in real world
applications: many domains such as natural language processing, entity resolution, target tracking
and Bio-informatics contain both rich relational structure, and uncertain and incomplete information.
Logic is good at handling the former but lacks the representation power to model the latter. On the
other hand, probability theory is good at modeling uncertainty but inadequate at handling relational
structure.
Many representations that combine logic and graphical models, a popular probabilistic representation [1, 2], have been proposed over the last few years. Among them, Markov logic networks
(MLNs) [2, 3] are arguably the most popular one. In its simplest form, an MLN is a set of weighted
first-order logic formulas, and can be viewed as a template for generating a Markov network. Specifically, given a set of constants that model objects in the domain, it represents a ground Markov
network that has one (propositional) feature for each grounding of each (first-order) formula with
constants in the domain.
Until recently, most inference schemes for MLNs were propositional: inference was carried out
by first constructing a ground Markov network and then running a standard probabilistic inference
algorithm over it. Unfortunately, the ground Markov network is typically quite large, containing
millions and sometimes even billions of inter-related variables. This precludes the use of existing
probabilistic inference algorithms, as they are unable to handle networks at this scale. Fortunately,
in some cases, one can perform lifted inference in MLNs without grounding out the domain. Lifted
inference treats sets of indistinguishable objects as one, and can yield exponential speed-ups over
propositional inference.
Many lifted inference algorithms have been proposed over the last few years (c.f. [4, 5, 6, 7]). All
of them are based on the same principle: take an existing probabilistic inference algorithm and try
Interpretation in English                | Feature                                  | Weight
Most people don't smoke                  | ¬Smokes(X)                               | 1.4
Most people don't have asthma            | ¬Asthma(X)                               | 2.3
Most people aren't friends               | ¬Friends(X,Y)                            | 4.6
People who have asthma don't smoke       | Asthma(X) ⇒ ¬Smokes(X)                   | 1.5
Asthmatics don't have smoker friends     | Asthma(X) ∧ Friends(X,Y) ⇒ ¬Smokes(Y)    | 1.1

Table 1: An example MLN (modified from [10]).
to lift it by carrying out inference over groups of random variables that behave similarly during the algorithm's execution. In other words, these algorithms are basically lifted versions of standard probabilistic inference algorithms. For example, first-order variable elimination [4, 5, 7] lifts the standard variable elimination algorithm [8, 9], while lifted Belief propagation [10] lifts Pearl's Belief propagation [11, 12].
In this paper, we depart from existing approaches, and present a new approach to lifted inference
from the other, logical side. In particular, we propose a set of rewriting rules that exploit the structure
of the logical formulas for inference. Each rule takes an MLN as input and expresses its partition
function as a combination of partition functions of simpler MLNs (if the preconditions of the rule
are satisfied). Inference is tractable if we can evaluate an MLN using these set of rules. We analyze
the time complexity of our algorithm and identify new tractable classes of MLNs, which have not
been previously identified.
Our work derives heavily from database literature in which inference techniques based on manipulating logical formulas (queries) have been investigated rigorously [13, 14]. However, the techniques
that they propose are not lifted. Our algorithm extends their techniques to lifted inference, and thus
can be applied to a strictly larger class of probabilistic models.
To summarize, our algorithm is truly lifted, namely we never ground the model, and it offers guarantees on the running time. This comes at a cost that we do not allow arbitrary MLNs. However, the
set of tractable MLNs is quite large, and includes MLNs that cannot be solved in PTIME by any of
the existing lifted approaches. The small toy MLN given in Table 1 is one such example. This MLN
is also out of reach of state-of-the-art propositional inference approaches such as variable elimination [8, 9], which are exponential in treewidth. This is because the treewidth of the ground Markov
network is polynomial in the number of constants in the domain.
2 Preliminaries
In this section we will cover some preliminaries and notation used in the rest of the paper. A feature
(fi ) is constructed using constants, variables, and predicates. Constants, denoted with small-case
letters (e.g. a), are used to represent a particular object. An upper-case letter (e.g. X) indicates a
variable associated with a particular domain (?X ), ranging over all objects in its domain. Predicate
symbols (e.g. Friends) are used to represent relationships between the objects. For example,
Friends(bob,alice) denotes that Alice (represented by constant alice) and Bob (constant
bob) are friends. An atom is a predicate symbol applied to a tuple of variables or constants. For
example, Friends(bob,X) and Friends(bob,alice) are atoms.
A conjunctive feature is of the form ∀X̄ r1 ∧ r2 ∧ ··· ∧ rk, where each ri is an atom or the negation of an atom, and X̄ are the variables used in the atoms. Similarly, a disjunctive feature is of the form ∀X̄ r1 ∨ r2 ∨ ··· ∨ rk. For example, fc : ∀X ¬Smokes(X) ∧ Asthma(X) is a conjunctive feature, while fd : ∀X ¬Smokes(X) ∨ ¬Friends(bob,X) is a disjunctive feature. The former asserts that everyone in the domain of X has asthma and does not smoke. The latter says that if a person smokes, he/she cannot be friends with Bob. A grounding of a feature is an assignment of the variables to constants from their domain. For example, ¬Smokes(alice) ∨ ¬Friends(bob,alice) is a grounding of the disjunctive feature fd. We assume that no predicate symbol occurs more than once in a feature, i.e., we do not allow self-joins. In this work we focus on features containing only universal quantifiers (∀), and will from now on drop the quantification symbol ∀ from the notation.
Given a set (wi, fi)i=1,...,k, where each fi is a conjunctive or disjunctive feature and wi ∈ R is a weight assigned to that feature, we define the following probability distribution over a possible world ω, in accordance with Markov Logic Networks (MLN):

    Pr(ω) = (1/Z) exp( Σi wi N(fi, ω) )    (1)
In Equation (1), a possible world ω can be any subset of tuples from the domain of predicates, Z, the normalizing constant, is called the partition function, and N(fi, ω) is the number of groundings of feature fi that are true in the world ω.

Table 1 gives an example of an MLN that has been modified from [10]. There is an implicit type-safety assumption in MLNs: if a predicate symbol occurs in more than one feature, then the variables used at the same position must have the same domain. In the MLN of Table 1, if ΔX = ΔY = {alice, bob}, then predicates Smokes and Asthma each have two tuples, while Friends has four. Hence, the total number of possible worlds is 2^{2+2+4} = 256. Consider the possible world ω below:
Smokes: bob
Asthma: bob, alice
Friends: (bob,bob), (bob,alice), (alice,bob), (alice,alice)

Then from Equation (1): Pr(ω) = (1/Z) exp(1.4·1 + 2.3·0 + 4.6·0 + 1.5·1 + 1.1·2). In this paper we focus on MLNs, but our algorithm is applicable to other first-order probabilistic models as well.
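For concreteness, the computation above can be checked mechanically. The following Python sketch (an illustration, not the authors' code) enumerates the groundings of each feature of Table 1 over the domain {alice, bob}, recovers the counts N(fi, ω) = (1, 0, 0, 1, 2) for the world shown, and evaluates the unnormalized weight exp(Σi wi N(fi, ω)):

import itertools
import math

domain = ["alice", "bob"]
# The world omega shown above: Smokes = {bob}, Asthma = {alice, bob},
# Friends = all four ordered pairs.
smokes = {"bob"}
asthma = {"alice", "bob"}
friends = set(itertools.product(domain, domain))

def n_true_groundings(formula, arity):
    """Count assignments of `arity` domain constants under which `formula` holds."""
    return sum(1 for args in itertools.product(domain, repeat=arity) if formula(*args))

features = [
    (1.4, lambda x: x not in smokes, 1),                         # not Smokes(X)
    (2.3, lambda x: x not in asthma, 1),                         # not Asthma(X)
    (4.6, lambda x, y: (x, y) not in friends, 2),                # not Friends(X,Y)
    (1.5, lambda x: (x not in asthma) or (x not in smokes), 1),  # Asthma(X) => not Smokes(X)
    # Asthma(X) and Friends(X,Y) => not Smokes(Y):
    (1.1, lambda x, y: not (x in asthma and (x, y) in friends) or (y not in smokes), 2),
]

counts = [n_true_groundings(f, a) for (_, f, a) in features]
print(counts)  # [1, 0, 0, 1, 2], matching the exponent 1.4*1 + 2.3*0 + 4.6*0 + 1.5*1 + 1.1*2
weight = math.exp(sum(w * c for (w, _, _), c in zip(features, counts)))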
3 Problem Statement
In this paper, we are interested in computing the partition function Z(M) of an MLN M. We formulate the partition function in a parametrized form, using the notion of Generating Functions of Counting Programs (CP). A Counting Program is a set of features f̄ along with indeterminates ᾱ, where αi is the indeterminate for fi. Given a counting program P = (fi, αi)i=1...k, we define its generating function (GF) FP as follows:

    FP(ᾱ) = Σω Πi αi^{N(fi, ω)}    (2)
The generating function as expressed in Eq. 2 is in general of exponential size in the domain of
objects. We want to characterize cases where we can express it more succinctly, and hence compute
the partition function faster. Let n be the size of the object domain, and k be the size of our program.
Then we are interested in the cases where FP can be computed with the following number of arithmetic operations:

Closed Form: polynomial in log(n), k.
Polynomial Expression: polynomial in n, k.
Pseudo-Polynomial Expression: polynomial in n for bounded k.

Computing FP refers to evaluating it for one instantiation of the parameters ᾱ. To illustrate the above cases, let k = 1. Then the pseudo-polynomial and polynomial expressions are equivalent. The program (R(X,Y), α) has GF (1 + α)^{|ΔX||ΔY|}, which is in closed form, while the program (R(X) ∧ S(X,Y) ∧ T(Y), α) has GF

    2^{|ΔX||ΔY|} Σ_{i=0}^{|ΔX|} (|ΔX| choose i) (1 + ((1+α)/2)^i)^{|ΔY|},

which is a polynomial expression. This polynomial does not have a closed form.
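Both expressions can be verified by brute force on a small domain. The sketch below (illustrative only; the exact GF is exponential to enumerate, which is precisely why the succinct expressions matter) compares the enumerated GF of (R(X) ∧ S(X,Y) ∧ T(Y), α) against the polynomial expression above at an arbitrary test point:

import itertools
from math import comb

n1, n2 = 2, 2    # |Delta_X|, |Delta_Y|
alpha = 0.7      # any test point

def brute_force_gf(a):
    total = 0.0
    X, Y = range(n1), range(n2)
    for R in itertools.product([0, 1], repeat=n1):
        for T in itertools.product([0, 1], repeat=n2):
            for S in itertools.product([0, 1], repeat=n1 * n2):
                # count the true groundings of R(X) ^ S(X,Y) ^ T(Y)
                n_true = sum(R[x] and S[x * n2 + y] and T[y] for x in X for y in Y)
                total += a ** n_true
    return total

def polynomial_expression(a):
    return 2 ** (n1 * n2) * sum(
        comb(n1, i) * (1 + ((1 + a) / 2) ** i) ** n2 for i in range(n1 + 1)
    )

print(brute_force_gf(alpha), polynomial_expression(alpha))  # the two values agree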
In the following section we demonstrate an algorithm that computes the generating function, and
allows us to identify cases where the generating function falls under one of the above categories.
4 Computing the Generating Function

Assume a Counting Program P = (fi, αi)i=1,...,k. In this section, we present some rules that can be used to compute the GF of a CP from simpler CPs. We can then upper bound the size of FP by the choice of rules used. The cases which cannot be evaluated by these rules are still open, and we do not know if the GF in those cases can be expressed succinctly.
We will require that all CPs are in normal form to simplify our analysis. Note that the normality
requirement does not change the class of CPs that can be solved in PTIME by our algorithm. This
is because every CP can be converted to an equivalent normal CP in PTIME.
4.1 Normal Counting Programs
Definition 4.1 A counting program is called normal if it satisfies the following properties:
1. There are no constants in any feature.
2. If two distinct atoms with the same predicate symbol have variables X and Y in the same position, then ΔX = ΔY.
It is easy to show that:
Proposition 4.2 Computing the partition function of an MLN can be reduced in PTIME to computing the generating function of a normal CP.
The following example demonstrates how to normalize a set of features.
Example 4.3 Consider a CP containing two features Friends(X,Y) and Friends(bob,Y). Clearly, it is not in normal form because the second feature contains a constant. To normalize it, we can replace the two features by: (i) Friends1(Y) ≡ Friends(bob,Y), and (ii) Friends2(Z,Y) ≡ Friends(X,Y), X ≠ bob, where the domain of Z is ΔZ = ΔX \ {bob}.
Note that we assume criterion 2 is satisfied in MLNs. During the course of the algorithm, it may get violated when we replace variables with constants, as we'll see, but we can use the above transformation whenever that happens. So from now on we assume that our CP is normalized.
4.2 Preliminaries and Operators
We proceed to establish notation and operators used by our algorithm. Given a feature f, we denote by Vars(f) the set of variables used in its atoms. We assume that variables used in different features must be different. Furthermore, without loss of generality, we assume numeric domains for each logical variable, namely ΔX = {1, . . . , |ΔX|}. We define a substitution f[a/X], where X ∈ Vars(f) and a ∈ ΔX, as the replacement of X with a in every atom of f. P[a/X] applies the substitution fi[a/X] to every feature fi in P. Note that after a substitution, the CP is no longer normal and therefore we may have to normalize it.
Define a relation U among the variables of a CP as follows: U(X, Y) iff there exist two atoms ri, rj with the same predicate, such that X ∈ Vars(ri), Y ∈ Vars(rj), and X and Y appear at the same position in ri and rj respectively. Let U* be the transitive closure of U. Note that U* is an equivalence relation. For a variable X, denote by Unify(X) its equivalence class under U*. For example, given two features Smokes(X) ∧ ¬Asthma(X) and ¬Smokes(Y) ∨ ¬Friends(Z,Y), we have Unify(X) = Unify(Y) = {X, Y}. Given a feature, a variable is a root variable iff it appears in every atom of the feature. For some variable X, the set X̄ = Unify(X) is a separator if, for every Y ∈ X̄, Y ∈ Vars(fi) implies that Y is a root variable for fi. In the last example, the set {X, Y} is a separator. Notice that, since the program is normal, we have ΔX = ΔY whenever Y ∈ Unify(X); thus, if X̄ is a separator, we write ΔX̄ for ΔY for any Y ∈ Unify(X). Two variables are called equivalent if there is a bijection from Unify(X) to Unify(Y) such that for any Z1 ∈ Unify(X) and its image Z2 ∈ Unify(Y), Z1 and Z2 always occur together.
Next, we define three operators used by our algorithm: splitting, conditioning, and Dirichlet convolution. We define a process Split(Y, k) that splits every feature in the CP that contains the variable Y into two features with disjoint domains: one with ΔY = {1, . . . , k} and the other with ΔY^c = ΔY − {1, . . . , k}. Both features retain the same indeterminate. Also, Cond(i, r, k) defines a process that removes an atom r from feature fi. Denote fi′ = fi \ {r}; then Cond(i, r, k) replaces fi with (i) two features (TRUE, αi^k) and (fi′, 1) if r ∈ fi, (ii) one feature (fi′, 1) if ¬r ∈ fi, and (iii) (fi′, αi) otherwise.

Given two polynomials P = Σ_{i=0}^{n} ai α^i and Q = Σ_{i=0}^{m} bi α^i, their Dirichlet convolution, P ⊗ Q, is defined as:

    P ⊗ Q = Σ_{i,j} ai bj α^{ij}
We define a new variant of this operator, P ⊗c Q, as:

    P ⊗c Q = α^{mn} · P′(1/α) ⊗ Q′(1/α),   where P′(1/α) = P(α)/α^n and Q′(1/α) = Q(α)/α^m.
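Operationally, both convolutions act on explicit coefficient vectors. Here is a minimal sketch (our own illustration; the paper does not prescribe a data structure), where P[i] holds the coefficient of α^i:

def dirichlet(P, Q):
    """P (x) Q: the coefficient a_i * b_j moves to exponent i * j."""
    out = [0] * ((len(P) - 1) * (len(Q) - 1) + 1)
    for i, a in enumerate(P):
        for j, b in enumerate(Q):
            out[i * j] += a * b
    return out

def dirichlet_c(P, Q):
    """P (x)_c Q: the disjunctive variant; equivalently, a_i * b_j moves to
    exponent n*m - (n - i)*(m - j), where n and m are the degrees of P and Q."""
    n, m = len(P) - 1, len(Q) - 1
    out = [0] * (n * m + 1)
    for i, a in enumerate(P):
        for j, b in enumerate(Q):
            out[n * m - (n - i) * (m - j)] += a * b
    return out

# Example: (1 + alpha) (x) (1 + alpha) = 3 + alpha (exponents 0*0, 0*1, 1*0, 1*1).
print(dirichlet([1, 1], [1, 1]))    # [3, 1]
print(dirichlet_c([1, 1], [1, 1]))  # [1, 3]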
4.3 The Algorithm
Our algorithm is basically a recursive application of a series of rewriting rules (see rules R1-R6
given below). Each (non-trivial) rule takes a CP as input and if the preconditions for applying it are
satisfied, then it expresses the generating function of the input CP as a combination of generating
functions of a few simpler CPs. The generating function of the resulting CPs can then be computed
(independently) by recursively calling the algorithm on each. The recursion terminates when the
generating function of the CP is trivial to compute (SUCCESS) or when none of the rules can be
applied (FAILURE). In the case when the algorithm succeeds, we analyze whether the GF is in closed
form or is a polynomial expression.
Next, we present our algorithm which is essentially a sequence of rules. Given a CP, we go through
the rules in order and apply the first applicable rule, which may require us to recursively compute
the GF of simpler CPs, for which we continue in the same way.
Our first rule uses feature and variable equivalence to reduce the size of the CP. Formally,
Rule R1 (Variable and Feature Equivalence Rule) If variables X and Y are equivalent, replace
the pair with a single new variable Z in every atom where they occur. Do the same for every pair of
variables in Unify(X), Unify(Y ).
If two features fi, fj are identical, then we replace them with a single feature fi with indeterminate αiαj that is the product of their individual indeterminates.
The correctness of Rule R1 is immediate from the fact that the CP after the transformation is equal
to the CP before the transformation.
Our second rule specifies some trivial manipulations.
Rule R2 (Trivial manipulations)
1. Eliminate FALSE features.
2. If a feature fi is TRUE, then FP = αi · F_{P−fi}.
3. If a program P is just a tuple, then FP = 1 + α, where α is the indeterminate.
4. If some feature fi has indeterminate αi = 1 (due to R6), then remove all the atoms in fi of a predicate symbol that is present in some other feature. Let N be the product of the domains of the rest of the atoms; then FP = 2^N · F_{P−fi}.
Our third rule utilizes the independence property. Intuitively, given two CPs which are independent,
namely they have no atoms in common, the generating function of the joint CP is simply the product
of the generating function of the two CPs. Formally,
Rule R3 (Independence Rule) If a CP P can be split into two programs P1 and P2 such that the two programs don't have any predicate symbols in common, then FP = FP1 · FP2.
The correctness of Rule R3 follows from the fact that every world ω of P can be written as a concatenation of two disjoint worlds, namely ω = (ω1 ∪ ω2), where ω1 and ω2 are the worlds from P1 and P2 respectively. Hence the GF can be written as:

    FP = Σ_{ω1∪ω2} Π_{fi∈P1} αi^{N(fi,ω1)} Π_{fi∈P2} αi^{N(fi,ω2)}
       = ( Σ_{ω1} Π_{fi∈P1} αi^{N(fi,ω1)} ) · ( Σ_{ω2} Π_{fi∈P2} αi^{N(fi,ω2)} ) = FP1 · FP2    (3)
The next rule allows us to split a feature if it has a component that is independent of the rest of the program. Note that while the previous rule splits the program into two independent sets of features, this rule enables us to split a single feature.

Rule R4 (Dirichlet Convolution Rule) If the program contains a feature f = f1 ∧ f2 such that f1 doesn't share any variables or symbols with any atom in the program, then FP = F_{f1} ⊗ F_{P−f+f2}. Similarly, if f = f1 ∨ f2, then FP = F_{f1} ⊗c F_{P−f+f2}.
We show the proof for a single feature f; the extension is straightforward. For this, we write the GF in a different form as

    FP(α) = Σ_i C(f, i) α^i,

where the coefficient C(f, i) is exactly the number of worlds where the feature f is satisfied i times. Now assume f = f1 ∧ f2; then in any given world ω, if f1 is satisfied n1 times and f2 is satisfied n2 times, then f is satisfied n1·n2 times. Hence

    Ff(α) = Σ_i C(f, i) α^i = Σ_i Σ_{i1,i2 : i1·i2 = i} C(f1, i1) C(f2, i2) α^i = F_{f1} ⊗ F_{f2}
Our next rule utilizes the similarity property in addition to the independence property. Given a set P of independent but equivalent CPs, the generating function of the joint CP equals the generating function of any CP Pi ∈ P raised to the power |P|. By definition, every instantiation ā of a separator X̄ defines a CP that has no tuple in common with the programs for X̄ = b̄, ā ≠ b̄. Moreover, all such CPs are equivalent (subject to a renaming of the variables and constants). Thus, we have the following rule:

Rule R5 (Power Rule) Let X̄ be a separator. Then FP = (F_{P[ā/X̄]})^{|ΔX̄|}.
Rule R5 generalizes the inversion and partial inversion operators given in [4, 5]. Its correctness
follows in a straight-forward manner from the correctness of the independence rule.
Our final rule generalizes the counting arguments presented in [5, 7]. Consider a singleton atom R(X). Conditioning over all possible truth assignments to all groundings of R(X) will yield 2^{|ΔX|} independent CPs. Thus, the GF can be written as a sum over the generating functions of 2^{|ΔX|} independent CPs. However, the resulting GF has exponential complexity. In some cases, however, the sum can be written efficiently by grouping together GFs that are equivalent.

Rule R6 (Generalized Binomial Rule) Let Pred(X) be a singleton atom in some feature. For every Y ∈ Unify(X) apply Split(Y, k). Then for every feature fi in the new program containing an atom r = Pred(Y) apply (fi, αi) ← Cond(i, r, k), and similarly (fi, αi) ← Cond(i, ¬r, |ΔY| − k) for those containing r = Pred(Y^c). Let the resulting program be Pk. Then

    FP = Σ_{k=0}^{|ΔX|} (|ΔX| choose k) · FPk.

Note that Pk is just one CP whose GF has a parameter k.
The proof is a little involved and omitted here for lack of space.
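The identity behind R6 can still be checked numerically on a small program. In the sketch below (a hypothetical one-feature program, chosen for illustration only), conditioning on the number k of true groundings of the singleton atom R and weighting by the binomial coefficient reproduces the full GF:

import itertools
from math import comb

n, alpha = 4, 0.5

def gf_full():
    # exact GF of P = {(R(X) ^ S(X), alpha)} over domain [n], by enumeration
    total = 0.0
    for R in itertools.product([0, 1], repeat=n):
        for S in itertools.product([0, 1], repeat=n):
            total += alpha ** sum(r and s for r, s in zip(R, S))
    return total

def gf_conditioned(k):
    # R is fixed: true on the first k constants (by symmetry); only S remains free
    total = 0.0
    for S in itertools.product([0, 1], repeat=n):
        total += alpha ** sum(S[:k])
    return total

lhs = gf_full()
rhs = sum(comb(n, k) * gf_conditioned(k) for k in range(n + 1))
print(lhs, rhs)  # both equal (3 + alpha)^n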
Having specified the rules and established their correctness, we now present the main result of this
paper:
Theorem 4.4 Let P be a Counting Program (CP).
• If P can be evaluated using only rules R1, R2, R3 and R5, then it has a closed form.
• If P can be evaluated using only rules R1, R2, R3, R4, and R5, then it has a polynomial expression.
• If P can be evaluated using rules R1 to R6, then it admits a pseudo-polynomial expression.

Computing the Dirichlet convolution (Rule R4) requires going through all the coefficients, hence it takes linear time. Thus, we do not have a closed form solution when we apply Rule R4. Rule R6 implies that we have to recurse over more than one program, hence its repeated application can mean we have to solve a number of programs that is exponential in the size of the program. Therefore, we can only guarantee a pseudo-polynomial expression if we use this rule.
We can now see the effectiveness of generating functions. When we want to recurse over a set of features, simply keeping the partition function for smaller features is not enough; we need more information than that. In particular, we need all the coefficients of the generating function. For example, we can't compute the partition function for R(X) ∧ S(Y) with just the partition functions of R(X) and S(Y). However, if we have their GFs, the GF of f = R(X) ∧ S(Y) is just a Dirichlet convolution of the GFs of R(X) and S(Y). One could also compute the GF of f using a dynamic programming algorithm, which keeps all the coefficients of the generating function. Generating functions let us store this information in a very succinct way. For example, if the GF is (1 + α)^n, then it is much simpler to use this representation than to keep all n + 1 binomial coefficients (n choose k), k = 0, . . . , n.
[Figure 1: Our approach vs. FOVE for increasing domain sizes (30% evidence), with an extrapolation of FOVE's runtime; X- and Y-axes drawn on a log scale.]

[Figure 2: Our approach (domain sizes 13 and 100) vs. FOVE (domain size 13) as the evidence increases; Y-axis drawn on a log scale.]
4.4 Examples

We illustrate our approach through examples. We will use simple predicate symbols like R, S, T, and assume the domain of all variables is [n]. Note that for a single tuple, say R(a) with indeterminate α, GF = 1 + α from rule R2. Now suppose we have a simple program like P = {(R(X), α)} (a single feature R(X) with indeterminate α). Then from rule R5: FP = (F_{P[a/X]})^n = (1 + α)^n.

These are both examples of programs with closed form GFs. We can evaluate FP with O(log(n)) arithmetic operations, while if we were to write the same GF as Σ_k (n choose k) α^k it would require O(n log(n)) operations. The key insight of our approach is representing GFs succinctly. Now assume the following program P with multiple features:

    R(X1) ∧ S(X1, Y1)    [indeterminate α]
    S(X2, Y2) ∧ T(X2)    [indeterminate β]

Note that (X1, X2) form a separator. Hence, using R5, FP = (F_{P[(a,a)/(X1,X2)]})^n. Now consider the program P′ = P[(a, a)/(X1, X2)]:

    R(a) ∧ S(a, Y1)    [α]
    S(a, Y2) ∧ T(a)    [β]

Using R4 twice, for R(a) and T(a), along with R2 (to get the GFs for R(a) and T(a)), we get FP′ = (1 + α) ⊗ (1 + β) ⊗ FP″, where P″ is

    S(a, Y1)    [α]
    S(a, Y2)    [β]

which is the same as (S(a, Y), αβ) using R1. The GF for this program, as shown earlier, is (1 + αβ)^n. Now putting values back together, we get:

    FP′ = (1 + α) ⊗ (1 + β) ⊗ (1 + αβ)^n = 2^n + (1 + α)^n + (1 + β)^n + (1 + αβ)^n.

Finally, for the original program: FP = (FP′)^n = (2^n + (1 + α)^n + (1 + β)^n + (1 + αβ)^n)^n. Note that this is also in closed form.
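The closed form for FP′ can be confirmed by enumerating all worlds for a small n (a brute-force sketch, not part of the paper's algorithm):

import itertools

n, alpha, beta = 3, 0.4, 1.7

def gf_p_prime():
    total = 0.0
    for R in (0, 1):
        for T in (0, 1):
            for S in itertools.product([0, 1], repeat=n):
                n1 = sum(R and s for s in S)  # true groundings of R(a) ^ S(a,y) over y
                n2 = sum(s and T for s in S)  # true groundings of S(a,y) ^ T(a) over y
                total += alpha ** n1 * beta ** n2
    return total

closed = 2 ** n + (1 + alpha) ** n + (1 + beta) ** n + (1 + alpha * beta) ** n
print(gf_p_prime(), closed)  # the two values agree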
5 Experiments
The algorithm that we described is based on computing the generating functions of counting programs to perform lifted inference, which approaches the problem from a completely different angle
than existing techniques. Due to this novelty, we can solve MLNs that are intractable for other existing lifted algorithms such as first-order variable elimination (FOVE) [5, 6, 7]. Specifically, we
demonstrate with our experiments that on some MLNs we indeed outperform FOVE by orders of
magnitude.
We ran our algorithm on the MLN given in Table 1. The set of features used in this MLN fall into
the class of counting programs having a pseudo-polynomial generating function. This is the most
general class of features our approach covers, and here our algorithm does not give any guarantees
as evidence increases. The evidence in our experiments is randomly generated for the two tables
Asthma and Smokes. In our experiments we study the influence of two factors on the runtime:
Size of Domain: Identifying tractable features is particularly important for inference in first order
models, because (i) grounding can produce very big graphical models and (ii) the treewidth of these
models could be very high. As the size of domain increases, our approach should scale better than
the existing techniques, which can't do lifted inference on this MLN. All the predicates in this MLN
are only defined on one domain, that of persons.
Evidence: Since this MLN falls into the class of features for which we give no guarantees as
evidence increases, we want to study the behavior of our algorithm in the presence of increasingly
more evidence.
Fig. 1 displays the execution time of our CP algorithm vs. the FOVE approach for domain sizes varying from 5 to 100, in the presence of 30% evidence. All results display average runtimes over 15 repetitions with the same parameter settings. FOVE cannot do lifted inference on this MLN and resorts to grounding. Thus, it could only execute up to the domain size of 18; after that it consistently ran out of memory. The figure also displays the extrapolated data points for FOVE's
behavior in larger domain sizes, and shows its runtime growing exponentially. Our approach on the
other hand dominates FOVE by orders of magnitude for those small domains, and finishes within
seconds even for domains of size 100. Note that the complexity of our algorithm for this MLN is
quadratic. Hence it looks linear on the log-scale.
Fig. 2 demonstrates the behavior of the algorithms as the amount of evidence is increased from 0 to 100%. We chose a domain size of 13 to run FOVE, since it couldn't terminate for higher domain
sizes. The figure displays the runtime of our algorithm for domain sizes of 13 and 100. Although for
this class of features we do not give guarantees on the running time for large evidence, our algorithm
still performs well as the evidence increases. In fact after a point the algorithm gets faster. This is
because the main time-consuming rule used in this MLN is R4. R4 chooses a singleton atom in
the last feature, say Asthma, and eliminates it. This involves time complexity proportional to the
domain of the atom and the running time of the smaller MLN obtained after removing that atom. As
evidence increases, the atom corresponding to Asthma may be split into many smaller predicates;
but the domain size of each predicate also keeps getting smaller. In particular with 100% evidence,
the domain is just 1 and therefore R6 takes constant time!
6 Conclusion and Future Work
We have presented a novel approach to lifted inference that uses the theory of generating functions
to do efficient inference. We also give guarantees on the theoretical complexity of our approach.
This is the first work that tries to address the complexity of lifted inference in terms of only the
features (formulas). This is beneficial because using a set of tractable features ensures that inference
is always efficient and hence it will scale to large domains.
Several avenues remain for future work. For instance, a feature such as transitive closure (e.g., Friends(X,Y) ∧ Friends(Y,Z) ⇒ Friends(X,Z)), which occurs quite often in many real world applications, is intractable for our algorithm. In future work, we would like to address the
complexity of such features by characterizing the completeness of our approach. Another avenue for
future work is extending other lifted inference approaches [5, 7] with rules that we have developed
in this paper. Unlike our algorithm, the aforementioned algorithms are complete. Namely, when
lifted inference is not possible, they ground the domain and resort to propositional inference. But
even in those cases, just running a propositional algorithm that does not exploit symmetry is not very
efficient. In particular, ground networks generated by logical formulas have some repetition in their
structure that is difficult to capture after grounding. Take for example R(X,Y) ∨ S(Z,Y). This feature is in PTIME by our algorithm, but if we create a ground Markov network by grounding this
feature then it can have unbounded treewidth (as big as the domain itself). We think our approach
can provide an insight about how to best construct a graphical model from the groundings of a
logical formula. This is also another interesting piece of future work that our algorithm motivates.
References
[1] Lise Getoor and Ben Taskar. Introduction to Statistical Relational Learning. The MIT Press,
2007.
[2] Pedro Domingos and Daniel Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Morgan and Claypool, 2009.
[3] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1-2):107-136, 2006.
[4] David Poole. First-order probabilistic inference. In IJCAI'03: Proceedings of the 18th International Joint Conference on Artificial Intelligence, pages 985-991, San Francisco, CA, USA, 2003. Morgan Kaufmann Publishers Inc.
[5] Rodrigo De Salvo Braz, Eyal Amir, and Dan Roth. Lifted first-order probabilistic inference. In IJCAI'05: Proceedings of the 19th International Joint Conference on Artificial Intelligence, pages 1319-1325, San Francisco, CA, USA, 2005. Morgan Kaufmann Publishers Inc.
[6] Brian Milch, Luke S. Zettlemoyer, Kristian Kersting, Michael Haimes, and Leslie Pack Kaelbling. Lifted probabilistic inference with counting formulas. In AAAI'08: Proceedings of the 23rd National Conference on Artificial Intelligence, pages 1062-1068. AAAI Press, 2008.
[7] K. S. Ng, J. W. Lloyd, and W. T. Uther. Probabilistic modelling, inference and learning using logical theories. Annals of Mathematics and Artificial Intelligence, 54(1-3):159-205, 2008.
[8] Nevin Zhang and David Poole. A simple approach to Bayesian network computations. In Proceedings of the Tenth Canadian Conference on Artificial Intelligence, pages 171-178, 1994.
[9] R. Dechter. Bucket elimination: A unifying framework for reasoning. Artificial Intelligence, 113:41-85, 1999.
[10] Parag Singla and Pedro Domingos. Lifted first-order belief propagation. In AAAI'08: Proceedings of the 23rd National Conference on Artificial Intelligence, pages 1094-1099. AAAI Press, 2008.
[11] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[12] Kevin P. Murphy, Yair Weiss, and Michael I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence (UAI), pages 467-475, 1999.
[13] Nilesh Dalvi and Dan Suciu. Management of probabilistic data: foundations and challenges. In PODS, pages 1-12, New York, NY, USA, 2007. ACM Press.
[14] Karl Schnaitter, Nilesh Dalvi, and Dan Suciu. Computing query probability with incidence algebras. In PODS, 2007.
3,388 | 4,068 | Rates of convergence for the cluster tree
Kamalika Chaudhuri
UC San Diego
[email protected]
Sanjoy Dasgupta
UC San Diego
[email protected]
Abstract
For a density f on Rd, a high-density cluster is any connected component of {x : f(x) ≥ λ}, for some λ > 0. The set of all high-density clusters forms a hierarchy called the cluster tree of f. We present a procedure for estimating the cluster tree
given samples from f . We give finite-sample convergence rates for our algorithm,
as well as lower bounds on the sample complexity of this estimation problem.
1 Introduction
A central preoccupation of learning theory is to understand what statistical estimation based on a
finite data set reveals about the underlying distribution from which the data were sampled. For
classification problems, there is now a well-developed theory of generalization. For clustering,
however, this kind of analysis has proved more elusive.
Consider for instance k-means, possibly the most popular clustering procedure in use today. If
this procedure is run on points X1 , . . . , Xn from distribution f , and is told to find k clusters, what
do these clusters reveal about f ? Pollard [8] proved a basic consistency result: if the algorithm
always finds the global minimum of the k-means cost function (which is NP-hard, see Theorem 3
of [3]), then as n → ∞, the clustering is the globally optimal k-means solution for f. This result,
however impressive, leaves the fundamental question unanswered: is the best k-means solution to f
an interesting or desirable quantity, in settings outside of vector quantization?
In this paper, we are interested in clustering procedures whose output on a finite sample converges to "natural clusters" of the underlying distribution f. There are doubtless many meaningful ways to define natural clusters. Here we follow some early work on clustering (for instance, [5]) by associating clusters with high-density connected regions. Specifically, a cluster of density f is any connected component of {x : f(x) ≥ λ}, for any λ > 0. The collection of all such clusters forms an (infinite) hierarchy called the cluster tree (Figure 1).
Are there hierarchical clustering algorithms which converge to the cluster tree? Previous theory
work [5, 7] has provided weak consistency results for the single-linkage clustering algorithm, while
other work [13] has suggested ways to overcome the deficiencies of this algorithm by making it
more robust, but without proofs of convergence. In this paper, we propose a novel way to make
single-linkage more robust, while retaining most of its elegance and simplicity (see Figure 3). We
establish its finite-sample rate of convergence (Theorem 6); the centerpiece of our argument is a
result on continuum percolation (Theorem 11). We also give a lower bound on the problem of
cluster tree estimation (Theorem 12), which matches our upper bound in its dependence on most of
the parameters of interest.
2 Definitions and previous work
Let X be a subset of Rd. We exclusively consider Euclidean distance on X, denoted ‖·‖. Let B(x, r) be the closed ball of radius r around x.
[Figure 1: A probability density f on R, and three of its clusters: C1, C2, and C3, obtained at levels λ1, λ2, λ3.]
2.1 The cluster tree

We start with notions of connectivity. A path P in S ⊆ X is a continuous 1-1 function P : [0,1] → S. If x = P(0) and y = P(1), we write x ⇝_P y, and we say that x and y are connected in S. This relation ("connected in S") is an equivalence relation that partitions S into its connected components. We say S ⊆ X is connected if it has a single connected component.

The cluster tree is a hierarchy each of whose levels is a partition of a subset of X, which we will occasionally call a subpartition of X. Write Π(X) = {subpartitions of X}.

Definition 1 For any f : X → R, the cluster tree of f is a function Cf : R → Π(X) given by

    Cf(λ) = connected components of {x ∈ X : f(x) ≥ λ}.

Any element of Cf(λ), for any λ, is called a cluster of f.

For any λ, Cf(λ) is a set of disjoint clusters of X. They form a hierarchy in the following sense.

Lemma 2 Pick any λ′ ≤ λ. Then:
1. For any C ∈ Cf(λ), there exists C′ ∈ Cf(λ′) such that C ⊆ C′.
2. For any C ∈ Cf(λ) and C′ ∈ Cf(λ′), either C ⊆ C′ or C ∩ C′ = ∅.

We will sometimes deal with the restriction of the cluster tree to a finite set of points x1, . . . , xn. Formally, the restriction of a subpartition C ∈ Π(X) to these points is defined to be C[x1, . . . , xn] = {C ∩ {x1, . . . , xn} : C ∈ C}. Likewise, the restriction of the cluster tree is Cf[x1, . . . , xn] : R → Π({x1, . . . , xn}), where Cf[x1, . . . , xn](λ) = Cf(λ)[x1, . . . , xn]. See Figure 2 for an example.
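As a concrete illustration of Definition 1 (a sketch with an arbitrary two-mode density, not taken from the paper), the clusters at level λ of a density evaluated on a one-dimensional grid are simply the maximal runs of grid points in the superlevel set {f ≥ λ}:

import numpy as np

xs = np.linspace(-4, 4, 801)
f = 0.6 * np.exp(-(xs + 1.5) ** 2) + 0.4 * np.exp(-(xs - 1.5) ** 2)  # two modes

def clusters_at_level(lam):
    idx = np.flatnonzero(f >= lam)        # grid points in the superlevel set
    if idx.size == 0:
        return []
    breaks = np.flatnonzero(np.diff(idx) > 1)
    runs = np.split(idx, breaks + 1)      # maximal contiguous runs = components
    return [(xs[r[0]], xs[r[-1]]) for r in runs]

print(len(clusters_at_level(0.08)))  # 1: the two modes are still joined at this level
print(len(clusters_at_level(0.3)))   # 2: one cluster around each mode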
2.2 Notion of convergence and previous work

Suppose a sample Xn ⊆ X of size n is used to construct a tree Cn that is an estimate of Cf. Hartigan [5] provided a very natural notion of consistency for this setting.

Definition 3 For any sets A, A′ ⊆ X, let An (resp., A′n) denote the smallest cluster of Cn containing A ∩ Xn (resp., A′ ∩ Xn). We say Cn is consistent if, whenever A and A′ are different connected components of {x : f(x) ≥ λ} (for some λ > 0), P(An is disjoint from A′n) → 1 as n → ∞.

It is well known that if Xn is used to build a uniformly consistent density estimate fn (that is, sup_x |fn(x) − f(x)| → 0), then the cluster tree C_{fn} is consistent; see the appendix for details. The big problem is that C_{fn} is not easy to compute for typical density estimates fn: imagine, for instance, how one might go about trying to find level sets of a mixture of Gaussians!
[Figure 2: A probability density f, and the restriction of Cf to a finite set of eight points.]

Wong and Lane [14] have an efficient procedure that tries to approximate C_{fn} when fn is a k-nearest neighbor density estimate, but they have not shown that it preserves the consistency property of C_{fn}.
There is a simple and elegant algorithm that is a plausible estimator of the cluster tree: single linkage (or Kruskal's algorithm); see the appendix for pseudocode. Hartigan [5] has shown that it is consistent in one dimension (d = 1). But he also demonstrates, by a lovely reduction to continuum percolation, that this consistency fails in higher dimension d ≥ 2. The problem is the requirement that A ∩ Xn ⊆ An: by the time the clusters are large enough that one of them contains all of A, there is a reasonable chance that this cluster will be so big as to also contain part of A′.

With this insight, Hartigan defines a weaker notion of fractional consistency, under which An (resp., A′n) need not contain all of A ∩ Xn (resp., A′ ∩ Xn), but merely a sizeable chunk of it, and ought to be very close (at distance → 0 as n → ∞) to the remainder. He then shows that single linkage has this weaker consistency property for any pair A, A′ for which the ratio of inf{f(x) : x ∈ A ∪ A′} to sup{inf{f(x) : x ∈ P} : paths P from A to A′} is sufficiently large. More recent work by Penrose [7] closes the gap and shows fractional consistency whenever this ratio is > 1.
A more robust version of single linkage has been proposed by Wishart [13]: when connecting points
at distance r from each other, only consider points that have at least k neighbors within distance r
(for some k > 2). Thus initially, when r is small, only the regions of highest density are available for
linkage, while the rest of the data set is ignored. As r gets larger, more and more of the data points
become candidates for linkage. This scheme is intuitively sensible, but Wishart does not provide a
proof of convergence. Thus it is unclear how to set k, for instance.
Stuetzle and Nugent [12] have an appealing top-down scheme for estimating the cluster tree, along
with a post-processing step (called runt pruning) that helps identify modes of the distribution. The
consistency of this method has not yet been established.
Several recent papers [6, 10, 9, 11] have considered the problem of recovering the connected components of {x : f(x) ≥ λ} for a user-specified λ: the flat version of our problem. In particular, the algorithm of [6] is intuitively similar to ours, though they use a single graph in which each point is connected to its k nearest neighbors, whereas we have a hierarchy of graphs in which each point is connected to other points at distance ≤ r (for various r). Interestingly, k-NN graphs are valuable
for flat clustering because they can adapt to clusters of different scales (different average interpoint
distances). But they are challenging to analyze and seem to require various regularity assumptions
on the data. A pleasant feature of the hierarchical setting is that different scales appear at different
levels of the tree, rather than being collapsed together. This allows the use of r-neighbor graphs, and
makes possible an analysis that has minimal assumptions on the data.
3 Algorithm and results
In this paper, we consider a generalization of Wishart's scheme and of single linkage, shown in Figure 3. It has two free parameters: k and α. For practical reasons, it is of interest to keep these as
1. For each xi set rk(xi) = inf{r : B(xi, r) contains k data points}.
2. As r grows from 0 to ∞:
   (a) Construct a graph Gr with nodes {xi : rk(xi) ≤ r}.
       Include edge (xi, xj) if ‖xi − xj‖ ≤ αr.
   (b) Let Ĉ(r) be the connected components of Gr.

Figure 3: Algorithm for hierarchical clustering. The input is a sample Xn = {x1, . . . , xn} from density f on X. Parameters k and α need to be set. Single linkage is (α = 1, k = 2). Wishart suggested α = 1 and larger k.
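A direct, if unoptimized, implementation of Figure 3 is straightforward. The sketch below (our own rendering, not the authors' code) computes the components Ĉ(r) of Gr for one radius using a union-find structure; sweeping r over the sorted pairwise distances yields the full hierarchy. Single linkage corresponds to k = 2, α = 1.

import numpy as np

def components_at_radius(X, k, alpha, r):
    """X: (n, d) array of samples; k, alpha: the two parameters; r: current radius."""
    n = len(X)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    r_k = np.sort(dists, axis=1)[:, k - 1]   # r_k(x_i): distance to k-th nearest point (incl. self)
    active = np.flatnonzero(r_k <= r)        # vertices of G_r

    parent = {i: i for i in active}          # union-find over active points
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i

    for a in active:                         # add edges of length <= alpha * r
        for b in active:
            if a < b and dists[a, b] <= alpha * r:
                parent[find(a)] = find(b)

    comps = {}
    for i in active:
        comps.setdefault(find(i), []).append(i)
    return list(comps.values())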
small as possible. We provide finite-sample convergence rates for all 1 ≤ α ≤ 2, and we can achieve k ∼ d log n, which we conjecture to be the best possible, if α > √2. Our rates for α = 1 force k to be much larger, exponential in d. It is a fascinating open problem to determine whether the setting (α = 1, k ∼ d log n) yields consistency.
3.1 A notion of cluster salience
Suppose density f is supported on some subset X of Rd . We will show that the hierarchical clustering procedure is consistent in the sense of Definition 3. But the more interesting question is, what
clusters will be identified from a finite sample? To answer this, we introduce a notion of salience.
The first consideration is that a cluster is hard to identify if it contains a thin "bridge" that would make it look disconnected in a small sample. To control this, we consider a "buffer zone" of width σ around the clusters.
Definition 4 For Z ⊆ Rd and σ > 0, write Zσ = Z + B(0, σ) = {y ∈ Rd : inf_{z∈Z} ‖y − z‖ ≤ σ}.
An important technical point is that Z? is a full-dimensional set, even if Z itself is not.
Second, the ease of distinguishing two clusters A and A′ depends inevitably upon the separation between them. To keep things simple, we'll use the same σ as a separation parameter.

Definition 5 Let f be a density on X ⊆ Rd. We say that A, A′ ⊆ X are (σ, ε)-separated if there exists S ⊆ X (separator set) such that:
• Any path in X from A to A′ intersects S.
• sup_{x∈Sσ} f(x) < (1 − ε) inf_{x∈Aσ∪A′σ} f(x).

Under this definition, Aσ and A′σ must lie within X, otherwise the right-hand side of the inequality is zero. However, Sσ need not be contained in X.
3.2 Consistency and finite-sample rate of convergence

Here we state the result for α > √2 and k ∼ d log n. The analysis section also has results for 1 ≤ α ≤ 2 and k ∼ (2/α)^d d log n.

Theorem 6 There is an absolute constant C such that the following holds. Pick any δ, ε > 0, and run the algorithm on a sample Xn of size n drawn from f, with settings

    √(2(1 + 2ε²/√d)) ≤ α ≤ 2   and   k = C · (d log n / ε²) · log²(1/δ).

Then there is a mapping r : [0, ∞) → [0, ∞) such that with probability at least 1 − δ, the following holds uniformly for all pairs of connected subsets A, A′ ⊆ X: If A, A′ are (σ, ε)-separated (for ε and some σ > 0), and if

    λ := inf_{x ∈ Aσ ∪ A′σ} f(x) ≥ (1 / (vd (σ/2)^d)) · (k/n) · (1 + ε/2),    (*)

where vd is the volume of the unit ball in Rd, then:
1. Separation. A ∩ Xn is disconnected from A′ ∩ Xn in G_{r(λ)}.
2. Connectedness. A ∩ Xn and A′ ∩ Xn are each individually connected in G_{r(λ)}.
The two parts of this theorem, separation and connectedness, are proved in Sections 3.3 and 3.4. We mention in passing that this finite-sample result implies consistency (Definition 3): as n → ∞, take kn = (d log n)/εn² with any schedule of (εn : n = 1, 2, . . .) such that εn → 0 and kn/n → 0. Under mild conditions, any two connected components A, A′ of {f ≥ λ} are (σ, ε)-separated for some σ, ε > 0 (see appendix); thus they will get distinguished for sufficiently large n.
3.3 Analysis: separation
The cluster tree algorithm depends heavily on the radii rk(x): the distance within which x's nearest
k neighbors lie (including x itself). Thus the empirical probability mass of B(x, rk (x)) is k/n. To
show that rk (x) is meaningful, we need to establish that the mass of this ball under density f is also,
very approximately, k/n. The uniform convergence of these empirical counts follows from the fact
that balls in Rd have finite VC dimension, d + 1. Using uniform Bernstein-type bounds, we get a
set of basic inequalities that we use repeatedly.
Lemma 7 Assume k ≥ d log n, and fix some δ > 0. Then there exists a constant Cδ such that with probability > 1 − δ, every ball B ⊆ Rd satisfies the following conditions:

    f(B) ≥ (Cδ d log n) / n                   ⟹  fn(B) > 0
    f(B) ≥ k/n + (Cδ/n) √(k d log n)          ⟹  fn(B) ≥ k/n
    f(B) ≤ k/n − (Cδ/n) √(k d log n)          ⟹  fn(B) < k/n

Here fn(B) = |Xn ∩ B|/n is the empirical mass of B, while f(B) = ∫_B f(x) dx is its true mass.

PROOF: See appendix. Cδ = 2Co log(2/δ), where Co is the absolute constant from Lemma 16.

We will henceforth think of δ as fixed, so that we do not have to repeatedly quantify over it.
Lemma 8 Pick 0 < r < 2σ/(α + 2) such that

    vd r^d λ ≥ k/n + (Cδ/n) √(k d log n)
    vd r^d λ (1 − ε) < k/n − (Cδ/n) √(k d log n)

(recall that vd is the volume of the unit ball in Rd). Then with probability > 1 − δ:
1. Gr contains all points in (Aσ−r ∪ A′σ−r) ∩ Xn and no points in Sσ−r ∩ Xn.
2. A ∩ Xn is disconnected from A′ ∩ Xn in Gr.

PROOF: For (1), any point x ∈ (Aσ−r ∪ A′σ−r) has f(B(x, r)) ≥ vd r^d λ; and thus, by Lemma 7, has at least k neighbors within radius r. Likewise, any point x ∈ Sσ−r has f(B(x, r)) < vd r^d λ(1 − ε); and thus, by Lemma 7, has strictly fewer than k neighbors within distance r.

For (2), since points in Sσ−r are absent from Gr, any path from A to A′ in that graph must have an edge across Sσ−r. But any such edge has length at least 2(σ − r) > αr and is thus not in Gr.
Definition 9 Define r(λ) to be the value of r for which vd r^d λ = k/n + (Cδ/n) √(k d log n).

To satisfy the conditions of Lemma 8, it suffices to take k ≥ 4Cδ² (d/ε²) log n; this is what we use.
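Definition 9 can be read off numerically; a small sketch (treating the constant Cδ from Lemma 7 as a given input):

from math import pi, log, gamma, sqrt

def r_of_lambda(lam, n, k, d, C_delta):
    v_d = pi ** (d / 2) / gamma(d / 2 + 1)           # volume of the unit ball in R^d
    mass = k / n + (C_delta / n) * sqrt(k * d * log(n))
    return (mass / (v_d * lam)) ** (1 / d)           # solve v_d r^d lam = mass for r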
[Figure 4: Left: P is a path from x to x′, and π(xi) is the point furthest along the path that is within distance r of xi. Right: The next point, xi+1 ∈ Xn, is chosen from a slab of B(π(xi), r) that is perpendicular to xi − π(xi) and has width 2εr/√d.]
3.4 Analysis: connectedness
We need to show that points in A (and similarly A′) are connected in G_{r(λ)}. First we state a simple bound (proved in the appendix) that works if α = 2 and k ∼ d log n; later we consider smaller α.

Lemma 10 Suppose 1 ≤ α ≤ 2. Then with probability ≥ 1 − δ, A ∩ Xn is connected in Gr whenever r ≤ 2σ/(2 + α) and the conditions of Lemma 8 hold, and

    vd r^d λ ≥ (2/α)^d · (Cδ d log n) / n.
Comparing this to the definition of r(λ), we see that choosing α = 1 would entail k ≥ 2^d, which is undesirable. We can get a more reasonable setting of k ∼ d log n by choosing α = 2, but we'd like α to be as small as possible. A more refined argument shows that α ≈ √2 is enough.
Theorem 11 Suppose α² ≥ 2(1 + ε/√d), for some 0 < ε ≤ 1. Then, with probability > 1 − δ, A ∩ Xn is connected in Gr whenever r ≤ σ/2 and the conditions of Lemma 8 hold, and

    vd r^d λ ≥ (8/ε) · (Cδ d log n) / n.
PROOF: We have already made heavy use of uniform convergence over balls. We now also require a more complicated class G, each element of which is the intersection of an open ball and a slab defined by two parallel hyperplanes. Formally, each of these functions is defined by a center μ and a unit direction u, and is the indicator function of the set

    {z ∈ Rd : ‖z − μ‖ < r, |(z − μ) · u| ≤ εr/√d}.

We will describe any such set as "the slab of B(μ, r) in direction u". A simple calculation (see Lemma 4 of [4]) shows that the volume of this slab is at least ε/4 that of B(x, r). Thus, if the slab lies entirely in Aσ, its probability mass is at least (ε/4) vd r^d λ. By uniform convergence over G (which has VC dimension 2d), we can then conclude (as in Lemma 7) that if (ε/4) vd r^d λ ≥ (2Cδ d log n)/n, then with probability at least 1 − δ, every such slab within A contains at least one data point.
Pick any x, x′ ∈ A ∩ Xn; there is a path P in A with x ⇝_P x′. We'll identify a sequence of data points x0 = x, x1, x2, . . ., ending in x′, such that for every i, point xi is active in Gr and ‖xi − xi+1‖ ≤ αr. This will confirm that x is connected to x′ in Gr.

To begin with, recall that P is a continuous 1-1 function from [0,1] into A. We are also interested in the inverse P⁻¹, which sends a point on the path back to its parametrization in [0,1]. For any point y ∈ X, define N(y) to be the portion of [0,1] whose image under P lies in B(y, r): that is, N(y) = {0 ≤ z ≤ 1 : P(z) ∈ B(y, r)}. If y is within distance r of P, then N(y) is nonempty. Define π(y) = P(sup N(y)), the furthest point along the path within distance r of y (Figure 4, left).

The sequence x0, x1, x2, . . . is defined iteratively; x0 = x, and for i = 0, 1, 2, . . .:
• If ‖xi − x′‖ ≤ αr, set xi+1 = x′ and stop.
• By construction, xi is within distance r of path P and hence N(xi) is nonempty.
• Let B be the open ball of radius r around π(xi). The slab of B in direction xi − π(xi) must contain a data point; this is xi+1 (Figure 4, right).

The process eventually stops because each π(xi+1) is strictly further along path P than π(xi); formally, P⁻¹(π(xi+1)) > P⁻¹(π(xi)). This is because ‖xi+1 − π(xi)‖ < r, so by continuity of the function P, there are points further along the path (beyond π(xi)) whose distance to xi+1 is still < r. Thus xi+1 is distinct from x0, x1, . . . , xi. Since there are finitely many data points, the process must terminate, so the sequence {xi} does constitute a path from x to x′.
Each xi lies in Ar ? A??r and is thus active in Gr (Lemma 8). Finally, the distance between
successive points is:
kxi ? xi+1 k2
=
=
?
kxi ? ?(xi ) + ?(xi ) ? xi+1 k2
kxi ? ?(xi )k2 + k?(xi ) ? xi+1 k2 + 2(xi ? ?(xi )) ? (?(xi ) ? xi+1 )
2?r2
? ?2 r 2 ,
2r2 + ?
d
where the second-last inequality comes from the definition of slab.
To complete the proof of Theorem 6, take k = 4C_δ (d/ε²) log n, which satisfies the requirements of Lemma 8 as well as those of Theorem 11, using ε′ = 2ε². The relationship that defines r(λ) (Definition 9) then translates into
    v_d r^d λ = (k/n) (1 + ε/2).
This shows that clusters at density level λ emerge when the growing radius r of the cluster tree algorithm reaches roughly (k/(λ v_d n))^{1/d}. In order for (σ, ε)-separated clusters to be distinguished, we need this radius to be at most σ/2; this is what yields the final lower bound on λ.
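For concreteness, here is a minimal sketch of the procedure analyzed in this section (our own code, not the paper's; we assume the Figure 3 variant in which, for each radius r, the active points are those whose k-th nearest neighbor distance is at most r, and active pairs within distance αr are connected):

import numpy as np
from scipy.spatial.distance import pdist, squareform

def cluster_tree(X, k, alpha=np.sqrt(2)):
    # For each radius r among the sorted k-NN radii, return the connected
    # components of G_r with vertices {i : r_k(x_i) <= r} and edges
    # {(i, j) : ||x_i - x_j|| <= alpha * r}.  O(n^2) per level, for clarity.
    D = squareform(pdist(X))
    rk = np.sort(D, axis=1)[:, k]          # k-NN radius of each point
    levels = []
    for r in np.sort(rk):
        active = np.where(rk <= r)[0]
        parent = {int(i): int(i) for i in active}
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i
        for a in active:
            for b in active:
                if a < b and D[a, b] <= alpha * r:
                    parent[find(int(a))] = find(int(b))
        comps = {}
        for i in active:
            comps.setdefault(find(int(i)), []).append(int(i))
        levels.append((float(r), list(comps.values())))
    return levels

By the analysis above, a cluster at density level λ appears in this hierarchy once r reaches roughly (k/(λ v_d n))^{1/d}, and distinguishing (σ, ε)-separated clusters requires that radius to stay below σ/2.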
4
Lower bound
We have shown that the algorithm of Figure 3 distinguishes pairs of clusters that are (σ, ε)-separated. The number of samples it requires to capture clusters at density ≥ λ is, by Theorem 6,
    O( d / (v_d (σ/2)^d λ ε²) · log ( d / (v_d (σ/2)^d λ ε²) ) ).
We'll now show that this dependence on σ, λ, and ε is optimal. The only room for improvement, therefore, is in constants involving d.
Theorem 12 Pick any ε in (0, 1/2), any d > 1, and any λ, σ > 0 such that λ v_{d−1} σ^d < 1/50. Then there exist: an input space X ⊂ R^d; a finite family of densities Θ = {ρ_i} on X; subsets A_i, A′_i, S_i ⊂ X such that A_i and A′_i are (σ, ε)-separated by S_i for density ρ_i, and inf_{x ∈ A_{i,σ} ∪ A′_{i,σ}} ρ_i(x) ≥ λ, with the following additional property.
Consider any algorithm that is given n ≥ 100 i.i.d. samples X_n from some ρ_i ∈ Θ and, with probability at least 1/2, outputs a tree in which the smallest cluster containing A_i ∩ X_n is disjoint from the smallest cluster containing A′_i ∩ X_n. Then
    n = Ω( 1 / (v_d σ^d λ ε² d^{1/2}) · log ( 1 / (v_d σ^d λ) ) ).
Proof: We start by constructing the various spaces and densities. X is made up of two disjoint regions: a cylinder X_0, and an additional region X_1 whose sole purpose is as a repository for excess probability mass. Let B_{d−1} be the unit ball in R^{d−1}, and let σB_{d−1} be this same ball scaled to have radius σ. The cylinder X_0 stretches along the x_1-axis; its cross-section is σB_{d−1} and its length is 4(c + 1)σ for some c > 1 to be specified: X_0 = [0, 4(c + 1)σ] × σB_{d−1}.
[Figure omitted: the cylinder X_0 of radius σ along the x_1-axis, with tick marks at 4σ, 8σ, 12σ, ..., up to length 4(c + 1)σ.]
We will construct a family of densities Θ = {ρ_i} on X, and then argue that any cluster tree algorithm that is able to distinguish (σ, ε)-separated clusters must be able, when given samples from some ρ_I, to determine the identity of I. The sample complexity of this latter task can be lower-bounded using Fano's inequality (typically stated as in [2], but easily rewritten in the convenient form of [15], see appendix): it is Ω((log |Θ|)/β), for β = max_{i≠j} K(ρ_i, ρ_j), where K(·, ·) is KL divergence.
The family Θ contains c − 1 densities ρ_1, ..., ρ_{c−1}, where ρ_i is defined as follows:
• Density λ on [0, 4σi + σ] × σB_{d−1} and on [4σi + 3σ, 4(c + 1)σ] × σB_{d−1}. Since the cross-sectional area of the cylinder is v_{d−1}σ^{d−1}, the total mass here is λ v_{d−1} σ^d (4(c + 1) − 2).
• Density λ(1 − ε) on (4σi + σ, 4σi + 3σ) × σB_{d−1}.
• Point masses 1/(2c) at locations 4σ, 8σ, ..., 4cσ along the x_1-axis (use arbitrarily narrow spikes to avoid discontinuities).
• The remaining mass, 1/2 − λ v_{d−1} σ^d (4(c + 1) − 2ε), is placed on X_1 in some fixed manner (that does not vary between different densities in Θ).
Here is a sketch of ρ_i. The low-density region of width 2σ is centered at 4σi + 2σ on the x_1-axis.
[Figure omitted: density λ along most of the cylinder, density λ(1 − ε) on the 2σ-wide region, and point masses 1/(2c) on the x_1-axis.]
For any i ≠ j, the densities ρ_i and ρ_j differ only on the cylindrical sections (4σi + σ, 4σi + 3σ) × σB_{d−1} and (4σj + σ, 4σj + 3σ) × σB_{d−1}, which are disjoint and each have volume 2 v_{d−1} σ^d. Thus
    K(ρ_i, ρ_j) = 2 v_{d−1} σ^d ( λ log(λ / (λ(1 − ε))) + λ(1 − ε) log(λ(1 − ε) / λ) )
                = 2 v_{d−1} σ^d λ (−ε log(1 − ε)) ≤ (4 / ln 2) v_{d−1} σ^d λ ε²
(using ln(1 − x) ≥ −2x for 0 < x ≤ 1/2). This is an upper bound on the β in the Fano bound.
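Under the stated construction, both the exact divergence and its quadratic bound are easy to evaluate; the following snippet (our own illustration, with arbitrary example parameters) checks the inequality numerically.

import numpy as np

def kl_and_bound(lam, eps, sigma, d, v_dm1):
    # K(rho_i, rho_j) = 2 v_{d-1} sigma^d lam (-eps log2(1 - eps)),
    # bounded above by (4 / ln 2) v_{d-1} sigma^d lam eps^2.
    kl = 2 * v_dm1 * sigma**d * lam * (-eps * np.log2(1.0 - eps))
    bound = (4.0 / np.log(2.0)) * v_dm1 * sigma**d * lam * eps**2
    return kl, bound

kl, bound = kl_and_bound(lam=1.0, eps=0.25, sigma=0.1, d=3, v_dm1=np.pi)
assert kl <= bound  # follows from ln(1 - x) >= -2x on (0, 1/2]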
Now define the clusters and separators as follows: for each 1 ≤ i ≤ c − 1,
• A_i is the line segment [σ, 4σi] along the x_1-axis,
• A′_i is the line segment [4σ(i + 1), 4(c + 1)σ − σ] along the x_1-axis, and
• S_i = {4σi + 2σ} × σB_{d−1} is the cross-section of the cylinder at location 4σi + 2σ.
Thus A_i and A′_i are one-dimensional sets while S_i is a (d − 1)-dimensional set. It can be checked that A_i and A′_i are (σ, ε)-separated by S_i in density ρ_i.
With the various structures defined, what remains is to argue that if an algorithm is given a sample X_n from some ρ_I (where I is unknown), and is able to separate A_I ∩ X_n from A′_I ∩ X_n, then it can effectively infer I. This has sample complexity Ω((log c)/β). Details are in the appendix.
There remains a discrepancy of 2^d between the upper and lower bounds; it is an interesting open problem to close this gap. Does the (α = 1, k ≈ d log n) setting (yet to be analyzed) do the job?
Acknowledgments. We thank the anonymous reviewers for their detailed and insightful comments,
and the National Science Foundation for support under grant IIS-0347646.
References
[1] O. Bousquet, S. Boucheron, and G. Lugosi. Introduction to statistical learning theory. Lecture Notes in Artificial Intelligence, 3176:169-207, 2004.
[2] T. Cover and J. Thomas. Elements of Information Theory. Wiley, 2005.
[3] S. Dasgupta and Y. Freund. Random projection trees for vector quantization. IEEE Transactions on Information Theory, 55(7):3229-3242, 2009.
[4] S. Dasgupta, A. Kalai, and C. Monteleoni. Analysis of perceptron-based active learning. Journal of Machine Learning Research, 10:281-299, 2009.
[5] J.A. Hartigan. Consistency of single linkage for high-density clusters. Journal of the American Statistical Association, 76(374):388-394, 1981.
[6] M. Maier, M. Hein, and U. von Luxburg. Optimal construction of k-nearest neighbor graphs for identifying noisy clusters. Theoretical Computer Science, 410:1749-1764, 2009.
[7] M. Penrose. Single linkage clustering and continuum percolation. Journal of Multivariate Analysis, 53:94-109, 1995.
[8] D. Pollard. Strong consistency of k-means clustering. Annals of Statistics, 9(1):135-140, 1981.
[9] P. Rigollet and R. Vert. Fast rates for plug-in estimators of density level sets. Bernoulli, 15(4):1154-1178, 2009.
[10] A. Rinaldo and L. Wasserman. Generalized density clustering. Annals of Statistics, 38(5):2678-2722, 2010.
[11] A. Singh, C. Scott, and R. Nowak. Adaptive Hausdorff estimation of density level sets. Annals of Statistics, 37(5B):2760-2782, 2009.
[12] W. Stuetzle and R. Nugent. A generalized single linkage method for estimating the cluster tree of a density. Journal of Computational and Graphical Statistics, 19(2):397-418, 2010.
[13] D. Wishart. Mode analysis: a generalization of nearest neighbor which reduces chaining effects. In Proceedings of the Colloquium on Numerical Taxonomy held in the University of St. Andrews, pages 282-308, 1969.
[14] M.A. Wong and T. Lane. A kth nearest neighbour clustering procedure. Journal of the Royal Statistical Society Series B, 45(3):362-368, 1983.
[15] B. Yu. Assouad, Fano and Le Cam. Festschrift for Lucien Le Cam, pages 423-435, 1997.
3,389 | 4,069 | Direct Loss Minimization for Structured Prediction
David McAllester
TTI-Chicago
[email protected]
Tamir Hazan
TTI-Chicago
[email protected]
Joseph Keshet
TTI-Chicago
[email protected]
Abstract
In discriminative machine learning one is interested in training a system to optimize a certain desired measure of performance, or loss. In binary classification one typically tries to minimize the error rate. But in structured prediction each
task often has its own measure of performance such as the BLEU score in machine
translation or the intersection-over-union score in PASCAL segmentation. The
most common approaches to structured prediction, structural SVMs and CRFs, do
not minimize the task loss: the former minimizes a surrogate loss with no guarantees for task loss and the latter minimizes log loss independent of task loss.
The main contribution of this paper is a theorem stating that a certain perceptron-like learning rule, involving feature vectors derived from loss-adjusted inference,
directly corresponds to the gradient of task loss. We give empirical results on phonetic alignment of a standard test set from the TIMIT corpus, which surpasses all
previously reported results on this problem.
1
Introduction
Many modern software systems compute a result as the solution, or approximate solution, to an optimization problem. For example, modern machine translation systems convert an input word string
into an output word string in a different language by approximately optimizing a score defined on the
input-output pair. Optimization underlies the leading approaches in a wide variety of computational
problems including problems in computational linguistics, computer vision, genome annotation, advertisement placement, and speech recognition. In many optimization-based software systems one
must design the objective function as well as the optimization algorithm. Here we consider a parameterized objective function and the problem of setting the parameters of the objective in such a way
that the resulting optimization-driven software system performs well.
We can formulate an abstract problem by letting X be an abstract set of possible inputs and Y an abstract set of possible outputs. We assume an objective function s_w : X × Y → R parameterized by a vector w ∈ R^d such that for x ∈ X and y ∈ Y we have a score s_w(x, y). The parameter setting w determines a mapping from input x to output y_w(x), defined as follows:
    y_w(x) = argmax_{y ∈ Y} s_w(x, y)    (1)
Our goal is to set the parameters w of the scoring function such that the mapping from input to output defined by (1) performs well. More formally, we assume that there exists some unknown probability distribution ρ over pairs (x, y) where y is the desired output (or reference output) for input x. We assume a loss function L, such as the BLEU score, which gives a cost L(y, ŷ) ≥ 0 for producing output ŷ when the desired output (reference output) is y. We then want to set w so as to minimize the expected loss.
    w* = argmin_w E[L(y, y_w(x))]    (2)
In (2) the expectation is taken over a random draw of the pair (x, y) from the source data distribution ρ. Throughout this paper all expectations will be over a random draw of a fresh pair (x, y). In machine learning terminology we refer to (1) as inference and (2) as training.
Unfortunately the training objective function (2) is typically non-convex and we are not aware of any
polynomial algorithms (in time and sample complexity) with reasonable approximation guarantees
to (2) for typical loss functions, say 0-1 loss, and an arbitrary distribution ρ. In spite of the lack of approximation guarantees, it is common to replace the objective in (2) with a convex relaxation such as structural hinge loss [8, 10]. It should be noted that replacing the objective in (2) with structural hinge loss leads to inconsistency: the optimum of the relaxation is different from the optimum of (2).
An alternative to a convex relaxation is to perform gradient descent directly on the objective in (2).
In some applications it seems possible that the local minima problem of non-convex optimization is
less serious than the inconsistencies introduced by a convex relaxation.
Unfortunately, direct gradient descent on (2) is conceptually puzzling in the case where the output
space Y is discrete. In this case the output yw (x) is not a differentiable function of w. As one
smoothly changes w the output yw (x) jumps discontinuously between discrete output values. So
one cannot write ?w E [L(y, yw (x))] as E [?w L(y, yw (x))]. However, when the input space X is
continuous the gradient ?w E [L(y, yw (x))] can exist even when the output space Y is discrete. The
main results of this paper is a perceptron-like method of performing direct gradient descent on (2)
in the case where the output space is discrete but the input space is continuous.
After formulating our method we discovered that closely related methods have recently become
popular for training machine translation systems [7, 2]. Although machine translation has discrete
inputs as well as discrete outputs, the training method we propose can still be used, although without
theoretical guarantees. We also present empirical results on the use of this method in phoneme
alignment on the TIMIT corpus, where it achieves the best known results on this problem.
2
Perceptron-Like Training Methods
Perceptron-like training methods are generally formulated for the case where the scoring function is linear in w. In other words, we assume that the scoring function can be written as follows, where φ : X × Y → R^d is called a feature map.
    s_w(x, y) = w^T φ(x, y)
Because the feature map φ can itself be nonlinear, and the feature vector φ(x, y) can be very high dimensional, objective functions of this form are highly expressive.
Here we will formulate perceptron-like training in the data-rich regime where we have access to an unbounded sequence (x_1, y_1), (x_2, y_2), (x_3, y_3), ... where each (x_t, y_t) is drawn IID from the distribution ρ. In the basic structured prediction perceptron algorithm [3] one constructs a sequence of parameter settings w^0, w^1, w^2, ... where w^0 = 0 and w^{t+1} is defined as follows.
    w^{t+1} = w^t + φ(x_t, y_t) − φ(x_t, y_{w^t}(x_t))    (3)
Note that if y_{w^t}(x_t) = y_t then no update is made and we have w^{t+1} = w^t. If y_{w^t}(x_t) ≠ y_t then the update changes the parameter vector in a way that favors y_t over y_{w^t}(x_t). If the source distribution ρ is γ-separable, i.e., there exists a weight vector w with the property that y_w(x) = y with probability 1 and y_w(x) is always γ-separated from all distractors, then the perceptron update rule will eventually lead to a parameter setting with zero loss. Note, however, that the basic perceptron update does not involve the loss function L. Hence it cannot be expected to optimize the training objective (2) in cases where zero loss is unachievable.
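As an illustration, here is a minimal sketch of this update for a generic feature map φ and a brute-force argmax over a small finite label set (the function and argument names are ours, not the paper's):

import numpy as np

def perceptron_step(w, x, y, labels, phi):
    # One structured-perceptron update (3):
    # w <- w + phi(x, y) - phi(x, y_w(x)), with y_w(x) the highest-scoring label.
    y_hat = max(labels, key=lambda lab: w @ phi(x, lab))
    if y_hat != y:
        w = w + phi(x, y) - phi(x, y_hat)
    return w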
A loss-sensitive perceptron-like algorithm can be derived from the structured hinge loss of a margin-scaled structural SVM [10]. The optimization problem for margin-scaled structured hinge loss can be defined as follows.
    w* = argmin_w E[ max_{ŷ ∈ Y} L(y, ŷ) − w^T (φ(x, y) − φ(x, ŷ)) ]
It can be shown that this is a convex relaxation of (2). We can optimize this convex relaxation with stochastic sub-gradient descent. To do this we compute a sub-gradient of the objective by first computing the value of ŷ which achieves the maximum.
    y^t_hinge = argmax_{ŷ ∈ Y} L(y_t, ŷ) − (w^t)^T (φ(x_t, y_t) − φ(x_t, ŷ))
              = argmax_{ŷ ∈ Y} (w^t)^T φ(x_t, ŷ) + L(y_t, ŷ)    (4)
This yields the following perceptron-like update rule where the update direction is the negative of the sub-gradient of the loss and η^t is a learning rate.
    w^{t+1} = w^t + η^t ( φ(x_t, y_t) − φ(x_t, y^t_hinge) )    (5)
Equation (4) is often referred to as loss-adjusted inference. The use of loss-adjusted inference causes the update rule (5) to be at least influenced by the loss function.
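A sketch of the corresponding stochastic sub-gradient step, again with a brute-force loss-adjusted argmax (our own names; lr plays the role of η^t):

import numpy as np

def hinge_step(w, x, y, labels, phi, loss, lr):
    # Margin-scaled structured hinge update (4)-(5): run loss-adjusted
    # inference, then move toward the reference label's features.
    y_hinge = max(labels, key=lambda lab: w @ phi(x, lab) + loss(y, lab))
    return w + lr * (phi(x, y) - phi(x, y_hinge))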
Here we consider the following perceptron-like update rule where η^t is a time-varying learning rate and ε^t is a time-varying loss-adjustment weight.
    w^{t+1} = w^t + η^t ( φ(x_t, y_{w^t}(x_t)) − φ(x_t, y^t_direct) )    (6)
    y^t_direct = argmax_{ŷ ∈ Y} (w^t)^T φ(x_t, ŷ) + ε^t L(y_t, ŷ)    (7)
In the update (6) we view y^t_direct as being worse than y_{w^t}(x_t). The update direction moves away from feature vectors of larger-loss labels. Note that the reference label y_t in (5) has been replaced by the inferred label y_{w^t}(x_t) in (6). The main result of this paper is that under mild conditions the expected update direction of (6) approaches the negative direction of ∇_w E[L(y, y_w(x))] in the limit as the update weight ε^t goes to zero. In practice we use a different version of the update rule which moves toward better labels rather than away from worse labels. The toward-better version is given in Section 5. Our main theorem applies equally to the toward-better and away-from-worse versions of the rule.
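The away-from-worse step (6)-(7) in the same toy setting (a sketch with our own names; the toward-better variant of Section 5 negates the loss term and swaps the roles of the two inferred labels):

import numpy as np

def direct_loss_step(w, x, y, labels, phi, loss, lr, eps):
    # Update (6)-(7): move from the loss-adjusted label's features
    # toward those of the current prediction y_w(x).
    y_w = max(labels, key=lambda lab: w @ phi(x, lab))
    y_direct = max(labels, key=lambda lab: w @ phi(x, lab) + eps * loss(y, lab))
    return w + lr * (phi(x, y_w) - phi(x, y_direct))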
3
The Loss Gradient Theorem
The main result of this paper is the following theorem.
Theorem 1. For a finite set Y of possible output values, and for w in general position as defined below, we have the following, where y_direct is a function of w, x, y and ε:
    ∇_w E[L(y, y_w(x))] = lim_{ε→0} (1/ε) E[ φ(x, y_direct) − φ(x, y_w(x)) ]
where
    y_direct = argmax_{ŷ ∈ Y} w^T φ(x, ŷ) + ε L(y, ŷ).
We prove this theorem in the case of only two labels, where we have y ∈ {−1, 1}. Although the proof extends to the general case in a straightforward manner, we omit the general case to maintain the clarity of the presentation. We assume an input set X, a probability distribution or measure ρ on X × {−1, 1}, and a loss function L(y, y′) for y, y′ ∈ {−1, 1}. Typically the loss L(y, y′) is zero if y = y′, but the loss of a false positive, namely L(−1, 1), may be different from the loss of a false negative, L(1, −1).
By definition the gradient of expected loss satisfies the following condition for any vector Δw ∈ R^d.
    Δw^T ∇_w E[L(y, y_w(x))] = lim_{λ→0} (1/λ) ( E[L(y, y_{w+λΔw}(x))] − E[L(y, y_w(x))] )
Using this observation, the direct loss theorem is equivalent to the following.
    lim_{λ→0} (1/λ) E[ L(y, y_{w+λΔw}(x)) − L(y, y_w(x)) ] = lim_{ε→0} (1/ε) (Δw)^T E[ φ(x, y_direct) − φ(x, y_w(x)) ]    (8)
For the binary case we define Δφ(x) = φ(x, 1) − φ(x, −1). Under this convention we have y_w(x) = sign(w^T Δφ(x)). We first focus on the left hand side of (8). If the two labels y_{w+λΔw}(x) and y_w(x) are the same then the quantity inside the expectation is zero. We now define the following two sets which correspond to the set of inputs x for which these two labels are different.
    S_+ = {x : y_w(x) = −1, y_{w+λΔw}(x) = 1}
        = {x : w^T Δφ(x) < 0, (w + λΔw)^T Δφ(x) ≥ 0}
        = {x : w^T Δφ(x) ∈ [−λ(Δw)^T Δφ(x), 0)}
[Figure 1 omitted; panels (a) and (b) sketch the sets S_+, S_− and the slices {w^T Δφ(x) ∈ [0, ε]} in feature space, with the decision boundaries for w and w + λΔw.]
Figure 1: Geometrical interpretation of the loss gradient. In (a) we illustrate the integration of the constant value ΔL(y) over the set S_+ and the constant value −ΔL(y) over the set S_− (the green area). The lines represent the decision boundaries defined by the associated vectors. In (b) we show the integration of ΔL(y)(Δw)^T Δφ(x) over the sets U_+ = {x : w^T Δφ(x) ∈ [0, ε]} and U_− = {x : w^T Δφ(x) ∈ [−λ(Δw)^T Δφ(x), 0)}. The key observation of the proof is that under very general conditions these integrals are asymptotically equivalent in the limit as ε goes to zero.
and
    S_− = {x : y_w(x) = 1, y_{w+λΔw}(x) = −1}
        = {x : w^T Δφ(x) ≥ 0, (w + λΔw)^T Δφ(x) < 0}
        = {x : w^T Δφ(x) ∈ [0, −λ(Δw)^T Δφ(x))}
We define ΔL(y) = L(y, 1) − L(y, −1) and then write the left hand side of (8) as follows.
    E[ L(y, y_{w+λΔw}(x)) − L(y, y_w(x)) ] = E[ ΔL(y) 1{x ∈ S_+} ] − E[ ΔL(y) 1{x ∈ S_−} ]    (9)
These expectations are shown as integrals in Figure 1 (a), where the lines in the figure represent the decision boundaries defined by w and w + λΔw.
To analyze this further we use the following lemma.
Lemma 1. Let Z, U and V be three real-valued random variables whose joint measure μ can be expressed as a measure ν on U and V and a bounded continuous conditional density function f(z|u, v). More rigorously, we require that for any μ-measurable set S ⊆ R³ we have μ(S) = ∫_{u,v} ∫_z f(z|u, v) 1{(z, u, v) ∈ S} dz dν(u, v). For any three such random variables we have the following.
    lim_{ε→0+} (1/ε) ( E_μ[U · 1{Z ∈ [0, εV]}] − E_μ[U · 1{Z ∈ [εV, 0]}] ) = E_ν[U V · f(0|U, V)]
                                                                          = lim_{ε→0+} (1/ε) E_μ[U V · 1{Z ∈ [0, ε]}]
Proof. First we note the following, where V⁺ denotes max(0, V).
    lim_{ε→0+} (1/ε) E_μ[ U · 1{Z ∈ [0, εV)} ] = lim_{ε→0+} (1/ε) E_ν[ U ∫_0^{εV} f(z|u, v) dz ] = E_ν[ U V⁺ · f(0|U, V) ]
Similarly we have the following, where V⁻ denotes min(0, V).
    lim_{ε→0+} (1/ε) E_μ[ U · 1{Z ∈ (εV, 0]} ] = lim_{ε→0+} (1/ε) E_ν[ U ∫_{εV}^0 f(z|u, v) dz ] = −E_ν[ U V⁻ · f(0|U, V) ]
Subtracting these two expressions gives the following.
    E_ν[U V⁺ f(0|U, V)] + E_ν[U V⁻ f(0|U, V)] = E_ν[U (V⁺ + V⁻) f(0|U, V)] = E_ν[U V f(0|U, V)]
Applying Lemma 1 to (9) with Z being the random variable w^T Δφ(x), U being the random variable −ΔL(y) and V being −(Δw)^T Δφ(x) yields the following.
    (Δw)^T ∇_w E[L(y, y_w(x))] = lim_{λ→0+} (1/λ) ( E[ΔL(y) · 1{x ∈ S_+}] − E[ΔL(y) · 1{x ∈ S_−}] )
                               = lim_{ε→0+} (1/ε) E[ ΔL(y) · (Δw)^T Δφ(x) · 1{w^T Δφ(x) ∈ [0, ε]} ]    (10)
Of course we need to check that the conditions of Lemma 1 hold. This is where we need a general
position assumption for w. We discuss this issue briefly in Section 3.1.
Next we consider the right hand side of (8). If the two labels y_direct and y_w(x) are the same then the quantity inside the expectation is zero. We note that we can write y_direct as follows.
    y_direct = sign( w^T Δφ(x) + ε ΔL(y) )
We now define the following two sets which correspond to the set of pairs (x, y) for which y_w(x) and y_direct are different.
    B_+ = {(x, y) : y_w(x) = −1, y_direct = 1}
        = {(x, y) : w^T Δφ(x) < 0, w^T Δφ(x) + ε ΔL(y) ≥ 0}
        = {(x, y) : w^T Δφ(x) ∈ [−ε ΔL(y), 0)}
    B_− = {(x, y) : y_w(x) = 1, y_direct = −1}
        = {(x, y) : w^T Δφ(x) ≥ 0, w^T Δφ(x) + ε ΔL(y) < 0}
        = {(x, y) : w^T Δφ(x) ∈ [0, −ε ΔL(y))}
We now have the following.
    E[ (Δw)^T (φ(x, y_direct) − φ(x, y_w(x))) ]
        = E[ (Δw)^T Δφ(x) · 1{(x, y) ∈ B_+} ] − E[ (Δw)^T Δφ(x) · 1{(x, y) ∈ B_−} ]    (11)
These expectations are shown as integrals in Figure 1 (b). Applying Lemma 1 to (11) with Z set to w^T Δφ(x), U set to −(Δw)^T Δφ(x) and V set to −ΔL(y) gives the following.
    lim_{ε→0+} (1/ε) (Δw)^T E[ φ(x, y_direct) − φ(x, y_w(x)) ] = lim_{ε→0+} (1/ε) E[ (Δw)^T Δφ(x) · ΔL(y) · 1{w^T Δφ(x) ∈ [0, ε]} ]    (12)
Theorem 1 now follows from (10) and (12).
3.1
The General Position Assumption
The general position assumption is needed to ensure that Lemma 1 can be applied in the proof of Theorem 1. As a general position assumption, it is sufficient, but not necessary, that w ≠ 0 and φ(x, y) has a bounded density on R^d for each fixed value of y. It is also sufficient that the range of the feature map is a submanifold of R^d and φ(x, y) has a bounded density relative to the surface of that submanifold, for each fixed value of y. More complex distributions and feature maps are also possible.
4
Extensions: Approximate Inference and Latent Structure
In many applications the inference problem (1) is intractable. Most commonly we have some form
of graphical model. In this case the score w^T φ(x, y) is defined as the negative energy of a Markov
random field (MRF) where x and y are assignments of values to nodes of the field. Finding a lowest
energy value for y in (1) in a general graphical model is NP-hard.
A common approach to an intractable optimization problem is to define a convex relaxation of the
objective function. In the case of graphical models this can be done by defining a relaxation of a
marginal polytope [11]. The details of the relaxation are not important here. At a very abstract
level the resulting approximate inference problem can be defined as follows where the set R is a
relaxation of the set Y, and corresponds to the extreme points of the relaxed polytope.
    r_w(x) = argmax_{r ∈ R} w^T φ(x, r)    (13)
We assume that for y ∈ Y and r ∈ R we can assign a loss L(y, r). In the case of a relaxation of the marginal polytope of a graphical model we can take L(y, r) to be the expectation, over a random rounding of r to ŷ, of L(y, ŷ). For many loss functions, such as weighted Hamming loss, one can compute L(y, r) efficiently. The training problem is then defined by the following equation.
    w* = argmin_w E[ L(y, r_w(x)) ]    (14)
Note that (14) directly optimizes the performance of the approximate inference algorithm. The parameter setting optimizing approximate inference might be significantly different from the parameter
setting optimizing the loss under exact inference.
The proof of Theorem 1 generalizes to (14) provided that R is a finite set, such as the set of vertices
of a relaxation of the marginal polytope. So we immediately get the following generalization of
Theorem 1.
    ∇_w E_{(x,y)∼ρ}[ L(y, r_w(x)) ] = lim_{ε→0} (1/ε) E[ φ(x, r_direct) − φ(x, r_w(x)) ]
where
    r_direct = argmax_{r̂ ∈ R} w^T φ(x, r̂) + ε L(y, r̂)
Another possible extension involves hidden structure. In many applications it is useful to introduce
hidden information into the inference optimization problem. For example, in machine translation
we might want to construct parse trees for both the input and output sentence. In this case the
inference equation can be written as follows where h is the hidden information.
    y_w(x) = argmax_{y ∈ Y} max_{h ∈ H} w^T φ(x, y, h)    (15)
In this case we can take the training problem to again be defined by (2) but where yw (x) is defined
by (15).
Latent information can be handled by the equations of approximate inference but where R is reinterpreted as the set of pairs (y, h) with y ∈ Y and h ∈ H. In this case L(y, r) has the form L(y, (y′, h)), which we can take to be equal to L(y, y′).
5
Experiments
In this section we present empirical results on the task of phoneme-to-speech alignment. Phoneme-to-speech alignment is used as a tool in developing speech recognition and text-to-speech systems. In the phoneme alignment problem each input x represents a speech utterance, and consists of a pair (s, p) of a sequence of acoustic feature vectors, s = (s_1, ..., s_T), where s_t ∈ R^d, 1 ≤ t ≤ T; and a sequence of phonemes p = (p_1, ..., p_K), where p_k ∈ P, 1 ≤ k ≤ K, is a phoneme symbol and P is a finite set of phoneme symbols. The lengths K and T can be different for different inputs, although typically we have T significantly larger than K. The goal is to generate an alignment between the two sequences in the input. Sometimes this task is called forced alignment because one is forced
Table 1: Percentage of correctly positioned phoneme boundaries, given a predefined tolerance, on the TIMIT corpus. Results are reported on the whole TIMIT test set (1344 utterances).

                                              τ-alignment accuracy [%]        τ-insensitive
                                           t≤10ms  t≤20ms  t≤30ms  t≤40ms         loss
  Brugnara et al. (1993)                    74.6    88.8    94.1    96.8            -
  Keshet (2007)                             80.0    92.3    96.4    98.2            -
  Hosom (2009)                              79.30   93.36   96.74   98.22           -
  Direct loss min. (trained τ-alignment)    86.01   94.08   97.08   98.44         0.278
  Direct loss min. (trained τ-insensitive)  85.72   94.21   97.21   98.60         0.277
to interpret the given acoustic signal as the given phoneme sequence. The output y is a sequence (y_1, ..., y_K), where 1 ≤ y_k ≤ T is an integer giving the start frame in the acoustic sequence of the k-th phoneme in the phoneme sequence. Hence the k-th phoneme starts at frame y_k and ends at frame y_{k+1} − 1.
Two types of loss functions are used to quantitatively assess alignments. The first loss is called the τ-alignment loss and it is defined as
    L_{τ-alignment}(y, y′) = (1/|y|) |{k : |y_k − y′_k| > τ}|.    (16)
In words, this loss measures the average number of times the absolute difference between the predicted alignment sequence and the manual alignment sequence is greater than τ. This loss with different values of τ was used to measure the performance of the learned alignment function in [1, 9, 4]. The second loss, called τ-insensitive loss, was proposed in [5] and is defined as follows.
    L_{τ-insensitive}(y, y′) = (1/|y|) Σ_k max{ |y_k − y′_k| − τ, 0 }    (17)
This loss measures the average disagreement between all the boundaries of the desired alignment sequence and the boundaries of the predicted alignment sequence, where a disagreement of less than τ is ignored. Note that τ-insensitive loss is continuous and convex while τ-alignment loss is discontinuous and non-convex. Rather than use the "away-from-worse" update given by (6) we use the "toward-better" update defined as follows. Both updates give the gradient direction in the limit of small ε but the toward-better version seems to perform better for finite ε.
    w^{t+1} = w^t + η^t ( φ(x_t, y^t_direct) − φ(x_t, y_{w^t}(x_t)) )
    y^t_direct = argmax_{ŷ ∈ Y} (w^t)^T φ(x_t, ŷ) − ε^t L(y_t, ŷ)
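Both losses are simple to compute from two boundary sequences; the sketch below (our code, not the authors') implements (16) and (17) with boundaries measured in frames.

import numpy as np

def tau_alignment_loss(y, y_pred, tau):
    # Equation (16): fraction of boundaries off by more than tau.
    y, y_pred = np.asarray(y), np.asarray(y_pred)
    return float(np.mean(np.abs(y - y_pred) > tau))

def tau_insensitive_loss(y, y_pred, tau):
    # Equation (17): average boundary error beyond a tolerance of tau.
    y, y_pred = np.asarray(y), np.asarray(y_pred)
    return float(np.mean(np.maximum(np.abs(y - y_pred) - tau, 0.0)))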
Our experiments are on the TIMIT speech corpus for which there are published benchmark results
[1, 5, 4]. The corpus contains aligned utterances each of which is a pair (x, y) where x is a pair of
a phonetic sequence and an acoustic sequence and y is a desired alignment. We divided the training
portion of TIMIT (excluding the SA1 and SA2 utterances) into three disjoint parts containing 1500,
1796, and 100 utterances, respectively. The first part of the training set was used to train a phoneme
frame-based classifier, which, given a speech frame and a phoneme, outputs the confidence that the phoneme was uttered in that frame. The phoneme frame-based classifier is then used as part of a seven-dimensional feature map φ(x, y) = φ((s, p), y) as described in [5]. The feature set used to train the phoneme classifier consisted of the Mel-Frequency Cepstral Coefficients (MFCC) and the log-energy along with their first and second derivatives (Δ+ΔΔ) as described in [5]. The classifier used a Gaussian kernel with σ² = 19 and a trade-off parameter C = 5.0. The complete set of 61 TIMIT phoneme symbols was mapped into 39 phoneme symbols as proposed by [6], and was used throughout the training process.
The seven-dimensional weight vector w was trained on the second set of 1796 aligned utterances. We trained twice, once for τ-alignment loss and once for τ-insensitive loss, with τ = 10 ms in both cases. Training was done by first setting w^0 = 0 and then repeatedly selecting one of the 1796 training pairs at random and performing the update (6) with η^t = 1 and ε^t set to a fixed value ε. It should be noted that if w^0 = 0 and ε^t and η^t are both held constant at ε and η respectively, then the direction of w^t is independent of the choice of η. These updates are repeated until the performance of w^t on the third data set (the hold-out set) begins to degrade. This gives a form of regularization known as early stopping. This was repeated for various values of ε, and a value of ε was selected based on the resulting performance on the 100 hold-out pairs. We selected ε = 1.1 for both loss functions.
We scored the performance of our system on the whole TIMIT test set of 1344 utterances using τ-alignment accuracy (one minus the loss) with τ set to each of 10, 20, 30 and 40 ms, and with τ-insensitive loss with τ set to 10 ms. As should be expected, for τ equal to 10 ms the best performance is achieved when the loss used in training matches the loss used in test. Larger values of τ correspond to a loss function that was not used in training. The results are given in Table 1. We compared our results with [4], which is an HMM/ANN-based system, and with [5], which is based on structural SVM training for τ-insensitive loss. Both systems are considered to be state-of-the-art on this corpus. As can be seen, our algorithm outperforms the current state-of-the-art results at every tolerance value. Also, as might be expected, the τ-insensitive loss seems more robust to the use of a τ value at test time that is larger than the τ value used in training.
6
Open Problems and Discussion
The main result of this paper is the loss gradient theorem of Section 3. This theorem provides a
theoretical foundation for perceptron-like training methods with updates computed as a difference
between the feature vectors of two different inferred outputs where at least one of those outputs
is inferred with loss-adjusted inference. Perceptron-like training methods using feature differences
between two inferred outputs have already been shown to be successful for machine translation but
theoretical justification has been lacking. We also show the value of these training methods in a
phonetic alignment problem.
Although we did not give asymptotic convergence results, it should be straightforward to show that under the update given by (6), w^t converges to a local optimum of the objective provided that both η^t and ε^t go to zero while Σ_t η^t ε^t goes to infinity. For example one could take η^t = ε^t = 1/√t.
An open problem is how to properly incorporate regularization in the case where only a finite corpus of training data is available. In our phoneme alignment experiments we trained only a seven-dimensional weight vector and early stopping was used as regularization. It should be noted that naive regularization with a norm of w, such as regularizing with λ||w||², is nonsensical, as the loss E[L(y, y_w(x))] is insensitive to the norm of w. Regularization is typically done with a surrogate loss function such as hinge loss. Regularization remains an open theoretical issue for direct gradient descent on a desired loss function on a finite training sample. Early stopping may be a viable approach in practice.
Many practical computational problems in areas such as computational linguistics, computer vision,
speech recognition, robotics, genomics, and marketing seem best handled by some form of score optimization. In all such applications we have two optimization problems. Inference is an optimization
problem (approximately) solved during the operation of the fielded software system. Training involves optimizing the parameters of the scoring function to achieve good performance of the fielded
system. We have provided a theoretical foundation for a certain perceptron-like training algorithm
by showing that it can be viewed as direct stochastic gradient descent on the loss of the inference
system. The main point of this training method is to incorporate domain-specific loss functions, such
as the BLEU score in machine translation, directly into the training process with a clear theoretical
foundation. Hopefully the theoretical framework provided here will prove helpful in the continued
development of improved training methods.
References
[1] F. Brugnara, D. Falavigna, and M. Omologo. Automatic segmentation and labeling of speech based on hidden Markov models. Speech Communication, 12:357-370, 1993.
[2] D. Chiang, K. Knight, and W. Wang. 11,001 new features for statistical machine translation. In Proc. NAACL, 2009.
[3] M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Conference on Empirical Methods in Natural Language Processing, 2002.
[4] J.-P. Hosom. Speaker-independent phoneme alignment using transition-dependent states. Speech Communication, 51:352-368, 2009.
[5] J. Keshet, S. Shalev-Shwartz, Y. Singer, and D. Chazan. A large margin algorithm for speech and audio segmentation. IEEE Trans. on Audio, Speech and Language Processing, Nov. 2007.
[6] K.-F. Lee and H.-W. Hon. Speaker independent phone recognition using hidden Markov models. IEEE Trans. Acoustic, Speech and Signal Proc., 37(2):1641-1648, 1989.
[7] P. Liang, A. Bouchard-Côté, D. Klein, and B. Taskar. An end-to-end discriminative approach to machine translation. In International Conference on Computational Linguistics and Association for Computational Linguistics (COLING/ACL), 2006.
[8] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In Advances in Neural Information Processing Systems 17, 2003.
[9] D.T. Toledano, L.A.H. Gomez, and L.V. Grande. Automatic phoneme segmentation. IEEE Trans. Speech and Audio Proc., 11(6):617-625, 2003.
[10] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453-1484, 2005.
[11] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1-305, December 2008.
3,390 | 407 | Convergence of a Neural Network Classifier
John S. Baras
Systems Research Center
University of Maryland
College Park, Maryland 20705
Anthony La Vigna
Systems Research Center
University of Maryland
College Park, Maryland 20705
Abstract
In this paper, we prove that the vectors in the LVQ learning algorithm
converge. We do this by showing that the learning algorithm performs
stochastic approximation. Convergence is then obtained by identifying
the appropriate conditions on the learning rate and on the underlying
statistics of the classification problem. We also present a modification to
the learning algorithm which we argue results in convergence of the LVQ
error to the Bayesian optimal error as the appropriate parameters become
large.
1
Introduction
Learning Vector Quantization (LVQ) originated in the neural network community
and was introduced by Kohonen (Kohonen [1986]). There have been extensive
simulation studies reported in the literature demonstrating the effectiveness of LVQ
as a classifier and it has generated considerable interest as the training times associated with LVQ are significantly less than those associated with backpropagation
networks.
In this paper we analyse the convergence properties of LVQ. Using a theorem from the stochastic approximation literature, we prove that the update algorithm converges under suitable conditions. We also present a modification to the algorithm which provides for more stable learning. Finally, we discuss the decision error associated with this "modified" LVQ algorithm.
2
A Review of Learning Vector Quantization
Let {(x_i, d_{x_i})}_{i=1}^m be the training data or past observation set. This means that x_i is observed when pattern d_{x_i} is in effect. We assume that the x_i's are statistically independent (this assumption can be relaxed). Let θ_j be a Voronoi vector and let Θ = {θ_1, ..., θ_K} be the set of Voronoi vectors. We assume that there are many more observations than Voronoi vectors (Duda & Hart [1973]). Once the Voronoi vectors are initialized, training proceeds by taking a sample (x_j, d_{x_j}) from the training set, finding the closest Voronoi vector and adjusting its value according to equations (1) and (2). After several passes through the data, the Voronoi vectors converge and training is complete.
Suppose θ_c is the closest vector. Adjust θ_c as follows:
    θ_c(n + 1) = θ_c(n) + α_n (x_j − θ_c(n))   if d_{θ_c} = d_{x_j},    (1)
    θ_c(n + 1) = θ_c(n) − α_n (x_j − θ_c(n))   if d_{θ_c} ≠ d_{x_j}.    (2)
The other Voronoi vectors are not modified.
This update has the effect that if x_j and θ_c have the same decision then θ_c is moved closer to x_j; however, if they have different decisions then θ_c is moved away from x_j. The constants {α_n} are positive and decreasing, e.g., α_n = 1/n. We are concerned with the convergence properties of Θ(n) and with the resulting detection error.
For ease of notation, we assume that there are only two pattern classes. The
equations for the case of more than two pattern classes are given in (LaVigna
[1989]).
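A minimal sketch of one LVQ update in this notation (our own code; the array names are illustrative):

import numpy as np

def lvq_step(theta, theta_labels, x, d_x, alpha):
    # One update per (1)-(2): move the closest Voronoi vector toward x
    # if its decision matches d_x, away from x otherwise.
    c = int(np.argmin(np.linalg.norm(theta - x, axis=1)))
    sign = 1.0 if theta_labels[c] == d_x else -1.0
    theta[c] = theta[c] + sign * alpha * (x - theta[c])
    return theta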
3
Convergence of the Learning Algorithm
The LVQ algorithm has the general form
    θ_i(n + 1) = θ_i(n) + α_n γ(d_{x_n}, d_{θ_i}(n), x_n, Θ_n) (x_n − θ_i(n))    (3)
where x_n is the currently chosen past observation. The function γ determines whether there is an update and what its sign should be, and is given by
    γ(d_{x_n}, d_{θ_i}, x_n, Θ) = +1 if d_{x_n} = d_{θ_i} and x_n ∈ V_{θ_i};  −1 if d_{x_n} ≠ d_{θ_i} and x_n ∈ V_{θ_i};  0 otherwise.    (4)
Here V_{θ_i} represents the set of points closest to θ_i and is given by
    V_{θ_i} = { x ∈ R^d : ||θ_i − x|| < ||θ_j − x||, j ≠ i },   i = 1, ..., K.    (5)
The update in (3) is a stochastic approximation algorithm (Benveniste, Metivier & Priouret [1987]). It has the form
    Θ_{n+1} = Θ_n + α_n H(Θ_n, z_n)    (6)
where Θ is the vector with components θ_i; H(Θ, z) is the vector with components defined in the obvious manner from (3), and z_n = (x_n, d_{x_n}) is the random pair consisting of the observation and the associated true pattern number. If the appropriate conditions are satisfied by α_n, H, and z_n, then Θ_n approaches the solution of
    (d/dt) Θ(t) = h(Θ(t))    (7)
for the appropriate choice of h(Θ).
For the two pattern case, we let p_1(x) represent the density for pattern 1 and π_1 represent its prior. Likewise for p_0(x) and π_0. It can be shown (Kohonen [1986]) that
    h_i(Θ) = ∫_{V_{θ_i}} (x − θ_i) q_i(x) dx    (8)
where
    q_i(x) = π_1 p_1(x) − π_0 p_0(x) if d_{θ_i} = 1, and q_i(x) = π_0 p_0(x) − π_1 p_1(x) if d_{θ_i} = 0.    (9)
If the following hypotheses hold then, using techniques from (Benveniste, Metivier & Priouret [1987]) or (Kushner & Clark [1978]), we can prove the convergence theorem below:
[H.1] {α_n} is a nonincreasing sequence of positive reals such that Σ_n α_n = ∞ and Σ_n α_n² < ∞.
[H.2] Given d_{x_n}, the x_n are independent and distributed according to p_{d_{x_n}}(x).
[H.3] The pattern densities, p_i(x), are continuous.
Theorem 1 Assume that [H.1]-[H.3] hold. Let Θ* be a locally asymptotically stable equilibrium point of (7) with domain of attraction D*. Let Q be a compact subset of D*. If Θ_n ∈ Q for infinitely many n then
    lim_{n→∞} Θ_n = Θ*   a.s.    (10)
Proof: (see (LaVigna [1989]))
Hence if the initial locations and decisions of the Voronoi vectors are close to a locally asymptotically stable equilibrium of (7) and if they do not move too much then the vectors converge.
Given the form of (8) one might try to use Lyapunov theory to prove convergence with
    L(Θ) = Σ_{i=1}^K ∫_{V_{θ_i}} ||x − θ_i||² q_i(x) dx    (11)
as a candidate Lyapunov function. This function will not work, as is demonstrated by the following calculation in the one-dimensional case. Suppose that K = 2 and θ_1 < θ_2; then
    ∂L(Θ)/∂θ_1 = −h_1(Θ) + ||(θ_1 − θ_2)/2||² q_1((θ_1 + θ_2)/2).    (12)
[Figure omitted: observations (△ from pattern 1, ○ from pattern 2) spread along the real line, with the two Voronoi vectors marked.]
Figure 1: A possible distribution of observations and two Voronoi vectors.
Likewise,
    ∂L(Θ)/∂θ_2 = −h_2(Θ) − ||(θ_1 − θ_2)/2||² q_1((θ_1 + θ_2)/2).    (18)
Therefore
    (d/dt) L(Θ) = −h_1(Θ)² − h_2(Θ)² + ||(θ_1 − θ_2)/2||² q_1((θ_1 + θ_2)/2) (h_1(Θ) − h_2(Θ)).    (19)
(19)
In order for this to be a Lyapunov function (19) would have to be strictly nonpositive
which is not the case. The problem with this candidate occurs because the integrand
qi (x) is not strictly positive as is the case for ordinary vector quantization and
adaptive K-means.
4
Modified LVQ AlgorithlTI
The convergence results above require that the initial conditions are close to the
stable points of (7) in order for the algorithm to converge. In this section we
present a modification to the LVQ algorithm which increases the number of stable
equilibrium for equation (7) and hence increases the chances of convergence. First
we present a simple example which emphasizes a defect of LVQ and suggests an
appropriate modification to the algorithm.
Let 0 represent an observation from pattern 2 and let 6. represent an observation
from pattern 1. We assume that the observations are scalar. Figure 1 shows a
possible distribution of observations. Suppose there are two Voronoi vectors 01 and
O2 with decisions 1 and 2, respectively, initialized as shown in Figure 1. At each
update of the LVQ algorithm, a point is picked at random from the observation set
and the closest Voronoi vector is modified. We see that during this update , it is
possible for 02(n) to be pushed towards 00 and 01(n) to be pushed towards -00,
hence the Voronoi vectors may not converge.
Recall that during the update procedure in (3), the Voronoi cells are changed by
changing the location of one Voronoi vector. After an update, the majority vote of the observations in each new Voronoi cell may not agree with the decision previously assigned to that cell. This discrepancy can cause the divergence of the algorithm. In order to prevent this from occurring, the decisions associated with the Voronoi
vectors should be updated to agree with the majority vote of the observations that
fall within their Voronoi cells. Let
    g_i(Θ; N) = 1 if (1/N) Σ_{j=1}^N 1{y_j ∈ V_{θ_i}} 1{d_{y_j} = 1} > (1/N) Σ_{j=1}^N 1{y_j ∈ V_{θ_i}} 1{d_{y_j} = 2},
    and g_i(Θ; N) = 2 otherwise.    (20)
Then g_i represents the decision of the majority vote of the observations falling in V_{θ_i}. With this modification, the learning for θ_i becomes
    θ_i(n + 1) = θ_i(n) + α_n γ(d_{x_n}, g_i(Θ_n; N), x_n, Θ_n) (x_n − θ_i(n)).    (21)
This equation has the same form as (3) with the function H̄(Θ, z) defined from (21) replacing H(Θ, z).
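The relabeling step (20) is a majority vote over the observations in each cell; a sketch (our own names) follows.

import numpy as np

def majority_vote_labels(theta, X, d):
    # Equation (20): assign each Voronoi vector the majority pattern label
    # of the observations in its Voronoi cell (label 2 on ties or empty cells).
    cells = np.argmin(
        np.linalg.norm(X[:, None, :] - theta[None, :, :], axis=2), axis=1)
    g = np.full(len(theta), 2)
    for i in range(len(theta)):
        in_cell = d[cells == i]
        if in_cell.size and np.mean(in_cell == 1) > np.mean(in_cell == 2):
            g[i] = 1
    return g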
This divergence happens because the decisions of the Voronoi vectors do not agree
with the majority vote of the observations closest to each vector. As a result, the
Voronoi vectors are pushed away from the origin. This phenomenon occurs even though the observation data is bounded. The point here is that, if the decision
associated with a Voronoi vector does not agree with the majority vote of the
observations closest to that vector then it is possible for the vector to diverge. A
simple solution to this problem is to correct the decisions of all the Voronoi vectors after every adjustment so that their decisions correspond to the majority vote. In practice this correction would only be done during the beginning iterations of the learning algorithm, since that is when α_n is large and the Voronoi vectors are moving around significantly. With this modification it is possible to show convergence to the Bayes optimal classifier (LaVigna [1989]) as the number of Voronoi vectors becomes large.
5
Decision Error
In this section we discuss the error associated with the modified LVQ algorithm.
Here two results are discussed. The first is the simple comparison between LVQ
and the nearest neighbor algorithm. The second result is that if the number of Voronoi vectors is allowed to go to infinity at an appropriate rate as the number of observations goes to infinity, then it is possible to construct a convergent estimator of the Bayes risk. That is, the error associated with LVQ can be made to approach the optimal error. As before, we concentrate on the binary pattern case for ease of notation.
5.1 Nearest Neighbor
If a Voronoi vector is assigned to each observation then the LVQ algorithm reduces
to the nearest neighbor algorithm. For that algorithm, it was shown (Cover & Hart
[1967]) that its Bayes minimum probability of error is less than twice that of the
optimal classifier. More specifically, let r* be the Bayes optimal risk and let l' be
843
844
Baras and LaVigna
the nearest neighbor risk. It was shown that
r*::;
r::;
2r*(1- r*)
< 2r*.
(22)
Hence, in the case of no iteration, the risk associated with LVQ is that of
the nearest neighbor algorithm.
5.2 Other Choices for Number of Voronoi Vectors
We saw above that if the number of Voronoi vectors equals the number of observations then LVQ coincides with the nearest neighbor algorithm. Let kN represent the
number of Voronoi vectors for an observation sample size of N. We are interested
in determining the probability of error for LVQ when k_N satisfies (1) lim_{N→∞} k_N = ∞
and (2) lim_{N→∞} (k_N / N) = 0. In this case, there are more observations than vectors and
hence the Voronoi vectors represent averages of the observations. It is possible to
show that with k_N satisfying (1)-(2) the decision error associated with modified
LVQ can be made to approach the Bayesian optimal decision error as N becomes
large (LaVigna [1989]).
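As a concrete schedule, k_N = ⌊√N⌋ satisfies both conditions; the quick numerical check below is our own illustration.

```python
import math

for N in (10**2, 10**4, 10**6, 10**8):
    k_N = math.isqrt(N)      # k_N -> infinity as N -> infinity ...
    print(N, k_N, k_N / N)   # ... while k_N / N -> 0
```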
6 Conclusions
We have shown convergence of the Voronoi vectors in the LVQ algorithm. We have
also presented the majority vote modification of the LVQ algorithm. This modification prevents divergence of the Voronoi vectors and results in convergence for a
larger set of initial conditions. In addition, with this modification it is possible to
show that as the appropriate parameters go to infinity, the decision regions associated with the modified LVQ algorithm approach those of the Bayes optimal classifier (LaVigna
[1989]).
7 Acknowledgements
This work was supported by the National Science Foundation through grant CDR-8803012, Texas Instruments through a TI/SRC Fellowship and the Office of Naval
Research through an ONR Fellowship.
8 References
A. Benveniste, M. Métivier & P. Priouret [1987], Algorithmes Adaptatifs et Approximations Stochastiques, Masson, Paris.
T. M. Cover & P. E. Hart [1967], "Nearest Neighbor Pattern Classification," IEEE
Transactions on Information Theory IT-13, 21-27.
R. O. Duda & P. E. Hart [1973], Pattern Classification and Scene Analysis, John
Wiley & Sons, New York, NY.
T. Kohonen [1986], "Learning Vector Quantization for Pattern Recognition," Technical Report TKK-F-A601, Helsinki University of Technology.
H. J. Kushner & D. S. Clark [1978], Stochastic Approximation Methods for
Constrained and Unconstrained Systems, Springer-Verlag, New York-Heidelberg-Berlin.
A. LaVigna [1989], "Nonparametric Classification using Learning Vector Quantization," Ph.D. Dissertation, Department of Electrical Engineering, University
of Maryland.
3,391 | 4,070 | Identifying Dendritic Processing
Yevgeniy B. Slutskiy∗
Department of Electrical Engineering
Columbia University
New York, NY 10027
[email protected]
Aurel A. Lazar
Department of Electrical Engineering
Columbia University
New York, NY 10027
[email protected]
Abstract
In system identification both the input and the output of a system are available to
an observer and an algorithm is sought to identify parameters of a hypothesized
model of that system. Here we present a novel formal methodology for identifying
dendritic processing in a neural circuit consisting of a linear dendritic processing
filter in cascade with a spiking neuron model. The input to the circuit is an analog
signal that belongs to the space of bandlimited functions. The output is a time
sequence associated with the spike train. We derive an algorithm for identification
of the dendritic processing filter and reconstruct its kernel with arbitrary precision.
1 Introduction
The nature of encoding and processing of sensory information in the visual, auditory and olfactory
systems has been extensively investigated in the systems neuroscience literature. Many phenomenological [1, 2, 3] as well as mechanistic [4, 5, 6] models have been proposed to characterize and
clarify the representation of sensory information on the level of single neurons.
Here we investigate a class of phenomenological neural circuit models in which the time-domain
linear processing takes place in the dendritic tree and the resulting aggregate dendritic current is encoded in the spike domain by a spiking neuron. In block diagram form, these neural circuit models
are of the [Filter]-[Spiking Neuron] type and as such represent a fundamental departure from the
standard Linear-Nonlinear-Poisson (LNP) model that has been used to characterize neurons in many
sensory systems, including vision [3, 7, 8], audition [2, 9] and olfaction [1, 10]. While the LNP
model also includes a linear processing stage, it describes spike generation using an inhomogeneous
Poisson process. In contrast, the [Filter]-[Spiking Neuron] model incorporates the temporal dynamics of spike generation and allows one to consider more biologically-plausible spike generators.
We perform identification of dendritic processing in the [Filter]-[Spiking Neuron] model assuming
that input signals belong to the space of bandlimited functions, a class of functions that closely
model natural stimuli in sensory systems. Under this assumption, we show that the identification of
dendritic processing in the above neural circuit becomes mathematically tractable. Using simulated
data, we demonstrate that under certain conditions it is possible to identify the impulse response of
the dendritic processing filter with arbitrary precision. Furthermore, we show that the identification
results fundamentally depend on the bandwidth of test stimuli.
The paper is organized as follows. The phenomenological neural circuit model and the identification
problem are formally stated in section 2. The Neural Identification Machine and its realization as an
algorithm for identifying dendritic processing is extensively discussed in section 3. Performance of
the identification algorithm is exemplified in section 4. Finally, section 5 concludes our work.
∗ The names of the authors are alphabetically ordered.
2 Problem Statement
In what follows we assume that the dendritic processing is linear [11] and any nonlinear effects arise
as a result of the spike generation mechanism [12]. We use linear BIBO-stable filters (not necessarily
causal) to describe the computation performed by the dendritic tree. Furthermore, a spiking neuron
model (as opposed to a rate model) is used to model the generation of action potentials or spikes.
We investigate a general neural circuit comprised of a filter in cascade with a spiking neuron model
(Fig. 1(a)). This circuit is an instance of a Time Encoding Machine (TEM), a nonlinear asynchronous circuit that encodes analog signals in the time domain [13, 14]. Examples of spiking
neuron models considered in this paper include the ideal IAF neuron, the leaky IAF neuron and
the threshold-and-feedback (TAF) neuron [15]. However, the methodology developed below can be
extended to many other spiking neuron models as well.
We break down the full identification of this circuit into two problems: (i) identification of linear operations in the dendritic tree and (ii) identification of spike generator parameters. First, we consider
problem (i) and assume that parameters of the spike generator can be obtained through biophysical
experiments. Then we show how to address (ii) by exploring the space of input signals. We consider
a specific example of a neural circuit in Fig. 1(a) and carry out a full identification of that circuit.
[Figure 1 diagrams omitted: (a) cascade [Dendritic Processing: Linear Filter] → [Spike Generation: Spiking Neuron], mapping u(t) to v(t) to (t_k)_{k∈ℤ}; (b) the same cascade with h(t) followed by an ideal IAF neuron (integrator 1/C, bias b, threshold δ, voltage reset to 0).]
Figure 1: Problem setup. (a) The dendritic processing is described by a linear filter and spikes are produced
by a (nonlinear) spiking neuron model. (b) An example of a neural circuit in (a) is a linear filter in cascade with
the ideal IAF neuron. An input signal u is first passed through a filter with an impulse response h. The output
of the filter v(t) = (u ∗ h)(t), t ∈ ℝ, is then encoded into a time sequence (t_k)_{k∈ℤ} by the ideal IAF neuron.
3 Neuron Identification Machines
A Neuron Identification Machine (NIM) is the realization of an algorithm for the identification of
the dendritic processing filter in cascade with a spiking neuron model. First, we introduce several
definitions needed to formally address the problem of identifying dendritic processing. We then consider the [Filter]-[Ideal IAF] neural circuit. We derive an algorithm for a perfect identification of the
impulse response of the filter and provide conditions for the identification with arbitrary precision.
Finally, we extend our results to the [Filter]-[Leaky IAF] and [Filter]-[TAF] neural circuits.
3.1 Preliminaries
We model signals u = u(t), t ∈ ℝ, at the input to a neural circuit as elements of the Paley-Wiener
space Ξ = {u ∈ L²(ℝ) | supp(Fu) ⊆ [−Ω, Ω]}, i.e., as functions of finite energy having a finite
spectral support (F denotes the Fourier transform). Furthermore, we assume that the dendritic
processing filters h = h(t), t ∈ ℝ, are linear, BIBO-stable and have a finite temporal support, i.e.,
they belong to the space H = {h ∈ L¹(ℝ) | supp(h) ⊆ [T_1, T_2]}.
Definition 1. A signal u ∈ Ξ at the input to a neural circuit together with the resulting output
𝒯 = (t_k)_{k∈ℤ} of that circuit is called an input/output (I/O) pair and is denoted by (u, 𝒯).
Definition 2. Two neural circuits are said to be Ξ-I/O-equivalent if their respective I/O pairs are
identical for all u ∈ Ξ.
Definition 3. Let P : H → Ξ with (Ph)(t) = (h ∗ g)(t), where (h ∗ g) denotes the convolution of
h with the sinc kernel g ≜ sin(Ωt)/(πt), t ∈ ℝ. We say that Ph is the projection of h onto Ξ.
Definition 4. Signals {u^i}_{i=1}^N are said to be linearly independent if there do not exist real numbers
{α_i}_{i=1}^N, not all zero, and real numbers {β_i}_{i=1}^N such that Σ_{i=1}^N α_i u^i(t + β_i) = 0.
3.2 NIM for the [Filter]-[Ideal IAF] Neural Circuit
An example of a model circuit in Fig. 1(a) is the [Filter]-[Ideal IAF] circuit shown in Fig. 1(b).
In this circuit, an input signal u ∈ Ξ is passed through a filter with an impulse response (kernel)
h ∈ H and then encoded by an ideal IAF neuron with a bias b ∈ ℝ₊, a capacitance C ∈ ℝ₊ and a
threshold δ ∈ ℝ₊. The output of the circuit is a sequence of spike times (t_k)_{k∈ℤ} that is available to
an observer. This neural circuit is an instance of a TEM and its operation can be described by a set
of equations (formally known as the t-transform [13]):
\[
\int_{t_k}^{t_{k+1}} (u * h)(s)\, ds = q_k, \qquad k \in \mathbb{Z},
\tag{1}
\]
where q_k ≜ Cδ − b(t_{k+1} − t_k). Intuitively, at every spike time t_{k+1} the ideal IAF neuron is providing
a measurement q_k of the signal v(t) = (u ∗ h)(t) on the interval t ∈ [t_k, t_{k+1}].
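To make the measurement process concrete, a minimal simulation sketch is given below. This is our illustration, not code from the paper; it discretizes the integrator with a forward-Euler step, and the function name iaf_encode is our own.

```python
import numpy as np

def iaf_encode(v, dt, b, C, delta):
    """Ideal IAF encoder: v is the filter output (u * h) sampled with step dt;
    returns the spike times (t_k). Integrates (b + v(t))/C and fires at delta."""
    y, spikes = 0.0, []
    for n, vn in enumerate(v):
        y += dt * (b + vn) / C          # membrane voltage
        if y >= delta:
            spikes.append((n + 1) * dt)
            y = 0.0                     # voltage reset to 0
    return np.asarray(spikes)

# v can be obtained from sampled u and h via v = np.convolve(u, h)[:len(u)] * dt.
```

Integrating the voltage equation between consecutive spikes recovers exactly the measurement of Eq. (1): ∫(b + v)/C ds = δ implies ∫ v ds = Cδ − b(t_{k+1} − t_k).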
Proposition 1. The left-hand side of the t-transform in (1) can be written as a bounded linear
functional L_k : Ξ → ℝ with L_k(Ph) = ⟨φ_k, Ph⟩, where φ_k(t) = (1_{[t_k, t_{k+1}]} ∗ ũ)(t) and ũ =
u(−t), t ∈ ℝ, denotes the involution of u.
Proof: Since (u∗h) ∈ Ξ, we have (u∗h)(t) = (u∗h∗g)(t), t ∈ ℝ, and therefore ∫_{t_k}^{t_{k+1}} (u∗h)(s) ds =
∫_{t_k}^{t_{k+1}} (u∗Ph)(s) ds. Now since Ph is bounded, the expression on the right-hand side of the equality
is a bounded linear functional L_k : Ξ → ℝ with
\[
L_k(Ph) = \int_{t_k}^{t_{k+1}} (u * Ph)(s)\, ds = \langle \phi_k, Ph \rangle,
\tag{2}
\]
where φ_k ∈ Ξ and the last equality follows from the Riesz representation theorem [16]. To find φ_k,
we use the fact that Ξ is a Reproducing Kernel Hilbert Space (RKHS) [17] with a kernel K(s, t) =
g(t − s). By the reproducing property of the kernel [17], we have φ_k(t) = ⟨φ_k, K_t⟩ = L_k(K_t).
Letting ũ = u(−t) denote the involution of u and using (2), we obtain
φ_k(t) = ⟨1_{[t_k, t_{k+1}]} ∗ ũ, K_t⟩ = (1_{[t_k, t_{k+1}]} ∗ ũ)(t).
Proposition 1 effectively states that the measurements (q_k)_{k∈ℤ} of v(t) = (u ∗ h)(t) can be
also interpreted as measurements of (Ph)(t). A natural question then is how to identify Ph
from (q_k)_{k∈ℤ}. To that end, we note that an observer can typically record both the input u = u(t),
t ∈ ℝ, and the output 𝒯 = (t_k)_{k∈ℤ} of a neural circuit. Since (q_k)_{k∈ℤ} can be evaluated from (t_k)_{k∈ℤ}
using the definition of q_k in (1), the problem is reduced to identifying Ph from an I/O pair (u, 𝒯).
Theorem 1. Let u be bounded with supp(Fu) = [−Ω, Ω], h ∈ H and b/(Cδ) > Ω/π. Then given
an I/O pair (u, 𝒯) of the [Filter]-[Ideal IAF] neural circuit, Ph can be perfectly identified as
\[
(Ph)(t) = \sum_{k \in \mathbb{Z}} c_k \psi_k(t),
\]
where ψ_k(t) = g(t − t_k), t ∈ ℝ. Furthermore, c = G⁺q with G⁺ denoting the Moore-Penrose
pseudoinverse of G, [G]_{lk} = ∫_{t_l}^{t_{l+1}} u(s − t_k) ds for all k, l ∈ ℤ, and [q]_l = Cδ − b(t_{l+1} − t_l).
Proof: By appropriately bounding the input signal u, the spike density (the average number of spikes
over arbitrarily long time intervals) of an ideal IAF neuron is given by D = b/(Cδ) [14]. Therefore,
for D > Ω/π the set of the representation functions (ψ_k)_{k∈ℤ}, ψ_k(t) = g(t − t_k), is a frame in Ξ
[18] and (Ph)(t) = Σ_{k∈ℤ} c_k ψ_k(t). To find the coefficients c_k we note from (2) that
\[
q_l = \langle \phi_l, Ph \rangle = \sum_{k \in \mathbb{Z}} c_k \langle \phi_l, \psi_k \rangle = \sum_{k \in \mathbb{Z}} [G]_{lk}\, c_k,
\tag{3}
\]
where [G]_{lk} = ⟨φ_l, ψ_k⟩ = ⟨1_{[t_l, t_{l+1}]} ∗ ũ, g(· − t_k)⟩ = ∫_{t_l}^{t_{l+1}} u(s − t_k) ds. Writing (3) in matrix
form, we obtain q = Gc with [q]_l = q_l and [c]_k = c_k. Finally, the coefficients c_k, k ∈ ℤ, can be
computed as c = G⁺q.
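Computationally, the identification of Theorem 1 is a linear least-squares problem. The following sketch is ours; the grid-based approximation of the integrals and all names are our choices, and it is meant as an illustration rather than the authors' implementation.

```python
import numpy as np

def identify_Ph(u, t_grid, tk, b, C, delta, Omega):
    """Given input samples u on t_grid and spike times tk, return a callable
    approximating (Ph)(t) = sum_k c_k g(t - t_k)."""
    dt = t_grid[1] - t_grid[0]
    q = C * delta - b * np.diff(tk)            # [q]_l = C*delta - b(t_{l+1} - t_l)
    G = np.zeros((len(tk) - 1, len(tk)))
    for l in range(len(tk) - 1):
        mask = (t_grid >= tk[l]) & (t_grid < tk[l + 1])
        for k in range(len(tk)):
            # [G]_{lk} = int_{t_l}^{t_{l+1}} u(s - t_k) ds, via linear interpolation
            G[l, k] = np.interp(t_grid[mask] - tk[k], t_grid, u,
                                left=0.0, right=0.0).sum() * dt
    c = np.linalg.pinv(G) @ q                  # c = G^+ q
    g = lambda t: (Omega / np.pi) * np.sinc(Omega * t / np.pi)  # sin(Omega*t)/(pi*t)
    return lambda t: sum(ck * g(t - tk_k) for ck, tk_k in zip(c, tk))
```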
Remark 1. The condition b/(Cδ) > Ω/π in Theorem 1 is a Nyquist-type rate condition. Thus,
perfect identification of the projection of h onto Ξ can be achieved for a finite average spike rate.
Remark 2. Ideally, we would like to identify the kernel h ∈ H of the filter in cascade with the ideal
IAF neuron. Note that unlike h, the projection Ph belongs to the space L²(ℝ), i.e., in general Ph
is not BIBO-stable and does not have a finite temporal support. Nevertheless, it is easy to show that
(Ph)(t) approximates h(t) arbitrarily closely on t ∈ [T_1, T_2], provided that the bandwidth Ω of u
is sufficiently large.
Remark 3. If the impulse response h(t) = δ(t), i.e., if there is no processing on the (arbitrary)
input signal u(t), then q_l = ∫_{t_l}^{t_{l+1}} (u ∗ h)(s) ds = ∫_{t_l}^{t_{l+1}} u(s) ds, l ∈ ℤ. Furthermore,
\[
\int_{t_l}^{t_{l+1}} (u * Ph)(s)\, ds = \int_{t_l}^{t_{l+1}} (u * h)(s)\, ds = \int_{t_l}^{t_{l+1}} u(s)\, ds = \int_{t_l}^{t_{l+1}} (u * g)(s)\, ds, \qquad l \in \mathbb{Z}.
\]
The above holds if and only if (Ph)(t) = g(t), t ∈ ℝ. In other words, if h(t) = δ(t), then we
identify (Pδ)(t) = sin(Ωt)/(πt), the projection of δ(t) onto Ξ.
Corollary 1. Let u be bounded with supp(Fu) = [−Ω, Ω], h ∈ H and b/(Cδ) > Ω/π. Furthermore, let
W = (τ_1, τ_2) so that (τ_2 − τ_1) > (T_2 − T_1) and let τ = (τ_1 + τ_2)/2, T = (T_1 + T_2)/2. Then
given an I/O pair (u, 𝒯) of the [Filter]-[Ideal IAF] neural circuit, (Ph)(t) can be approximated
arbitrarily closely on t ∈ [T_1, T_2] by
\[
\hat{h}(t) = \sum_{k:\, t_k \in W} c_k \psi_k(t),
\]
where ψ_k(t) = g(t − (t_k − τ + T)), c = G⁺q, [G]_{lk} = ∫_{t_l}^{t_{l+1}} u(s − (t_k − τ + T)) ds and
[q]_l = Cδ − b(t_{l+1} − t_l) for all k, l ∈ ℤ, provided that |τ_1| and |τ_2| are sufficiently large.
Proof: Through a change of coordinates t → t′ = (t − τ + T) illustrated in Fig. 2, we obtain
W′ = [τ_1 − τ + T, τ_2 − τ + T] ⊇ [T_1, T_2] and the set of spike times (t_k − τ + T)_{k: t_k∈W}. Note
that W′ → ℝ as (τ_2 − τ_1) → ∞. The rest of the proof follows from Theorem 1 and the fact that
lim_{t→±∞} g(t) = 0.
From Corollary 1 we see that if the [Filter]-[Ideal IAF] neural circuit is producing spikes
with a spike density above the Nyquist rate, then we can use a set of spike times (t_k)_{k: t_k∈W} from a
single temporal window W to identify (Ph)(t) to an arbitrary precision on [T_1, T_2].
This result is not surprising. Since the spike density is above the Nyquist rate, we could have also
used a canonical time decoding machine (TDM) [13] to first perfectly recover the filter output v(t)
and then employ one of the widely available LTI system techniques to estimate (Ph)(t).
However, the problem becomes much more difficult if the spike density is below the Nyquist rate.
[Figure 2 plots omitted: panels (a) and (b) showing h(t), (Ph)(t), u(t), the spike train (t_k)_{k∈ℤ}, the window W = (τ_1, τ_2) and its shifted copy (τ_1 − τ + T, τ_2 − τ + T), and ĥ(t) against Ph and h.]
Figure 2: Change of coordinates in Corollary 1. (a) Top: example of a causal impulse response h(t) with
supp(h) = [T_1, T_2], T_1 = 0. Middle: projection Ph of h onto some Ξ. Note that Ph is not causal and
supp(Ph) = ℝ. Bottom: h(t) and (Ph)(t) are plotted on the same set of axes. (b) Top: an input signal
u(t) with supp(Fu) = [−Ω, Ω]. Middle: only red spikes from a temporal window W = (τ_1, τ_2) are used to
construct ĥ(t). Bottom: Ph is approximated by ĥ(t) on t ∈ [T_1, T_2] using spike times (t_k − τ + T)_{k: t_k∈W}.
Theorem 2. (The Neuron Identification Machine) Let {u^i | supp(Fu^i) = [−Ω, Ω]}_{i=1}^N be a collection of N linearly independent and bounded stimuli at the input to a [Filter]-[Ideal IAF] neural
circuit with a dendritic processing filter h ∈ H. Furthermore, let 𝒯^i = (t_k^i)_{k∈ℤ} denote the output of
the neural circuit in response to the bounded input signal u^i. If Σ_{j=1}^N b/(Cδ) > Ω/π, then (Ph)(t) can
be identified perfectly from the collection of I/O pairs {(u^i, 𝒯^i)}_{i=1}^N.
Proof: Consider the SIMO TEM [14] depicted in Fig. 3(a). h(t) is the input to a population of N
[Filter]-[Ideal IAF] neural circuits. The spikes (t_k^i)_{k∈ℤ} at the output of each neural circuit represent
distinct measurements q_k^i = ⟨φ_k^i, Ph⟩ of (Ph)(t). Thus we can think of the q_k^i's as projections
of Ph onto (φ_1^1, φ_2^1, . . . , φ_k^1, . . . , φ_1^N, φ_2^N, . . . , φ_k^N, . . .). Since the filters are linearly independent
[14], it follows that, if {u^i}_{i=1}^N are appropriately bounded and Σ_{j=1}^N b/(Cδ) > Ω/π, or equivalently if the
number of neurons N > ΩCδ/(πb) = Ω/(πD), the set of functions {(ψ_k^j)_{k∈ℤ}}_{j=1}^N with ψ_k^j(t) = g(t − t_k^j), is
a frame for Ξ [14], [18]. Hence
\[
(Ph)(t) = \sum_{j=1}^{N} \sum_{k \in \mathbb{Z}} c_k^j \psi_k^j(t).
\tag{4}
\]
To find the coefficients c_k^j, we take the inner product of (4) with φ_l^1(t), φ_l^2(t), . . . , φ_l^N(t):
\[
\langle \phi_l^i, Ph \rangle = \sum_{k \in \mathbb{Z}} c_k^1 \langle \phi_l^i, \psi_k^1 \rangle + \sum_{k \in \mathbb{Z}} c_k^2 \langle \phi_l^i, \psi_k^2 \rangle + \cdots + \sum_{k \in \mathbb{Z}} c_k^N \langle \phi_l^i, \psi_k^N \rangle \triangleq q_l^i,
\]
for i = 1, . . . , N, l ∈ ℤ. Letting [G^{ij}]_{lk} = ⟨φ_l^i, ψ_k^j⟩, we obtain
\[
q_l^i = \sum_{k \in \mathbb{Z}} [G^{i1}]_{lk}\, c_k^1 + \sum_{k \in \mathbb{Z}} [G^{i2}]_{lk}\, c_k^2 + \cdots + \sum_{k \in \mathbb{Z}} [G^{iN}]_{lk}\, c_k^N,
\tag{5}
\]
for i = 1, . . . , N, l ∈ ℤ. Writing (5) in matrix form, we have q = Gc, where q = [q^1, q^2, . . . , q^N]^T
with [q^i]_l = Cδ − b(t_{l+1}^i − t_l^i), [G^{ij}]_{lk} = ∫_{t_l^i}^{t_{l+1}^i} u^i(s − t_k^j) ds and c = [c^1, c^2, . . . , c^N]^T. Finally,
to find the coefficients c_k^j, k ∈ ℤ, we compute c = G⁺q.
Corollary 2. Let {u^i}_{i=1}^N be as before, h ∈ H and Σ_{j=1}^N b/(Cδ) > Ω/π. Furthermore, let W = (τ_1, τ_2) so
that (τ_2 − τ_1) > (T_2 − T_1) and let τ = (τ_1 + τ_2)/2, T = (T_1 + T_2)/2. Then given the I/O pairs
{(u^i, 𝒯^i)}_{i=1}^N of the [Filter]-[Ideal IAF] neural circuit, (Ph)(t) can be approximated arbitrarily
closely on t ∈ [T_1, T_2] by ĥ(t) = Σ_{j=1}^N Σ_{k: t_k^j∈W} c_k^j ψ_k^j(t), where ψ_k^j(t) = g(t − (t_k^j − τ + T)), c =
G⁺q, with [G^{ij}]_{lk} = ∫_{t_l^i}^{t_{l+1}^i} u^i(s − (t_k^j − τ + T)) ds, q = [q^1, q^2, . . . , q^N]^T, [q^i]_l = Cδ − b(t_{l+1}^i − t_l^i)
for all k, l ∈ ℤ, provided that |τ_1| and |τ_2| are sufficiently large.
Proof: Similar to Corollary 1.
Corollary 3. Let supp(Fu) = [−Ω, Ω], h ∈ H and let {W^i = (τ_1^i, τ_2^i)}_{i=1}^N be a collection of
windows of fixed length (τ_2^i − τ_1^i) > (T_2 − T_1), i = 1, 2, ..., N. Furthermore, let τ^i = (τ_1^i + τ_2^i)/2,
T = (T_1 + T_2)/2 and let (t_k^i)_{k∈ℤ} denote those spikes of the I/O pair (u, 𝒯) that belong to W^i. Then
Ph can be approximated arbitrarily closely on [T_1, T_2] by
\[
\hat{h}(t) = \sum_{j=1}^{N} \sum_{k:\, t_k \in W^j} c_k^j \psi_k^j(t),
\]
where ψ_k^j(t) = g(t − (t_k^j − τ^j + T)), c = G⁺q with [G^{ij}]_{lk} = ∫_{t_l^i}^{t_{l+1}^i} u(s − (t_k^j − τ^j + T)) ds,
q = [q^1, q^2, . . . , q^N]^T, [q^i]_l = Cδ − b(t_{l+1}^i − t_l^i) for all k, l ∈ ℤ, provided that the number of
non-overlapping windows N is sufficiently large.
Proof: The input signal u restricted, respectively, to the collection of intervals {W^i = (τ_1^i, τ_2^i)}_{i=1}^N
plays the same role here as the test stimuli {u^i}_{i=1}^N in Corollary 2. See also Remark 9 in [14].
[Figure 3 diagrams omitted: (a) N parallel [Filter]-[Ideal IAF] branches with inputs u^1(t), . . . , u^N(t), each with integrator 1/C, bias b, threshold δ and voltage reset to 0, producing spike trains (t_k^1), . . . , (t_k^N); (b) the reconstruction c = G⁺q, impulse trains Σ_{k∈ℤ} c_k^j δ(t − t_k^j), and a final filter g(t) yielding (Ph)(t).]
Figure 3: The Neuron Identification Machine. (a) SIMO TEM interpretation of the identification problem
with (t_k^i) = (t_k)_{k: t_k∈W^i}, i = 1, 2, . . . , N. (b) Block diagram of the algorithm in Theorem 2.
Remark 4. The methodology presented in Theorem 2 can easily be applied to other spiking neuron
models. For example, for the leaky IAF neuron, we have
\[
[q^i]_l = C\delta - bRC\left[1 - \exp\!\left(\frac{t_l^i - t_{l+1}^i}{RC}\right)\right], \qquad
[G^{ij}]_{lk} = \int_{t_l^i}^{t_{l+1}^i} u^i\big(s - t_k^j\big) \exp\!\left(\frac{s - t_{l+1}^i}{RC}\right) ds.
\]
Similarly, for a threshold-and-feedback (TAF) neuron [15] with a bias b ∈ ℝ₊, a threshold δ ∈ ℝ₊,
and a causal feedback filter with an impulse response f(t), t ∈ ℝ, we obtain
\[
[q^i]_l = \delta - b + \sum_{k<l} f\big(t_l^i - t_k^i\big), \qquad
[G^{ij}]_{lk} = u^i\big(t_l^i - t_k^j\big).
\]
3.3 Identifying Parameters of the Spiking Neuron Model
If parameters of the spiking neuron model cannot be obtained through biophysical experiments, we
can use additional input stimuli to derive a neural circuit that is Ξ-I/O-equivalent to the original
circuit. For example, consider the circuit in Fig. 1(a). Rewriting the t-transform in (1), we obtain
\[
\frac{1}{b} \int_{t_k}^{t_{k+1}} (u * h)(s)\, ds = \int_{t_k}^{t_{k+1}} (u * h')(s)\, ds = q_k',
\]
where h'(t) = h(t)/b, t ∈ ℝ, and q_k' = Cδ/b − (t_{k+1} − t_k).
Setting u = 0, we can now compute Cδ/b = (t_{k+1} − t_k). Next we can use the NIM described in
Section 3.2 to identify with arbitrary precision the projection Ph' of h' onto Ξ. Thus we identify a
[Filter]-[Ideal IAF] circuit with a filter impulse response Ph', a bias b' = 1, a capacitance C' = 1
and a threshold δ' = Cδ/b. This neural circuit is Ξ-I/O-equivalent to the circuit in Fig. 1(b).
4 Examples
We now demonstrate the performance of the identification algorithm in Corollary 3. We model the
dendritic processing filter using a causal linear kernel h(t) = c·e^{−αt}[(αt)³/3! − (αt)⁵/5!] with
t ∈ [0, 0.1 s], c = 3 and α = 200. The general form of this kernel was suggested in [19] as a
plausible approximation to the temporal structure of a visual receptive field.
We use two different bandlimited signals and show that the identification results fundamentally
depend on the signal bandwidth Ω. In Fig. 4 the signal is bandlimited to Ω = 2π·25 rad/s, whereas
in Fig. 5 it is bandlimited to Ω = 2π·100 rad/s. Although in principle the kernel h has an infinite
bandwidth (having a finite temporal support), its effective bandwidth Ω ≈ 2π·100 rad/s (Fig. 6(b)).
Thus in Fig. 4 we reconstruct the projection Ph of the kernel h onto Ξ with Ω = 2π·25 rad/s,
whereas in Fig. 5 we reconstruct nearly h itself.
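For reference, the kernel and a numerical approximation of its projection Ph can be written down directly; the following sketch is ours, with the grid, step size and truncation being arbitrary illustrative choices.

```python
import numpy as np

def h(t, c=3.0, alpha=200.0):
    """h(t) = c * exp(-alpha*t) * ((alpha*t)^3/3! - (alpha*t)^5/5!) for t >= 0."""
    at = alpha * np.maximum(t, 0.0)
    return np.where(t >= 0, c * np.exp(-at) * (at**3 / 6.0 - at**5 / 120.0), 0.0)

Omega = 2 * np.pi * 25.0                     # stimulus bandwidth of Fig. 4
dt = 1e-4
t = np.arange(-0.1, 0.2, dt)
g = (Omega / np.pi) * np.sinc(Omega * t / np.pi)
Ph = np.convolve(h(t), g, mode="same") * dt  # projection of h onto Xi, Ph = h * g
```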
[Figure 4 plots omitted: panels (a)-(f).]
Figure 4: Identifying dendritic processing in the [Filter]-[Ideal IAF] neural circuit, Ω = 2π·25 rad/s.
(a) Signal u(t) at the input to the circuit. (b) The output of the circuit is a set of spikes at times (t_k)_{k∈ℤ}. The
spike density D = 40 Hz. Note that only 25 spikes from 5 temporal windows are used to construct ĥ. (c) The
RMSE between ĥ (red) and Ph (blue) is 2.04×10⁻⁴. The RMSE between ĥ (red) and h (dashed black) is
1.53×10⁻¹. (d)-(f) Spectral estimates of u, h and v = u ∗ h. Note that supp(Fu) = [−Ω, Ω] = supp(Fv)
but supp(Fh) ⊄ [−Ω, Ω]. In other words, both u, v ∈ Ξ but h ∉ Ξ.
[Figure 5 plots omitted: panels (a)-(f).]
Figure 5: Identifying dendritic processing of the [Filter]-[Ideal IAF] neural circuit, Ω = 2π·100 rad/s.
(a) Signal u(t) at the input to the circuit. (b) The output of the circuit is a set of spikes at times (t_k)_{k∈ℤ}. The
spike density D = 40 Hz. Note that only 43 spikes from 10 temporal windows are used to construct ĥ. (c) The
RMSE between ĥ (red) and Ph (blue) is 1.13×10⁻³. The RMSE between ĥ (red) and h (dashed black) is
4.58×10⁻³. (d)-(f) Spectral estimates of u, h and v = u ∗ h. Note that supp(Fu) = [−Ω, Ω] = supp(Fv)
but supp(Fh) ⊄ [−Ω, Ω]. In other words, both u, v ∈ Ξ but h ∉ Ξ.
Next, we evaluate the filter identification error as a function of the number of temporal windows N
and the stimulus bandwidth Ω. By increasing N, we can approximate the projection Ph of h with
arbitrary precision (Fig. 6(a)). Note that the estimate ĥ converges to Ph faster for a higher average
spike rate (spike density D) of the neuron. At the same time, by increasing the stimulus bandwidth
Ω, we can approximate h itself with arbitrary precision (Fig. 6(b)).
? P h) vs. the number of temporal windows
MSE( h,
(a)
20
D = 20 Hz
? P h), [dB]
MSE( h,
0
D = 40 Hz
D = 60 Hz
?20
?/(?D 1 )
?40
?/(?D 2 )
?/(?D 3 )
?60
?80
?100
0
5
10
15
Number of windows N
20
25
? h) vs. the input signal bandwidth
MSE( h,
(b)
0
D = 60Hz, N = 10
h
?
h
?10
? h), [dB]
MSE( h,
30
?20
?30
h
?
h
?40
?50
?60
?70
10
20
30
40
50
60
70
80
90
100
Input signal bandwidth ?/(2?), [Hz]
110
120
130
140
150
? Ph) as a function of the number of temporal windows
Figure 6: The Filter Identification Error. (a) MSE(h,
N . The larger the neuron spike density D, the faster the algorithm converges. The impulse response h is the
? h) as a function of
same as in Fig. 4, 5 and the input signal bandwidth is ? = 2? ? 100 rad/s. (b) MSE(h,
? approximates h. Note that
the input signal bandwidth ?. The larger the bandwidth, the better the estimate h
significant improvement is seen even for ? > 2??100 rad/s, which is roughly the effective bandwidth of h.
5 Conclusion
Previous work in system identification of neural circuits (see [20] and references therein) calls for
parameter identification using white noise input stimuli. The identification process for, e.g., the LNP
model entails identification of the linear filter, followed by a "best-of-fit" procedure to find the nonlinearity. The performance of such an identification method has not been analytically characterized.
In our work, we presented a methodology for identifying dendritic processing in simple [Filter]-[Spiking Neuron] models from a single input stimulus. The discussed spiking neurons include the
ideal IAF neuron, the leaky IAF neuron and the threshold-and-feedback (TAF) neuron. However, the methods
presented in this paper are applicable to many other spiking neuron models as well.
The algorithm of the Neuron Identification Machine is based on the natural assumption that the dendritic processing filter has a finite temporal support. Therefore, its action on the input stimulus can
be observed in non-overlapping temporal windows. The filter is recovered with arbitrary precision
from an input/output pair of a neural circuit, where the input is a single signal assumed to be bandlimited. Remarkably, the algorithm converges for a very small number of spikes. This should be
contrasted with the reverse correlation and spike-triggered average methods [20].
Finally, the work presented here will be extended to spiking neurons with random parameters.
Acknowledgement
The work presented here was supported by NIH under the grant number R01DC008701-01.
References
[1] Maria N. Geffen, Bede M. Broome, Gilles Laurent, and Markus Meister. Neural encoding of rapidly fluctuating odors. Neuron, 61(4):570–586, 2009.
[2] Sean J. Slee, Matthew H. Higgs, Adrienne L. Fairhall, and William J. Spain. Two-dimensional time coding in the auditory brainstem. The Journal of Neuroscience, 25(43):9978–9988, October 2005.
[3] Nicole C. Rust, Odelia Schwartz, J. Anthony Movshon, and Eero P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46:945–956, 2005.
[4] Daniel P. Dougherty, Geraldine A. Wright, and Alice C. Yew. Computational model of the cAMP-mediated sensory response and calcium-dependent adaptation in vertebrate olfactory receptor neurons. Proceedings of the National Academy of Sciences, 102(30):10415–10420, 2005.
[5] Yuqiao Gu, Philippe Lucas, and Jean-Pierre Rospars. Computational model of the insect pheromone transduction cascade. PLoS Computational Biology, 5(3), 2009.
[6] Zhuoyi Song, Daniel Coca, Stephen Billings, Marten Postma, Roger C. Hardie, and Mikko Juusola. Biophysical modeling of a Drosophila photoreceptor. In Lecture Notes in Computer Science, volume 5863 of Proceedings of the 16th International Conference on Neural Information Processing: Part I, pages 57–71. Springer-Verlag, 2009.
[7] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12:199–213, 2001.
[8] Jonathan W. Pillow and Eero P. Simoncelli. Dimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis. Journal of Vision, 6:414–428, 2006.
[9] J. J. Eggermont, A. M. H. J. Aertsen, and P. I. M. Johannesma. Quantitative characterization procedure for auditory neurons based on the spectro-temporal receptive field. Hearing Research, 10, 1983.
[10] Anmo J. Kim, Aurel A. Lazar, and Yevgeniy B. Slutskiy. System identification of Drosophila olfactory sensory neurons. Journal of Computational Neuroscience, 2010.
[11] Sydney Cash and Rafael Yuste. Linear summation of excitatory inputs by CA1 pyramidal neurons. Neuron, 22:383–394, 1999.
[12] Jonathan Pillow. Neural coding and the statistical modeling of neuronal responses. PhD thesis, New York University, May 2005.
[13] Aurel A. Lazar and László T. Tóth. Perfect recovery and sensitivity analysis of time encoded bandlimited signals. IEEE Transactions on Circuits and Systems-I: Regular Papers, 51(10):2060–2073, October 2004.
[14] Aurel A. Lazar and Eftychios A. Pnevmatikakis. Faithful representation of stimuli with a population of integrate-and-fire neurons. Neural Computation, 20(11):2715–2744, November 2008.
[15] Justin Keat, Pamela Reinagel, R. Clay Reid, and Markus Meister. Predicting every spike: A model for the responses of visual neurons. Neuron, 30:803–817, June 2001.
[16] Michael Reed and Barry Simon. Methods of Modern Mathematical Physics, Vol. 1, Functional Analysis. Academic Press, 1980.
[17] Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, 2004.
[18] Ole Christensen. An Introduction to Frames and Riesz Bases. Applied and Numerical Harmonic Analysis. Birkhäuser, 2003.
[19] Edward H. Adelson and James R. Bergen. Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America, 2(2), February 1985.
[20] Michael C.-K. Wu, Stephen V. David, and Jack L. Gallant. Complete functional characterization of sensory neurons by system identification. Annual Reviews of Neuroscience, 29:477–505, 2006.
3,392 | 4,071 | Dynamic Infinite Relational Model
for Time-varying Relational Data Analysis
Katsuhiko Ishiguro Tomoharu Iwata
Naonori Ueda
NTT Communication Science Laboratories
Kyoto, 619-0237 Japan
{ishiguro,iwata,ueda}@cslab.kecl.ntt.co.jp
Joshua Tenenbaum
MIT
Boston, MA.
[email protected]
Abstract
We propose a new probabilistic model for analyzing dynamic evolutions of relational data, such as additions, deletions and split & merge, of relation clusters like
communities in social networks. Our proposed model abstracts observed timevarying object-object relationships into relationships between object clusters. We
extend the infinite Hidden Markov model to follow dynamic and time-sensitive
changes in the structure of the relational data and to estimate a number of clusters
simultaneously. We show the usefulness of the model through experiments with
synthetic and real-world data sets.
1 Introduction
Analysis of "relational data", such as the hyperlink structure on the Internet, friend links on social
networks, or bibliographic citations between scientific articles, is useful in many aspects. Many
statistical models for relational data have been presented [10, 1, 18]. The stochastic block model
(SBM) [11] and the infinite relational model (IRM) [8] partition objects into clusters so that the
relations between clusters abstract the relations between objects well. SBM requires specifying the
number of clusters in advance, while IRM automatically estimates the number of clusters. Similarly,
the mixed membership model [2] associates each object with multiple clusters (roles) rather than a
single cluster.
These models treat the relations as static information. However, a large amount of relational data
in the real world is time-varying. For example, hyperlinks on the Internet are not stationary since
links disappear while new ones appear every day. Human relationships in a company sometimes
drastically change by the splitting of an organization or the merging of some groups due to e.g.
Mergers and Acquisitions. One of our modeling goals is to detect these sudden changes in network
structure that occur over time.
Recently some researchers have investigated the dynamics in relational data. Tang et al.[13] proposed a spectral clustering-based model for multi-mode, time-evolving relations. Yang et al.[16]
developed the time-varying SBM. They assumed a transition probability matrix like HMM, which
governs all the cluster assignments of objects for all time steps. This model has only one transition
probability matrix for the entire data. Thus, it cannot represent more complicated time variations
such as split & merge of clusters that only occur temporarily. Fu et al.[4] proposed a time-series
extension of the mixed membership model. [4] assumes a continuous world view: roles follow a
mixed membership structure; model parameters evolve continuously in time. This model is very
general for time series relational data modeling, and is good for tracking gradual and continuous
changes of the relationships. Some works in bioinformatics [17, 5] have also adopted similar strategies. However, a continuous model approach does not necessarily best capture sudden transitions of
the relationships we are interested in. In addition, previous models assume the number of clusters is
fixed and known, which is difficult to determine a priori.
In this paper we propose yet another time-varying relational data model that deals with temporal
and dynamic changes of cluster structures such as additions, deletions and split & merge of clusters. Instead of the continuous world view of [4], we assume a discrete structure: distinct clusters
with discrete transitions over time, allowing for birth, death and split & merge dynamics. More
specifically, we extend IRM for time-varying relational data by using a variant of the infinite HMM
(iHMM) [15, 3]. By incorporating the idea of iHMM, our model is able to infer clusters of objects
without specifying a number of clusters in advance. Furthermore, we assume multiple transition
probabilities that are dependent on time steps and clusters. This specific form of iHMM enables the
model to represent time-sensitive dynamic properties such as split & merge of clusters. Inference is
performed efficiently with the slice sampler.
2 Infinite Relational Model
We first explain the infinite relational model (IRM) [8], which can estimate the number of hidden
clusters from relational data. In the IRM, a Dirichlet process (DP) is used as a prior for clusters of an
unknown number, and is denoted as DP(α, G_0), where α > 0 is a parameter and G_0 is a base measure.
We write G ∼ DP(α, G_0) when a distribution G(·) is sampled from the DP. In this paper, we implement
the DP by using a stick-breaking process [12], which is based on the fact that G is represented as an
infinite mixture of δs: G(·) = Σ_{k=1}^∞ π_k δ_{θ_k}(·), θ_k ∼ G_0. Here π = (π_1, π_2, . . .) is a mixing ratio vector with
infinite elements whose sum equals one, constructed in a stochastic way:
\[
\pi_k = v_k \prod_{l=1}^{k-1} (1 - v_l), \qquad v_k \sim \mathrm{Beta}(1, \alpha).
\tag{1}
\]
Here v_k is drawn from a Beta distribution with parameter α.
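A truncated draw from the stick-breaking construction of Eq. (1) can be written in a few lines; the truncation level K is our simplification for illustration.

```python
import numpy as np

def stick_breaking(alpha, K, seed=0):
    """First K weights of pi ~ Stick(alpha), Eq. (1)."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=K)
    pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return pi  # the mass 1 - pi.sum() belongs to the remaining (infinite) sticks
```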
The IRM is an application of the DP to relational data. Let us assume a binary two-place relation
on the set of objects D = {1, 2, . . . , N} as D × D → {0, 1}. For simplicity, we only discuss a two-place relation on the identical domain (D × D). The IRM divides the set of N objects into
multiple clusters based on the observed relational data X = {x_{i,j} ∈ {0, 1}; 1 ≤ i, j ≤ N}. The IRM
is able to infer the number of clusters at the same time because it uses the DP as a prior distribution
over cluster partitions. The observation x_{i,j} ∈ {0, 1} denotes the existence of a relation between objects
i, j ∈ {1, 2, . . . , N}. If there is (not) a relation between i and j, then x_{i,j} = 1 (0). We allow asymmetric
relations x_{i,j} ≠ x_{j,i} throughout the paper.
The probabilistic generative model (Fig. 1(a)) of the IRM is as follows:
\begin{align}
\pi \,|\, \alpha &\sim \mathrm{Stick}(\alpha), \tag{2} \\
z_i \,|\, \pi &\sim \mathrm{Multinomial}(\pi), \tag{3} \\
\theta_{k,l} \,|\, a, b &\sim \mathrm{Beta}(a, b), \tag{4} \\
x_{i,j} \,|\, Z, H &\sim \mathrm{Bernoulli}\big(\theta_{z_i, z_j}\big). \tag{5}
\end{align}
Here, Z = {z_i}_{i=1}^N and H = {θ_{k,l}}_{k,l=1}^∞. In Eq. (2) "Stick" is the stick-breaking process (Eq. (1)). We
sample a cluster index of the object i, z_i = k, k ∈ {1, 2, . . .}, using π as in Eq. (3). In Eq. (4) θ_{k,l} is
the strength of the relation between the objects in clusters k and l. Generating the observed relational
data x_{i,j} follows Eq. (5), conditioned on the cluster assignments Z and the strengths H.
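For concreteness, a draw from a truncated version of this generative model (the truncation at K clusters and the symbols a, b for the Beta hyperparameters are our choices) can be sketched as follows.

```python
import numpy as np

def sample_irm(N, alpha=1.0, a=0.5, b=0.5, K=20, seed=0):
    """Draw (Z, H, X) from a truncated version of Eqs. (2)-(5)."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=K)
    pi = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    pi = pi / pi.sum()                                   # renormalize the truncation
    z = rng.choice(K, size=N, p=pi)                      # Eq. (3)
    theta = rng.beta(a, b, size=(K, K))                  # Eq. (4)
    X = rng.binomial(1, theta[z[:, None], z[None, :]])   # Eq. (5), asymmetric relations
    return z, theta, X
```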
3 Dynamic Infinite Relational Model (dIRM)
3.1 Time-varying relational data
First, we define the time-varying relational data considered in this paper. Time-varying relational
data X have three subscripts t, i, and j: X = {x_{t,i,j} ∈ {0, 1}}, where i, j ∈ {1, 2, . . . , N}, t ∈
{1, 2, . . . , T}. x_{t,i,j} = 1 (0) indicates that there is (not) an observed relationship between objects i
and j at time step t. T is the number of time steps, and N is the number of objects. We assume
that there is no relation between objects belonging to different time steps t and t′. The time-varying
relational data X is a set of T (static) relational data for T time steps.
[Figure 1 plate diagrams omitted: panels (a)-(c) show the dependencies among the hyperparameters, the (time-dependent) cluster assignments z_i or z_{t,i}, the strengths θ_{k,l} and the observations x_{i,j} or x_{t,i,j}, with plates of sizes N, N×N and T.]
Figure 1: Graphical models of (a) the IRM (Eqs. (2)-(5)), (b) the "tIRM" (Eqs. (7)-(10)), and (c) the dIRM (Eqs. (11)-(15)).
Circle nodes denote variables, square nodes are constants and shaded nodes indicate observations.
It is natural to assume that every object transits between different clusters along with the time evolution. Observing several real-world time-varying relational data, we assume several properties of transitions, as follows:
• P1. Cluster assignments in consecutive time steps have higher correlations.
• P2. Time evolutions of clusters are not stationary nor uniform.
• P3. The number of clusters is time-varying and unknown a priori.
P1 is a common assumption for many kinds of time series data, not limited to relational data. For
example, a member of a firm community on SNSs will belong to the same community for a long
time. A hyperlink structure in a news website may alter because of breaking news, but most of the
site does not change as rapidly every minute.
P2 tries to model occasional and drastic changes from frequent and minor modifications in relational networks. Such unstable changes are observed elsewhere. For example, human relationships
in companies will evolve every day, but a merger of departments sometimes brings about drastic
changes. On an SNS, a user community for the upcoming Olympics games may exist for a limited
time: it will not last years after the games end. This will cause an addition and deletion of a user
cluster (community). P3 is indispensable to track such changes of clusters.
3.2 Naive extensions of IRM
We attempt to modify the IRM to satisfy these properties. We first consider several straightforward
solutions based on the IRM for analyzing time-varying relational data.
The simplest way is to convert the time-varying relational data X into "static" relational data X̄ = {x̄_{i,j}}
and apply the IRM to X̄. For example, we can generate X̄ as follows:
\[
\bar{x}_{i,j} =
\begin{cases}
1 & \frac{1}{T}\sum_{t=1}^{T} x_{t,i,j} > \omega, \\
0 & \text{otherwise},
\end{cases}
\tag{6}
\]
where ω denotes a threshold. This solution cannot represent the time changes of clustering because
it assumes the same clustering result for all the time steps (z_{1,i} = z_{2,i} = · · · = z_{T,i}).
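In code, Eq. (6) is a one-line threshold over the time axis; the sketch below is ours and assumes X is stored as an array of shape (T, N, N).

```python
import numpy as np

def binarize(X, omega=0.5):
    """Collapse time-varying relations X[t, i, j] into static ones, Eq. (6)."""
    return (X.mean(axis=0) > omega).astype(int)
```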
We may separate the time-varying relational data X into a series of time step-wise relational data Xt
and apply the IRM for each X_t. In this case, we will have a different clustering result for each time
step, but the analysis ignores the dependency of the data over time.
Another solution is to extend the object assignment variable z_i to be time-dependent, z_{t,i}. The resulting "tIRM" model is described as follows (Fig. 1(b)):
\begin{align}
\pi \,|\, \alpha &\sim \mathrm{Stick}(\alpha), \tag{7} \\
z_{t,i} \,|\, \pi &\sim \mathrm{Multinomial}(\pi), \tag{8} \\
\theta_{k,l} \,|\, a, b &\sim \mathrm{Beta}(a, b), \tag{9} \\
x_{t,i,j} \,|\, Z_t, H &\sim \mathrm{Bernoulli}\big(\theta_{z_{t,i}, z_{t,j}}\big). \tag{10}
\end{align}
Here, Z_t = {z_{t,i}}_{i=1}^N. Since π is shared over all time steps, we may expect that the clustering results
between time steps will have higher correlations. However, this model assumes that the z_{t,i} are conditionally independent of each other for all t given π. This implies that the tIRM is not suitable for
modeling time evolutions, since the order of time steps is ignored in the model.
3.3 dynamic IRM
To address the three conditions P1-P3 above, we propose a new probabilistic model called the dynamic
infinite relational model (dIRM). The generative model is given below:
\begin{align}
\beta \,|\, \gamma &\sim \mathrm{Stick}(\gamma), \tag{11} \\
\pi_{t,k} \,|\, \alpha_0, \beta, \kappa &\sim \mathrm{DP}\!\left(\alpha_0 + \kappa,\; \frac{\alpha_0 \beta + \kappa \delta_k}{\alpha_0 + \kappa}\right), \tag{12} \\
z_{t,i} \,|\, z_{t-1,i}, \pi_t &\sim \mathrm{Multinomial}\big(\pi_{t, z_{t-1,i}}\big), \tag{13} \\
\theta_{k,l} \,|\, a, b &\sim \mathrm{Beta}(a, b), \tag{14} \\
x_{t,i,j} \,|\, Z_t, H &\sim \mathrm{Bernoulli}\big(\theta_{z_{t,i}, z_{t,j}}\big). \tag{15}
\end{align}
Here, π_t = {π_{t,k} : k = 1, . . . , ∞}. A graphical model of the dIRM is presented in Fig. 1(c).
β in Eq. (11) represents the time-average memberships (mixing ratios) of clusters. The newly introduced
π_{t,k} = (π_{t,k,1}, π_{t,k,2}, . . . , π_{t,k,l}, . . .) in Eq. (12) is the transition probability that an object remaining in
cluster k ∈ {1, 2, . . .} at time t − 1 will move to cluster l ∈ {1, 2, . . .} at time t. Because of the DP,
this transition probability is able to handle infinite hidden states like the iHMM [14].
The DP used in Eq. (12) has an additional term κ > 0, which was introduced by Fox et al. [3]. δ_k is
a vector whose elements are zero except the kth element, which is one. Because the base measure
in Eq. (12) is biased by κ and δ_k, the kth element of π_{t,k} prefers to take a larger value than the other
elements. This implies that this DP encourages the self-transitions of objects, and we can achieve
property P1 for time-varying relational data.
One difference from conventional iHMMs [14, 3] lies in P2, which is achieved by making the
transition probability π time-dependent. π_{t,k} is sampled for every time step t; thus, we can model
time-varying patterns of transitions, including additions, deletions and split & merge of clusters as
extreme cases. These changes happen only temporarily; therefore, time-dependent transition probabilities are indispensable for our purpose. Note that the transition probability is also dependent
on the cluster index k, as in conventional iHMMs. Also, the dIRM can automatically determine the
number of clusters thanks to the DP; this enables us to satisfy P3.
Equation (13) generates a cluster assignment for the object i at time t, based on the cluster where
the object was previously (z_{t−1,i}) and its transition probability π. Equation (14) generates a strength
parameter θ for the pair of clusters k and l; then we obtain the observed sample x_{t,i,j} in Eq. (15).
The difference between iHMMs and the dIRM is two-fold. One is the time-dependent transition probability of the dIRM discussed above. The other is that iHMMs have one hidden state sequence
s_{1:T} to be inferred, while the dIRM needs to estimate multiple hidden state sequences z_{1:T,i} given one
time-sequence observation. Thus, we may interpret the dIRM as an extension of the iHMM which
has N (= the number of objects) hidden sequences to handle relational data.
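A truncated generative sketch of Eqs. (11)-(15) is given below. It is ours: the stick is cut at K atoms, each sticky-DP row of Eq. (12) is approximated by a finite Dirichlet with the self-transition-biased base measure, and the initial assignments are drawn from β, which is our choice since the model leaves the initial state implicit.

```python
import numpy as np

def sample_dirm(T, N, gamma=1.0, alpha0=1.0, kappa=5.0, a=0.5, b=0.5, K=15, seed=0):
    """Truncated generative draw (Z, X) from the dIRM."""
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, gamma, size=K)
    beta = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    beta = beta / beta.sum()                              # Eq. (11), truncated
    theta = rng.beta(a, b, size=(K, K))                   # Eq. (14)
    z = np.empty((T, N), dtype=int)
    z[0] = rng.choice(K, size=N, p=beta)                  # initial state (our choice)
    X = np.empty((T, N, N), dtype=int)
    for t in range(T):
        if t > 0:
            # Eq. (12): time- and cluster-dependent, self-transition-biased rows
            pi_t = np.stack([rng.dirichlet(alpha0 * beta + kappa * np.eye(K)[k])
                             for k in range(K)])
            z[t] = [rng.choice(K, p=pi_t[z[t - 1, i]]) for i in range(N)]  # Eq. (13)
        X[t] = rng.binomial(1, theta[z[t][:, None], z[t][None, :]])        # Eq. (15)
    return z, X
```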
4 Inference
We use a slice sampler [15], which enables fast and efficient sampling of the sequential hidden states.
The slice sampler introduces auxiliary variables U = {u_{t,i}}. Given U, the number of clusters can be
reduced to a finite number during the inference, which enables efficient sampling of the variables.
4.1 Sampling parameters
First, we explain the sampling of an auxiliary variable u_{t,i}. We assume a uniform prior on u_{t,i}. We also define the joint distribution of u, z, and x:
\[
p\big(x_{t,i,j}, u_{t,i}, u_{t,j}, z_{t-1:t,i}, z_{t-1:t,j}\big) = I\big(u_{t,i} < \pi_{t, z_{t-1,i}, z_{t,i}}\big)\, I\big(u_{t,j} < \pi_{t, z_{t-1,j}, z_{t,j}}\big)\, \theta_{z_{t,i}, z_{t,j}}^{x_{t,i,j}} \big(1 - \theta_{z_{t,i}, z_{t,j}}\big)^{1 - x_{t,i,j}}.
\tag{16}
\]
Here, I(·) is 1 if the predicate holds, and zero otherwise. Using Eq. (16), we can derive the posterior of
u_{t,i} as follows:
\[
u_{t,i} \sim \mathrm{Uniform}\big(0, \pi_{t, z_{t-1,i}, z_{t,i}}\big).
\tag{17}
\]
Next, we explain the sampling of an object assignment variable z_{t,i}. We define the following message
variable p:
\[
p_{t,i,k} = p\big(z_{t,i} = k \,|\, X_{1:t}, U_{1:t}, \pi, H, \beta\big).
\tag{18}
\]
Sampling of z_{t,i} is similar to the forward-backward algorithm for the original HMM. First, we compute the above message variables from t = 1 to t = T (forward filtering). Next, we sample z_{t,i} from
t = T to t = 1 using the computed message variables (backward sampling).
In forward filtering we compute the following equation from t = 1 to t = T:
\[
p_{t,i,k} \propto p\big(x_{t,i,i} \,|\, z_{t,i} = k, H\big) \prod_{j \neq i} p\big(x_{t,i,j} \,|\, z_{t,i} = k, H\big)\, p\big(x_{t,j,i} \,|\, z_{t,i} = k, H\big) \sum_{l:\, u_{t,i} < \pi_{t,l,k}} p_{t-1,i,l}.
\tag{19}
\]
Note that the summation is conditioned by u_{t,i}. The number of cluster indices l that satisfy this
condition is limited to a certain finite number. Thus, we can evaluate the above equation.
In backward sampling, we sample z_{t,i} from t = T to t = 1 from the equation below:
\[
p\big(z_{t,i} = k \,|\, z_{t+1,i} = l\big) \propto p_{t,i,k}\, \pi_{t+1,k,l}\, I\big(u_{t+1,i} < \pi_{t+1,k,l}\big).
\tag{20}
\]
Because of I(u < π), the values of the cluster indices k are limited to a finite set. Therefore, the variety
of sampled z_{t,i} is limited to a certain finite number K, given U.
Given U and Z, we have a finite number K of realized clusters. Thus, computing the posteriors of \pi_{t,k} and \eta_{k,l} becomes easy and straightforward. First, \beta is treated as a (K+1)-dimensional vector (the mixing ratios of unrepresented clusters are aggregated in \beta_{K+1} = 1 - \sum_{k=1}^{K} \beta_k). Let m_{t,k,l} denote the number of objects i such that z_{t-1,i} = k and z_{t,i} = l. Also, let N_{k,l} denote the number of x_{t,i,j} such that z_{t,i} = k and z_{t,j} = l. Similarly, n_{k,l} denotes the number of x_{t,i,j} such that z_{t,i} = k, z_{t,j} = l and x_{t,i,j} = 1. Then we obtain the following posteriors:

\pi_{t,k} \sim \mathrm{Dirichlet}(\alpha_0 \beta + \kappa \delta_k + m_{t,k}).   (21)

\eta_{k,l} \sim \mathrm{Beta}(\xi + n_{k,l}, \lambda + N_{k,l} - n_{k,l}).   (22)

Here m_{t,k} is a (K+1)-dimensional vector whose l-th element is m_{t,k,l} (with m_{t,k,K+1} = 0), and \xi and \lambda denote the two parameters of the Beta prior on \eta_{k,l}.

We omit the derivation of the posterior of \beta since it is almost the same as that of Fox et al. [3].
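For illustration, a minimal sketch of the conditional updates (21)-(22). The helper is our own, and xi and lam stand for the two parameters of the Beta prior on \eta_{k,l}:

```python
import numpy as np

def resample_parameters(z, x, beta, alpha0, kappa, xi, lam, rng):
    # z: (T, N) cluster assignments; x: (T, N, N) binary observations; beta: (K+1,) mixing ratios
    T, N = z.shape
    K = z.max() + 1
    pi = np.zeros((T, K, K + 1))
    for t in range(1, T):
        for k in range(K):
            m = np.bincount(z[t][z[t - 1] == k], minlength=K)  # counts m_{t,k,l}
            conc = alpha0 * beta + np.append(m, 0.0)           # m_{t,k,K+1} = 0
            conc[k] += kappa                                   # self-transition bias
            pi[t, k] = rng.dirichlet(conc)                     # Eq. (21)
    eta = np.zeros((K, K))
    for k in range(K):
        for l in range(K):
            mask = (z[:, :, None] == k) & (z[:, None, :] == l) # pairs with z_{t,i}=k, z_{t,j}=l
            Nkl = mask.sum()                                   # N_{k,l}
            nkl = x[mask].sum()                                # n_{k,l}
            eta[k, l] = rng.beta(xi + nkl, lam + Nkl - nkl)    # Eq. (22)
    return pi, eta
```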
4.2 Sampling hyperparameters
Sampling the hyperparameters is important for obtaining the best results. This could normally be done by putting vague prior distributions on them [14]. However, it is difficult to evaluate the precise posteriors for some hyperparameters [3]. Instead, we reparameterize and sample a hyperparameter in terms of a \in (0, 1) [6]. For example, if a hyperparameter \gamma is assumed to be Gamma-distributed, we convert \gamma by a = \gamma / (1 + \gamma). Sampling a can then be carried out on a uniform grid over (0, 1): we compute (unnormalized) posterior probability densities at several values of a and choose one to update the hyperparameter.
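A sketch of this reparameterized grid update for a single Gamma-distributed hyperparameter. Here log_posterior is a placeholder for the model's unnormalized log posterior as a function of the hyperparameter, and the grid size is our own choice:

```python
import numpy as np

def resample_hyperparameter(log_posterior, rng, grid_size=100):
    a = (np.arange(grid_size) + 0.5) / grid_size   # uniform grid on (0, 1)
    gamma = a / (1.0 - a)                          # invert a = gamma / (1 + gamma)
    log_w = np.array([log_posterior(g) for g in gamma])
    w = np.exp(log_w - log_w.max())                # unnormalized posterior weights
    return gamma[rng.choice(grid_size, p=w / w.sum())]
```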
[Figure 2: binary adjacency matrices (i vs. j) of the four dataset snapshots; axis tick marks omitted.]

Figure 2: Example of real-world datasets. (a) IOtables data, observations at t = 1, (b) IOtables data, observations at t = 5, (c) Enron data, observations at t = 2, and (d) Enron data, observations at t = 10.
5 Experiments
Performance of the dIRM is compared with the original IRM [8] and its naive extension, the tIRM (described in Eqs. (7-10)). To apply the IRM to time-varying relational data, we apply Eq. (6) to X with a threshold of 0.5. The difference between the tIRM (Eqs. (7-10)) and the dIRM is that the tIRM does not incorporate the dependency between successive time steps while the dIRM does. Hyperparameters were estimated simultaneously in all experiments.
5.1 Datasets and measurements
We prepared two synthetic datasets (Synth1 and Synth2). To synthesize a dataset, we first determined the number of time steps T, the number of clusters K, and the number of objects N. Next, we manually assigned z_{t,i} in order to obtain cluster splits & merges, additions, and deletions. After obtaining Z, we defined the connection strengths between clusters H = {\eta_{k,l}}. In this experiment, each \eta_{k,l} takes one of two values: \eta = 0.1 (weakly connected) or \eta = 0.9 (strongly connected). The observation X was randomly generated according to Z and H. Synth1 is smaller (N = 16) and stable, while Synth2 is much larger (N = 54) and its objects actively transit between clusters.
Two real-world datasets were also collected. The first is the National Input-Output Tables for Japan (IOtables), provided by the Statistics Bureau of the Ministry of Internal Affairs and Communications of Japan. The IOtables summarize the transactions of goods and services between industrial sectors. We used an inverse coefficient matrix, which is a part of the IOtables. Each element e_{i,j} of the matrix indicates that one unit of demand in the j-th sector invokes e_{i,j} units of production in the i-th sector. We generated x_{i,j} from e_{i,j} by binarization: setting x_{i,j} = 1 if e_{i,j} exceeds the average, and setting x_{i,j} = 0 otherwise. We collected data from 1985, 1990, 1995, 2000, and 2005, at a resolution of 32 sectors. Thus we obtained time-varying relational data with N = 32 and T = 5.
The other real-world dataset is the Enron e-mail dataset [9], used in many studies including [13, 4]. We extracted e-mails sent in 2001. The number of time steps was T = 12, with the dataset divided into monthly transactions. The full dataset contains N = 151 persons; x_{t,i,j} = 1 (0) if there is (is not) an e-mail sent from i to j in month t. We also generated a smaller dataset (N = 68) by excluding, for convenience, those who send few e-mails. The quantitative measurements were computed on this smaller dataset.
Fig. 2 presents examples from the IOtables dataset ((a), (b)) and the Enron dataset ((c), (d)). The IOtables dataset is characterized by its stable relationships, compared to the Enron dataset. In the Enron dataset, the amount of communication rapidly increases after the media reported on the Enron scandals.
We used three evaluation measures. One is the Rand index, which computes the similarity between the true and estimated clustering results [7]. The Rand index takes its maximum value (1) if the two clustering results match completely. We computed the Rand index between the ground truth Z_t and the estimated \hat{Z}_t for each time step, and averaged the indices over the T steps. We also compute the error in the number of estimated clusters: differences in the number of realized clusters were computed between Z_t and \hat{Z}_t, and we calculated the average of these errors over the T steps.
Table 1: Computed Rand indices, numbers of erroneous clusters, and averaged test-data log likelihoods.

Data     | Rand index          | # of erroneous clusters | Test log likelihood
         | IRM   tIRM  dIRM    | IRM   tIRM  dIRM        | IRM     tIRM    dIRM
Synth1   | 0.796 0.946 0.982   | 1.00  0.20  0.13        | -0.542  -0.508  -0.505
Synth2   | 0.433 0.734 0.847   | 3.00  0.98  0.65        | -0.692  -0.393  -0.318
IOtables | -     -     -       | -     -     -           | -0.354  -0.358  -0.291
Enron    | -     -     -       | -     -     -           | -0.120  -0.135  -0.106
We calculated these measurements for the synthetic datasets. The third measure is an (approximate) test-data log likelihood. For all datasets, we generated noisy datasets in which some observation values are inverted. The number of inverted elements was kept small so that the inversions would not affect the global clustering results. The ratio of inverted elements over the entire set of elements was set to 5% for the two synthetic datasets, 1% for the IOtables data, and 0.5% for the Enron data. We made inferences on the noisy datasets, and computed the likelihoods that "inverted observations take the real value". We used the averaged log-likelihood per observation as the measurement.
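For reference, the averaged Rand index used above can be computed with a simple pair-counting implementation of [7]; the code below is our own sketch:

```python
import numpy as np
from itertools import combinations

def rand_index(z_true, z_est):
    # fraction of object pairs on which the two clusterings agree
    agree = total = 0
    for i, j in combinations(range(len(z_true)), 2):
        agree += (z_true[i] == z_true[j]) == (z_est[i] == z_est[j])
        total += 1
    return agree / total

def averaged_rand_index(Z_true, Z_est):
    # Z_true, Z_est: (T, N) assignments; average the per-step indices over T
    return np.mean([rand_index(zt, ze) for zt, ze in zip(Z_true, Z_est)])
```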
5.2 Results
First, we present the quantitative results. Table 1 lists the computed Rand indices, the errors in the estimated number of clusters, and the test-data log likelihoods. We confirmed that the dIRM outperformed the other models on all datasets for all measures. In particular, the dIRM showed good results on the Synth2 and Enron datasets, where the changes in relationships are highly dynamic and unstable. On the other hand, the dIRM did not achieve a remarkable improvement over the tIRM on the Synth1 dataset, whose temporal changes are small. Thus we can say that the dIRM is superior in modeling time-varying relational data, especially dynamic ones.
Next, we evaluate the results on the real-world datasets qualitatively. Figure 3 shows the results for the IOtables data. Panel (a) illustrates the estimated \eta_{k,l} obtained with the dIRM, and panel (b) presents the time evolution of cluster assignments. The dIRM obtained some reasonable and stable industrial clusters, as shown in Fig. 3 (b). For example, the dIRM groups the machine industries into cluster 5, and infrastructure-related industries are grouped into cluster 13. We believe that the self-transition bias \kappa helps the model find these stable clusters. The relationships between clusters presented in Fig. 3 (a) are also intuitively understandable. For example, demand for the machine industries (cluster 5) causes large production in the "iron and steel" sector (cluster 7). The "commerce & trades" and "enterprise services" sectors (cluster 10) connect strongly to almost all the other sectors.
There are some interesting cluster transitions. First, look at the "finance, insurance" sector. At t = 1, this sector belongs to cluster 14. However, the sector transits to cluster 1 afterwards, which does not connect strongly with clusters 5 and 7. This may indicate a shift of money away from these matured industries. Next, the "transport" sector enlarges its role in the market by moving to cluster 14, which causes the deletion of cluster 8. Finally, note the transitions of the "telecom, broadcast" sector. From 1985 to 2000, this sector is in cluster 9, which is rather independent from the other clusters. However, in 2005 the cluster separated, and the telecom industry merged into cluster 1, an influential cluster. This result is consistent with the rapid growth of ICT technologies and their large impact on the world.
Finally, we discuss the results on the Enron dataset. Because this e-mail dataset contains many individuals' names, we refrain from cataloging the object assignments as we did for the IOtables dataset. Figure 4 (a) tells us that clusters 1-7 are relatively separated communities. For example, members of cluster 4 belong to restricted-domain businesses such as energy, gas, or pipeline businesses. Cluster 5 is a community of financial and monetary departments, and cluster 7 is a community of managers such as vice presidents and CFOs.

One interesting result from the dIRM is the discovery of cluster 9. This cluster notably sends many messages to other clusters, especially to the management cluster 7. Only three objects belong to this cluster throughout the time steps, but these members are the key persons at each time.
[Figure 3 (a): heatmap of the learned \eta_{k,l} (k, l = 1, ..., 14) for the IOtables data; color-scale ticks omitted. Panel (b), the time evolution of selected cluster memberships, is reproduced below. Years: t = 1, ..., 5 correspond to 1985, 1990, 1995, 2000, 2005.]

Cluster 1: (unborn) at t = 1; {finance, insurance} at t = 2-4; {finance, insurance / telecom, broadcast} at t = 5.
Cluster 5: {machinery / electronic machinery / transport machinery / precision machinery} at every t.
Cluster 7: {iron and steel} at every t.
Cluster 8: {transport} at t = 1-3; (deleted) at t = 4-5.
Cluster 9: {telecom, broadcast / consumer services} at t = 1-4; {consumer services} at t = 5.
Cluster 10: {commerce, trades / enterprise services} at every t.
Cluster 13: {mining / petroleum / electric powers, gas / water, waste disposal} at every t.
Cluster 14: {finance, insurance} at t = 1; (deleted) at t = 2-3; {transport} at t = 4-5.

Figure 3: (a) Example of estimated \eta_{k,l} (strength of relationship between clusters k, l) for the IOtables data by dIRM. (b) Time-varying cluster assignments for selected clusters by dIRM.
[Figure 4 (a): heatmap of the learned \eta_{k,l} for the Enron data, with annotations marking the "inactive" object cluster and the cluster containing the CEO of Enron America, the founder, and the COO.]

Figure 4: (a) Example of estimated \eta_{k,l} for the Enron dataset using dIRM. (b) Number of items belonging to each cluster at each time step for the Enron dataset using dIRM.
First, the CEO of Enron America stayed in cluster 9 in May (t = 5). Next, the founder of Enron was a member of the cluster in August (t = 8); the CEO of Enron resigned that month, and the founder actually made an announcement to calm down the public. Finally, the COO belonged to the cluster in October (t = 10), the month in which newspapers reported the accounting violations.
Fig. 4 (b) presents the time evolution of the cluster memberships, i.e., the number of objects belonging to each cluster at each time step. In contrast to the IOtables dataset, this Enron e-mail dataset is very dynamic, as can be seen from Fig. 2(c), (d). For example, the volume of cluster 6 (the inactive cluster) decreases as time evolves. This reflects the fact that transactions between employees increase as the scandal is progressively revealed. On the contrary, cluster 4 is stable in membership; thus, we can imagine that the energy and gas group forms a dense and strong community. This is also true for cluster 5.
6 Conclusions
We proposed a new time-varying relational data model that is able to represent dynamic changes of cluster structures. The dynamic IRM (dIRM) incorporates a variant of the iHMM and represents time-sensitive dynamic properties such as splits & merges of clusters. We described the generative model of the dIRM and presented an inference algorithm based on a slice sampler. Experiments with synthetic and real-world time-series datasets showed that the proposed model improves the precision of time-varying relational data analysis. We will apply this model to other datasets to study the capability and reliability of the model. We are also interested in modifying the dIRM to deal with multi-valued observation data.
References
[1] A. Clauset, C. Moore, and M. E. J. Newman. Hierarchical structure and the prediction of missing links in networks. Nature, 453:98-101, 2008.
[2] E. Erosheva, S. Fienberg, and J. Lafferty. Mixed-membership models of scientific publications. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 101(Suppl 1):5220-5227, 2004.
[3] E. B. Fox, E. B. Sudderth, M. I. Jordan, and A. S. Willsky. An HDP-HMM for systems with state persistence. In Proceedings of the 25th International Conference on Machine Learning (ICML), 2008.
[4] Wenjie Fu, Le Song, and Eric P. Xing. Dynamic mixed membership blockmodel for evolving networks. In Proceedings of the 26th International Conference on Machine Learning (ICML), 2009.
[5] O. Hirose, R. Yoshida, S. Imoto, R. Yamaguchi, T. Higuchi, D. S. Charnock-Jones, C. Print, and S. Miyano. Statistical inference of transcriptional module-based gene networks from time course gene expression profiles by using state space models. Bioinformatics, 24(7):932-942, 2008.
[6] P. D. Hoff. Subset clustering of binary sequences, with an application to genomic abnormality data. Biometrics, 61(4):1027-1036, 2005.
[7] L. Hubert and P. Arabie. Comparing partitions. Journal of Classification, 2(1):193-218, 1985.
[8] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI), 2006.
[9] B. Klimt and Y. Yang. The Enron corpus: A new dataset for email classification research. In Proceedings of the European Conference on Machine Learning (ECML), 2004.
[10] D. Liben-Nowell and J. Kleinberg. The link prediction problem for social networks. In Proceedings of the Twelfth International Conference on Information and Knowledge Management, pages 556-559. ACM, 2003.
[11] K. Nowicki and T. A. B. Snijders. Estimation and prediction for stochastic blockstructures. Journal of the American Statistical Association, 96(455):1077-1087, 2001.
[12] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639-650, 1994.
[13] L. Tang, H. Liu, J. Zhang, and Z. Nazeri. Community evolution in dynamic multi-mode networks. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 677-685, 2008.
[14] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[15] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov model. In Proceedings of the 25th International Conference on Machine Learning (ICML), 2008.
[16] T. Yang, Y. Chi, S. Zhu, Y. Gong, and R. Jin. A Bayesian approach toward finding communities and their evolutions in dynamic social networks. In Proceedings of the SIAM International Conference on Data Mining (SDM), 2009.
[17] R. Yoshida, S. Imoto, and T. Higuchi. Estimating time-dependent gene networks from time series microarray data by dynamic linear models with Markov switching. In Proceedings of the International Conference on Computational Systems Bioinformatics, 2005.
[18] S. Zhu, K. Yu, and Y. Gong. Stochastic relational models for large-scale dyadic data using MCMC. In Advances in Neural Information Processing Systems 21 (NIPS), 2009.
Estimation of Rényi Entropy and Mutual Information
Based on Generalized Nearest-Neighbor Graphs

Barnabás Póczos
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA, USA
[email protected]

Dávid Pál
Department of Computing Science
University of Alberta
Edmonton, AB, Canada
[email protected]

Csaba Szepesvári
Department of Computing Science
University of Alberta
Edmonton, AB, Canada
[email protected]
Abstract
We present simple and computationally efficient nonparametric estimators of Rényi entropy and mutual information based on an i.i.d. sample drawn from an unknown, absolutely continuous distribution over R^d. The estimators are calculated as the sum of p-th powers of the Euclidean lengths of the edges of the "generalized nearest-neighbor" graph of the sample and the empirical copula of the sample, respectively. For the first time, we prove the almost sure consistency of these estimators and upper bounds on their rates of convergence, the latter of which under the assumption that the density underlying the sample is Lipschitz continuous. Experiments demonstrate their usefulness in independent subspace analysis.
1 Introduction
We consider the nonparametric problem of estimating Rényi α-entropy and mutual information (MI) based on a finite sample drawn from an unknown, absolutely continuous distribution over R^d. There are many applications that make use of such estimators, of which we list a few to give the reader a taste: Entropy estimators can be used for goodness-of-fit testing (Vasicek, 1976; Goria et al., 2005), parameter estimation in semi-parametric models (Wolsztynski et al., 2005), studying fractal random walks (Alemany and Zanette, 1994), and texture classification (Hero et al., 2002b,a). Mutual information estimators have been used in feature selection (Peng and Ding, 2005), clustering (Aghagolzadeh et al., 2007), causality detection (Hlaváčková-Schindler et al., 2007), optimal experimental design (Lewi et al., 2007; Póczos and Lőrincz, 2009), fMRI data processing (Chai et al., 2009), prediction of protein structures (Adami, 2004), and boosting and facial expression recognition (Shan et al., 2005). Both entropy estimators and mutual information estimators have been used for independent component and subspace analysis (Learned-Miller and Fisher, 2003; Póczos and Lőrincz, 2005; Hulle, 2008; Szabó et al., 2007), and image registration (Kybic, 2006; Hero et al., 2002b,a). For further applications, see Leonenko et al. (2008); Wang et al. (2009a).

In a naïve approach to Rényi entropy and mutual information estimation, one could use the so-called "plug-in" estimates. These are based on the obvious idea that since entropy and mutual information are determined solely by the density f (and its marginals), it suffices to first estimate the density using one's favorite density estimate, which is then "plugged in" to the formulas defining entropy
and mutual information. The density is, however, a nuisance parameter which we do not want to
estimate. Density estimators have tunable parameters and we may need cross validation to achieve
good performance.
The entropy estimation algorithm considered here is direct: it does not build on density estimators.
It is based on k-nearest-neighbor (NN) graphs with a fixed k. A variant of these estimators, where
each sample point is connected to its k-th nearest neighbor only, were recently studied by Goria
et al. (2005) for Shannon entropy estimation (i.e., the special case α = 1) and Leonenko et al. (2008) for Rényi α-entropy estimation. They proved the weak consistency of their estimators under
certain conditions. However, their proofs contain some errors, and it is not obvious how to fix them.
Namely, Leonenko et al. (2008) apply the generalized Helly-Bray theorem, while Goria et al. (2005)
apply the inverse Fatou lemma under conditions when these theorems do not hold. This latter error
originates from the article of Kozachenko and Leonenko (1987), and this mistake can also be found
in Wang et al. (2009b).
The first main contribution of this paper is to give a correct proof of consistency of these estimators.
Employing very different proof techniques from those of the papers mentioned above, we show that these estimators are, in fact, strongly consistent provided that the unknown density f has bounded support and α ∈ (0, 1). At the same time, we allow for more general nearest-neighbor graphs, wherein
as opposed to connecting each point only to its k-th nearest neighbor, we allow each point to be
connected to an arbitrary subset of its k nearest neighbors. Besides adding generality, our numerical experiments seem to suggest that connecting each sample point to all its k nearest neighbors
improves the rate of convergence of the estimator.
The second major contribution of our paper is that we prove a finite-sample high-probability bound
on the error (i.e. the rate of convergence) of our estimator provided that f is Lipschitz. According
to the best of our knowledge, this is the very first result that gives a rate for the estimation of Rényi
entropy. The closest to our result in this respect is the work by Tsybakov and van der Meulen
(1996) who proved the root-n consistency of an estimator of the Shannon entropy and only in one
dimension.
The third contribution is a strongly consistent estimator of Rényi mutual information that is based on NN graphs and the empirical copula transformation (Dedecker et al., 2007). This result is proved for d ≥ 3¹ and α ∈ (1/2, 1). This builds upon and extends the previous work of Póczos et al. (2010),
where instead of NN graphs, the minimum spanning tree (MST) and the shortest tour through the
sample (i.e. the traveling salesman problem, TSP) were used, but it was only conjectured that NN
graphs can be applied as well.
There are several advantages of using the k-NN graph over MST and TSP (besides the obvious conceptual simplicity of k-NN): On a serial computer the k-NN graph can be computed somewhat faster than MST and much faster than the TSP tour. Furthermore, in contrast to MST and TSP, computation of k-NN can be easily parallelized. Secondly, for different values of α, MST and TSP need to be recomputed since the distance between two points is the p-th power of their Euclidean distance, where p = d(1 - α). However, the k-NN graph does not change for different values of p, since the p-th power is a monotone transformation, and hence the estimates for multiple values of α can be
calculated without the extra penalty incurred by the recomputation of the graph. This can be advantageous e.g. in intrinsic dimension estimators of manifolds (Costa and Hero, 2003), where p is a free
parameter, and thus one can calculate the estimates efficiently for a few different parameter values.
The fourth major contribution is a proof of a finite-sample high-probability error bound (i.e. the rate
of convergence) for our mutual information estimator which holds under the assumption that the
copula of f is Lipschitz. According to the best of our knowledge, this is the first result that gives a
rate for the estimation of Rényi mutual information.
The toolkit for proving our results derives from the deep literature of Euclidean functionals; see (Steele, 1997; Yukich, 1998). In particular, our strong consistency result uses a theorem due to Redmond and Yukich (1996) that essentially states that any quasi-additive power-weighted Euclidean functional can be used as a strongly consistent estimator of Rényi entropy (see also Hero and Michel, 1999). We also make use of a result due to Koo and Lee (2007), who proved a rate of convergence result that holds under more stringent conditions. Thus, the main thrust of the present work is showing that these conditions hold for p-power weighted nearest-neighbor graphs. Curiously enough, up to now, no one has shown this, except for the case when p = 1, which is studied in Section 8.3 of (Yukich, 1998). However, the condition p = 1 gives results only for α = 1 - 1/d.

¹ Our result for Rényi entropy estimation holds for d = 1 and d = 2, too.
Unfortunately, the space limitations do not allow us to present any of our proofs, so we relegate them
into the extended version of this paper (Pál et al., 2010). We instead try to give a clear explanation of the Rényi entropy and mutual information estimation problems, the estimation algorithms, and the statements of our convergence results.
Additionally, we report on two numerical experiments. In the first experiment, we compare the
empirical rates of convergence of our estimators with our theoretical results and plug-in estimates.
Empirically, the NN methods are the clear winner. The second experiment is an illustrative application of mutual information estimation to an Independent Subspace Analysis (ISA) task.
The paper is organized as follows: In the next section, we formally define Rényi entropy and Rényi mutual information and the problem of their estimation. Section 3 explains the "generalized nearest neighbor" graphs. This graph is then used in Section 4 to define our Rényi entropy estimator. In the same section, we state a theorem containing our convergence results for this estimator (strong consistency and rates). In Section 5, we explain the copula transformation, which connects Rényi entropy with Rényi mutual information. The copula transformation together with the Rényi entropy estimator from Section 4 is used to build an estimator of Rényi mutual information. We conclude this section with a theorem stating the convergence properties of the estimator (strong consistency and rates). Section 6 contains the numerical experiments. We conclude the paper with a detailed discussion of further related work in Section 7, and a list of open problems and directions for future research in Section 8.
2 The Formal Definition of the Problem
Rényi entropy and Rényi mutual information of d real-valued random variables² X = (X^1, X^2, ..., X^d) with joint density f : R^d → R and marginal densities f_i : R → R, 1 ≤ i ≤ d, are defined for any real parameter α, assuming the underlying integrals exist. For α ≠ 1, Rényi entropy and Rényi mutual information are defined respectively as³

H_\alpha(X) = H_\alpha(f) = \frac{1}{1-\alpha} \log \int_{R^d} f^\alpha(x^1, x^2, \ldots, x^d) \, d(x^1, x^2, \ldots, x^d),   (1)

I_\alpha(X) = I_\alpha(f) = \frac{1}{\alpha-1} \log \int_{R^d} f^\alpha(x^1, x^2, \ldots, x^d) \Big( \prod_{i=1}^{d} f_i(x^i) \Big)^{1-\alpha} d(x^1, x^2, \ldots, x^d).   (2)

For α = 1 they are defined by the limits H_1 = \lim_{\alpha \to 1} H_\alpha and I_1 = \lim_{\alpha \to 1} I_\alpha. In fact, Shannon (differential) entropy and Shannon mutual information are just special cases of Rényi entropy and Rényi mutual information with α = 1.

The goal of this paper is to present estimators of Rényi entropy (1) and Rényi mutual information (2) and study their convergence properties. To be more explicit, we consider the problem where we are given i.i.d. random variables X_{1:n} = (X_1, X_2, ..., X_n), where each X_j = (X_j^1, X_j^2, ..., X_j^d) has density f : R^d → R and marginal densities f_i : R → R, and our task is to construct an estimate \hat{H}_\alpha(X_{1:n}) of H_\alpha(f) and an estimate \hat{I}_\alpha(X_{1:n}) of I_\alpha(f) using the sample X_{1:n}.
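As a quick sanity check of definition (1), H_α of a one-dimensional density can be approximated by numerical integration. The following sketch, with our own choice of density and grid, is only an illustration and is not one of the estimators studied in this paper:

```python
import numpy as np

def renyi_entropy_1d(pdf, alpha, lo, hi, n_grid=100_000):
    # H_alpha(f) = log(integral of f^alpha) / (1 - alpha); Eq. (1) with d = 1
    x = np.linspace(lo, hi, n_grid)
    integral = np.sum(pdf(x) ** alpha) * (x[1] - x[0])  # simple Riemann sum
    return np.log(integral) / (1.0 - alpha)

# e.g., a standard normal density with alpha = 0.7
gauss = lambda t: np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)
print(renyi_entropy_1d(gauss, 0.7, -10.0, 10.0))
```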
3 Generalized Nearest-Neighbor Graphs
The basic tool to define our estimators is the generalized nearest-neighbor graph and more specifically the sum of the p-th powers of Euclidean lengths of its edges.
Formally, let V be a finite set of points in a Euclidean space R^d and let S be a finite non-empty set of positive integers; we denote by k the maximum element of S. We define the generalized
² We use superscripts for indexing dimension coordinates.
³ The base of the logarithms in the definition is not important; any base strictly bigger than 1 is allowed. Similarly as with Shannon entropy and mutual information, one traditionally uses either base 2 or e. In this paper, for definiteness, we stick to base e.
nearest-neighbor graph NN_S(V) as a directed graph on V. The edge set of NN_S(V) contains, for each i ∈ S, an edge from each vertex x ∈ V to its i-th nearest neighbor. That is, if we sort V \ {x} = {y_1, y_2, ..., y_{|V|-1}} according to the Euclidean distance to x (breaking ties arbitrarily), \|x - y_1\| \le \|x - y_2\| \le \cdots \le \|x - y_{|V|-1}\|, then y_i is the i-th nearest neighbor of x and for each i ∈ S there is an edge from x to y_i in the graph.

For p ≥ 0, let us denote by L_p(V) the sum of the p-th powers of the Euclidean lengths of its edges. Formally,

L_p(V) = \sum_{(x,y) \in E(NN_S(V))} \|x - y\|^p,   (3)

where E(NN_S(V)) denotes the edge set of NN_S(V). We intentionally hide the dependence on S in the notation L_p(V). For the rest of the paper, the reader should think of S as a fixed but otherwise arbitrary finite non-empty set of integers, say, S = {1, 3, 4}.
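A direct computation of L_p(V) from definition (3) might look as follows; this is our own sketch, using a k-d tree to find the neighbors:

```python
import numpy as np
from scipy.spatial import cKDTree

def L_p(V, S, p):
    # sum of the p-th powers of the distances from each point to its i-th
    # nearest neighbor, for every i in S; Eq. (3)
    V = np.asarray(V)
    dist, _ = cKDTree(V).query(V, k=max(S) + 1)  # column 0 is the query point itself
    return sum((dist[:, i] ** p).sum() for i in S)
```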
The following is a basic result about L_p. The proof can be found in Pál et al. (2010).

Theorem 1 (Constant \gamma). Let X_{1:n} = (X_1, X_2, ..., X_n) be an i.i.d. sample from the uniform distribution over the d-dimensional unit cube [0, 1]^d. For any p ≥ 0 and any finite non-empty set S of positive integers there exists a constant \gamma > 0 such that

\lim_{n \to \infty} \frac{L_p(X_{1:n})}{n^{1-p/d}} = \gamma   a.s.   (4)

The value of \gamma depends on d, p, and S and, except for special cases, an analytical formula for its value is not known. This causes a minor problem since the constant \gamma appears in our estimators. A simple and effective way to deal with this problem is to generate a large i.i.d. sample X_{1:n} from the uniform distribution over [0, 1]^d and estimate \gamma by the empirical value of L_p(X_{1:n})/n^{1-p/d}.
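This Monte Carlo recipe can be sketched as follows, reusing the L_p helper above; the sample size and seed are our own choices:

```python
import numpy as np

def estimate_gamma(d, p, S, n=100_000, seed=0):
    # empirical value of L_p(X_{1:n}) / n^(1 - p/d) for a uniform sample on [0, 1]^d
    rng = np.random.default_rng(seed)
    return L_p(rng.random((n, d)), S, p) / n ** (1.0 - p / d)
```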
4 An Estimator of Rényi Entropy
We are now ready to present an estimator of Rényi entropy based on the generalized nearest-neighbor graph. Suppose we are given an i.i.d. sample X_{1:n} = (X_1, X_2, ..., X_n) from a distribution \mu over R^d with density f. We estimate the entropy H_\alpha(f) for \alpha \in (0, 1) by

\hat{H}_\alpha(X_{1:n}) = \frac{1}{1-\alpha} \log \frac{L_p(X_{1:n})}{\gamma \, n^{1-p/d}},   where p = d(1-\alpha),   (5)

and L_p(\cdot) is the sum of the p-th powers of the Euclidean lengths of the edges of the nearest-neighbor graph NN_S(\cdot) for some finite non-empty S \subset N_+, as defined by equation (3). The constant \gamma is the same as in Theorem 1.
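A minimal sketch of the estimator (5), again built on the L_p helper from Section 3; this is our own code and assumes \gamma has been estimated as described after Theorem 1:

```python
import numpy as np

def renyi_entropy_knn(X, alpha, S, gamma):
    # \hat{H}_alpha of Eq. (5)
    n, d = X.shape
    p = d * (1.0 - alpha)
    return np.log(L_p(X, S, p) / (gamma * n ** (1.0 - p / d))) / (1.0 - alpha)
```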
The following theorem is our main result about the estimator \hat{H}_\alpha. It states that \hat{H}_\alpha is strongly consistent and gives upper bounds on the rate of convergence. The proof of the theorem is in Pál et al. (2010).

Theorem 2 (Consistency and Rate for \hat{H}_\alpha). Let \alpha \in (0, 1). Let \mu be an absolutely continuous distribution over R^d with bounded support and let f be its density. If X_{1:n} = (X_1, X_2, ..., X_n) is an i.i.d. sample from \mu then

\lim_{n \to \infty} \hat{H}_\alpha(X_{1:n}) = H_\alpha(f)   a.s.   (6)

Moreover, if f is Lipschitz then for any \delta > 0, with probability at least 1 - \delta,

|\hat{H}_\alpha(X_{1:n}) - H_\alpha(f)| \le \begin{cases} O\big(n^{-(d-p)/(d(2d-p))} (\log(1/\delta))^{1/2 - p/(2d)}\big), & \text{if } 0 < p < d-1; \\ O\big(n^{-(d-p)/(d(d+1))} (\log(1/\delta))^{1/2 - p/(2d)}\big), & \text{if } d-1 \le p < d. \end{cases}   (7)

5 Copulas and Estimator of Mutual Information
Estimating mutual information is slightly more complicated than estimating entropy. We start with a basic property of mutual information which we call rescaling. It states that if h_1, h_2, ..., h_d : R → R are arbitrary strictly increasing functions, then

I_\alpha(h_1(X^1), h_2(X^2), \ldots, h_d(X^d)) = I_\alpha(X^1, X^2, \ldots, X^d).   (8)
A particularly clever choice is h_j = F_j for all 1 ≤ j ≤ d, where F_j is the cumulative distribution function (c.d.f.) of X^j. With this choice, the marginal distribution of h_j(X^j) is the uniform distribution over [0, 1], assuming that F_j, the c.d.f. of X^j, is continuous. Looking at the definitions of H_\alpha and I_\alpha we see that

I_\alpha(X^1, X^2, \ldots, X^d) = I_\alpha(F_1(X^1), F_2(X^2), \ldots, F_d(X^d)) = -H_\alpha(F_1(X^1), F_2(X^2), \ldots, F_d(X^d)).

In other words, calculation of mutual information can be reduced to the calculation of entropy provided that the marginal c.d.f.'s F_1, F_2, ..., F_d are known. The problem is, of course, that these are not known and need to be estimated from the sample. We will use the empirical c.d.f.'s (\hat{F}_1, \hat{F}_2, ..., \hat{F}_d) as their estimates. Given an i.i.d. sample X_{1:n} = (X_1, X_2, ..., X_n) from a distribution \mu with density f, the empirical c.d.f.'s are defined as

\hat{F}_j(x) = \frac{1}{n} |\{i : 1 \le i \le n, X_i^j \le x\}|   for x ∈ R, 1 ≤ j ≤ d.

Introduce the compact notation F : R^d → [0, 1]^d, \hat{F} : R^d → [0, 1]^d,

F(x^1, x^2, \ldots, x^d) = (F_1(x^1), F_2(x^2), \ldots, F_d(x^d))   for (x^1, x^2, \ldots, x^d) ∈ R^d;   (9)
\hat{F}(x^1, x^2, \ldots, x^d) = (\hat{F}_1(x^1), \hat{F}_2(x^2), \ldots, \hat{F}_d(x^d))   for (x^1, x^2, \ldots, x^d) ∈ R^d.   (10)
Let us call the maps F and \hat{F} the copula transformation and the empirical copula transformation, respectively. The joint distribution of F(X) = (F_1(X^1), F_2(X^2), ..., F_d(X^d)) is called the copula of \mu, and the sample (\hat{Z}_1, \hat{Z}_2, ..., \hat{Z}_n) = (\hat{F}(X_1), \hat{F}(X_2), ..., \hat{F}(X_n)) is called the empirical copula (Dedecker et al., 2007). Note that the j-th coordinate of \hat{Z}_i equals

\hat{Z}_i^j = \frac{1}{n} \mathrm{rank}(X_i^j, \{X_1^j, X_2^j, \ldots, X_n^j\}),

where rank(x, A) is the number of elements of A less than or equal to x. Also, observe that the random variables \hat{Z}_1, \hat{Z}_2, ..., \hat{Z}_n are not even independent! Nonetheless, the empirical copula (\hat{Z}_1, \hat{Z}_2, ..., \hat{Z}_n) is a good approximation of an i.i.d. sample (Z_1, Z_2, ..., Z_n) = (F(X_1), F(X_2), ..., F(X_n)) from the copula of \mu. Hence, we estimate the Rényi mutual information I_\alpha by

\hat{I}_\alpha(X_{1:n}) = -\hat{H}_\alpha(\hat{Z}_1, \hat{Z}_2, \ldots, \hat{Z}_n),   (11)

where \hat{H}_\alpha is defined by (5).
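Combining the empirical copula transformation with the entropy estimator (5) yields a short sketch of \hat{I}_\alpha as defined in (11); this is our own code, built on the renyi_entropy_knn helper above:

```python
import numpy as np
from scipy.stats import rankdata

def renyi_mi_knn(X, alpha, S, gamma):
    # \hat{I}_alpha of Eq. (11): empirical copula transform, then negated entropy estimate
    n = len(X)
    Z = np.column_stack([rankdata(col) / n for col in X.T])  # \hat{Z}_i^j = rank / n
    return -renyi_entropy_knn(Z, alpha, S, gamma)
```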
The following theorem is our main result about the estimator \hat{I}_\alpha. It states that \hat{I}_\alpha is strongly consistent and gives upper bounds on the rate of convergence. The proof of this theorem can be found in Pál et al. (2010).

Theorem 3 (Consistency and Rate for \hat{I}_\alpha). Let d \ge 3 and \alpha = 1 - p/d \in (1/2, 1). Let \mu be an absolutely continuous distribution over R^d with density f. If X_{1:n} = (X_1, X_2, ..., X_n) is an i.i.d. sample from \mu then

\lim_{n \to \infty} \hat{I}_\alpha(X_{1:n}) = I_\alpha(f)   a.s.

Moreover, if the density of the copula of \mu is Lipschitz, then for any \delta > 0, with probability at least 1 - \delta,

|\hat{I}_\alpha(X_{1:n}) - I_\alpha(f)| \le \begin{cases} O\big(\max\{n^{-(d-p)/(d(2d-p))}, n^{-p/2+p/d}\} (\log(1/\delta))^{1/2}\big), & \text{if } 0 < p \le 1; \\ O\big(\max\{n^{-(d-p)/(d(2d-p))}, n^{-1/2+p/d}\} (\log(1/\delta))^{1/2}\big), & \text{if } 1 \le p \le d-1; \\ O\big(\max\{n^{-(d-p)/(d(d+1))}, n^{-1/2+p/d}\} (\log(1/\delta))^{1/2}\big), & \text{if } d-1 \le p < d. \end{cases}
6 Experiments
In this section we show two numerical experiments to support our theoretical results about the convergence rates, and to demonstrate the applicability of the proposed Rényi mutual information estimator \hat{I}_\alpha.
6.1 The Rate of Convergence
In our first experiment (Fig. 1), we demonstrate that the derived rate is indeed an upper bound on the convergence rate. Figures 1a-1c show the estimation error of \hat{I}_\alpha as a function of the sample size. Here, the underlying distribution was a 3D uniform, a 3D Gaussian, and a 20D Gaussian with randomly chosen nontrivial covariance matrices, respectively. In these experiments α was set to 0.7. For the estimation we used the sets S = {3} (kth) and S = {1, 2, 3} (knn). Our results also indicate that these estimators achieve better performance than the histogram-based plug-in estimators (hist). The number and the sizes of the bins were determined with the rule of Scott (1979). The histogram-based estimator is not shown in the 20D case, as in this large dimension it is not applicable in practice. The figures are based on averaging 25 independent runs, and they also show the theoretical upper bound (Theoretical) on the rate derived in Theorem 3. It can be seen that the theoretical rates are rather conservative. We think that this is because the theory allows for quite irregular densities, while the densities considered in this experiment are very nice.
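For comparison, the histogram-based plug-in baseline (hist) can be sketched as follows. This is our own implementation; it applies numpy's 'scott' bin-width rule per dimension rather than reproducing the exact multivariate construction of Scott (1979):

```python
import numpy as np

def renyi_entropy_plugin(X, alpha):
    # histogram plug-in estimate of H_alpha with per-dimension Scott's-rule bins
    n, d = X.shape
    bin_edges = [np.histogram_bin_edges(col, bins="scott") for col in X.T]
    counts, edges = np.histogramdd(X, bins=bin_edges)
    cell_vol = np.prod([e[1] - e[0] for e in edges])  # equal-width bins per dimension
    f_hat = counts / (n * cell_vol)                   # piecewise-constant density estimate
    mass = (f_hat[f_hat > 0] ** alpha) * cell_vol
    return np.log(mass.sum()) / (1.0 - alpha)
```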
[Figure 1: log-log plots of the estimation error versus the sample size for the kth, knn, and (where applicable) hist estimators, together with the theoretical bound; axis ticks omitted. (a) 3D uniform, (b) 3D Gaussian, (c) 20D Gaussian.]

Figure 1: Error of the estimated Rényi informations in the number of samples.
6.2 Application to Independent Subspace Analysis
An important application of dependence estimators is the Independent Subspace Analysis (ISA) problem (Cardoso, 1998). This problem is a generalization of Independent Component Analysis (ICA), where we assume the independent sources are multidimensional vector-valued random variables. The formal description of the problem is as follows. We have S = (S_1; ...; S_m) ∈ R^{dm}, m independent d-dimensional sources, i.e. S_i ∈ R^d, and I(S_1, ..., S_m) = 0.⁴ In the ISA statistical model we assume that S is hidden, and only n i.i.d. samples from X = AS are available for observation, where A ∈ R^{q×dm} is an unknown invertible matrix with full rank and q ≥ dm. Based on the n i.i.d. observations of X, our task is to estimate the hidden sources S_i and the mixing matrix A. Let the estimate of S be denoted by Y = (Y_1; ...; Y_m) ∈ R^{dm}, where Y = WX. The goal of ISA is to calculate argmin_W I(Y_1, ..., Y_m), where W ∈ R^{dm×q} is a matrix with full rank. Following the ideas of Cardoso (1998), this ISA problem can be solved by first preprocessing the observed quantities X with a traditional ICA algorithm, which provides an estimated separation matrix W_{ICA},⁵ and then grouping the estimated ICA components into ISA subspaces by maximizing the sum of the MI within the estimated subspaces; that is, we have to find a permutation matrix P ∈ {0, 1}^{dm×dm} which solves

\max_P \sum_{j=1}^{m} I(Y_1^j, Y_2^j, \ldots, Y_d^j),   (12)

where Y = P W_{ICA} X. We used the proposed copula-based information estimator \hat{I}_\alpha with α = 0.99 to approximate the Shannon mutual information, and we chose S = {1, 2, 3}. Our experiment shows that this ISA algorithm using the proposed MI estimator can indeed provide good estimation of the ISA subspaces.

⁴ Here we need the generalization of MI to multidimensional quantities, but that is obvious by simply replacing the 1D marginals with d-dimensional ones.
⁵ For simplicity we used the FastICA algorithm in our experiments (Hyvärinen et al., 2001).
We used a standard ISA benchmark dataset from Szabó et al. (2007): we generated 2,000 i.i.d. sample points on 3D geometric wireframe distributions from 6 different sources, independently from each other. The sampled points can be seen in Fig. 2a, and they represent the sources S. We then mixed these sources by a randomly chosen invertible matrix A ∈ R^{18×18}. The six 3-dimensional projections of the observed quantities X = AS are shown in Fig. 2b. Our task was to estimate the original sources S using the sample of the observed quantity X only. By estimating the MI in (12), we could recover the original subspaces, as can be seen in Fig. 2c. The successful subspace separation is shown in the form of a Hinton diagram as well, which displays the product of the estimated ISA separation matrix W = P W_{ICA} and A. This product is a block permutation matrix if and only if the subspace separation is perfect (Fig. 2d).
[Figure 2 panels: (a) Original, (b) Mixed, (c) Estimated, (d) Hinton.]

Figure 2: ISA experiment for six 3-dimensional sources.
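The grouping step (12) can be sketched as a search over component permutations using the renyi_mi_knn helper above. The code below is our own brute-force illustration; exhaustive search is only feasible for small dm, and in practice a greedy or local-search heuristic would be used:

```python
import numpy as np
from itertools import permutations

def isa_objective(Y_ica, perm, m, d, alpha, S, gamma):
    # sum over subspaces of the within-subspace MI, Eq. (12); Y_ica: (n, m*d) ICA outputs
    cols = np.asarray(perm)
    return sum(renyi_mi_knn(Y_ica[:, cols[j * d:(j + 1) * d]], alpha, S, gamma)
               for j in range(m))

def isa_group(Y_ica, m, d, alpha=0.99, S=(1, 2, 3), gamma=1.0):
    # gamma should come from estimate_gamma(d, d * (1 - alpha), S)
    return max(permutations(range(m * d)),
               key=lambda perm: isa_objective(Y_ica, perm, m, d, alpha, S, gamma))
```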
7 Further Related Works
As was pointed out earlier, in this paper we build heavily on results from the theory of Euclidean functionals (Steele, 1997; Redmond and Yukich, 1996; Koo and Lee, 2007). However, now we can be more precise about earlier work concerning nearest-neighbor based Euclidean functionals: the closest to our work is Section 8.3 of Yukich (1998), where the case of NN_S graph based p-power weighted Euclidean functionals with S = {1, 2, ..., k} and p = 1 was investigated.

Nearest-neighbor graphs were first proposed for Shannon entropy estimation by Kozachenko and Leonenko (1987). In particular, in the mentioned work only the case of NN_S graphs with S = {1} was considered. More recently, Goria et al. (2005) generalized this approach to S = {k} and proved the resulting estimator's weak consistency under some conditions on the density. The estimator in that paper has a form quite similar to that of ours:

\hat{H}_1 = \log(n-1) - \psi(k) + \log \frac{2\pi^{d/2}}{d\,\Gamma(d/2)} + \frac{d}{n} \sum_{i=1}^{n} \log \|e_i\|.

Here \psi stands for the digamma function, and e_i is the directed edge pointing from X_i to its k-th nearest neighbor. Comparing this with (5), unsurprisingly, we find that the main difference is the use of the logarithm function instead of |\cdot|^p and the different normalization. As mentioned before, Leonenko et al. (2008) proposed an estimator that uses the NN_S graph with S = {k} for the purpose of estimating the Rényi entropy. Their estimator takes the form

\hat{H}_\alpha = \frac{1}{1-\alpha} \log \Big( \frac{1}{n} \sum_{i=1}^{n} \big[ (n-1)\, C_k\, V_d\, \|e_i\|^{d} \big]^{1-\alpha} \Big),

where \Gamma stands for the Gamma function, C_k = [\Gamma(k)/\Gamma(k+1-\alpha)]^{1/(1-\alpha)}, V_d = \pi^{d/2}/\Gamma(d/2+1) is the volume of the d-dimensional unit ball, and again e_i is the directed edge in the NN_S graph starting from node X_i and pointing to its k-th nearest node. Comparing this estimator with (5), it is apparent that it is (essentially) a special case of our NN_S based estimator. From the results of Leonenko et al. (2008) it is obvious that the constant \gamma in (5) can be found in analytical form when S = {k}. However, we kindly warn the reader again that the proofs of the last three cited articles (Kozachenko and Leonenko, 1987; Goria et al., 2005; Leonenko et al., 2008) contain a few errors, just like the Wang et al. (2009b) paper on KL divergence estimation from two samples. Kraskov et al. (2004) also proposed a k-nearest-neighbor based estimator for Shannon mutual information estimation, but the theoretical properties of their estimator are unknown.
8 Conclusions and Open Problems
We have studied Rényi entropy and mutual information estimators based on NN_S graphs. The estimators were shown to be strongly consistent. In addition, we derived upper bounds on their convergence rates under some technical conditions. Several open problems remain unanswered:

An important open problem is to understand how the choice of the set S ⊂ N_+ affects our estimators. Perhaps there exists a way to choose S as a function of the sample size n (and d, p) which strikes the optimal balance between the bias and the variance of our estimators.

Our method can be used for the estimation of Shannon entropy and mutual information by simply using α close to 1. The open problem is to come up with a way of choosing α, approaching 1, as a function of the sample size n (and d, p) such that the resulting estimator is consistent and converges as rapidly as possible. An alternative is to use the logarithm function in place of the power function. However, the theory would need to be changed significantly to show that the resulting estimator remains strongly consistent.

In the proof of consistency of our mutual information estimator \hat{I}_\alpha we used the Dvoretzky-Kiefer-Wolfowitz theorem to handle the effect of the inaccuracy of the empirical copula transformation (see Pál et al. (2010) for details). Our particular use of the theorem seems to restrict α to the interval (1/2, 1) and the dimension to values larger than 2. Is there a better way to estimate the error caused by the empirical copula transformation and to prove consistency of the estimator for a larger range of α's and for d = 1, 2?

Finally, it is an important open problem to prove bounds on convergence rates for densities that have higher-order smoothness (i.e., Hölder smooth densities). A related open problem, in the context of the theory of Euclidean functionals, is stated in Koo and Lee (2007).
Acknowledgements
This work was supported in part by AICML, AITF (formerly iCore and AIF), NSERC, the PASCAL2 Network of Excellence under EC grant no. 216886 and by the Department of Energy under
grant number DESC0002607. Cs. Szepesvári is on leave from SZTAKI, Hungary.
References
C. Adami. Information theory in molecular biology. Physics of Life Reviews, 1:3-22, 2004.
M. Aghagolzadeh, H. Soltanian-Zadeh, B. Araabi, and A. Aghagolzadeh. A hierarchical clustering based on mutual information maximization. In IEEE ICIP, pages 277-280, 2007.
P. A. Alemany and D. H. Zanette. Fractal random walks from a variational formalism for Tsallis entropies. Phys. Rev. E, 49(2):R956-R958, Feb 1994.
J. Cardoso. Multidimensional independent component analysis. Proc. ICASSP'98, Seattle, WA, 1998.
B. Chai, D. B. Walther, D. M. Beck, and L. Fei-Fei. Exploring functional connectivity of the human brain using multivariate information analysis. In NIPS, 2009.
J. A. Costa and A. O. Hero. Entropic graphs for manifold learning. In IEEE Asilomar Conf. on Signals, Systems, and Computers, 2003.
J. Dedecker, P. Doukhan, G. Lang, J. R. Leon, S. Louhichi, and C. Prieur. Weak Dependence: With Examples and Applications, volume 190 of Lecture Notes in Statistics. Springer, 2007.
M. N. Goria, N. N. Leonenko, V. V. Mergel, and P. L. Novi Inverardi. A new class of random vector entropy estimators and its applications in testing statistical hypotheses. Journal of Nonparametric Statistics, 17:277-297, 2005.
A. O. Hero and O. J. Michel. Asymptotic theory of greedy approximations to minimal k-point random graphs. IEEE Trans. on Information Theory, 45(6):1921-1938, 1999.
A. O. Hero, B. Ma, O. Michel, and J. Gorman. Alpha-divergence for classification, indexing and retrieval, 2002a. Communications and Signal Processing Laboratory Technical Report CSPL-328.
A. O. Hero, B. Ma, O. Michel, and J. Gorman. Applications of entropic spanning graphs. IEEE Signal Processing Magazine, 19(5):85-95, 2002b.
K. Hlaváčková-Schindler, M. Paluš, M. Vejmelka, and J. Bhattacharya. Causality detection based on information-theoretic approaches in time series analysis. Physics Reports, 441:1-46, 2007.
M. M. Van Hulle. Constrained subspace ICA based on mutual information optimization directly. Neural Computation, 20:964-973, 2008.
A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley, New York, 2001.
Y. Koo and S. Lee. Rates of convergence of means of Euclidean functionals. Journal of Theoretical Probability, 20(4):821-841, 2007.
L. F. Kozachenko and N. N. Leonenko. A statistical estimate for the entropy of a random vector. Problems of Information Transmission, 23:9-16, 1987.
A. Kraskov, H. Stögbauer, and P. Grassberger. Estimating mutual information. Phys. Rev. E, 69:066138, 2004.
J. Kybic. Incremental updating of nearest neighbor-based high-dimensional entropy estimation. In Proc. Acoustics, Speech and Signal Processing, 2006.
E. Learned-Miller and J. W. Fisher. ICA using spacings estimates of entropy. Journal of Machine Learning Research, 4:1271-1295, 2003.
N. Leonenko, L. Pronzato, and V. Savani. A class of Rényi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153-2182, 2008.
J. Lewi, R. Butera, and L. Paninski. Real-time adaptive information-theoretic optimization of neurophysiology experiments. In Advances in Neural Information Processing Systems, volume 19, 2007.
D. Pál, Cs. Szepesvári, and B. Póczos. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs, 2010. http://arxiv.org/abs/1003.1954.
H. Peng and C. Ding. Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans. on Pattern Analysis and Machine Intelligence, 27, 2005.
B. Póczos and A. Lőrincz. Independent subspace analysis using geodesic spanning trees. In ICML, pages 673-680, 2005.
B. Póczos and A. Lőrincz. Identification of recurrent neural networks by Bayesian interrogation techniques. Journal of Machine Learning Research, 10:515-554, 2009.
B. Póczos, S. Kirshner, and Cs. Szepesvári. REGO: Rank-based estimation of Rényi information using Euclidean graph optimization. In AISTATS 2010, 2010.
C. Redmond and J. E. Yukich. Asymptotics for Euclidean functionals with power-weighted edges. Stochastic Processes and their Applications, 61(2):289-304, 1996.
D. W. Scott. On optimal and data-based histograms. Biometrika, 66:605-610, 1979.
C. Shan, S. Gong, and P. W. McOwan. Conditional mutual information based boosting for facial expression recognition. In British Machine Vision Conference (BMVC), 2005.
J. M. Steele. Probability Theory and Combinatorial Optimization. Society for Industrial and Applied Mathematics, 1997.
Z. Szabó, B. Póczos, and A. Lőrincz. Undercomplete blind subspace deconvolution. Journal of Machine Learning Research, 8:1063-1095, 2007.
A. B. Tsybakov and E. C. van der Meulen. Root-n consistent estimators of entropy for densities with unbounded support. Scandinavian Journal of Statistics, 23:75-83, 1996.
O. Vasicek. A test for normality based on sample entropy. Journal of the Royal Statistical Society, Series B, 38:54-59, 1976.
Q. Wang, S. R. Kulkarni, and S. Verdú. Universal estimation of information measures for analog sources. Foundations and Trends in Communications and Information Theory, 5(3):265-352, 2009a.
Q. Wang, S. R. Kulkarni, and S. Verdú. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. IEEE Transactions on Information Theory, 55(5):2392-2405, 2009b.
E. Wolsztynski, E. Thierry, and L. Pronzato. Minimum-entropy estimation in semi-parametric models. Signal Process., 85(5):937-949, 2005.
J. E. Yukich. Probability Theory of Classical Euclidean Optimization Problems. Springer, 1998.
Near-Optimal Bayesian Active Learning
with Noisy Observations
Daniel Golovin
Caltech
Andreas Krause
Caltech
Debajyoti Ray
Caltech
Abstract
We tackle the fundamental problem of Bayesian active learning with noise, where
we need to adaptively select from a number of expensive tests in order to identify
an unknown hypothesis sampled from a known prior distribution. In the case of
noise-free observations, a greedy algorithm called generalized binary search (GBS)
is known to perform near-optimally. We show that if the observations are noisy,
perhaps surprisingly, GBS can perform very poorly. We develop EC2 , a novel,
greedy active learning algorithm and prove that it is competitive with the optimal
policy, thus obtaining the first competitiveness guarantees for Bayesian active learning with noisy observations. Our bounds rely on a recently discovered diminishing
returns property called adaptive submodularity, generalizing the classical notion
of submodular set functions to adaptive policies. Our results hold even if the tests
have non-uniform cost and their noise is correlated. We also propose EffECXtive, a particularly fast approximation of EC2, and evaluate it on a Bayesian
experimental design problem involving human subjects, intended to tease apart
competing economic theories of how people make decisions under uncertainty.
1 Introduction
How should we perform experiments to determine the most accurate scientific theory among competing candidates, or choose among expensive medical procedures to accurately determine a patient's
condition, or select which labels to obtain in order to determine the hypothesis that minimizes generalization error? In all these applications, we have to sequentially select among a set of noisy, expensive
observations (outcomes of experiments, medical tests, expert labels) in order to determine which hypothesis (theory, diagnosis, classifier) is most accurate. This fundamental problem has been studied in
a number of areas, including statistics [17], decision theory [13], machine learning [19, 7] and others.
One way to formalize such active learning problems is Bayesian experimental design [6], where one
assumes a prior on the hypotheses, as well as probabilistic assumptions on the outcomes of tests. The
goal then is to determine the correct hypothesis while minimizing the cost of the experimentation. Unfortunately, finding this optimal policy is not just NP-hard, but also hard to approximate [5]. Several
heuristic approaches have been proposed that perform well in some applications, but do not carry theoretical guarantees (e.g., [18]). In the case where observations are noise-free¹, a simple algorithm, generalized binary search² (GBS), run on a modified prior, is guaranteed to be competitive with the optimal
policy; the expected number of queries is a factor of O(log n) (where n is the number of hypotheses)
more than that of the optimal policy [15], which matches lower bounds up to constant factors [5].
The important case of noisy observations, however, as present in most applications, is much less
well understood. While there are some recent positive results in understanding the label complexity
of noisy active learning [19, 1], to our knowledge, so far there are no algorithms that are provably
competitive with the optimal sequential policy, except in very restricted settings [16]. In this paper, we
¹ This case is known as the Optimal Decision Tree (ODT) problem.
² GBS greedily selects tests to maximize, in expectation over the test outcomes, the prior probability mass of eliminated hypotheses (i.e., those with zero posterior probability, computed w.r.t. the observed test outcomes).
introduce a general formulation of Bayesian active learning with noisy observations that we call the
Equivalence Class Determination problem. We show that, perhaps surprisingly, generalized binary
search performs poorly in this setting, as do greedily (myopically) maximizing the information gain
(measured w.r.t. the distribution on equivalence classes) or the decision-theoretic value of information.
This motivates us to introduce a novel active learning criterion, and use it to develop a greedy active
learning algorithm called the Equivalence Class Edge Cutting algorithm (EC2 ), whose expected cost
is competitive to that of the optimal policy. Our key insight is that our new objective function satisfies
adaptive submodularity [9], a natural diminishing returns property that generalizes the classical notion
of submodularity to adaptive policies. Our results also allow us to relax the common assumption
that the outcomes of the tests are conditionally independent given the true hypothesis. We also
develop the Efficient Edge Cutting approXimate objective algorithm (EffECXtive), an efficient
approximation to EC2 , and evaluate it on a Bayesian experimental design problem intended to tease
apart competing theories on how people make decisions under uncertainty, including Expected Value
[22], Prospect Theory [14], Mean-Variance-Skewness [12] and Constant Relative Risk Aversion [20].
In our experiments, EffECXtive typically outperforms existing experimental design criteria such as
information gain, uncertainty sampling, GBS, and decision-theoretic value of information. Our results
from human subject experiments further reveal that EffECXtive can be used as a real-time tool
to classify people according to the economic theory that best describes their behaviour in financial
decision-making, and reveal some interesting heterogeneity in the population.
2 Bayesian Active Learning in the Noiseless Case
In the Bayesian active learning problem, we would like to distinguish among a given set of hypotheses
H = {h1 , . . . , hn } by performing tests from a set T = {1, . . . , N } of possible tests. Running test
t incurs a cost of c(t) and produces an outcome from a finite set of outcomes X = {1, 2, ..., ℓ}.
We let H denote the random variable which equals the true hypothesis, and model the outcome of
each test t by a random variable Xt taking values in X . We denote the observed outcome of test
t by xt . We further suppose we have a prior distribution P modeling our assumptions on the joint
probability P (H, X1 , . . . , XN ) over the hypotheses and test outcomes. In the noiseless case, we
assume that the outcome of each test is deterministic given the true hypothesis, i.e., for each h ? H,
P (X1 , . . . , XN | H = h) is a deterministic distribution. Thus, each hypothesis h is associated
with a particular vector of test outcomes. We assume, w.l.o.g., that no two hypotheses lead to the
same outcomes for all tests. Thus, if we perform all tests, we can uniquely determine the true
hypothesis. However in most applications we will wish to avoid performing every possible test, as
this is prohibitively expensive. Our goal is to find an adaptive policy for running tests that allows us
to determine the value of H while minimizing the cost of the tests performed. Formally, a policy π
(also called a conditional plan) is a partial mapping π from partial observation vectors xA to tests,
specifying which test to run next (or whether we should stop testing) for any observation vector xA.
Hereby, xA ∈ X^A is a vector of outcomes indexed by a set of tests A ⊆ T that we have performed
so far³ (e.g., the set of labeled examples in active learning, or outcomes of a set of medical tests that
we ran). After having made observations xA, we can rule out inconsistent hypotheses. We denote
the set of hypotheses consistent with an event E (often called the version space associated with E) by
V(E) := {h ∈ H : P(h | E) > 0}. We call a policy feasible if it is guaranteed to uniquely determine
the correct hypothesis. That is, upon termination with observation xA , it must hold that |V(xA )| = 1.
We can define the expected cost of a policy π by

    c(π) := Σ_h P(h) c(T(π, h)),

where T(π, h) ⊆ T is the set of tests run by policy π in case H = h. Our goal is to find a feasible
policy π* of minimum expected cost, i.e.,

    π* = argmin_π { c(π) : π is feasible }.   (2.1)

A policy π can be naturally represented as a decision tree T, and thus problem (2.1) is often called
the Optimal Decision Tree (ODT) problem.
Unfortunately, obtaining an approximate policy π for which c(π) ≤ c(π*) · o(log(n)) is NP-hard [5].
Hence, various heuristics are employed to solve the Optimal Decision Tree problem and its variants.
Two of the most popular heuristics are to select tests greedily to maximize the information gain (IG)
³ Formally we also require that (xt)_{t∈B} ∈ dom(π) and A ⊆ B implies (xt)_{t∈A} ∈ dom(π) (c.f., [9]).
conditioned on previous test outcomes, and generalized binary search (GBS). Both heuristics are
greedy, and after having made observations xA will select

    t* = argmax_{t∈T} ΔAlg(t | xA) / c(t),

where Alg ∈ {IG, GBS}. Here, ΔIG(t | xA) := H(X_T | xA) − E_{xt∼Xt|xA}[H(X_T | xA, xt)] is the marginal information gain measured with respect to the Shannon entropy H(X) := E_x[−log₂ P(x)], and ΔGBS(t | xA) := P(V(xA)) − Σ_{x∈X} P(Xt = x | xA) P(V(xA, Xt = x)) is the expected reduction in version space probability mass. Thus, both heuristics greedily choose the test that maximizes the benefit-cost ratio, measured with respect to their particular benefit functions. They stop after running a set of tests A such that |V(xA)| = 1, i.e., once the true hypothesis has been
uniquely determined.
It turns out that for the (noiseless) Optimal Decision Tree problem, these two heuristics are equivalent
[23], as can be proved using the chain rule of entropy. Interestingly, despite its myopic nature
GBS has been shown [15, 7, 11, 9] to obtain near-optimal expected cost: the strongest known bound
is c(πGBS) ≤ c(π*)(ln(1/pmin) + 1) where pmin := min_{h∈H} P(h). Let xS(h) be the unique vector xS ∈ X^S such that P(xS | h) = 1. The result above is proved by exploiting the fact that fGBS(S, h) := 1 − P(V(xS(h))) + P(h) is adaptive submodular and strongly adaptively monotone [9]. Call xA a subvector of xB if A ⊆ B and P(xB | xA) > 0. In this case we write xA ≺ xB. A function f : 2^T × H is called adaptive submodular w.r.t. a distribution P, if for any xA ≺ xB and any test t it holds that Δ(t | xA) ≥ Δ(t | xB), where

    Δ(t | xA) := E_H[f(A ∪ {t}, H) − f(A, H) | xA].

Thus, f is adaptive submodular if the expected marginal benefits Δ(t | xA) of adding a new test t can only decrease as we gather more observations. f is called strongly adaptively monotone w.r.t. P if, informally, "observations never hurt" with respect to the expected reward. Formally, for all A, all t ∉ A, and all x ∈ X we require E_H[f(A, H) | xA] ≤ E_H[f(A ∪ {t}, H) | xA, Xt = x].
The performance guarantee for GBS follows from the following general result about the greedy
algorithm for adaptive submodular functions (applied with Q = 1 and η = pmin):

Theorem 1 (Theorem 10 of [9] with α = 1). Suppose f : 2^T × H → R≥0 is adaptive submodular and strongly adaptively monotone with respect to P and there exists Q such that f(T, h) = Q for all h. Let η be any value such that f(S, h) > Q − η implies f(S, h) = Q for all sets S and hypotheses h. Then for self-certifying instances the adaptive greedy policy π satisfies

    c(π) ≤ c(π*) (ln(Q/η) + 1).

The technical requirement that instances be self-certifying means that the policy will have proof
that it has obtained the maximum possible objective value, Q, immediately upon doing so. It is
not difficult to show that this is the case with the instances we consider in this paper. We refer the
interested reader to [9] for more detail.
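To make the greedy rule of Theorem 1 concrete, here is a minimal Python sketch of the adaptive greedy policy for the noise-free setting, in which each hypothesis determines all test outcomes. The helper names (condition, marginal_benefit, outcomes_of, done) are our own illustrative choices, not the paper's; plugging in fGBS or fEC as f would recover GBS or EC2, respectively. Here prior maps hypotheses to probabilities, outcomes_of(h) maps each test to h's outcome, and done checks the stopping condition (e.g., |V(xA)| = 1).

def condition(prior, A, x_A, outcomes_of):
    # Posterior P(h | x_A) under noise-free observations: keep the
    # hypotheses consistent with the observed outcomes and renormalize.
    consistent = {h: p for h, p in prior.items()
                  if all(outcomes_of(h)[t] == x_A[t] for t in A)}
    z = sum(consistent.values())
    return {h: p / z for h, p in consistent.items()}

def marginal_benefit(f, A, x_A, t, prior, outcomes_of):
    # Delta(t | x_A) = E_H[f(A + {t}, H) - f(A, H) | x_A].
    post = condition(prior, A, x_A, outcomes_of)
    return sum(p * (f(A | {t}, h) - f(A, h)) for h, p in post.items())

def adaptive_greedy(tests, cost, f, prior, outcomes_of, true_h, done):
    # Repeatedly run the test maximizing expected benefit per unit cost.
    A, x_A = set(), {}
    while not done(condition(prior, A, x_A, outcomes_of)):
        t = max(tests - A, key=lambda t: marginal_benefit(
            f, A, x_A, t, prior, outcomes_of) / cost[t])
        A.add(t)
        x_A[t] = outcomes_of(true_h)[t]  # observe the (noise-free) outcome
    return A, x_A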
In the following sections, we will use the concept of adaptive submodularity to provide the first
approximation guarantees for Bayesian active learning with noisy observations.
3 The Equivalence Class Determination Problem and the EC2 Algorithm
We now wish to consider the Bayesian active learning problem where tests can have noisy outcomes.
Our general strategy is to reduce the problem of noisy observations to the noiseless setting. To gain
intuition, consider a simple model where tests have binary outcomes, and we know that the outcome
of exactly one test, chosen uniformly at random unbeknown to us, is flipped. If any pair of hypotheses
h ≠ h′ differs by the outcome of at least three tests, we can still uniquely determine the correct
hypothesis after running all tests. In this case we can reduce the noisy active learning problem to
the noiseless setting by, for each hypothesis, creating N "noisy" copies, each obtained by flipping
the outcome of one of the N tests. The modified prior P′ would then assign mass P′(h′) = P(h)/N
to each noisy copy h′ of h. The conditional distribution P′(X_T | h′) is still deterministic (obtained
by flipping the outcome of one of the tests). Thus, each hypothesis hi in the original problem is
now associated with a set Hi of hypotheses in the modified problem instance. However, instead of
selecting tests to determine which noisy copy has been realized, we only care which set Hi is realized.
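As an illustration of this reduction, the sketch below builds the modified instance for the one-flipped-test noise model just described. The data structures (a dict prior over hypotheses and binary outcome vectors outcomes[h]) are hypothetical conventions, not from the paper.

def build_ecd_instance(prior, outcomes, num_tests):
    # For each hypothesis h, create one noisy copy per test t, obtained by
    # flipping outcome t, each with prior mass P(h)/N; the copies of the
    # same hypothesis form one equivalence class.
    noisy_prior, classes = {}, {}
    for h, p in prior.items():
        classes[h] = []
        for t in range(num_tests):
            flipped = list(outcomes[h])
            flipped[t] = 1 - flipped[t]
            copy_id = (h, t)
            noisy_prior[copy_id] = p / num_tests
            classes[h].append((copy_id, tuple(flipped)))
    return noisy_prior, classes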
The Equivalence Class Determination problem (ECD). More generally, we introduce the
Equivalence Class Determination problem⁴, where our set of hypotheses H is partitioned into a set of m equivalence classes {H1, ..., Hm} so that H = ∪_{i=1}^m Hi, and the goal is to determine which class Hi the true hypothesis lies in. Formally, upon termination with observations xA we require that V(xA) ⊆ Hi for some i. As with the ODT problem, the goal is to minimize the expected cost
of the tests, where the expectation is taken over the true hypothesis sampled from P. In §4, we will
show how the Equivalence Class Determination problem arises naturally from Bayesian experimental
design problems in probabilistic models.
Given the fact that GBS performs near-optimally on the Optimal Decision Tree problem, a natural approach to solving ECD would be to run GBS until the termination condition is met. Unfortunately, and
perhaps surprisingly, GBS can perform very poorly on the ECD problem. Consider an instance with
a uniform prior over n hypotheses, h1, ..., hn, and two equivalence classes H1 := {hi : 1 ≤ i < n} and H2 := {hn}. There are tests T = {1, ..., n} such that hi(t) = 1[i = t], all of unit cost. Hereby, 1[·] is the indicator variable of the event in brackets. In this case, the optimal policy only needs to select test
n, however GBS may select tests 1, 2, . . . , n in order until running test t, where H = ht is the true
hypothesis. Given our uniform prior, it takes n/2 tests in expectation until this happens, so that GBS
pays, in expectation, n/2 times the optimal expected cost in this instance.
The poor performance of GBS in this instance may be attributed to its lack of consideration for the
equivalence classes. Another natural heuristic would be to run the greedy information gain policy,
only with the entropy measured with respect to the probability distribution on equivalence classes
rather than hypotheses. Call this policy πIG. It is clearly aware of the equivalence classes, as it adaptively and myopically selects tests to reduce the uncertainty of the realized class, measured w.r.t. the Shannon entropy. However, we can show there are instances in which it pays Ω(n/log(n)) times
the optimal cost, even under a uniform prior. See the long version of this paper [10] for details.
The EC2 algorithm. The reason why GBS fails is that reducing the version space mass does not necessarily facilitate differentiation among the classes Hi. The reason why πIG fails is that there
are complementarities among tests; a set of tests can be far better than the sum of its parts. Thus, we
would like to optimize an objective function that encourages differentiation among classes, but lacks
complementarities. We adopt a very elegant idea from Dasgupta [8], and define weighted edges between hypotheses that we aim to distinguish between. However, instead of introducing edges between
arbitrary pairs of hypotheses (as done in [8]), we only introduce edges between hypotheses in different
classes. Tests will allow us to cut edges inconsistent with their outcomes, and we aim to eliminate
all inconsistent edges while minimizing the expected cost incurred. We now formalize this intuition.
Specifically, we define a set of edges E = ∪_{1≤i<j≤m} {{h, h′} : h ∈ Hi, h′ ∈ Hj}, consisting of all (unordered) pairs of hypotheses belonging to distinct classes. These are the edges that must be cut, by which we mean that for any edge {h, h′} ∈ E, at least one hypothesis in {h, h′} must be ruled out (i.e., eliminated from the version space). Hence, a test t run under true hypothesis h is said to cut edges Et(h) := {{h′, h′′} : h′(t) ≠ h(t) or h′′(t) ≠ h(t)}. See Fig. 1(a) for an illustration. We define a
weight function w : E → R≥0 by w({h, h′}) := P(h) · P(h′). We extend the weight function to an additive (modular) function on sets of edges in the natural manner, i.e., w(E′) := Σ_{e∈E′} w(e). The objective fEC that we will greedily maximize is then defined as the weight of the edges cut (EC):

    fEC(A, h) := w( ∪_{t∈A} Et(h) ).   (3.1)
The key insight that allows us to prove approximation guarantees for fEC is that fEC shares the same
beneficial properties that make fGBS amenable to efficient greedy optimization. The proof of this
fact, as stated in Proposition 2, can be found in the long version of this paper [10].
Proposition 2. The objective fEC is strongly adaptively monotone and adaptively submodular.
Based on the objective fEC, we can calculate the marginal benefits for test t upon observations xA as

    ΔEC(t | xA) := E_H[fEC(A ∪ {t}, H) − fEC(A, H) | xA].

We call the adaptive policy πEC that, after observing xA, greedily selects test t* ∈ argmax_t ΔEC(t | xA)/c(t), the EC2 algorithm (for equivalence class edge cutting).
⁴ Bellala et al. simultaneously studied ECD [2], and, like us, used it to model active learning with noise [3]. They developed an extension of GBS for ECD. We defer a detailed comparison of our approaches to future work.
[Figure 1, two panels: (a) The Equivalence Class Determination problem; (b) Error model.]
Figure 1: (a) An instance of Equivalence Class Determination with binary test outcomes, shown with the set of
edges that must be cut, and depicting the effects of test i under different outcomes. (b) The graphical model
underlying our error model.
Note that these instances are self-certifying, because we obtain maximum objective value if and only if the version space lies within an equivalence class, and the policy can certify this condition when it holds. So we can apply Theorem 1 to show EC2 obtains a ln(Q/η) + 1 approximation to Equivalence Class Determination. Hereby, Q = w(E) = 1 − Σ_i (P(h ∈ Hi))² ≤ 1 is the total weight of all edges that need to be cut, and η = min_{e∈E} w(e) ≥ pmin² is a bound on the minimum weight among all edges. We have the following result:
Theorem 3. Suppose P(h) is rational for all h ∈ H. For the adaptive greedy policy πEC implemented by EC2 it holds that

    c(πEC) ≤ (2 ln(1/pmin) + 1) c(π*),

where pmin := min_{h∈H} P(h) is the minimum prior probability of any hypothesis, and π* is the
optimal policy for the Equivalence Class Determination problem.
In the case of unit cost tests, we can apply a technique of Kosaraju et al. [15], originally developed
for the GBS algorithm, to improve the approximation guarantee to O(log n) by applying EC2 with a
modified prior distribution. We defer details to the full version of this paper.
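For concreteness, here is a sketch (in Python, and not the authors' code) of the EC2 benefit ΔEC for binary tests. mass holds the unnormalized prior masses of the hypotheses still in the version space, labels[h] gives h's equivalence class, and outcomes[h] its deterministic outcome vector. Because edge weights are products of prior masses, the remaining weight between classes can be computed from per-class mass totals instead of enumerating edges.

def remaining_edge_weight(mass, labels):
    # Weight of surviving edges: sum over class pairs i < j of m_i * m_j,
    # where m_i is the total mass of surviving hypotheses in class i.
    total = sum(mass.values())
    per_class = {}
    for h, m in mass.items():
        per_class[labels[h]] = per_class.get(labels[h], 0.0) + m
    return (total ** 2 - sum(v ** 2 for v in per_class.values())) / 2.0

def delta_ec(mass, labels, outcomes, t):
    # Delta_EC(t | x_A): expected weight of edges cut by running test t.
    before = remaining_edge_weight(mass, labels)
    total = sum(mass.values())
    gain = 0.0
    for x in (0, 1):
        sub = {h: m for h, m in mass.items() if outcomes[h][t] == x}
        if sub:
            p_x = sum(sub.values()) / total  # P(X_t = x | x_A)
            gain += p_x * (before - remaining_edge_weight(sub, labels))
    return gain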
4 Bayesian Active Learning with Noise and the EffECXtive Algorithm
We now address the case of noisy observations, using ideas from §3. With noisy observations, the
conditional distribution P (X1 , . . . , XN | h) is no longer deterministic. We model the noise using an
additional random variable Θ. Fig. 1(b) depicts the underlying graphical model. The vector of test outcomes xT is assumed to be an arbitrary, deterministic function xT : H × supp(Θ) → X^N; hence X_T | h is distributed as xT(h, Θh) where Θh is distributed as P(Θ | h). For example, there might be up to s = |supp(Θ)| ways any particular disease could manifest itself, with different patients with
the same disease suffering from different symptoms.
In cases where it is always possible to identify the true hypothesis, i.e., xT(h, θ) ≠ xT(h′, θ′) for all h ≠ h′ and all θ, θ′ ∈ supp(Θ), we can reduce the problem to Equivalence Class Determination with hypotheses {xT(h, θ) : h ∈ H, θ ∈ supp(Θ)} and equivalence classes Hi := {xT(hi, θ) : θ ∈ supp(Θ)} for all i. Then Theorem 3 immediately yields that the approximation factor of EC2 is at most 2 ln(1/min_{h,θ} P(h, θ)) + 1, where the minimum is taken over all (h, θ) in the support of P. In the unit cost case, running EC2 with a modified prior à la Kosaraju et al. [15] allows us to obtain an O(log |H| + log |supp(Θ)|) approximation factor. Note this model allows us
to incorporate noise with complex correlations.
However, a major challenge when dealing with noisy observations is that it is not always possible
to distinguish distinct hypotheses. Even after we have run all tests, there will generally still be
uncertainty about the true hypothesis, i.e., the posterior distribution P (H | xT ) obtained using
Bayes' rule may still assign non-zero probability to more than one hypothesis. If so, uniquely
determining the true hypothesis is not possible. Instead, we imagine that there is a set D of possible
decisions we may make after (adaptively) selecting a set of tests to perform and we must choose
one (e.g., we must decide how to treat the medical patient, which scientific theory to adopt, or which
classifier to use, given our observations). Thus our goal is to gather data to make effective decisions
[13]. Formally, for any decision d ∈ D we take, and each realized hypothesis h, we incur some loss ℓ(d, h). Decision theory recommends, after observing xA, to choose the decision d* that minimizes the risk, i.e., the expected loss, namely d* ∈ argmin_d E_H[ℓ(d, H) | xA].
A natural goal in Bayesian active learning is thus to adaptively pick observations, until we are
guaranteed to make the same decision (and thus incur the same expected loss) that we would have
made had we run all tests. Thus, we can reduce the noisy Bayesian active learning problem to
the ECD problem by defining the equivalence classes over all test outcomes that lead to the same
minimum risk decision. Hence, for each decision d ∈ D, we define

    Hd := {xT : d = argmin_{d′} E_H[ℓ(d′, H) | xT]}.   (4.1)

If multiple decisions minimize the risk for a particular xT, we break ties arbitrarily. Identifying the best decision d ∈ D then amounts to identifying which equivalence class Hd contains the realized
vector of outcomes, which is an instance of ECD.
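A small sketch of definition (4.1) in Python: group full outcome vectors xT into classes Hd by the risk-minimizing decision they induce. The inputs (posterior_given, loss, decisions) are hypothetical stand-ins for the model's inference routine and loss table.

def decision_classes(outcome_vectors, decisions, loss, posterior_given):
    # H_d collects all outcome vectors whose risk-minimizing decision is d;
    # ties are broken by min()'s ordering, i.e., arbitrarily but consistently.
    classes = {}
    for x_T in outcome_vectors:
        post = posterior_given(x_T)  # P(h | x_T)
        best = min(decisions,
                   key=lambda d: sum(p * loss(d, h) for h, p in post.items()))
        classes.setdefault(best, []).append(x_T)
    return classes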
One common approach to this problem is to myopically pick tests maximizing the
decision-theoretic value of information (VoI): ΔVoI(t | xA) := min_d E_H[ℓ(d, H) | xA] − E_{xt∼Xt|xA}[min_d E_H[ℓ(d, H) | xA, xt]]. The VoI of a test t is the expected reduction in the expected loss of the best decision due to the observation of xt. However, we can show there are instances in which such a policy pays Ω(n/log(n)) times the optimal cost, even under a uniform prior on (h, θ) and with |supp(Θ)| = 2. See the long version of this paper [10] for details. In contrast, on such
instances EC2 obtains an O(log n) approximation. More generally, we have the following result
for EC2 as an immediate consequence of Theorem 3.
Theorem 4. Fix hypotheses H, tests T with costs c(t) and outcomes in X, decision set D, and loss function ℓ. Fix a prior P(H, Θ) and a function xT : H × supp(Θ) → X^N which define the probabilistic noise model. Let c(π) denote the expected cost π incurs to find the best decision, i.e., to identify which equivalence class Hd the outcome vector xT belongs to. Let π* denote the policy minimizing c(π), and let πEC denote the adaptive policy implemented by EC2. Then it holds that

    c(πEC) ≤ (2 ln(1/p′min) + 1) c(π*),

where p′min := min {P(h, θ) : h ∈ H, θ ∈ supp(Θ), P(h, θ) > 0}.
If all tests have unit cost, by using a modified prior [15] the approximation factor can be improved to
O(log |H| + log |supp(Θ)|) as in the case of Theorem 3.
The EffECXtive algorithm. For some noise models, Θ may have exponentially-large support. In
this case reducing Bayesian active learning with noise to Equivalence Class Determination results in
instances with exponentially-large equivalence classes. This makes running EC2 on them challenging,
since explicitly keeping track of the equivalence classes is impractical. To overcome this challenge,
we develop EffECXtive, a particularly efficient algorithm which approximates EC2.
For clarity, we only consider the 0-1 loss, i.e., our goal is to find the most likely hypothesis (MAP estimate) given all the data xT, namely h*(xT) := argmax_h P(h | xT). Recall definition (4.1), and
consider the weight of edges between distinct equivalence classes Hi and Hj:

    w(Hi × Hj) = Σ_{xT∈Hi, x′T∈Hj} P(xT) P(x′T) = ( Σ_{xT∈Hi} P(xT) ) ( Σ_{x′T∈Hj} P(x′T) ) = P(X_T ∈ Hi) P(X_T ∈ Hj).
In general, P(X_T ∈ Hi) can be estimated to arbitrary accuracy using a rejection sampling approach with bounded sample complexity. We defer details to the full version of the paper. Here, we focus on the case where, upon observing all tests, the hypothesis is uniquely determined, i.e., P(H | xT) is deterministic for all xT in the support of P. In this case, it holds that P(X_T ∈ Hi) = P(H = hi).
Thus, the total weight is

    Σ_{i≠j} w(Hi × Hj) = ( Σ_i P(hi) )² − Σ_i P(hi)² = 1 − Σ_i P(hi)².
This insight motivates us to use the objective function

    ΔEff(t | xA) := Σ_x P(Xt = x | xA) [ Σ_i P(hi | xA, Xt = x)² ] − Σ_i P(hi | xA)²,

which is the expected reduction in weight from the prior to the posterior distribution. Note that the weight of a distribution, 1 − Σ_i P(hi)², is a monotonically increasing function of the Rényi entropy (of order 2), which is −½ log Σ_i P(hi)². Thus the objective ΔEff can be interpreted as a
(non-standard) information gain in terms of the (exponentiated) Rényi entropy. In our experiments, we show that this criterion performs well in comparison to existing experimental design criteria, including the classical Shannon information gain. Computing ΔEff(t | xA) requires us to perform one
inference task for each outcome x of Xt , and O(n) computations to calculate the weight for each
outcome. We call the algorithm that greedily optimizes ΔEff the EffECXtive algorithm (since it
uses an Efficient Edge Cutting approXimate objective), and present pseudocode in Algorithm 1.
Input: Set of hypotheses H; set of tests T; prior distribution P; function f.
begin
    A ← ∅;
    while ∃ h ≠ h′ : P(h | xA) > 0 and P(h′ | xA) > 0 do
        foreach t ∈ T \ A do
            ΔEff(t | xA) := Σ_x P(Xt = x | xA) [ Σ_i P(hi | xA, Xt = x)² ] − Σ_i P(hi | xA)²;
        Select t* ∈ argmax_t ΔEff(t | xA)/c(t); set A ← A ∪ {t*} and observe outcome x_{t*};
end
Algorithm 1: The EffECXtive algorithm using the Efficient Edge Cutting approXimate objective.
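Below is a direct Python transcription of Algorithm 1, as a sketch. It assumes binary test outcomes and exact inference: posterior(xA) returns P(h | xA), predictive(xA, t) returns P(Xt = x | xA), and sample_outcome runs the selected test; these callables and the zero-probability tolerance are our own conventions, not part of the paper.

def effecxtive(tests, cost, posterior, predictive, sample_outcome):
    A, x_A = set(), {}
    # Loop while at least two hypotheses have nonzero posterior probability.
    while sum(1 for p in posterior(x_A).values() if p > 1e-12) > 1:
        base = sum(p ** 2 for p in posterior(x_A).values())

        def delta_eff(t):
            # Expected posterior weight reduction Delta_Eff(t | x_A).
            return sum(px * sum(p ** 2
                                for p in posterior({**x_A, t: x}).values())
                       for x, px in predictive(x_A, t).items()
                       if px > 0) - base

        t_star = max(tests - A, key=lambda t: delta_eff(t) / cost[t])
        A.add(t_star)
        x_A[t_star] = sample_outcome(t_star)  # observe the outcome of t*
    return A, x_A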
5 Experiments
Several economic theories make claims to explain how people make decisions when the payoffs
are uncertain. Here we use human subject experiments to compare four key theories proposed in
literature. The uncertainty of the payoff in a given situation is represented by a lottery L, which is
simply a random variable with a range of payoffs L := {ℓ1, ..., ℓk}. For our purposes, a payoff is
an integer denoting how many dollars you receive (or lose, if the payoff is negative). Fix lottery
L, and let pi := P[L = ℓi]. The four theories posit distinct utility functions, with agents preferring larger utility lotteries. Three of the theories have associated parameters. The Expected Value theory [22] posits simply UEV(L) = E[L], and has no parameters. Prospect theory [14] posits UPT(L) = Σ_i f(ℓi) w(pi) for nonlinear functions f(ℓi) = ℓi^ρ if ℓi ≥ 0 and f(ℓi) = −λ(−ℓi)^ρ if ℓi < 0, and w(pi) = e^{−(log(1/pi))^γ} [21]. The parameters θPT = {ρ, λ, γ} represent risk aversion, loss aversion and probability weighting factor respectively. For portfolio optimization problems, financial economists have used value functions that give weights to different moments of the lottery [12]: UMVS(L) = wμ·μ − wσ·σ + wν·ν, where θMVS = {wμ, wσ, wν} are the weights for the mean, standard deviation and standardized skewness of the lottery respectively. In Constant Relative Risk Aversion theory [20], there is a parameter θCRRA = a representing the level of risk aversion, and the utility posited is UCRRA(L) = Σ_i pi ℓi^{1−a}/(1 − a) if a ≠ 1, and UCRRA(L) = Σ_i pi log(ℓi) if a = 1.
The goal is to adaptively select a sequence of tests to present to a human subject in order to
distinguish which of the four theories best explains the subject's responses. Here a test t is a pair of lotteries, (L1^t, L2^t). Based on the theory that represents behaviour, one of the lotteries would be preferred to the other, denoted by a binary response xt ∈ {1, 2}. The possible payoffs were fixed to L = {−10, 0, 10} (in dollars), and the distribution (p1, p2, p3) over the payoffs was varied, where pi ∈ {0.01, 0.99} ∪ {0.1, 0.2, ..., 0.9}. By considering all non-identical pairs of such lotteries, we
obtained the set of possible tests.
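For reference, a hedged sketch of the four utility functions as reconstructed above. The parameter names (rho, lam, gamma, w_mu, w_sigma, w_nu, a) mirror our reconstruction of the garbled symbols and may not match the authors' notation; a lottery is a list of (payoff, probability) pairs.

import math

def u_ev(lottery):
    return sum(p * x for x, p in lottery)

def u_pt(lottery, rho=0.9, lam=2.2, gamma=0.9):
    # Prospect theory: concave gains, convex and loss-averse losses,
    # Prelec-style probability weighting.
    f = lambda x: x ** rho if x >= 0 else -lam * (-x) ** rho
    w = lambda p: math.exp(-(math.log(1.0 / p)) ** gamma)
    return sum(f(x) * w(p) for x, p in lottery)

def u_mvs(lottery, w_mu=0.8, w_sigma=0.25, w_nu=0.25):
    # Mean-variance-skewness value function.
    mu = sum(p * x for x, p in lottery)
    sigma = math.sqrt(sum(p * (x - mu) ** 2 for x, p in lottery))
    nu = (sum(p * (x - mu) ** 3 for x, p in lottery) / sigma ** 3
          if sigma > 0 else 0.0)
    return w_mu * mu - w_sigma * sigma + w_nu * nu

def u_crra(lottery, a=1.0):
    # Constant relative risk aversion; only defined for positive payoffs,
    # so lotteries over {-10, 0, 10} would first need a wealth shift.
    if a == 1.0:
        return sum(p * math.log(x) for x, p in lottery)
    return sum(p * x ** (1 - a) / (1 - a) for x, p in lottery)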
We compare six algorithms: EffECXtive, greedily maximizing Information Gain (IG), Value of Information (VOI), Uncertainty Sampling⁵ (US), Generalized Binary Search (GBS), and tests selected
at Random. We evaluated the ability of the algorithms to recover the true model based on simulated
responses. We chose parameter values for the theories such that they made distinct predictions and
were consistent with the values proposed in literature [14]. We drew 1000 samples of the true model
and fixed the parameters of the model to some canonical values, θPT = {0.9, 2.2, 0.9}, θMVS = {0.8, 0.25, 0.25}, θCRRA = 1. Responses were generated using a softmax function, with the probability of response xt = 1 given by P(xt = 1) = 1/(1 + e^{U(L2^t) − U(L1^t)}). Fig. 2(a) shows the
performance of the 6 methods, in terms of the accuracy of recovering the true model with the number
of tests. We find that US, GBS and VOI perform significantly worse than Random in the presence
of noise. EffECXtive outperforms InfoGain significantly, which outperforms Random.
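The simulated-response model is a logistic choice rule; a one-line sketch of it, given any utility function U from the sketch above:

import math, random

def simulate_response(U, L1, L2):
    # P(x_t = 1) = 1 / (1 + exp(U(L2) - U(L1))), then sample the choice.
    p_choose_1 = 1.0 / (1.0 + math.exp(U(L2) - U(L1)))
    return 1 if random.random() < p_choose_1 else 2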
⁵ Uncertainty sampling greedily selects the test whose outcome distribution has maximum Shannon entropy.
[Figure 2: three panels plotting accuracy against number of tests (0-30) for InfoGain, EffECXtive, VOI, Uncertainty Sampling, GBS, and Random: (a) Fixed parameters, (b) With parameter uncertainty, (c) Human subject data, showing the probability of each classified type (EV, n=7; PT, n=2; CRRA, n=1; MVS, n=1).]
Figure 2: (a) Accuracy of identifying the true model with fixed parameters, (b) Accuracy using a grid of
parameters, incorporating uncertainty in their values, (c) Experimental results: 11 subjects were classified into
the theories that described their behavior best. We plot probability of classified type.
We also considered uncertainty in the values of the parameters, by setting ρ from 0.85-0.95, λ from 2.1-2.3, γ from 0.9-1; wμ from 0.8-1.0, wσ from 0.2-0.3, wν from 0.2-0.3; and a from 0.9-1.0, all
with 3 values per parameter. We generated 500 random samples by first randomly sampling a model
and then randomly sampling parameter values. EffECXtive and InfoGain outperformed Random
significantly, Fig. 2(b), although InfoGain did marginally better among the two. The increased
parameter range potentially poses model identifiability issues, and violates some of the assumptions
behind EffECXtive, decreasing its performance to the level of InfoGain.
After obtaining informed consent according to a protocol approved by the Institutional Review Board
of Caltech, we tested 11 human subjects to determine which model fit their behaviour best. Laboratory
experiments have been used previously to distinguish economic theories [4], and here we used a real-time, dynamically optimized experiment that required fewer tests. Subjects were presented 30 tests using EffECXtive. To incentivise the subjects, one of these tests was picked at random, and subjects received payment based on the outcome of their chosen lottery. The behavior of most subjects
(7 out of 10) was best described by EV. This is not unexpected given the high quantitative abilities of
the subjects. We also found heterogeneity in classification: One subject got classified as MVS, as
identified by violations of stochastic dominance in the last few choices. 2 subjects were best described
by prospect theory since they exhibited a high degree of loss aversion and risk aversion. One subject
was also classified as a CRRA-type (log-utility maximizer). Figure 2(c) shows the probability of
the classified model with number of tests. Although we need a larger sample to make significant
claims of the validity of different economic theories, our preliminary results indicate that subject
types can be identified and there is heterogeneity in the population. They also serve as an example of
the benefits of using real-time dynamic experimental design to collect data on human behavior.
6 Conclusions
In this paper, we considered the problem of adaptively selecting which noisy tests to perform in order
to identify an unknown hypothesis sampled from a known prior distribution. We studied the Equivalence Class Determination problem as a means to reduce the case of noisy observations to the classic,
noiseless case. We introduced EC2 , an adaptive greedy algorithm that is guaranteed to choose the
same hypothesis as if it had observed the outcome of all tests, and incurs near-minimal expected cost
among all policies with this guarantee. This is in contrast to popular heuristics that are greedy w.r.t.
version space mass reduction, information gain or value of information, all of which we show can be
very far from optimal. EC2 works by greedily optimizing an objective tailored to differentiate between
sets of observations that lead to different decisions. Our bounds rely on the fact that this objective function is adaptive submodular. We also develop E FF ECX TIVE, a practical algorithm based on EC2 , that
can be applied to arbitrary probabilistic models in which efficient exact inference is possible. We apply
E FF ECX TIVE to a Bayesian experimental design problem, and our results indicate its effectiveness in
comparison to existing algorithms. We believe that our results provide an interesting direction towards
providing a theoretical foundation for practical active learning and experimental design problems.
Acknowledgments. This research was partially supported by ONR grant N00014-09-1-1044, NSF grant
CNS-0932392, NSF grant IIS-0953413, a gift by Microsoft Corporation, an Okawa Foundation Research Grant,
and by the Caltech Center for the Mathematics of Information.
References
[1] N. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In ICML, 2006.
[2] Gowtham Bellala, Suresh Bhavnani, and Clayton Scott. Extensions of generalized binary search to group
identification and exponential costs. In Advances in Neural Information Processing Systems (NIPS), 2010.
[3] Gowtham Bellala, Suresh K. Bhavnani, and Clayton D. Scott. Group-based query learning for rapid
diagnosis in time-critical situations. CoRR, abs/0911.4511, 2009.
[4] Colin F. Camerer. An experimental test of several generalized utility theories. The Journal of Risk and Uncertainty, 2(1):61-104, 1989.
[5] V. T. Chakaravarthy, V. Pandit, S. Roy, P. Awasthi, and M. Mohania. Decision trees for entity identification: Approximation algorithms and hardness results. In Proceedings of the ACM-SIGMOD Symposium on Principles of Database Systems, 2007.
[6] K. Chaloner and I. Verdinelli. Bayesian experimental design: A review. Statistical Science, 10(3):273-304, Aug. 1995.
[7] Sanjoy Dasgupta. Analysis of a greedy active learning strategy. In NIPS, 2004.
[8] Sanjoy Dasgupta. Coarse sample complexity bounds for active learning. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 235-242. MIT Press, Cambridge, MA, 2006.
[9] Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. CoRR, abs/1003.3967v3, 2010.
[10] Daniel Golovin, Andreas Krause, and Debajyoti Ray. Near-optimal Bayesian active learning with noisy observations. CoRR, abs/1010.3091, 2010.
[11] Andrew Guillory and Jeff Bilmes. Average-case active learning with costs. In The 20th International Conference on Algorithmic Learning Theory, University of Porto, Portugal, October 2009.
[12] Giora Hanoch and Haim Levy. Efficient portfolio selection with quadratic and cubic utility. The Journal of Business, 43(2):181-189, 1970.
[13] R. A. Howard. Information value theory. In IEEE Transactions on Systems Science and Cybernetics (SSC-2), 1966.
[14] D. Kahneman and A. Tversky. Prospect theory: An analysis of decision under risk. Econometrica, 47(2):263-292, 1979.
[15] S. Rao Kosaraju, Teresa M. Przytycka, and Ryan S. Borgstrom. On an optimal split tree problem. In WADS '99: Proceedings of the 6th International Workshop on Algorithms and Data Structures, pages 157-168, London, UK, 1999. Springer-Verlag.
[16] Andreas Krause and Carlos Guestrin. Optimal value of information in graphical models. Journal of Artificial Intelligence Research (JAIR), 35:557-591, 2009.
[17] D. V. Lindley. On a measure of the information provided by an experiment. Annals of Mathematical Statistics, 27:986-1005, 1956.
[18] D. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590-604, 1992.
[19] Rob Nowak. Noisy generalized binary search. In Y. Bengio, D. Schuurmans, J. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neural Information Processing Systems 22, pages 1366-1374. 2009.
[20] John W. Pratt. Risk aversion in the small and in the large. Econometrica, 32(1):122-136, 1964.
[21] D. Prelec. The probability weighting function. Econometrica, 66(3):497-527, 1998.
[22] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behaviour. Princeton University Press, 1947.
[23] Alice X. Zheng, Irina Rish, and Alina Beygelzimer. Efficient test selection in active diagnosis via entropy approximation. In UAI '05, Proceedings of the 21st Conference in Uncertainty in Artificial Intelligence, 2005.
The Multidimensional Wisdom of Crowds
Peter Welinder¹ Steve Branson² Serge Belongie² Pietro Perona¹
¹California Institute of Technology, ²University of California, San Diego
{welinder,perona}@caltech.edu
{sbranson,sjb}@cs.ucsd.edu
Abstract
Distributing labeling tasks among hundreds or thousands of annotators is an increasingly important method for annotating large datasets. We present a method
for estimating the underlying value (e.g. the class) of each image from (noisy) annotations provided by multiple annotators. Our method is based on a model of the
image formation and annotation process. Each image has different characteristics
that are represented in an abstract Euclidean space. Each annotator is modeled as
a multidimensional entity with variables representing competence, expertise and
bias. This allows the model to discover and represent groups of annotators that
have different sets of skills and knowledge, as well as groups of images that differ
qualitatively. We find that our model predicts ground truth labels on both synthetic and real data more accurately than state of the art methods. Experiments
also show that our model, starting from a set of binary labels, may discover rich
information, such as different "schools of thought" amongst the annotators, and
can group together images belonging to separate categories.
1 Introduction
Producing large-scale training, validation and test sets is vital for many applications. Most often
this job has to be carried out "by hand" and thus it is delicate, expensive, and tedious. Services
such as Amazon Mechanical Turk (MTurk) have made it easy to distribute simple labeling tasks to
hundreds of workers. Such "crowdsourcing" is increasingly popular and has been used to annotate
large datasets in, for example, Computer Vision [8] and Natural Language Processing [7]. As some
annotators are unreliable, the common wisdom is to collect multiple labels per exemplar and rely
on "majority voting" to determine the correct label. We propose a model for the annotation process
with the goal of obtaining more reliable labels with as few annotators as possible.
It has been observed that some annotators are more skilled and consistent in their labels than others.
We postulate that the ability of annotators is multidimensional; that is, an annotator may be good at
some aspects of a task but worse at others. Annotators may also attach different costs to different
kinds of errors, resulting in different biases for the annotations. Furthermore, different pieces of
data may be easier or more difficult to label. All of these factors contribute to a ?noisy? annotation
process resulting in inconsistent labels. Although approaches for modeling certain aspects of the
annotation process have been proposed in the past [1, 5, 6, 9, 13, 4, 12], no attempt has been made
to blend all characteristics of the process into a single unified model.
This paper has two main contributions: (1) we improve on current state-of-the-art methods for
crowdsourcing by introducing a more comprehensive and accurate model of the human annotation process, and (2) we provide insight into the human annotation process by learning a richer
representation that distinguishes amongst the different sources of annotator error. Understanding
the annotation process can be important toward quantifying the extent to which datasets constructed
from human data are "ground truth".
We propose a generative Bayesian model for the annotation process. We describe an inference
algorithm to estimate the properties of the data being labeled and the annotators labeling them. We
show on synthetic and real data that the model can be used to estimate data difficulty and annotator
[Figure 1, three panels: (a) the MTurk labeling task; (b) the image formation process, with class zi, nuisance factors (species, specimen, pose, location, weather, viewpoint, camera), image Ii and signal xi; (c) the probabilistic graphical model, with plates over the N images, M annotators, and |Lij| labels, and nodes zi, xi, yij, lij and the annotator parameters.]
Figure 1: (a) Sample MTurk task where annotators were asked to click on images of Indigo Bunting (described
in Section 5.2). (b) The image formation process. The class variable zi models if the object (Indigo Bunting)
will be present (zi = 1) or absent (zi = 0) in the image, while a number of "nuisance factors" influence
the appearance of the image. The image is then transformed into a low-dimensional representation xi which
captures the main attributes that are considered by annotators in labeling the image. (c) Probabilistic graphical
model of the entire annotation process where image formation is summarized by the nodes zi and xi . The
observed variables, indicated by shaded circles, are the index i of the image, index j of the annotators, and
value lij of the label provided by annotator j for image i. The annotation process is repeated for all i and for
multiple j thus obtaining multiple labels per image with each annotator labeling multiple images (see Section 3).
biases, while identifying annotators' different "areas of strength". While many of our results are
valid for general labels and tasks, we focus on the binary labeling of images.
2 Related Work
The advantages and drawbacks of using crowdsourcing services for labeling large datasets have
been explored by various authors [2, 7, 8]. In general, it has been found that many labels are of
high quality [8], but a few sloppy annotators do low quality work [7, 12]; thus the need for efficient
algorithms for integrating the labels from many annotators [5, 12]. A related topic is that of using
paired games for obtaining annotations, which can be seen as a form of crowdsourcing [10, 11].
Methods for combining the labels from many different annotators have been studied before. Dawid
and Skene [1] presented a model for multi-valued annotations where the biases and skills of the
annotators were modeled by a confusion matrix. This model was generalized and extended to other
annotation types by Welinder and Perona [12]. Similarly, the model presented by Raykar et al. [4]
considered annotator bias in the context of training binary classifiers with noisy labels. Building
on these works, our model goes a step further in modeling each annotator as a multidimensional
classifier in an abstract feature space. We also draw inspiration from Whitehill et al. [13], who
modeled both annotator competence and image difficulty, but did not consider annotator bias. Our
model generalizes [13] by introducing a high-dimensional concept of image difficulty and combining it with a broader definition of annotator competence. Other approaches have been proposed
for non-binary annotations [9, 6, 12]. By modeling annotator competence and image difficulty as
multidimensional quantities, our approach achieves better performance on real data than previous
methods and provides a richer output space for separating groups of annotators and images.
3 The Annotation Process
An annotator, indexed by j, looks at image Ii and assigns it a label lij . Competent annotators
provide accurate and precise labels, while unskilled annotators provide inconsistent labels. There
is also the possibility of adversarial annotators assigning labels that are opposite to those assigned
by competent annotators. Annotators may have different areas of strength, or expertise, and thus
provide more reliable labels on different subsets of images. For example, when asked to label
images containing ducks, some annotators may be more aware of the distinction between ducks and
geese while others may be more aware of the distinction between ducks, grebes, and cormorants
(visually similar bird species). Furthermore, different annotators may weigh errors differently; one
annotator may be intolerant of false positives, while another is more optimistic and accepts the cost
of a few false positives in order to get a higher detection rate. Lastly, the difficulty of the image
may also matter. A difficult or ambiguous image may be labeled inconsistently even by competent
annotators, while an easy image is labeled consistently even by sloppy annotators. In modeling the
annotation process, all of these factors should be considered.
We model the annotation process in a sequence of steps. N images are produced by some image
capture/collection process. First, a variable zi decides which set of "objects" contribute to producing
an image Ii . For example, zi ∈ {0, 1} may denote the presence/absence of a particular bird species.
A number of "nuisance factors," such as viewpoint and pose, determine the image (see Figure 1).
Each image is transformed by a deterministic "visual transformation" converting pixels into a vector
of task-specific measurements xi , representing measurements that are available to the visual system
of an ideal annotator. For example, the xi could be the firing rates of task-relevant neurons in the
brain of the best human annotator. Another way to think about xi is that it is a vector of visual
attributes (beak shape, plumage color, tail length etc) that the annotator will consider when deciding
on a label. The process of transforming zi to the "signal" xi is stochastic and it is parameterized by
σz , which accounts for the variability in image formation due to the nuisance factors.
There are M annotators in total, and the set of annotators that label image i is denoted by Ji .
An annotator j ∈ Ji , selected to label image Ii , does not have direct access to xi , but rather to
yij = xi + nij , a version of the signal corrupted by annotator-specific and image-specific "noise"
nij . The noise process models differences between the measurements that are ultimately available
to individual annotators. These differences may be due to visual acuity, attention, direction of gaze,
etc. The statistics of this noise are different from annotator to annotator and are parametrized by σj .
Most significantly, the variance of the noise will be lower for competent annotators, as they are more
likely to have access to a clearer and more consistent representation of the image than confused or
unskilled annotators.
The vector yij can be understood as a perceptual encoding that encompasses all major components
that affect an annotator's judgment on an annotation task. Each annotator is parameterized by a unit
vector ŵj , which models the annotator's individual weighting on each of these components. In this
way, ŵj encodes the training or expertise of the annotator in a multidimensional space. The scalar
projection ⟨yij , ŵj ⟩ is compared to a threshold τ̂j . If the signal is above the threshold, the annotator
assigns a label lij = 1, and lij = 0 otherwise.
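To make this concrete, the generative story above can be simulated in a few lines. The sketch below treats the 1-d case with illustrative parameter values (the priors actually used for inference are given in Section 4); it is a sketch under these assumptions, not the authors' code.

# Forward simulation of the annotation model (1-d sketch, illustrative values).
import numpy as np

rng = np.random.default_rng(0)
N, M = 500, 20                                   # images, annotators
beta, sigma_z = 0.5, 0.8                         # class prior and signal spread
z = rng.binomial(1, beta, size=N)                # true class z_i
x = rng.normal(np.where(z == 1, 1.0, -1.0), sigma_z)            # signal x_i
sigma = rng.gamma(1.5, 0.3, size=M)              # annotator noise levels sigma_j
w_hat = np.ones(M)                               # unit directions (scalar case)
tau_hat = rng.normal(0.0, 0.5, size=M)           # annotator thresholds tau_hat_j
y = x[:, None] + sigma[None, :] * rng.standard_normal((N, M))   # y_ij = x_i + n_ij
labels = (w_hat[None, :] * y >= tau_hat[None, :]).astype(int)   # labels l_ij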
4 Model and Inference
Putting together the assumptions of the previous section, we obtain the graphical model shown in
Figure 1. We will assume a Bayesian treatment, with priors on all parameters. The joint probability
distribution, excluding hyper-parameters for brevity, can be written as
p(L, z, x, y, σ, ŵ, τ̂ ) = ∏_{j=1}^{M} p(σj ) p(τ̂j ) p(ŵj ) ∏_{i=1}^{N} [ p(zi ) p(xi | zi ) ∏_{j∈Ji} p(yij | xi , σj ) p(lij | ŵj , τ̂j , yij ) ],    (1)
where we denote z, x, y, σ, τ̂ , ŵ, and L to mean the sets of all the corresponding subscripted
variables. This section describes further assumptions on the probability distributions. These assumptions are not necessary; however, in practice they simplify inference without compromising
the quality of the parameter estimates.
Although both zi and lij may be continuous or multivalued discrete in a more general treatment
of the model [12], we henceforth assume that they are binary, i.e. zi , lij ∈ {0, 1}. We assume a
Bernoulli prior on zi with p(zi = 1) = β, and that xi is normally distributed1 with variance σz²,
p(xi | zi ) = N (xi ; μz , σz²),    (2)
where μz = −1 if zi = 0 and μz = 1 if zi = 1 (see Figure 2a). If xi and yij are multi-dimensional,
then Σj is a covariance matrix. These assumptions are equivalent to using a mixture of Gaussians
prior on xi .
The noisy version of the signal xi that annotator j sees, denoted by yij , is assumed to be generated by
a Gaussian with variance σj² centered at xi , that is p(yij | xi , σj ) = N (yij ; xi , σj²) (see Figure 2b).
We assume that each annotator assigns the label lij according to a linear classifier. The classifier is
parameterized by a direction ŵj of a decision plane and a bias τ̂j . The label lij is deterministically
chosen, i.e. lij = I(⟨ŵj , yij ⟩ ≥ τ̂j ), where I(·) is the indicator function. It is possible to integrate
1 We used the parameters β = 0.5 and σz = 0.8.
[Figure 2 appears here: (a) the densities p(yij | zi = 0) and p(yij | zi = 1); (b) the densities p(xi | zi) with eight realizations x1, . . . , x8, noise densities p(yij | xi) and bias bars for annotators A, B, C; (c) a 2-dimensional example xi = (x1i, x2i) with a decision plane wj = (wj1, wj2).]
Figure 2: Assumptions of the model. (a) Labeling is modeled in a signal detection theory framework, where
the signal yij that annotator j sees for image Ii is produced by one of two Gaussian distributions. Depending
on yij and annotator parameters wj and τj , the annotator labels 1 or 0. (b) The image representation xi is
assumed to be generated by a Gaussian mixture model where zi selects the component. The figure shows 8
different realizations xi (x1 , . . . , x8 ), generated from the mixture model. Depending on the annotator j, noise
nij is added to xi . The three lower plots show the noise distributions for three different annotators (A, B, C),
with increasing "incompetence" σj . The biases τj of the annotators are shown with the red bars. Image no. 4,
represented by x4 , is the most ambiguous image, as it is very close to the optimal decision plane at xi = 0. (c)
An example of 2-dimensional xi . The red line shows the decision plane for one annotator.
out yij and put lij in direct dependence on xi ,
p(lij = 1 | xi , σj , τ̂j ) = Φ( (⟨ŵj , xi ⟩ − τ̂j ) / σj ),    (3)
where Φ(·) is the cumulative standardized normal distribution, a sigmoidal-shaped function.
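In code, Eq. (3) is a single probit evaluation; a minimal sketch (the function name is ours, and scipy is assumed):

# p(l_ij = 1 | x_i, sigma_j, tau_hat_j) = Phi((<w_hat_j, x_i> - tau_hat_j) / sigma_j)
import numpy as np
from scipy.stats import norm

def p_label_one(x_i, w_hat_j, tau_hat_j, sigma_j):
    return norm.cdf((np.dot(w_hat_j, x_i) - tau_hat_j) / sigma_j)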
In order to remove the constraint on ŵj being a direction, i.e. ‖ŵj ‖₂ = 1, we reparameterize the
problem with wj = ŵj /σj and τj = τ̂j /σj . Furthermore, to regularize wj and τj during inference,
we give them Gaussian priors parameterized by α and γ respectively. The prior on τj is centered at
the origin and is very broad (γ = 3). For the prior on wj , we kept the center close to the origin to
be initially pessimistic of the annotator competence, and to allow for adversarial annotators (mean
1, std 3). All of the hyperparameters were chosen somewhat arbitrarily to define a scale for the
parameter space, and in our experiments we found that results (such as error rates in Figure 3) were
quite insensitive to variations in the hyperparameters. The modified Equation 1 becomes,
p(L, x, w, τ ) = ∏_{j=1}^{M} p(τj | γ) p(wj | α) ∏_{i=1}^{N} [ p(xi | σz , β) ∏_{j∈Ji} p(lij | xi , wj , τj ) ].    (4)
The only observed variables in the model are the labels L = {lij }, from which the other parameters
have to be inferred. Since we have priors on the parameters, we proceed by MAP estimation, where
we find the optimal parameters (x*, w*, τ*) by maximizing the posterior on the parameters,
(x*, w*, τ*) = arg max_{x,w,τ} p(x, w, τ | L) = arg max_{x,w,τ} m(x, w, τ ),    (5)
where we have defined m(x, w, τ ) = log p(L, x, w, τ ) from Equation 4. Thus, to do inference, we
need to optimize
m(x, w, τ ) = Σ_{i=1}^{N} log p(xi | σz , β) + Σ_{j=1}^{M} log p(wj | α) + Σ_{j=1}^{M} log p(τj | γ)
    + Σ_{i=1}^{N} Σ_{j∈Ji} [ lij log Φ(⟨wj , xi ⟩ − τj ) + (1 − lij ) log(1 − Φ(⟨wj , xi ⟩ − τj )) ].    (6)
To maximize (6) we carry out alternating optimization using gradient ascent. We begin by fixing the
x parameters and optimizing Equation 6 for (w, ? ) using gradient ascent. Then we fix (w, ? ) and
optimize for x using gradient ascent, iterating between fixing the image parameters and annotator
parameters back and forth. Empirically, we have observed that this optimization scheme usually
converges within 20 iterations.
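The alternating scheme is straightforward to implement. The sketch below handles 1-d xi and wj with plain full-batch gradient ascent on Eq. (6); the learning rate, step counts, and the single broad Gaussian used in place of the mixture prior on xi are simplifying assumptions of ours, not choices from the paper.

# Alternating MAP optimization of Eq. (6) (1-d sketch).
import numpy as np
from scipy.stats import norm

def fit(labels, n_iters=20, n_steps=100, lr=1e-2, sigma_z=0.8):
    N, M = labels.shape
    x, w, tau = np.zeros(N), np.ones(M), np.zeros(M)

    def dloglik(a):
        # derivative of l*log(Phi(a)) + (1-l)*log(1-Phi(a)) with respect to a
        cdf = np.clip(norm.cdf(a), 1e-9, 1 - 1e-9)
        return labels * norm.pdf(a) / cdf - (1 - labels) * norm.pdf(a) / (1 - cdf)

    for _ in range(n_iters):
        for _ in range(n_steps):                 # update (w, tau) with x fixed
            g = dloglik(np.outer(x, w) - tau)
            w += lr * ((g * x[:, None]).sum(0) - (w - 1.0) / 9.0)   # prior N(1, 3^2)
            tau += lr * (-g.sum(0) - tau / 9.0)                     # prior N(0, 3^2)
        for _ in range(n_steps):                 # update x with (w, tau) fixed
            g = dloglik(np.outer(x, w) - tau)
            x += lr * ((g * w[None, :]).sum(1) - x / (1.0 + sigma_z**2))  # Gaussian stand-in
    return x, w, tau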
Figure 3: (a) and (b) show the correlation between the ground truth and estimated parameters as the number
of annotators increases on synthetic data for 1-d and 2-d xi and wj . (c) Performance of our model in predicting
zi on the data from (a), compared to majority voting, the model of [1], and GLAD [13]. (d) Performance on
real labels collected from MTurk. See Section 5.1 for details on (a-c) and Section 5.2 for details on (d).
In the derivation of the model above, there is no restriction on the dimensionality of xi and wj ;
they may be one-dimensional scalars or higher-dimensional vectors. In the former case, assuming
ŵj = 1, the model is equivalent to a standard signal detection theoretic model [3] where a signal
yij is generated by one of two Normal distributions p(yij | zi ) = N (yij | μz , s²) with variance
s² = σz² + σj², centered on μ0 = −1 and μ1 = 1 for zi = 0 and zi = 1 respectively (see Figure 2a).
In signal detection theory, the sensitivity index, conventionally denoted d′, is a measure of how well
the annotator can discriminate the two values of zi [14]. It is defined as the Mahalanobis distance
between μ0 and μ1 normalized by s,
d′ = (μ1 − μ0 )/s = 2/√(σz² + σj²).    (7)
Thus, the lower σj , the better the annotator can distinguish between classes of zi , and the more
"competent" he is. The sensitivity index can also be computed directly from the false alarm rate f
and hit rate h using d′ = Φ⁻¹(h) − Φ⁻¹(f ) where Φ⁻¹(·) is the inverse of the cumulative normal
distribution [14]. Similarly, the "threshold", which is a measure of annotator bias, can be computed
by λ = −½(Φ⁻¹(h) + Φ⁻¹(f )). A large positive λ means that the annotator attributes a high cost
to false positives, while a large negative λ means the annotator avoids false negative mistakes. Under
the assumptions of our model, λ is related to the annotator bias τ̂j by the relation λ = τ̂j /s.
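These quantities are directly computable from an annotator's empirical hit rate h and false alarm rate f; a sketch (the function name is ours):

# Signal detection statistics from empirical rates.
from scipy.stats import norm

def sdt_stats(h, f):
    d_prime = norm.ppf(h) - norm.ppf(f)          # sensitivity index d'
    lam = -0.5 * (norm.ppf(h) + norm.ppf(f))     # threshold (bias) lambda
    return d_prime, lam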
In the case of higher dimensional xi and wj , each component of the xi vector can be thought of as an
attribute or a high level feature. For example, the task may be to label only images with a particular
bird species, say "duck", with label 1, and all other images with 0. Some images contain no birds
at all, while other images contain birds similar to ducks, such as geese or grebes. Some annotators
may be more aware of the distinction between ducks and geese and others may be more aware of
the distinction between ducks, grebes and cormorants. In this case, xi can be considered to be 2-dimensional. One dimension represents image attributes that are useful in the distinction between
ducks and geese, and the other dimension models parameters that are useful in distinction between
ducks and grebes (see Figure 2c). Presumably all annotators see the same attributes, signified by xi ,
but they use them differently. The model can distinguish between annotators with preferences for
different attributes, as shown in Section 5.2.
Image difficulty is represented in the model by the value of xi (see Figure 2b). If there is a particular
ground truth decision plane, (w0 , τ 0 ), images Ii with xi close to the plane will be more difficult for
annotators to label. This is because the annotators see a noise corrupted version, yij , of xi . How
well the annotators can label a particular image depends on both the closeness of xi to the ground
truth decision plane and the annotator's "noise" level, σj . Of course, if the annotator bias τ̂j is far
from the ground truth decision plane, the labels for images near the ground truth decision plane will
be consistent for that annotator, but not necessarily correct.
5 Experiments
5.1 Synthetic Data
To explore whether the inference procedure estimates image and annotator parameters accurately,
we tested our model on synthetic data generated according to the model's assumptions. Similar to
the experimental setup in [13], we generated 500 synthetic image parameters and simulated between
4 and 20 annotators labeling each image. The procedure was repeated 40 times to reduce the noise
in the results.
We generated the annotator parameters by randomly sampling σj from a Gamma distribution (shape
1.5 and scale 0.3) and biases τj from a Normal distribution centered at 0 with standard deviation
Figure 4: Ellipse dataset. (a) The images to be labeled were fuzzy ellipses (oriented uniformly from 0 to π)
enclosed in dark circles. The task was to select ellipses that were more vertical than horizontal (the former are
marked with green circles in the figure). (b-d) The image difficulty parameters xi , annotator competence 2/s,
and bias τ̂j /s learned by our model are compared to the ground truth equivalents. The closer xi is to 0, the
more ambiguous/difficult the discrimination task, corresponding to ellipses that have close to 45° orientation.
0.5. The direction of the decision plane wj was +1 with probability 0.99 and −1 with probability
0.01. The image parameters xi were generated by a two-dimensional Gaussian mixture model with
two components of standard deviation 0.8 centered at -1 and +1. The image ground truth label
zi , and thus the mixture component from which xi was generated, was sampled from a Bernoulli
distribution with p(zi = 1) = 0.5.
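A sketch of this generation procedure for the 1-d case, mirroring the stated distributions (the 2-d variant replaces the scalars by 2-vectors):

import numpy as np
rng = np.random.default_rng(1)

N, M = 500, 20
sigma = rng.gamma(1.5, 0.3, size=M)              # annotator noise, Gamma(1.5, 0.3)
tau = rng.normal(0.0, 0.5, size=M)               # biases, N(0, 0.5^2)
w = np.where(rng.random(M) < 0.99, 1.0, -1.0)    # decision direction
z = rng.binomial(1, 0.5, size=N)                 # ground truth labels
x = rng.normal(np.where(z == 1, 1.0, -1.0), 0.8) # image parameters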
For each trial, we measured the correlation between the ground truth values of each parameter and
the values estimated by the model. We averaged Spearman's rank correlation coefficient for each
parameter over all trials. The result of the simulated labeling process is shown in Figure 3a. As can
be seen from the figure, the model estimates the parameters accurately, with the accuracy increasing
as the number of annotators labeling each image increases. We repeated a similar experiment with
2-dimensional xi and wj (see Figure 3b). As one would expect, estimating higher dimensional xi
and wj requires more data.
We also examined how well our model estimated the binary class values, zi . For comparison, we also
tried three other methods on the same data: a simple majority voting rule for each image, the bias-competence model of [1], and the GLAD algorithm from [13]2 , which models 1-d image difficulty
and annotator competence, but not bias. As can be seen from Figure 3c, our method presents a small
but consistent improvement. In a separate experiment (not shown) we generated synthetic annotators
with increasing bias parameters τj . We found that GLAD performs worse than majority voting when
the variance in the bias between different annotators is high (σ ≳ 0.8); this was expected as GLAD
does not model annotator bias. Similarly, increasing the proportion of difficult images degrades the
performance of the model from [1]. The performance of our model points to the benefits of modeling
all aspects of the annotation process.
5.2 Human Data
We next conducted experiments on annotation results from real MTurk annotators. To compare the
performance of the different models on a real discrimination task, we prepared a dataset of 200 images
of birds (100 with Indigo Bunting, and 100 with Blue Grosbeak), and asked 40 annotators per image
if it contained at least one Indigo Bunting; this is a challenging task (see Figure 1). The annotators
were given a description and example photos of the two bird species. Figure 3d shows how the
performance varies as the number of annotators per image is increased. We sampled a subset of the
annotators for each image. Our model also did better than the other approaches on this dataset.
To demonstrate that annotator competence, annotator bias, image difficulty, and multi-dimensional
decision surfaces are important real life phenomena affecting the annotation process, and to quantify
our model's ability to adapt to each of them, we tested our model on three different image datasets:
one based on pictures of rotated ellipses, another based on synthetically generated "greebles", and a
third dataset with images of waterbirds.
Ellipse Dataset: Annotators were given the simple task of selecting ellipses which they believed
to be more vertical than horizontal. This dataset was chosen to make the model's predictions quantifiable, because ground truth class labels and ellipse angle parameters are known to us for each test
image (but hidden from the inference algorithm).
2 We used the implementation of GLAD available on the first author's website: http://mplab.ucsd.edu/~jake/.
We varied the α prior in their code between 1 and 10 to achieve the best performance.
[Figure 5 appears here: a scatter plot of image parameters in the (x1i, x2i) plane with example images A–H and the annotators' decision planes.]
Figure 5: Estimated image parameters (symbols) and annotator decision planes (lines) for the greeble experiment. Our model learns two image parameter dimensions x1i and x2i which roughly correspond to color
and height, and identifies two clusters of annotator decision planes, which correctly correspond to annotators
primed with color information (green lines) and height information (red lines). On the left are example images
of class 1, which are shorter and more yellow (red and blue dots are uncorrelated with class), and on the right
are images of class 2, which are taller and more green. C and F are easy for all annotators, A and H are
difficult for annotators that prefer height but easy for annotators that prefer color, D and E are difficult for
annotators that prefer color but easy for annotators that prefer height, B and G are difficult for all annotators.
By definition, ellipses at an angle of 45° are impossible to classify, and we expect that images
gradually become easier to classify as the angle moves away from 45°. We used a total of 180 ellipse
images, with rotation angle varying from 1–180°, and collected labels from 20 MTurk annotators for
each image. In this dataset, the estimated image parameters xi and annotator parameters wj are 1-dimensional, where the magnitudes encode image difficulty and annotator competence respectively.
Since we had ground truth labels, we could compute the false alarm and hit rates for each annotator,
and thus compute λ and d′ for comparison with τ̂j /s and 2/s (see Equation 7 and following text).
The results in Figure 4b-d show that annotator competence and bias vary among annotators. Moreover, the figure shows that our model accurately estimates image difficulty, annotator competence,
and annotator bias on data from real MTurk annotators.
Greeble Dataset: In the second experiment, annotators were shown pictures of "greebles" (see
Figure 5) and were told that the greebles belonged to one of two classes. Some annotators were
told that the two greeble classes could be discriminated by height, while others were told they could
be discriminated by color (yellowish vs. green). This was done to explore the scenario in which
annotators have different types of prior knowledge or abilities. We used a total of 200 images with
20 annotators labeling each image. The height and color parameters for the two types of greebles
were randomly generated according to Gaussian distributions with centers (1, 1) and (−1, −1), and
standard deviations of 0.8.
The results in Figure 5 show that the model successfully learned two clusters of annotator decision
surfaces, one (green) of which responds mostly to the first dimension of xi (color) and another (red)
responding mostly to the second dimension of xi (height). These two clusters coincide with the
sets of annotators primed with the two different attributes. Additionally, for the second attribute, we
observed a few "adversarial" annotators whose labels tended to be inverted from their true values.
This was because the instructions to our color annotation task were ambiguously worded, so that
some annotators had become confused and had inverted their labels. Our model robustly handles
these adversarial labels by inverting the sign of the ŵ vector.
Waterbird Dataset: The greeble experiment shows that our model is able to segregate annotators
looking for different attributes in images. To see whether the same phenomenon could be observed in
a task involving images of real objects, we constructed an image dataset of waterbirds. We collected
50 photographs each of the bird species Mallard, American Black Duck, Canada Goose and Red-necked Grebe. In addition to the 200 images of waterbirds, we also selected 40 images without any
birds at all (such as photos of various nature scenes and objects) or where birds were too small to be
seen clearly, making 240 images in total. For each image, we asked 40 annotators on MTurk if they
could see a duck in the image (only Mallards and American Black Ducks are ducks). The hypothesis
[Figure 6 appears here: a scatter plot of image parameters in the (x1i, x2i) plane, with three clusters of annotator decision planes marked 1–3.]
Figure 6: Estimated image and annotator parameters on the Waterbirds dataset. The annotators were asked
to select images containing at least one "duck". The estimated xi parameters for each image are marked with
symbols that are specific to the class the image belongs to. The arrows show the xi coordinates of some example
images. The gray lines are the decision planes of the annotators. The darkness of the lines is an indicator of
‖wj ‖: darker gray means the model estimated the annotator to be more competent. Notice how the annotators'
decision planes fall roughly into three clusters, marked by the blue circles and discussed in Section 5.2.
was that some annotators would be able to discriminate ducks from the two other bird species, while
others would confuse ducks with geese and/or grebes.
Results from the experiment, shown in Figure 6, suggest that there are at least three different groups
of annotators, those who separate: (1) ducks from everything else, (2) ducks and grebes from everything else, and (3) ducks, grebes, and geese from everything else; see numbered circles in Figure 6.
Interestingly, the first group of annotators was better at separating out Canada geese than Red-necked
grebes. This may be because Canada geese are quite distinctive with their long, black necks, while
the grebes have shorter necks and look more duck-like in most poses. There were also a few outlier
annotators that did not provide answers consistent with any other annotators. This is a common
phenomenon on MTurk, where a small percentage of the annotators will provide bad quality labels
in the hope of still getting paid [7]. We also compared the labels predicted by the different models
to the ground truth. Majority voting performed at 68.3% correct labels, GLAD at 60.4%, and our
model performed at 75.4%.
6 Conclusions
We have proposed a Bayesian generative probabilistic model for the annotation process. Given
only binary labels of images from many different annotators, it is possible to infer not only the
underlying class (or value) of the image, but also parameters such as image difficulty and annotator competence and bias. Furthermore, the model represents both the images and the annotators
as multidimensional entities, with different high level attributes and strengths respectively. Experiments with images annotated by MTurk workers show that indeed different annotators have variable
competence level and widely different biases, and that the annotators' classification criterion is best
modeled in multidimensional space. Ultimately, our model can accurately estimate the ground truth
labels by integrating the labels provided by several annotators with different skills, and it does so
better than the current state of the art methods.
Besides estimating ground truth classes from binary labels, our model provides information that is
valuable for defining loss functions and for training classifiers. For example, the image parameters estimated by our model could be taken into account for weighing different training examples,
or, more generally, it could be used for a softer definition of ground truth. Furthermore, our findings suggest that annotators fall into different groups depending on their expertise and on how they
perceive the task. This could be used to select annotators that are experts on certain tasks and to
discover different schools of thought on how to carry out a given task.
Acknowledgements
P.P. and P.W. were supported by ONR MURI Grant #N00014-06-1-0734 and EVOLUT.ONR2. S.B. was supported by NSF CAREER Grant #0448615, NSF Grant AGS-0941760, ONR MURI Grant #N00014-08-1-0638,
and a Google Research Award.
8
References
[1] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. J. Roy. Statistical Society, Series C, 28(1):20–28, 1979.
[2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[3] D. M. Green and J. M. Swets. Signal detection theory and psychophysics. John Wiley and Sons Inc, New York, 1966.
[4] V. C. Raykar, S. Yu, L. H. Zhao, A. Jerebko, C. Florin, G. H. Valadez, L. Bogoni, and L. Moy. Supervised learning from multiple experts: Whom to trust when everyone lies a bit. In ICML, 2009.
[5] V. S. Sheng, F. Provost, and P. G. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers. In KDD, 2008.
[6] P. Smyth, U. Fayyad, M. Burl, P. Perona, and P. Baldi. Inferring ground truth from subjective labelling of Venus images. In NIPS, 1995.
[7] R. Snow, B. O'Connor, D. Jurafsky, and A. Y. Ng. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In EMNLP, 2008.
[8] A. Sorokin and D. Forsyth. Utility data annotation with Amazon Mechanical Turk. In First IEEE Workshop on Internet Vision at CVPR'08, 2008.
[9] M. Spain and P. Perona. Some objects are more equal than others: Measuring and predicting importance. In ECCV, 2008.
[10] L. von Ahn and L. Dabbish. Labeling images with a computer game. In SIGCHI Conference on Human Factors in Computing Systems, pages 319–326, 2004.
[11] L. von Ahn, B. Maurer, C. McMillen, D. Abraham, and M. Blum. reCAPTCHA: Human-based character recognition via web security measures. Science, 321(5895):1465–1468, 2008.
[12] P. Welinder and P. Perona. Online crowdsourcing: Rating annotators and obtaining cost-effective labels. In IEEE Conference on Computer Vision and Pattern Recognition Workshops (ACVHL), 2010.
[13] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In NIPS, 2009.
[14] T. D. Wickens. Elementary signal detection theory. Oxford University Press, United States, 2002.
Identifying graph-structured activation patterns in
networks
James Sharpnack
Machine Learning Department, Statistics Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Aarti Singh
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
We consider the problem of identifying an activation pattern in a complex, largescale network that is embedded in very noisy measurements. This problem is
relevant to several applications, such as identifying traces of a biochemical spread
by a sensor network, expression levels of genes, and anomalous activity or congestion in the Internet. Extracting such patterns is a challenging task specially
if the network is large (pattern is very high-dimensional) and the noise is so excessive that it masks the activity at any single node. However, typically there are
statistical dependencies in the network activation process that can be leveraged to
fuse the measurements of multiple nodes and enable reliable extraction of highdimensional noisy patterns. In this paper, we analyze an estimator based on the
graph Laplacian eigenbasis, and establish the limits of mean square error recovery of noisy patterns arising from a probabilistic (Gaussian or Ising) model based
on an arbitrary graph structure. We consider both deterministic and probabilistic
network evolution models, and our results indicate that by leveraging the network
interaction structure, it is possible to consistently recover high-dimensional patterns even when the noise variance increases with network size.
1 Introduction
The problem of identifying high-dimensional activation patterns embedded in noise is important for
applications such as contamination monitoring by a sensor network, determining the set of differentially expressed genes, and anomaly detection in networks. Formally, we consider the problem of
identifying a pattern corrupted by noise that is observed at the p nodes of a network:
yi = xi + εi ,    i ∈ [p] = {1, . . . , p}    (1)
Here yi denotes the observation at node i, x = [x1 , . . . , xp ] ∈ R^p (or {0, 1}^p ) is the p-dimensional
unknown continuous (or binary) activation pattern, and the noise εi is i.i.d. N (0, σ²), the Gaussian
distribution with mean zero and variance σ². This problem is particularly challenging when the network
is large-scale, and hence x is a high-dimensional pattern embedded in heavy noise. Classical approaches to this problem in the signal processing and statistics literature involve either thresholding
the measurements at every node, or in the discrete case, matching the observed noisy measurements
with all possible patterns (also known as the scan statistic). The first approach does not work well
when the noise level is too high, rendering the per node activity statistically insignificant. In this
case, multiple hypothesis testing effects imply that the noise variance needs to decrease as the number of nodes p increase [10, 1] to enable consistent mean square error (MSE) recovery. The second
approach based on the scan statistic is computationally infeasible in high-dimensional settings as the
number of discrete patterns scale exponentially (? 2p ) in the number of dimensions p.
In practice, network activation patterns tend to be structured due to statistical dependencies in the
network activation process. Thus, it is possible to recover activation patterns in a computationally
and statistically efficient manner in noisy high-dimensional settings by leveraging the structure of
Figure 1: Threshold of noise variance below which consistent MSE recovery of network activation
patterns is possible. If the activation is independent at each node, noise variance needs to decrease
as network size p increases (in blue). If dependencies in the activation process are harnessed, noise
variance can increase as p^α where 0 < α < 1 depends on network interactions (in red).
the dependencies between node measurements. In this paper, we study the limits of MSE recovery
of high-dimensional, graph-structured noisy patterns. Specifically, we assume that the patterns x
are generated from a probabilistic model, either Gaussian graphical model (GGM) or Ising (binary),
based on a general graph structure G(V, E), where V denotes the p vertices and E denotes the edges.
Gaussian graphical model:  p(x) ∝ exp(−x^T Σ^{−1} x)
Ising model:  p(x) ∝ exp(−Σ_{(i,j)∈E} Wij (xi − xj )²) ≡ exp(−x^T L x)    (2)
In the Ising model, L = D − W denotes the graph Laplacian, where W is the weighted adjacency
matrix and D is the diagonal matrix of node degrees di = Σ_{j:(i,j)∈E} Wij . In the Gaussian graphical
model, L = Σ^{−1} denotes the inverse covariance matrix whose zero entries indicate the absence of an
model, L = ??1 denotes the inverse covariance matrix whose zero entries indicate the absence of an
edge between the corresponding nodes in the graph. The graphical model implies that all patterns are
not equally likely to occur in the network. Patterns in which the values of nodes that are connected
by an edge agree are more likely, the likelihood being determined by the weights Wij of the edges.
Thus, the graph structure dictates the statistical dependencies in network measurements. We assume
that this graph structure is known, either because it corresponds to the physical topology of the
network or it can be learnt using network measurements [18, 25].
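Throughout, the combinatorial Laplacian used above is assembled directly from the weighted adjacency matrix; a minimal sketch:

import numpy as np

def laplacian(W):
    # W: symmetric (p, p) weighted adjacency matrix with zero diagonal
    return np.diag(W.sum(axis=1)) - W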
In this paper, we are concerned with the following problem: What is the largest amount of noise that
can be tolerated, as a function of the graph and parameters of the model, while allowing for consistent reconstruction of graph-structured network activation patterns? If the activations at network
nodes are independent of each other, the noise variance (σ²) must decrease with network size p to
ensure consistent MSE recovery [10, 1]. We show that by exploiting the network dependencies, it
is possible to consistently recover high-dimensional patterns when the noise variance is much larger
(can grow with the network size p). See Figure 1.
We characterize the learnability of graph structured patterns based on the eigenspectrum of the
network. To this end, we propose using an estimator based on thresholding the projection of the
network measurements onto the graph Laplacian eigenvectors. This is motivated by the fact that
in the Ising model, unlike the GGM, the Bayes rule and its risk have no known closed form. Our
results indicate that the noise threshold is determined by the eigenspectrum of the Laplacian. For
the GGM this procedure reduces to PCA and the noise threshold depends on the eigenvalues of the
covariance matrix, as expected. We show that for simple graph structures, such as hierarchical or
lattice graphs, as well as the random Erdős–Rényi graph, the noise threshold can possibly grow in
the network size p. Thus, leveraging the structure of network interactions can enable extraction of
high-dimensional patterns embedded in heavy noise.
This paper is organized as follows. We discuss related work in Section 2. Limits of MSE recovery for graph-structured patterns are investigated in Section 3 for the binary Ising model, and in
Section 4 for the Gaussian graphical model. In Section 5, we analyze the noise threshold for some
simple deterministic and random graph structures. Simulation results are presented in Section 6, and
concluding discussion in Section 7. Proof sketches are included in the Appendix.
2 Related work
Given a prior, the Bayes optimal estimators are known to be the posterior mean under MSE, the
Maximum A Posteriori (MAP) rule under 0/1 loss, and the posterior centroid under Hamming loss
[8]. However, these estimators and their corresponding risks (expected loss) have no closed form
for the Ising graphical model and are intractable to analyze. The estimator we propose based on the
graph Laplacian eigenbasis is both easy to compute and analyze. Eigenbasis of the graph Laplacian
has been successfully used for problems, such as clustering [20, 24], dimensionality reduction [5],
and semi-supervised learning [4, 3]. The work on graph and manifold regularization [4, 3, 23, 2] is
closely related and assumes that the function of interest is smooth with respect to the graph, which is
essentially equivalent to assuming a graphical model prior of the form in Eq. (2). However, the use
of graph Laplacian is theoretically justified mainly in the embedded setting [6, 21], where the data
points are sampled from or near a low-dimensional manifold, and the graph weights are the distances
between two points as measured by some kernel. To the best of our knowledge, no previous work
studies the noise threshold for consistent MSE recovery of arbitrary graph-structured patterns.
There have been several attempts at constructing multi-scale bases for graphs that can efficiently represent localized activation patterns, notably diffusion wavelets [9] and treelets [17], however their
approximation capabilities are not well understood. More recently, [22] and [14] independently proposed unbalanced Haar wavelets and characterized their approximation properties for tree-structured
binary patterns. We argue in Section 5.1 that the unbalanced Haar wavelets are a special instance of
graph Laplacian eigenbasis when the underlying graph is hierarchical. On the other hand, a lattice
graph structure yields activations that are globally supported and smooth, and in this case the Laplacian eigenbasis corresponds to the Fourier transform (see Section 5.2). Thus, the graph Laplacian
eigenbasis provides an efficient representation for patterns whose structure is governed by the graph.
3 Denoising binary graph-structured patterns
The binary Ising model is essentially a discrete version of the GGM, however, the Bayes rule and
risk for the Ising model have no known closed form. For binary graph-structured patterns drawn
from an Ising prior, we suggest a different estimator based on projections onto the graph Laplacian
eigenbasis. Let the graph Laplacian L have spectral decomposition, L = UΛU^T , and denote the
first k eigenvectors (corresponding to the smallest eigenvalues) of L by U[k] . Define the estimator
x̂k = U[k] U[k]^T y,    (3)
which is a hard thresholding of the projection of network measurements y = [y1 , . . . , yp ] onto the
graph Laplacian eigenbasis. The following theorem bounds the MSE of this estimator.
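A direct implementation of the estimator in Eq. (3) is a few lines (a sketch; numpy's eigh returns eigenvalues in ascending order, so the first k columns of U are the smoothest eigenvectors):

import numpy as np

def eigenbasis_estimator(y, L, k):
    # x_hat_k = U_[k] U_[k]^T y
    _, U = np.linalg.eigh(L)
    Uk = U[:, :k]
    return Uk @ (Uk.T @ y)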
Theorem 1. The Bayes MSE of the estimator in Eq. (3) for the observation model in Eq. (1), when
the binary activation patterns are drawn from the Ising prior of Eq. (2), is bounded as
R_B := (1/p) E[‖x̂k − x‖²] ≤ min(1, γ/λ_{k+1}) + kσ²/p + e^{−p},
where 0 < γ < 2 is a constant and λ_{k+1} is the (k + 1)th smallest eigenvalue of L.
Through this bias-variance decomposition, we see the eigenspectrum of the graph Laplacian determines a bound on the MSE for binary graph-structured activations. In practice, k can be chosen
using FDR [1] in the eigendomain or cross-validation.
Remark: Consider the binarized estimator x̂′i = 1{x̂i > 1/2}, i ∈ [p]. Then the results of Theorem 1 also provide an upper bound on the expected Hamming distance of this new estimator, since
E[dH (x̂′, x)] = MSE(x̂′) ≤ 4 MSE(x̂) by the triangle inequality.
4 Denoising Gaussian graph-structured patterns
If the network activation patterns are generated by a Gaussian graphical model, it is easy to see that
the eigenvalues of the Laplacian (inverse covariance) determine the MSE decay. Consider the GGM
prior as in Eq. (2), then the posterior distribution is
?1
x|y ? N (2? 2 L + I)?1 y, 2L + ? ?2 I
,
(4)
where
matrix. The posterior mean is the Bayes optimal estimator with Bayes MSE,
P I is the identity
1
?2 ?1
(2?
+
?
)
, where {?i }i?[p] are the ordered eigenvalues of L. For the GGM, we obtain
i
i?[p]
p
a result similar to Theorem 1 for the sake of bounding the performance of the Bayes rule.
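Both the posterior mean of Eq. (4) and the Bayes MSE above admit one-line implementations (a sketch; function names are ours):

import numpy as np

def ggm_posterior_mean(y, L, sigma2):
    p = len(y)
    return np.linalg.solve(2.0 * sigma2 * L + np.eye(p), y)

def ggm_bayes_mse(L, sigma2):
    lam = np.linalg.eigvalsh(L)
    return np.mean(1.0 / (2.0 * lam + 1.0 / sigma2))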
Figure 2: Weight matrices corresponding to hierarchical dependencies between node variables.
Theorem 2. The Bayes MSE of the estimator in Eq. (3) for the observation model in Eq. (1), when
the activation patterns are drawn from the Gaussian graphical model prior of Eq. (2), is bounded as
R_B := (1/p) E[‖x̂k − x‖²] = kσ²/p + (1/p) Σ_{i=k+1}^{p} 1/(2λi) ≤ kσ²/p + 1/(2λ_{k+1}).
Hence, the Bayes MSE for the estimator of Eq. (3) under the GGM or Ising prior is bounded above
by 2/λk + σ²k/p + e^{−p}, which is the form used to prove Corollaries 1, 2, 3 in the next section.
5 Noise threshold for some simple graphs
In this section, we discuss the eigenspectrum of some simple graphs and use the MSE bounds derived
in the previous section to analyze the amount of noise that can be tolerated while ensuring consistent
MSE recovery of high-dimensional patterns. In all these examples, we find that the tolerable noise
level scales as σ² = o(p^α), where α ∈ (0, 1) characterizes the strength of network interactions.
5.1 Hierarchical structure
Consider that, under an appropriate permutation of rows and columns, the weight matrix W has
the hierarchical block form shown in Figure 2. This corresponds to hierarchical graph structured
dependencies between node variables, where ρℓ > ρℓ+1 denote the strength of interactions between
nodes that are in the same block at level ℓ = 0, 1, . . . , L. It is easy to see that in this case the
eigenvectors u of the graph Laplacian correspond to unbalanced Haar wavelet basis (proposed in
[22, 14]), i.e. u ∝ (1/|c2 |) 1c2 − (1/|c1 |) 1c1 , where c1 and c2 are groups of variables within blocks at the
same level that are merged together at the next level (see [19] for the case of a full dyadic hierarchy).
Lemma 1. For a dyadic hierarchy with L levels, the eigenvectors of the graph Laplacian are the
standard Haar wavelet basis and there are L + 1 unique eigenvalues, with the smallest eigenvalue
λ0 = 0, and the ℓth smallest unique eigenvalue (ℓ ∈ [L]) is 2^{ℓ−1}-fold degenerate and given as
λℓ = Σ_{i=L−ℓ+1}^{L} 2^{i−1} ρi + 2^{L−ℓ} ρ_{L−ℓ+1}.
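The eigenvalues of Lemma 1 are easy to evaluate from the level weights; a sketch (the array rho holds the block weights, following the notation above):

import numpy as np

def dyadic_eigenvalues(rho):
    # rho: length L+1, rho[l] is the weight of blocks at level l
    L = len(rho) - 1
    lam = [0.0]
    for ell in range(1, L + 1):
        lam.append(sum(2**(i - 1) * rho[i] for i in range(L - ell + 1, L + 1))
                   + 2**(L - ell) * rho[L - ell + 1])
    return np.array(lam)   # lam[ell] has multiplicity 2^(ell-1)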
Using the bound on MSE as given in Theorems 1 and 2, we can now derive the noise threshold that
allows for consistent MSE recovery of high-dimensional patterns as the network size p ? ?.
Corollary 1. Consider a graph-structured pattern drawn from an Ising model or the GGM with
weight matrix W of the hierarchical block form as depicted in Figure 2. If ρℓ = 2^{−ℓ(1−α)} for all
ℓ ≤ β log2 p + 1, for constants α, β ∈ (0, 1), and ρℓ = 0 otherwise, then the noise threshold for consistent
MSE recovery (RB = o(1)) is
σ² = o(p^{αβ}).
Thus, if we take advantage of the network interaction structure, it is possible to tolerate noise with
variance that scales with the network size p, whereas without exploiting structure the noise variance needs to decrease with p, as discussed in the introduction. Larger α and β imply stronger network
interactions, and hence a larger noise threshold.
5.2 Regular Lattice structure
Now consider the lattice graph which is constructed by placing vertices in a regular grid on a d
dimensional torus and adding edges of weight 1 to adjacent points. Let p = n^d . For d = 1
this is a cycle which has a circulant weight matrix w, with eigenvalues {2 cos(2πk/p) : k ∈ [p]} and
eigenvectors corresponding to the discrete Fourier transform [13]. Let i = (i1 , ..., id ), j = (j1 , ..., jd ) ∈
[n]^d . Then the weight matrix of the lattice in d dimensions is
Wi,j = wi1,j1 δi2,j2 · · · δid,jd + . . . + wid,jd δi1,j1 · · · δi(d−1),j(d−1)    (5)
where δ is the Kronecker delta function. This form for W, together with the fact that all nodes have the same degree,
gives us a closed form for the eigenvalues of the Laplacian, along with a concentration inequality.
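Because the weight matrix in Eq. (5) is a sum of Kronecker products, the lattice Laplacian is the d-fold Kronecker sum of the cycle Laplacian, which the following sketch exploits:

import numpy as np

def cycle_laplacian(n):
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
    return np.diag(W.sum(axis=1)) - W

def lattice_laplacian(n, d):
    L1 = cycle_laplacian(n)
    L = L1
    for _ in range(d - 1):
        # Kronecker sum: eigenvalues add across dimensions
        L = np.kron(L, np.eye(n)) + np.kron(np.eye(L.shape[0]), L1)
    return L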
Lemma 2. Let λ̃ be an eigenvalue of the Laplacian, L, of the lattice graph in d dimensions with
p = n^d vertices, chosen uniformly at random. Then
P{λ̃ ≤ d} ≤ exp{−d/8}.    (6)
Hence, we can choose k such that λk ≥ d and k = ⌈p e^{−d/8}⌉. So, the risk bound becomes
O(2/d + σ² e^{−d/8} + e^{−p}), and as we increase the dimension d of the lattice the MSE bound decays as 1/d.
Corollary 2. Consider a graph-structured pattern drawn from an Ising model or GGM based on a
lattice graph in d dimensions with p = n^d vertices. If n is a constant and d = 8α ln p, for some
constant α ∈ (0, 1), then the noise threshold for consistent MSE recovery (RB = o(1)) is given as:
σ² = o(p^α).
Again, the noise variance can increase with the network size p, and larger α implies stronger network
interactions, as each variable interacts with a larger number of neighbors (d is larger).
5.3 Erdős–Rényi random graph structure
Erdős–Rényi (ER) random graphs are generated by adding edges with weight 1 between any two
vertices within the vertex set V (of size p) with probability qp . It is known that the probability of
edge inclusion (qp ) determines large geometric properties of the graph [11]. Real world networks
are generally sparse, so we set qp = p^{−(1−α)} , where α ∈ (0, 1). Larger α implies higher probability
of edge inclusion and stronger network interaction structure. Using the degree distribution [7], and
a result from perturbation theory, we bound the quantiles of the eigenspectrum of L.
Lemma 3. Let λ̃ denote an eigenvalue of L chosen uniformly at random. Let PG be the probability
measure induced by the ER random graph and Pλ be the uniform distribution over eigenvalues
conditional on the graph. Then, for any δp increasing in p,
PG { Pλ { λ̃ ≤ p^α/2 − p^{α−1} } ≥ δp p^{−α} } = O(1/δp ).    (7)
Hence, we are able to set the sequence of quantiles for the eigenvalue distribution kp = ⌈δp p^{1−α}⌉
such that PG {λ_{kp} ≤ p^α/2 − p^{α−1}} = O(1/δp ). So, we obtain a bound for the expected Bayes
MSE (with respect to the graph): EG [RB ] ≤ O(p^{−α}) + σ² O(δp p^{−α}) + O(1/δp ).
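The concentration used here is easy to check empirically, since the bulk of the Laplacian spectrum sits near the expected degree p·qp = p^α; a sketch:

import numpy as np

rng = np.random.default_rng(2)
p, alpha = 1000, 0.5
q = p**(-(1.0 - alpha))
U = np.triu((rng.random((p, p)) < q).astype(float), 1)
W = U + U.T                                   # symmetric adjacency, zero diagonal
L = np.diag(W.sum(axis=1)) - W
lam = np.linalg.eigvalsh(L)
print(np.median(lam), p**alpha)               # bulk concentrates near p^alpha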
Corollary 3. Consider a graph G drawn from an Erdős–Rényi random graph model with p vertices
and probability of edge inclusion qp = p^{−(1−α)} for some constant α ∈ (0, 1). If the latent graph-structured pattern is drawn from an Ising model or a GGM with the Laplacian of G, then the noise
variance that can be tolerated while ensuring consistent MSE recovery (RB = oPG (1)) is given as:
σ² = o(p^α).
6 Experiments
We simulate patterns from the Ising model defined on hierarchical, lattice and ER graphs. Since the
Ising distribution admits a closed form for the distribution of one node conditional on the rest of the
nodes, a Gibbs sampler can be employed. Histograms of the eigenspectrum for the hierarchical tree
graph with a large depth, the lattice graph in high dimensions, and a draw from the ER graph with
many nodes is shown in figures 3(a), 4(a), 5(a) respectively. The eigenspectrum of the lattice and
ER graphs illustrate the concentration of the eigenvalues about the expected degree of each node.
We use iterative eigenvalue solvers to form our estimator and choose the quantile k by minimizing
the bound in Theorem 1. We compute the Bayes MSE (by taking multiple draws) of our estimator
for a noisy sample of node measurements. We observe in all of the models that the eigenmap
estimator is a substantial improvement over Naive (the Bayes estimator that ignores the structure).
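For reference, the single-site Gibbs update mentioned above has a simple closed form: since (xi − xj)² is linear in xi over {0, 1}, the conditional of xi given the rest is logistic in the neighbor sum. A sketch (assumes W has zero diagonal; sweep count is illustrative):

import numpy as np

def ising_gibbs(W, n_sweeps=200, rng=None):
    rng = rng or np.random.default_rng(0)
    p = W.shape[0]
    x = rng.binomial(1, 0.5, size=p).astype(float)
    for _ in range(n_sweeps):
        for i in range(p):
            field = W[i] @ (2.0 * x - 1.0)           # sum_j W_ij (2 x_j - 1)
            prob = 1.0 / (1.0 + np.exp(-field))      # p(x_i = 1 | x_-i)
            x[i] = float(rng.random() < prob)
    return x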
(a) Eigenvalue Histogram for hierarchical tree.
(b) Estimator Performance
Figure 3: The eigenvalue histogram for the binary tree, L = 11, ρ = 0.1 (left) and the performance
of various estimators (right) with ρ = 0.05 and σ² = 4, both with β = 1.
(a) Eigenvalue Histogram for Lattice.
(b) Estimator Performance
Figure 4: The eigenvalue histogram for the lattice with d = 10 and p = 5^{10} (left) and estimator
performances (right) with p = 3^d and σ² = 1. Notice that the eigenvalues concentrate around 2d.
(a) Eigenvalue Histogram for Erdős–Rényi.
(b) Estimator Performance
Figure 5: The eigenvalue histogram for a draw from the ER graph with p = 2500 and qp = p^{−0.5}
(left) and the estimator performances (right) with qp = p^{−0.75} and σ² = 4. Notice that the eigenvalues are concentrated around p^α where qp = p^{−(1−α)} .
(a) Eigenvalue Histogram for Watts-Strogatz.
(b) Estimator Performance
Figure 6: The eigenvalue histogram for a draw from the Watts–Strogatz graph with d = 5 and
p = 4^5 with 0.25 probability of rewiring (left) and estimator performances (right) with 4^d vertices
and σ² = 4. Notice that the eigenvalues are concentrated around 2d.
See Figures 3(b), 4(b), 5(b). For the hierarchical model, we also sample from the posterior using a
Gibbs sampler and estimate the posterior mean (Bayes rule under MSE). We find that the posterior
mean is only a slight improvement over the eigenmap estimator (Figure 3(b)), despite it?s difficulty
to compute. Also, a binarized version of these estimators does not substantially change the MSE.
We also simulate graphs from the Watts–Strogatz "small world" model [26], which is known to be an
appropriate model for self-organizing systems such as biological systems and human networks. The
"small world" graph is generated by forming the lattice graph described in Section 5.2, then rewiring
each edge with some constant probability to another vertex uniformly at random such that loops
are never created. We observe that the eigenvalues concentrate (more tightly than the lattice graph)
around the expected degree 2d (Figure 6(a)) and note that, like the ER model, the eigenspectrum
converges to a nearly semi-circular distribution [12]. Similarly, the MSE decays in a fashion similar
to the ER model (Figure 6(b)).
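The following sketch (ours) reproduces the qualitative picture of Figure 6 using networkx's ring-lattice Watts-Strogatz generator; note that this variant differs in detail from the $[n]^d$-lattice rewiring described above:

```python
import networkx as nx
import numpy as np

# Ring-lattice Watts-Strogatz graph: 1024 nodes, 10 neighbors, 25% rewiring
G = nx.watts_strogatz_graph(n=1024, k=10, p=0.25, seed=0)
L = nx.laplacian_matrix(G).toarray().astype(float)
eigs = np.linalg.eigvalsh(L)
print("expected degree:", 10)
print("mean / std of Laplacian eigenvalues:", eigs.mean(), eigs.std())
```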
7 Discussion
In this paper, we have characterized the improvement in noise threshold, below which consistent MSE recovery of high-dimensional network activation patterns embedded in heavy noise is possible, as a function of the network size and of the parameters governing the statistical dependencies in the activation process. Our results indicate that by leveraging the network interaction structure, it is possible to tolerate noise with variance that increases with the size of the network, whereas without exploiting dependencies in the node measurements, the noise variance needs to decrease as the network size grows to accommodate multiple hypothesis testing effects.
While we have only considered MSE recovery, it is often possible to detect the presence of patterns in much heavier noise, even though the activation values may not be accurately recovered [16]. Establishing the noise threshold for detection, deriving upper bounds on the noise threshold, and extensions to graphical models with higher-order interaction terms are some of the directions for future work. In addition, the thresholding estimator based on the graph Laplacian eigenbasis can also be used in the high-dimensional linear regression or compressed sensing frameworks to incorporate structure, in addition to sparsity, of the relevant variables.
Appendix
Proof sketch of Theorem 1: First, we argue that, with high probability, $x^T L x \le \beta p$, where $0 < \beta < 2$ is a constant. Let $\Xi = \{x : x^T L x \le \beta p\}$ and let $\bar\Xi$ denote its complement. By Markov's inequality, for $t > 0$,
$$\mathbb{P}\{x^T L x > C\} = \mathbb{P}\{e^{t x^T L x} > e^{tC}\} \le e^{-tC}\,\mathbb{E}\,e^{t x^T L x}.$$
Let $\pi$ denote the uniform distribution over $\{0,1\}^p$ and $N(L) = \int \pi(dx)\, e^{-x^T L x}$. Then,
$$\mathbb{E}\,e^{x^T (tL) x} = \int \pi(dx)\, N(L)^{-1} e^{-x^T L x}\, e^{x^T (tL) x} = \frac{\int \pi(dx)\, e^{-x^T (1-t) L x}}{N(L)} = \frac{N((1-t)L)}{N(L)} \le \frac{1}{N(L)} \le 2^p,$$
where the last two steps follow since $N((1-t)L) \le 1$ for all $t \in (0,1)$, and since $L\vec{1} = 0$ implies $N(L) \ge 2^{-p}$. This gives us the Chernoff-type bound
$$\mathbb{P}(\bar\Xi) \le \mathbb{P}\{x^T L x > C\} \le e^{-tC}\, 2^p = e^{(\log 2 - tC/p)p} \le e^{-p},$$
by setting $C = \beta p$ with $\beta = \frac{1 + \log 2}{t}$. Choosing $t \in \big(\frac{1+\log 2}{2},\, 1\big)$ then gives $\beta < 2$.
Let $u_i$ denote the $i$th eigenvector of the graph Laplacian $L$; then, expanding in this orthonormal basis,
$$\mathbb{E}\big[\|\hat x_k - x\|^2\big] \le \mathbb{E}\Big[\sum_{i=k+1}^{p} (u_i^T x)^2 \,\Big|\, \Xi\Big] + p\,\mathbb{P}(\bar\Xi) + k\sigma^2 \le \sup_{x:\, x^T L x \le \beta p}\ \sum_{i=k+1}^{p} (u_i^T x)^2 + p\, e^{-p} + k\sigma^2.$$
We now establish that $\sup_{x:\, x^T L x \le \beta p} \sum_{i=k+1}^{p} (u_i^T x)^2 \le p \min(1, \beta/\lambda_{k+1})$, and the result follows.
Let $\tilde x_i = u_{i+k}^T x$ for $i \in [p-k]$, and note that $x^T L x = \sum_{i=1}^{p} \lambda_i (u_i^T x)^2 \ge \sum_{j=1}^{p-k} \lambda_{j+k}\, \tilde x_j^2$, for $\lambda_i$ the $i$th eigenvalue of $L$. Consider the primal problem,
$$\max \sum_{j=1}^{p-k} \tilde x_j^2 \quad \text{such that} \quad \sum_{j=1}^{p-k} \lambda_{j+k}\, \tilde x_j^2 \le \beta p, \quad \tilde x \in \mathbb{R}^{p-k}.$$
Note that $x$ contained within the ellipsoid $x^T L x \le \beta p$, $x \in \{0,1\}^p$, implies that $\tilde x$ is feasible, so a solution to this optimization upper bounds $\sup_{x:\, x^T L x \le \beta p} \sum_{i=k+1}^{p} (u_i^T x)^2$. By forming the dual problem, we find that the solution $\tilde x^*$ to the primal problem attains the bound $\|\tilde x\|^2 \le \|\tilde x^*\|^2 = \beta p / \lambda_{k+1}$. Also, $\|\tilde x\|^2 \le \|x\|^2 \le p$, so we obtain the desired bound.
Proof sketch of Theorem 2: Under the same notation as the previous proof, notice that $u_i^T x \sim \mathcal{N}(0, (2\lambda_i)^{-1})$, independently over $i \in [p]$. Then $\mathbb{E}\|\tilde x\|^2 = \sum_{i=k+1}^{p} (2\lambda_i)^{-1}$ and, so,
$$\frac{1}{p}\,\mathbb{E}\|\hat x - x\|^2 = \frac{1}{p}\,\mathbb{E}\|\tilde x\|^2 + \frac{1}{p}\,\mathbb{E}\|U_{[k]}^T \epsilon\|^2 = \frac{1}{p}\sum_{i=k+1}^{p} (2\lambda_i)^{-1} + \sigma^2 k/p \le (2\lambda_{k+1})^{-1} + \sigma^2 k/p.$$
Proof sketch of Corollary 1: Let $\ell^* = (1-\alpha)\log_2 p$. Since $\lambda_i = 2^{-i(1-\rho)}$ for all $i < L - \ell^* + 1$ and $\lambda_i = 0$ otherwise, we have, for $\ell \ge \ell^*$ and since $L = \log_2 p$, $\lambda_\ell^{-1} \le 2^{(L-\ell^*)}\, 2^{1-\rho} = p^{\alpha}\, 2^{1-\rho}$, which is increasing in $p$. Therefore, we can pick $k = 2^{\ell^*}$ and, since $2^{\ell^*}/p = p^{-\alpha}$, the result follows.
Proof sketch of Lemma 2: If $v_1, \ldots, v_d$ are a subset of the eigenvectors of $w$ with eigenvalues $\lambda_1, \ldots, \lambda_d$, then $W(v_1 \otimes \cdots \otimes v_d) = (\lambda_1 + \cdots + \lambda_d)(v_1 \otimes \cdots \otimes v_d)$, where $\otimes$ denotes the tensor product. Noting that $D_{ii} = 2d$ for all $i \in [n]^d$, we see that the Laplacian $L$ has eigenvalues $\lambda^L_i = 2d - \lambda^W_i = \sum_{j=1}^{d} (2 - \lambda^w_{i_j})$ for all $i \in [n]^d$. Recall $\lambda^w_k = 2\cos(\frac{2\pi k}{n})$ for some $k \in [n]$. Let $i$ be distributed uniformly over $[n]^d$. Then $\mathbb{E}[\lambda^w_{i_j}] = 0$ and, by Hoeffding's inequality,
$$\mathbb{P}\Big\{\sum_{j=1}^{d} (2 - \lambda^w_{i_j}) \le 2d - t\Big\} \le \exp\{-2t^2/16d\}.$$
So, using $t = d$, we get that $\mathbb{P}\{\sum_{j=1}^{d} (2 - \lambda^w_{i_j}) \le d\} \le \exp\{-d/8\}$ and the result follows.
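The Kronecker-sum structure of the lattice spectrum used in this proof is easy to check numerically. The sketch below is ours; it assumes the base graph is the $n$-cycle, whose adjacency eigenvalues are $2\cos(2\pi k/n)$:

```python
import numpy as np

def cycle_laplacian(n):
    """Laplacian of the n-cycle; adjacency eigenvalues are 2cos(2*pi*k/n)."""
    w = np.zeros((n, n))
    idx = np.arange(n)
    w[idx, (idx + 1) % n] = 1.0
    w[idx, (idx - 1) % n] = 1.0
    return np.diag(w.sum(axis=1)) - w

def torus_laplacian(n, d):
    """Kronecker-sum construction: L = sum_j I x ... x L1 x ... x I."""
    L1, L = cycle_laplacian(n), np.zeros((n ** d, n ** d))
    for j in range(d):
        left, right = np.eye(n ** j), np.eye(n ** (d - j - 1))
        L += np.kron(np.kron(left, L1), right)
    return L

n, d = 4, 5
eigs = np.linalg.eigvalsh(torus_laplacian(n, d))
# Each eigenvalue is a sum of d terms (2 - 2cos(2*pi*k_j/n)); mean is 2d
print("mean eigenvalue:", eigs.mean(), " (2d =", 2 * d, ")")
print("fraction of eigenvalues <= d:", np.mean(eigs <= d))
```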
Proof of Lemma 3: We introduce a random variable $\sigma$ that is uniform over $[p]$. Note that, conditioned on this random variable, $d_\sigma \sim \mathrm{Binomial}(p-1, q_p)$ and $\mathrm{Var}(d_\sigma) \le p q_p$. We decompose the Laplacian, $L = D - W = (\bar d I - W) + (D - \bar d I)$, into the expected degree of each vertex ($\bar d = (p-1) q_p$), the adjacency $W$, and the deviations from the expected degree, and use the following lemma.
Lemma 4 (Wielandt-Hoffman Theorem). [15, 27] Suppose $A = B + C$ are symmetric $p \times p$ matrices and denote the ordered eigenvalues by $\{\lambda^A_i, \lambda^B_i\}_{i=1}^{p}$. If $\|\cdot\|_F$ denotes the Frobenius norm,
$$\sum_{i=1}^{p} (\lambda^A_i - \lambda^B_i)^2 \le \|C\|_F^2. \qquad (8)$$
Notice that $\mathbb{E}_G \|D - \bar d I\|_F^2 / p = \mathrm{Var}(d_\sigma)$ and so $\mathbb{E}_G \|\lambda^{\bar d I - W} - \lambda^L\|^2 / p \le p q_p = p^\alpha$ (i). Also, it is known that for $\alpha \in (0,1)$ the eigenvalues converge to a semicircular distribution [12] such that $\mathbb{P}_G\{|\lambda^W_\sigma| \le 2\sqrt{p q_p (1-q_p)}\} \to 1$. Since $2\sqrt{p q_p (1-q_p)} \le 2 p^{\alpha/2}$, we have $\mathbb{E}_G[(\lambda^W_\sigma)^2] \le 4 p^\alpha$ for large enough $p$ (ii). Using the triangle inequality,
$$\mathbb{E}_G[(\lambda^L_\sigma - (p-1)q_p)^2] \le \mathbb{E}_G[(\lambda^L_\sigma - ((p-1)q_p - \lambda^W_\sigma))^2] + \mathbb{E}_G[(\lambda^W_\sigma)^2] \le 5 p^\alpha, \qquad (9)$$
where the last step follows using (i), (ii) and $\lambda^{\bar d I - W}_i = (p-1)q_p - \lambda^W_i$. By Markov's inequality,
$$\mathbb{P}_G\Big\{\mathbb{P}_\sigma\big\{\lambda^L_\sigma \le p^\alpha/2 - p^{\alpha-1}\big\} \ge \delta_p\, p^{-\alpha}\Big\} \le \frac{p^\alpha}{\delta_p}\, \mathbb{E}_G\big[\mathbb{P}_\sigma\{\lambda^L_\sigma \le p^\alpha/2 - p^{\alpha-1}\}\big] \qquad (10)$$
for any $\delta_p$ which is an increasing positive function in $p$. We now analyze the right hand side. By Chebyshev's inequality,
$$\mathbb{P}_\sigma\{|\lambda^L_\sigma - (p-1)q_p| \ge \epsilon\} \le \epsilon^{-2}\, \mathbb{E}_\sigma[(\lambda^L_\sigma - (p-1)q_p)^2].$$
Note that $\mathbb{P}_\sigma\{\lambda^L_\sigma \le p q_p - q_p - \epsilon\} \le \mathbb{P}_\sigma\{|\lambda^L_\sigma - (p-1)q_p| \ge \epsilon\}$ and, setting $\epsilon = p q_p / 2 = p^\alpha/2$,
$$\mathbb{P}_\sigma\{\lambda^L_\sigma \le p^\alpha/2 - p^{\alpha-1}\} \le 4 p^{-2\alpha}\, \mathbb{E}_\sigma[(\lambda^L_\sigma - (p-1)q_p)^2].$$
Hence, we are able to complete the lemma, such that for $p$ large enough, using Eqs. (10) and (9),
$$\mathbb{P}_G\Big\{\mathbb{P}_\sigma\big\{\lambda^L_\sigma \le p^\alpha/2 - p^{\alpha-1}\big\} \ge \delta_p\, p^{-\alpha}\Big\} \le \frac{4}{\delta_p\, p^\alpha}\, \mathbb{E}_G\big[\mathbb{E}_\sigma[(\lambda^L_\sigma - (p-1)q_p)^2]\big] \le \frac{20}{\delta_p}. \qquad (11)$$
Proof sketch of Corollary 3: By Lemma 3 and appropriately specifying the quantiles,
$$\mathbb{E}_G R_B \le \mathbb{E}_G\Big[\frac{\beta}{\lambda_{k_p}} + \sigma^2 \frac{k_p}{p} + e^{-p}\Big] \le \frac{\beta}{p^\alpha/2 - p^{\alpha-1}} + \sigma^2\, O(\delta_p\, p^{-\alpha}) + e^{-p} + O(1/\delta_p). \qquad (12)$$
Note that we have the freedom to choose $\delta_p = p^{\alpha/2}/\sigma$, making $\sigma^2\, O(\delta_p\, p^{-\alpha}) = O\big(\sqrt{\sigma^2/p^\alpha}\big) = o(1)$ and $O(1/\delta_p) = o(1)$ if $\sigma^2 = o(p^\alpha)$.
References
[1] F. Abramovich, Y. Benjamini, D. L. Donoho, and I. M. Johnstone, Adapting to unknown sparsity by controlling the false discovery rate, Annals of Statistics 34 (2006), no. 2, 584-653.
[2] Rie K. Ando and Tong Zhang, Learning on graph with Laplacian regularization, Advances in Neural Information Processing Systems (NIPS), 2006.
[3] M. Belkin and P. Niyogi, Semi-supervised learning on Riemannian manifolds, Machine Learning 56(1-3) (2004), 209-239.
[4] Mikhail Belkin, Irina Matveeva, and Partha Niyogi, Regularization and semi-supervised learning on large graphs, Conference on Learning Theory (COLT), 2004.
[5] Mikhail Belkin and Partha Niyogi, Laplacian eigenmaps for dimensionality reduction and data representation, Neural Computation 15 (2003), no. 6, 1373-1396.
[6] Mikhail Belkin and Partha Niyogi, Convergence of Laplacian eigenmaps, Advances in Neural Information Processing Systems (NIPS), 2006.
[7] B. Bollobas, Random graphs, Cambridge University Press, 2001.
[8] Luis E. Carvalho and Charles E. Lawrence, Centroid estimation in discrete high-dimensional spaces with applications in biology, PNAS 105 (2008), no. 9, 3209-3214.
[9] R. Coifman and M. Maggioni, Diffusion wavelets, Applied and Computational Harmonic Analysis 21 (2006), no. 1, 53-94.
[10] D. L. Donoho, I. M. Johnstone, J. C. Hoch, and A. S. Stern, Maximum entropy and the nearly black object, Journal of the Royal Statistical Society, Series B 54 (1992), 41-81.
[11] P. Erdős and A. Rényi, On the evolution of random graphs, Publication of the Mathematical Institute of the Hungarian Academy of Sciences, 1960, pp. 17-61.
[12] Illés J. Farkas, Imre Derényi, Albert-László Barabási, and Tamás Vicsek, Spectra of real-world graphs: Beyond the semi-circle law, Physical Review E 64 (2001), 1-12.
[13] Bernard Friedman, Eigenvalues of composite matrices, Mathematical Proceedings of the Cambridge Philosophical Society 57 (1961), 37-49.
[14] M. Gavish, B. Nadler, and R. Coifman, Multiscale wavelets on trees, graphs and high dimensional data: Theory and applications to semi supervised learning, 27th International Conference on Machine Learning (ICML), 2010.
[15] S. Jalan and J. N. Bandyopadhyay, Random matrix analysis of network Laplacians, Tech. Report cond-mat/0611735, Nov 2006.
[16] J. Jin and D. L. Donoho, Higher criticism for detecting sparse heterogeneous mixtures, Annals of Statistics 32 (2004), no. 3, 962-994.
[17] A. B. Lee, B. Nadler, and L. Wasserman, Treelets - an adaptive multi-scale basis for sparse unordered data, Annals of Applied Statistics 2 (2008), no. 2, 435-471.
[18] N. Meinshausen and P. Buhlmann, High dimensional graphs and variable selection with the lasso, Annals of Statistics 34 (2006), no. 3, 1436-1462.
[19] A. T. Ogielski and D. L. Stein, Dynamics on ultrametric spaces, Physical Review Letters 55 (1985), 1634-1637.
[20] J. Shi and J. Malik, Normalized cuts and image segmentation, IEEE Trans. Pattern Analysis and Machine Intelligence 22 (2000), 888-905.
[21] A. Singer, From graph to manifold Laplacian: the convergence rate, Applied and Computational Harmonic Analysis 21 (2006), no. 1, 135-144.
[22] A. Singh, R. Nowak, and R. Calderbank, Detecting weak but hierarchically-structured patterns in networks, 13th International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[23] A. Smola and R. Kondor, Kernels and regularization on graphs, Conference on Learning Theory (COLT), 2003.
[24] Ulrike von Luxburg, A tutorial on spectral clustering, Statistics and Computing 17 (2007), no. 4, 395-416.
[25] M. Wainwright, P. Ravikumar, and J. D. Lafferty, High-dimensional graphical model selection using l1-regularized logistic regression, Advances in Neural Information Processing Systems (NIPS), 2006.
[26] Duncan J. Watts and Steven H. Strogatz, Collective dynamics of "small-world" networks, Nature 393 (1998), no. 6684, 440-442.
[27] Choujun Zhan, Guanrong Chen, and Lam F. Yeung, On the distribution of Laplacian eigenvalues versus node degrees in complex networks, Physica A 389 (2010), 1779-1788.
3,397 | 4,076 | Linear readout from a neural population
with partial correlation data
Adrien Wohrer(1) , Ranulfo Romo(2) , Christian Machens(1)
(1) Group for Neural Theory
Laboratoire de Neurosciences Cognitives
École Normale Supérieure
75005 Paris, France
{adrien.wohrer,christian.machens}@ens.fr
(2) Instituto de Fisiología Celular
Universidad Nacional Autónoma de México
Mexico City, Mexico
[email protected]
Abstract
How much information does a neural population convey about a stimulus? Answers to this question are known to strongly depend on the correlation of response
variability in neural populations. These noise correlations, however, are essentially immeasurable as the number of parameters in a noise correlation matrix
grows quadratically with population size. Here, we suggest to bypass this problem
by imposing a parametric model on a noise correlation matrix. Our basic assumption is that noise correlations arise due to common inputs between neurons. On
average, noise correlations will therefore reflect signal correlations, which can be
measured in neural populations. We suggest an explicit parametric dependency
between signal and noise correlations. We show how this dependency can be used
to "fill the gaps" in noise correlation matrices using an iterative application of the Wishart distribution over positive definite matrices. We apply our method to data from the primary somatosensory cortex of monkeys performing a two-alternative forced-choice task. We compare the discrimination thresholds read out from the population of recorded neurons with the discrimination threshold of the monkey
population of recorded neurons with the discrimination threshold of the monkey
and show that our method predicts different results than simpler, average schemes
of noise correlations.
1
Introduction
In the field of population coding, a recurring question is the impact on coding efficiency of so-called
noise correlations, i.e., trial-to-trial covariation of different neurons? activities due to shared connectivity. Noise correlations have been proposed to be either detrimental or beneficial to the quantity
of information conveyed by a population [1, 2, 3]. Also, some proposed neural coding schemes,
such as those based on synchronous spike waves, fundamentally rely on second- and higher- order
correlations in the population spikes [4].
The problem of noise correlations is made particularly difficult by its high dimensionality along
two distinct physical magnitudes: time, and number of neurons. Ideally, one should describe the
probabilistic structure of any set of spike trains, at any times, for any ensemble of neurons in
the population, which is clearly impossible experimentally. As a result, when recording from a
population of neurons with a finite number of trials, one only has access to very partial correlation
data. First, studies based on experimental data are most often limited to second order (pairwise)
correlations. Second, the temporal correlation structure is generally simplified (e.g., by assuming
stationarity) or forgotten altogether (by studying only correlation in overall spike counts). Third and
most importantly, even with modern multi-electrode arrays, one is limited in the number of neurons
which can be recorded simultaneously during an experiment. Thus, when data are pooled over
experiments involving different neurons, most pairwise noise correlation indices remain unknown.
In consequence, there is always a strong need to "fill the gaps" in the partial correlation data
extracted experimentally from a population.
In contrast to noise-correlation data, the first-order probabilistic data are easily extracted from a population: they simply consist of the trial-averaged firing rates of the neurons, generally referred to as their "signal". In particular, one can easily measure so-called signal correlations, which measure how different neurons' trial-averaged firing rates covary with changes in the stimulus.
In this paper, we propose a method to "fill the gaps" in noise correlation data, based on signal correlation data. This approach can be summarized by the notion that "similar tuning reveals shared inputs". Indeed, noise correlations reveal a proximity of connection between neurons (through shared inputs and/or reciprocal connections) which, in turn, will generally result in some covariation of the neurons' first-order response to stimuli. When browsing through neural pairs in the population, one should thus expect to find a statistical link between their signal and noise correlations; and this has indeed been reported several times [5, 6]. If this statistical structure is well described, it can serve as a basis to randomly generate noise correlation structures, compatible with the measured signal correlation. Furthermore, to assess the impact of this randomness, one can perform repeated picks of
potential noise correlation structures, each time observing the resulting impact on the coding capacity in the population. Then, this method will provide reliable estimates (average + error bar) of the
impact of noise correlations on population coding, given partial noise correlation data.
We present this general approach in a simplified setting in Section 2. The input stimulus is a single
parameter which can take a finite number of values. The population's response is summarized by a single number for each neuron (its mean firing rate during the trial), so that in turn a correlation structure is simply given by a symmetric, positive, N × N matrix. In Section 3, we detail the method used to generate random noise correlation matrices compatible with the population's signal correlation, which we believe to be novel. In Section 4, we apply this procedure to assess the amount
of information about the stimulus in the somatosensory cortex of macaques responding to tactile
stimulation.
2 Model of the neural population
Population activity R. We consider a population of $N$ neurons tested over a discrete set of possible stimuli $f \in \{f_1, \ldots, f_K\}$, lasting for a period of time $T$. The spike train of neuron $i$ can be described by a series of Dirac pulses $S_i(t) = \sum_{k=1}^{n_i} \delta(t - t_k^{(i)})$. Due to trial-to-trial variability, the number of emitted spikes $n_i$ and the spike times $t_k^{(i)}$ are random variables, whose distribution depends (amongst other things) on the value of stimulus $f$.
At each trial, information about $f$ can be extracted from the spike trains $S_i(t)$ using several possible readout mechanisms. In this article, we limit ourselves to the simplest type of readout: the population activity is summarized by the $N$-dimensional vector $R = \{R_i\}_{i=1\ldots N}$, where $R_i = n_i/T$ is the mean firing rate of neuron $i$ on this trial. A more plausible readout, based on sliding-window estimates of the instantaneous firing rate, has been presented elsewhere [7].
First-moment measurements. Given a particular stimulus $f$, we note $\nu_i(t, f)$ the probability of observing a spike from neuron $i$ at time $t$, regardless of other neurons' spikes (i.e., the first moment density, in the nomenclature of point processes): $\mathbb{E}(S_i(t) \mid f) = \nu_i(t, f)$. Experimentally, $\nu_i(t, f)$ is measured fairly easily, as the trial-averaged firing rate of neuron $i$ in stimulus condition $f$.
Since $R_i = \frac{1}{T}\sum_{t=0}^{T} S_i(t)$, its expectancy is given by
$$\mathbb{E}(R_i \mid f) = \frac{1}{T}\sum_{t=0}^{T} \nu_i(t, f) = \bar\nu_i(f). \qquad (1)$$
This function of $f$ is generally called the tuning curve of neuron $i$.
The trial-averaged firing rates $\nu_i(t, f)$ can also be used to define the signal correlation matrix $S = \{s_{ij}\}_{i,j=1\ldots N}$, as:
$$s_{ij} = \frac{\sum_{f,t} \nu_i(t,f)\, \nu_j(t,f) - KT\, \hat\nu_i \hat\nu_j}{\sqrt{\Big(\sum_{f,t} \nu_i(t,f)^2 - KT\, \hat\nu_i^2\Big)\Big(\sum_{f,t} \nu_j(t,f)^2 - KT\, \hat\nu_j^2\Big)}},$$
where $\hat\nu_i = \frac{1}{KT}\sum_{f,t} \nu_i(t,f)$ is the overall average firing rate of neuron $i$ across trials and stimuli. The Pearson correlation $s_{ij}$ measures how much the first-order responses of neurons $i$ and $j$ "look alike", both in their temporal course and across stimuli. Being a correlation matrix, $S$ is positive definite, with 1s on its diagonal, and off-diagonal elements between $-1$ and $1$. As opposed to most studies, which define signal correlation only based on tuning curves, it is important for our purpose to also include the time course of response in the measure of signal similarity. Indeed, similar temporal courses are more likely to reveal shared input, and thus possible noise correlation.
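Concretely, this definition is a plain Pearson correlation computed after flattening the $(t, f)$ indices. The following sketch is our illustration (the array name and shape convention are ours, not from the paper):

```python
import numpy as np

def signal_correlation(nu):
    """Signal correlation matrix S from trial-averaged rates.

    nu : array of shape (N, T, K), with nu[i, t, f] the trial-averaged
         firing rate of neuron i at time t under stimulus f.
    """
    N = nu.shape[0]
    flat = nu.reshape(N, -1)    # pool the time and stimulus axes
    return np.corrcoef(flat)    # Pearson correlation over (t, f) samples
```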
A model for noise correlations. While first-moment ("signal") statistics can be measured experimentally with good precision, second-moment statistics (noise correlations) can never be totally measured in a large population. For this reason a parametric model must be introduced, that will allow us to infer the correlation parameters that could not be measured.
We introduce a simple model in which the noise correlation matrix $\Gamma$ is independent of stimulus $f$: for a given stimulus $f$, the population activity $R$ is supposed to follow the multivariate Gaussian $\mathcal{N}(\mu(f), Q(f))$, with
$$\mu_i(f) = \bar\nu_i(f), \qquad (2)$$
$$Q_{ij}(f) = \gamma_{ij} \sqrt{\bar\nu_i(f)\, \bar\nu_j(f)}. \qquad (3)$$
Let us make a few remarks about this model. The first line is imposed by eq. (1). The second line implies that $\mathrm{var}(R_i \mid f) = Q_{ii}(f) = \bar\nu_i(f)$, meaning that all neurons in this model are supposed to have a Fano factor of one. This model is the simplest possible for our purpose, as its only free parameter is the chosen noise correlation matrix $\Gamma$, and it has often been used in the literature [8]. Naturally, the assumption of Gaussianity is a simplifying approximation, as the values for $R$ really come from a discretized spike count.
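Sampling single-trial population activities from this model is then immediate. A minimal sketch (ours), assuming an array nu_bar of tuning curves with shape (N, K) and a noise correlation matrix Gamma:

```python
import numpy as np

def sample_population(nu_bar, Gamma, f, n_trials, rng):
    """Draw n_trials population vectors R ~ N(mu(f), Q(f)) for stimulus f,
    with mu_i(f) = nu_bar[i, f] and Q_ij(f) = Gamma_ij * sqrt(mu_i * mu_j)."""
    mu = nu_bar[:, f]
    Q = Gamma * np.sqrt(np.outer(mu, mu))
    return rng.multivariate_normal(mu, Q, size=n_trials)
```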
3 Inferring the full noise correlation structure
3.1 Statistical link between signal and noise correlation
We propose that, across all pairs $(i, j)$ of distinct cells in the population, the noise correlation index is linked to the signal correlation index by the following statistical relationship:
$$\gamma_{ij} \sim \mathcal{N}\big(F(s_{ij}),\, c^2\big), \qquad (4)$$
where the function $F(s_{ij})$ provides the expected value for $\gamma_{ij}$ if $s_{ij}$ is known, and $c$ measures the statistical variations of $\gamma_{ij}$ across pairs of cells sharing the same signal correlation $s_{ij}$. By extension, we note $F(S)$ the matrix with 1s on its diagonal, and non-diagonal elements $F(s_{ij})$.
The choice of $F$ and $c$ is dictated by the experimental data under study. In our case, these are neural recordings in the primary somatosensory cortex (S1) of monkeys responding to a frequency discrimination task (see Section 4). For all pairs $(i, j)$ of simultaneously recorded neurons (a total of several hundred pairs), we computed the two correlation coefficients $(s_{ij}, \gamma_{ij})$. This allowed us to compute an experimental estimate for the distribution of $\gamma_{ij}$ given $s_{ij}$ (Figure 1). We find that
$$F(x) = b + a \exp\big(\beta(x - 1)\big) \qquad (5)$$
Figure 1: Statistical link between signal and noise correlations. A: Experimental distribution of $(s_{ij}, \gamma_{ij})$ across simultaneously recorded neural pairs in population data from cortical area S1 (dark gray: noise correlation coefficients significantly different from 0). B: Same data transformed into a conditional distribution for $\gamma_{ij}$ given $s_{ij}$. Plain lines: experimental mean (green) and error bars (white). Dotted lines: model mean $F(s_{ij})$ (red) and standard deviation $c$ (yellow).
provides a good fit, with $a \simeq 0.6$, $\beta \simeq 2.5$ and $b \simeq 0.05$. For the standard deviation in eq. (4), we choose $c = 0.1$. This value is slightly reduced compared to the experimental data (Figure 1, white vs. yellow confidence intervals), because part of the variability of $\gamma_{ij}$ observed experimentally is due to finite-sample errors in its measurement. We also note that the value found here for $a$ is higher than values generally reported for noise correlations in the literature [2], possibly due to experimental limitations; however, this has no influence on the method proposed here, only on its quantitative results.
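For reference, the fitted link function and its matrix extension are one-liners (a sketch of ours, using the parameter values quoted above):

```python
import numpy as np

def F(x, a=0.6, beta=2.5, b=0.05):
    """Expected noise correlation given signal correlation x (eq. 5)."""
    return b + a * np.exp(beta * (x - 1.0))

def F_matrix(S, **kw):
    """Apply F element-wise and reset the diagonal to 1, as in eq. (4)."""
    M = F(S, **kw)
    np.fill_diagonal(M, 1.0)
    return M
```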
Once the function $F$ is fitted on the subset of simultaneously recorded neural pairs, we can use the statistical relation (4)-(5) to randomly generate noise correlation matrices $\Gamma$ for the full neural population, on the basis of its signal correlation matrix $S$. However, such a random generation is not trivial, as one must ensure at the same time that the individual coefficients $\gamma_{ij}$ follow relation (4), and that $\Gamma$ remains a (positive definite) correlation matrix.
As a first step towards this generation, note that the "average" noise correlation matrix predicted by the model, that is $F(S)$, is itself a correlation matrix. First, by construction, it has 1s on the diagonal and all its elements belong to $[-1, 1]$. Second, $F(S)$ can be written as a Taylor expansion in element-wise powers of $S$ (plus the diagonal term $(1 - a - b)\,\mathrm{Id}$), with only positive coefficients (due to the exponential in eq. (5)). Since the element-wise (or Hadamard) product of two symmetric semi-definite positive matrices is itself semi-definite positive ("Schur's product theorem" [9]), all matrices in the expansion are semi-definite positive, and so is $F(S)$. This property is fundamental to apply the method of random matrix generation that we propose now.
3.2 Generating random correlation matrices
Wishart and anti-Wishart distributions. The Wishart distribution is probably the most straightforward way of generating a random symmetric, positive definite matrix with an imposed expectancy matrix. Let $\Sigma$ be an $N \times N$ symmetric definite positive matrix, $k$ an integer giving the number of degrees of freedom, and introduce the sample covariance matrix of $k$ i.i.d. Gaussian samples $X_i$ drawn according to $\mathcal{N}(0, \Sigma)$: $\hat\Sigma = \frac{1}{k}\sum_{i=1}^{k} X_i X_i^T$. When $k \ge N$, the matrix $\hat\Sigma$ almost surely has full rank. In that case, its pdf has a relatively simple expression, and the distribution for $\hat\Sigma$ is called the Wishart distribution [10]. When $k < N$, the matrix $\hat\Sigma$ is almost surely of rank $k$, so it is not invertible anymore. In that case, its pdf has a much more intricate expression. This distribution has sometimes been referred to as the anti-Wishart distribution [11].
4
In both cases, the resulting distribution for random matrix ?, which we note W(?, k), can be proven
to have the following characteristic function [11]:
k/2
2i
?(T ) = E(e?iTr(?T ) ) = det Id + ?T
k
(where T is a real symmetric matrix). This result can be used to find the two first moments of ?:
E(?ij ) = ?ij
1
cov(?ij , ?kl ) = (?ik ?jl + ?il ?jk ),
k
(6)
(7)
with a variance naturally scaling as 1/k.
Then, a second step consists in renormalizing ? by its diagonal elements, to produce a correlation
matrix ?. The resulting distribution for ?, which we note W(?, k), has been studied by Fisher and
others [12, 10], and is quite intricate to describe analytically. If one takes the generating matrix
? = F (?) to be itself a correlation matrix, then E(?) ' F (?) still holds approximately, albeit with
a small bias, and the variance of ? still scales with 1/k.
Distribution W(F (?), k) could be a good candidate to generate a random correlation matrix ? that
would approximately verify E(?) = F (?). Unfortunately, this method presents a problem in our
case. To fit the statistical relation eq. (4), we need the variance of an element ?ij to be on the order
of c2 ' 0.01. But this implies (through eq. 7) that k must be small (typically, around 20), so that
noise correlation matrices ? generated in this way necessarily have a very low rank (anti-Wishart
distribution, Figure 2, blue traces). This creates an artificial feature of the noise correlation structure
which is not at all desirable.
Iterated Wishart. We propose here an alternative method for generating random correlation matrices, based on iterative applications of the Wishart distribution. This method allows to create
random correlation matrices with a higher variance than a Wishart distribution, while retaining a
much wider eigenvalue spectrum than the more simple anti-Wishart distribution.
The distribution has two positive integer parameters k and m (plus generative matrix F (?)). It is
based on the following recursive procedure:
1. Start from deterministic matrix ?0 = F (?).
2. For n = 1 . . . m, pick ?n following the Wishart-correlation distribution W(?n?1 , k).
3. Take ? = ?m as output random matrix.
Since E(?n ) ' E(?n?1 ), one expects approximately E(?) ' F (?). Furthermore, by taking a large
k, one can produce full-rank matrices, circumventing the ?low-rank problem? of the anti-Wishart
distribution. Because k is large, the variance added at each step is small (proportional to 1/k),
which is compensated by iterating the procedure a large number m of times.
Simulations allowed us to study the resulting distribution for ? (Figure 2, red traces) and compare
it to the more standard ?anti-Wishart-based? distribution for ? (Figure 2, blue traces). We used the
signal correlation data ? observed in a 100-neuron recorded sample from area S1, and the average
noise correlation F (?) given by our experimental fit of F in that same area (Figure 1). As a simple
investigation into the expectancy and variance of these distributions, we computed the empirical
distribution for ?ij conditionned on ?ij , for both distributions (Panel A). On this aspect the two
distributions lead to very similar results, with a mean sticking closely to F (?ij ), except for low
values of ?ij where the slight bias, previously mentionned, is observed in both cases. In contrast,
the two distributions lead to very different results in term of their spectra (Panel B). The iterative
Wishart, used with a large value of k, preserves a non-null spectrum across all its dimensions. It
should be noted, though, that the spectrum is markedly more concentrated on the first eigenvalues
than the spectrum of F (?) (dotted line). However, this tendency towards dimensional reduction is
much milder than in the anti-Wishart case !
As long as m is sensibly smaller than k, the variances added at each step (of order 1/k) simply
sum up, so that m/k is the main factor defining the variance of the distribution. For example, in
5
Figure 2: Random generation of noise correlation matrices. N = 100 neurons from our recorded
sample (area S1). A: Empirical distribution of noise correlation ?ij conditioned on signal correlation
?ij (mean ? std). B: Empirical distribution of eigenvalue spectrum (mean ? std in log domain).
Figure 2, k/m equals 20, precisely the number of degrees of freedom in the equivalent anti-Wishart
distribution. Also, the eigenvalue spectrum of ? appears to follow a quasi-perfect exponential decay
(even on a trial-by-trial basis), a result for which we have yet no explanation. The theoretical study
of the ?iterated Wishart? distribution, especially when k and m tend to infinity in a fixed ratio, might
yield an interesting new type of distribution for positive symmetric matrices.
4
Linear encoding of tactile frequency in somatosensory cortex
To illustrate the interest of random noise correlation matrix generation, we come back to our experimental data. They consist of neural recordings in the somatosensory cortex of macaques during a
two-frequency discrimination task. Two tactile vibrations are successively applied on the fingertips
of a monkey. The monkey must then decide which vibration had the higher frequency (the detailed
experimental protocol has been described elsewhere). Here, we analyze neural responses to the first
presented frequency, in primary somatosensory cortex (S1). Most neurons there have a positive tuning (?i (f ) grows with f ) and positive noise correlations ; however, negative tunings (resulting in
the appearance of negative signal correlations) and significant negative noise-correlations can also
be found (Figure 1-A).
In the notations of Section 2, stimulus f is the vibration frequency, which can take K = 5 possible
values (14, 18, 22, 26 and 30 Hz). The neural activities Ri consist of each neuron?s mean firing
rate over the duration of the stimulation, with T = 250 ms. Our goal is to estimate the amount
of information about stimulus f which can be extracted from a linear readout of neural activities,
depending on the number of neurons N in the population. This implies to estimate the impact of
noise correlations. We thus generate a random noise correlation structure ? following the above
procedure, and assume the resulting distribution for neural activity R to follow eq. (2)-(3). This
being given, one can estimate the sensitivity ?f of a linear readout of f from R, as we now present.
4.1
Linear stimulus discriminability in a neural population
Linear readout from the population. To predict the value of f given R, we resort to a simple
PN
one-dimensional linear readout, based on a prediction variable f? = i=1 ai Ri . The set of neural
weights a = {ai }i=1...N must be chosen in order to maximize the readout performance. We find
it through 1-dimensional Linear Discriminant Analysis (LDA), as the direction which maximizes
(aT Ma)/(aT Qa), where M is the inter-class covariance matrix of class centroids {?(f )}f =f1 ...fK ,
P
and Q = 1/K f Q(f ) is the average intra-class covariance matrix. Then, the norm of a is chosen
so as for variable f? to be the best possible predictor of stimulus value f , in terms of mean square
error.
6
Readout discriminability. The previous procedure produces a prediction variable f? which is normally distributed, with E(f? | f ) = aT ?(f ) and var(f? | f ) = aT Q(f )a. As a result, one can
compute analytically the neurometric curve giving the probability that two successive stimuli be
correctly compared by the prediction variable:
G(?) = P (f?2 > f?1 |f2 ? f1 = ?).
(8)
Finally, a sigmoid can be fit to this curve and provide a single neurometric index ?f , as half its
25% ? 75% interval. ?f measures what we call the linear discriminability of stimulus f in this
neural population. It provides an estimate of the amount of information about the stimulus linearly
present in the population activity R.
4.2
Discriminability curves
Discriminability versus population size. The previous paragraphs have described a means to
estimate the linear discriminability ?f of a given neural population, with a given noise correlation
structure. We apply this method to estimate ?f (N ) in growing populations of size N = 1, 2, . . . ,
up to the full recorded neural sample (approx. 100 neurons in S1, Brodmann area 1). For each
N , ?f (N ) is computed to approximate the linear discriminability of the best N -tuple population
available from our recorded sample. As it is not tractable to test all possible N -tuples, we resort
to the following recursive scheme: Search for neuron i1 with best discriminability, then search for
neuron i2 with the best discriminability for 2-tuple {i1 , i2 }, etc. We term the resulting curve ?f (N )
the discriminability curve for the population. Note that this curve is not necessarily decreasing, as
the last neurons to be included in the population can actually deteriorate the overall readout, by their
influence on the LDA axis a.
Each draw of a sample noise correlation structure gives rise to a different discriminability curve. To
better assess the possible impact of noise correlations, we performed 20 random draws of possible
noise correlation structures, each time computing the discriminability curve. This produces an average discrimination curve flanked by a confidence interval modelling our ignorance of the exact
full correlation structure in the population (Figure 3, red lines). The confidence interval is found
to be rather small. This means that, if our statistical model for the link between signal and noise
correlation (4)-(5) is correct, it is possible to assess with good precision the content of information
present in a neural population, even with very partial knowledge of its correlation structure.
Since the resulting confidence interval on ?f (N ) is small, one could assume that the impact of noise
correlations is only driven by the ?statistical average? matrix F (?). In this particular application,
however, this is not the case. When the noise correlation matrix ? is (deterministically) set equal
to F (?), the resulting linear discriminability is underestimated (blue curve in Figure 3). Indeed,
the statistical fluctuations in ?ij around F (?ij ), of magnitude c ' 0.1, induce an overcorrelation of
certain neural pairs, and a decorrelation of other pairs (including a significant minority of negative
correlation indices ? as observed in our data, Figure 1). The net effect of the decorrelated pairs is
stronger and improves the overall discriminability in the population as compared to the ?statistical
average?.
In our particular case, the predicted discriminability curve is actually closer to what it would be
in a totally decorrelated population (? = 0, green curve). This result is not generic (it depends
on the parameter values in this particular example), but it illustrates how noise correlations are not
necessarily detrimental to coding efficiency [2], in neural populations with balanced tuning and/or
balanced noise correlations (as is the case here, for a minority of cells).
Comparison with monkey behavior. The measure of discriminability through G(?) (eq. 8) mimics the two-stimulus comparison which is actually performed by the monkey. And indeed, one can
build in the same fashion a psychometric curve for the monkey, describing its behavioral accuracy in
comparing correctly f1 and f2 across trials, depending on ? = f2 ? f1 . The resulting psychometric
index ?fmonkey can then directly be compared with ?f , to assess the behavioral relevance of the
proposed linear readout (Figure 3, black dotted line). In our model, the neurometric discriminability
curve crosses the monkey?s psychometric index at around N ' 8. If neurons are assumed to be
decorrelated, the crossing occurs at N ' 5. Using the ?statistical average? of the noise correlation
structure, the monkey?s psychometric index is approached around N ' 20.
7
Figure 3: Discriminability curves for various correlation structures. Neural data: Mean firing rates
over T = 250 msec, for N = 100 neurons from our recorded sample (area S1). Green: No noise
correlations. Red: Random noise correlation structure (mean+std). Blue: Statistical average of the
noise correlation structure. Black: Psychometric index for the monkey.
These results illustrate a number of important qualitative points. First, a known fact: the chosen
noise correlation structure in a model can have a strong impact on the neural readout. Maybe not so
known is the fact that considering a simplified, ?statistical average? of noise correlations may lead
to dramatically different results in the estimation of certain quantities such as discriminability. Thus,
inferring a noise correlation structure must be done with as much care as possible in sticking to the
available structure in the data. We think the method of extrapolation of noise correlation matrices
proposed here offers a means to stick closer to the statistical structure (partially) observed in the
data, than more simplistic methods.
Second, a comment must be made on the typical number of neurons required to attain the monkey?s
behavioral level of performance (N ? 10 using our extrapolation method for noise correlations).
No matter the exact computation and sensory modality, it is a known fact that a few sensory neurons
are sufficient to convey as much information about the stimulus as the monkey seems to be using,
when their spikes are counted over long periods of time (typically, several hundreds of ms) [13, 14].
This is paradoxical when considering the number of neurons involved, even in such a simple task
as that studied here. The simplest explanation to this paradox is that this spike count over several
hundreds of milliseconds is not accessible behaviorally to the animal. Most likely, the animal?s
percept relies on much more instantaneous integrations of its sensory areas? activities, so that the
contributions of many more neurons are required to achieve the animal?s level of accuracy. In this
optic, we have started to study an alternative type of linear readout from a neural population, based
on its instantaneous spiking activity, which we term ?online readout? [7]. We believe that such an
approach, combined with the method proposed here to account for noise correlations with more
accuracy, will lead to better approximations of the number of neurons and typical integration times
used by the monkey in solving this type of task.
5
Conclusion
We have proposed a new method to account for the noise correlation structure in a neural population,
on the basis of partial correlation data. The method is based on the statistical link between signal and
noise correlation, which is a reflection of the underlying neural connectivity, and can be estimated
through pairwise simultaneous recordings. Noise correlation matrices generated in accordance with
this statistical link display robust properties across possible configurations, and thus provide reliable
estimates for the impact of noise correlation ? if, naturally, the statistical model linking signal and
noise correlation is accurate enough. We applied this method to estimate the linear discriminability
in N -tuples of neurons from area S1 when their spikes are counted over 200 msec. We found that
less than 10 neurons can account for the monkey?s behavioral accuracy, suggesting that percepts
based on full neural populations are likely based on much shorter integration times.
References
[1] Zohary, E., Shadlen, M.N. and Newsome, W.T. (1994) Correlated neuronal discharge rate and its implications for psychophysical performance, Nature 370(6485): 140-143
[2] Romo, R., Hernández, A., Zainos, A. and Salinas, E. (2003) Correlated neuronal discharges that increase coding efficiency during perceptual discrimination, Neuron 38(4): 649-657
[3] Averbeck, B.B., Latham, P.E. and Pouget, A. (2006) Neural correlations, population coding and computation, Nature Reviews Neuroscience 7(5): 358-366
[4] Abeles, M. (1991) Corticonics: Neural circuits of the cerebral cortex, Cambridge Univ Pr
[5] Lee, D., Port, N.L., Kruse, W. and Georgopoulos, A.P. (1998) Variability and correlated noise in the discharge of neurons in motor and parietal areas of the primate cortex, Journal of Neuroscience 18(3)
[6] Petersen, R.S., Panzeri, S. and Diamond, M.E. (2001) Population coding of stimulus location in rat somatosensory cortex, Neuron 32(3): 503-514
[7] Wohrer, A., Romo, R. and Machens, C.K. (2010) Online readout of frequency information in areas SI and SII, Computational and Systems Neuroscience 2010 (CoSyne)
[8] Abbott, L.F. and Dayan, P. (1999) The effect of correlated variability on the accuracy of a population code, Neural Computation 11(1): 91-101
[9] Horn, R.A. and Johnson, C.R. (1990) Matrix analysis, Cambridge Univ Pr
[10] Johnson, R.A. and Wichern, D.W. (1998) Applied multivariate statistical analysis, Prentice Hall, Englewood Cliffs, NJ
[11] Janik, R.A. and Nowak, M.A. (2003) Wishart and anti-Wishart random matrices, Journal of Physics A: Mathematical and General 36: 3629-3637
[12] Fisher, R.A. (1915) Frequency distribution of the values of the correlation coefficients in samples from an indefinitely large population, Biometrika 10(4)
[13] Britten, K.H., Shadlen, M.N., Newsome, W.T. and Movshon, J.A. (1992) The analysis of visual motion: a comparison of neuronal and psychophysical performance, Journal of Neuroscience 12(12)
[14] Romo, R. and Salinas, E. (2003) Flutter discrimination: neural codes, perception, memory and decision making, Nature Reviews Neuroscience 4(3): 203-218
3,398 | 4,077 | Optimal learning rates
for Kernel Conjugate Gradient regression
Nicole Krämer
Weierstrass Institute
Mohrenstr. 39, 10117 Berlin, Germany
[email protected]
Gilles Blanchard
Mathematics Institute, University of Potsdam
Am neuen Palais 10, 14469 Potsdam
[email protected]
Abstract
We prove rates of convergence in the statistical sense for kernel-based least
squares regression using a conjugate gradient algorithm, where regularization
against overfitting is obtained by early stopping. This method is directly related
to Kernel Partial Least Squares, a regression method that combines supervised
dimensionality reduction with least squares projection. The rates depend on two
key quantities: first, on the regularity of the target regression function and second, on the effective dimensionality of the data mapped into the kernel space.
Lower bounds on attainable rates depending on these two quantities were established in earlier literature, and we obtain upper bounds for the considered method
that match these lower bounds (up to a log factor) if the true regression function belongs to the reproducing kernel Hilbert space. If this assumption is not
fulfilled, we obtain similar convergence rates provided additional unlabeled data
are available. The order of the learning rates match state-of-the-art results that
were recently obtained for least squares support vector machines and for linear
regularization operators.
1
Introduction
The contribution of this paper is the learning theoretical analysis of kernel-based least squares regression in combination with conjugate gradient techniques. The goal is to estimate a regression function $f^*$ based on random noisy observations. We have an i.i.d. sample of $n$ observations $(X_i, Y_i) \in \mathcal{X} \times \mathbb{R}$ from an unknown distribution $P(X, Y)$ that follows the model
$$Y = f^*(X) + \varepsilon,$$
where $\varepsilon$ is a noise variable whose distribution can possibly depend on $X$, but satisfies $\mathbb{E}[\varepsilon|X] = 0$. We assume that the true regression function $f^*$ belongs to the space $L_2(P_X)$ of square-integrable functions. Following the kernelization principle, we implicitly map the data into a reproducing kernel Hilbert space $H$ with a kernel $k$. We denote by $K_n = \frac{1}{n}(k(X_i, X_j)) \in \mathbb{R}^{n \times n}$ the normalized kernel matrix and by $Y = (Y_1, \ldots, Y_n)^\top \in \mathbb{R}^n$ the $n$-vector of response observations. The task is to find coefficients $\alpha$ such that the function defined by the normalized kernel expansion
$$f_\alpha(X) = \frac{1}{n} \sum_{i=1}^n \alpha_i k(X_i, X)$$
is an adequate estimator of the true regression function $f^*$. The closeness of the estimator $f_\alpha$ to the target $f^*$ is measured via the $L_2(P_X)$ distance,
$$\|f_\alpha - f^*\|_2^2 = \mathbb{E}_{X \sim P_X}\big[(f_\alpha(X) - f^*(X))^2\big] = \mathbb{E}_{XY}\big[(f_\alpha(X) - Y)^2\big] - \mathbb{E}_{XY}\big[(f^*(X) - Y)^2\big].$$
The last equality recalls that this criterion is the same as the excess generalization error for the squared error loss $\ell(f, x, y) = (f(x) - y)^2$.
In empirical risk minimization, we use the training data empirical distribution as a proxy for the generating distribution, and minimize the training squared error. This gives rise to the linear equation
$$K_n \alpha = Y, \qquad \alpha \in \mathbb{R}^n. \tag{1}$$
Assuming $K_n$ invertible, the solution of the above equation is given by $\alpha = K_n^{-1} Y$, which yields a function in $H$ interpolating perfectly the training data but having poor generalization error. It is well-known that to avoid overfitting, some form of regularization is needed. There is a considerable variety of possible approaches (see e.g. [10] for an overview). Perhaps the most well-known one is
$$\alpha = (K_n + \lambda I)^{-1} Y, \tag{2}$$
known alternatively as kernel ridge regression, Tikhonov's regularization, least squares support vector machine, or MAP Gaussian process regression. A powerful generalization of this is to consider
$$\alpha = F_\lambda(K_n) Y, \tag{3}$$
where $F_\lambda : \mathbb{R}^+ \to \mathbb{R}^+$ is a fixed function depending on a parameter $\lambda$. The notation $F_\lambda(K_n)$ is to be interpreted as $F_\lambda$ applied to each eigenvalue of $K_n$ in its eigendecomposition. Intuitively, $F_\lambda$ should be a "regularized" version of the inverse function $F(x) = x^{-1}$. This type of regularization, which we refer to as linear regularization methods, is directly inspired from the theory of inverse problems. Popular examples include as particular cases kernel ridge regression, principal components regression and $L_2$-boosting. Their application in a learning context has been studied extensively [1, 2, 5, 6, 12]. Results obtained in this framework will serve as a comparison yardstick in the sequel.
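For illustration (not part of the original text), here is a minimal Python sketch of the template (3): a spectral filter applied to the eigendecomposition of a kernel matrix. Instantiating it with $F(x) = 1/(x + \lambda)$ recovers kernel ridge regression (2). The function name, kernel choice, and parameter values are illustrative assumptions.

```python
import numpy as np

def linear_regularizer(K, y, F):
    # K = U diag(w) U^T; apply the filter F to each eigenvalue (Eq. 3).
    w, U = np.linalg.eigh(K)
    return U @ (F(w) * (U.T @ y))

# Instantiating F(x) = 1/(x + lam) recovers kernel ridge regression (Eq. 2):
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
K = np.exp(-(X - X.T) ** 2)              # illustrative Gaussian kernel matrix
alpha = linear_regularizer(K / 50, y, lambda w: 1.0 / (w + 0.1))
```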
In this paper, we study conjugate gradient (CG) techniques in combination with early stopping for the regularization of the kernel based learning problem (1). The principle of CG techniques is to restrict the learning problem onto a nested set of data-dependent subspaces, the so-called Krylov subspaces, defined as
$$\mathcal{K}_m(Y, K_n) = \mathrm{span}\{Y, K_n Y, \ldots, K_n^{m-1} Y\}. \tag{4}$$
Denote by $\langle \cdot, \cdot \rangle$ the usual Euclidean scalar product on $\mathbb{R}^n$ rescaled by the factor $n^{-1}$. We define the $K_n$-norm as $\|\alpha\|_{K_n}^2 := \langle \alpha, \alpha \rangle_{K_n} := \langle \alpha, K_n \alpha \rangle$. The CG solution after $m$ iterations is formally defined as
$$\alpha_m = \arg\min_{\alpha \in \mathcal{K}_m(Y, K_n)} \|Y - K_n \alpha\|_{K_n}; \tag{5}$$
and the number $m$ of CG iterations is the model parameter. To simplify notation we define $f_m := f_{\alpha_m}$. In the learning context considered here, regularization corresponds to early stopping. Conjugate gradients have the appealing property that the optimization criterion (5) can be computed by a simple iterative algorithm that constructs basis vectors $d_1, \ldots, d_m$ of $\mathcal{K}_m(Y, K_n)$ by using only forward multiplication of vectors by the matrix $K_n$. Algorithm 1 displays the computation of the CG kernel coefficients $\alpha_m$ defined by (5).

Algorithm 1 Kernel Conjugate Gradient regression
Input: kernel matrix $K_n$, response vector $Y$, maximum number of iterations $m$
Initialization: $\alpha_0 = 0_n$; $r_1 = Y$; $d_1 = Y$; $t_1 = K_n Y$
for $i = 1, \ldots, m$ do
    $t_i = t_i / \|t_i\|_{K_n}$; $d_i = d_i / \|t_i\|_{K_n}$ (normalization of the basis, resp. update vector)
    $\gamma_i = \langle Y, t_i \rangle_{K_n}$ (projection of $Y$ on basis vector)
    $\alpha_i = \alpha_{i-1} + \gamma_i d_i$ (update)
    $r_{i+1} = r_i - \gamma_i t_i$ (residuals)
    $d_{i+1} = r_{i+1} - d_i \langle t_i, K_n r_{i+1} \rangle_{K_n}$; $t_{i+1} = K_n d_{i+1}$ (new update, resp. basis vector)
end for
Return: CG kernel coefficients $\alpha_m$, CG function $f_m = \frac{1}{n} \sum_{i=1}^n \alpha_{i,m} k(X_i, \cdot)$
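The following NumPy transcription of Algorithm 1 is a minimal sketch under the paper's conventions (the $n^{-1}$-rescaled inner product); the function name is ours, and guards against iterating past the dimension of the Krylov space are omitted for brevity.

```python
import numpy as np

def kernel_cg(K, y, m):
    # <u, v>_{K_n} = <u, K v> / n, matching the paper's rescaled inner product.
    n = len(y)
    ip = lambda u, v: (u @ (K @ v)) / n
    alpha = np.zeros(n)
    r, d, t = y.copy(), y.copy(), K @ y      # r_1 = d_1 = Y, t_1 = K_n Y
    for _ in range(m):
        nt = np.sqrt(ip(t, t))
        t, d = t / nt, d / nt                # normalize basis/update vectors
        g = ip(y, t)                         # gamma_i: projection of Y on t_i
        alpha = alpha + g * d                # coefficient update
        r = r - g * t                        # residual update
        d = r - d * ip(t, K @ r)             # new update direction
        t = K @ d                            # new basis vector
    return alpha
```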
The CG approach is also inspired by the theory of inverse problems, but it is not covered by the framework of linear operators defined in (3): As we restrict the learning problem onto the Krylov space $\mathcal{K}_m(Y, K_n)$, the CG coefficients $\alpha_m$ are of the form $\alpha_m = q_m(K_n) Y$ with $q_m$ a polynomial of degree $\leq m - 1$. However, the polynomial $q_m$ is not fixed but depends on $Y$ as well, making the CG method nonlinear in the sense that the coefficients $\alpha_m$ depend on $Y$ in a nonlinear fashion.
We remark that in machine learning, conjugate gradient techniques are often used as fast solvers
for operator equations, e.g. to obtain the solution for the regularized equation (2). We stress that
in this paper, we study conjugate gradients as a regularization approach for kernel based learning,
where the regularity is ensured via early stopping. This approach is not new. As mentioned in
the abstract, the algorithm that we study is closely related to Kernel Partial Least Squares [18]. The latter method also restricts the learning problem onto the Krylov subspace $\mathcal{K}_m(Y, K_n)$, but it minimizes the Euclidean distance $\|Y - K_n \alpha\|$ instead of the distance $\|Y - K_n \alpha\|_{K_n}$ defined above¹. Kernel Partial Least Squares has shown competitive performance in benchmark experiments (see e.g. [18, 19]). Moreover, a similar conjugate gradient approach for non-definite kernels has been proposed and empirically evaluated by Ong et al. [17]. The focus of the current paper is therefore not to stress the usefulness of CG methods in practical applications (and we refer to the above mentioned references) but to examine its theoretical convergence properties. In particular, we establish the
existence of early stopping rules that lead to optimal convergence rates. We summarize our main
results in the next section.
2
Main results
For the presentation of our convergence results, we require suitable assumptions on the learning
problem. We first assume that the kernel space H is separable and that the kernel function is
measurable. (This assumption is satisfied for all practical situations that we know of.) Furthermore, for all results, we make the (relatively standard) assumption that the kernel is bounded:
$k(x, x) \leq \kappa$ for all $x \in \mathcal{X}$. We consider (depending on the result) one of the following assumptions on the noise:
(Bounded) (Bounded $Y$): $|Y| \leq M$ almost surely.
(Bernstein) (Bernstein condition): $\mathbb{E}[\varepsilon^p | X] \leq (1/2)\, p!\, M^p$ almost surely, for all integers $p \geq 2$.
The second assumption is weaker than the first. In particular, the first assumption implies that not
only the noise, but also the target function $f^*$ is bounded in supremum norm, while the second
assumption does not put any additional restriction on the target function.
The regularity of the target function $f^*$ is measured in terms of a source condition as follows. The kernel integral operator is given by
$$K : L_2(P_X) \to L_2(P_X), \quad g \mapsto \int k(\cdot, x)\, g(x)\, dP_X(x).$$
The source condition for the parameters $r > 0$ and $\rho > 0$ is defined by:
$$SC(r, \rho): \quad f^* = K^r u \quad \text{with} \quad \|u\| \leq \kappa^{-r} \rho.$$
It is a known fact that if $r \geq 1/2$, then $f^*$ coincides almost surely with a function belonging to $H_k$. We refer to $r \geq 1/2$ as the "inner case" and to $r < 1/2$ as the "outer case".
The regularity of the kernel operator $K$ with respect to the marginal distribution $P_X$ is measured in terms of the so-called effective dimensionality condition, defined by the two parameters $s \in (0, 1)$, $D \geq 0$ and the condition
$$ED(s, D): \quad \mathrm{tr}(K(K + \lambda I)^{-1}) \leq D^2 (\kappa^{-1} \lambda)^{-s} \quad \text{for all } \lambda \in (0, 1].$$
This notion was first introduced in [22] in a learning context, along with a number of fundamental
analysis tools which we rely on and have been used in the rest of the related literature cited here.
It is known that the best attainable rates of convergence, as a function of the number of examples
n, are determined by the parameters r and s in the above conditions: It was shown in [6] that the
minimax learning rate given these two parameters is lower bounded by $O(n^{-2r/(2r+s)})$.
We now expose our main results in different situations. In all the cases considered, the early stopping rule takes the form of a so-called discrepancy stopping rule: For some sequence of thresholds $\Lambda_m > 0$ to be specified (and possibly depending on the data), define the (data-dependent) stopping iteration $\hat m$ as the first iteration $m$ for which
$$\|Y - K_n \alpha_m\|_{K_n} < \Lambda_m. \tag{6}$$
¹This is generalized to a CG-$\ell$ algorithm ($\ell \in \mathbb{N}_{\geq 0}$) by replacing the $K_n$-norm in (5) with the norm defined by $K_n^\ell$. Corresponding fast iterative algorithms to compute the solution exist for all $\ell$ (see e.g. [11]).
Only in the first result below, the threshold $\Lambda_m$ actually depends on the iteration $m$ and on the data. It is not difficult to prove from (4) and (5) that $\|Y - K_n \alpha_n\|_{K_n} = 0$, so that the above type of stopping rule always has $\hat m \leq n$.
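A hedged sketch of the discrepancy rule (6), reusing the kernel_cg sketch above: it recomputes the CG path for each candidate $m$ for clarity (a real implementation would reuse the iteration) and stops at the first $m$ whose $K_n$-residual drops below the supplied threshold. The threshold sequence is passed in as an argument because its form differs between Sections 2.1 and 2.2.

```python
import numpy as np

def cg_discrepancy_stop(K, y, thresholds):
    # Stop at the first m with ||y - K alpha_m||_{K_n} < thresholds[m-1].
    n = len(y)
    knorm = lambda v: np.sqrt((v @ (K @ v)) / n)
    for m in range(1, n + 1):
        alpha = kernel_cg(K, y, m)       # kernel_cg from the sketch above
        if knorm(y - K @ alpha) < thresholds[m - 1]:
            return m, alpha
    return n, alpha
```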
2.1
Inner case without knowledge on effective dimension
The inner case corresponds to $r \geq 1/2$, i.e. the target function $f^*$ lies in $H$ almost surely. For some constants $\tau > 1$ and $1 > \delta > 0$, we consider the discrepancy stopping rule with the threshold sequence
$$\Lambda_m = 4\tau \sqrt{\frac{\kappa \log(2\delta^{-1})}{n}} \left( \sqrt{\kappa}\, \|\alpha_m\|_{K_n} + M \sqrt{\log(2\delta^{-1})} \right). \tag{7}$$
For technical reasons, we consider a slight variation of the rule in that we stop at step $\hat m - 1$ instead of $\hat m$ if $q_{\hat m}(0) \geq 4\tau \sqrt{\log(2\delta^{-1})/n}$, where $q_{\hat m}$ is the iteration polynomial such that $\alpha_{\hat m} = q_{\hat m}(K_n) Y$. Denote $\tilde m$ the resulting stopping step. We obtain the following result.
Theorem 2.1. Suppose that $Y$ is bounded (Bounded), and that the source condition $SC(r, \rho)$ holds for $r \geq 1/2$. With probability $1 - 2\delta$, the estimator $f_{\tilde m}$ obtained by the (modified) discrepancy stopping rule (7) satisfies
$$\|f_{\tilde m} - f^*\|_2^2 \leq c(r, \tau)(M + \rho)^2 \left( \frac{\kappa \log^2 \delta^{-1}}{n} \right)^{\frac{2r}{2r+1}}.$$
We present the proof in Section 4.
2.2
Optimal rates in inner case
We now introduce a stopping rule yielding order-optimal convergence rates as a function of the two parameters $r$ and $s$ in the "inner" case ($r \geq 1/2$, which is equivalent to saying that the target function belongs to $H$ almost surely). For some constant $\tau' > 3/2$ and $1 > \delta > 0$, we consider the discrepancy stopping rule with the fixed threshold
$$\Lambda_m \equiv \Lambda = \tau' M \kappa \left( \frac{4D}{\sqrt{n}} \log \frac{6}{\delta} \right)^{\frac{2r+1}{2r+s}} \tag{8}$$
for which we obtain the following:
Theorem 2.2. Suppose that the noise fulfills the Bernstein assumption (Bernstein), that the source condition $SC(r, \rho)$ holds for $r \geq 1/2$, and that $ED(s, D)$ holds. With probability $1 - 3\delta$, the estimator $f_{\hat m}$ obtained by the discrepancy stopping rule (8) satisfies
$$\|f_{\hat m} - f^*\|_2^2 \leq c(r, \tau')(M + \rho)^2 \left( \frac{16 D^2 \kappa^2}{n} \log^2 \frac{6}{\delta} \right)^{\frac{2r}{2r+s}}.$$
Due to space limitations, the proof is presented in the supplementary material.
2.3
Optimal rates in outer case, given additional unlabeled data
We now turn to the "outer" case. In this case, we make the additional assumption that unlabeled data is available. Assume that we have $\tilde n$ i.i.d. observations $X_1, \ldots, X_{\tilde n}$, out of which only the first $n$ are labeled. We define a new response vector $\tilde Y = \frac{\tilde n}{n}(Y_1, \ldots, Y_n, 0, \ldots, 0) \in \mathbb{R}^{\tilde n}$ and run the CG algorithm 1 on $X_1, \ldots, X_{\tilde n}$ and $\tilde Y$. We use the same threshold (8) as in the previous section for the stopping rule, except that the factor $M$ is replaced by $\max(M, \rho)$.
Theorem 2.3. Suppose assumptions (Bounded), $SC(r, \rho)$ and $ED(s, D)$, with $r + s \geq 1/2$. Assume unlabeled data is available with
$$\tilde n \geq n \left( \frac{16 D^2 \kappa^2}{n} \log^2 \frac{6}{\delta} \right)^{-\frac{(1 - 2r)_+}{2r+s}}.$$
Then with probability $1 - 3\delta$, the estimator $f_{\hat m}$ obtained by the discrepancy stopping rule defined above satisfies
$$\|f_{\hat m} - f^*\|_2^2 \leq c(r, \tau')(M + \rho)^2 \left( \frac{16 D^2 \kappa^2}{n} \log^2 \frac{6}{\delta} \right)^{\frac{2r}{2r+s}}.$$
A sketch of the proof can be found in the supplementary material.
3
Discussion and comparison to other results
For the inner case (i.e. $f^* \in H$ almost surely) we provide two different consistent stopping criteria. The first one (Section 2.1) is oblivious to the effective dimension parameter $s$, and the obtained bound corresponds to the "worst case" with respect to this parameter (that is, $s = 1$). However, an interesting feature of stopping rule (7) is that the rule itself does not depend on the a priori knowledge of the regularity parameter $r$, while the achieved learning rate does (and with the optimal dependence in $r$ when $s = 1$). Hence, Theorem 2.1 implies that the obtained rule is automatically adaptive with respect to the regularity of the target function. This contrasts with the results obtained in [1] for linear regularization schemes of the form (3) (also in the case $s = 1$), for which the choice of the regularization parameter $\lambda$ leading to optimal learning rates required the knowledge of $r$ beforehand.
When taking into account also the effective dimensionality parameter $s$, Theorem 2.2 provides the order-optimal convergence rate in the inner case (up to a log factor). A noticeable difference to Theorem 2.1 however, is that the stopping rule is no longer adaptive, that is, it depends on the a priori knowledge of parameters $r$ and $s$. We observe that previously obtained results for linear regularization schemes of the form (2) in [6] and of the form (3) in [5] also rely on the a priori knowledge of $r$ and $s$ to determine the appropriate regularization parameter $\lambda$.
The outer case (when the target function does not lie in the reproducing kernel Hilbert space $H$) is more challenging and to some extent less well understood. The fact that additional assumptions are made is not a particular artefact of CG methods, but also appears in the studies of other regularization techniques. Here we follow the semi-supervised approach that is proposed in e.g. [5] (to study linear regularization of the form (3)) and assume that we have sufficient additional unlabeled data in order to ensure learning rates that are optimal as a function of the number of labeled data. We remark that other forms of additional requirements can be found in the recent literature in order to reach optimal rates. For regularized M-estimation schemes studied in [20], availability of unlabeled data is not required, but a condition is imposed of the form $\|f\|_\infty \leq C \|f\|_H^p \|f\|_2^{1-p}$ for all $f \in H$ and some $p \in (0, 1]$. In [13], assumptions on the supremum norm of the eigenfunctions of the kernel integral operator are made (see [20] for an in-depth discussion on this type of assumptions).
Finally, as explained in the introduction, the term "conjugate gradients" comprises a class of methods that approximate the solution of linear equations on Krylov subspaces. In the context of learning, our approach is most closely linked to Partial Least Squares (PLS) [21] and its kernel extension [18]. While PLS has proven to be successful in a wide range of applications and is considered one of the standard approaches in chemometrics, there are only few studies of its theoretical properties. In [8, 14], consistency properties are provided for linear PLS under the assumption that the target function $f^*$ depends on a finite known number of orthogonal latent components. These findings were recently extended to the nonlinear case and without the assumption of a latent components model [3], but all results come without optimal rates of convergence. For the slightly different CG approach studied by Ong et al. [17], bounds on the difference between the empirical risks of the CG
approximation and of the target function are derived in [16], but no bounds on the generalization
error were derived.
4
Proofs
Convergence rates for regularization methods of the type (2) or (3) have been studied by casting
kernel learning methods into the framework of inverse problems (see [9]). We use this framework
for the present results as well, and recapitulate here some important facts.
We first define the empirical evaluation operator $T_n$ as follows:
$$T_n : g \in H \mapsto T_n g := (g(X_1), \ldots, g(X_n))^\top \in \mathbb{R}^n$$
and the empirical integral operator $T_n^*$ as:
$$T_n^* : u = (u_1, \ldots, u_n) \in \mathbb{R}^n \mapsto T_n^* u := \frac{1}{n} \sum_{i=1}^n u_i k(X_i, \cdot) \in H.$$
Using the reproducing property of the kernel, it can be readily checked that $T_n$ and $T_n^*$ are adjoint operators, i.e. they satisfy $\langle T_n^* u, g \rangle_H = \langle u, T_n g \rangle$, for all $u \in \mathbb{R}^n$, $g \in H$. Furthermore, $K_n = T_n T_n^*$, and therefore $\|\alpha\|_{K_n} = \|f_\alpha\|_H$. Based on these facts, equation (5) can be rewritten as
$$f_m = \arg\min_{f \in \mathcal{K}_m(T_n^* Y,\, S_n)} \|T_n^* Y - S_n f\|_H, \tag{9}$$
where $S_n = T_n^* T_n$ is a self-adjoint operator of $H$, called empirical covariance operator. This definition corresponds to that of the "usual" conjugate gradient algorithm formally applied to the so-called normal equation (in $H$)
$$S_n f_\alpha = T_n^* Y,$$
which is obtained from (1) by left multiplication by $T_n^*$. The advantage of this reformulation is that it can be interpreted as a "perturbation" of a population, noiseless version (of the equation and of the algorithm), wherein $Y$ is replaced by the target function $f^*$ and the empirical operators $T_n^*$, $T_n$ are respectively replaced by their population analogues, the kernel integral operator
$$T^* : g \in L_2(P_X) \mapsto T^* g := \int k(\cdot, x)\, g(x)\, dP_X(x) = \mathbb{E}[k(X, \cdot)\, g(X)] \in H,$$
and the change-of-space operator
$$T : g \in H \mapsto g \in L_2(P_X).$$
The latter maps a function to itself but between two Hilbert spaces which differ with respect to their geometry: the inner product of $H$ being defined by the kernel function $k$, while the inner product of $L_2(P_X)$ depends on the data generating distribution (this operator is well defined: since the kernel is bounded, all functions in $H$ are bounded and therefore square integrable under any distribution $P_X$).
The following results, taken from [1] (Propositions 21 and 22), quantify more precisely that the empirical covariance operator $S_n = T_n^* T_n$ and the empirical integral operator applied to the data, $T_n^* Y$, are close to the population covariance operator $S = T^* T$ and to the kernel integral operator applied to the noiseless target function, $T^* f^*$, respectively.
Proposition 4.1. Assume that $k(x, x) \leq \kappa$ for all $x \in \mathcal{X}$. Then the following holds:
$$P\left( \|S_n - S\|_{HS} \leq \frac{4\kappa}{\sqrt{n}} \sqrt{\log \frac{2}{\delta}} \right) \geq 1 - \delta, \tag{10}$$
where $\|\cdot\|_{HS}$ denotes the Hilbert-Schmidt norm. If the representation $f^* = T f_H^*$ holds, and under assumption (Bernstein), we have the following:
$$P\left( \|T_n^* Y - S f_H^*\|_H \leq \frac{4 M \sqrt{\kappa}}{\sqrt{n}} \sqrt{\log \frac{2}{\delta}} \right) \geq 1 - \delta. \tag{11}$$
We note that $f^* = T f_H^*$ implies that the target function $f^*$ coincides with a function $f_H^*$ belonging to $H$ (remember that $T$ is just the change-of-space operator). Hence, the second result (11) is valid for the case with $r \geq 1/2$, but it is not true in general for $r < 1/2$.
4.1
Nemirovskii's result on conjugate gradient regularization rates
We recall a sharp result due to Nemirovskii [15] establishing convergence rates for conjugate gradient methods in a deterministic context. We present the result in an abstract context, then show how,
combined with the previous section, it leads to a proof of Theorem 2.1. Consider the linear equation
$$A z^* = b,$$
where $A$ is a bounded linear operator over a Hilbert space $H$. Assume that the above equation has a solution and denote $z^*$ its minimal norm solution; assume further that a self-adjoint operator $\hat A$ and an element $\hat b \in H$ are known such that
$$\|A - \hat A\| \leq \delta; \qquad \|b - \hat b\| \leq \varepsilon, \tag{12}$$
(with $\delta$ and $\varepsilon$ known positive numbers). Consider the CG algorithm based on the noisy operator $\hat A$ and data $\hat b$, giving the output at step $m$
$$z_m = \arg\min_{z \in \mathcal{K}_m(\hat A, \hat b)} \|\hat A z - \hat b\|^2. \tag{13}$$
The discrepancy principle stopping rule is defined as follows. Consider a fixed constant $\tau > 1$ and define
$$\bar m = \min\left\{ m \geq 0 : \|\hat A z_m - \hat b\| < \tau (\delta \|z_m\| + \varepsilon) \right\}.$$
We output the solution obtained at step $\max(0, \bar m - 1)$. Consider a minor variation of this rule:
$$\hat m = \begin{cases} \bar m & \text{if } q_{\bar m}(0) < \theta \varepsilon^{-1}, \\ \max(0, \bar m - 1) & \text{otherwise,} \end{cases}$$
where $q_{\bar m}$ is the degree $\bar m - 1$ polynomial such that $z_{\bar m} = q_{\bar m}(\hat A)\, \hat b$, and $\theta$ is an arbitrary positive constant such that $\theta < 1/\tau$. Nemirovskii established the following theorem:
Theorem 4.2. Assume that (a) $\max(\|A\|, \|\hat A\|) \leq L$; and that (b) $z^* = A^\mu u^*$ with $\|u^*\| \leq R$ for some $\mu > 0$. Then for any $\nu \in [0, 1]$, provided that $\hat m < \infty$ it holds that
$$\left\| A^\nu (z_{\hat m} - z^*) \right\|^2 \leq c(\theta, \tau, \mu)\, R^{\frac{2(1-\nu)}{1+\mu}} \left( \varepsilon + \delta R L^\mu \right)^{2(\nu+\mu)/(1+\mu)}.$$
4.2
Proof of Theorem 2.1
We apply Nemirovskii's result in our setting (assuming $r \geq \frac{1}{2}$): By identifying the approximate operator and data as $\hat A = S_n$ and $\hat b = T_n^* Y$, we see that the CG algorithm considered by Nemirovskii (13) is exactly (9), more precisely with the identification $z_m = f_m$.
For the population version, we identify $A = S$ and $z^* = f_H^*$ (remember that provided $r \geq \frac{1}{2}$ in the source condition, then there exists $f_H^* \in H$ such that $f^* = T f_H^*$).
Condition (a) of Nemirovskii's theorem 4.2 is satisfied with $L = \kappa$ by the boundedness of the kernel. Condition (b) is satisfied with $\mu = r - 1/2 \geq 0$ and $R = \kappa^{-r} \rho$, as implied by the source condition $SC(r, \rho)$. Finally, the concentration result 4.1 ensures that the approximation conditions (12) are satisfied with probability $1 - 2\delta$, more precisely with $\delta = \frac{4\kappa}{\sqrt{n}} \sqrt{\log \frac{4}{\delta}}$ and $\varepsilon = \frac{4 M \sqrt{\kappa}}{\sqrt{n}} \sqrt{\log \frac{4}{\delta}}$. (Here we replaced $\delta$ in (10) and (11) by $\delta/2$, so that the two conditions are satisfied simultaneously, by the union bound). The operator norm is upper bounded by the Hilbert-Schmidt norm, so that the deviation inequality for the operators is actually stronger than what is needed.
We consider the discrepancy principle stopping rule associated to these parameters, the choice $\theta = 1/(2\tau)$, and $\nu = \frac{1}{2}$, thus obtaining the result, since
$$\left\| A^{\frac{1}{2}} (z_{\hat m} - z^*) \right\|^2 = \left\| S^{\frac{1}{2}} (f_{\hat m} - f_H^*) \right\|_H^2 = \|f_{\hat m} - f_H^*\|_2^2.$$
4.3
Notes on the proof of Theorems 2.2 and 2.3
The above proof shows that an application of Nemirovskii's fundamental result for CG regularization of inverse problems under deterministic noise (on the data and the operator) allows us to obtain our first result. One key ingredient is the concentration property 4.1 which allows to bound deviations in a quasi-deterministic manner.
To prove the sharper results of Theorems 2.2 and 2.3, such a direct approach does not work unfortunately, and a complete rework and extension of the proof is necessary. The proof of Theorem 2.2 is presented in the supplementary material to the paper. In a nutshell, the concentration result 4.1 is too coarse to prove the optimal rates of convergence taking into account the effective dimension
parameter. Instead of that result, we have to consider the deviations from the mean in a "warped" norm, i.e. of the form $\left\| (S + \lambda I)^{-\frac{1}{2}} (T_n^* Y - T^* f^*) \right\|$ for the data, and $\left\| (S + \lambda I)^{-\frac{1}{2}} (S_n - S) \right\|_{HS}$ for the operator (with an appropriate choice of $\lambda > 0$) respectively. Deviations of this form were introduced and used in [5, 6] to obtain sharp rates in the framework of Tikhonov's regularization (2) and of the more general linear regularization schemes of the form (3). Bounds on deviations of this form can be obtained via a Bernstein-type concentration inequality for Hilbert-space valued random variables.
On the one hand, the results concerning linear regularization schemes of the form (3) do not apply
to the nonlinear CG regularization. On the other hand, Nemirovskii's result does not apply to deviations controlled in the warped norm. Moreover, the "outer" case introduces additional technical
difficulties. Therefore, the proofs for Theorems 2.2 and 2.3, while still following the overall fundamental structure and ideas introduced by Nemirovskii, are significantly different in that context. As
mentioned above, we present the complete proof of Theorem 2.2 in the supplementary material and
a sketch of the proof of Theorem 2.3.
5
Conclusion
In this work, we derived early stopping rules for kernel Conjugate Gradient regression that provide
optimal learning rates to the true target function. Depending on the situation that we study, the rates
are adaptive with respect to the regularity of the target function in some cases. The proofs of our
results rely most importantly on ideas introduced by Nemirovskii [15] and further developed by
Hanke [11] for CG methods in the deterministic case, and moreover on ideas inspired by [5, 6].
Certainly, in practice, as for a large majority of learning algorithms, cross-validation remains the
standard approach for model selection. The motivation of this work is however mainly theoretical,
and our overall goal is to show that from the learning theoretical point of view, CG regularization stands on equal footing with other well-studied regularization methods such as kernel ridge
regression or more general linear regularization methods (which include, among many others, $L_2$-boosting). We also note that theoretically well-grounded model selection rules can generally help
cross-validation in practice by providing a well-calibrated parametrization of regularizer functions,
or, as is the case here, of thresholds used in the stopping rule.
One crucial property used in the proofs is that the proposed CG regularization schemes can be conveniently cast in the reproducing kernel Hilbert space $H$ as displayed in e.g. (9). This reformulation is not possible for Kernel Partial Least Squares: It is also a CG type method, but uses the standard Euclidean norm instead of the $K_n$-norm used here. This point is the main technical justification
on why we focus on (5) rather than kernel PLS. Obtaining optimal convergence rates also valid for
Kernel PLS is an important future direction and should build on the present work.
Another important direction for future efforts is the derivation of stopping rules that do not depend on
the confidence parameter $\delta$. Currently, this dependence prevents us from going from convergence in high
probability to convergence in expectation, which would be desirable. Perhaps more importantly, it
would be of interest to find a stopping rule that is adaptive to both parameters r (target function
regularity) and s (effective dimension parameter) without their a priori knowledge. We recall that
our first stopping rule is adaptive to r but at the price of being worst-case in s. In the literature on
linear regularization methods, the optimal choice of regularization parameter is also non-adaptive,
be it when considering optimal rates with respect to r only [1] or to both r and s [5]. An approach to
alleviate this problem is to use a hold-out sample for model selection; this was studied theoretically
in [7] for linear regularization methods (see also [4] for an account of the properties of hold-out
in a general setup). We strongly believe that the hold-out method will yield theoretically founded
adaptive model selection for CG as well. However, hold-out is typically regarded as inelegant in that
it requires to throw away part of the data for estimation. It would be of more interest to study model
selection methods that are based on using the whole data in the estimation phase. The application of
Lepskii's method is a possible step towards this direction.
References
[1] F. Bauer, S. Pereverzev, and L. Rosasco. On Regularization Algorithms in Learning Theory. Journal of Complexity, 23:52–72, 2007.
[2] N. Bissantz, T. Hohage, A. Munk, and F. Ruymgaart. Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications. SIAM Journal on Numerical Analysis, 45(6):2610–2636, 2007.
[3] G. Blanchard and N. Krämer. Kernel Partial Least Squares is Universally Consistent. Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, JMLR Workshop & Conference Proceedings, 9:57–64, 2010.
[4] G. Blanchard and P. Massart. Discussion of V. Koltchinskii's "Local Rademacher complexities and oracle inequalities in risk minimization". Annals of Statistics, 34(6):2664–2671, 2006.
[5] A. Caponnetto. Optimal Rates for Regularization Operators in Learning Theory. Technical Report CBCL Paper 264 / CSAIL-TR 2006-062, Massachusetts Institute of Technology, 2006.
[6] A. Caponnetto and E. De Vito. Optimal Rates for Regularized Least-squares Algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.
[7] A. Caponnetto and Y. Yao. Cross-validation based Adaptation for Regularization Operators in Learning Theory. Analysis and Applications, 8(2):161–183, 2010.
[8] H. Chun and S. Keles. Sparse Partial Least Squares for Simultaneous Dimension Reduction and Variable Selection. Journal of the Royal Statistical Society B, 72(1):3–25, 2010.
[9] E. De Vito, L. Rosasco, A. Caponnetto, U. De Giovannini, and F. Odone. Learning from Examples as an Inverse Problem. Journal of Machine Learning Research, 6(1):883, 2006.
[10] L. Györfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution-Free Theory of Nonparametric Regression. Springer, 2002.
[11] M. Hanke. Conjugate Gradient Type Methods for Linear Ill-posed Problems. Pitman Research Notes in Mathematics Series, 327, 1995.
[12] L. Lo Gerfo, L. Rosasco, F. Odone, E. De Vito, and A. Verri. Spectral Algorithms for Supervised Learning. Neural Computation, 20:1873–1897, 2008.
[13] S. Mendelson and J. Neeman. Regularization in Kernel Learning. The Annals of Statistics, 38(1):526–565, 2010.
[14] P. Naik and C.L. Tsai. Partial Least Squares Estimator for Single-index Models. Journal of the Royal Statistical Society B, 62(4):763–771, 2000.
[15] A. S. Nemirovskii. The Regularizing Properties of the Adjoint Gradient Method in Ill-posed Problems. USSR Computational Mathematics and Mathematical Physics, 26(2):7–16, 1986.
[16] C. S. Ong. Kernels: Regularization and Optimization. Doctoral dissertation, Australian National University, 2005.
[17] C. S. Ong, X. Mary, S. Canu, and A. J. Smola. Learning with Non-positive Kernels. In Proceedings of the 21st International Conference on Machine Learning, pages 639–646, 2004.
[18] R. Rosipal and L.J. Trejo. Kernel Partial Least Squares Regression in Reproducing Kernel Hilbert Spaces. Journal of Machine Learning Research, 2:97–123, 2001.
[19] R. Rosipal, L.J. Trejo, and B. Matthews. Kernel PLS-SVC for Linear and Nonlinear Classification. In Proceedings of the Twentieth International Conference on Machine Learning, pages 640–647, Washington, DC, 2003.
[20] I. Steinwart, D. Hush, and C. Scovel. Optimal Rates for Regularized Least Squares Regression. In Proceedings of the 22nd Annual Conference on Learning Theory, pages 79–93, 2009.
[21] S. Wold, H. Ruhe, H. Wold, and W.J. Dunn III. The Collinearity Problem in Linear Regression. The Partial Least Squares (PLS) Approach to Generalized Inverses. SIAM Journal of Scientific and Statistical Computations, 5:735–743, 1984.
[22] T. Zhang. Learning bounds for kernel regression using effective data dimensionality. Neural Computation, 17(9):2077–2098, 2005.
Identifying Patients at Risk of Major Adverse
Cardiovascular Events Using Symbolic Mismatch
Zeeshan Syed
University of Michigan
Ann Arbor, MI 48109
[email protected]
John Guttag
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]
Abstract
Cardiovascular disease is the leading cause of death globally, resulting in 17 million deaths each year. Despite the availability of various treatment options, existing techniques based upon conventional medical knowledge often fail to identify patients who might have benefited from more aggressive therapy. In this paper, we describe and evaluate a novel unsupervised machine learning approach
for cardiac risk stratification. The key idea of our approach is to avoid specialized medical knowledge, and assess patient risk using symbolic mismatch, a new
metric to assess similarity in long-term time-series activity. We hypothesize that
high risk patients can be identified using symbolic mismatch, as individuals in
a population with unusual long-term physiological activity. We describe related
approaches that build on these ideas to provide improved medical decision making for patients who have recently suffered coronary attacks. We first describe
how to compute the symbolic mismatch between pairs of long term electrocardiographic (ECG) signals. This algorithm maps the original signals into a symbolic
domain, and provides a quantitative assessment of the difference between these
symbolic representations of the original signals. We then show how this measure
can be used with each of a one-class SVM, a nearest neighbor classifier, and hierarchical clustering to improve risk stratification. We evaluated our methods on a
population of 686 cardiac patients with available long-term electrocardiographic
data. In a univariate analysis, all of the methods provided a statistically significant
association with the occurrence of a major adverse cardiac event in the next 90
days. In a multivariate analysis that incorporated the most widely used clinical
risk variables, the nearest neighbor and hierarchical clustering approaches were
able to statistically significantly distinguish patients with a roughly two-fold risk
of suffering a major adverse cardiac event in the next 90 days.
1
Introduction
In medicine, as in many other disciplines, decisions are often based upon a comparative analysis.
Patients are given treatments that worked in the past on apparently similar conditions. When given
simple data (e.g., demographics, comorbidities, and laboratory values) such comparisons are relatively straightforward. For more complex data, such as continuous long-term signals recorded during
physiological monitoring, they are harder. Comparing such time-series is made challenging by three
factors: the need to efficiently compare very long signals across a large number of patients, the need
to deal with patient-specific differences, and the lack of a priori knowledge associating signals with
long-term medical outcomes.
In this paper, we exploit three different ideas to address these problems.
- We address the problems related to scale by abstracting the raw signal into a sequence of symbols,
- We address the problems related to patient-specific differences by using a novel technique, symbolic mismatch, that allows us to compare sequences of symbols drawn from distinct alphabets. Symbolic mismatch compares long-term time-series by quantifying differences between the morphology and frequency of prototypical functional units, and
- We address the problems related to lack of a priori knowledge using three different methods, each of which exploits the observation that high risk patients typically constitute a small minority in a population.
In the remainder of this paper, we present our work in the context of risk stratification for cardiovascular disease. Cardiovascular disease is the leading cause of death globally and causes roughly
17 million deaths each year [3]. Despite improvements in survival rates, in the United States, one
in four men and one in three women still die within a year of a recognized first heart attack [4].
This risk of death can be substantially lowered with an appropriate choice of treatment (e.g., drugs
to lower cholesterol and blood pressure; operations such as coronary artery bypass graft; and medical devices such as implantable cardioverter defibrillators) [3]. However, matching patients with
treatments that are appropriate for their risk has proven to be challenging [5,6].
That existing techniques based upon conventional medical knowledge have proven inadequate for
risk stratification leads us to explore methods with few a priori assumptions. We focus, in particular,
on identifying patients at elevated risk of major adverse cardiac events (death, myocardial infarction
and severe recurrent ischemia) following coronary attacks. This work uses long-term ECG signals
recorded during patient admission for ACS. These signals are routinely collected, potentially allowing for the work presented here to be deployed easily without imposing additional needs on patients,
caregivers, or the healthcare infrastructure.
Fortunately, only a minority of cardiac patients experience serious subsequent adverse cardiovascular events. For example, cardiac mortality over a 90 day period following ACS was reported to be 1.79% for the SYMPHONY trial involving 14,970 patients [1] and 1.71%
for the DISPERSE2 trial with 990 patients [2]. The rate of myocardial infarction (MI) over the
same period for the two trials was 5.11% for the SYMPHONY trial and 3.54% for the DISPERSE2
trial. Our hypothesis is that these patients can be discovered as anomalies in the population, i.e.,
their physiological activity over long periods of time is dissimilar to the majority of the patients in
the population. In contrast to algorithms that require labeled training data, we propose identifying
these patients using unsupervised approaches based on three machine learning methods previously
reported in the literature: one-class support vector machines (SVMs), nearest neighbor analysis, and
hierarchical clustering.
The main contributions of our work are: (1) we describe a novel unsupervised approach to cardiovascular risk stratification that is complementary to existing clinical approaches, (2) we explore the
idea of similarity-based clinical risk stratification where patients are categorized in terms of their
similarities rather than specific features based on prior knowledge, (3) we develop the hypothesis
that patients at future risk of adverse outcomes can be detected using an unsupervised approach as
outliers in a population, (4) we present symbolic mismatch, as a way to efficiently compare very long
time-series without first reducing them to a set of features or requiring symbol registration across
patients, and (5) we present a rigorous evaluation of unsupervised similarity-based risk stratification
using long-term data from nearly 700 patients with detailed admissions and follow-up data.
2
Symbolic Mismatch
We start by describing the process through which symbolic mismatch is measured on ECG signals.
2.1
Symbolization
As a first step, the ECG signal $z_m$ for each patient $m = 1, \ldots, n$ is symbolized using the technique
proposed by [7]. To segment the ECG signal into beats, we use two open-source QRS detection
algorithms [8,9]. QRS complexes are marked at locations where both algorithms agree. A variant
of dynamic time-warping (DTW) [7] is then used to quantify differences in morphology between
beats. Using this information, beats with distinct morphologies are partitioned into groups, with
each group assigned a unique label or symbol. This is done using a Max-Min iterative clustering
algorithm that starts by choosing the first observation as the first centroid, $c_1$, and initializing the set $S$ of centroids to $\{c_1\}$. During the $i$-th iteration, $c_i$ is chosen such that it maximizes the minimum difference between $c_i$ and observations in $S$:
$$c_i = \arg\max_{x \notin S} \min_{y \in S} C(x, y) \tag{1}$$
where $C(x, y)$ is the DTW difference between $x$ and $y$. The set $S$ is incremented at the end of each iteration such that $S = S \cup c_i$.
The number of clusters discovered by Max-Min clustering is chosen by iterating until the maximized minimum difference falls below a threshold $\epsilon$. At this point, the set $S$ comprises the centroids for the clustering process, and the final assignment of beats to clusters proceeds by matching each beat to its nearest centroid. Each set of beats assigned to a centroid constitutes a unique cluster. The final number of clusters obtained using this process depends on the separability of the underlying data.
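A minimal sketch of the Max-Min selection loop, assuming the pairwise DTW differences between beats have been precomputed into a matrix; the function name and array bookkeeping are ours.

```python
import numpy as np

def max_min_centroids(D, eps):
    # D: (N, N) matrix of pairwise DTW differences between beats.
    centroids = [0]                    # seed S with the first observation
    mind = D[0].copy()                 # min difference of each beat to S
    while True:
        i = int(np.argmax(mind))       # candidate maximizing min difference
        if mind[i] < eps:              # stop once the max-min falls below eps
            break
        centroids.append(i)
        mind = np.minimum(mind, D[i])  # refresh min differences to S
    return centroids
```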
The overall effect of the DTW-based partitioning of beats is to transform the original raw ECG
signal into a sequence of symbols, i.e., into a sequence of labels corresponding to the different beat
morphology classes that occur in the signal. Our approach differs from the methods typically used
to annotate ECG signals in two ways. First, we avoid using specialized knowledge to partition
beats into known clinical classes. There is a set of generally accepted labels that cardiologists
use to differentiate distinct kinds of heart beats. However, in many cases, finer distinctions than
provided by these labels can be clinically relevant [7]. Our use of beat clustering rather than beat
classification allows us to infer characteristic morphology classes that capture these finer-grained
distinctions. Second, our approach does not involve extracting features (e.g., the length of the beat
or the amplitude of the P wave) from individual beats. Instead, our clustering algorithm compares
the entire raw morphology of pairs of beats. This approach is potentially advantageous, because it
does not assume a priori knowledge about what aspects of a beat are most relevant. It can also be
extended to other time-series data (e.g., blood pressure and respiration waveforms).
2.2 Measuring Mismatch in Symbolic Representations
Denoting the set of symbol centroids for patient p as Sp and the set of frequencies with which these
symbols occur in the electrocardiogram as Fp (for patient q an analogous representation is adopted),
we define the symbolic mismatch between the long-term ECG time-series for patients p and q as:
$$\Psi_{p,q} = \sum_{p_i \in S_p} \sum_{q_j \in S_q} C(p_i, q_j)\, F_p[p_i]\, F_q[q_j] \tag{2}$$
where C(pi , qj ) corresponds to the DTW cost of aligning the centroids of symbol classes pi and qj .
Intuitively, the symbolic mismatch between patients p and q corresponds to an estimate of the expected difference in morphology between any two randomly chosen beats from these patients. The
symbolic mismatch computation achieves this by weighting the difference between the centroids for
every pair of symbols for the patients by the frequencies with which these symbols occur.
An important feature of symbolic mismatch is that it avoids the need to set up a correspondence
between the symbols of patients p and q. In contrast to cluster matching techniques [10,11], which compare data for two patients by first making an assignment from symbols in one patient to the other, symbolic mismatch does not require any cross-patient registration of symbols. Instead, it
performs weighted morphologic comparisons between all symbol centroids for patients p and q. As
a result, the symbolization process does not need to be restricted to well-defined labels and is able
to use a richer set of patient-specific symbols that capture fine-grained activity over long periods.
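Read as a double sum weighted by symbol frequencies, equation (2) is a bilinear form. A minimal sketch under that reading (function name ours; the DTW costs between the two patients' centroids are assumed precomputed):

```python
import numpy as np

def symbolic_mismatch(C, f_p, f_q):
    # C[i, j]: DTW cost between centroid i of patient p and centroid j of q.
    # f_p, f_q: frequencies of the corresponding symbols in each recording.
    return float(np.asarray(f_p) @ np.asarray(C) @ np.asarray(f_q))
```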
2.3
Spectrum Clipping and Adaptation for Kernel-based Methods
The formulation for symbolic mismatch in Equation 2 gives rise to a symmetric dissimilarity matrix. For methods that are unable to work directly from dissimilarities, this can be transformed into a
similarity matrix using a generalized radial basis function. For both the dissimilarity and similarity
case, however, symbolic mismatch can produce a matrix that is indefinite. This can be problematic when using symbolic mismatch with kernel-based algorithms since the optimization problems become non-convex and the underlying theory is invalidated. In particular, kernel-based classification methods require Mercer's condition to be satisfied by a positive semi-definite kernel matrix [12].
This creates the need to transform the symbolic mismatch matrix before it can be used as a kernel in
these methods.
We use spectrum clipping to generalize the use of symbolic mismatch for classification. This approach has been shown both theoretically and empirically to offer advantages over other strategies
(e.g., spectrum flipping, spectrum shifting, spectrum squaring, and the use of indefinite kernels)
[13]. The symmetric mismatch matrix $\Psi$ has an eigenvalue decomposition:
$$\Psi = U^\top \Lambda U \tag{3}$$
where $U$ is an orthogonal matrix and $\Lambda$ is a diagonal matrix of real eigenvalues:
$$\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n) \tag{4}$$
Spectrum clipping makes $\Psi$ positive semi-definite by clipping all negative eigenvalues to zero. The modified positive semi-definite symbolic mismatch matrix is then given by:
$$\Psi_{clip} = U^\top \Lambda_{clip} U \tag{5}$$
where:
$$\Lambda_{clip} = \mathrm{diag}(\max(\lambda_1, 0), \ldots, \max(\lambda_n, 0)) \tag{6}$$
Using $\Psi_{clip}$ as a kernel matrix is then equivalent to using $(\Lambda_{clip})^{1/2} u_i$ as the $i$-th training sample.
Though we introduce spectrum clipping mainly for the purpose of broadening the applicability of
symbolic mismatch to kernel-based methods, this approach offers additional advantages. When the
negative eigenvalues of the similarity matrix are caused by noise, one can view spectrum clipping as
a denoising step [14]. The results of our experiments, presented later in this paper, support the view
of spectrum clipping being useful in a broader context.
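Equations (3)-(6) reduce to one symmetric eigendecomposition and an elementwise clip. A minimal sketch (function name ours; the explicit symmetrization is a guard against floating-point asymmetry, not part of the method as stated):

```python
import numpy as np

def spectrum_clip(M):
    # Eigendecompose the symmetric mismatch matrix and zero out negative
    # eigenvalues (Eqs. 3-6), yielding a positive semi-definite kernel.
    w, U = np.linalg.eigh((M + M.T) / 2.0)
    return (U * np.clip(w, 0.0, None)) @ U.T
```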
3
Risk Stratification Using Symbolic Mismatch
We now sketch three different approaches using symbolic mismatch to identify high risk patients in a
population. The following two sections contain an empirical evaluation of each. The first approach
uses a one-class SVM and a symbolic mismatch similarity matrix obtained using a generalized
radial basis transformation. The other two approaches, nearest neighbor analysis and hierarchical
clustering, use the symbolic mismatch dissimilarity matrix. In each case, the symbolic mismatch
matrix is processed using spectrum clipping.
3.1
Classification Approach
SVMs can be applied to anomaly detection in a one-class setting [15]. This is done by mapping the
data into the feature space corresponding to the kernel and separating instances from the origin with
the maximum margin. To separate data from the origin, the following quadratic program is solved:
$$\min_{w, \xi, p} \; \frac{1}{2} \|w\|^2 + \frac{1}{vn} \sum_i \xi_i - p \tag{7}$$
subject to:
$$(w \cdot \Phi(z_i)) \geq p - \xi_i, \quad i = 1, \ldots, n, \qquad \xi_i \geq 0 \tag{8}$$
where $v$ reflects the tradeoff between incorporating outliers and minimizing the support region.
For a new instance, the label is determined by evaluating which side of the hyperplane the instance falls on in the feature space. The resulting predicted label in terms of the Lagrange multipliers $\alpha_i$ and the spectrum clipped symbolic mismatch similarity matrix $\Psi_{clip}$ is then:
$$\hat y_j = \mathrm{sgn}\left( \sum_i \alpha_i \Psi_{clip}(i, j) - p \right) \tag{9}$$
We apply this approach to train a one-class SVM on all patients. Patients outside the enclosing
boundary are labeled anomalies. The parameter v can be varied to control the size of this group.
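A hedged sketch of this classification step using scikit-learn's one-class SVM with a precomputed kernel; setting nu = 0.25 so that roughly a quarter of patients fall outside the boundary is our illustrative stand-in for the quartile rule described in the evaluation section.

```python
from sklearn.svm import OneClassSVM

def one_class_risk(K_clip, nu=0.25):
    # K_clip: spectrum-clipped symbolic-mismatch similarity matrix (PSD).
    model = OneClassSVM(kernel="precomputed", nu=nu).fit(K_clip)
    return model.predict(K_clip) == -1   # True = outside boundary (high risk)
```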
3.2
Nearest Neighbor Approach
Our second approach is based on the concept of nearest neighbor analysis. The assumption underlying this approach is that normal data instances occur in dense neighborhoods, while anomalies occur
far from their closest neighbors.
We use an approach similar to [16]. The anomaly score of each patient's long-term time-series is computed as the sum of its distances from the time-series for its k-nearest neighbors, as measured by symbolic mismatch. Patients with anomaly scores exceeding a threshold are labeled anomalies.
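A minimal sketch of the k-nearest-neighbor anomaly score over the symbolic-mismatch dissimilarity matrix (function name ours); the commented line shows the top-quartile cutoff used in the evaluation.

```python
import numpy as np

def knn_anomaly_scores(D, k=7):
    # D: (n, n) symbolic-mismatch dissimilarity matrix.
    n = D.shape[0]
    scores = np.empty(n)
    for i in range(n):
        d = np.delete(D[i], i)            # drop the self-distance
        scores[i] = np.sort(d)[:k].sum()  # sum of k smallest distances
    return scores

# high_risk = scores >= np.quantile(scores, 0.75)  # top-quartile cutoff
```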
3.3 Clustering Approach
Our third approach is based on hierarchical clustering. We place each patient in a separate cluster,
and then proceed in each iteration to merge the two clusters that are most similar to each other. The
distance between two clusters is defined as the average of the pairwise symbolic mismatch of the
patients in each cluster. The clustering process terminates when it enters the region of diminishing
returns (i.e., at the "knee" of the curve corresponding to the distance of clusters merged together at
each iteration). At this point, all patients outside the largest cluster are labeled as anomalies.
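A hedged sketch of this procedure with SciPy's average-linkage clustering; the cut level n_clusters is an assumption standing in for the knee detection on the merge-distance curve, which the sketch does not implement.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_outliers(D, n_clusters):
    # D: symmetric mismatch matrix with zero diagonal; average linkage merges
    # the two clusters with the smallest mean pairwise mismatch at each step.
    Z = linkage(squareform(D, checks=False), method="average")
    labels = fcluster(Z, t=n_clusters, criterion="maxclust")
    largest = np.bincount(labels).argmax()
    return labels != largest              # True = outside the largest cluster
```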
4
Evaluation Methodology
We evaluated our work on patients enrolled in the DISPERSE2 trial [2]. Patients in the study were
admitted to a hospital with non-ST-elevation ACS. Three lead continuous ECG monitoring (LifeCard
CF / Pathfinder, DelMar Reynolds / Spacelabs, Issaqua WA) was performed for a median duration
of four days at a sampling rate of 128 Hz. The endpoints of cardiovascular death, myocardial
infarction and severe recurrent ischemia were adjudicated by a blinded Clinical Events Committee
for a median follow-up period of 60 days. The maximum follow-up was 90 days. Data from 686
patients was available after removal of noise-corrupted signals. During the follow-up there were
14 cardiovascular deaths, 28 myocardial infarctions, and 13 cases of severe recurrent ischemia. We
define a major adverse cardiac event to be any of these three adverse events.
We studied the effectiveness of combining symbolic mismatch with each of classification, nearest neighbor analysis and clustering in identifying a high risk group of patients. Consistent with
other clinical studies to evaluate methods for risk stratification in the setting of ACS [17], we classified patients in the highest quartile as the high risk group. For the classification approach, this
corresponded to choosing v such that the group of patients lying outside the enclosing boundary
constituted roughly 25% of the population. For the nearest neighbor approach we investigated all
odd values of k from 3 to 9, and patients with anomaly scores in the top 25% of the population were
classified as being at high risk. For the clustering approach, the varying sizes of the clusters merged
together at each step made it difficult to select a high risk quartile. Instead, patients lying outside
the largest cluster were categorized as being at risk. In the tests reported later in this paper, this
group contained roughly 23% of the patients in the population. We used the LIBSVM implementation
for our one-class SVM. Both the nearest neighbor and clustering approaches were carried out using
MATLAB implementations.
We employed Kaplan-Meier survival analysis to compare the rates for major adverse cardiac events
between patients declared to be at high and low risk. Hazard ratios (HR) and 95% confidence intervals (CI) were estimated using a Cox proportional hazards regression model. The predictions of each approach were studied in univariate models, and also in multivariate models that additionally included other clinical risk variables (age≥65 years, gender, smoking history, hypertension, diabetes mellitus, hyperlipidemia, history of chronic obstructive pulmonary disorder (COPD), history
of coronary heart disease (CHD), previous MI, previous angina, ST depression on admission, index
diagnosis of MI) as well as ECG risk metrics proposed in the past (heart rate variability (HRV), heart
rate turbulence (HRT), and deceleration capacity (DC)) [18].
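The survival analysis could be reproduced along these lines with the lifelines package (not the tooling used in the paper; the DataFrame column names and the selected summary fields are illustrative assumptions):

```python
import pandas as pd
from lifelines import CoxPHFitter

def cox_hazard_ratios(df, predictors):
    """Fit a Cox proportional hazards model and report hazard ratios.

    df is assumed to contain one row per patient with:
      - 'followup_days': follow-up time (censored at 90 days),
      - 'event': 1 if a major adverse cardiac event occurred, else 0,
      - one binary column per risk prediction / clinical covariate.
    """
    cph = CoxPHFitter()
    cph.fit(df[["followup_days", "event"] + list(predictors)],
            duration_col="followup_days", event_col="event")
    # exp(coef) is the hazard ratio; lifelines also reports p and the 95% CI.
    return cph.summary.loc[list(predictors),
                           ["exp(coef)", "p",
                            "exp(coef) lower 95%", "exp(coef) upper 95%"]]

# Univariate model: predictors = ["high_risk"];
# multivariate model: append the clinical and ECG variables of Table 2.
```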
Method                     HR      P Value   95% CI
One-Class SVM              1.38    0.033     1.04-1.89
3-Nearest Neighbor         1.91    0.031     1.06-3.44
5-Nearest Neighbor         2.10    0.013     1.17-3.76
7-Nearest Neighbor         2.28    0.005     1.28-4.07
9-Nearest Neighbor         2.07    0.015     1.15-3.71
Hierarchical Clustering    2.04    0.017     1.13-3.68
Table 1: Univariate association of risk predictions from different approaches using symbolic mismatch with major adverse cardiac events over a 90 day period following ACS.
Clinical Variable          HR      P Value   95% CI
Age≥65 years               1.82    0.041     1.02-3.24
Female Gender              0.69    0.261     0.37-1.31
Current Smoker             1.05    0.866     0.59-1.87
Hypertension               1.44    0.257     0.77-2.68
Diabetes Mellitus          1.95    0.072     0.94-4.04
Hyperlipidemia             1.00    0.994     0.55-1.82
History of COPD            1.05    0.933     0.37-2.92
History of CHD             1.10    0.994     0.37-2.92
Previous MI                1.17    0.630     0.62-2.22
Previous angina            0.94    0.842     0.53-1.68
ST depression>0.5mm        1.13    0.675     0.64-2.01
Index diagnosis of MI      1.42    0.134     0.90-2.26
Heart Rate Variability     1.56    0.128     0.88-2.77
Heart Rate Turbulence      1.64    0.013     1.11-2.42
Deceleration Capacity      1.77    0.002     1.23-2.54
Table 2: Univariate association of existing clinical and ECG risk variables with major adverse cardiac events over a 90 day period following ACS.
5 Results
5.1 Univariate Results
Results of univariate analysis for all three unsupervised symbolic mismatch-based approaches are
presented in Table 1. The predictions from all methods showed a statistically significant (i.e., p <
0.05) association with major adverse cardiac events following ACS. The results in Table 1 can
be interpreted as roughly a doubled rate of adverse outcomes per unit time in patients identified as
being at high risk by the nearest neighbor and clustering approaches. For the classification approach,
patients identified as being at high risk had a nearly 40% increased risk.
For comparison, we also include the univariate association of the other clinical and ECG risk variables in our study (Table 2). Both the nearest neighbor and clustering approaches had a higher hazard
ratio in this patient population than any of the other variables studied. Of the clinical risk variables,
only age was found to be significantly associated on univariate analysis with major cardiac events
after ACS. Diabetes (p=0.072) was marginally outside the 5% level of significance. Of the ECG risk
variables, both HRT and DC showed a univariate association with major adverse cardiac events in
this population. These results are consistent with the clinical literature on these risk metrics.
5.2 Multivariate Results
We measured the correlation between the predictions of the unsupervised symbolic mismatch-based
approaches and both the clinical and ECG risk variables. All of the unsupervised approaches had
low correlation with both sets of variables (R ≤ 0.2). This suggests that the results of these novel
approaches can be usefully combined with results of existing approaches.
On multivariate analysis, both the nearest neighbor approach and the clustering approach were independent predictors of adverse outcomes (Table 3). In our study, the nearest neighbor approach (for
k > 3) had the highest hazard ratio on both univariate and multivariate analysis. Both the nearest
neighbor and clustering approaches predicted patients with an approximately two-fold increased risk
of adverse outcomes. This increased risk did not change much even after adjusting for other clinical
and ECG risk variables.
Method                     Adjusted HR   P Value   95% CI
One-Class SVM              1.32          0.074     0.97-1.79
3-Nearest Neighbor         1.88          0.042     1.02-3.46
5-Nearest Neighbor         2.07          0.018     1.13-3.79
7-Nearest Neighbor         2.25          0.008     1.23-4.11
9-Nearest Neighbor         2.04          0.021     1.11-3.73
Hierarchical Clustering    1.86          0.042     1.02-3.46
Table 3: Multivariate association of high risk predictions from different approaches using symbolic
mismatch with major adverse cardiac events over a 90 day period following ACS. Multivariate
results adjusted for variables in Table 2.
Method                     HR      P Value   95% CI
One-Class SVM              1.36    0.038     1.01-1.79
3-Nearest Neighbor         1.74    0.069     0.96-3.16
5-Nearest Neighbor         1.57    0.142     0.86-2.88
7-Nearest Neighbor         1.73    0.071     0.95-3.14
9-Nearest Neighbor         1.89    0.034     1.05-3.41
Hierarchical Clustering    1.19    0.563     0.67-2.12
Table 4: Univariate association of high risk predictions without the use of spectrum clipping. None
of the approaches showed a statistically significant association with the study endpoint in any of the
multivariate models including other clinical risk variables when spectrum clipping was not used.
5.3 Effect of Spectrum Clipping
We also investigated the effect of spectrum clipping on the performance of our different risk stratification approaches. Table 4 presents the associations when spectrum clipping was not used. For
all three methods, performance was worse without the use of spectrum clipping, although the effect
was small for the one-class SVM case.
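For reference, spectrum clipping reduces to a single eigenvalue operation; the sketch below is the standard construction from the similarity-based classification literature [13, 14], not code from the paper:

```python
import numpy as np

def spectrum_clip(S):
    """Return the nearest positive semidefinite version of a symmetric
    similarity matrix by clipping its negative eigenvalues to zero."""
    S = (S + S.T) / 2.0                    # symmetrize for numerical safety
    eigvals, eigvecs = np.linalg.eigh(S)
    eigvals = np.clip(eigvals, 0.0, None)  # drop the negative spectrum
    return eigvecs @ np.diag(eigvals) @ eigvecs.T
```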
6 Related Work
Most previous work on comparing signals in terms of their raw samples (e.g., metrics such as
dynamic time warping, longest common subsequence, edit distance with real penalty, sequence
weighted alignment, spatial assembling distance, threshold queries) [19] focuses on relatively short
time-series. This is due to the runtime of these methods (quadratic for many methods) and the need
to reason in terms of the frequency and dynamics of higher-level signal constructs (as opposed to
individual samples) when studying systems over long periods.
Most prior research on comparing long-term time-series focuses instead on extracting specific features from long-term signals and quantifying the differences between these features. In the context
of cardiovascular disease, long-term ECG is often reduced to features (e.g., mean heart rate or heart
rate variability) and compared in terms of these features. These approaches, unlike our symbolic
mismatch-based approaches, draw upon significant a priori knowledge. Our belief was that for
applications like risk stratifying patients for major cardiac events, focusing on a set of specialized
features leads to important information being potentially missed. In our work, we focus instead
on developing an approach that avoids use of significant a priori knowledge by comparing the raw
morphology of long-term time-series. We propose doing this in a computationally efficient and
systematic way through symbolization. While this use of symbolization represents a lossy compression of the original signal, the underlying DTW-based process of quantifying differences between
long-term time-series remains grounded in the comparison of raw morphology.
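For concreteness, the DTW recurrence underlying this comparison is sketched below (a textbook formulation with an absolute-difference local cost; the paper applies DTW to prototypical beat morphologies rather than to arbitrary 1-D sequences):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) dynamic time warping distance between two
    1-D sequences, e.g. two symbol-centroid morphologies."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best alignment ending at (i, j): match, insertion, or deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```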
Symbolization maps the comparison of long-term time-series into the domain of sequence comparison. There is an extensive body of prior work focusing on the comparison of sequential or string
data. Algorithms based on measuring the edit distance between strings are widely used in disciplines such as computational biology, but are typically restricted to comparisons of short sequences
because of their computational complexity. Research on the use of profile hidden Markov models
[20,21] to optimize recognition of binary labeled sequences is more closely related to our work. This
work focuses on learning the parameters of a hidden Markov model that can represent approximations of sequences and can be used to score other sequences. Such approaches require large amounts
of data or good priors to train the hidden Markov models. Computing forward and backward probabilities from the Baum-Welch algorithm is also very computationally intensive. Other research in
this area focuses on mismatch tree-based kernels [22], which use the mismatch tree data structure
[23] to quantify the difference between two sequences based on the approximate occurrence of fixed
length subsequences within them. Similar to this approach is work on using a "bag of motifs" representation [24], which provides a more flexible representation than fixed-length subsequences but
usually requires prior knowledge of motifs in the data [24].
In contrast to these efforts, we use a simple, computationally efficient approach to compare symbolic sequences without prior knowledge. Most importantly, our approach helps address the situation where symbolizing long-term time-series in a patient-specific manner leads to symbolic sequences drawn from different alphabets [25]. In this case, hidden Markov models, mismatch trees, or a "bag of motifs" approach trained on one patient cannot be easily used to score the sequences for
other patients. Instead, any comparative approach must maintain a hard or soft registration of symbols across individuals. Symbolic mismatch complements existing work on sequence comparison
by using a measure that quantifies differences across patients while retaining information on how
the symbols for these patients differ.
Finally, we distinguish our work from earlier methods for ECG-based risk stratification. These methods typically calculate a particular pre-defined feature from the raw ECG signal and use it to rank patients along a risk continuum. Our approach, focusing on detecting patients with high symbolic mismatch relative to other patients in the population, is orthogonal to the use of specialized high risk features along two important dimensions. First, it does not require the presence of significant prior knowledge. For cardiovascular care, we only assume that ECG signals from patients who are at high risk differ from those of the rest of the population. There are no specific assumptions about the nature of these differences. Second, the ability to partition patients into groups with similar ECG characteristics and potentially common risk profiles allows for a more fine-grained understanding of how a patient's future health may evolve over time. Matching patients to past
cases with similar ECG signals could lead to more accurate assignments of risk scores for particular
events such as death and recurring heart attacks.
7 Discussion
In this paper, we described a novel unsupervised learning approach to cardiovascular risk stratification that is complementary to existing clinical approaches.
We proposed using symbolic mismatch to quantify differences in long-term physiological time-series. Our approach uses a symbolic transformation to measure changes in the morphology and
frequency of prototypical functional units observed over long periods in two signals. Symbolic
mismatch avoids feature extraction and deals with inter-patient differences in a parameter-less way.
We also explored the hypothesis that high risk patients in a population can be identified as individuals
with anomalous long-term signals. We developed multiple comparative approaches to detect such
patients, and evaluated these methods in a real-world application of risk stratification for major
adverse cardiac events following ACS.
Our results suggest that symbolic mismatch-based comparative approaches may have clinical utility
in identifying high risk patients, and can provide information that is complementary to existing
clinical risk variables. In particular, we note that the hazard ratios we report are typically considered
clinically meaningful. In a different study of 118 variables in 15,000 post-ACS patients with 90 day
follow-up similar to our population, [1] did not find any variables with a hazard ratio greater than
2.00. We observed a similar result in our patient population, where all of the existing clinical and
ECG risk variables had a hazard ratio less than 2.00. In contrast to this, our nearest neighbor-based
approach achieved a hazard ratio of 2.28, even after being adjusted for existing risk measures.
Our study has limitations. While our decision to compare the morphology and frequency of prototypical functional units leads to a measure that is computationally efficient on large volumes of
data, this process does not capture information related to the dynamics of these prototypical units.
We also observe that all three of the comparative approaches investigated in our study focus only on
identifying patients who are anomalies. While we believe that symbolic mismatch may have further
use in supervised learning, this hypothesis needs to be evaluated more fully in future work.
References
[1] LK Newby, MV Bhapkar, HD White et al. (2003) Predictors of 90-day outcome in patients stabilized after
acute coronary syndromes. Eur Heart J, 172-181.
[2] C.P. Cannon, S. Husted, R.A. Harrington et al. (2007) Safety, Tolerability, and Initial Efficacy of AZD6140, the First Reversible Oral Adenosine Diphosphate Receptor Antagonist, Compared With Clopidogrel, in Patients With Non-ST-Segment Elevation Acute Coronary Syndrome: Primary Results of the DISPERSE-2 Trial. J Am Coll Cardiol, 1844-1851.
[3] World Health Organization. (2009) Cardiovascular Diseases Fact Sheet.
[4] J. Mackay, G.A. Mensah, S. Mendis et al. (2004) The Atlas of Heart Disease and Stroke. WHO.
[5] J.J. Bailey, A.S. Berson, H. Handelsman et al. (2001) Utility of current risk stratification tests for predicting
major arrhythmic events after myocardial infarction. J Am Coll Cardiol, 1902-1911.
[6] G. Lopera & A.B. Curtis. (2009) Risk stratification for sudden cardiac death: current approaches and
predictive value. Curr Cardiol Rev, 56-64.
[7] Z. Syed, J. Guttag & C. Stultz. (2007) Clustering and Symbolic Analysis of Cardiovascular Signals: Discovery and Visualization of Medically Relevant Patterns in Long-Term Data Using Limited Prior Knowledge.
EURASIP J Adv Sig Proc, 1-16.
[8] P.S. Hamilton & W.J. Tompkins. (1986) Quantitative investigation of QRS detection rules using the
MIT/BIH arrhythmia database. IEEE Trans Biomed Eng, 1157-1165.
[9] W. Zong, GB Moody, & D. Jiang. (2003) A robust open-source algorithm to detect onset and duration of
QRS complexes. Comp Cardiol, 737-740.
[10] S.H. Chang, F.H. Cheng, W. Hsu et al. (1997) Fast algorithm for point pattern matching: invariant to
translations, rotations and scale changes. Pattern Recognition, 311-320.
[11] W.W. Cohen & J. Richman (2002). Learning to match and cluster large high-dimensional data sets for data
integration. In Proc. ACM SIGKDD, 475-480.
[12] B. Scholkopf & A.J. Smola. (2002) Learning with Kernels. MIT Press.
[13] Y. Chen, E.K. Garcia, M.R. Gupta et al. (2009) Similarity-based classification: concepts and algorithms.
JMLR, 747-776.
[14] G. Wu, EY. Chang & Z. Zhang. (2005) An analysis of transformation on non-positive semidefinite similarity matrix for kernel machines. Technical report, University of California, Santa Barbara.
[15] B. Scholkopf, J.C. Platt, J. Shawe-Taylor, et al. (2001) Estimating the support of a high-dimensional
distribution. Neural Computation, 1443-1471.
[16] E. Eskin, A. Arnold, M. Prerau et al. (2002) A geometric framework for unsupervised anomaly detection.
Applications of Data Mining in Computer Security, 1-20.
[17] M.G. Shlipak, J.H. Ix, K. Bibbins-Domingo et al. (2008) Biomarkers to predict recurrent cardiovascular
disease: the Heart and Soul Study. JAMA, 50-57.
[18] B. M. Scirica. (2010) Acute coronary syndrome: emerging tools for diagnosis and risk assessment. J Am
Coll Cardiol, 1403-1415.
[19] H. Ding, G. Trajcevski, P. Scheuermann et al. (2008) Querying and mining of time series data: experimental comparison of representations and distance measures. In Proc. VLDB, 1542-1552.
[20] A. Krogh. (1994) Hidden Markov models for labeled sequences. In Proc. ICPR, 140-144.
[21] T. Jaakkola, M. Diekhans & D. Haussler. (1999) Using the Fisher kernel method to detect remote protein
homologies. In Proc. ISMB, 149-158.
[22] C. Leslie, E. Eskin, J. Weston et al. (2003) Mismatch string kernels for SVM protein classification. In
Proc. NIPS, 1441-1448.
[23] E. Eskin & P.A. Pevzner. (2002) Finding composite regulatory patterns in DNA sequences. Bioinformatics,
354-363.
[24] A. Ben-Hur & D. Brutlag. (2006) Sequence motifs: highly predictive features of protein function. Feature
Extraction, 625-645.
[25] Z. Syed, C. Stultz, M. Kellis et al. (2010) Motif discovery in physiological datasets: a methodology for
inferring predictive elements. ACM Trans. Knowledge Discovery in Data, 1-23.