Distribution-Calibrated Hierarchical Classification
Ofer Dekel
Microsoft Research
One Microsoft Way, Redmond, WA 98052, USA
[email protected]
Abstract
While many advances have already been made in hierarchical classification learning, we take a step back and examine how a hierarchical classification problem
should be formally defined. We pay particular attention to the fact that many arbitrary decisions go into the design of the label taxonomy that is given with the
training data. Moreover, many hand-designed taxonomies are unbalanced and
misrepresent the class structure in the underlying data distribution. We attempt
to correct these problems by using the data distribution itself to calibrate the hierarchical classification loss function. This distribution-based correction must be
done with care, to avoid introducing unmanageable statistical dependencies into
the learning problem. This leads us off the beaten path of binomial-type estimation and into the unfamiliar waters of geometric-type estimation. In this paper,
we present a new calibrated definition of statistical risk for hierarchical classification, an unbiased estimator for this risk, and a new algorithmic reduction from
hierarchical classification to cost-sensitive classification.
1 Introduction
Multiclass classification is the task of assigning labels from a predefined label-set to instances in a
given domain. For example, consider the task of assigning a topic to each document in a corpus.
If a training set of labeled documents is available, then a multiclass classifier can be trained using
a supervised machine learning algorithm. Often, large label-sets can be organized in a taxonomy.
Examples of popular label taxonomies are the ODP taxonomy of web pages [2], the gene ontology
[6], and the LCC ontology of book topics [1]. A taxonomy is a hierarchical structure over labels,
where some labels define very general concepts, and other labels define more specific specializations
of those general concepts. A taxonomy of document topics could include the labels MUSIC, CLAS SICAL MUSIC, and POPULAR MUSIC, where the last two are special cases of the first. Some label
taxonomies form trees (each label has a single parent) while others form directed acyclic graphs.
When a label taxonomy is given alongside a training set, the multiclass classification problem is
often called a hierarchical classification problem. The label taxonomy defines a structure over the
multiclass problem, and this structure should be used both in the formal definition of the hierarchical
classification problem, and in the design of learning algorithms to solve this problem.
Most hierarchical classification learning algorithms treat the taxonomy as an indisputable definitive
model of the world, never questioning its accuracy. However, most taxonomies are authored by
human editors and subjective matters of style and taste play a major role in their design. Many
arbitrary decisions go into the design of a taxonomy, and when multiple editors are involved, these
arbitrary decisions are made inconsistently. Figure 1 shows two versions of a simple taxonomy, both
equally reasonable; choosing between them is a matter of personal preference. Arbitrary decisions
that go into the taxonomy design can have a significant influence on the outcome of the learning
algorithm [19]. Ideally, we want learning algorithms that are immune to the arbitrariness in the
taxonomy.
The arbitrary factor in popular label taxonomies is a well-known phenomenon. [17] gives the example of the Library of Congress Classification system (LCC), a widely adopted and constantly updated taxonomy of "all knowledge", which includes the category WORLD HISTORY and four of its direct subcategories: ASIA, AFRICA, NETHERLANDS, and BALKAN PENINSULA. There is a clear imbalance between the level of granularity of ASIA versus its sibling BALKAN PENINSULA. The Dewey Decimal Classification (DDC), another widely accepted taxonomy of "all knowledge", defines ten main classes, each of which has exactly ten subclasses, and each of those again has exactly ten subclasses. The rigid choice of a decimal fan-out is an arbitrary one, and stems from an aesthetic ideal rather than a notion of informativeness. Incidentally, the ten subclasses of RELIGION in the DDC include six categories about Christianity and the additional category OTHER RELIGIONS, demonstrating the editor's clear subjective predilection for Christianity. The ODP taxonomy of web-page topics is optimized for navigability rather than informativeness, and is therefore very flat and often unbalanced. As a result, two of the direct children of the label GAMES are VIDEO GAMES (with over 42,000 websites listed) and PAPER AND PENCIL GAMES (with only 32 websites). These examples are not intended to show that these useful taxonomies are flawed; they merely demonstrate the arbitrary subjective aspect of their design.
Our goal is to define the problem such that it is invariant to many of these subjective and arbitrary
design choices, while still exploiting much of the available information. Some older approaches to
hierarchical classification do not use the taxonomy in the definition of the classification problem
[12, 13, 18, 9, 16]. Namely, these approaches consider all classification mistakes to be equally
bad, and use the taxonomy only to the extent that it reduces computational complexity and the
number of classification mistakes. More recent approaches [3, 8, 5, 4] exploit the label taxonomy
more thoroughly, by using it to induce a hierarchy-dependent loss function, which captures the
intuitive idea that not all classification mistakes are equally bad: incorrectly classifying a document
as CLASSICAL MUSIC when its true topic is actually JAZZ is not nearly as bad as classifying that
document as COMPUTER HARDWARE. When this interpretation of the taxonomy can be made,
ignoring it is effectively wasting a valuable signal in the problem input. For example, [8] define the
loss of predicting a label u when the correct label is y as the number of edges along the path between
the two labels in the taxonomy graph.
Additionally, a taxonomy provides a very natural framework for balancing the tradeoff between
specificity and accuracy in classification. Ideally, we would like our classifier to assign the most
specific label possible to an instance, and the loss function should reward it adequately for doing
so. However, when a specific label cannot be assigned with sufficiently high confidence, it is often
better to fall-back on a more general correct label than it is to assign an incorrect specific label. For
example, classifying a document on JAZZ as the broader topic MUSIC is better than classifying it as
the more specific yet incorrect topic COUNTRY MUSIC. A hierarchical classification problem should
be defined in a way that penalizes both over-confidence and under-confidence in a balanced way.
The graph-distance based loss function introduced by [8] captures both of the ideas mentioned
above, but it is very sensitive to arbitrary choices that go into the taxonomy design. Once again
consider the example in Fig. 1: each hierarchy would induce a different graph-distance, which
would lead to a different outcome of the learning algorithm. We can make the difference between
the two outcomes arbitrarily large by making some regions of the taxonomy very deep and other
regions very flat. Additionally, we note that the simple graph-distance based loss works best when
the taxonomy is balanced, namely, when all of the splits in the taxonomy convey roughly the same
amount of information. For example, in the taxonomy of Fig. 1, the children of CLASSICAL MUSIC are VIVALDI and NON-VIVALDI, where the vast majority of classical music falls in the latter. If the correct label is NON-VIVALDI and our classifier predicts the more general label CLASSICAL MUSIC, the loss should be small, since the two labels are essentially equivalent. On the other hand, if the correct label is VIVALDI then predicting CLASSICAL MUSIC should incur a larger loss, since important detail was excluded. A simple graph-distance based loss will penalize both errors equally.
On one hand, we want to use the hierarchy to define the problem. On the other hand, we don't want
arbitrary choices and unbalanced splits in the taxonomy to have a significant effect on the outcome.
Can we have our cake and eat it too? Our proposed solution is to leave the taxonomy structure
as-is, and to stick with a graph-distance based loss, but to introduce non-uniform edge weights.
Namely, the loss of predicting u when the true label is y is defined as the sum of edge-weights
along the shortest path from u to y. We use the underlying distribution over labels to set the edge
Figure 1: Two equally-reasonable label taxonomies. Note the subjective decision to include/exclude the label ROCK, and note the unbalanced split of CLASSICAL into the small class VIVALDI and the much larger class NON-VIVALDI.
weights in a way that adds balance to the taxonomy and compensates for certain arbitrary design
choices. Specifically, we set edge weights using the information-theoretic notion of conditional self-information [7]. The weight of the edge between a label u and its parent π(u) is the negative log of the conditional probability of observing the label u given that the example is also labeled by π(u).
Others [19] have previously tried to use the training data to "fix" the hierarchy, as a preprocessing
step to classification. However, it is unclear whether it is statistically permissible to reuse the training
data twice: once to fix the hierarchy and then again in the actual learning procedure. The problem
is that the preprocessing step may introduce strong statistical dependencies into our problem. These
dependencies could prove detrimental to our learning algorithm, which expects to see a set of independent examples. The key to our approach is that we can estimate our distribution-dependent loss
using the same data used to define it, without introducing any significant bias. It turns out that to
accomplish this, we must deviate from the prevalent binomial-type estimation scheme that currently
dominates machine learning and turn to a more peculiar geometric-distribution-type estimator. A
binomial-type estimator essentially counts things (such as mistakes), while a geometric-type estimator measures the amount of time that passes before something occurs. Geometric-type estimators
have the interesting property that they might occasionally fail, which we investigate in detail below.
Moreover, we show how to control the variance of our estimate without adding bias. Since empirical estimation is the basis of supervised machine learning, we can now extrapolate hierarchical
learning algorithms from our unbiased estimation technique. Specifically, we present a reduction
from hierarchical classification to cost-sensitive multiclass classification, which is based on our new
geometric-type estimator.
This paper is organized as follows. We formally set the problem in Sec. 2 and present our new
distribution-dependent loss function in Sec. 3. In Sec. 4 we discuss how to control the variance of
our empirical estimates, which is a critical step towards the learning algorithm described in Sec. 5.
We conclude with a discussion in Sec. 6. We omit technical proofs due to space constraints.
2 Problem Setting
We now define our problem more formally. Let X be an instance space and let T be a taxonomy of labels. For simplicity, we focus on tree hierarchies. T is formally defined as the pair (U, π), where U is a finite set of labels and π is the function that specifies the parent of each label in U. U contains both general labels and specific labels. Specifically, we assume that U contains the special label ALL, and that all other labels in U are special cases of ALL. π : U → U is a function that defines the structure of the taxonomy by assigning a parent π(u) to each label u ∈ U. Semantically, π(u) is a more general label than u that contains u as a special case. In other words, we can say that "u is a specific type of π(u)". For completeness, we define π(ALL) = ALL. The n'th generation parent function π^n : U → U is defined by recursively applying π to itself n times. Formally,

$$\pi^n(u) = \underbrace{\pi(\pi(\ldots\pi(u)\ldots))}_{n\ \text{times}}.$$

For completeness, define π^0 as the identity function over U. T is acyclic, namely, for all u ≠ ALL and for all n ≥ 1 it holds that π^n(u) ≠ u. The ancestor function π*, which maps each label to its set of ancestors, is defined as π*(u) = ∪_{n≥0} {π^n(u)}. In other words, π*(u) includes u, its parent, its parent's parent, and so on. We assume that T is connected and, specifically, that ALL is an ancestor of all labels, meaning that ALL ∈ π*(u) for all u ∈ U. The inverse of the ancestor function is the descendant function τ, which maps u ∈ U to the subset {u' ∈ U : u ∈ π*(u')}. In other words, u is a descendant of u' if and only if u' is an ancestor of u. Graphically, we can depict T as a rooted tree: U defines the tree nodes, ALL is the root, and {(u, π(u)) : u ∈ U \ {ALL}} is the set of edges. In this graphical representation, τ(u) includes the nodes in the subtree rooted at u. Using this representation, we define the graph distance d(u, u') between any two labels as the number of edges along the path between u and u' in the tree. The lowest common ancestor function λ : U × U → U maps any pair of labels to their lowest common ancestor in the taxonomy, where "lowest" is in the sense of tree depth. Formally, λ(u, u') = π^j(u) where j = min{i : π^i(u) ∈ π*(u')}. In words, λ(u, u') is the closest ancestor of u that is also an ancestor of u'. It is straightforward to verify that λ(u, u') = λ(u', u). The leaves of a taxonomy are the labels that are not parents of any other labels. We denote the set of leaves by Y and note that Y ⊂ U.
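The definitions above translate directly into code; the following is a minimal sketch (the class name and dictionary representation are our own illustrative choices, not part of the formal setup) of a tree taxonomy with the parent function π, the ancestor set π*, and the lowest common ancestor λ:

```python
class Taxonomy:
    """Tree taxonomy: each label maps to its parent; ALL is its own parent."""

    def __init__(self, parent):
        self.parent = dict(parent)      # pi : label -> parent label
        self.parent["ALL"] = "ALL"      # pi(ALL) = ALL

    def ancestors(self, u):
        """pi*(u): u, its parent, its parent's parent, ..., ending at ALL."""
        anc = [u]
        while u != "ALL":
            u = self.parent[u]
            anc.append(u)
        return anc

    def lca(self, u, v):
        """lambda(u, v): the closest ancestor of u that is also an ancestor of v."""
        anc_v = set(self.ancestors(v))
        for a in self.ancestors(u):     # walk upward from u toward ALL
            if a in anc_v:
                return a
        return "ALL"


# A fragment of the taxonomy from Figure 1:
T = Taxonomy({"MUSIC": "ALL", "CLASSICAL": "MUSIC", "JAZZ": "MUSIC",
              "VIVALDI": "CLASSICAL", "NON-VIVALDI": "CLASSICAL"})
assert T.lca("VIVALDI", "JAZZ") == "MUSIC"
```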
Now, let D be a distribution on the product space X × Y. In other words, D is a joint distribution over instances and their corresponding labels. Note that we assume that the labels that occur in the distribution are always leaves of the taxonomy T. This assumption can be made without loss of generality: if this is not the case, then we can always add a leaf to each interior node and relabel all of the examples accordingly. More formally, for each label u ∈ U \ Y, we add a new node y to U with π(y) = u, and whenever we sample (x, u) from D we replace it with (x, y). Initially, we do not know anything about D, other than the fact that it is supported on X × Y. We sample m independent points from D to obtain the sample S = {(x_i, y_i)}_{i=1}^m.
A classifier is a function f : X → U that assigns a label to each instance of X. Note that a classifier is allowed to predict any label in U, even though it knows that only leaf labels are ever observed in the real world. We feel that this property captures a fundamental characteristic of hierarchical classification: although the truth is always specific, a good hierarchical classifier will fall back to a more general label when it cannot confidently give a specific prediction. The quality of f is measured using a loss function ℓ : U × Y → R₊. For any instance-label pair (x, y), the loss ℓ(f(x), y) should be interpreted as the penalty associated with predicting the label f(x) when the true label is y. We require ℓ to be weakly monotonic, in the following sense: if u' lies along the path from u to y then ℓ(u', y) ≤ ℓ(u, y). Although the error indicator function, ℓ(u, y) = 1_{u≠y}, satisfies our requirements, it is not what we have in mind. Another fundamental characteristic of hierarchical classification problems is that not all prediction errors are equally bad, and the definition of the loss should reflect this. More specifically, if u' lies along the path from u to y and u is not semantically equivalent to u', we actually expect that ℓ(u', y) < ℓ(u, y).
3 A Distribution-Calibrated Loss for Hierarchical Classification
As mentioned above, we want to calibrate the hierarchical classification loss function using the
distribution D, through its empirical proxy S. In other words, we want D to differentiate between
informative splits in the taxonomy and redundant ones. We follow [8] in using graph-distance to
define the loss function, but instead of setting all of the edge weights to 1, we define edge weights
using D.
For each y ∈ Y, let p(y) be the marginal probability of the label y in the distribution D. For each u ∈ U, define

$$p(u) = \sum_{y \in \mathcal{Y} \cap \tau(u)} p(y).$$

In words, for any u ∈ U, p(u) is the probability of observing any descendant of u. We assume henceforth that p(u) > 0 for all u ∈ U. With these definitions handy, define the weight of the edge between u and π(u) as log(p(π(u))/p(u)). This weight is essentially the definition of conditional self-information from information theory [7]. The nice thing about this definition is that the weighted graph-distance between labels u and y telescopes between u and λ(u, y) and between y and λ(u, y), and becomes

$$\ell(u, y) = 2 \log p(\lambda(u, y)) - \log p(u) - \log p(y). \tag{1}$$
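To make the telescoping step explicit, write the weighted distance as the sum of edge weights along the two legs of the path, from u up to λ(u, y) and from y up to λ(u, y); interior terms cancel in pairs:

$$d_w(u, y) = \sum_{n=0}^{j_u - 1} \log\frac{p(\pi^{n+1}(u))}{p(\pi^{n}(u))} + \sum_{n=0}^{j_y - 1} \log\frac{p(\pi^{n+1}(y))}{p(\pi^{n}(y))} = \Big[\log p(\lambda(u,y)) - \log p(u)\Big] + \Big[\log p(\lambda(u,y)) - \log p(y)\Big],$$

where j_u and j_y are the depths at which the two paths reach the lowest common ancestor, i.e., π^{j_u}(u) = π^{j_y}(y) = λ(u, y); collecting terms gives Eq. (1).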
Since this loss function depends only on u, y, and λ(u, y), and their frequencies according to D, it is completely invariant to the number of labels along the path from u to y. It is also invariant to inconsistent degrees of flatness of the taxonomy in different regions. Finally, it is even invariant to the addition or subtraction of new leaves or entire subtrees, so long as the marginal distributions p(u), p(y), and p(λ(u, y)) remain unchanged. This loss also balances uneven splits in the taxonomy.
Recalling the example in Fig. 1 where CLASSICAL is split into VIVALDI and NON-VIVALDI, the edge to the former will have a very high weight, whereas the edge to the latter will have a weight close to zero.
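As a concrete illustration, the calibrated loss of Eq. (1) can be computed from empirical label frequencies; this sketch reuses the Taxonomy helper above and assumes, as the text does, that every p(u) is strictly positive:

```python
import math
from collections import Counter

def marginals(taxonomy, leaf_labels):
    """p(u): fraction of examples whose leaf label is a descendant of u."""
    counts = Counter()
    for y in leaf_labels:
        for a in taxonomy.ancestors(y):   # y contributes to itself and every ancestor
            counts[a] += 1
    m = len(leaf_labels)
    return {u: c / m for u, c in counts.items()}

def calibrated_loss(taxonomy, p, u, y):
    """Eq. (1): 2 log p(lca(u, y)) - log p(u) - log p(y), nonnegative by construction."""
    lca = taxonomy.lca(u, y)
    return 2 * math.log(p[lca]) - math.log(p[u]) - math.log(p[y])
```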
Now, define the risk of a classifier f as R(f) = E_{(X,Y)∼D}[ℓ(f(X), Y)], the expected loss over examples sampled from D. Our goal is to obtain a classifier with a small risk. However, before we tackle the problem of finding a low-risk classifier, we address the intermediate task of estimating the risk of a given classifier f using the sample S. The solution is not straightforward, since we cannot even compute the loss on an individual example, ℓ(f(x_i), y_i), as this requires knowledge of D. A naive way to estimate ℓ(f(x_i), y_i) using the sample S is to first estimate each p(y) by (1/m)∑_{i=1}^m 1_{y_i=y}, and to plug these values into the definition of ℓ. This estimator tends to suffer from a strong bias, due to the non-linearity of the logarithm, and is considered unreliable¹. Instead, we want an unbiased estimator.
First, we write the definition of risk more explicitly using the definition of the loss function in Eq. (1). Define q(f, u) = Pr(f(X) = u), the probability that f outputs u when X is drawn according to the marginal distribution of D over X. Also define r(f, u) = Pr(λ(f(X), Y) = u), the probability that the lowest common ancestor of f(X) and Y is u, when (X, Y) is drawn from D. R(f) can be rewritten as

$$R(f) = \sum_{u \in \mathcal{U}} \big(2r(f, u) - q(f, u)\big) \log p(u) \;-\; \sum_{y \in \mathcal{Y}} p(y) \log p(y). \tag{2}$$

Notice that the second term in the definition of risk is a constant, independent of f. This constant is simply H(Y), the Shannon entropy [7] of the label distribution. Our ultimate goal is to compare the risk values of different classifiers and to choose the best one, so we do not really care about this constant, and we can discard it henceforth. From here on, we focus on estimating the augmented risk R̃(f) = R(f) − H(Y).
The main building block of our estimator is the estimation technique presented in [14]. Assume for a moment that the sample S is infinite. Recall that the harmonic number h_n is defined as ∑_{i=1}^n 1/i, with h_0 = 0. Define the random variables A_i and B_i as follows:

$$A_i = \min\{j \in \mathbb{N} : y_{i+j} \in \tau(f(x_i))\} - 1,$$
$$B_i = \min\{j \in \mathbb{N} : y_{i+j} \in \tau(\lambda(f(x_i), y_i))\} - 1.$$

For example, A_1 + 2 is the index of the first example after (x_1, y_1) whose label is contained in the subtree rooted at f(x_1), and B_1 + 2 is the index of the first example after (x_1, y_1) whose label is contained in the subtree rooted at λ(f(x_1), y_1). Note that B_i ≤ A_i, since λ(u, y) is, by definition, an ancestor of u, so y ∈ τ(u) implies y ∈ τ(λ(u, y)). Next, define the random variable L_1 = h_{A_1} − 2h_{B_1}.

Theorem 1. L_1 is an unbiased estimator of R̃(f).
Proof. We have that

$$\mathbb{E}\big[L_1 \mid f(X_1) = u, Y_1 = y\big] = p(u)\sum_{j=0}^{\infty} h_j \big(1 - p(u)\big)^j \;-\; 2\,p(\lambda(u, y)) \sum_{j=0}^{\infty} h_j \big(1 - p(\lambda(u, y))\big)^j.$$

Using the fact that for any α ∈ [0, 1) it holds that ∑_{n=0}^∞ h_n α^n = −log(1 − α)/(1 − α), we get E[L_1 | f(X_1) = u, Y_1 = y] = −log p(u) + 2 log p(λ(u, y)). Therefore,

$$\mathbb{E}[L_1] = \sum_{u \in \mathcal{U}} \sum_{y \in \mathcal{Y}} \Pr\big(f(X) = u, Y = y\big)\, \mathbb{E}\big[L_1 \mid f(X_1) = u, Y_1 = y\big] = \sum_{u \in \mathcal{U}} \big(2r(f, u) - q(f, u)\big) \log p(u) = \tilde{R}(f).$$
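For concreteness, here is a direct transcription of the estimator, assuming an effectively unbounded stream of labeled examples and the Taxonomy helpers sketched in Section 2; it is an illustration of the definition, not an optimized implementation:

```python
import math

def harmonic(n):
    """h_n = sum_{i=1}^{n} 1/i, with h_0 = 0."""
    return sum(1.0 / i for i in range(1, n + 1))

def L1_estimate(taxonomy, f_x1, y1, future_labels):
    """L_1 = h_{A_1} - 2 h_{B_1}; future_labels is the label sequence y_2, y_3, ..."""
    lca = taxonomy.lca(f_x1, y1)
    A = B = None
    for j, y in enumerate(future_labels):        # j = 0 corresponds to y_2, so A_1 = j
        if B is None and lca in taxonomy.ancestors(y):
            B = j
        if f_x1 in taxonomy.ancestors(y):        # y is a descendant of f(x_1)
            A = j
            break
    if A is None:                                # f(x_1) never matched: the estimator fails
        raise RuntimeError("estimation failed")
    return harmonic(A) - 2 * harmonic(B)
```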
We now recall that our sample S is actually of finite size m. The problem that now occurs is that A_1 and B_1 are not well defined when f(X_1) does not appear anywhere in Y_2, ..., Y_m. When this happens, we say that the estimator L_1 fails. If f outputs a label u with p(u) = 0 then L_1 will fail with probability 1. On the other hand, the probability of failure is negligible when m is large enough and f does not output labels with tiny probabilities. Formally, let θ(f) = min_{u : q(f,u) > 0} p(u) be the smallest probability of any label that f outputs.

Theorem 2. The probability of failure is at most e^{−(m−1)θ(f)}.

The estimator E[L_1 | no-fail] is no longer an unbiased estimator of R̃(f), but the bias is small. Specifically, since we are after a classifier f with a small risk, we prove an upper bound on R̃(f).

Theorem 3. It holds that

$$\mathbb{E}\big[L_1 \mid \text{no-fail}\big] - \tilde{R}(f) \;\le\; \frac{(m-1)\, e^{-\theta(f)(m-1)}}{\theta^2(f)}.$$

¹ The interested reader is referred to the extensive literature on the closely related problem of estimating the entropy of a distribution from a finite sample.
For example, with θ = 0.01 and m = 2500, the bias term in Thm. 3 is less than 0.0004. With m = 5000 it is already less than 10⁻¹⁴.
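These numbers follow directly from the bound in Thm. 3; a quick check, with θ(f) = 0.01:

```python
import math

def bias_bound(theta, m):
    """The Thm. 3 bound: (m - 1) * exp(-theta * (m - 1)) / theta**2."""
    return (m - 1) * math.exp(-theta * (m - 1)) / theta ** 2

print(bias_bound(0.01, 2500))   # ~3.5e-04, below 0.0004
print(bias_bound(0.01, 5000))   # ~9.7e-15, below 1e-14
```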
4 Decreasing the Variance of the Estimator
Say that we have k classifiers and we want to choose the best one. The estimator L1 suffers from
an unnecessarily high variance because it typically uses a short prefix of the sample S and wastes
the remaining examples. To reliably compare k empirical risk estimates, we need to reduce the
variance of each estimator. The exact value of Var(L1 ) depends on the distributions p, q, and r in a
non-trivial way, but we can give a simple upper bound on Var(L_1) in terms of θ(f).

Theorem 4. Var(L_1) ≤ −9 log θ(f) + 9 log² θ(f).
We reduce the variance of the estimator by repeating the estimation multiple times, without reusing any sample points. Formally, define S_1 = 1, and define for all i ≥ 2 the random variables S_i = S_{i−1} + A_{S_{i−1}} + 2 and L_i = h_{A_{S_i}} − 2h_{B_{S_i}}. In words: the first estimator L_1 starts at S_1 = 1 and uses A_1 + 2 examples, namely the examples 1, ..., (A_1 + 2). Now, S_2 = A_1 + 3 is the first untouched example in the sequence. The second estimator, L_2, starts at example S_2 and uses A_{S_2} + 2 examples, namely the examples S_2, ..., (S_2 + A_{S_2} + 1), and so on. If we had an infinite sample and chose some threshold t, the random variables L_1, ..., L_t would all be unbiased estimators of R̃(f), and therefore the aggregate estimator L̄ = (1/t)∑_{i=1}^t L_i would also be an unbiased estimator of R̃(f). Since L_1, ..., L_t are also independent, the variance of the aggregate estimator would be (1/t)Var(L_1).
In the finite-sample case, aggregating multiple estimators is not as straightforward. Again, the event where the estimation fails introduces a small bias. Additionally, the number of independent estimations that fit in a sample of fixed size m is itself a random variable T. Moreover, the value of T depends on the values of the risk estimators: if L_1, L_2, ... take large values then T will take a small value. The precise definition of T should be handled with care, to ensure that the individual estimators remain independent and that the aggregate estimator maintains a small bias. For example, the first thing that comes to mind is to set T to be the largest number t such that S_t ≤ m; this is a bad idea. To see why, note that if T = 2 and A_1 = m − 4 then we know with certainty that A_{S_2} = 0. This clearly demonstrates a strong statistical dependence between L_1, L_2 and T, which both interferes with the variance reduction and introduces a bias. Instead, we define T as follows: choose a positive integer l ≤ m and set T using the last l examples in S:

$$T = \min\{t \in \mathbb{N} : S_{t+1} > m - l\}. \tag{3}$$

In words, we think of the last l examples in S as the "landing strip" of our procedure: we keep jumping forward in the sequence of samples, from S_1 to S_2, to S_3, and so on, until the first time we land on the landing strip. Our new failure scenario occurs when our last jump overshoots the strip, and no S_i falls on any one of the last l examples. If L does not fail, define the aggregate estimator as L = ∑_{i=1}^T L_i. Note that we are summing the L_i rather than averaging them; we explain this later on.
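A sketch of the aggregated estimator on a finite sample, reusing the harmonic and Taxonomy helpers above; indices are zero-based here, and the failure event of the next theorem surfaces as an exhausted forward scan:

```python
def aggregate_estimate(taxonomy, f, sample, l):
    """L = sum_{i=1}^{T} L_i, stopping the first time a jump lands on the last l examples."""
    m = len(sample)
    s, total = 0, 0.0                           # s is the zero-based start, S_i - 1
    while True:
        x_s, y_s = sample[s]
        u = f(x_s)
        lca = taxonomy.lca(u, y_s)
        A = B = None
        for j in range(s + 1, m):               # scan forward for the waiting times
            y = sample[j][1]
            if B is None and lca in taxonomy.ancestors(y):
                B = j - s - 1
            if u in taxonomy.ancestors(y):
                A = j - s - 1
                break
        if A is None:                           # the last jump overshot the landing strip
            raise RuntimeError("estimation failed")
        total += harmonic(A) - 2 * harmonic(B)
        s = s + A + 2                           # first untouched example, S_{i+1}
        if s >= m - l:                          # landed on the strip: T estimators summed
            return total
```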
Theorem 5. The probability of failure of the estimator L is at most e^{−lθ(f)}.

We now prove that our definition of T indeed decreases the variance without adding bias. We give a simplified version of the analysis, assuming that S is infinite and that the limit m is merely a recommendation. In other words, T is still defined as before, but estimation never fails, even in the rare case where S_T + A_{S_T} + 1 > m (the index of the last example used in the estimation exceeds the predefined limit m). We note that a very similar theorem can be stated in the finite-sample case,
INPUTS: a training set S = {(x_i, y_i)}_{i=1}^m, a label taxonomy T.

1  for i = 1, ..., m
2    generate a random permutation σ : {1, ..., (m−1)} → {1, ..., (i−1), (i+1), ..., m}
3    for u = 1, ..., d
4      a = −1 + min{ j ∈ {1, ..., (m−1)} : y_{σ(j)} ∈ τ(u) }
5      b = −1 + min{ j ∈ {1, ..., (m−1)} : y_{σ(j)} ∈ τ(λ(u, y_i)) }
6      M(i, u) = 1/(b+1) + 1/(b+2) + ··· + 1/a

OUTPUT: M

Figure 2: A reduction from hierarchical multiclass to cost-sensitive multiclass.
at the price of a significantly more complicated analysis. The complication stems from the fact that
we are estimating the risk of k classifiers simultaneously, and the failure of one estimator depends
on the values of the other estimators. We allow ourselves to ignore failures because they occur with
such small probability, and because they introduce an insignificant bias.
Theorem 6. Assuming that S is infinite, but T is still defined as in Eq. (3), it holds that E[L] = E[T]·R̃(f) and Var(L) ≤ E[T]σ², where σ² = Var(L_i).

The proof follows from variations on Wald's theorem [15].
Recall that we have k competing classifiers, f_1, ..., f_k, and we want to choose one with a small risk. We overload our notation to support multiple concurrent estimations, and define T(f_j) as the stopping time (previously defined as T in Eq. (3)) of the estimation process for R̃(f_j). Also let L_i(f_j) be the i'th unbiased estimator of R̃(f_j). To conduct a fair comparison of the k classifiers, we redefine T = min_{j=1,...,k} T(f_j), and let L(f_j) = ∑_{i=1}^T L_i(f_j). In other words, we aggregate the same number of estimators for each classifier. We then choose the classifier with the smallest risk estimate, argmin_j L(f_j). Theorem 6 still holds for each individual classifier because the new definition of T remains a stopping time for each of the individual estimation processes. Although we may not know the exact value of E[T], it is just a number that we can use to reason about the bias and the variance of L. We note that finding the j that minimizes L(f_j) is equivalent to finding the j that minimizes L(f_j)/E[T]. The latter, according to Thm. 6, is an unbiased estimate of R̃(f_j). Moreover, the variance of each L(f_j)/E[T] is Var(L(f_j)/E[T]) = σ²/E[T], so the effective variance of our unbiased estimate decreases like 1/E[T], which is what we would expect. Using the one-tailed Chebyshev inequality [11], we get that for any ε > 0,

$$\Pr\Big(\tilde{R}(f_j) \ge L(f_j)/\mathbb{E}[T] + \epsilon\Big) \;\le\; \frac{\sigma^2}{\sigma^2 + \mathbb{E}[T]\,\epsilon^2}.$$

By the union bound, the probability that this bound is violated by any of the k classifiers is at most kσ²/(σ² + E[T]ε²). The variance of the estimation depends on E[T], and we expect E[T] to grow linearly with m. For example, we can prove the following crude lower bound.

Theorem 7. E[T] ≥ (m − l)/c, where c = k + ∑_{j=1}^k 1/θ(f_j).
5 Reducing Hierarchical Classification to Cost-Sensitive Classification
In this section, we propose a method for learning low-risk hierarchical classifiers, using our new definition of risk. More precisely, we describe a reduction from hierarchical classification to cost-sensitive multiclass classification. The appeal of this approach is the abundance of existing cost-sensitive learning algorithms. The reduction is itself an algorithm whose input is a training set of m examples and a taxonomy over d labels, and whose output is a d × m matrix of non-negative reals, denoted by M. Entry M(i, j) is the cost of classifying example i with label j. This cost matrix, and the original training set, are given to a cost-aware multiclass learning algorithm, which attempts to find a classifier f with a small empirical loss ∑_{i=1}^m M(i, f(x_i)).
For example, a common approach to multiclass problems is to train a model f_u : X → R for each label u ∈ U and to define the classifier f(x) = argmax_{u∈U} f_u(x). An SVM-flavored way to train a cost-sensitive classifier is to assume that the functions f_u live in a Hilbert space, and to minimize

$$\sum_{u=1}^{d} \|f_u\|^2 + C \sum_{i=1}^{m} \sum_{u \neq y_i} \Big[M(i, u) + f_u(x_i) - f_{y_i}(x_i)\Big]_+, \tag{4}$$

where C > 0 is a parameter and [α]₊ = max{0, α}. The first term is a regularizer and the second is an empirical loss, justified by the fact that M(i, f(x_i)) ≤ ∑_{u≠y_i} [M(i, u) + f_u(x_i) − f_{y_i}(x_i)]₊.
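For concreteness, here is the objective of Eq. (4) for linear scoring functions f_u(x) = ⟨w_u, x⟩; the matrix shapes are our own choices (the cost matrix is stored example-by-label), and this sketch only evaluates the objective rather than minimizing it:

```python
import numpy as np

def cost_sensitive_objective(W, X, y, M, C):
    """Eq. (4): sum_u ||w_u||^2 + C sum_i sum_{u != y_i} [M(i,u) + f_u(x_i) - f_{y_i}(x_i)]_+

    W: (d, dim) weights, X: (m, dim) inputs, y: (m,) true label indices, M: (m, d) costs.
    """
    scores = X @ W.T                                            # scores[i, u] = f_u(x_i)
    margins = scores - scores[np.arange(len(y)), y][:, None]    # f_u(x_i) - f_{y_i}(x_i)
    hinge = np.maximum(0.0, M + margins)
    hinge[np.arange(len(y)), y] = 0.0                           # the sum excludes u = y_i
    return (W ** 2).sum() + C * hinge.sum()
```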
Coming back to the reduction algorithm, we generate M using the procedure outlined in Fig. 2. Based on the analysis of the previous sections, it is easy to see that, for all i, M(i, f(x_i)) is an unbiased estimator of the risk R̃(f). This holds even if σ (as defined in Fig. 2) is a fixed function, because the training set is assumed to be i.i.d. Therefore, (1/m)∑_i M(i, f(x_i)) is also an unbiased estimator of R̃(f). The cost-sensitive learning algorithm will try to minimize this empirical estimate. The purpose of the random permutation at each step is to hopefully decrease the variance of the overall estimate, by decreasing the dependencies between the different individual estimators. We profess that a rigorous analysis of the variance of this estimator is missing from this work. Ideally, we would like to show that, with high probability, the empirical estimate (1/m)∑_i M(i, f(x_i)) is ε-close to its expectation of R̃(f), uniformly for all classifiers f in our function class. This is a challenging problem due to the complex dependencies in the estimator.
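A naive transcription of the reduction of Fig. 2, reusing the Taxonomy helpers above; it mirrors the figure line by line with no attempt at efficiency, and a label whose subtree never appears in the permuted sample would surface here as a StopIteration, the failure event discussed earlier:

```python
import random

def cost_matrix(taxonomy, labels, samples):
    """Fig. 2: M[i][u] = 1/(b+1) + 1/(b+2) + ... + 1/a for every example i and label u."""
    M = [{} for _ in samples]
    for i, (_, y_i) in enumerate(samples):
        others = [y for j, (_, y) in enumerate(samples) if j != i]
        random.shuffle(others)                  # the random permutation sigma of step 2
        for u in labels:
            lca = taxonomy.lca(u, y_i)
            a = next(j for j, y in enumerate(others) if u in taxonomy.ancestors(y))
            b = next(j for j, y in enumerate(others) if lca in taxonomy.ancestors(y))
            M[i][u] = sum(1.0 / r for r in range(b + 1, a + 1))   # equals h_a - h_b
    return M
```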
The learning algorithm used to solve this problem can (and should) use the hierarchical structure to
guide its search for a good classifier. Our reduction to an unstructured cost-sensitive problem should
not be misinterpreted as a recommendation not to use the structure in the learning process. For
example, following [10, 8], we could augment the SVM approach described in Eq. (4) by replacing the unstructured regularizer ∑_{u=1}^d ‖f_u‖² with the structured regularizer ∑_{u=1}^d ‖f_u − f_{π(u)}‖², where π(u) is the parent label of u. [8] showed significant gains on hierarchical problems using this regularizer.
6 Discussion
We started by taking a step back from the typical setup of a hierarchical classification machine
learning problem. As a consequence, our focus was on the fundamental aspects of the hierarchical
problem definition, rather than on the equally important algorithmic issues. Our discussion was
restricted to the simplistic model of single-label hierarchical classification with single-linked taxonomies, and our first goal going forward is to relax these assumptions.
We point out that many of the theorems proven in this paper depend on the value of θ(f), which is defined as min_{u : q(f,u) > 0} p(u). Specifically, if f occasionally outputs a very rare label, then θ(f)
is tiny and much of our analysis breaks down. This provides a strong indication that an empirical
estimate of ?(f ) would make a good regularization term in a hierarchical learning scheme. In other
words, we should deter the learning algorithm from choosing a classifier that predicts very rare
labels. As mentioned in the introduction, the label taxonomy provides the perfect mechanism for
backing off and predicting a more common and less risky ancestor of that label.
We believe that our work is significant in the broader context of structured learning. Most structured
learning algorithms blindly trust the structure that they are given, and arbitrary design choices are
likely to appear in many types of structured learning. The idea of using the data distribution to
calibrate, correct, and balance the side-information extends to other structured learning scenarios.
The geometric-type estimation procedure outlined in this paper may play an important role in those
settings as well.
Acknowledgment
The author would like to thank Paul Bennett for his suggestion of the loss function, for its information-theoretic properties, its reduction to a tree-weighted distance, and its ability to capture other desirable characteristics of hierarchical loss functions, such as weak monotonicity. The author also thanks Ohad Shamir, Chris Burges, and Yael Dekel for helpful discussions.
References
[1] The Library of Congress Classification. http://www.loc.gov/aba/cataloging/classification/.
[2] The Open Directory Project. http://www.dmoz.org/about.html.
[3] L. Cai and T. Hofmann. Hierarchical document categorization with support vector machines. In 13th ACM Conference on Information and Knowledge Management, 2004.
[4] N. Cesa-Bianchi, C. Gentile, and L. Zaniboni. Hierarchical classification: combining Bayes with SVM. In Proceedings of the 23rd International Conference on Machine Learning, 2006.
[5] N. Cesa-Bianchi, C. Gentile, and L. Zaniboni. Incremental algorithms for hierarchical classification. Journal of Machine Learning Research, 7:31–54, 2007.
[6] The Gene Ontology Consortium. Gene ontology: tool for the unification of biology. Nature Genetics, 25:25–29, 2000.
[7] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991.
[8] O. Dekel, J. Keshet, and Y. Singer. Large margin hierarchical classification. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
[9] S. T. Dumais and H. Chen. Hierarchical classification of Web content. In Proceedings of SIGIR-00, pages 256–263, 2000.
[10] T. Evgeniou, C. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615–637, 2005.
[11] W. Feller. An Introduction to Probability and its Applications, volume 2. John Wiley and Sons, second edition, 1970.
[12] D. Koller and M. Sahami. Hierarchically classifying documents using very few words. In Machine Learning: Proceedings of the Fourteenth International Conference, pages 171–178, 1997.
[13] A. K. McCallum, R. Rosenfeld, T. M. Mitchell, and A. Y. Ng. Improving text classification by shrinkage in a hierarchy of classes. In Proceedings of ICML-98, pages 359–367, 1998.
[14] S. Montgomery-Smith and T. Schurmann. Unbiased estimators for entropy and class number.
[15] S. M. Ross and E. A. Pekoz. A Second Course in Probability Theory. 2007.
[16] E. Ruiz and P. Srinivasan. Hierarchical text categorization using neural networks. Information Retrieval, 5(1):87–118, 2002.
[17] C. Shirky. Ontology is overrated: Categories, links, and tags. In O'Reilly Media Emerging Technology Conference, 2005.
[18] A. S. Weigend, E. D. Wiener, and J. O. Pedersen. Exploiting hierarchy in text categorization. Information Retrieval, 1(3):193–216, 1999.
[19] J. Zhang, L. Tang, and H. Liu. Automatically adjusting content taxonomies for hierarchical classification. In Proceedings of the Fourth Workshop on Text Mining, SDM06, 2006.
Modeling Time Varying Systems
Using Hidden Control Neural Architecture
Esther Levin
AT&T Bell Laboratories
Speech Research Department
Murray Hill, NJ 07974 USA
ABSTRACT
Multi-layered neural networks have recently been proposed for nonlinear prediction and system modeling. Although proven successful
for modeling time invariant nonlinear systems, the inability of neural
networks to characterize temporal variability has so far been an
obstacle in applying them to complicated non stationary signals, such
as speech. In this paper we present a network architecture, called
"Hidden Control Neural Network" (HCNN), for modeling signals
generated by nonlinear dynamical systems with restricted time
variability. The approach taken here is to allow the mapping that is
implemented by a multi layered neural network to change with time
as a function of an additional control input signal. This network is
trained using an algorithm that is based on "back-propagation" and
segmentation algorithms for estimating the unknown control together
with the network's parameters. The HCNN approach was applied to
several tasks including modeling of time-varying nonlinear systems
and speaker-independent recognition of connected digits, yielding a
word accuracy of 99.1 %.
I. INTRODUCTION
Layered networks have attracted considerable interest in recent years due to their
ability to model adaptively nonlinear multivariate functions. It has been recently proved in [1] that a network with one intermediate layer of sigmoidal units can approximate
arbitrarily well any continuous mapping. However, being a static model, a layered
network is not capable of modeling signals with an inherent time variability, such as
speech.
In this paper we present a hidden control neural network that can implement nonlinear and time-varying mappings. The hidden control input signal, which allows the network's mapping to change over time, provides the ability to capture the non-stationary properties and learn the underlying temporal structure of the modeled signal.
II. THE MODEL
II.1 MULTI-LAYERED NETWORK
A multi-layered neural network is a connectionist model that implements a nonlinear mapping from an input x ∈ X ⊂ R^{N_I} to an output y ∈ Y ⊂ R^{N_O}:

$$y = F_\omega(x), \tag{1}$$

where ω ∈ Ω ⊂ R^D, the parameter set of the network, consists of the connection weights and the biases, and x and y are the activation vectors of the input and output layers, of dimensionality N_I and N_O, respectively.
Recently, layered networks have proven useful for non-linear prediction of signals and system modeling [2]. In these applications one uses the values of a real signal x(t), at a set of discrete times in the past, to predict x(t) at a point in the future. For example, for an order-one predictor, the output of the network y is used as a predictor of the next signal sample, when the network is given the past sample as input, i.e., x̂_t = F_ω(x_{t−1}), where x̂_t denotes the predicted value of the signal at time t, which, in general, differs from the true value x_t. The parameter set of the network ω is estimated from a training set of discrete time samples from a segment of known signal (t = 0, ..., T), by minimizing a prediction error which measures the distortion between the signal and the prediction made by the network,

$$E(\omega) = \sum_{t=1}^{T} \| x_t - F_\omega(x_{t-1}) \|^2, \tag{2}$$

and the estimated parameter set ω̂ is given by argmin_ω E(ω).
In [2] such a neural network predictor is used for modeling chaotic series. One of the examples considered in [2] is prediction of the time series generated by the classic logistic, or Feigenbaum, map,

$$x_{t+1} = 4 b\, x_t (1 - x_t). \tag{3}$$

This iterated map produces an ergodic chaotic time series when b is chosen to equal 1. Although this time series passes virtually every test for randomness, it is generated by the deterministic Eq. (3), and can be predicted perfectly once the generating system (3) is learned. Using the back-propagation algorithm [3] to minimize the prediction error (2) defined on a set of samples of this time series, the network parameters ω were adjusted, enabling accurate prediction of the next point x_{t+1} in this "random" series given the present point x_t as an input. The mapping F_ω implemented by the trained network approximated very closely the logistic map (3) that generated the modeled series.
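A minimal reproduction of that setup: generate the chaotic series of Eq. (3) with b = 1 and form the (x_t, x_{t+1}) pairs on which a one-step predictor would be fit; the network training itself is omitted:

```python
def logistic_series(x0=0.3, n=500, b=1.0):
    """Iterate Eq. (3): x_{t+1} = 4 b x_t (1 - x_t)."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(4.0 * b * xs[-1] * (1.0 - xs[-1]))
    return xs

series = logistic_series()
pairs = list(zip(series[:-1], series[1:]))   # (input x_t, target x_{t+1}) training pairs
```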
II.2 HIDDEN CONTROL NETWORK

For a given fixed value of the parameters ω, a layered network implements a fixed input-output mapping, and therefore can be used for time-invariant system modeling or prediction of signals generated by a fixed, time-invariant system. A hidden control network, which is based on such a layered network, has an additional mechanism that allows the mapping (1) to change with time, keeping the parameters ω fixed. We consider the case where the units in the input layer are divided into two distinct groups. The first input unit group represents the observable input to the network, x ∈ X ⊂ R^p, and the second represents a control signal c ∈ C ⊂ R^q, p + q = N_I, that controls the mapping between the observable input x and the network output y.
The output of the network y is given, according to (1), by F_ω(x, c), where (x, c) denotes the concatenation of the two inputs. We focus on the mapping between the observable input x and the output. This mapping is modulated by the control input c: for a fixed value of x and for different values of c, the network produces different outputs. For a fixed control input, the network implements a fixed observable input-output mapping, but when the control input changes, the network's mapping changes as well, modifying the characteristics of the observed signal:

$$y = F_\omega(x, c) \equiv F_{\omega, c}(x). \tag{4}$$

If the control signal is known for all time t, there is no point in distinguishing between the observable input x and the control input c. The more interesting situation is when the control signal is unknown, or hidden, i.e., the hidden control case, which we treat in this paper.
This model can be used for prediction and modeling of nonstationary signals generated by time-varying sources. In the case of first-order prediction, the present value of the signal x_t is predicted based on x_{t−1}, with respect to the control input c_t. If we restrict the control signal to take its values from a finite set, c ∈ {c_1, ..., c_N} ≡ C, then the network is a finite-state network, where in each state it implements a fixed input-output mapping F_{ω,c_i}. Such a network with two or more intermediate layers can approximate arbitrarily closely any set {F_1, ..., F_N} of continuous functions of the observable input x [4].

In the applications we considered for this model, two types of time structures were used, namely:

Fully connected model: In this type of HCNN, every state, corresponding to a specific value of the control input, can be reached from any other state in a single time step. This means that there are no temporal restrictions on the control signal, and in each time step it can take any of its N possible values {c_1, ..., c_N}. For example, a 2-state fully connected model is shown in Fig. 1a. In a generative mode of operation, when the observable input of the network is wired to be the previous network output, the observable signal x(t) is generated in each one of the states by a different dynamics: x_{t+1} = F_{c_t}(x_t), c_t ∈ {0, 1}, and therefore this network emulates two different dynamical systems, with the control signal acting as a switch between them.

Left-to-right model: For spoken word modeling, we will consider a finite-state, left-to-right HCNN (see Fig. 1b), where the control signal is further restricted to take the value c_i only if in the previous time step it had the value c_i or c_{i−1}. Each state of this network represents an unspecified acoustic unit, and due to the "left-to-right" structure, the whole word is modeled as a concatenation of such acoustic units. The time spent in each of the states is not fixed, since it varies according to the value of the control signal, and therefore the model can take into account the duration variability between different utterances of the same word.
Figure 1: a - Fully connected 2-state HCNN; b - Left-to-right 8-state HCNN for word modeling.
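Schematically, an HCNN is an ordinary layered network applied to the concatenated input (x, c); a minimal one-hidden-layer sketch, where the parameterization is an illustrative assumption rather than the architecture used in the experiments:

```python
import numpy as np

def hcnn_forward(params, x, c):
    """y = F_omega(x, c): one hidden sigmoidal layer on the concatenated input."""
    W1, b1, W2, b2 = params                  # omega: connection weights and biases
    z = np.concatenate([x, c])               # observable input joined with control input
    h = np.tanh(W1 @ z + b1)
    return W2 @ h + b2

def predict(params, x_prev, control):
    """One-step prediction x_hat_t = F_{omega, c_t}(x_{t-1}) for a given control value."""
    return hcnn_forward(params, x_prev, control)
```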
III. USING HCNN
Given the predictive form of the HCNN described in the previous section, there are three basic problems of interest that must be solved for the model to be useful in real-world applications. These problems are the following:

Segmentation problem: Here we attempt to uncover the hidden part of the model, i.e., given a network ω and a sequence of observations {x_t, t = 0, ..., T}, to find the control sequence which best explains the observations. This problem is solved using an optimality criterion, namely the prediction error, similar to Eq. (2),

$$E(\omega, c_1^T) = \sum_{t=1}^{T} \| x_t - F_{\omega, c_t}(x_{t-1}) \|^2, \tag{5}$$

where c_1^T denotes the control sequence c_1, ..., c_T, c_t ∈ C. For a given network ω, the prediction error (5) is a function of the hidden control input sequence, and thus segmentation is associated with the minimization

$$\hat{c}_1^T = \arg\min_{c_1^T} E(\omega, c_1^T). \tag{6}$$

In the case of a finite-state, fully connected model, this minimization can be performed exhaustively, by minimizing for each observation separately, and for a fully connected HCNN with a real-valued control signal (i.e., not the finite-state case), local minimization of (5) can be performed using the back-propagation algorithm. For a left-to-right model, the global minimum of (5) is attained efficiently using the Viterbi algorithm [5].
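For the fully connected model, the exhaustive minimization of (5) decomposes over time steps; a sketch, assuming the hcnn_forward/predict helpers above and a finite control set:

```python
import numpy as np

def segment_fully_connected(params, signal, controls):
    """Eq. (6) for the fully connected case: pick c_t minimizing each per-step error."""
    best = []
    for t in range(1, len(signal)):
        errors = [np.sum((signal[t] - predict(params, signal[t - 1], c)) ** 2)
                  for c in controls]
        best.append(controls[int(np.argmin(errors))])
    return best
```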
Evaluation problem, namely, how well a given network ω matches a given sequence of observations {x_t, t = 0, ..., T}. The evaluation is a key point for many applications. For example, if we consider the case in which we are trying to choose among several competing networks that represent different hypotheses in the hypothesis space, the solution to Problem 2 allows us to choose the network that best matches the observations. This problem is also solved using the prediction error defined in (5). The match, or actually the distortion, is measured by the prediction error of the network on a sequence of observations, for the best possible sequence of hidden control inputs, i.e.,

$$E(\omega) = \min_{c_1^T} E(\omega, c_1^T). \tag{7}$$

Therefore, to evaluate a network, first the segmentation problem must be solved.
Training problem, i.e., how to adjust the model parameters ω to best match the observation sequence, or training set, {x_t, t = 0, ..., T}.

The training in layered networks is accomplished by minimizing the prediction error of Eq. (2) using versions of the back-propagation algorithm. In the HCNN case, the prediction error (5) is a function of the hidden parameters and the hidden control input sequence, and thus training is associated with the joint minimization

$$\hat{\omega} = \arg\min_{\omega} \Big\{ \min_{c_1^T} E(\omega, c_1^T) \Big\}. \tag{8}$$

This minimization is performed by an iterative training algorithm. The k-th iteration of the algorithm consists of two stages:

1. Reestimation: For the present value of the control input sequence, the prediction error is minimized with respect to the network parameters,

$$(\omega)_k = \arg\min_{\omega} E\big(\omega, (c_1^T)_{k-1}\big). \tag{9}$$

This minimization is implemented by the back-propagation algorithm.

2. Segmentation: Using the values of the parameters obtained from the previous stage, the control sequence is estimated (as in (6)),

$$(c_1^T)_k = \arg\min_{c_1^T} E\big((\omega)_k, c_1^T\big). \tag{10}$$
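A high-level sketch of this two-stage iteration; train_backprop stands for any back-propagation minimization of (5) under a fixed control sequence, and is assumed rather than shown:

```python
def train_hcnn(params, signal, controls, n_iters, train_backprop):
    """Alternate reestimation (Eq. (9)) and segmentation (Eq. (10))."""
    control_seq = [controls[0]] * (len(signal) - 1)   # arbitrary initial segmentation
    for _ in range(n_iters):
        # Reestimation: minimize E(omega, c_1^T) over omega for the current controls.
        params = train_backprop(params, signal, control_seq)
        # Segmentation: re-estimate the control sequence for the new parameters.
        control_seq = segment_fully_connected(params, signal, controls)
    return params, control_seq
```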
IV. HCNN AS A STATISTICAL MODEL
For further understanding of the properties of the proposed model and the training
procedure, it is useful to describe the HCNN by an equivalent statistical vector source
of the following form:

    x_t = F_{ω,c_t}(x_{t−1}) + n_t ,   n_t ~ N(0, I),        (11)

where n_t is white Gaussian noise. Assuming for simplicity that all the values of the
control allowed by the model are equiprobable (this is a special case of a Markov
process, and can be easily extended to the general case), we can write the joint
likelihood of the data and the control:

    p(x_1^T, c_1^T | ω) = (2π)^{−pT/2} exp[ −(1/2) Σ_{t=1}^T ‖ x_t − F_{ω,c_t}(x_{t−1}) ‖² ],        (12)

where x_1^T denotes the observation sequence {x_1, x_2, ..., x_T}.
Eq. (12) provides a probabilistic interpretation of the procedures described in the
previous section:
The proposed segmentation procedure is equivalent to choosing the most probable
control sequence, given the network and the observations.
The evaluation of the network is related to the probability of the observations given the
model, for the best sequence of control inputs:

    min_{c_1^T} E(ω, c_1^T)  ⟺  max_{c_1^T} p(x_1^T, c_1^T | ω).        (13)
The proposed training procedure (Eq. 8) is equivalent to maximization of the joint
likelihood (12):

    ω* = argmin_ω { min_{c_1^T} E(ω, c_1^T) } = argmax_ω { max_{c_1^T} p(x_1^T, c_1^T | ω) }.        (14)

Thus (8) is equivalent to an approximate maximum likelihood training, where instead
of maximizing the marginal likelihood p(x_1^T | ω) = Σ_{c_1^T} p(x_1^T, c_1^T | ω), only the
maximal term in the sum, the joint likelihood (14), is considered. The approximate
maximum likelihood training avoids the computational complexity of the exact
maximum likelihood approach, and was recently shown [6] to yield results similar to
those obtained by exact maximum likelihood training.
IV.1 HCNN and the Hidden Markov Model (HMM)
During the past decade hidden Markov modeling has been used extensively to
represent the probability distribution of spoken words [7]. A hidden Markov model
assumes that the modeled speech signal can be characterized as being produced at each
time instant by one of the states of a finite-state source, and that each observation
vector is an independent sample according to the probability distribution of the current
state. The transitions between the states of the model are governed by a Markov
process.
The HCNN can be viewed as an extension of this model to the case of Markov output
processes. The observable signal in each state is modeled as though it was produced by
a dynamical system driven by noise. Here we are modeling the dynamics that
generated the signal, F_ω, and the dependence of the present observation vector on the
previous one. The assumption that the driving noise (12) is normal is not necessary:
instead, we can assume a parametric form of the noise density and estimate its
parameters.
V. EXPERIMENTAL EVALUATION
For experimental evaluation of the proposed model, we tested it on two different tasks:
V.1 Time-varying system modeling and segmentation
Here an HCNN was used for single-step prediction of a signal generated by a
time-varying system, described by

    x_{t+1} = { F_L(x_t)       if switch = 0
              { 1 − F_L(x_t)   if switch = 1,        (15)

where F_L is the logistic map from Eq. (3), and switch is a random variable assuming
binary values. Both of the systems, F_L and 1 − F_L, are chaotic and produce signals in
the range [0, 1]. A fully connected, 2-state HCNN (each state corresponding to one
switch position), as in Fig. 1a, was trained on a segment of 400 samples of such a
signal, according to the training algorithm described above. The performance of
the resulting network was tested on an independent set of 1000 samples of this signal.
The estimated control sequence differed from the real switch position in only 8 out of
1000 test samples. The evaluation score, i.e., the average prediction error for this
estimated control sequence, was 7.5×10⁻⁵ per sample. Fig. 2 compares the mapping
implemented by the network in one state, corresponding to the control value set to 0, and
the logistic map for switch = 0. Similar results are obtained for c = 1 and switch = 1.
These results indicate that the HCNN was indeed able to capture the two underlying
dynamics that generated the modeled signal, and to learn the switching pattern
simultaneously.
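The switching signal of Eq. (15) is easy to reproduce; here is a hypothetical generator (the logistic-map parameter r = 4 and the switching probability p_switch = 0.5 are assumptions, since Eq. (3) and the switching distribution are not shown in this excerpt).

```python
import numpy as np

def generate_switching_logistic(T, r=4.0, p_switch=0.5, seed=0):
    """Generate a signal from the time-varying system of Eq. (15)."""
    rng = np.random.default_rng(seed)
    switch = rng.random(T) < p_switch          # hidden binary control sequence
    x = np.empty(T + 1)
    x[0] = rng.random()
    for t in range(T):
        fx = r * x[t] * (1.0 - x[t])           # logistic map F_L(x)
        x[t + 1] = 1.0 - fx if switch[t] else fx
    return x, switch.astype(int)
```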
Fig. 2 Comparison of the logistic map and the mapping implemented by the HCNN with c = 0.
V.2 Continuous recognition of digit sequences
Here we tested the proposed HCNN modeling technique on recognition of connected
spoken versions of the digits, consisting of "zero" to "nine" and including the word
"oh", recorded from male speakers through a telephone handset and sampled at 6.67
kHz. LPC analysis of order 8 was performed on frames of 45 msec duration, with
overlap of 15 msec, and 12 cepstral and 12 delta cepstral [8] coefficients were derived
for the t-th frame to form the observable signal x_t. Each digit was modeled by an
8-state, left-to-right HCNN, as in Fig. 1b. The network was trained to predict the cepstral
and delta cepstral coefficients for the next frame. Each network consisted of 32 input
units (24 to encode x_t and 8 for a distributed representation of the 8 control values), 24
output units and 30 hidden units, all fully connected. Each network was trained using
a training set of 900 utterances from 44 male speakers, extracted from continuous
strings of digits using an HMM-based recognizer [9]. 1666 strings (5600 words),
uttered by an independent set of 22 male speakers, were used for estimating the
recognition accuracy. The mean and the covariance of the driving noise (12) were
modeled. The word accuracy obtained was 99.1%.
Fig. 3a illustrates the process of recognition (the forward pass of the Viterbi algorithm) of
the word "one" by the speaker-independent system. The horizontal axis is time (in
frames). The 11 models, from "zero" to "nine" and "oh", appear on the vertical axis. The
numbers that appear in the graph (from 1 to 8) denote the index of a state. For
example, the number 2 inside the second row of the graph denotes state number 2 of the
model of the word "one". In each frame, the prediction error was calculated for each
one of the states in each model, resulting in 88 different prediction errors. The graph
in each frame shows the states of the models that are in the vicinity of the minimal
error among those 88. This is a partial description of a forward pass of the Viterbi
algorithm in recognition, before the left-to-right constraints of the models are taken
into account. Figure 3a shows that the main candidate considered in recognition of the
word "one" is the actual model of "one", but at the end of the word two spurious
candidates arise. The spurious candidates are certain states of the models of "seven"
and "nine". Those states are detectors of the nasal 'n' that appears in all these words.
Figure 3b shows the recognition of a four-digit string "three - five - oh - four". The
spurious candidates indicate detectors of certain sounds common to different words,
like in "four" and in "oh", in "five" and in "nine", in "three", "six" and "eight".
Fig. 3 Illustration of the recognition process.
153
154
Levin
VI. SUMMARY AND DISCUSSION
This paper introduces a generalization of the layered neural network that can
implement a time-varying non-linear mapping between its observable input and output.
The variation of the network's mapping is due to an additional, hidden control input,
while the network parameters remain unchanged. We proposed an algorithm for
finding the network parameters and the hidden control sequence from a training set of
examples of observable input and output. This algorithm implements an approximate
maximum likelihood estimation of parameters of an equivalent statistical model, when
only the dominant control sequence is taken into account. The conceptual difference
between the proposed model and the HMM is that in the HMM approach, the
observable data in each of the states is modeled as though it was produced by a
memoryless source, and a parametric description of this source is obtained during
training, while in the proposed model the observations in each state are produced by a
non-linear dynamical system driven by noise, and both the parametric form of the
dynamics and the noise are estimated. The performance of the model was illustrated
for the tasks of nonlinear time-varying system modeling and continuously spoken digit
recognition. The reported results show the potential of this model for providing high
performance speech recognition capability.
Acknowledgment
Special thanks are due to N. Merhav for numerous comments and helpful discussions.
Useful discussions with N.Z. Tishby, S.A. Solla, L.R. Rabiner and J.G. Wilpon are
greatly appreciated.
References
1. G. Cybenko, "Approximation by superposition of a sigmoidal function," Math. Control Signals Systems, in press, 1989.
2. A. Lapedes and R. Farber, "Nonlinear signal processing using neural networks: prediction and system modeling," Proc. of IEEE, in press, 1989.
3. D.E. Rumelhart, G.E. Hinton and R.J. Williams, "Learning internal representations by error propagation," Parallel Distributed Processing: Explorations in the Microstructure of Cognition, MIT Press, 1986.
4. E. Levin, "Word recognition using hidden control neural architecture," Proc. of ICASSP, Albuquerque, April 1990.
5. G.D. Forney, "The Viterbi algorithm," Proc. IEEE, vol. 61, pp. 268-278, Mar. 1973.
6. N. Merhav and Y. Ephraim, "Maximum likelihood hidden Markov modeling using a dominant sequence of states," accepted for publication in IEEE Transactions on ASSP.
7. L.R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proc. of IEEE, vol. 77, no. 2, pp. 257-286, February 1989.
8. B.S. Atal, "Effectiveness of linear prediction characteristics of the speech wave for automatic speaker identification and verification," J. Acoust. Soc. Am., vol. 55, no. 6, pp. 1304-1312, June 1974.
9. L.R. Rabiner, J.G. Wilpon, and F.K. Soong, "High performance connected digit recognition using hidden Markov models," IEEE Transactions on ASSP, vol. 37, 1989.
Yee Whye Teh
Gatsby Computational Neuroscience Unit
University College London
[email protected]
Vinayak Rao
Gatsby Computational Neuroscience Unit
University College London
[email protected]
Abstract
Dependent Dirichlet processes (DPs) are dependent sets of random measures, each
being marginally DP distributed. They are used in Bayesian nonparametric models
when the usual exchangeability assumption does not hold. We propose a simple
and general framework to construct dependent DPs by marginalizing and normalizing a single gamma process over an extended space. The result is a set of
DPs, each associated with a point in a space such that neighbouring DPs are more
dependent. We describe Markov chain Monte Carlo inference involving Gibbs
sampling and three different Metropolis-Hastings proposals to speed up convergence. We report an empirical study of convergence on a synthetic dataset and
demonstrate an application of the model to topic modeling through time.
1 Introduction
Bayesian nonparametrics have recently garnered much attention in the machine learning and statistics communities, due to their elegant treatment of infinite dimensional objects like functions and
densities, as well as their ability to sidestep the need for model selection. The Dirichlet process (DP)
[1] is a cornerstone of Bayesian nonparametrics, and forms a basic building block for a wide variety
of extensions and generalizations, including the infinite hidden Markov model [2], the hierarchical
DP [3], the infinite relational model [4], adaptor grammars [5], to name just a few.
By itself, the DP is a model that assumes that data are infinitely exchangeable, i.e. the ordering of
data items does not matter. This assumption is false in many situations and there has been a concerted
effort to extend the DP to more structured data. Much of this effort has focussed on defining priors on
collections of dependent random probability measures. [6] expounded on the notion of dependent
DPs, that is, a dependent set of random measures that are all marginally DPs. The property of
being marginally DP here is both due to a desire to construct mathematically elegant solutions, and
also due to the fact that the DP and its implications as a statistical model, e.g. on the behaviour
of induced clusterings of data or asymptotic consistency, are well-understood. In this paper, we
propose a simple and general framework for the construction of dependent DPs on arbitrary spaces.
The idea is based on the fact that just as Dirichlet distributions can be generated by drawing a set
of independent gamma variables and normalizing, the DP can be constructed by drawing a sample
from a gamma process (ΓP) and normalizing (i.e. it is an example of a normalized random measure
[7, 8]). A ΓP is an example of a completely random measure [9]: it has the property that the random
masses it assigns to disjoint subsets are independent. Furthermore, the restriction of a ΓP to a subset
is itself a ΓP. This implies the following easy construction of a set of dependent DPs: define a ΓP
over an extended space, associate each DP with a different region of the space, and define each DP
by normalizing the restriction of the ΓP on the associated region. This produces a set of dependent
DPs, with the amount of overlap among the regions controlling the amount of dependence. We call
this model a spatial normalized gamma process (SNΓP). More generally, our construction can be
extended to normalizing restrictions of any completely random measure, and we call the resulting
dependent random measures spatial normalized random measures (SNRMs).
In Section 2 we briefly describe the ΓP. Then we describe our construction of the SNΓP in Section 3.
We describe inference procedures based on Gibbs and Metropolis-Hastings sampling in Section 4
and report experimental results in Section 5. We conclude by discussing limitations and possible
extensions of the model as well as related work in Section 6.
2 Gamma Processes
We briefly describe the gamma process (ΓP) here. A good high-level introduction can be found in
[10]. Let (Θ, Ω) be a measure space on which we would like to define a ΓP. Like the DP, realizations
of the ΓP are atomic measures with random weighted point masses. We can visualize the point
masses θ ∈ Θ and their corresponding weights w > 0 as points in a product space Θ × [0, ∞).
Consider a Poisson process over this product space with mean measure

    λ(dθ dw) = μ(dθ) w⁻¹ e⁻ʷ dw.        (1)

Here μ is a measure on the space (Θ, Ω) and is called the base measure of the ΓP. A sample from
this Poisson process will yield an infinite set of atoms {θ_i, w_i}_{i=1}^∞ since ∫_{Θ×[0,∞)} λ(dθ dw) = ∞.
A sample from the ΓP is then defined as

    G = Σ_{i=1}^∞ w_i δ_{θ_i} ~ ΓP(μ).        (2)

It can be shown that the total mass G(S) = Σ_{i=1}^∞ w_i 1(θ_i ∈ S) of any measurable subset S ⊆ Θ is
simply gamma distributed with shape parameter μ(S), thus the natural name gamma process. Dividing G by G(Θ), we get a normalized random measure (a random probability measure). Specifically,
we get a sample from the Dirichlet process DP(μ):

    D = G/G(Θ) ~ DP(μ).        (3)

Here we used an atypical parameterization of the DP in terms of the base measure μ. The usual
(equivalent) parameters of the DP are: strength parameter μ(Θ) and base distribution μ/μ(Θ).
Further, the DP is independent of the normalization: D ⊥ G(Θ).
The gamma process is an example of a completely random measure [9]. This means that for mutually
disjoint measurable subsets S_1, ..., S_n ⊆ Θ the random numbers {G(S_1), ..., G(S_n)} are mutually
independent. Two straightforward consequences will be of importance in the rest of this paper.
Firstly, if S ⊆ Θ then the restriction G′(dθ) = G(dθ ∩ S) onto S is a ΓP with base measure
μ′(dθ) = μ(dθ ∩ S). Secondly, if Θ = Θ_1 × Θ_2 is a two-dimensional space, then the projection
G″(dθ_1) = ∫_{Θ_2} G(dθ_1 dθ_2) onto Θ_1 is also a ΓP with base measure μ″(dθ_1) = ∫_{Θ_2} μ(dθ_1 dθ_2).
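To make the construction concrete, here is a hypothetical sketch (not from the paper) that draws the largest atoms of G ~ ΓP(μ) with the inverse Lévy measure method and normalizes them into an approximate DP sample; the use of the exponential integral E1 as the tail of the Lévy density w⁻¹e⁻ʷ is standard.

```python
import numpy as np
from scipy.special import exp1
from scipy.optimize import brentq

def sample_gamma_process(total_mass, base_sampler, n_atoms=100, rng=None):
    """Sample the n_atoms largest atoms of G ~ GammaP(mu), mu(Theta) = total_mass.

    If t_1 < t_2 < ... are arrival times of a unit-rate Poisson process, the
    weights solving total_mass * E1(w_i) = t_i are the atom weights of the
    gamma process in decreasing order (E1(w) is the mass the Levy density
    w^{-1} e^{-w} assigns to [w, infinity)).
    """
    rng = rng or np.random.default_rng()
    arrivals = np.cumsum(rng.exponential(size=n_atoms))
    weights = np.array([
        brentq(lambda w, t=t: total_mass * exp1(w) - t, 1e-300, 50.0)
        for t in arrivals
    ])
    thetas = base_sampler(n_atoms, rng)    # i.i.d. draws from mu / mu(Theta)
    return thetas, weights

# Normalizing the (truncated) sample gives an approximate draw D ~ DP(mu):
rng = np.random.default_rng(0)
thetas, w = sample_gamma_process(2.0, lambda n, r: r.normal(size=n), rng=rng)
dp_weights = w / w.sum()
```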
3 Spatial Normalized Gamma Processes
In this section we describe our proposal for constructing dependent DPs. Let (Θ, Ω) be a probability
space and T an index space. We wish to construct a set of dependent random measures over (Θ, Ω),
one D_t for each t ∈ T, such that each D_t is marginally DP. Our approach is to define a gamma
process G over an extended space and let each D_t be a normalized restriction/projection of G.
Because restrictions and projections of gamma processes are also gamma processes, each D_t will
be DP distributed.
To this end, let Y be an auxiliary space and for each t ∈ T, let Y_t ⊆ Y be a measurable set. For any
measure μ over Θ × Y define the restricted projection μ_t by

    μ_t(dθ) = ∫_{Y_t} μ(dθ dy) = μ(dθ × Y_t).        (4)

Note that μ_t is a measure over Θ for each t ∈ T. Now let μ be a base measure over the product
space Θ × Y and consider a gamma process

    G ~ ΓP(μ)        (5)

over Θ × Y. Since restrictions and projections of ΓPs are ΓPs as well, G_t will be a ΓP over Θ with
base measure μ_t:

    G_t(dθ) = ∫_{Y_t} G(dθ dy) ~ ΓP(μ_t).        (6)

Now normalizing,

    D_t = G_t/G_t(Θ) ~ DP(μ_t).        (7)

We call the resulting set of dependent DPs {D_t}_{t∈T} spatial normalized gamma processes (SNΓPs).
If the index space is continuous, {D_t}_{t∈T} can equivalently be thought of as a measure-valued
stochastic process. The amount of dependence between D_s and D_t for s, t ∈ T is related to the
amount of overlap between Y_s and Y_t. Generally, the subsets Y_t are defined so that the closer s and
t are in T, the more overlap Y_s and Y_t have and as a result D_s and D_t are more dependent.
3.1 Examples
We give two examples of SNΓPs, both with index set T = R interpreted as the time line. Generalizations to higher dimensional Euclidean spaces Rⁿ are straightforward. Let H be a base distribution
over Θ and α > 0 be a concentration parameter.
The first example uses Y = R as well, with the subsets being Y_t = [t − L, t + L] for some fixed
window length L > 0. The base measure is μ(dθ dy) = αH(dθ)dy/2L. In this case the measure-valued stochastic process {D_t}_{t∈R} is stationary. The base measure μ_t works out to be:

    μ_t(dθ) = ∫_{t−L}^{t+L} αH(dθ) dy/2L = αH(dθ),        (8)

so that each D_t ~ DP(αH) with concentration parameter α and base distribution H. We can
interpret this SNΓP as follows. Each atom in the overall ΓP G has a time-stamp y and a time-span
of [y − L, y + L], so that it will only appear in the DPs D_t within the window t ∈ [y − L, y + L]. As
a result, two DPs D_s and D_t will share more atoms the closer s and t are to each other, and no atoms
if |s − t| > 2L. Further, the dependence between D_s and D_t depends on |s − t| only, decreasing
with increasing |s − t| and independent if |s − t| > 2L.
The second example generalizes the first one by allowing different atoms to have different window
lengths. Each atom now has a time-stamp y and a window length l, so that it appears in DPs in the
window [y − l, y + l]. Our auxiliary space is thus Y = R × [0, ∞), with Y_t = {(y, l) : |y − t| ≤ l}
(see Figure 1). Let ν(dl) be a distribution over window lengths in [0, ∞). We use the base measure
μ(dθ dy dl) = αH(dθ) dy ν(dl)/2l. The restricted projection is then

    μ_t(dθ) = ∫∫_{|y−t|≤l} αH(dθ) dy ν(dl)/2l = αH(dθ) ∫_0^∞ ν(dl) ∫_{t−l}^{t+l} dy/2l = αH(dθ),        (9)

so that each D_t is again simply DP(αH). Now D_s and D_t will always be dependent, with the amount
of dependence decreasing as |s − t| increases.
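Given a truncated sample of the overall ΓP (e.g. from the sketch in Section 2, extended with per-atom time-stamps and window lengths), the local DPs of this second example are obtained by restriction and normalization. A hypothetical sketch, assuming every time has at least one visible atom:

```python
import numpy as np

def local_dp_weights(times, atom_w, atom_y, atom_l):
    """Weights of each local DP D_t in the second SNGammaP example.

    An atom with time-stamp y and window length l appears in D_t iff
    |y - t| <= l; D_t normalizes the restriction of G to those atoms.
    """
    t = np.asarray(times, dtype=float)[:, None]
    visible = np.abs(np.asarray(atom_y)[None, :] - t) <= np.asarray(atom_l)[None, :]
    w = visible * np.asarray(atom_w)[None, :]     # restricted weights, shape (m, n_atoms)
    return w / w.sum(axis=1, keepdims=True)       # normalize each row into D_t
```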
3.2 Interpretation as Mixtures of DPs
Even though the SNΓP as described above defines an uncountably infinite number of DPs, in practice
we will only have observations at a finite number of times, say t_1, ..., t_m. We define R as the
smallest collection of disjoint regions of Y such that each Y_{t_j} is a union of subsets in R. Thus
R = { ∩_{j=1}^m S_j : S_j = Y_{t_j} or S_j = Y\Y_{t_j}, with at least one S_j = Y_{t_j} and ∩_{j=1}^m S_j ≠ ∅ }. For
1 ≤ j ≤ m let R_j be the collection of regions in R contained in Y_{t_j}, so that ∪_{R∈R_j} R = Y_{t_j}. For
each R ∈ R define

    G_R(dθ) = G(dθ × R).        (10)

We see that each G_R is a ΓP with base measure μ_R(dθ) = μ(dθ × R). Normalizing, D_R =
G_R/G_R(Θ) ~ DP(μ_R), with D_R ⊥ D_{R′} for distinct R, R′ ∈ R. Now

    D_{t_j}(dθ) = Σ_{R∈R_j} [ G_R(Θ) / Σ_{R′∈R_j} G_{R′}(Θ) ] D_R(dθ),        (11)
Figure 1: The extended space Y × L over which the overall ΓP is defined in the second example. Not
shown is the Θ-space over which the DPs are defined. Also not shown is the fourth dimension W
needed to define the Poisson process used to construct the ΓP. t_1, t_2, t_3 ∈ Y are three times at which
observations are present. The subset Y_{t_j} corresponding to each t_j is the triangular area touching t_j.
The regions in R are the six areas formed by various intersections of the triangular areas.
so each D_{t_j} is a mixture where each component D_R is drawn independently from a DP. Further, the
mixing proportions are Dirichlet distributed and independent from the components by virtue of each
G_R(Θ) being gamma distributed and independent from D_R. Thus we have the following equivalent
construction for a SNΓP:

    D_R ~ DP(μ_R),    g_R ~ Gamma(μ_R(Θ))    for R ∈ R,
    D_{t_j} = Σ_{R∈R_j} π_{jR} D_R,    π_{jR} = g_R / Σ_{R′∈R_j} g_{R′}    for R ∈ R_j.        (12)

Note that the DPs in this construction are all defined only over Θ, and references to the auxiliary
space Y and the base measure μ are only used to define the individual base measures μ_R and the
shape parameters of the g_R's. Figure 1 shows the regions for the second example corresponding to
observations at three times.
The mixture of DPs construction is related to the hierarchical Dirichlet process defined in [11] (not
the one defined by Teh et al [3]). The difference is that the parameters of the prior over the mixing
proportions exactly matches the concentration parameters of the individual DPs. A consequence of
this is that each mixture Dtj is now conveniently also a DP.
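The equivalent construction (12) translates directly into a generative sketch; the `sample_dp` routine (e.g. a truncated stick-breaking draw) is an assumed helper rather than shown here.

```python
import numpy as np

def sample_mixture_of_dps(regions_per_time, region_mass, sample_dp, rng=None):
    """Generative construction of Eq. (12).

    regions_per_time[j] : region ids whose union is Y_{t_j} (the set R_j)
    region_mass[R]      : mu_R(Theta), the shape parameter of g_R
    sample_dp(m, rng)   : returns a draw D_R ~ DP(mu_R); assumed helper
    """
    rng = rng or np.random.default_rng()
    g = {R: rng.gamma(shape=m) for R, m in region_mass.items()}   # g_R ~ Gamma(mu_R(Theta))
    D = {R: sample_dp(m, rng) for R, m in region_mass.items()}    # D_R ~ DP(mu_R)
    mixtures = []
    for regions in regions_per_time:
        total = sum(g[R] for R in regions)
        pi = {R: g[R] / total for R in regions}   # Dirichlet-distributed mixing weights
        mixtures.append((pi, {R: D[R] for R in regions}))
    return mixtures
```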
4 Inference in the SNΓP
The mixture of DPs interpretation of the SNΓP makes sampling from the model, and consequently
inference via Markov chain Monte Carlo sampling, easy. In what follows, we describe both Gibbs
sampling and Metropolis-Hastings based updates for a hierarchical model in which the dependent
DPs act as prior distributions over a collection of infinite mixture models. Formally, our observations
now lie in a measurable space (X, Σ) equipped with a set of probability measures F_θ parametrized
by θ ∈ Θ. Observation i at time t_j is denoted x_{ji}, lies in region r_{ji} and is drawn from the mixture
component parametrized by θ_{ji}. Thus to augment (12), we have

    r_{ji} ~ Mult({π_{jR} : R ∈ R_j}),    θ_{ji} ~ D_{r_{ji}},    x_{ji} ~ F_{θ_{ji}},        (13)

where r_{ji} = R with probability π_{jR} for each R ∈ R_j. In words, we first pick a region r_{ji} from the
set R_j, then a mixture component θ_{ji}, followed by drawing x_{ji} from the mixture distribution.
4.1 Gibbs Sampling
We derive a Gibbs sampler for the SNΓP where the region DPs D_R are integrated out and replaced
by Chinese restaurants. Let c_{ji} denote the index of the cluster in D_{r_{ji}} to which observation x_{ji} is
assigned. We also assume that the base distribution H is conjugate to the mixture distributions F_θ
so that the cluster parameters are integrated out as well. The Gibbs sampler iteratively resamples the
remaining latent variables: the r_{ji}'s, c_{ji}'s and g_R's. In the following, let m_{jRc} be the number of observations
from time t_j assigned to cluster c in the DP D_R in region R, and let f_{Rc}^{−ji}(x_{ji}) be the density of
observation x_{ji} conditioned on the other variables currently assigned to cluster c in D_R, with its
cluster parameters integrated out. We denote marginal counts with dots; for example m_{·Rc} is the
number of observations (over all times) assigned to cluster c in region R. The superscript −ji means
observation x_{ji} is excluded.
r_{ji} and c_{ji} are resampled together; their conditional joint probability given the other variables is:

    p(r_{ji} = R, c_{ji} = c | others)  ∝  [ g_R / Σ_{r∈R_j} g_r ] · [ m_{·Rc}^{−ji} / ( m_{·R·}^{−ji} + μ_R(Θ) ) ] · f_{Rc}^{−ji}(x_{ji}).        (14)

The probability of x_{ji} joining a new cluster in region R is

    p(r_{ji} = R, c_{ji} = c_new | others)  ∝  [ g_R / Σ_{r∈R_j} g_r ] · [ μ_R(Θ) / ( m_{·R·}^{−ji} + μ_R(Θ) ) ] · f_{R c_new}(x_{ji}),        (15)
where R ∈ R_j and c denotes the index of an existing cluster in region R. The updates of the g_R's
are more complicated as they are coupled and not of standard form:

    p({g_R}_{R∈R} | others)  ∝  [ Π_{R∈R} g_R^{μ_R(Θ)+m_{·R·}−1} e^{−g_R} ] · Π_j ( Σ_{R∈R_j} g_R )^{−m_{j··}}.        (16)

To sample the g_R's we introduce auxiliary variables {A_j} to simplify the rightmost term above. In
particular, using the Gamma identity

    ( Σ_{R∈R_j} g_R )^{−m_{j··}} = (1/Γ(m_{j··})) ∫_0^∞ A_j^{m_{j··}−1} exp( −Σ_{R∈R_j} g_R A_j ) dA_j ,        (17)

we have that (16) is the marginal over {g_R}_{R∈R} of the distribution:

    q({g_R}_{R∈R}, {A_j})  ∝  Π_{R∈R} g_R^{μ_R(Θ)+m_{·R·}−1} e^{−g_R} · Π_j A_j^{m_{j··}−1} exp( −Σ_{R∈R_j} g_R A_j ).        (18)
Now we can Gibbs sample the g_R's and A_j's:

    g_R | others ~ Gamma( μ_R(Θ) + m_{·R·} , 1 + Σ_{j∈J_R} A_j ),        (19)
    A_j | others ~ Gamma( m_{j··} , Σ_{R∈R_j} g_R ).        (20)

Here J_R is the collection of indices j such that R ∈ R_j.
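The conditionals (19) and (20) amount to a few lines of code; a minimal sketch (hypothetical dict-based data structures, with rng a numpy Generator):

```python
import numpy as np

def gibbs_sweep_g_A(g, A, region_mass, m_region, m_time, regions_per_time, J_R, rng):
    """One Gibbs sweep over the g_R's (Eq. 19) and A_j's (Eq. 20).

    g[R], A[j]          : current values
    region_mass[R]      : mu_R(Theta)
    m_region[R]         : m_{.R.}, observations assigned to region R
    m_time[j]           : m_{j..}, observations at time t_j
    regions_per_time[j] : the set R_j of region ids
    J_R[R]              : time indices j with R in R_j
    """
    for R in g:
        rate = 1.0 + sum(A[j] for j in J_R[R])
        g[R] = rng.gamma(shape=region_mass[R] + m_region[R]) / rate  # Gamma(shape, 1/rate)
    for j in A:
        rate = sum(g[R] for R in regions_per_time[j])
        A[j] = rng.gamma(shape=m_time[j]) / rate
    return g, A
```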
4.2 Metropolis-Hastings Proposals
To improve convergence and mixing of the Markov chain, we introduce three Metropolis-Hastings
(MH) proposals in addition to the Gibbs sampling updates described above. These propose nonincremental changes in the assignment of observations to clusters and regions, allowing the Markov
chain to traverse to different modes that are hard to reach using Gibbs sampling.
The first proposal (Algorithm 1) proceeds like the split-merge proposal of [12]. It either splits an
existing cluster in a region into two new clusters in the same region, or merges two existing clusters
in a region into a single cluster. To improve the acceptance probability, we use 5 rounds of restricted
Gibbs sampling [12].
The second proposal (Algorithm 2) seeks to move a picked cluster from one region to another.
The new region is chosen from a region neighbouring the current one (for example in Figure 1
the neigbors are the four regions diagonally neighbouring the current one). To improve acceptance
probability we also resample the gR ?s associated with the current and proposed regions. The move
can be invalid if the cluster contains an observation from a time point not associated with the new
region; in this case the move is simply rejected.
The third proposal (Algorithm 3) we considered seeks to combine into one step what would take
two steps under the previous two proposals: splitting a cluster and moving it to a new region (or the
reverse: moving a cluster into a new region and merging it with a cluster therein).
Algorithm 1 Split and Merge in the Same Region (MH1)
1: Let S_0 be the current state of the Markov chain.
2: Pick a region R with probability proportional to m_{·R·} and two distinct observations in R
3: Construct a launch state S′ by creating two new clusters, each containing one of the two observations, and running restricted Gibbs sampling
4: if the two observations belong to the same cluster in S_0 then
5:    Propose split: run one last round of restricted Gibbs sampling to reach the proposed state S_1
6: else
7:    Propose merge: the proposed state S_1 is the (unique) state merging the two clusters
8: end if
9: Accept the proposed state S_1 according to acceptance probability min( 1, [p(S_1) q(S′ → S_0)] / [p(S_0) q(S′ → S_1)] ), where
p(S) is the posterior probability of state S and q(S′ → S) is the probability of proposing state
S from the launch state S′.
Algorithm 2 Move (MH2)
1: Pick a cluster c in region R_0 with probability proportional to m_{·R_0 c}
2: Pick a region R_1 neighbouring R_0 and propose moving c to R_1
3: Propose new weights g_{R_0}, g_{R_1} by sampling both from (19)
4: Accept or reject the move
Algorithm 3 Split/Merge Move (MH3)
1: Pick a region R_0, a cluster c contained in R_0, and a neighbouring region R_1 with probability
proportional to the number of observations in c that cannot be assigned to a cluster in R_1
2: if c contains observations that can be moved to R_1 then
3:    Propose assigning these observations to a new cluster in R_1
4: else
5:    Pick a cluster from those in R_1 and propose merging it into c
6: end if
7: Propose new weights g_{R_0}, g_{R_1} by sampling from (19)
8: Accept or reject the proposal
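All three proposals share the same accept/reject step; here is a generic sketch in log space (a hypothetical helper, not from the paper):

```python
import numpy as np

def mh_accept(log_p_new, log_p_old, log_q_fwd, log_q_rev, rng):
    """Metropolis-Hastings test: accept with prob min(1, p_new*q_rev / (p_old*q_fwd)).

    Working in log space avoids numerical underflow in the posterior ratios
    that arise in Algorithms 1-3.
    """
    log_alpha = (log_p_new + log_q_rev) - (log_p_old + log_q_fwd)
    return np.log(rng.random()) < min(0.0, log_alpha)
```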
5 Experiments
Synthetic data In the first of our experiments, we artificially generated 60 data points at each of
5 times by sampling from a mixture of 10 Gaussians. Each component was assigned a timespan,
ranging from a single time to the entire range of five times. We modelled this data as a collection of
five DP mixtures of Gaussians, with a SNΓP prior over the five dependent DPs. We used the set-up as
described in the second example. To encourage clusters to be shared across times (i.e. to avoid similar clusters with non-overlapping timespans), we chose the distribution over window lengths ν(l)
to give larger probabilities to larger timespans. Even in this simple model, Gibbs sampling alone
usually did not converge to a good optimum, remaining stuck around local maxima. Figure 2 shows
the evolution of the log-likelihood for 5 different samplers: plain Gibbs sampling, Gibbs sampling
augmented with each of MH proposals 1, 2 and 3, and finally a sampler that interleaved all three
MH samplers with Gibbs sampling. Not surprisingly, the complete sampler converged fastest, with
Gibbs sampling with MH-proposal 2 (Gibbs+MH2) performing nearly as well. Gibbs+MH1 seemed
to converge no faster than just Gibbs sampling, with Gibbs+MH3 giving performance somewhere in
between. The fact that Gibbs+MH2 performs so well can be explained by the easy clustering structure of the problem, so that exploring region assignments of clusters rather than cluster assignments
of observations was the challenge faced by the sampler (note its high acceptance rate in Figure 4).
To demonstrate how the additional MH proposals help mixing, we examined how the cluster assignment of observations varied over iterations. At each iteration, we construct a 600 by 600 binary
matrix, with element (i, j) being 1 if observations i and j are assigned to the same cluster. In
Figure 3, we plot the average L1 difference between matrices at different iteration lags. Somewhat counterintuitively, Gibbs+MH1 does much better than Gibbs sampling with all MH proposals.
Figure 4: Acceptance rates of the MH proposals for Gibbs+MH1+MH2+MH3 after burn-in (percentages).

    Proposal          Synthetic   NIPS
    MH-Proposal 1     0.51        0.6621
    MH-Proposal 2     11.7        0.6548
    MH-Proposal 3     0.22        0.0249

Figure 2: log-likelihoods (the coloured lines are ordered at iteration 80 like the legend). [Plot omitted;
legend: Gibbs+MH1+MH2+MH3, Gibbs+MH2, Gibbs+MH3, Gibbs+MH1, Gibbs.]

Figure 5: Evolution of the timespan of a cluster. From top to bottom: Gibbs+MH1+MH2+MH3,
Gibbs+MH2 and Gibbs+MH1 (pink), Gibbs+MH3 (black) and Gibbs (magenta). [Plot omitted.]

Figure 3: Dissimilarity in clustering structure vs lag (the coloured lines are ordered like the legend).
[Plot omitted; legend: Gibbs+MH1, Gibbs+MH1+MH2+MH3, Gibbs+MH3, Gibbs+MH2, Gibbs.]
This is because the latter is simultaneously exploring the region assignment of clusters as well. In
Gibbs+MH1, clusters split and merge frequently since they stay in the same regions, causing the
cluster matrix to vary rapidly. In Gibbs+MH1+MH2+MH3, after a split the new clusters often move
into separate regions; so it takes longer before they can merge again. Nonetheless, this demonstrates
the importance of split/merge proposals like MH1 and MH3; [12] studied this in greater detail. We
next examined how well the proposals explore the region assignment of clusters. In particular, at
each step of the Markov chain, we picked the cluster with mean closest to the mean of one of the true
Gaussian mixture components, and tracked how its timespan evolved. Figure 5 shows that without
MH proposal 2, the clusters remain essentially frozen in their initial regions.
NIPS dataset For our next experiment we modelled the proceedings of the first 13 years of NIPS.
The number of word tokens was about 2 million spread over 1740 documents, with about 13000
unique words. We used a model that involves both the SN?P (to capture changes in topic distributions across the years) and the hierarchical Dirichlet process (HDP) [3] (to capture differences
among documents). Each document is modeled using a different DP, with the DPs in year i sharing
the same base distribution Di . On top of this, we place a SN?P (with structure given by the second
example in Section 3.1) prior on {Di }13
i=1 . Consequently, each topic is associated with a distribution
over words, and has a particular timespan. Each document in year i is a mixture over the topics
whose timespan include year i. Our model allows statistical strength to be shared in a more refined
manner than the HDP. Instead of all DPs having the same base distribution, we have 13 dependent
base distributions drawn from the SN?P. The concentration parameters of our DPs were chosen to
encourage shared topics, their magnitude chosen to produce about a 100 topics over the whole corpus on average. Figure 6 shows some of the topics identified by the model and their timespans. For
inference, we used Gibbs sampling, interleaved with all three MH proposals to update the SN?P. the
Markov chain was initialized randomly except that all clusters were assigned to the top-most region
(spanning the 13 years). We calculated per-word perplexity [3] on test documents (about half of all
documents, withheld during training). We obtained an average perplexity of 3023.4, as opposed to
about 3046.5 for the HDP.
Figure 6: Inferred topics with their timespans (the horizontal lines, spanning years 1-13). In parentheses are the number of words assigned to each topic. On the right are the top ten most probable
words in the topics. [Plot omitted; the recovered topic labels, word counts and most probable words:]

    Topic A (173268 words): function, model, data, error, learning, probability, distribution
    Topic B (98342 words):  model, visual, figure, image, motion, object, field
    Topic C (60385 words):  network, memory, neural, state, input, matrix, hopfield
    Topic D (20290 words):  rules, rule, language, tree, representations, stress, grammar
    Topic E (7021 words):   classifier, genetic, memory, classification, tree, algorithm, data
    Topic F (3223 words):   map, brain, fish, electric, retinal, eye, tectal
    Topic G (5334 words):   recurrent, time, context, sequence, gamma, tdnn, sequences
    Topic H (2074 words):   chain, protein, region, mouse, human, markov, sequence
    Topic I (780 words):    routing, load, projection, forecasting, shortest, demand, packet
Computationally, the 3 MH steps are much cheaper than a round of Gibbs sampling. When trying to
split a large cluster (or merge 2 large clusters), MH proposal 1 can still be fairly expensive because
of the rounds of restricted Gibbs sampling. MH proposal 3 does not face this problem. However,
we find that after the burn-in period it tends to have a low acceptance rate. We believe we need to
redesign MH proposal 3 to produce more intelligent splits to increase the acceptance rate. Finally,
MH-proposal 2 is the cheapest, both in terms of computation and book-keeping, and has reasonably
high acceptance rate. We ran MH-proposal 2 a hundred times between successive Gibbs sampling
updates. The acceptance rates of the MH proposals (given in Figure 4) are slightly lower than those
reported by [12], where a plain DP mixture model was applied to a simple synthetic data set, and
where split/merge acceptance rates were on the order of 1 to 5 percent.
6 Discussion
We described a conceptually simple and elegant framework for the construction of dependent DPs
based on normalized gamma processes. The resulting collection of random probability measures has
a number of useful properties: the marginal distributions are DPs and the weights of shared atoms
can vary across DPs. We developed auxiliary variable Gibbs and Metropolis-Hastings samplers for
the model and applied it to time-varying topic modelling where each topic has its own time-span.
Since [6] there has been strong interest in building dependent sets of random measures. Interestingly,
the property of each random measure being marginally DP, as originally proposed by [6], is often not
met in the literature, where dependent stochastic processes are defined through shared and random
parameters [3, 14, 15, 11]. Useful dependent DPs had not been found [16] until recently, when
a flurry of models were proposed [17, 18, 19, 20, 21, 22, 23]. However most of these proposals
have been defined only for the real line (interpreted as the time line) and not for arbitrary spaces.
[24, 25, 26, 13] proposed a variety of spatial DPs where the atoms and weights of the DPs are
dependent through Gaussian processes. A model similar to ours was proposed recently in [23],
using the same basic idea of introducing dependencies between DPs through spatially overlapping
regions. This model differs from ours in the content of these shared regions (breaks of a stick in that
case vs a (restricted) Gamma process in ours) and the construction of the DPs (they use the stick
breaking construction of the DP, we normalize the restricted Gamma process). Consequently, the
nature of the dependencies between the DPs differs; for instance, their model cannot be interpreted
as a mixture of DPs like ours.
There are a number of interesting future directions. First, we can allow, at additional complexity, the
locations of atoms to vary using the spatial DP approach [13]. Second, more work still needs to be done
to improve inference in the model, e.g. using a more intelligent MH proposal 3. Third, although
we have only described spatial normalized gamma processes, it should be straightforward to extend
the approach to spatial normalized random measures [7, 8]. Finally, further investigations into the
properties of the SNΓP and its generalizations, including the nature of the dependency between DPs
and asymptotic behavior, are necessary for a complete understanding of these processes.
References
[1] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1(2):209-230, 1973.
[2] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems, volume 14, 2002.
[3] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[4] C. Kemp, J. B. Tenenbaum, T. L. Griffiths, T. Yamada, and N. Ueda. Learning systems of concepts with an infinite relational model. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 21, 2006.
[5] M. Johnson, T. L. Griffiths, and S. Goldwater. Adaptor grammars: A framework for specifying compositional nonparametric Bayesian models. In Advances in Neural Information Processing Systems, volume 19, 2007.
[6] S. MacEachern. Dependent nonparametric processes. In Proceedings of the Section on Bayesian Statistical Science. American Statistical Association, 1999.
[7] L. E. Nieto-Barajas, I. Pruenster, and S. G. Walker. Normalized random measures driven by increasing additive processes. Annals of Statistics, 32(6):2343-2360, 2004.
[8] L. F. James, A. Lijoi, and I. Pruenster. Bayesian inference via classes of normalized random measures. ICER Working Papers - Applied Mathematics Series 5-2005, ICER - International Centre for Economic Research, April 2005.
[9] J. F. C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59-78, 1967.
[10] J. F. C. Kingman. Poisson Processes. Oxford University Press, 1993.
[11] P. Müller, F. A. Quintana, and G. Rosner. A method for combining inference across related nonparametric Bayesian models. Journal of the Royal Statistical Society, 66:735-749, 2004.
[12] S. Jain and R. M. Neal. A split-merge Markov chain Monte Carlo procedure for the Dirichlet process mixture model. Technical report, Department of Statistics, University of Toronto, 2004.
[13] J. A. Duan, M. Guindani, and A. E. Gelfand. Generalized spatial Dirichlet process models. Biometrika, 94(4):809-825, 2007.
[14] A. Rodríguez, D. B. Dunson, and A. E. Gelfand. The nested Dirichlet process. Technical Report 2006-19, Institute of Statistics and Decision Sciences, Duke University, 2006.
[15] D. B. Dunson, Y. Xue, and L. Carin. The matrix stick-breaking process: Flexible Bayes meta analysis. Technical Report 07-03, Institute of Statistics and Decision Sciences, Duke University, 2007. http://ftp.isds.duke.edu/WorkingPapers/07-03.html.
[16] N. Srebro and S. Roweis. Time-varying topic models using dependent Dirichlet processes. Technical Report UTML-TR-2005-003, Department of Computer Science, University of Toronto, 2005.
[17] J. E. Griffin and M. F. J. Steel. Order-based dependent Dirichlet processes. Journal of the American Statistical Association, Theory and Methods, 101:179-194, 2006.
[18] J. E. Griffin. The Ornstein-Uhlenbeck Dirichlet process and other time-varying processes for Bayesian nonparametric inference. Technical report, Department of Statistics, University of Warwick, 2007.
[19] F. Caron, M. Davy, and A. Doucet. Generalized Polya urn for time-varying Dirichlet process mixtures. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, volume 23, 2007.
[20] A. Ahmed and E. P. Xing. Dynamic non-parametric mixture models and the recurrent Chinese restaurant process. In Proceedings of The Eighth SIAM International Conference on Data Mining, 2008.
[21] J. E. Griffin and M. F. J. Steel. Bayesian nonparametric modelling with the Dirichlet process regression smoother. Technical report, University of Kent and University of Warwick, 2008.
[22] J. E. Griffin and M. F. J. Steel. Generalized spatial Dirichlet process models. Technical report, University of Kent and University of Warwick, 2009.
[23] Y. Chung and D. B. Dunson. The local Dirichlet process. Annals of the Institute of Mathematical Statistics, 2009. To appear.
[24] S. N. MacEachern, A. Kottas, and A. E. Gelfand. Spatial nonparametric Bayesian models. In Proceedings of the 2001 Joint Statistical Meetings, 2001.
[25] C. E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In Advances in Neural Information Processing Systems, volume 14, 2002.
[26] A. E. Gelfand, A. Kottas, and S. N. MacEachern. Bayesian nonparametric spatial modeling with Dirichlet process mixing. Journal of the American Statistical Association, 100(471):1021-1035, 2005.
Philipp Berens, Sebastian Gerwinn, Alexander S. Ecker and Matthias Bethge
Max Planck Institute for Biological Cybernetics
Center for Integrative Neuroscience, University of T?ubingen
Computational Vision and Neuroscience Group
Spemannstrasse 41, 72076, T?ubingen, Germany
[email protected]
Abstract
The relative merits of different population coding schemes have mostly been analyzed in the framework of stimulus reconstruction using Fisher Information. Here,
we consider the case of stimulus discrimination in a two alternative forced choice
paradigm and compute neurometric functions in terms of the minimal discrimination error and the Jensen-Shannon information to study neural population codes.
We first explore the relationship between minimum discrimination error, JensenShannon Information and Fisher Information and show that the discrimination
framework is more informative about the coding accuracy than Fisher Information as it defines an error for any pair of possible stimuli. In particular, it includes
Fisher Information as a special case. Second, we use the framework to study population codes of angular variables. Specifically, we assess the impact of different
noise correlation structures on coding accuracy in long versus short decoding
time windows. That is, for long time windows we use the common Gaussian noise
approximation. To address the case of short time windows we analyze the Ising
model with identical noise correlation structure. In this way, we provide a new
rigorous framework for assessing the functional consequences of noise correlation structures for the representational accuracy of neural population codes that is
in particular applicable to short-time population coding.
1 Introduction
The relative merits of different population coding schemes have mostly been studied (e.g. [1, 12],
for a review see [2]) in the framework of stimulus reconstruction (figure 1a), where the performance
of a code is judged on the basis of the mean squared error E[(θ − θ̂)²]. That is, if a stimulus θ is
encoded by a population of N neurons with tuning curves f_i, we ask how well, on average, can an
estimator reconstruct the true value of the presented stimulus based on the neural responses r, which
were generated by the density p(r|θ). The average reconstruction error can be written as

    E_{θ,r}[(θ − θ̂(r))²] = E_θ[Var_{θ̂|θ}] + E_θ[b_θ²].

Here Var_{θ̂|θ} = E_r[(θ̂(r) − E_r[θ̂(r)|θ])² | θ] denotes the error variance and b_θ = E_r[θ̂(r)|θ] − θ the bias of the
estimator θ̂. For the sake of analytical tractability, most studies have employed Fisher Information
(FI) (e.g. [1, 12])

    J_θ = −⟨ ∂²/∂θ² log p(r|θ) ⟩

to bound the conditional error variance Var_{θ̂|θ} of an unbiased estimator from below according to the
Cramér-Rao bound:

    Var_{θ̂|θ} ≥ 1/J_θ .
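For a concrete numerical check, the sketch below (hypothetical Python; the independent-Poisson population and von Mises tuning curves are standard textbook assumptions, not taken from this paper) evaluates J_θ via the well-known identity J(θ) = Σ_i f_i′(θ)²/f_i(θ) for independent Poisson neurons:

```python
import numpy as np

def fisher_information_poisson(theta, tuning, dtheta=1e-4):
    """Fisher information of an independent-Poisson population.

    For independent Poisson neurons with tuning curves f_i,
    J(theta) = sum_i f_i'(theta)^2 / f_i(theta); the derivatives are
    estimated here by central differences.
    """
    f = np.array([fi(theta) for fi in tuning])
    df = np.array([(fi(theta + dtheta) - fi(theta - dtheta)) / (2 * dtheta)
                   for fi in tuning])
    return np.sum(df ** 2 / f)

# Example: N von Mises tuning curves tiling the circle (hypothetical setup).
N = 16
centers = np.linspace(0, 2 * np.pi, N, endpoint=False)
tuning = [lambda th, c=c: 5.0 * np.exp(2.0 * (np.cos(th - c) - 1.0)) for c in centers]
J = fisher_information_poisson(0.3, tuning)
```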
Figure 1: Illustration of the two frameworks for studying population codes. a. In stimulus reconstruction, an estimator tries to reconstruct the orientation of a stimulus based on a noisy neural response. The quality of a code is based on the average error of this estimator. b. In stimulus discrimination, an ideal observer needs to choose one of two possible stimuli based on a noisy neural response (2AFC task). c. A neurometric function shows the error $E$ as a function of $\delta\theta$, the difference between a reference direction $\theta_1$ and a second direction $\theta_2$. This framework is often used in psychophysical studies.
For the comparison of different coding schemes, it is important that an estimator exists which can
actually attain this lower bound. For short time windows and certain types of tuning functions, this
may not always be the case [4]. In particular, it is unclear how different population coding schemes
affect the fidelity with which a population of binary neurons can encode a stimulus variable.
1.1 A new approach for the analysis of population coding
Here we view the population coding problem from a different perspective: We consider the case of stimulus discrimination in a two alternative forced choice paradigm (2AFC, figure 1b) with equally probable stimuli and compute two natural measures of coding accuracy: (1) the minimal discrimination error $E(\theta_1, \theta_2)$ of an ideal observer classifying a stimulus $s$ based on the response distribution as either being $\theta_1$ or $\theta_2$ and (2) the Jensen-Shannon information $I_{JS}$ between the response distributions $p(r|\theta_1)$ and $p(r|\theta_2)$. The minimal discrimination error is achieved by the Bayes optimal classifier $\hat s = \operatorname{argmax}_s p(s|r)$ where $s \in \{\theta_1, \theta_2\}$ and the prior distribution $p(s) = \tfrac{1}{2}$. It is given by
\[ E(\theta_1, \theta_2) = \int \min\big(p(s = \theta_1, r),\, p(s = \theta_2, r)\big)\, dr = \frac{1}{2}\int \min\big(p(r|\theta_1),\, p(r|\theta_2)\big)\, dr \tag{1} \]
and the Jensen-Shannon Information [13] is defined as
\[ I_{JS}(\theta_1, \theta_2) = \frac{1}{2} D_{KL}\big[p(r|\theta_1)\,\|\,p(r)\big] + \frac{1}{2} D_{KL}\big[p(r|\theta_2)\,\|\,p(r)\big], \tag{2} \]
where $p(r) = \sum_{s \in \{\theta_1,\theta_2\}} p(s)p(r|s) = \frac{1}{2}(p(r|\theta_1) + p(r|\theta_2))$ is the arithmetic average between the two densities, which in our case is the same as the marginal distribution. $D_{KL}[q_1\|q_2] = \int q_1(x)\log\frac{q_1(x)}{q_2(x)}\,dx$ is the Kullback-Leibler divergence. $I_{JS}$ is an interesting measure of coding accuracy since it directly measures the mutual information between the neural responses and the "class label", i.e. the stimulus identity. By observing a population response pattern $r$, the uncertainty (in terms of entropy) about the stimulus is reduced by
\[ \mathrm{MI}(r, s) = \sum_s p(s)\int p(r|s)\log\frac{p(r|s)}{\sum_{s'} p(r|s')p(s')}\, dr = I_{JS}, \]
with prior distribution as above. In the following, we will restrict our analysis to the special case of shift-invariant population codes for angular variables and compute neurometric functions $E(\delta\theta)$ and $I_{JS}(\delta\theta)$ (figure 1c) by setting $\theta_1 = \theta$ and $\theta_2 = \theta + \delta\theta$. In the limit of large populations, the dependence of these curves on $\theta$ can be ignored.
Figure 2: a. Illustration of equation 5: The entropy $H[E]$ (black) intersects $1 - I_{JS}$ (grey) at $E^*$ (dashed). Because of Fano's inequality, $E \geq E^*$. b. Functional form of the bounds in equations 4 and 5 (black). Our lower bound is tighter than the lower bound proposed in [13] (grey). c. Illustration of the connections between the proposed measures of coding accuracy. The minimal discrimination error $E(\delta\theta)$ (red) is shown as a neurometric curve as a function of $\delta\theta$ and is bounded in terms of the Jensen-Shannon information $I_{JS}(\delta\theta)$ via equations 4 and 5 (black). Fisher Information links to $E$ via equation 3 and the bounds imposed by $I_{JS}$ (grey). This approximation is only valid for small $\delta\theta$. The computations have been carried out for a population of $N = 50$ neurons, with average correlations $\bar\rho = .15$ and correlation structure as in figure 3e.
1.2 Computing E and I_JS
While the integrals in equations (1) and (2) often cannot be solved, they are relatively easy to evaluate numerically using Monte-Carlo techniques [10]. For the minimal discrimination error, we use
\[ E(\delta\theta) = \frac{1}{2}\int \min\big(p(r|\theta),\, p(r|\theta+\delta\theta)\big)\, dr \;\approx\; \frac{1}{2M}\sum_{i=1}^{M} \frac{\min\big(p(r^{(i)}|\theta),\, p(r^{(i)}|\theta+\delta\theta)\big)}{p(r^{(i)})}, \]
where $r^{(i)}$ is one of $M$ samples, drawn from the mixture distribution $p(r) = \frac{1}{2}\big(p(r|\theta) + p(r|\theta+\delta\theta)\big)$. To approximate $I_{JS}$, we evaluate each $D_{KL}$ term separately as
\[ D_{KL}\big[p(r|\theta)\,\|\,p(r)\big] = \int p(r|\theta)\log\frac{p(r|\theta)}{p(r)}\, dr \;\approx\; \frac{1}{M}\sum_{i=1}^{M}\Big(\log p(r^{(i)}|\theta) - \log p(r^{(i)})\Big), \]
where we draw samples $r^{(i)}$ from $p(r|\theta)$. We use an analogous expression for $D_{KL}[p(r|\theta+\delta\theta)\,\|\,p(r)]$ and plug these estimates into equation 2. This scheme provides consistent estimates of the desired quantities. For all simulations below we used $M = 10^5$ samples.
2 Links between the proposed measures
In this section, we link the Fisher Information $J_\theta$ of a population code $p(r|\theta)$ to the minimum discrimination error $E(\delta\theta)$ and the Jensen-Shannon Information $I_{JS}(\delta\theta)$ in the 2AFC paradigm. First, we link Fisher Information to Jensen-Shannon information $I_{JS}$. Second, we bound the minimum discrimination error in terms of the Jensen-Shannon information.
2.1 From Fisher Information to Jensen-Shannon Information
In order to obtain a relationship between $I_{JS}$ and Fisher Information, we use an expression already derived in [7], where $p(r|\theta + \delta\theta)$ is expanded up to second order in $\delta\theta$, which yields:
\[ I_{JS}(\delta\theta) \approx \frac{1}{8}(\delta\theta)^2 J_\theta. \tag{3} \]
Figure 3: Illustration of the model. Tuning functions: a. Cosine-type tuning functions with rates between 5 and 50 Hz. b. Box-like tuning functions with matched minimal and maximal firing rates. Cosine tuning functions resemble the orientation tuning functions of many cortical neurons. They are characterized by approximately constant Fisher Information independent of the stimulus orientation. Box-like tuning functions, in contrast, have non-constant Fisher Information due to their steep non-linearity. They have been shown to exhibit superior performance over cosine-like tuning functions with respect to the mean squared error [4]. Correlation matrices: c. stimulus-independent, no limited range (SI, $\xi = \infty$), d. stimulus-independent, limited range (SI, $\xi = 2$), e. stimulus-dependent, no limited range (SD, $\xi = \infty$), f. stimulus-dependent, limited range (SD, $\xi = 2$).
Therefore, Fisher Information provides a good approximation of the Jensen-Shannon Information for sufficiently small $\delta\theta$.
2.2 From Jensen-Shannon Information to Minimal Discrimination Error
The minimal discrimination error $E(\delta\theta)$ of an ideal observer is bounded from above and below in terms of $I_{JS}(\delta\theta)$. An upper bound derived by [13] is given by
\[ E(\delta\theta) \leq \frac{1}{2} - \frac{1}{2} I_{JS}(\delta\theta). \tag{4} \]
Next, we derive a new lower bound on $E$, which is tighter than a bound derived by Lin [13]. To this end, we observe that from Fano's inequality [8] it follows that
\[ H[E] \;\geq\; H[s|r] - E\log(|s| - 1) \;=\; H[s|r] \;=\; H[s] - \mathrm{MI}[r, s] \;=\; 1 - I_{JS}(\delta\theta), \tag{5} \]
where $H[E]$ is the entropy of a Bernoulli distribution with $p = E$. The equality from first to second line follows as the number of stimuli or classes $|s| = 2$. Since the entropy is monotonic in $E$ on the interval $[0, 0.5]$, we have the lower bound $E \geq E^*$, where $E^*$ is chosen such that equality holds. For an illustration, see figure 2a. The shape of both bounds, as well as Lin's lower bound, are illustrated in figure 2b.
In figure 2c we show the minimal discrimination error for a population code (red) together with the upper and lower bound (black) obtained by inserting $I_{JS}(\delta\theta)$ into equations 4 and 5. Both bounds follow nicely the neurometric function $E(\delta\theta)$. For comparison, we also show the upper and lower bound obtained by plugging Fisher Information into equation 3 and computing the bounds 4 and 5 based on this approximation of $I_{JS}(\delta\theta)$ (grey). Clearly, the approximation is valid for small $\delta\theta$ and becomes successively worse for large ones.
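Both bounds are cheap to evaluate numerically; the lower bound only requires inverting the binary entropy on $[0, \tfrac{1}{2}]$, e.g. by bisection. A minimal sketch (ours, with $I_{JS}$ given in bits):

```python
import numpy as np

def binary_entropy(e):
    # H[E] in bits, with the convention 0 log 0 = 0.
    e = np.clip(e, 1e-12, 1 - 1e-12)
    return -(e * np.log2(e) + (1 - e) * np.log2(1 - e))

def bounds_from_ijs(i_js):
    """Upper bound (4) and Fano-type lower bound (5) on the minimal
    discrimination error, given I_JS in bits."""
    upper = 0.5 - 0.5 * i_js
    lo, hi = 0.0, 0.5          # bisect H[E*] = 1 - I_JS on the rising branch
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if binary_entropy(mid) < 1.0 - i_js:
            lo = mid
        else:
            hi = mid
    return lo, upper
```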
Figure 4: Comparison of box-like (red) vs. cosine (black) tuning functions in short-term population codes of a. $N = 10$ b. $N = 50$ c. $N = 250$ independent neurons. Although box-like tuning functions are much broader than cosine tuning functions, $E_{\mathrm{box}}$ usually lies below $E_{\mathrm{cos}}$. For the cosine case, FI (dashed, approximation as in figure 2c) and $E_{d'}$ (grey) provide accurate accounts of coding accuracy. In contrast, FI grossly overestimates the discrimination error for box-like tuning functions in small and medium sized populations. In this case, $E_{d'}$ is only a good approximation of $E$ in the range where $\delta\theta$ is small (dark red). Beyond this point, it underestimates $E$ (a,b). For $N = 250$, bounds are not shown for clarity but they capture the true behaviour of $E$ better than in figure 4a and b.
2.3 Previous work
Only a small number of studies on neural population coding have used other measures than Fisher Information [18, 3, 6, 4]. Two approaches are most closely related to ours: Snippe and Koenderink [18] and Averbeck and Lee [3] used a measure analogous to the sensitivity index $d'$,
\[ (d')^2 = \Delta f^{\top}\Sigma^{-1}\Delta f, \qquad \Delta f := f(\theta + \delta\theta) - f(\theta), \tag{6} \]
as a measure of coding accuracy. While Snippe and Koenderink have considered only the limit $\delta\theta \to 0$, Averbeck and Lee evaluated equation 6 for finite $\delta\theta$ using $\Sigma = \tfrac{1}{2}(\Sigma_\theta + \Sigma_{\theta+\delta\theta})$ and converted $d'$ to a discrimination error $E_{d'} = 1 - \operatorname{erf}(d'/2)$. This approximation is exact only if the class conditional distribution $p(r|\theta)$ is Gaussian with fixed covariance $\Sigma_\theta = \Sigma$ for all $\delta\theta$. In that particular case, the entire neurometric function is fully determined by the Fisher Information [9]:
\[ d' = (\delta\theta)\sqrt{J_\theta} = (\delta\theta)\sqrt{J_{\mathrm{mean}}}. \]
$J_{\mathrm{mean}}$ is the linear part of the Fisher Information (cf. equation 7). In the general case, it is not obvious what aspects of the quality of a population code are captured by the above measure. Therefore, both Fisher Information and the class-conditional second-order approximation used by Averbeck and Lee have shortcomings: The latter does not account for information originating from changes in the covariance matrix as is quantified by $J_{\mathrm{cov}}$ (cf. equation 7). Fisher Information, on the other hand, can be quite uninformative about the coding accuracy of the population, especially when the tuning functions are highly nonlinear (see figure 3) or noise is large, as in these cases it is not certain whether the Cramér-Rao bound can actually be attained [4]. The examples studied in the next section demonstrate how these shortcomings can be overcome using the minimal discrimination error (equation 1).
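For reference, the class-conditional second-order approximation of equation 6 fits in a few lines. This is our sketch, not code from [3]; note the convention caveat in the comment, since the error formula stated above uses erf while the usual 2AFC ideal-observer error is written with the standard-normal CDF.

```python
import numpy as np
from scipy.stats import norm

def dprime_error(f, Sigma, theta, dtheta):
    """Sensitivity index of equation 6 plus an approximate 2AFC error.
    Convention caveat: the text writes E_{d'} = 1 - erf(d'/2); here we use
    the CDF form E = 1 - Phi(d'/2), which equals chance (1/2) at d' = 0."""
    df = f(theta + dtheta) - f(theta)
    S = 0.5 * (Sigma(theta) + Sigma(theta + dtheta))
    dprime = np.sqrt(df @ np.linalg.solve(S, df))
    return dprime, 1.0 - norm.cdf(dprime / 2.0)
```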
3 Results
After describing the population model used in this study, we first illustrate in a simple example how our proposed framework is more informative than previous approaches. Second, we investigate how different noise correlation structures impact population coding on different timescales.
3.1 The population model
In this section, we describe in detail the population model used in the remainder of the study. To facilitate comparability, we closely follow the model used in a recent study by Josic et al. [12] where applicable. We consider a population of $N$ neurons tuned to orientation, where the firing rate of neuron $i$ follows an average tuning profile $f_i(\theta)$ with (a) a cosine-like shape
\[ f_i(\theta) = \alpha_1 + \alpha_2\, a^k(\theta - \theta_i), \]
with $k = 1$ in section 3.2 and $k = 6$ in section 3.3 and $a(\theta) = \tfrac{1}{2}(1 + \cos(\theta))$, or (b) a box-like shape
\[ f_i(\theta) = \alpha_2\,|\cos(\theta - \theta_i)|^{1/j}\,\tfrac{1}{2}\big(\operatorname{sgn}\cos(\theta - \theta_i) + 1\big) + \alpha_1. \]
Here, $\theta_i$ is the preferred orientation of neuron $i$ and we use $j = 12$. We consider two scenarios:
1. Long-term coding: $r(\theta) \sim \mathcal{N}(f(\theta), \Sigma(\theta))$, where the trial-to-trial fluctuations are assumed to be normally distributed with mean $f(\theta)$ and covariance matrix $\Sigma(\theta)$.
2. Short-term coding: $r(\theta) \sim \mathcal{I}(f(\theta), \Sigma(\theta))$, where $r_i \in \{0, 1\}$ and $\mathcal{I}(\mu, \Sigma)$ is the maximum entropy distribution consistent with the constraints provided by $\mu$ and $\Sigma$, the Ising model [16]. That is, for short-term population coding, we assume the population activity to be binary with each neuron either emitting one spike or none. The parameters of the Ising model were computed using gradient descent on the log likelihood.
Following Josic et al. [12], we model the stimulus-dependent covariance matrix as
\[ \Sigma_{ij}(\theta) = \delta_{ij}\, v_i(\theta) + (1 - \delta_{ij})\, \rho_{ij}(\theta)\sqrt{v_i(\theta)v_j(\theta)}, \]
where $v_i(\theta)$ is the variance of cell $i$ and $\rho_{ij}(\theta)$ the correlation coefficient. For long-term coding, we set $v_i(\theta) = f_i(\theta)$ and for short-term coding, we set $v_i(\theta) = f_i(\theta)(1 - f_i(\theta))$. We allow for both stimulus and spatial influences on $\rho$ by setting $\rho_{ij}(\theta) = s_{ij}(\theta)\, c(\theta_i - \theta_j)$, where $\theta_i$ is the preferred orientation of neuron $i$. The function $s$ models the influence of the stimulus, while the function $c$ models the spatial component of the correlation structure. We use $s_{ij}(\theta) = s_i(\theta)s_j(\theta)$, where $s_i(\theta) = \beta_1 + \beta_2\, a^2(\theta - \theta_i)$. We set $c(\theta_i - \theta_j) = C\exp(-|\theta_i - \theta_j|/\xi)$, where $\xi$ controls the length of the spatial decay. To obtain a desired mean level of correlation $\bar\rho$, we use the method described in [12].
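For the long-term (Gaussian) scenario, the model above can be assembled directly. The sketch below is our illustration; the numerical values of $\alpha_1$, $\alpha_2$, $\beta_1$, $\beta_2$, $C$ and $\xi$ are placeholders, since the text only fixes the 5–50 Hz firing-rate range.

```python
import numpy as np

def make_population(N=50, a1=5.0, a2=45.0, k=1, b1=0.3, b2=0.7,
                    C=1.0, xi=np.inf):
    """Cosine tuning curves f_i and stimulus-dependent covariance Sigma(theta)
    as in section 3.1 (long-term coding, v_i = f_i). Parameter values here
    are illustrative placeholders."""
    prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)

    def a(x):
        return 0.5 * (1.0 + np.cos(x))

    def f(theta):
        return a1 + a2 * a(theta - prefs) ** k

    def Sigma(theta):
        v = f(theta)                              # variance v_i = f_i
        s = b1 + b2 * a(theta - prefs) ** 2       # stimulus factor s_i
        rho = np.outer(s, s)                      # s_ij = s_i * s_j
        d = np.abs(prefs[:, None] - prefs[None, :])
        d = np.minimum(d, 2 * np.pi - d)          # circular distance
        rho *= C * np.exp(-d / xi)                # limited-range factor c
        np.fill_diagonal(rho, 1.0)
        return rho * np.sqrt(np.outer(v, v))

    return f, Sigma
```

The returned callables f and Sigma plug directly into the Monte-Carlo sketch of section 1.2.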
3.2 Minimum discrimination error is more informative than Fisher Information
As has been pointed out in [4], the shape of unimodal tuning functions can strongly influence the
coding accuracy of population codes of angular variables. In particular, box-like tuning functions
can be superior to cosine tuning functions. However, numerical evaluation of the minimum mean
squared error for angular variables is much more difficult than the evaluation of the minimal discrimination error proposed here, and the above claim has only been verified up to N = 20 neurons.
Here we compute the full neurometric functions for N = 10, 50, 250 binary neurons (figure 4). In
this way, we show that the advantage of box-like tuning functions also holds for large numbers of
neurons (compare red and black curves in figure 4 a-c). In addition, we note that Fisher Information
does not provide an accurate account of the performance of box-like tuning functions: it fails as soon
as the nonlinearity in the tuning functions becomes effective and overestimates the true minimal
discrimination error E. Similarly, the approximate neurometric functions Ed0 (??) obtained from
equation 6 do not capture the shape of neurometric functions E(??) but underestimate the minimal
discrimination error. In contrast, the deviation between both curves stays rather small for cosine
tuning functions.
3.3 Stimulus-dependent correlations have opposite effects for long- and short-term population coding
The shape of the noise covariance matrix $\Sigma_\theta$ can strongly influence the coding fidelity of a neural population. In order to evaluate these effects it is important to take differences in the noise covariance for different stimuli into account. In this section, we will use our new framework to study different noise correlation structures for short- and long-term population coding.
Previous studies so far have investigated the effect of noise correlations in the long-term case: most studies assumed $p(r|\theta)$ to follow a multivariate Gaussian distribution, so that firing rates $r|\theta \sim \mathcal{N}(f(\theta), \Sigma(\theta))$ (for a detailed description of the population model see section 3.1).
Figure 5: Neurometric functions $E(\delta\theta)$ (a-c) and $I_{JS}(\delta\theta)$ (d-f) for four different noise correlation structures. a. and d. Large population ($N = 100$) and long-term coding. b. and e. Medium sized population ($N = 15$) and long-term coding. The inset is a magnification for clarity. c. and f. Medium sized population ($N = 15$) and short-term coding. The impact of stimulus-dependent noise correlations in the absence of limited range correlations changes from b/e to c/f (red line). While they are beneficial in long-term coding, they are beneficial in short-term coding only for close angles. The exact point of this transition is not the same for $E$ and $I_{JS}$, since they are only related via the bounds described in section 2.2. Note that the scale of the x-axis varies.
In this case, the FI of the population takes a particularly simple form. It can be decomposed into
\[ J_\theta = J_{\mathrm{mean}} + J_{\mathrm{cov}}, \qquad J_{\mathrm{mean}} = f'^{\top}\Sigma^{-1}f', \qquad J_{\mathrm{cov}} = \tfrac{1}{2}\operatorname{Tr}\big[\Sigma'\Sigma^{-1}\Sigma'\Sigma^{-1}\big], \tag{7} \]
where we omit the dependence on $\theta$ for clarity and $f'$, $\Sigma'$ are the derivatives of $f$ and $\Sigma$ with respect to $\theta$. $J_{\mathrm{mean}}$ and $J_{\mathrm{cov}}$ are the Fisher Information when either only the mean or only the covariance is assumed to depend on $\theta$. For this case, various studies have investigated noise structures where correlations were either uniform across the population (figure 3c) or their magnitude decayed with the difference in preferred orientations (figure 3d; "limited range structure" or "spatial decay", see e.g. [1]). Only recently have stimulus-dependent correlations been analyzed in terms of Fisher Information [12]. Josic et al. find that in the absence of limited range correlations, stimulus-dependent noise correlations (figure 3e) are beneficial for a population code, while in their presence (figure 3f), they are detrimental.
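Equation 7 translates directly into code; a short sketch (ours), using central finite differences for $f'$ and $\Sigma'$:

```python
import numpy as np

def fisher_decomposition(f, Sigma, theta, h=1e-5):
    """J_mean and J_cov of equation 7, with finite-difference derivatives."""
    fp = (f(theta + h) - f(theta - h)) / (2 * h)
    Sp = (Sigma(theta + h) - Sigma(theta - h)) / (2 * h)
    Sinv = np.linalg.inv(Sigma(theta))
    J_mean = fp @ Sinv @ fp
    J_cov = 0.5 * np.trace(Sp @ Sinv @ Sp @ Sinv)
    return J_mean, J_cov
```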
We first compute the neurometric functions $E(\delta\theta)$ and $I_{JS}(\delta\theta)$ for a population of 100 neurons in the case of long-term coding with a Gaussian noise model for the four possible noise correlation structures (figure 5a). We corroborate the results of Josic et al. in that we find that the lowest $E$, or the highest $I_{JS}$, is achieved for a population with stimulus-dependent noise correlations and no limited range structure, while a population with stimulus-dependent noise correlations in the presence of spatial decay performs worst. Spatially uniform correlations (figure 3c) provide almost as good a code as the best coding scheme.
Next, we directly compare long- and short-term population coding in a population of 15 neurons¹. For short-term coding, we assume that the population activity is of binary nature, i.e. each neuron spikes at most once. Again, we compute neurometric functions $E(\delta\theta)$ and $I_{JS}(\delta\theta)$ for all four possible correlation structures. The results for long-term coding do not differ between large and small populations (figure 5b), although relative differences between different coding schemes are less prominent. In contrast, we find that the beneficial impact of stimulus-dependent correlations in the absence of limited range structure reverses in short-term codes for large $\delta\theta$ (figure 5c).
4 Discussion
In this paper, we introduce the computation of neurometric functions as a new framework for studying the representational accuracy of neural population codes. Importantly, it allows for a rigorous
treatment of nonlinear population codes (e.g. box-like tuning functions) and noise correlations for
non-Gaussian noise models. This is particularly important for binary population codes on timescales
where neurons fire at most one spike. Such codes are of special interest since psychophysical experiments have demonstrated that efficient computations can be performed in cortex on short time
scales [19]. Previous studies have mostly focussed on long-term population codes, since in this case
it is possible to study many questions analytically using Fisher Information. Although the structure of neural population activity on short timescales has recently attracted much interest [16, 17, 15],
population codes for binary population activity and, in particular, the impact of different noise correlation structures on such codes are not well understood. In contrast to previous work [14], neurometric function analysis allows for a comprehensive treatment of both short- and long-term population
codes in a single framework. In section 3.3, we have started to study population codes on short
timescales and found important differences in the effect of noise correlations between short- and
long-term population codes. In the future, we will extend these results to much larger populations
adapting new techniques for approximate fitting of Ising models [15].
The example discussed in section 3.2 demonstrates that neurometric functions can provide additional information compared to Fisher Information: While Fisher Information is a single number for each potential population code, neurometric functions in terms of $E$ or $I_{JS}$ assess the coding quality for each pair of stimuli. This also enables us to detect effects like the dependence of the relative performance of different population codes on $\delta\theta$ as shown in figure 5 c and f. We can furthermore easily extend the framework to take unequal prior probabilities into account. In equations 1 and 2 we have assumed equal prior probabilities $p(\theta_1) = p(\theta_2) = \tfrac{1}{2}$. Both $E$ and $I_{JS}$, however, are also well defined if this is not the case.
The framework of stimulus discrimination in a 2AFC task has long been used in psychophysical and
neurophysiological studies for measuring the accuracy of orientation coding in the visual system
(e.g. [5, 21]). It is therefore appealing to use the same framework also in theoretical investigations
on neural population coding since this facilitates the comparison with experimental data. Furthermore, it allows studying population codes for categorial variables since, in contrast to Fisher Information, it does not require the variable of interest to be continuous. This is of advantage, as many
neurophysiological studies investigate the encoding of categories, such as objects [11] or numbers
[20].
Acknowledgments
We thank A. Tolias and J. Cotton for discussions. This work has been supported by the Bernstein
award to MB (BMBF; FKZ: 01GQ0601) and a scholarship of the German National Academic Foundation to PB.
¹ We are limited in the number of neurons as fitting the required Ising model is computationally very expensive. For the present purpose, we chose N = 15, which is sufficient to demonstrate our point.
References
[1] L. F. Abbott and P. Dayan. The effect of correlated variability on the accuracy of a population code. Neural Comp., 11(1):91–101, 1999.
[2] B. B. Averbeck, P. E. Latham, and A. Pouget. Neural correlations, population coding and computation. Nat Rev Neurosci, 7(5):358–366, 2006.
[3] B. B. Averbeck and D. Lee. Effects of noise correlations on information encoding and decoding. J Neurophysiol, 95(6):3633–3644, 2006.
[4] M. Bethge, D. Rotermund, and K. Pawelzik. Optimal short-term population coding: When Fisher information fails. Neural Comp., 14(10):2317–2351, 2002.
[5] A. Bradley, B. C. Skottun, I. Ohzawa, G. Sclar, and R. D. Freeman. Visual orientation and spatial frequency discrimination: a comparison of single neurons and behavior. J Neurophysiol, 57(3):755–772, 1987.
[6] N. Brunel and J. P. Nadal. Mutual information, Fisher information, and population coding. Neural Computation, 10(7):1731–1757, 1998.
[7] M. Casas, P. W. Lamberti, A. Plastino, and A. R. Plastino. Jensen-Shannon divergence, Fisher information, and Wootters' hypothesis. Arxiv preprint quant-ph/0407147, 2004.
[8] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley-Interscience, 2006.
[9] P. Dayan and L. F. Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, 2001.
[10] J. R. Hershey and P. A. Olsen. Approximating the Kullback-Leibler divergence between Gaussian mixture models. In Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on, volume 4, pages IV-317–IV-320, 2007.
[11] C. P. Hung, G. Kreiman, T. Poggio, and J. J. DiCarlo. Fast readout of object identity from macaque inferior temporal cortex. Science, 310(5749):863–866, 2005.
[12] K. Josic, E. Shea-Brown, B. Doiron, and J. de la Rocha. Stimulus-dependent correlations and population codes. Neural Computation, 21(10):2774–2804, 2009.
[13] J. Lin. Divergence measures based on the Shannon entropy. Information Theory, IEEE Transactions on, 37(1):145–151, 1991.
[14] S. Panzeri, A. Treves, S. Schultz, and E. T. Rolls. On decoding the responses of a population of neurons from short time windows. Neural Computation, 11(7):1553–1577, 1999.
[15] Y. Roudi, J. Tyrcha, and J. Hertz. The Ising model for neural data: Model quality and approximate methods for extracting functional connectivity. Phys. Rev. E, 79:051915, February 2009.
[16] E. Schneidman, M. J. Berry, R. Segev, and W. Bialek. Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087):1007–1012, 2006.
[17] J. Shlens, G. D. Field, J. L. Gauthier, M. Greschner, A. Sher, A. M. Litke, and E. J. Chichilnisky. The structure of large-scale synchronized firing in primate retina. Journal of Neuroscience, 29(15):5022, 2009.
[18] H. Snippe and J. Koenderink. Information in channel-coded systems: correlated receivers. Biological Cybernetics, 67(2):183–190, June 1992.
[19] S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381(6582):520–522, 1996.
[20] O. Tudusciuc and A. Nieder. Neuronal population coding of continuous and discrete quantity in the primate posterior parietal cortex. Proceedings of the National Academy of Sciences of the United States of America, 104(36):14513–8, 2007.
[21] P. Vazquez, M. Cano, and C. Acuna. Discrimination of line orientation in humans and monkeys. J Neurophysiol, 83(5):2639–2648, 2000.
White Functionals for Anomaly Detection in Dynamical Systems
Marco Cuturi
ORFE - Princeton University
[email protected]
Jean-Philippe Vert
Mines ParisTech, Institut Curie, INSERM U900
[email protected]
Alexandre d'Aspremont
ORFE - Princeton University
[email protected]
Abstract
We propose new methodologies to detect anomalies in discrete-time processes
taking values in a probability space. These methods are based on the inference
of functionals whose evaluations on successive states visited by the process are
stationary and have low autocorrelations. Deviations from this behavior are used
to flag anomalies. The candidate functionals are estimated in a subspace of a
reproducing kernel Hilbert space associated with the original probability space
considered. We provide experimental results on simulated datasets which show
that these techniques compare favorably with other algorithms.
1 Introduction
Detecting abnormal points in small and simple datasets can often be performed by visual inspection, using notably dimensionality reduction techniques. However, non-parametric techniques are
often the only credible alternative to address these problems on the many high-dimensional, richly
structured data sets available today.
When carried out on independent and identically distributed (i.i.d) observations, anomaly detection
is usually referred to as outlier detection and is in many ways equivalent to density estimation.
Several density estimators have been used in this context and we refer the reader to the exhaustive
review in [1]. Among such techniques, methods which estimate non-parametric alarm functions in
reproducing kernel Hilbert spaces (rkHs) are particularly relevant to our work. They form alarm functions of the type $f(\cdot) = \sum_{i \in I} c_i k(x_i, \cdot)$, where $k$ is a positive definite kernel and $(c_i)_{i \in I}$ is a family of coefficients paired with a family $(x_i)_{i \in I}$ of previously observed data points. A new
observation x is flagged as anomalous whenever f (x) goes outside predetermined boundaries which
are also provided by the algorithm. Two well known kernel methods have been used so far for
this purpose, namely kernel principal component analysis (kPCA) [2] and one-class support vector
machines (ocSVM) [3]. The ocSVM is a popular density estimation tool and it is thus not surprising
that it has already found successful applications to detect anomalies in i.i.d data [4]. kPCA can also
be used to detect outliers as described in [5], where an outlier is defined as any point far enough
from the boundaries of an ellipsoid in the rkHs containing most of the observed points.
These outlier detection methods can also be applied to dynamical systems. We now monitor discrete-time stochastic processes $Z = (Z_t)_{t \in \mathbb{N}}$ taking values in a space $\mathcal{Z}$ and, based on previous observations $z_{t-1}, \ldots, z_0$, we seek to detect whether a new observation $z_t$ abnormally deviates from the usual dynamics of the system. As explained in [1], this problem can be reduced to density estimation when either $Z_t$ or a suitable representation of $Z_t$ that includes a finite number of lags is Markovian, i.e. when the conditional probability of $Z_t$ given its past depends only on the values taken by $Z_{t-1}$.
In practice, anomaly detection then involves a two step procedure. It first produces an estimator $\hat Z_t$ of the conditional expectation of $Z_t$ given $Z_{t-1}$ to extract an empirical estimator for the residues $\hat\epsilon_t = Z_t - \hat Z_t$. Under an i.i.d assumption, abnormal residues can then be used to flag anomalies. This approach and advanced extensions can be used both for multivariate data [6, 7] and linear processes in functional spaces [8] using spaces of Hölderian functions.
The main contribution of our paper is to propose an estimation approach of alarm functionals that can be used on arbitrary Hilbert spaces and which bypasses the estimation of residues $\hat\epsilon_t \in \mathcal{Z}$ by focusing directly on suitable properties for alarm functionals. Our approach is based on the following intuition. Detecting anomalies in a sequence generated by white noise is a task which is arguably easier than detecting anomalies in arbitrary time-series. In this sense, we look for functionals $f$ such that $f(Z_t)$ exhibits a stationary behavior with low autocorrelations, ideally white noise, which can be used in turn to flag an anomaly whenever $f(Z_t)$ departs from normality. We call functionals $f$ that strike a good balance between exhibiting a low autocovariance of order 1 and a high variance on successive values $Z_t$ a white functional of the process $Z$. Our definition can be naturally generalized to higher autocovariance orders as the reader will naturally see in the remaining of the paper.
Our perspective is directly related to the concept of cointegration (see [9] for a comprehensive review) for multivariate time series, extensively used by econometricians to study equilibria between various economic and financial indicators. For a multivariate stochastic process $X = (X_t)_{t \in \mathbb{Z}}$ taking values in $\mathbb{R}^d$, $X$ is said to be cointegrated if there exists a vector $a$ of $\mathbb{R}^d$ such that $(a^\top X_t)_{t \in \mathbb{Z}}$ is stationary. Economists typically interpret the weights of $a$ as describing a stable linear relationship between various (non-stationary) macroeconomic or financial indicators. In this work we discard the immediate interpretability of the weights associated with linear functionals $a^\top X_t$ to focus instead on functionals $f$ in a rkHs $H$ such that $f(Z_t)$ is stationary, and use this property to detect anomalies.
The rest of this paper is organized as follows. In Section 2, we study different criterions to measure
the autocorrelation of a process, directly inspired by min/max autocorrelation factors [10] and the
seminal work of Box-Tiao [11] on cointegration. We study the asymptotic properties of finite sample
estimators of these criterions in Section 3 and discuss the practical estimation of white functionals
in Section 4. We discuss relationships with existing methods in Section 5 and provide experimental
results to illustrate the effectiveness of these approaches in Section 6.
2 Criterions to define white functionals
Consider a process $Z = (Z_t)_{t \in \mathbb{Z}}$ taking values in a probability space $\mathcal{Z}$. $Z$ will be mainly considered in this work under the light of its mapping onto a rkHs $H$ associated with a bounded and continuous kernel $k$ on $\mathcal{Z} \times \mathcal{Z}$. $Z$ is assumed to be second-order stationary, that is the densities $p(Z_t = z)$ and joint densities $p(Z_t = z, Z_{t+k} = z')$ for $k \in \mathbb{N}$ are independent of $t$. Following [12, 13] we write
\[ \phi_t = \phi(Z_t) - E_p[\phi(Z_t)] \]
for the centered projection of $Z$ in $H$, where $\phi : z \in \mathcal{Z} \mapsto k(z, \cdot) \in H$ is the feature map associated with $k$. For two elements $u$ and $v$ of $H$ we write $u \otimes v$ for their tensor product, namely the linear map of $H$ onto itself such that $u \otimes v : x \mapsto \langle v, x\rangle_H\, u$. Using the notations of [14] we write
\[ C = E_p[\phi_t \otimes \phi_t], \qquad D = E_p[\phi_t \otimes \phi_{t+1}], \]
respectively for the covariance and autocovariance of order 1 of $\phi_t$. Both $C$ and $D$ are linear operators of $H$ by weak stationarity [14, Definition 2.4] of $(\phi_t)_{t \in \mathbb{Z}}$, which can be deduced from the second-order stationarity of $Z$. The following definitions introduce two criterions which quantify how related two successive evaluations of $f(Z_t)$ are.
Definition 1 (Autocorrelation Factor [10]). Given an element $f$ of $H$ such that $\langle f, Cf\rangle_H > 0$, $\rho(f)$ is the absolute autocorrelation of $f(Z)$ of order 1,
\[ \rho(f) = |\operatorname{corr}(f(Z_t), f(Z_{t+1}))| = \frac{|\langle f, Df\rangle_H|}{\langle f, Cf\rangle_H}. \tag{1} \]
The condition $\langle f, Cf\rangle_H > 0$ requires that $\operatorname{var}\langle f, \phi_t\rangle$ is not zero, which excludes constant or vanishing functions on the support of the density of $\phi_t$. Note also that defining $\rho$ requires no other assumption than second-order stationarity of $Z$.
If we assume further that $\phi$ is an autoregressive Hilbertian process of order 1 [14], ARH(1) for short, there exists a compact operator $P : H \to H$ and an $H$ strong white noise¹ $(\epsilon_t)_{t \in \mathbb{Z}}$ such that
\[ \phi_{t+1} = P\,\phi_t + \epsilon_t. \]
¹ Namely a sequence $(\epsilon_t)_{t \in \mathbb{Z}}$ of $H$-valued random variables such that (i) $0 < E\|\epsilon_t\|^2 = \sigma^2$, $E\epsilon_t = 0$ and the covariance $C_{\epsilon_t}$ is constant, equal to $C_\epsilon$; (ii) $(\epsilon_t)$ is a sequence of i.i.d $H$-random variables.
In their seminal work, Box and Tiao [11] quantify the predictability of the linear functionals of a vector autoregressive process in terms of variance ratios. The following definition is a direct adaptation of that principle to autoregressive processes in Hilbert spaces. From [14, Theorem 3.2] we have that $C = P C P^* + C_\epsilon$ where, for any linear operator $A$ of $H$, $A^*$ is its adjoint.
Definition 2 (Predictability in the Box-Tiao sense [11]). Given an element $f$ of $H$ such that $\langle f, Cf\rangle_H > 0$, the predictability $\lambda(f)$ is the quotient
\[ \lambda(f) = \frac{\operatorname{var}\langle f, P\phi_t\rangle_H}{\operatorname{var}\langle f, \phi_t\rangle_H} = \frac{\langle f, P C P^* f\rangle_H}{\langle f, Cf\rangle_H} = \frac{\langle f, D C^{-1} D^* f\rangle_H}{\langle f, Cf\rangle_H}. \tag{2} \]
The right hand-side of Equation (2) follows from the fact that $PC = D$ and $P^* = C^{-1}D^*$ [14], the latter equality being always valid irrespective of the existence of $C^{-1}$ on the whole of $H$ as noted in [15]. Combining these two equalities gives $P C P^* = D C^{-1} D^*$.
Both $\rho$ and $\lambda$ are convenient ways to quantify, for a given function $f$ of $H$, the independence of $f(Z_t)$ from its immediate past. We provide in this paragraph a common representation for $\rho$ and $\lambda$. For any linear operator $A$ of $H$ and any non-zero element $x$ of $H$ write $R(A, x)$ for the Rayleigh quotient
\[ R(A, x) = \frac{\langle x, Ax\rangle_H}{\langle x, x\rangle_H}. \]
We use the notations in [12] and introduce the normalized cross-covariance (or rather auto-covariance in the context of this paper) operator $V = C^{-1/2} D C^{-1/2}$. Note that for any skew-symmetric operator $A$, that is $A = -A^*$, we have that $\langle x, Ax\rangle_H = \langle A^* x, x\rangle_H = -\langle Ax, x\rangle_H = 0$ and thus $R(A, x) = R\big(\tfrac{A+A^*}{2}, x\big)$. Both $\rho$ and $\lambda$ applied on a function $f \in H$ can thus be written as
\[ \rho(f) = \left| R\Big(\tfrac{V+V^*}{2},\; C^{1/2} f\Big) \right|, \qquad \lambda(f) = R\big(VV^*,\; C^{1/2} f\big). \]
As detailed in Section 4, our goal is to estimate functions in $H$ from data such that they have either low $\rho$ or $\lambda$ values. Minimizing $\lambda$ is equivalent to solving a generalized eigenvalue problem through the Courant-Fischer-Weyl theorem. Minimizing $\rho$ is a more challenging problem since the operator $V + V^*$ is not necessarily positive definite. The S-lemma from control theory [16, Appendix B.2] can be used to cast the problem of estimating functions with low $\rho$ as a semi-definite program. In practice the eigen-decomposition of $V + V^*$ provides good approximate answers.
The formulation of $\rho$ and $\lambda$ as Rayleigh quotients is also useful to obtain the asymptotic convergence of their empirical counterparts (Section 3) and to draw comparisons with kernel-CCA (Section 5).
3 Asymptotics and matrix expressions for empirical estimators of ρ and λ
3.1 Asymptotic convergence of the normalized cross-covariance operator V
The covariance operator $C$ and cross-covariance operator $D$ can be estimated through a finite sample of points $z_0, \ldots, z_n$ translated into a sample of centered points $\phi_1, \ldots, \phi_n$ in $H$, where $\phi_i = \phi(z_i) - \frac{1}{n+1}\sum_{j=0}^{n}\phi(z_j)$. We write
\[ C_n = \frac{1}{n-1}\sum_{i=1}^{n}\phi_i \otimes \phi_i, \qquad D_n = \frac{1}{n-1}\sum_{i=1}^{n-1}\phi_i \otimes \phi_{i+1}, \]
for the estimates of $C$ and $D$ respectively, which converge in Hilbert-Schmidt norm [14]. Estimators for $\rho$ or $\lambda$ require approximating $C^{-1/2}$, which is a typical challenge encountered when studying ARH(1) processes and more generally stationary linear processes in Hilbert spaces [14, Section 8]. This issue is addressed in this section through a Tikhonov regularization, that is, considering a sequence of positive numbers $\varepsilon_n$ we write
\[ V_n = (C_n + \varepsilon_n I)^{-1/2}\, D_n\, (C_n + \varepsilon_n I)^{-1/2} \]
for the empirical estimate of $V$ regularized by $\varepsilon_n$. We have already assumed that $k$ is bounded and continuous. The convergence of $V_n$ to $V$ in norm is ensured under the additional conditions below.
Theorem 3. Assume that $V$ is a compact operator, $\lim_{n\to\infty}\varepsilon_n = 0$ and $\lim_{n\to\infty} \frac{(\log n/n)^{1/3}}{\varepsilon_n} = 0$. Then, writing $\|\cdot\|_S$ for the Hilbert-Schmidt operator norm, $\lim_{n\to\infty}\|V_n - V\|_S = 0$.
Proof. The structure of the proof is identical to that of [12, Theorem 1] except that the i.i.d assumption does not hold here. In [12], the norm $\|V_n - V\|_S$ is upper-bounded by the two terms $\|V_n - (C + \varepsilon_n I)^{-1/2} D (C + \varepsilon_n I)^{-1/2}\|_S + \|(C + \varepsilon_n I)^{-1/2} D (C + \varepsilon_n I)^{-1/2} - V\|_S$. The second term converges under the assumption that $\varepsilon_n \to 0$ [12, Lemma 7], while the first term decreases at a rate that is proportional to the rates of $\|C_n - C\|_S$ and $\|D_n - D\|_S$. With the assumptions above, [14, Corollary 4.1, Theorem 4.8] gives us that $\|C_n - C\|_S = O((\log n/n)^{1/2})$ and $\|D_n - D\|_S = O((\log n/n)^{1/2})$. We use this result to substitute the latter rate for the faster rate obtained for i.i.d observations in [12, Lemma 5] and conclude the proof.
3.2 Empirical estimators and matrix expressions
Given $f \in H$, consider the following estimators of $\rho(f)$ and $\lambda(f)$ defined in Equations (1) and (2),
\[ \rho_n(f) = \left| R\Big(\tfrac{V_n+V_n^*}{2},\; (C_n+\varepsilon_n I)^{1/2} f\Big)\right| = \frac{\big|\langle f, \tfrac{1}{2}(D_n+D_n^*)f\rangle_H\big|}{\langle f, (C_n+\varepsilon_n I)f\rangle_H}, \]
\[ \lambda_n(f) = R\big(V_nV_n^*,\; (C_n+\varepsilon_n I)^{1/2} f\big) = \frac{\langle f, D_n(C_n+\varepsilon_n I)^{-1}D_n^* f\rangle_H}{\langle f, (C_n+\varepsilon_n I)f\rangle_H}, \]
which converge to the adequate values through the convergence of $(C_n+\varepsilon_n I)^{1/2}$, $V_n+V_n^*$ and $V_nV_n^*$. The $n$ observations $\phi_1, \ldots, \phi_n$ which define the empirical estimators above also span a subspace $H_n$ of $H$ which can be used to estimate white functionals. Given $f \in H_n$ we use any arbitrary decomposition $f = \sum_{i=1}^n a_i\phi_i$. We write $\tilde K$ for the original $(n+1)\times(n+1)$ Gram matrix $[k(z_i, z_j)]_{i,j}$ and $\bar K$ for its centered counterpart $\bar K = (I_{n+1} - \tfrac{1}{n+1}\mathbf{1}_{n+1,n+1})\,\tilde K\,(I_{n+1} - \tfrac{1}{n+1}\mathbf{1}_{n+1,n+1}) = [\langle\phi_i, \phi_j\rangle_H]_{i,j}$. Because of the centering, $\operatorname{span}\{\phi_0, \ldots, \phi_n\}$ is actually equal to $\operatorname{span}\{\phi_1, \ldots, \phi_n\}$ and we will only use the $n\times n$ matrix $K$ obtained by removing the first row and column of $\bar K$.
For an $n\times n$ matrix $M$, we write $M_{-i}$ for the $n\times(n-1)$ matrix obtained by removing the $i$-th column of $M$. With these notations, $\rho_n$ and $\lambda_n$ take the following form when evaluated on $f = \sum_{i=1}^n a_i\phi_i \in H_n$:
\[ \rho_n(f) = \frac{1}{2}\,\frac{\big|a^\top\big(K_{-1}K_{-n}^\top + K_{-n}K_{-1}^\top\big)a\big|}{a^\top\big(K^2 + n\varepsilon_n K\big)a}, \qquad \lambda_n(f) = \frac{a^\top K_{-1}K_{-n}^\top \big(K^2 + n\varepsilon_n K\big)^{-1} K_{-n}K_{-1}^\top a}{a^\top\big(K^2 + n\varepsilon_n K\big)a}. \]
If $\varepsilon_n$ follows the assumptions of Theorem 3, both $\rho_n$ and $\lambda_n$ converge to $\rho$ and $\lambda$ pointwise in $H_n$.
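Given the centered Gram matrix $K$, both quotients are inexpensive to evaluate for any candidate coefficient vector $a$. A minimal sketch of the two formulas above (our illustration):

```python
import numpy as np

def rho_lambda(K, a, eps):
    """Empirical quotients rho_n(f) and lambda_n(f) for f = sum_i a_i phi_i,
    given the centered n x n Gram matrix K."""
    n = K.shape[0]
    K1 = K[:, 1:]        # K_{-1}: first column removed
    Kn = K[:, :-1]       # K_{-n}: last column removed
    B = K @ K + n * eps * K
    denom = a @ B @ a
    rho_n = 0.5 * abs(a @ (K1 @ Kn.T + Kn @ K1.T) @ a) / denom
    w = Kn @ K1.T @ a
    lam_n = (w @ np.linalg.solve(B, w)) / denom
    return rho_n, lam_n
```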
4 Selecting white functionals in practice
Both $\rho(f)$ and $\lambda(f)$ are proxies to quantify the independence of successive observations $f(Z_t)$. Namely, functions with low $\rho$ and $\lambda$ are likely to have low autocorrelations and be stationary when evaluated on the process $Z$, and the same can be said of functions with low $\rho_n$ and $\lambda_n$ asymptotically. However, when $H$ is of high or infinite dimension, the direct minimization of $\rho_n$ and $\lambda_n$ is likely to result in degenerate functions² which may have extremely low autocovariance on $Z$ but very low variance as well. We select white functionals with this trade-off in mind, such that $\langle f, Cf\rangle_H$ is not negligible and $\rho$ or $\lambda$ are low at the same time.
² Since the rank of operator $V_n$ is actually $n-1$, we are even guaranteed to find in $H_n$ a minimizer for $\rho_n$ and another for $\lambda_n$ with respectively zero absolute autocorrelation and zero predictability.
4.1 Enforcing a lower bound on ⟨f, Cf⟩_H
We consider the following strategy: following the approach outlined in [14, Section 8] to estimate autocorrelation operators, and more generally in [17] in the context of kernel methods, we restrict $H_n$ to the directions spanned by the $p$ first eigenfunctions of the operator $C_n$. Namely, suppose $C_n$ can be decomposed as $C_n = \sum_{i=1}^n g_i\, e_i \otimes e_i$ where $(e_i)$ is an orthonormal basis of eigenvectors with eigenvalues in decreasing order $g_1 \geq g_2 \geq \cdots \geq g_n \geq 0$. For $1 \leq p \leq n$ we write $H_p$ for the span of $\{e_1, \ldots, e_p\}$, the $p$ first eigenfunctions. Any function $f$ in $H_p$ is such that $\langle f, C_n f\rangle_H \geq g_p$, which allows us to keep the empirical variance of $f(Z_t)$ above a certain threshold. Let $E_p$ be the $n\times p$ coordinate matrix of the eigenvectors³ $e_1, \ldots, e_p$ expressed in the family of $n$ vectors $\phi_1, \ldots, \phi_n$ and $G$ the $p\times p$ diagonal matrix of terms $(g_1, \ldots, g_p)$. We consider now a function $f = \sum_{i=1}^p b_i e_i$ in $H_p$, and note that
\[ \rho_n(f) = \frac{1}{2}\,\frac{\big|b^\top E_p^\top\big(K_{-1}K_{-n}^\top + K_{-n}K_{-1}^\top\big)E_p\, b\big|}{b^\top\big(G + n\varepsilon_n I\big)b}, \tag{3} \]
\[ \lambda_n(f) = \frac{b^\top E_p^\top K_{-1}K_{-n}^\top \big(K^2 + n\varepsilon_n K\big)^{-1} K_{-n}K_{-1}^\top E_p\, b}{b^\top\big(G + n\varepsilon_n I\big)b}. \tag{4} \]
We define two different functions of $H_p$, $f_{\mathrm{mac}}$ and $f_{\mathrm{BT}}$, as the functionals in $H_p$ whose coefficients correspond to the eigenvector with minimal (absolute) eigenvalue of the two Rayleigh quotients of Equations (3) and (4) respectively. We call these functionals the minimum autocorrelation (MAC) and Box-Tiao (BT) functionals of $Z$. Below is a short recapitulation of all the computational steps we have described so far.
- Input: $n+1$ observations $z_0, \ldots, z_n \in \mathcal{Z}$ of a time-series $Z$, a p.d. kernel $k$ on $\mathcal{Z}\times\mathcal{Z}$ and a parameter $p$ (we propose an experimental methodology to set $p$ in Section 6.3).
- Output: a real-valued function $f(\cdot) = \sum_{i=0}^n c_i k(z_i, \cdot)$ that is a white functional of $Z$.
- Algorithm:
  - Compute the $(n+1)\times(n+1)$ kernel matrix $\tilde K$, center it and drop the first row and column to obtain $K$.
  - Store the $p$ first eigenvectors and eigenvalues of $K$ in matrices $U$ and $\operatorname{diag}(v_1, \ldots, v_p)$.
  - Compute $E_p = U\operatorname{diag}(v_1, \ldots, v_p)^{-1/2}$ and $G = \frac{1}{n}\operatorname{diag}(v_1, \ldots, v_p)$.
  - Compute the matrix numerator $N$ and denominator $D$ of either Equation (3) or Equation (4) and recover the eigenvector $b$ with minimal absolute eigenvalue of the generalized eigenvalue problem $(N, D)$.
  - Set $a = E_p b \in \mathbb{R}^n$. Set $c_0 = -\frac{1}{n}\sum_{j=1}^n a_j$ and $c_i = a_i - \frac{1}{n}\sum_{j=1}^n a_j$ for $i = 1, \ldots, n$.
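A compact implementation of this recapitulation might look as follows; this is our sketch under the stated conventions, with scipy's generalized symmetric eigensolver standing in for the eigenvector-extraction step, and it assumes the $p$ leading eigenvalues of $K$ are strictly positive.

```python
import numpy as np
from scipy.linalg import eigh

def white_functional(Z, kernel, p, eps, criterion="BT"):
    """Estimate a Box-Tiao (lambda_n) or MAC (rho_n) white functional from
    observations z_0..z_n; returns coefficients c of f = sum_i c_i k(z_i, .)."""
    m = len(Z)                                   # m = n + 1 observations
    n = m - 1
    Kt = np.array([[kernel(x, y) for y in Z] for x in Z])
    H = np.eye(m) - np.ones((m, m)) / m
    K = (H @ Kt @ H)[1:, 1:]                     # center, drop row/col 0

    v, U = np.linalg.eigh(K)                     # ascending eigenvalues
    v, U = v[::-1][:p], U[:, ::-1][:, :p]        # keep p leading pairs
    Ep = U @ np.diag(v ** -0.5)
    G = np.diag(v) / n

    K1, Kn = K[:, 1:], K[:, :-1]                 # K_{-1}, K_{-n}
    D = G + n * eps * np.eye(p)                  # denominator of (3)/(4)
    if criterion == "MAC":
        N = 0.5 * Ep.T @ (K1 @ Kn.T + Kn @ K1.T) @ Ep
    else:                                        # Box-Tiao, equation (4)
        B = K @ K + n * eps * K
        M = Kn @ K1.T @ Ep
        N = M.T @ np.linalg.solve(B, M)
    w, vecs = eigh(N, D)                         # generalized eigenproblem
    b = vecs[:, np.argmin(np.abs(w))]            # minimal |eigenvalue|
    a = Ep @ b
    return np.concatenate(([-a.mean()], a - a.mean()))
```

An alarm can then be raised whenever the whitened score $f(z_t)$ leaves a confidence band calibrated on the training sample.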
5 Relation to other methods and discussion
The methods presented in this work offer numerous parallels with other kernel methods such as
kernel-PCA or kernel-CCA which, similarly to the BT and MAC functionals, provide a canonical
decomposition of Hn into n ranked eigenfunctions.
When Z is finite dimensional, the authors of [18] perform PCA on a time-series sample z0 , . . . , zn
and consider its eigenvector with smallest eigenvalue to detect cointegrated relationships in the process $Z_t$. Their assumption is that a linear mapping $a^\top Z_t$ that has small variance on the whole sample can be interpreted as an integrated relationship. Although the criterion considered by PCA,
namely variance, disregards the temporal structure of the observations and only focuses on the values spanned by the process, this technique is useful to get rid of all non-stationary components of
Zt . On the other hand, kernel-PCA [2], a non-parametric extension of PCA, can be naturally applied
for anomaly detection in an i.i.d. setting [5]. It is thus natural to use kernel-PCA, namely an eigenfunction with low variance, and hope that it will have low autocorrelation to define white functionals
of a process. Our experiments show that this is indeed the case and in agreement with [5] seem to
³ Recall that if $(u_i, v_i)$ are eigenvector and eigenvalue pairs of $K$, the matrix $E$ of coordinates of the eigenfunctions $e_i$ expressed in the $n$ points $\phi_1, \ldots, \phi_n$ can be written as $U\operatorname{diag}(v_i^{-1/2})$ and the eigenvalues $g_i$ are equal to $v_i/n$ if taken in the same order [2].
indicate that the eigenfunctions which lie at the very low end of the spectrum, usually discarded as
noise and less studied in the literature, can prove useful for anomaly detection tasks.
kernel-CCA and variations such as NOCCO [12] are also directly related to the BT functional.
Indeed, the operator V V ? used in this work to define ? is used in the context of kernel-CCA to
extract one of the two functions which maximally correlate two samples, the other function being
obtained from V ? V . Notable differences between our approach and kernel-CCA are: 1. in the
context of this paper, V is an autocorrelation operator while the authors of [12] consider normalized
covariances between two different samples; 2. kernel-CCA assumes that samples are independently
and identically drawn, which is definitely not the case for the BT functional; 3. while kernel-CCA
maximizes the Rayleigh quotient of V V ? , we look for eigenfunctions which lie at the lower end of
the spectrum of the same operator. A possible extension of our work is to look for two functionals f
and g which, rather than maximize the correlation of two distinct samples as is the case in CCA, are
estimated to minimize the correlation between g(zt ) and f (zt+1 ). This direction has been explored
in [19] to shed a new light on the Box-Tiao approach in the finite dimensional case.
6 Experimental results using a population dynamics model
6.1 Generating sample paths polluted by anomalies
We consider in this experimental section a simulated dynamical system perturbed by arbitrary anomalies. To this effect, we use the Lotka-Volterra equations to generate time-series quantifying the populations of different species competing for common resources. For $S$ species, the model tracks the population level $X_{t,i}$ at time $t$ of each species $i$, which is a number bounded between 0 and 1. Values of 0 and 1 account respectively for the extinction and the saturation levels of each species. Writing $\circ$ for the coordinate-wise (Kronecker) product of vectors and matrices and $h > 0$ for a discretization step, the population vector $X_t \in [0,1]^S$ follows the discrete-time dynamic equation
\[ X_{t+1} = X_t + \frac{1}{h}\, r \circ X_t \circ (\mathbf{1}_S - A X_t). \]
We consider the following coefficients introduced in [20] which are known to yield chaotic behavior,
\[ S = 4, \qquad r = \begin{pmatrix} 1 \\ 0.72 \\ 1.53 \\ 1.27 \end{pmatrix}, \qquad A = \begin{pmatrix} 1 & 1.09 & 1.52 & 0 \\ 0 & 1 & 0.44 & 1.36 \\ 2.33 & 0 & 1 & 0.47 \\ 1.21 & 0.51 & 0.35 & 1 \end{pmatrix}, \]
which can be turned into a stochastic system by adding an i.i.d. standard Gaussian noise $\epsilon_t$,
\[ Z_{t+1} = Z_t + \frac{1}{h}\, r \circ Z_t \circ (\mathbf{1}_4 - A Z_t) + \sigma_\epsilon\, \epsilon_t. \tag{5} \]
Whenever the equations generate coordinates below 0 or above 1, the violating coordinates are set to $0 + u$ or $1 - u$ respectively, where $u$ is uniform over $[0, 0.01]$.
We consider trajectories of length 800 of the Lotka-Volterra system described in Equation (5). For each experiment we draw a starting point $Z_0$ randomly with uniform distribution on $[0,1]^4$, discard the 10 first iterations and generate 400 iterations following Equation (5). Following this we select randomly (uniformly over the remaining 400 steps) 40 time stamps $t_1, \ldots, t_{40}$ where we introduce a random perturbation at $t_k$ such that $Z_{t_k}$, rather than following the dynamic of Equation (5), is randomly perturbed by a noise $\eta_t$ chosen uniformly over $\{-1, 1\}^4$ with a magnitude $\sigma_a$, that is
\[ Z_{t_k} = Z_{t_k - 1} + \sigma_a\, \eta_{t_k - 1}. \]
For all other timestamps $t_k < t < t_{k+1}$, the system follows the usual dynamic of Equation (5). Anomalies violate the usual dynamics in two different ways: first, they ignore the usual dynamical equations and the current location of the process to create instead purely random increments; second, depending on the magnitude of $\sigma_a$ relative to $\sigma_\epsilon$, such anomalies may induce unusual jumps.
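For concreteness, the generator can be sketched as follows (our illustration, not the authors' script; the discretization step $h$ is not specified in the text and is set to 1 here):

```python
import numpy as np

def simulate_lv(T=400, n_anom=40, sig_eps=0.01, sig_a=0.02, h=1.0, seed=0):
    """Sample a path of Equation (5) with n_anom anomalous steps."""
    rng = np.random.default_rng(seed)
    r = np.array([1.0, 0.72, 1.53, 1.27])
    A = np.array([[1.00, 1.09, 1.52, 0.00],
                  [0.00, 1.00, 0.44, 1.36],
                  [2.33, 0.00, 1.00, 0.47],
                  [1.21, 0.51, 0.35, 1.00]])

    def clip(z):                     # boundary rule of the text
        u = rng.uniform(0.0, 0.01, z.shape)
        return np.where(z < 0, u, np.where(z > 1, 1 - u, z))

    z = rng.uniform(0, 1, 4)
    for _ in range(10):              # discard the first iterations
        z = clip(z + r * z * (1 - A @ z) / h
                 + sig_eps * rng.standard_normal(4))

    anomalies = set(rng.choice(T, n_anom, replace=False))
    path, labels = [], np.zeros(T, dtype=bool)
    for t in range(T):
        if t in anomalies:           # random jump, ignores the dynamics
            z = clip(z + sig_a * rng.choice([-1.0, 1.0], 4))
            labels[t] = True
        else:
            z = clip(z + r * z * (1 - A @ z) / h
                     + sig_eps * rng.standard_normal(4))
        path.append(z)
    return np.array(path), labels
```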
6.2 Estimation of white functionals and other alarm functions
We compare in this experiment five techniques to detect the anomalies described above: the Box-Tiao functional and a variant described in the paragraph below, the minimal autocorrelation functional, a one-class SVM and the low-variance functional defined by the $(p+1)$-th eigenfunction of the empirical covariance $C_n$, given by kernel-PCA.
[Figure 1 here. Panels: a sample path of the Lotka-Volterra system (top), followed by the alarm scores of each method with anomalies marked — Box-Tiao (AUC: 0.828), kMAC (AUC: 0.797), kPCA (AUC: 0.628), ocSVM (AUC: 0.444) — each panel paired with the weights assigned to the training observations.]
Figure 1: The figure on the top plots a sample path of length 200 of a 4-dimensional Lotka-Volterra dynamic system with perturbations drawn with $\sigma_\epsilon = .01$ and $\sigma_a = 0.02$. The data is split between 80 regular observations and 120 observations polluted by 10 anomalies. All four functionals have been estimated using $\omega = 1$, and we highlight by a red dot the values they take when an anomaly is actually observed. The respective weights associated to each of the 80 training observations are displayed on the right of each methodology.
All techniques are parameterized by a kernel $k$. Writing $\delta z_i = z_i - z_{i-1}$, we use the following mixture of kernels:
\[ k(z_i, z_j) = \omega\, e^{-100\|\delta z_i - \delta z_j\|^2} + (1 - \omega)\, e^{-10\|z_i - z_j\|^2}, \tag{6} \]
with $\omega \in [0, 1]$. The first term in $k$ discriminates observations according to their most recent increments, the second according to their location in $[0,1]^4$. When $\omega = 0.5$, $k$ accounts for both the state of the system and its most recent increments, while only increments are considered for $\omega = 1$. Anomalies can be detected with both criterions, since they can be tracked down when the process visits unusual regions or undergoes brusque and atypical changes. The kernel widths have been set arbitrarily.
We discuss in this paragraph a variant of the BT functional. While the MAC functional is defined and estimated in order to behave as closely as possible to random i.i.d. noise, the BT functional $f_{BT}$ is tuned to be stationary, as discussed in [11]. In order to obtain a white functional from $f_{BT}$, it is possible to model the time series $f_{BT}(z_t)$ as a unidimensional autoregressive model, that is, to estimate (on the training sample again) coefficients $r_1, r_2, \ldots, r_q$ such that
$$f_{BT}(z_t) = \sum_{i=1}^{q} r_i\, f_{BT}(z_{t-i}) + \tilde{\varepsilon}^{BT}_t.$$
Both the order q and the autoregressive coefficients can be estimated on the training sample with standard AR packages, using for instance Schwarz's criterion to select q. Note that although $\varphi(Z_t)$ is assumed to be ARH(1), this does not necessarily translate into the real-valued process $f_{BT}(Z_t) = \langle f_{BT}, \varphi_t \rangle_{\mathcal{H}}$ being AR(1), as pointed out in [14, Theorem 3.4]. In practice, however, we use the residuals $\tilde{\varepsilon}^{BT}_t = f_{BT}(z_t) - \sum_{i=1}^{q} r_i\, f_{BT}(z_{t-i})$ to define the Box-Tiao residuals functional, which we write $\tilde{f}_{BT}$.
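For illustration, the AR fit and the residuals can be computed with ordinary least squares rather than a dedicated AR package; this sketch assumes the Box-Tiao scores on the training sample are given as a 1D array and that q has already been chosen:

```python
import numpy as np

def bt_residuals(scores, q):
    """Fit an AR(q) model to the training scores f_BT(z_t) and return
    the residuals, which define the Box-Tiao residuals functional."""
    y = scores[q:]
    X = np.column_stack([scores[q - i:len(scores) - i] for i in range(1, q + 1)])
    r, *_ = np.linalg.lstsq(X, y, rcond=None)   # AR coefficients r_1, ..., r_q
    return y - X @ r                            # residuals used by the functional
```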
[Figure 2 panels: AUC versus noise amplitude ε_η (0.01–0.05) for γ = 0, γ = 0.5 and γ = 1; curves shown for BT Res, BT, MAC, ocSVM and kPCA.]
Figure 2: The three successive plots stand for three different values of γ = 0, 0.5, 1. The detection rate naturally increases with the size of the anomaly, to the extent that the task becomes only a gap detection problem when ε_η gets closer to 0.05. Functionals $f_{BT}$, $\tilde{f}_{BT}$ and $f_{MAC}$ have similar performance and outperform the other techniques when the task is most difficult and ε_η is small.
6.3 Parameter selection methodology and numerical results
The BT functional $f_{BT}$ and its residuals $\tilde{f}_{BT}$, the MAC function $f_{MAC}$, the one-class SVM $\hat{f}_{ocSVM}$ and the (p+1)th eigenfunction $e_{p+1}$ are estimated on a set of 400 observations. We set p through the rule that the first p directions must carry at least 98% of the total variance of $C_n$, that is, p is the first integer such that $\sum_{i=1}^{p} g_i > 0.98 \sum_{i=1}^{n} g_i$. We fix the ν parameter of the ocSVM to 0.1. The BT and MAC functionals additionally require a regularization term $\lambda_n$, which we select by finding the best ridge regressor of $\varphi_{t+1}$ given $\varphi_t$ through a 4-fold cross-validation procedure on the training set. For $f_{BT}$, $\tilde{f}_{BT}$, $f_{MAC}$ and the kPCA functional $e_{p+1}$, we use their respective empirical mean μ and standard deviation σ on the training set to rescale and whiten their output on the test set, namely we consider the values $(f(z) - \mu)/\sigma$. Although more elaborate anomaly detection schemes on such unidimensional time series might be considered, for the sake of simplicity we treat these raw outputs directly as alarm scores.
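The two selection rules above are simple to implement; a sketch (helper names ours, with the eigenvalues g_i of C_n assumed precomputed):

```python
import numpy as np

def choose_p(g, frac=0.98):
    """Smallest p whose top-p eigenvalues carry at least `frac` of the
    total variance of C_n (g sorted in decreasing order)."""
    g = np.sort(np.asarray(g))[::-1]
    return int(np.searchsorted(np.cumsum(g), frac * g.sum()) + 1)

def whiten_scores(train_scores, test_scores):
    """Rescale a detector's raw output on the test set with its empirical
    mean and standard deviation on the training set: (f(z) - mu) / sigma."""
    mu, sigma = train_scores.mean(), train_scores.std()
    return (test_scores - mu) / sigma
```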
Having on the one hand the correct labels for anomalies and the scores for all detectors, we vary
the threshold at which an alarm is raised to produce ROC curves. We use the area under the curve
of each method on each sample path as a performance measure for that path. Figure 1 provides
a summary of the performance of each method on a unique sample path of 200 observations and
10 anomalies. Perturbation parameters are then set such that ε_ζ = 0.01 and ε_η varies between 0.005 and 0.055; for each couple (ε_ζ, ε_η) we generate 500 draws and compute the mean AUC of each technique on these draws. We report in Figure 2 these averaged performances for three different choices of the kernel, namely three different values of γ as defined in Equation (6).
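Since the alarm threshold is swept over all values, the AUC can be computed directly from the whitened scores; a sketch using scikit-learn (we treat the absolute score as the alarm, on the assumption that the bounding tube is two-sided):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mean_auc(label_runs, score_runs):
    """Average AUC over simulated draws; each element is one sample path,
    with labels 1 at perturbed time stamps and 0 elsewhere."""
    return float(np.mean([roc_auc_score(y, np.abs(s))
                          for y, s in zip(label_runs, score_runs)]))
```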
6.4 Discussion
In the experimental setting, anomalies can be characterized as unusual increments between two successive states of an otherwise smooth dynamical system. Anomalies are unusual due to their size, controlled by ε_η, and their directions, sampled in {−1, 1}^4. When the step ε_η is relatively small, it is difficult to flag an anomaly correctly without taking the system's dynamics into account, as illustrated by the relatively poor performance of the ocSVM and the kPCA compared to the BT, BT Res and MAC functions. On the contrary, when ε_η is big, anomalies can be more simply discriminated as big gaps. The methods we propose do not perform as well as the ocSVM in such a setting. We can hypothesize two reasons for this: first, in such a regime, which puts little emphasis on dynamics, white functionals may be less useful than a simple ocSVM with an adequate kernel. Second, in this study the BT and MAC functions flag anomalies whenever an evaluation goes outside of a certain bounding tube; more advanced detectors of a deviation or change from normality, such as CUSUM [21], might be studied in future work.
References
[1] V. Chandola, A. Banerjee, and V. Kumar. Anomaly detection: A survey. ACM Computing Surveys, 2009.
[2] B. Schölkopf, A. Smola, and K. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput., 10(5):1299–1319, 1998.
[3] B. Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. C. Williamson. Estimating the support of a high-dimensional distribution. Neural Comput., 13(7):1443–1471, 2001.
[4] A. B. Gardner, A. M. Krieger, G. Vachtsevanos, and B. Litt. One-class novelty detection for seizure analysis from intracranial EEG. J. Mach. Learn. Res., 7:1025–1044, 2006.
[5] H. Hoffmann. Kernel PCA for novelty detection. Pattern Recognit., 40(3):863–874, 2007.
[6] A. J. Fox. Outliers in time series. J. R. Stat. Soc. Ser. B, 34(3):350–363, 1972.
[7] R. S. Tsay, D. Peña, and A. E. Pankratz. Outliers in multivariate time series. Biometrika, 87(4):789–804, 2000.
[8] A. Laukaitis and A. Račkauskas. Testing changes in Hilbert space autoregressive models. Lithuanian Mathematical Journal, 42(4):343–354, 2002.
[9] G. S. Maddala and I. M. Kim. Unit Roots, Cointegration, and Structural Change. Cambridge Univ. Press, 1998.
[10] P. Switzer and A. A. Green. Min/max autocorrelation factors for multivariate spatial imagery. Computer Science and Statistics, 16:13–16, 1985.
[11] G. Box and G. C. Tiao. A canonical analysis of multiple time series. Biometrika, 64(2):355–365, 1977.
[12] K. Fukumizu, F. R. Bach, and A. Gretton. Statistical consistency of kernel canonical correlation analysis. J. Mach. Learn. Res., 8:361–383, 2007.
[13] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, 2003.
[14] D. Bosq. Linear Processes in Function Spaces: Theory and Applications. Springer, 2000.
[15] A. Mas. Asymptotic normality for the empirical estimator of the autocorrelation operator of an ARH(1) process. C. R. Acad. Sci. Paris Sér. I Math., 329(10):899–902, 1999.
[16] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[17] L. Zwald and G. Blanchard. Finite dimensional projection for classification and statistical learning. IEEE Trans. Inform. Theory, 54:4169, 2008.
[18] J. H. Stock and M. W. Watson. Testing for common trends. J. Am. Stat. Assoc., pages 1097–1107, 1988.
[19] P. Bossaerts. Common nonstationary components of asset prices. J. Econ. Dynam. Contr., 12(2-3):347–364, 1988.
[20] J. A. Vano, J. C. Wildenberg, M. B. Anderson, J. K. Noel, and J. C. Sprott. Chaos in low-dimensional Lotka-Volterra models of competition. Nonlinearity, 19(10):2391–2404, 2006.
[21] M. Basseville and I. V. Nikiforov. Detection of Abrupt Changes: Theory and Applications. Prentice-Hall, 1993.
| 3632 |
2,905 | 3,633 | Semi-supervised Learning in
Gigantic Image Collections
Rob Fergus
Courant Institute, NYU,
715 Broadway,
New York, NY 10003
Yair Weiss
School of Computer Science,
Hebrew University,
91904, Jerusalem, Israel
Antonio Torralba
CSAIL, EECS, MIT,
32 Vassar St.,
Cambridge, MA 02139
[email protected]
[email protected]
[email protected]
Abstract
With the advent of the Internet it is now possible to collect hundreds of millions
of images. These images come with varying degrees of label information. ?Clean
labels? can be manually obtained on a small fraction, ?noisy labels? may be extracted automatically from surrounding text, while for most images there are no
labels at all. Semi-supervised learning is a principled framework for combining
these different label sources. However, it scales polynomially with the number
of images, making it impractical for use on gigantic collections with hundreds of
millions of images and thousands of classes. In this paper we show how to utilize recent results in machine learning to obtain highly efficient approximations
for semi-supervised learning that are linear in the number of images. Specifically,
we use the convergence of the eigenvectors of the normalized graph Laplacian to
eigenfunctions of weighted Laplace-Beltrami operators. Our algorithm enables
us to apply semi-supervised learning to a database of 80 million images gathered
from the Internet.
1
Introduction
Gigantic quantities of visual imagery are present on the web and in off-line databases. Effective
techniques for searching and labeling this ocean of images and video must address two conflicting
problems: (i) the techniques to understand the visual content of an image and (ii) the ability to scale
to millions of billions of images or video frames. Both aspects have received significant attention
from researchers, the former being addressed by recent work on object and scene recognition, while
the latter is the focus of the content-based image retrieval community (CBIR) [7]. A key issue
pertaining to both aspects of the problem is the diversity of label information accompanying real
world image data. A variety of collaborative and online annotation efforts have attempted to build
large collections of human labeled images, ranging from simple image classifications, to boundingboxes and precise pixel-level segmentation [16, 21, 24]. While impressive, these manual efforts
have no hope of scaling to the many billions of images on the Internet. However, even though
most images on the web lack human annotation, they often have some kind of noisy label gleaned
from nearby text or from the image filename and often this gives a strong cue about the content of
the image. Finally, there are images where we have no information beyond the pixels themselves.
Semi-supervised learning (SSL) methods are designed to handle this spectrum of label information
[26, 28]. They rely on the density structure of the data itself to propagate known labels to areas
lacking annotations, and provide a natural way to incorporate labeling uncertainty. However, to
model the density of the data, each point must measure its proximity to every other. This requires
polynomial time ? prohibitive for large-scale problems.
In this paper, we introduce a semi-supervised learning scheme that is linear in the number of images, enabling us to tackle very large scale problems. Building on recent results in spectral graph
theory, we efficiently construct accurate numerical approximations to the eigenvectors of the normalized graph Laplacian. Using these approximations, we can easily propagate labels through huge
collections of images.
1
Related Work
Cleaning up Internet image data has been explored by several authors: Berg et al. [4], Fergus
et al. [8], Li et al. [13], Vijayanarasimhan et al. [22], amongst others. Unlike our approach, these
methods operate independently on each class and would be problematic to scale to millions or billions of images. A related group of techniques use active labeling, e.g. [10]. Semi-supervised learning is a rapidly growing sub-field of machine learning, dealing with datasets that have a large number
of unlabeled points and a much smaller number of labeled points (see [5] for a recent overview). The
most popular approaches are based on the graph Laplacian (e.g. [26, 28] and there has been much
theoretical work devoted to the asymptotics of these Laplacians [3, 6, 14]. However, these methods
require the explicit manipulation of an n ? n Laplacian matrix (n being the number of data points),
for example [2] notes: ?our algorithms compute the inverse of a dense Gram matrix which leads to
O(n3 ) complexity. This may be impractical for large datasets.?
The large computational complexity of standard graph Laplacian methods has lead to a number
of recent papers on efficient semi-supervised learning (see [27] for an overview). Many of these
methods (e.g. [18, 12, 29, 25] are based on calculating the Laplacian only for a smaller, backbone,
graph which reduces complexity to be cubic in the size of the small graph. In most cases [18, 12]
the smaller graph is built simply by randomly subsampling a subset of the points, while in [29]
a mixture model is learned on the original dataset and each mixture component defines a node in
the backbone graph. In [25] the backbone graph is found using non-negative matrix factorization.
In [9] the backbone graph is a uniform grid over the high dimensional space (so the number of nodes
grows exponentially with dimension). In [20] the number of datapoints is not reduced but rather the
number of edges. This allows the use of sparse numerical algebra techniques.
The problem with approaches based on backbone graphs is that the spectrum of the graph Laplacian
can change dramatically with different backbone construction methods [12]. This can also be seen
visually (see Fig. 3) by examining the clusterings suggested by the full data and a small subsample.
Even in cases where the ?correct? clustering is obvious when the full data is considered, the smaller
subset may suggest erroneous clusterings (e.g. Fig. 3(left)). In our approach, we take an alternative
route. Rather than trying to reduce the number of points, we take the limit as the number of points
goes to infinity.
2
Semi-supervised Learning
We start by introducing semi-supervised learning in a graph setting and then describe an approximation that reduces the learning time from polynomial to linear in the number of images. Fig. 1
illustrates the semi supervised learning problem. Following the notations of Zhu et al. [28], we
are given a labeled dataset of input-output pairs (Xl , Yl ) = {(x1 , y1 ), ..., (xl , yl )} and an unlabeled
dataset Xu = {xl+1 , ..., xn }. Thus in Fig. 1(a) we are given two labeled points and 500 unlabeled
points. Fig. 1(b) shows the output of a nearest neighbor classifier on the unlabeled points. The
purely supervised solution ignores the apparent clustering suggested by the data.
In order to use the unlabeled data, we form a graph G = (V, E) where the vertices V are the
datapoints x1 , ..., xn , and the edges E are represented by an n ? n matrix W . Entry Wij is the edge
weight between nodes i, j and a common practice is to set Wij =Pexp(?kxi ? xj k2 /2?2 ). Let D
be a diagonal matrix whose diagonal elements are given by Dii = j Wij , the combinatorial graph
Laplacian is defined as L = D ? W , which is also called the unnormalized Laplacian.
In graph-based semi-supervised learning, the graph Laplacian L is used to define a smoothness
operator that takes into account the unlabeled data. The main idea is to find functions f which agree
with the labeled data but are also smooth with respect to the graph. The smoothness is measured by
the graph Laplacian:
$$f^T L f = \frac{1}{2}\sum_{i,j} W_{ij}\,(f(i) - f(j))^2$$
Of course simply minimizing smoothness can be achieved by the trivial solution f = 1, but in
semi-supervised learning, we minimize a combination of the smoothness and the training loss. For
squared error training loss, this is simply:
$$J(f) = f^T L f + \sum_{i=1}^{l} \lambda\,(f(i) - y_i)^2 = f^T L f + (f - y)^T \Lambda (f - y)$$
[Figure 1 panels: (a) Data, (b) Supervised, (c) Semi-Supervised.]
Figure 1: Comparison of supervised and semi-supervised learning on toy data. Semi-supervised
learning seeks functions that are smooth with respect to the input density.
[Figure 2 panels. Left: Data with generalized eigenvectors φ₁ (λ₁ = 0), φ₂ (λ₂ = 0.0002), φ₃ (λ₃ = 0.038). Right: Density with eigenfunctions Ψ₁ (λ₁ = 0), Ψ₂ (λ₂ = 0.0002), Ψ₃ (λ₃ = 0.035).]
Figure 2: Left: The three generalized eigenvectors of the graph Laplacian, for the toy data. Note
that the semi-supervised solution can be written as a linear combination of these eigenvectors (in this
case, the second eigenvector is enough). Using generalized eigenvectors (or equivalently normalized
Laplacians) increases robustness of the first eigenvectors, compared to using the un-normalized
eigenvectors. Right: The 2D density of the toy data, and the associated smoothness eigenfunctions
defined by that density. The plots use the Matlab jet colormap.
where Λ is a diagonal matrix whose diagonal elements are $\Lambda_{ii} = \lambda$ if i is a labeled point and $\Lambda_{ii} = 0$ for unlabeled points. The minimizer is of course a solution to $(L + \Lambda)f = \Lambda y$. Fig. 1(c) shows the
semi-supervised solution.
Although the solution can be given in closed form for the squared error loss, note that it requires
solving an n × n system of linear equations. For large n this poses serious problems with computation
time and robustness. But as suggested in [5, 17, 28], the dimension of the problem can be reduced
dramatically by only working with a small number of eigenvectors of the Laplacian.
Let $\phi_i, \lambda_i$ be the generalized eigenvectors and eigenvalues of the graph Laplacian L (solutions to $L\phi_i = \lambda_i D\phi_i$). Note that the smoothness of an eigenvector $\phi_i$ is simply $\phi_i^T L \phi_i = \lambda_i$, so that eigenvectors with smaller eigenvalues are smoother. Since any vector in $\mathbb{R}^n$ can be written $f = \sum_i \alpha_i \phi_i$, the smoothness of a vector is simply $\sum_i \alpha_i^2 \lambda_i$, so that smooth vectors will be linear combinations of the eigenvectors with small eigenvalues¹.
Fig. 2(left) shows the three generalized eigenvectors of the Laplacian for the toy data shown in
Fig. 1(a). Note that the semi-supervised solution (Fig. 1(c)) is a linear combination of these three
eigenvectors (in fact just one eigenvector is enough). In general, we can significantly reduce the
dimension of f by requiring it to be of the form $f = U\alpha$, where U is an $n \times k$ matrix whose columns are the k eigenvectors with smallest eigenvalue. We now have:
$$J(\alpha) = \alpha^T \Sigma\, \alpha + (U\alpha - y)^T \Lambda (U\alpha - y)$$
where $\Sigma$ is the diagonal matrix of the corresponding eigenvalues. The minimizing $\alpha$ is now a solution to the $k \times k$ system of equations:
$$(\Sigma + U^T \Lambda U)\,\alpha = U^T \Lambda y \qquad (1)$$
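A sketch of this reduced solve (assuming n is small enough that the generalized eigenvectors of (L, D) can still be computed directly; for large n they are replaced by the eigenfunction approximations of the next section):

```python
import numpy as np
from scipy.linalg import eigh

def reduced_ssl_solve(L, D, y, labeled_idx, lam, k):
    """Project onto the k smoothest generalized eigenvectors of (L, D)
    and solve the k x k system of Eqn. 1."""
    lams, U = eigh(L, D, subset_by_index=[0, k - 1])  # L u = lambda D u
    Lam = np.zeros(len(y))
    Lam[labeled_idx] = lam                            # Lambda_ii = lambda on labels
    A = np.diag(lams) + U.T @ (Lam[:, None] * U)      # Sigma + U^T Lambda U
    alpha = np.linalg.solve(A, U.T @ (Lam * y))
    return U @ alpha                                  # f = U alpha
```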
2.1 From Eigenvectors to Eigenfunctions
Given the eigenvectors of the graph Laplacian, we can now solve the semi-supervised problem in a
reduced dimensional space. But to find the eigenvectors in the first place, we need to diagonalize a
n × n matrix. How can we efficiently calculate the eigenvectors as the number of unlabeled points
increases?
We follow [23, 14] in assuming the data $x_i \in \mathbb{R}^d$ are samples from a distribution p(x) and analyzing
the eigenfunctions of the smoothness operator defined by p(x). Fig. 2(right) shows the density in two dimensions for the toy data.
¹This discussion holds for both ordinary and generalized eigenvectors, but the latter are much more stable and we use them.
This density defines a weighted smoothness operator on any function F(x) defined on $\mathbb{R}^d$, which we will denote by $L_p(F)$:
$$L_p(F) = \frac{1}{2}\int (F(x_1) - F(x_2))^2\, W(x_1, x_2)\, p(x_1)\, p(x_2)\, dx_1\, dx_2$$
with $W(x_1, x_2) = \exp(-\|x_1 - x_2\|^2/2\epsilon^2)$. Just as the graph Laplacian defined eigenvectors of increasing smoothness, the smoothness operator will define eigenfunctions of increasing smoothness.
We define the first eigenfunction of $L_p$ by a minimization problem:
$$\Psi_1 = \arg\min_{F:\ \int F^2(x)\,p(x)D(x)\,dx = 1} L_p(F)$$
where $D(x) = \int W(x, x_2)\,p(x_2)\,dx_2$. Note that the first eigenfunction will always be the trivial function $\Psi(x) = 1$, since it has maximal smoothness: $L_p(1) = 0$. The second eigenfunction of $L_p$ minimizes the same problem, with the additional constraint that it be orthogonal to the first eigenfunction: $\int F(x)\Psi_1(x)D(x)p(x)\,dx = 0$. More generally, the kth eigenfunction minimizes $L_p(F)$ under the additional constraints that $\int F(x)\Psi_l(x)p(x)D(x)\,dx = 0$ for all $l < k$. The eigenvalue of an eigenfunction $\Psi_k$ is simply its smoothness: $\lambda_k = L_p(\Psi_k)$. Fig. 2(right) shows the first
three eigenfunctions corresponding to the density of the toy data. Similar to the eigenvectors of the
graph Laplacian, the second eigenfunction reveals the natural clustering of the data. Note that the
eigenvalue of the eigenfunctions is similar to the eigenvalue of the discrete generalized eigenvector.
How are these eigenfunctions related to the generalized eigenvectors of the Laplacian? It is easy to see that as $n \to \infty$, $\frac{1}{n^2} f^T L f = \frac{1}{2n^2}\sum_{i,j} W_{ij}(f(i) - f(j))^2$ will approach $L_p(F)$, and $\frac{1}{n}\sum_i f^2(i)D(i,i)$ will approach $\int F^2(x)D(x)p(x)\,dx$, so that the minimization problems that define the eigenvectors approach the problems that define the eigenfunctions as $n \to \infty$. Thus under
suitable convergence conditions, the eigenfunctions can be seen as the limit of the eigenvectors as
the number of points goes to infinity [1, 3, 6, 14]. For certain parametric probability functions (e.g.
uniform, Gaussian) the eigenfunctions can be calculated analytically [14, 23]. Thus for these cases,
there is a tremendous advantage in estimating p(x) and calculating the eigenfunctions from p(x)
rather than attempting to estimate the eigenvectors directly. For example, consider a problem with
80 million datapoints sampled from a 32-dimensional Gaussian. Instead of diagonalizing an 80 million by 80 million matrix, we can estimate a 32 × 32 covariance matrix and get analytical
eigenfunctions. In low dimensions, we can calculate the eigenfunction numerically by discretizing
the density. Let g be the eigenfunction values at a set of discrete points, then g satisfies:
$$(\hat{D} - P\tilde{W}P)\,g = \sigma\, P\tilde{D}\,g \qquad (2)$$
where $\tilde{W}$ is the affinity between the discrete points, P is a diagonal matrix whose diagonal elements give the density at the discrete points, $\hat{D}$ is a diagonal matrix whose diagonal elements are the sums of the columns of $P\tilde{W}P$, and $\tilde{D}$ is a diagonal matrix whose diagonal elements are the sums of the columns of $P\tilde{W}$. This method was used to calculate the eigenfunctions in Fig. 2(right).
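A sketch of this 1D computation (the discretization grid, the added constant and the helper name are ours):

```python
import numpy as np
from scipy.linalg import eigh

def eigenfunctions_1d(p, centers, eps, reg=1e-3):
    """Solve the discretized generalized eigenproblem of Eqn. 2 for one
    coordinate; p holds histogram densities at the bin `centers`."""
    p = p + reg                                  # regularize zero-density bins
    W = np.exp(-(centers[:, None] - centers[None, :]) ** 2 / (2.0 * eps ** 2))
    P = np.diag(p)
    PWP = P @ W @ P
    D_hat = np.diag(PWP.sum(axis=1))             # column sums of P W P
    D_til = np.diag((P @ W).sum(axis=1))         # column sums of P W
    sig, g = eigh(D_hat - PWP, P @ D_til)        # eigenvalues ascending
    return sig, g                                # g[:, i] is the i-th eigenfunction
```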
Instead of assuming that p(x) has a simple, parametric form, we will use a more modest assumption,
that p(x) has a product form. Specifically, we assume that if we rotate the data s = Rx then
$p(s) = p(s_1)p(s_2)\cdots p(s_d)$. This assumption allows us to calculate the eigenfunctions of $L_p$ using only the marginal distributions $p(s_i)$.
Observation: Assume $p(s) = p(s_1)p(s_2)\cdots p(s_d)$. Let $p_k$ be the marginal distribution of a single coordinate in s. Let $\Psi_i(s_k)$ be an eigenfunction of $L_{p_k}$ with eigenvalue $\lambda_i$; then $\Psi_i(s) = \Psi_i(s_k)$ is also an eigenfunction of $L_p$ with the same eigenvalue $\lambda_i$.
Proof: This follows from the observation in [14, 23] that for separable distributions, the eigenfunctions are also separable.
This observation motivates the following algorithm:
• Find a rotation of the data R, so that the components of s = Rx are as independent as possible.
• For each "independent" component $s_k$, use a histogram to approximate the density $p(s_k)$. In order to regularize the solution (see below), we add a small constant to the value of the histogram at each bin.
• Given the approximated density $p(s_k)$, solve numerically for the eigenfunctions and eigenvalues of $L_{p_k}$ using Eqn. 2. As discussed above, this can be done by solving a generalized eigenvalue problem for a B × B matrix, where B is the number of bins in the histogram.
• Order the eigenfunctions from all components by increasing eigenvalue (a code sketch of this procedure appears below).
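A sketch of the full pipeline, reusing the eigenfunctions_1d helper above (PCA is used as the rotation R, an assumption discussed in Section 3.1):

```python
import numpy as np

def separable_eigenfunctions(S, n_bins, eps, k):
    """S = R x is the rotated data (n x d). Histogram each coordinate,
    solve Eqn. 2 per coordinate, and keep the k smoothest
    single-coordinate eigenfunctions overall."""
    funcs = []
    for j in range(S.shape[1]):
        hist, edges = np.histogram(S[:, j], bins=n_bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        sig, g = eigenfunctions_1d(hist, centers, eps)
        for i in range(1, len(sig)):             # skip the trivial constant
            funcs.append((sig[i], j, centers, g[:, i]))
    funcs.sort(key=lambda t: t[0])               # increasing eigenvalue
    return funcs[:k]                             # (eigenvalue, dim, bins, values)
```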
The need to add a small constant to the histogram comes from the fact that the smoothness operator
Lp (F ) ignores the value of F wherever the density vanishes, p(x) = 0. Thus the eigenfunctions can
oscillate wildly in regions with zero density. By adding a small constant to the density we enforce
an additional smoothness regularizer, even in regions of zero density. Similar regularizers are used
in [2, 9].
This algorithm will recover eigenfunctions of Lp , which depend only on a single coordinate. As
discussed in [23], products of these eigenfunctions for different coordinates are also eigenfunctions,
but we will assume the semi-supervised solution is a linear combination of only the single-coordinate
eigenfunctions. By choosing the k eigenfunctions with smallest eigenvalue we now have k functions
$\Psi_k(x)$ whose value is given at a set of discrete points for each coordinate. We then use linear interpolation in 1D to interpolate $\Psi(x)$ at each of the labeled points $x_l$. This allows us to solve
Eqn. 1 in time that is independent of the number of unlabeled points.
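Evaluating the selected eigenfunctions at arbitrary points then reduces to 1D interpolation, giving the matrix U needed by Eqn. 1 (a sketch continuing the conventions above):

```python
import numpy as np

def evaluate_eigenfunctions(funcs, X):
    """Build the n x k matrix U: column m interpolates the m-th selected
    eigenfunction along the single coordinate it depends on."""
    return np.column_stack([np.interp(X[:, j], centers, values)
                            for (_, j, centers, values) in funcs])
```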
Although this algorithm has a number of approximate steps, it should be noted that if the "independent" components are indeed independent, and if the semi-supervised solution is only a linear combination of the single-coordinate eigenfunctions, then this algorithm will exactly recover the semi-supervised solution as $n \to \infty$. Consider again a dataset of 80 million points in 32 dimensions and assume 100 bins per dimension. If the independent components s = Rx are indeed independent, then this algorithm will exactly recover the semi-supervised solution by solving 32 generalized eigenvector problems of size 100 × 100 and a single k × k least squares problem. In contrast, directly
estimating the eigenvectors of the graph Laplacian will require diagonalizing an 80 million by 80
million matrix.
3 Experiments
In this section we describe experiments to illustrate the performance and scalability of our approach.
The results will be reported on the Tiny Images database [19], in combination with the CIFAR-10
label set [11]. This data is diverse and highly variable, having been collected directly from Internet
search engines. The set of labels allows us to accurately measure the performance of our algorithm,
while using data typical of the large-scale Internet settings for which our algorithm is designed.
We start with a toy example that illustrates our eigenfunction approach, compared to the Nystrom
method of Talwalkar et al. [18], another approximate semi-supervised learning scheme that can scale
to large datasets. In Fig. 3 we show two different 2D datasets, designed to reveal the failure modes
of the two methods.
3.1 Features
For the experiments in this paper we use global image descriptors to represent the entire image
(there is no attempt to localize the objects within the images). Each image is thus represented
by a single Gist descriptor [15], which we then project down to 64 dimensions using PCA. As
[Figure 3 panels, two datasets, each shown three ways: Data, Nystrom, Eigenfunction.]
Figure 3: A comparison of the separable eigenfunction approach and the Nystrom method. Both
methods have comparable computational cost. The Nystrom method is based on computing the
graph Laplacian on a set of sparse landmark points and fails in cases where the landmarks do not adequately summarize the density (left). The separable eigenfunction approach fails when the density
is far from a product form (right).
illustrated in Fig. 3, the eigenfunction approach assumes that the input distribution is separable
over dimension. In Fig. 4 we show that while the raw gist descriptors exhibit strong dependencies
between dimensions, this is no longer the case after the PCA projection. Note that PCA is one of the
few types of projection permitted: since distances between points must be preserved only rotations
of the data are allowed.
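A rotation-only PCA projection can be sketched as follows (no rescaling of the components, so distances within the retained subspace are preserved):

```python
import numpy as np

def pca_rotate(X, d_out=64):
    """Project centered descriptors onto the top principal directions
    without whitening; rows of Vt are orthonormal, so this is a rotation
    followed by coordinate selection."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:d_out].T
```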
[Figure 4 panels. Log histograms of Gist descriptors: Dim. 2 vs 3 (MI: 0.555), Dim. 3 vs 4 (MI: 0.484), Dim. 2 vs 16 (MI: 0.159). Log histograms of PCA'd Gist descriptors: Dim. 2 vs 3 (MI: 0.017), Dim. 3 vs 4 (MI: 0.009), Dim. 2 vs 16 (MI: 0.007).]
Figure 4: 2D log histograms formed from 1 million Gist descriptors. Red and blue correspond
to high and low densities respectively. Left: three pairs of dimensions in the raw Gist descriptor,
along with their mutual information score (MI), showing strong dependencies between dimensions.
Right: the dimensions in the Gist descriptors after a PCA projection, as used in our experiments.
The dependencies between dimensions are now much weaker, as the MI scores show. Hence the
separability assumption made by our approach is not an unreasonable one for this type of data.
3.2 Experiments with CIFAR label set
The CIFAR dataset [11] was constructed by asking human subjects to label a subset of classes of
the Tiny Images dataset. For a given keyword and image, the subjects determined whether the given
image was indeed an image of that keyword. The resulting labels span 386 distinct keywords in the
Tiny Images dataset. Our experiments use the sub-set of 126 classes which had at least 200 positive
labels and 300 negative labels, giving a total of 63,000 images.
Our experimental protocol is as follows: we take a random subset of C classes from the set of 126.
For each class c, we randomly choose a fixed test-set of 100 positive and 200 negative examples,
reflecting the typical signal-to-noise ratio found in images from Internet search engines. The training
examples consist of t positive/negative pairs drawn from the remaining pool of 100 positive/negative
images for each keyword.
For each class in turn, we use our scheme to propagate labels from the training examples to the test
examples. By assigning higher probability (values in f ) to the genuine positive images of each class,
we are able to re-rank the images. We also make use of the the training examples from keywords
other than c by treating them as additional negative examples. For example, if we have C = 16
keywords and t = 5 training pairs per keyword, then we have 5 positive training examples and
(5+(16-1)*10)=155 negative training examples for each class. We use these to re-rank the 300 test
images of that particular class. Note that the propagation from labeled images to test images may go
through the unlabeled images that are not even in the same class. Our use of examples from other
classes as negative examples is motivated by real problems, where training labels are spread over
many keywords but relatively few labels are available per class.
In experiments using our eigenfunction approach, we compute a fixed set of k = 256 eigenfunctions on the entire 63,000 datapoints in the 64D space with ε = 0.2 and use these for all runs. For approaches that require explicit formation of the affinity matrix, we calculate the distance between the 64D image descriptors using ε = 0.125. All approaches use λ = 50. To evaluate performance,
we choose to measure the precision at a low recall rate of 15%, this being a sensible operating point
as it corresponds to the first webpage or so in an Internet retrieval setting. Given the split of +ve/-ve
examples in the test data, chance level performance corresponds to a precision of 33%. All results
were generated by averaging over 10 different runs, each with different random train/test draws, and
with different subsets of classes.
In our first set of experiments, shown in Fig. 5(left), we compare our eigenfunction approach to a
variety of alternative learning schemes. We use C = 16 different classes drawn randomly from
the 126, and vary the number of training pairs t from 0 up to 100 (thus the total number of labeled
points, positive and negative, varied from 0 to 3200). Our eigenfunction approach outperforms other
methods, particularly where relatively few training examples are available. We use two baseline
classifiers: (i) Nearest-Neighbor and (ii) RBF kernel SVM, with kernel width σ. The SVM approach
badly over-fits the data for small numbers of training examples, but catches up with the eigenfunction
approach once 64+ve/1984-ve labeled examples are used.
We also test a range of SSL approaches. The exact least-squares approach ($f = (L + \Lambda)^{-1}\Lambda y$)
achieves comparable results to the eigenfunction method, although it is far more expensive. The
eigenvector approach (Eqn. 1) performs less well, being limited by the k = 256 eigenvectors used
(as k is increased, the performance converges to the exact least-squares solution). Neither of these
methods scale to large image collections as the affinity matrix W becomes too big and cannot be
inverted or have its eigenvectors computed. Fig. 5(left) also shows the efficient Nystrom method
[18], using 1000 landmark points, which has a somewhat disappointing performance. Evidently, as
in Fig. 3, the landmark points do not adequately summarize the density. As the number of landmarks
is increased, the performance approaches that of the least squares solution.
[Figure 5 panels. Left: mean precision at 15% recall, averaged over 16 classes, versus log₂ number of +ve training examples/class, with curves for Eigenfunction, Eigenfunction w/noisy labels, Nystrom, Least-squares, Eigenvector, SVM, NN, and Chance (-Inf marks the unsupervised case). Right: (a) without noisy labels and (b) with noisy labels, precision versus log₂ # classes and # +ve training examples/class (1–100); (c) precision versus # eigenfunctions (16–512).]
Figure 5: Left: Performance (precision at 15% recall) on the Tiny Image CIFAR label set for different learning schemes as the number of training pairs is increased, averaged over 16 different classes.
-Inf corresponds to the unsupervised case (0 examples). Our eigenfunction scheme (solid red) outperforms standard supervised methods (nearest-neighbors (green) and a Gaussian SVM (blue)) for
small numbers of training pairs. Compared to other semi-supervised schemes, ours matches the
exact least squares solution (which is too expensive to run on a large number of images), while outperforming approximate schemes, such as Nystrom [18]. By using noisy labels in addition to the
training pairs, the performance is boosted when few training examples are available (dashed red).
Right: (a): The performance of our eigenfunction approach as the number of training pairs per
class and number of classes is varied. Increasing the number of classes also aids performance since
labeled examples from other classes can be used as negative examples. (b): As for (a) but now using noisy label information (Section 3.3). Note the improvement in performance when few training
pairs are available. (c): The performance of our approach (using no noisy labels) as the number of
eigenfunctions is varied.
In Fig. 5(right)(a) we explore how our eigenfunction approach performs as the number of classes C
is varied, for different numbers of training pairs t per class. For a fixed t, as C increases, the number
of negative examples available increases thus aiding performance. Fig. 5(right)(c) shows the effect
of varying the number of eigenfunctions k for C = 16 classes. The performance is fairly stable
above k = 128 eigenfunctions (i.e. on average 2 per dimension), although some mild over-fitting
seems to occur for small numbers of training examples when a very large number is used.
3.3 Leveraging noisy labels
In the experiments above, only two types of data are used: labeled training examples and unlabeled test examples. However, an additional source is the noisy labels from the Tiny Images dataset (the keyword used to query the image search engine). These labels can easily be utilized by our framework: all 300 test examples for a class c are given a positive label with a small weight (λ/10), while the 300(C − 1) test examples from other classes are given a negative label with the same small weight. Note that these labels do not reveal any information about which of the 300 test images are true positives. These noisy labels can provide a significant performance gain when few training (clean) labels are available, as shown in Fig. 5(left) (c.f. solid and dashed red lines). Indeed, when no training labels are available, just the noisy labels, our eigenfunction scheme still performs very well. The performance gain is explored in more detail in Fig. 5(right)(b). In summary, using the eigenfunction approach with noisy labels, the performance obtained with a total of 32 labeled examples is comparable to the SVM trained with 64*16=512 labeled examples.
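In the notation of Section 2, this simply extends the diagonal of Λ and the target vector y; a sketch (the ±1 targets and the index-array interface are our conventions):

```python
import numpy as np

def build_label_terms(n, clean_pos, clean_neg, noisy_pos, noisy_neg, lam):
    """Diagonal of Lambda and targets y combining clean training labels
    (weight lam) with noisy search-engine labels (weight lam / 10)."""
    Lam, y = np.zeros(n), np.zeros(n)
    Lam[clean_pos] = lam;          y[clean_pos] = 1.0
    Lam[clean_neg] = lam;          y[clean_neg] = -1.0
    Lam[noisy_pos] = lam / 10.0;   y[noisy_pos] = 1.0
    Lam[noisy_neg] = lam / 10.0;   y[noisy_neg] = -1.0
    return Lam, y
```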
3.4 Experiments on Tiny Images dataset
Our final experiment applies the eigenfunction approach to the whole of the Tiny Images dataset
(79,302,017 images). We map the gist descriptor for each image down to a 32D space using PCA
and precompute k = 64 eigenfunctions over the entire dataset. The 445,954 CIFAR labels (64,185
of which are +ve) cover 386 keywords, any of which can be re-ranked by solving Eqn. 1, which
takes around 1ms on a fast PC. In Fig. 6 we show our scheme on four different keywords, each using
3 labeled training pairs, resulting in a significant improvement in quality over the original ordering.
A nearest-neighbor classifier which is not regularized by the data density performs worse than our
approach.
[Figure 6 column headings: Ranking from search engine; Nearest Neighbor re-ranking; Eigenfunction re-ranking.]
Figure 6: Re-ranking images from 4 keywords in an 80 million image dataset, using 3 labeled pairs
for each keyword. Rows from top: "Japanese spaniel", "airbus", "ostrich", "auto". From L to
R, the columns show the original image order, results of nearest-neighbors and the results of our
eigenfunction approach. By regularizing the solution using eigenfunctions computed from all 80
million images, our semi-supervised scheme outperforms the purely supervised method.
4 Discussion
We have proposed a novel semi-supervised learning scheme that is linear in the number of images,
and then demonstrated it on challenging datasets, including one of 80 million images. The approach
can easily be parallelized making it practical for Internet-scale image collections. It can also incorporate a variety of label types, including noisy labels, in one consistent framework.
Acknowledgments
The authors would like to thank Héctor Bernal and the anonymous reviewers and area chairs for their
constructive comments. We also thank Alex Krizhevsky and Geoff Hinton for providing the CIFAR
label set. Funding support came from: NSF Career award (ISI 0747120), ISF and a Microsoft
Research gift.
References
[1] M. Belkin and P. Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. Journal of Computer and System Sciences, 2007.
[2] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. JMLR, 7:2399–2434, 2006.
[3] Y. Bengio, O. Delalleau, N. Le Roux, J.-F. Paiement, P. Vincent, and M. Ouimet. Learning eigenfunctions links spectral embedding and kernel PCA. In NIPS, pages 2197–2219, 2004.
[4] T. Berg and D. Forsyth. Animals on the web. In CVPR, pages 1463–1470, 2006.
[5] O. Chapelle, B. Schölkopf, and A. Zien. Semi-Supervised Learning. MIT Press, 2006.
[6] R. R. Coifman, S. Lafon, A. Lee, M. Maggioni, B. Nadler, F. Warner, and S. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data, part I: Diffusion maps. PNAS, 21(102):7426–7431, 2005.
[7] R. Datta, D. Joshi, J. Li, and J. Z. Wang. Image retrieval: Ideas, influences, and trends of the new age. ACM Computing Surveys, 2008.
[8] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from Google's image search. In ICCV, volume 2, pages 1816–1823, Oct. 2005.
[9] J. Garcke and M. Griebel. Semi-supervised learning with sparse grids. In ICML Workshop on Learning with Partially Classified Training Data, 2005.
[10] A. Kapoor, K. Grauman, R. Urtasun, and T. Darrell. Active learning with Gaussian processes for object categorization. In CVPR, 2007.
[11] A. Krizhevsky and G. E. Hinton. Learning multiple layers of features from tiny images. Technical report, Computer Science Department, University of Toronto, 2009.
[12] S. Kumar, M. Mohri, and A. Talwalkar. Sampling techniques for the Nystrom method. In AISTATS, 2009.
[13] L.-J. Li, G. Wang, and L. Fei-Fei. OPTIMOL: automatic object picture collection via incremental model learning. In CVPR, 2007.
[14] B. Nadler, S. Lafon, R. R. Coifman, and I. G. Kevrekidis. Diffusion maps, spectral clustering and reaction coordinates of dynamical systems. Applied and Computational Harmonic Analysis, 21:113–127, 2006.
[15] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. IJCV, 42:145–175, 2001.
[16] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: a database and web-based tool for image annotation. IJCV, 77(1):157–173, 2008.
[17] B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[18] A. Talwalkar, S. Kumar, and H. Rowley. Large-scale manifold learning. In CVPR, 2008.
[19] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: a large database for non-parametric object and scene recognition. IEEE PAMI, 30(11):1958–1970, November 2008.
[20] I. Tsang and J. Kwok. Large-scale sparsified manifold regularization. In NIPS, 2006.
[21] L. von Ahn. The ESP game, 2006.
[22] S. Vijayanarasimhan and K. Grauman. Keywords to visual categories: Multiple-instance learning for weakly supervised object categorization. In CVPR, 2008.
[23] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, 2008.
[24] B. Yao, X. Yang, and S. C. Zhu. Introduction to a large scale general purpose ground truth dataset: methodology, annotation tool, and benchmarks. In EMMCVPR, 2007.
[25] K. Yu, S. Yu, and V. Tresp. Blockwise supervised inference on large graphs. In ICML Workshop on Learning with Partially Classified Training Data, 2005.
[26] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, 2004.
[27] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, University of Wisconsin–Madison, 2008.
[28] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, pages 912–919, 2003.
[29] X. Zhu and J. Lafferty. Harmonic mixtures: combining mixture models and graph-based methods for inductive and scalable semi-supervised learning. In ICML, 2005.
| 3633 |
2,906 | 3,634 | Strategy Grafting in Extensive Games
Kevin Waugh
[email protected]
Department of Computer Science
Carnegie Mellon University
Nolan Bard, Michael Bowling
{nolan,bowling}@cs.ualberta.ca
Department of Computing Science
University of Alberta
Abstract
Extensive games are often used to model the interactions of multiple agents within
an environment. Much recent work has focused on increasing the size of an extensive game that can be feasibly solved. Despite these improvements, many interesting games are still too large for such techniques. A common approach for
computing strategies in these large games is to first employ an abstraction technique to reduce the original game to an abstract game that is of a manageable size.
This abstract game is then solved and the resulting strategy is played in the original
game. Most top programs in recent AAAI Computer Poker Competitions use this
approach. The trend in this competition has been that strategies found in larger abstract games tend to beat strategies found in smaller abstract games. These larger
abstract games have more expressive strategy spaces and therefore contain better
strategies. In this paper we present a new method for computing strategies in large
games. This method allows us to compute more expressive strategies without increasing the size of abstract games that we are required to solve. We demonstrate
the power of the approach experimentally in both small and large games, while
also providing a theoretical justification for the resulting improvement.
1
Introduction
Extensive games provide a general model for describing the interactions of multiple agents within an
environment. They subsume other sequential decision making models such as finite horizon MDPs,
finite horizon POMDPs, and multiagent scenarios such as stochastic games. This makes extensive
games a powerful tool for representing a variety of complex situations. Moreover, it means that techniques for computing strategies in extensive games are a valuable commodity that can be applied
in many different domains. The usefulness of the extensive game model is dependent on the availability of solution techniques that scale well with respect to the size of the model. Recent research,
particularly motivated by the domain of poker, has made significant developments in scalable solution techniques. The classic linear programming techniques [5] can solve games with approximately
10⁷ states [1], while more recent techniques [2, 9] can solve games with over 10¹² states.
Despite the improvements in solution techniques for extensive games, even the motivating domain of
two-player limit Texas Hold'em is far too large to solve, as the game has approximately 10¹⁸ states.
The typical solution to this challenge is abstraction [1]. Abstraction involves constructing a new
game that is tractably sized for current solution techniques, but restricts the information or actions
available to the players. The hope is that the abstract game preserves the important strategic structure
of the game, and so playing a near equilibrium solution of the abstract game will still perform well in
the original game. In poker, employed abstractions include limiting the possible betting sequences,
replacing all betting in the first round with a fixed policy [1], and, most commonly, by grouping the
cards dealt to each player into buckets based on a strength metric [4, 9].
With these improvements in solution techniques, larger abstract games have become tractable, and
therefore increasingly fine abstractions have been employed. Because a finer abstraction can rep1
resent players? information more accurately and provide a more expressive space of strategies, it is
generally assumed that a solution to a finer abstraction will produce stronger strategies for the original game than those computed using a coarser abstraction. Although this assumption is in general
not true [7], results from the AAAI Computer Poker Competition [10] have shown that it does often
hold: near equilibrium strategies with the largest expressive power tend to win the competition.
In this paper, we increase the expressive power of computable strategies without increasing the
size of game that can be feasibly solved. We do this by partitioning the game into tractably sized
sub-games called grafts, solving each independently, and then combining the solutions into a single
strategy. Unlike previous, subsequently abandoned, attempts to solve independent sub-games [1, 3],
the grafting approach uses a base strategy to ensure that the grafts will mesh well as a unit. In fact,
we prove that grafted strategies improve on near equilibrium base strategies. We also empirically
demonstrate this improvement both in a small poker game as well as limit Texas Hold'em.
2
Background
Informally, an extensive game is a game tree where a player cannot distinguish between two histories
that share the same information set. This means a past action, from either chance or another player,
is not completely observed, allowing one to model situations of imperfect information.
Definition 1 (Extensive Game) [6, p. 200] A finite extensive game with imperfect information is denoted Γ and has the following components:
• A finite set N of players.
• A finite set H of sequences, the possible histories of actions, such that the empty sequence is in H and every prefix of a sequence in H is also in H. Z ⊆ H are the terminal histories. No sequence in Z is a strict prefix of any sequence in H. A(h) = {a : (h, a) ∈ H} are the actions available after a non-terminal history h ∈ H \ Z.
• A player function P that assigns to each non-terminal history a member of N ∪ {c}, where c represents chance. P(h) is the player who takes an action after the history h. Let Hi be the set of histories where player i chooses the next action.
• A function fc that associates with every history h ∈ Hc a probability distribution fc(·|h) on A(h). fc(a|h) is the probability that a occurs given h.
• For each player i ∈ N, a utility function ui that assigns each terminal history a real value. ui(z) is rewarded to player i for reaching terminal history z. If N = {1, 2} and for all z ∈ Z, u1(z) = −u2(z), an extensive game is said to be zero-sum.
• For each player i ∈ N, a partition Ii of Hi with the property that A(h) = A(h′) whenever h and h′ are in the same member of the partition. Ii is the information partition of player i; a set I ∈ Ii is an information set of player i.
In this paper, we exclusively focus on two-player zero-sum games with perfect recall, which is a
restriction on the information partitions that excludes unrealistic situations where a player is forced
to forget her own past information or decisions.
To play an extensive game each player specifies a strategy. A strategy determines how a player
makes her decisions when confronted with a choice.
Definition 2 (Strategy) A strategy for player i, σi, is a function that assigns a probability distribution over A(h) to each h ∈ Hi. This function is constrained so that σi(h) = σi(h′) whenever h and h′ are in the same information set. A strategy is pure if no randomization is required. We denote Σi as the set of all strategies for player i.
Definition 3 (Strategy Profile) A strategy profile in extensive game Γ is a set of strategies, σ = {σ1, . . . , σn}, that contains one strategy for each player. We let σ−i denote the set of strategies for all players except player i. We call the set of all strategy profiles Σ.
When all players play according to a strategy profile, σ, we can define the expected utility of each player as ui(σ). Similarly, ui(σi, σ−i) is the expected utility of player i when all other players play according to σ−i and player i plays according to σi.
The traditional solution concept for extensive games is the Nash equilibrium concept.
Definition 4 (Nash Equilibrium) A Nash equilibrium is a strategy profile σ where

    ∀i ∈ N, ∀σi′ ∈ Σi:  ui(σ) ≥ ui(σi′, σ−i).    (1)

An approximation of a Nash equilibrium or ε-Nash equilibrium is a strategy profile σ where

    ∀i ∈ N, ∀σi′ ∈ Σi:  ui(σ) + ε ≥ ui(σi′, σ−i).    (2)
A Nash (ε-Nash) equilibrium is a strategy profile where no player can gain (more than ε) through unilateral deviation. A Nash equilibrium exists in all extensive games. For zero-sum extensive games with perfect recall we can efficiently compute an ε-Nash equilibrium using techniques such as linear programming [5], counterfactual regret minimization [9] and the excessive gap technique [2].
In a zero-sum game we say it is optimal to play any strategy belonging to an equilibrium because
this guarantees the equilibrium player the highest expected utility in the worst case. Any deviation
from equilibrium by either player can be exploited by a knowledgeable opponent. In this sense we
can call computing an equilibrium in a zero-sum game solving the game.
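For intuition about what "solving the game" means computationally, the sketch below (assuming numpy and scipy are available) solves the maximin linear program for a small zero-sum matrix game. Extensive games require the larger sequence-form LP of [5], so this is only a toy illustration, not the method used for poker.

import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(A):
    # Maximin strategy for the row player of payoff matrix A (row maximizes):
    # maximize v subject to A^T x >= v * 1, sum(x) = 1, x >= 0.
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # linprog minimizes, so use -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v - (A^T x)_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[m]                    # (strategy, game value)

rps = np.array([[0., -1., 1.], [1., 0., -1.], [-1., 1., 0.]])
print(solve_zero_sum(rps))                        # uniform strategy, value 0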
Many games of interest are far too large to solve directly and abstraction is often employed to reduce
the game to one of a more manageable size. The abstract game is solved and the resulting strategy
is presumed to be strong in the original game. Abstraction can be achieved by merging information
sets together, restricting the actions a player can take from a given history, or a combination of both.
Definition 5 (Abstraction) [7] An abstraction for player i is a pair αi = ⟨αiI, αiA⟩, where
• αiI is a partition of Hi, defining a set of abstract information sets coarser¹ than Ii, and
• αiA is a function on histories where αiA(h) ⊆ A(h) and αiA(h) = αiA(h′) for all histories h and h′ in the same abstract information set. We will call this the abstract action set.
The null abstraction for player i is φi = ⟨Ii, A⟩. An abstraction α is a set of abstractions αi, one for each player. Finally, for any abstraction α, the abstract game, Γα, is the extensive game obtained from Γ by replacing Ii with αiI and A(h) with αiA(h) when P(h) = i, for all i.
Strategies for abstract games are defined in the same manner as for unabstracted games. However,
the strategy must assign the same distribution to all histories in the same block of the abstraction's
information partition, as well as assigning zero probability to actions not in the abstract action set.
3
Strategy Grafting
Though there is no guarantee that optimal strategies in abstract games are strong in the original
game [7], these strategies have empirically been shown to perform well against both other computers [9] and humans [1]. Currently, strong strategies are solved for in one single equilibrium
computation for a single abstract game. Advancement typically involves developing algorithmic improvements to equilibrium finding techniques in order to find solutions to yet larger abstract games.
It is simple to show that a strategy space must include at least as good, if not better, strategies than
a smaller space that it refines [7]. At first glance, this would seem to imply that a larger abstraction
would always be better, but upon closer inspection we see this depends on our method of selecting
a strategy from the space. In poker, when using arbitrary equilibrium strategies that are evaluated in
a tournament setting, this intuition empirically holds true.
One potentially important factor for the empirical evidence is the presence of dominated strategies
in the support of the abstract equilibrium strategies.
Definition 6 (Dominated Strategy) A dominated strategy for player i is a pure strategy, σi, such that there exists another strategy, σi′, where for all opponent strategies σ−i,

    ui(σi′, σ−i) ≥ ui(σi, σ−i)    (3)
and the inequality must hold strictly for at least one opponent strategy.
¹ Partition A is coarser than partition B if and only if every set in B is a subset of some set in A, or equivalently, x and y are in the same set in A if x and y are in the same set in B.
This implies that a player can never benefit by playing a dominated strategy. When abstracting one
can, in effect, merge a dominated strategy in with a non-dominated strategy. In the abstract game,
this combined strategy might become part of an equilibrium and hence the abstract strategy would
make occasional mistakes. That is, abstraction does not necessarily preserve strategy domination.
As a result of their expressive power, finer abstractions may better preserve domination and thus can
result in less play of dominated strategies.
Decomposition is a natural approach for using larger strategy spaces without incurring additional
computational costs and indeed it has been employed toward this end. In extensive games with
imperfect information, though, straightforward decomposition can be problematic. One way that
equilibrium strategies guard against exploitation is information hiding, i.e., the equilibrium plays in
a fashion that hinders an opponent's ability to effectively reconstruct the player's private information. Independent solutions to a set of sub-games, though, may not "mesh", or hide information, effectively as a whole. For example, an observant opponent might be able to determine which subgame is being played, which itself could be valuable information that could be exploited.
Armed with some intuition for why increasing the size of the strategy space may improve the quality
of the solution and why decomposition can be problematic, we will now begin describing the strategy
grafting algorithm and provide some theoretical results regarding the quality of grafted strategies.
First, we will explain how a game of imperfect information is formally divided into sub-games.
Definition 7 (Grafting Partition) G = {G0, G1, . . . , Gp} is a grafting partition for player i if
1. G is a partition of Hi,
2. ∀I ∈ Ii, ∃j ∈ {0, . . . , p} such that I ⊆ Gj, and
3. ∀j ∈ {1, . . . , p}, if h is a prefix of h′ ∈ Hi and h ∈ Gj then h′ ∈ Gj ∪ G0.
Using the elements of a grafting partition, we construct a set of sub-games. The solutions to these
sub-games are called grafts, and we can combine them naturally, since they are disjoint sets, into
one single grafted strategy.
Definition 8 (Grafted Strategy) Given a strategy σi ∈ Σi and a grafting partition G for player i. For j ∈ {1, . . . , p}, define Γσi,j to be an extensive game derived from the original game Γ where for all h ∈ Hi \ Gj, P(h) = c and fc(a|h) = σi(h, a). That is, player i only controls her actions for histories in Gj and is forced to play according to σi elsewhere. Let the graft of Gj, σ*,j, be an ε-Nash equilibrium of the game Γσi,j. Finally, define the grafted strategy for player i, σi*, as

    σi*(h, a) = σi(h, a) if h ∈ G0,   σi*(h, a) = σi*,j(h, a) if h ∈ Gj.

We will call σi the base strategy and G the grafting partition for the grafted strategy σi*.
There are a few key ideas to observe about grafted strategies that distinguish them from previous
sub-game decomposition methods. First, we start out with a base strategy for the player. This base
strategy can be constructed using current techniques for a tractably sized abstraction. It is important
that we use the same base strategy for all grafts, as it is the only information that is shared between
the grafts. Second, when we construct a graft, only the portion of the game that the graft plays is
allowed to vary for our player of interest. The actions over the remainder of the game are played
according to the base strategy. This allows us to refine the abstraction for that block of the grafting
partition, so that it itself is as large as the largest tractably solvable game. Third, note that when we
construct a graft, we continue to use an equilibrium finding technique, but we are not interested in
the pair of strategies; we are only interested in the strategy for the player of interest. This means
in games like poker, where we are interested in a strategy for both players, we must construct a
grafted strategy separately for each player. Finally, when we construct a graft, our opponent must
learn a strategy for the entire, potentially abstract, game. By letting our opponent?s strategy vary
completely, our graft will be a strategy that is less prone to exploitation, forcing each individual
graft to mesh well with the base strategy and in turn with each other graft when combined.
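To make the combination step of Definition 8 concrete, here is a minimal sketch of how a grafted strategy dispatches between the base strategy and the grafts; the dictionary-based data structures are our own hypothetical representation, not from the paper.

def make_grafted_strategy(base, grafts, block_of):
    """base and grafts[j] map an information-set key to a dict of action
    probabilities; block_of(infoset) returns the index j of the partition
    block G_j containing the information set, with 0 denoting G_0."""
    def sigma_star(infoset):
        j = block_of(infoset)
        # Follow the base strategy on G_0, and graft j's strategy on G_j.
        return base[infoset] if j == 0 else grafts[j][infoset]
    return sigma_star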
Strategy grafting allows us to construct a strategy with more expressive power that what can be
computed by solving a single game. We now show that strategy grafting uses this expressive power
to its advantage, causing an (approximate) improvement over its base strategy. Note that we cannot
guarantee a strict improvement as the base strategy may already be an optimal strategy.
Theorem 1 For strategies σ1, σ2 where σ2 is an ε-best response to σ1, if σ1* is the grafted strategy for player 1 where σ1 is used as the base strategy and G is the grafting partition then,

    u1(σ1*, σ2) − u1(σ1, σ2) = ∑_{j=1}^{p} [ u1(σ1*,j, σ2) − u1(σ1, σ2) ] ≥ −3pε.

In other words, the grafted strategy's improvement against σ2 is equal to the sum of the gains of the individual grafts against σ2, and this gain is no less than −3pε.
Proof. Define Zj as follows:

    Zj = {z ∈ Z | ∃h ∈ Gj with h a prefix of z}  for j ∈ {1, . . . , p},    (4)
    Z0 = Z \ (Z1 ∪ · · · ∪ Zp).    (5)

By condition (3) of Definition 7, the sets Z0, . . . , Zp are disjoint and therefore form a partition of Z. Then,

    ∑_{j=1}^{p} [ u1(σ1*,j, σ2) − u1(σ1, σ2) ]    (6)
  = ∑_{j=1}^{p} [ ∑_{z∈Z} u1(z) Pr(z | σ1*,j, σ2) − ∑_{z∈Z} u1(z) Pr(z | σ1, σ2) ]    (7)
  = ∑_{j=1}^{p} ∑_{k=0}^{p} ∑_{z∈Zk} u1(z) [ Pr(z | σ1*,j, σ2) − Pr(z | σ1, σ2) ]    (8)

Notice that for all z ∈ Zk with k ≠ j, Pr(z | σ1*,j, σ2) = Pr(z | σ1, σ2), so only when k = j is the summand non-zero.

  = ∑_{j=1}^{p} ∑_{z∈Zj} u1(z) [ Pr(z | σ1*,j, σ2) − Pr(z | σ1, σ2) ]    (9)
  = ∑_{j=1}^{p} ∑_{z∈Zj} u1(z) [ Pr(z | σ1*, σ2) − Pr(z | σ1, σ2) ]    (10)
  = ∑_{z∈Z} u1(z) [ Pr(z | σ1*, σ2) − Pr(z | σ1, σ2) ]    (11)
  = ∑_{z∈Z} u1(z) Pr(z | σ1*, σ2) − ∑_{z∈Z} u1(z) Pr(z | σ1, σ2)    (12)
  = u1(σ1*, σ2) − u1(σ1, σ2)    (13)

(Step (11) adds back the terms for z ∈ Z0; these vanish because along such histories player 1 acts only in G0, where σ1* and σ1 agree.) Furthermore, since σ1*,j and σ2*,j are strategies of the ε-Nash equilibrium σ*,j,

    u1(σ1*,j, σ2) + ε ≥ u1(σ1*,j, σ2*,j) ≥ u1(σ1, σ2*,j) − ε.    (14)

Moreover, because σ2 is an ε-best response to σ1,

    u1(σ1, σ2*,j) ≥ u1(σ1, σ2) − ε.    (15)

So, ∑_{j=1}^{p} u1(σ1*,j, σ2) − u1(σ1, σ2) ≥ −3pε. □
The main application of this theorem is in the following corollary, which follows immediately from the definition of an ε-Nash equilibrium.

Corollary 1 Let α be an abstraction where α2 = φ2 and σ be an ε-Nash equilibrium strategy for the game Γα; then any grafted strategy σ1* in Γ with σ1 used as the base strategy will be at most 3pε worse than σ1 against σ2.
Although these results suggest that a grafted strategy will (approximately) improve on its base strategy against an optimal opponent, there is one caveat: it assumes we know the opponent's abstraction
or can solve a game with the opponent unabstracted. Without this knowledge or ability, this guarantee does not hold. However, all previous work that employs the use of abstract equilibrium strategies also implicitly makes this assumption. Though we know that refining an abstraction also has
no guarantee on improving worst-case performance in the original game [7], the AAAI Computer
Poker Competition [10] has shown that in practice larger abstractions and more expressive strategies
consistently perform well in the original game, even though competition opponents are not using the
same abstractions. We might expect a similar result even when the theorem's assumptions are not satisfied. In the next section we examine empirically both situations where we know our opponent's
abstraction and situations where we do not.
4
Experimental Results
The AAAI Computer Poker Competitions use various types of large Texas Hold'em poker games. These games are quite large and the resulting abstract games can take weeks of computation to solve. We begin our experiments in a smaller poker game called Leduc Hold'em where we can examine several grafted strategies. This is followed by analysis of a grafted strategy for two-player limit Texas Hold'em that was submitted to the 2009 AAAI Poker Competition.
4.1
Leduc Hold'em
Leduc Hold'em is a two player poker game. The deck used in Leduc Hold'em contains six cards,
two jacks, two queens and two kings, and is shuffled prior to playing a hand. At the beginning of a
hand, each player pays a one chip ante to the pot and receives one private card. A round of betting
then takes place starting with player one. After the round of betting, a single public card is revealed
from the deck, which both players use to construct their hand. This card is called the flop. Another
round of betting occurs after the flop, again starting with player one, and then a showdown takes
place. At a showdown, if either player has paired their private card with the public card they win all
the chips in the pot. In the event neither player pairs, the player with the higher card is declared the
winner. The players split the money in the pot if they have the same private card.
Each betting round follows the same format. The first player to act has the option to check or bet.
When betting the player adds chips into the pot and action moves to the other player. When a player
faces a bet, they have the option to fold, call or raise. When folding, a player forfeits the hand and
all the money in the pot is awarded to the opposing player. When calling, a player places enough
chips into the pot to match the bet faced and the betting round is concluded. When raising, the player
must put more chips into the pot than the current bet faced and action moves to the opposing player.
If the first player checks initially, the second player may check to conclude the betting round or bet.
In Leduc Hold'em there is a limit of one bet and one raise per round. The bets and raises are of a
fixed size. This size is two chips in the first betting round and four chips in the second.
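To make the showdown rule concrete, here is a minimal sketch of the winner determination just described; the rank encoding (J=0, Q=1, K=2) is our own convention, not from the paper.

def showdown_winner(private1, private2, flop):
    """Return +1 if player 1 wins the pot, -1 if player 2 wins, 0 on a split."""
    if private1 == flop and private2 != flop:
        return +1                     # player 1 paired the public card
    if private2 == flop and private1 != flop:
        return -1                     # player 2 paired the public card
    if private1 == private2:
        return 0                      # equal ranks split the pot
    return +1 if private1 > private2 else -1

assert showdown_winner(0, 2, 0) == +1  # a paired jack beats an unpaired king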
Tournament Setup. Despite using a smaller poker game, we aim to create a tournament setting
similar to the AAAI Poker Competition. To accomplish this we will create a variety of equilibrium-like players using abstractions of varying size. Each of these strategies will then be used as a base strategy to create two grafted strategies. All strategies are then played against each other in a round-robin tournament. A strategy is said to beat another strategy if its expected winnings against the other
is positive. Unlike the AAAI Poker Competition, in our smaller game we can feasibly compute the
expected value of one strategy against another and thus we are not required to sample.
The abstractions used are J.Q.K, JQ.K, and J.QK. Prior to the flop, the first abstraction can distinguish all three cards, the second abstraction cannot distinguish a jack from a queen and the third
cannot distinguish a queen from a king. Postflop, all three abstractions are only aware of if they
have paired their private card. These three abstractions were hand chosen as they are representative
of how current abstraction techniques will group hands together. The first abstraction is the biggest,
and hence we would expect it to do the best. The second and third abstractions are the same size.
We chose to train two types of grafted strategies: preflop grafts and flop grafts. Both types consist
of three individual grafts for each player: one to play each card with complete information. That is,
                           (1)     (2)     (3)     (4)     (5)     (6)     (7)     (8)     (9)    Avg.
(1) J.Q.K preflop grafts    -      2.3    28.0    17.5    12.2    26.6    36.7    22.3    54.7    25.0
(2) J.Q.K flop grafts     -2.3      -     28.6    18.6    16.9    23.9    39.7    24.7    49.6    25.0
(3) JQ.K flop grafts     -28.0   -28.6      -    -47.2    67.0    -0.9    28.5    79.9    89.2    20.0
(4) JQ.K preflop grafts  -17.5   -18.6    47.2      -    -11.2     9.0    67.3     3.7    62.8    17.9
(5) J.QK preflop grafts  -12.2   -16.9   -67.0    11.2      -      8.1   -20.0    30.9   110.0     5.5
(6) J.Q.K                -26.6   -23.9     0.9    -9.0    -8.1      -     13.6     7.5    32.5    -1.6
(7) JQ.K                 -36.7   -39.7   -28.5   -67.3    20.0   -13.6      -     42.2    70.6    -6.6
(8) J.QK flop grafts     -22.3   -24.7   -79.9    -3.7   -30.9    -7.5   -42.2      -     83.3   -16.0
(9) J.QK                 -54.7   -49.6   -89.2   -62.8  -110.0   -32.5   -70.6   -83.3      -    -69.1

Table 1: Expected winnings of the row player against the column player in millibets per hand (mb/h)
Strategy                 Wins   Losses   Exploitability
J.Q.K preflop grafts       8       0          298.3
J.Q.K flop grafts          7       1          321.1
JQ.K preflop grafts        5       3          465.9
JQ.K flop grafts           4       4          509.0
J.QK preflop grafts        4       4          507.3
J.Q.K                      4       4          315.1
JQ.K                       3       5          246.8
J.QK flop grafts           1       7          503.5
J.QK                       0       8          371.1

Table 2: Each strategy's number of wins, losses, and exploitability in unabstracted Leduc Hold'em in millibets per hand (mb/h)
each graft does not abstract the sub-game for the observed card. These two types differ in that the
preflop grafts play for the entire game whereas the flop grafts only play the game after the flop. For
preflop grafts, this means G0 is empty, i.e., the final grafted strategy is always using the probabilities
from some graft and never the base strategy. For flop grafts, the grafted strategy follows the base
strategy in all preflop information sets. We use ε-Nash equilibria in the three abstract games as our base strategies. Each base strategy and graft is trained using counterfactual regret minimization for one billion iterations. The equilibria found are ε-Nash equilibria where no player can benefit more than ε = 10⁻⁵ chips by deviating within the abstract game. We measure the expected winnings in
millibets per hand or mb/h. A millibet is one thousandth of a small bet, or 0.002 chips.
Results. We can see in Table 1 that the grafted strategies perform well in a field of equilibrium-like strategies. The base strategy seems to be of great importance when training a grafted strategy.
Though JQ.K and J.QK are the same size, the JQ.K strategy performs better in this tournament
setting. Similarly, the grafted strategies appear to maintain the ordering of their base strategies
either when considering the expected winnings in Table 1 or the number of wins in Table 2 (though
JQ.K flop grafts switches places with JQ.K preflop grafts in the ordering). Although the choice of
base strategy is important, the grafted strategies do well under both evaluation criteria and even the
worst base strategy sees great relative improvement when used to train grafted strategies.
There are also a few other interesting trends in these results. First, our intuition that larger strategies
perform better seems to hold in all cases except for J.QK flop grafts. Larger abstractions also perform
better for the non-grafted strategies as J.Q.K is the biggest equilibrium strategy and it performs the
best out of this group. Second, it appears that the preflop grafts are usually better than the flop grafts.
This can be explained by the fact that the preflop grafts have more information about the original
game. Finally, observe that the grafted strategies can have worse exploitability in the original game
than their corresponding base strategy. Although this can make grafted strategies more vulnerable
to exploitive strategies, they appear to perform well against a field of equilibrium-like opponents.
In fact, in our experiment, grafted strategies appear to only improve upon the base strategy despite
not always knowing the opponent's abstraction. This suggests that exploitability is not the only
important measure of strategy quality. Contrast the grafted strategies with the strategy that always
folds, which is exploitable at 500 mb/h. Although always folding is less exploitable than some of
the grafted strategies, it cannot win against any opponent and would place last in this tournament.
                  Relative Size    (1)     (2)     (3)     (4)     (5)     (6)    Avg.
(1) 20x8 Grafted       1.0          -      2.1    14.5    18.1    13.7    18.7    13.4
(2) 20x32              2.53       -2.1      -      4.9     9.4    11.8    15.5     7.9
(3) 20x8 (Base)        1.0       -14.5    -4.9      -      6.2     7.2    10.7     0.9
(4) 20x7               0.43      -18.1    -9.4    -6.2      -      1.7     5.0    -5.4
(5) 14                 0.82      -13.7   -11.8    -7.2    -1.7      -      5.3    -5.8
(6) 12                 0.45      -18.7   -15.5   -10.7    -5.0    -5.3      -    -11.0

Table 3: Sampled expected winnings in Texas Hold'em of the row player against the column player in millibets per hand (mb/h). 95% confidence intervals are between 0.8 and 1.6. Relative size is the ratio of the size of the abstract game(s) solved for the row strategy and the base strategy.
4.2
Texas Hold'em
Two-player limit Texas Hold'em bears many similarities to Leduc Hold'em but is much larger in
scale with respect to the parameters: cards in the deck, private cards, public cards, betting rounds and
bets per round. Due to the computational cost² needed to solve a strong equilibrium, our experiments
consist of a single grafted strategy. Table 3 shows the results of running this large grafted strategy
against equilibrium-like strategies using a variety of abstractions.
The 20x32 strategy is the largest single imperfect recall abstract game solved to date. It is approximately 2.53 times larger than the base strategy used with grafting, 20x8. The 20x7 (imperfect recall)
and 12 (perfect recall) strategies were the entrants put forward by the Computer Poker Research
Group for the 2008 and 2007 AAAI Computer Poker Competitions, respectively. The 14 strategy
was considered for the 2008 competition, but it was ultimately superseded by the smaller 20x7. For
a detailed description of these abstractions and the rules of Texas Hold'em see A Practical Use of
Imperfect Recall [8].
As evident in the results, the grafted strategy beats all of the players with statistical significance, even
the largest single strategy. In addition to these results against other Computer Poker Research Group
strategies, the grafted strategy also performed well at the 2009 AAAI Computer Poker Competition.
There, against a field of thirteen strong strategies, it placed second and fourth (narrowly behind the
third place entrant) in the limit run-off and limit bankroll competitions, respectively.
These results demonstrate that strategy grafting is competitive and allows one to augment their
existing strategies. Any improvement to the quality of a base strategy should in turn improve the
quality of the grafted strategy in similar tournament settings. This means that strategy grafting can
be used transparently on top of more sophisticated strategy-computing methods.
5
Conclusion
We have introduced a new method, called strategy grafting, for independently solving and combining sub-games in large extensive games. This method allows us to create larger strategies than
previously possible by solving many sub-games. These new strategies seem to maintain the features
of good equilibrium-like strategies. By creating larger strategies we hope to play fewer dominated
strategies and, in turn, make fewer mistakes. Against a static equilibrium-like opponent, making
fewer mistakes should lead to an improvement in the quality of play. Our empirical results confirm
this intuition and demonstrate that this new method can improve the performance of the state-of-the-art in both a simulated competition and the actual AAAI Computer Poker Competition. It is likely
that much of the strength of these new strategies will be bounded by the quality of the base strategy
used. In this regard, we are still limited by the capabilities of current methods.
Acknowledgments
The authors would like to thank the members of the Computer Poker Research Group at the University of Alberta for helpful conversations pertaining to this research. This research was supported by
NSERC, iCORE, and Alberta Ingenuity.
² This particular grafted strategy was computed on a large cluster using 640 processors over almost 6 days.
References
[1] Darse Billings, Neil Burch, Aaron Davidson, Robert Holte, Jonathan Schaeffer, Terance Schauenberg, and Duane Szafron. Approximating Game-Theoretic Optimal Strategies for Full-scale Poker. In International Joint Conference on Artificial Intelligence, pages 661-668, 2003.
[2] Andrew Gilpin, Samid Hoda, Javier Peña, and Tuomas Sandholm. Gradient-based Algorithms for Finding Nash Equilibria in Extensive Form Games. In Proceedings of the Eighteenth International Conference on Game Theory, 2007.
[3] Andrew Gilpin and Tuomas Sandholm. A Competitive Texas Hold'em Poker Player via Automated Abstraction and Real-time Equilibrium Computation. In Proceedings of the Twenty-First Conference on Artificial Intelligence, 2006.
[4] Andrew Gilpin and Tuomas Sandholm. Expectation-Based Versus Potential-Aware Automated Abstraction in Imperfect Information Games: An Experimental Comparison Using Poker. In Proceedings of the Twenty-Third Conference on Artificial Intelligence, 2008.
[5] Daphne Koller and Avi Pfeffer. Representations and Solutions for Game-Theoretic Problems. Artificial Intelligence, 94:167-215, 1997.
[6] Martin Osborne and Ariel Rubinstein. A Course in Game Theory. The MIT Press, Cambridge, Massachusetts, 1994.
[7] Kevin Waugh, David Schnizlein, Michael Bowling, and Duane Szafron. Abstraction Pathologies in Extensive Games. In Proceedings of the Eighth International Joint Conference on Autonomous Agents and Multi-Agent Systems, pages 781-788, 2009.
[8] Kevin Waugh, Martin Zinkevich, Michael Johanson, Morgan Kan, David Schnizlein, and Michael Bowling. A Practical Use of Imperfect Recall. In Proceedings of the Eighth Symposium on Abstraction, Reformulation and Approximation, 2009.
[9] Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret Minimization in Games with Incomplete Information. In Advances in Neural Information Processing Systems Twenty, pages 1729-1736, 2008. A longer version is available as a University of Alberta Technical Report, TR07-14.
[10] Martin Zinkevich and Michael Littman. The AAAI Computer Poker Competition. Journal of the International Computer Games Association, 29, 2006. News item.
Asymptotic Analysis of MAP Estimation via the
Replica Method and Compressed Sensing*
Sundeep Rangan
Qualcomm Technologies
Bedminster, NJ
[email protected]
Alyson K. Fletcher
University of California, Berkeley
Berkeley, CA
[email protected]
Vivek K Goyal
Mass. Inst. of Tech.
Cambridge, MA
[email protected]
Abstract
The replica method is a non-rigorous but widely-accepted technique from statistical physics used in the asymptotic analysis of large, random, nonlinear problems. This paper applies the replica method to non-Gaussian maximum a posteriori (MAP) estimation. It is shown that with random linear measurements and
Gaussian noise, the asymptotic behavior of the MAP estimate of an n-dimensional
vector "decouples" as n scalar MAP estimators. The result is a counterpart to Guo and Verdú's replica analysis of minimum mean-squared error estimation.
The replica MAP analysis can be readily applied to many estimators used in
compressed sensing, including basis pursuit, lasso, linear estimation with thresholding, and zero norm-regularized estimation. In the case of lasso estimation
the scalar estimator reduces to a soft-thresholding operator, and for zero normregularized estimation it reduces to a hard-threshold. Among other benefits, the
replica method provides a computationally-tractable method for exactly computing various performance metrics including mean-squared error and sparsity pattern recovery probability.
1 Introduction
Estimating a vector x ∈ Rⁿ from measurements of the form

    y = Φx + w,    (1)

where Φ ∈ R^(m×n) represents a known measurement matrix and w ∈ R^m represents measurement errors or noise, is a generic problem that arises in a range of circumstances. One of the most basic estimators for x is the maximum a posteriori (MAP) estimate

    x̂map(y) = arg max_{x∈Rⁿ} px|y(x | y),    (2)

which is defined assuming some prior on x. For most priors, the MAP estimate is nonlinear and its behavior is not easily characterizable. Even if the priors for x and w are separable, the analysis of the MAP estimate may be difficult since the matrix Φ couples the n unknown components of x with the m measurements in the vector y.

The primary contribution of this paper, an abridged version of [1], is to show that with certain large random Φ and Gaussian w, there is an asymptotic decoupling of (1) into n scalar MAP estimation problems. Each equivalent scalar problem has an appropriate scalar prior and Gaussian noise with an effective noise level. The analysis yields the asymptotic joint distribution of each component xj of x and its corresponding estimate x̂j in the MAP estimate vector x̂map(y). From the joint distribution, various further computations can be made, such as the mean-squared error (MSE) of the MAP estimate or the error probability of a hypothesis test computed from the MAP estimate.
* This work was supported in part by a University of California President's Postdoctoral Fellowship and National Science Foundation CAREER Award 0643836.
Replica Method. Our analysis is based on a powerful but non-rigorous technique from statistical
physics known as the replica method. The replica method was originally developed by Edwards and
Anderson [2] to study the statistical mechanics of spin glasses. Although not fully rigorous from the
perspective of probability theory, the technique was able to provide explicit solutions for a range of
complex problems where many other methods had previously failed [3].
The replica method was first applied to the study of nonlinear MAP estimation problems by
Tanaka [4] and Müller [5]. These papers studied the behavior of the MAP estimator of a vector x with i.i.d. binary components observed through linear measurements of the form (1) with a large random Φ and Gaussian w. The results were then generalized in a remarkable paper by Guo and Verdú [6] to vectors x with arbitrary distributions. Guo and Verdú's result was also able to incorporate a large class of minimum postulated MSE estimators, where the estimator may assume a prior that is different from the actual prior. The main result in this paper is the corresponding MAP statement to Guo and Verdú's result. In fact, our result is derived from Guo and Verdú's by taking
appropriate limits with large deviations arguments.
The non-rigorous aspect of the replica method involves a set of assumptions that include a self-averaging property, the validity of a "replica trick," and the ability to exchange certain limits. Some
progress has been made in formally proving these assumptions; a survey of this work can be found
in [7]. Also, some of the predictions of the replica method have been validated rigorously by other
means [8]. To emphasize our dependence on these unproven assumptions, we will refer to Guo and
Verdú's result as the Replica MMSE Claim. Our main result, which depends on Guo and Verdú's
analysis, will be called the Replica MAP Claim.
Applications to Compressed Sensing. As an application of our main result, we will develop a few
analyses of estimation problems that arise in compressed sensing [9?11]. In compressed sensing,
one estimates a sparse vector x from random linear measurements. Generically, optimal estimation
of x with a sparse prior is NP-hard [12]. Thus, most attention has focused on greedy heuristics such
as matching pursuit and convex relaxations such as basis pursuit [13] or lasso [14]. While successful
in practice, these algorithms are difficult to analyze precisely.
Recent compressed sensing research has provided scaling laws on numbers of measurements that
guarantee good performance of these methods [15?17]. However, these scaling laws are in general
conservative. There are, of course, notable exceptions including [18] and [19] which provide matching necessary and sufficient conditions for recovery of strictly sparse vectors with basis pursuit and
lasso. However, even these results only consider exact recovery and are limited to measurements
that are noise-free or measurements with a signal-to-noise ratio (SNR) that scales to infinity.
Many common sparse estimators can be seen as MAP estimators with certain postulated priors.
Most importantly, lasso and basis pursuit are MAP estimators assuming a Laplacian prior. Other
commonly-used sparse estimation algorithms, including linear estimation with and without thresholding and zero norm-regularized estimators, can also be seen as MAP-based estimators. For these
algorithms, the replica method provides, under the assumption of the replica hypotheses, not just bounds, but the exact asymptotic behavior. This in turn permits exact expressions for various performance metrics such as MSE or fraction of support recovery. The expressions apply for arbitrary
ratios k/n, n/m and SNR.
2 Estimation Problem and Assumptions
Consider the estimation of a random vector x ∈ Rⁿ from linear measurements of the form

    y = Φx + w = AS^(1/2)x + w,    (3)

where y ∈ R^m is a vector of observations, Φ = AS^(1/2), A ∈ R^(m×n) is a measurement matrix, S is a diagonal matrix of positive scale factors,

    S = diag(s1, . . . , sn),  sj > 0,    (4)

and w ∈ R^m is zero-mean, white Gaussian noise. We consider a sequence of such problems indexed by n, with n → ∞. For each n, the problem is to determine an estimate x̂ of x from the observations y knowing the measurement matrix A and scale factor matrix S.
The components xj of x are modeled as zero mean and i.i.d. with some prior probability distribution p0(xj). The per-component variance of the Gaussian noise is E|wj|² = σ0². We use the subscript "0" on the prior and noise level to differentiate these quantities from certain "postulated" values to be defined later.
In (3), we have factored Φ = AS^(1/2) so that even with the i.i.d. assumption on the xj above and an
i.i.d. assumption on entries of A, the model can capture variations in powers of the components of
x that are known a priori at the estimator. Variations in the power of x that are not known to the
estimator should be captured in the distribution of x.
We summarize the situation and make additional assumptions to specify the problem precisely as
follows:
(a) The number of measurements m = m(n) is a deterministic quantity that varies with n and satisfies

    lim_{n→∞} n/m(n) = β

for some β ≥ 0. (The dependence of m on n is usually omitted for brevity.)
(b) The components xj of x are i.i.d. with probability distribution p0 (xj ).
(c) The noise w is Gaussian with w ∼ N(0, σ0² Im).
(d) The components of the matrix A are i.i.d. zero mean with variance 1/m.
(e) The scale factors sj are i.i.d. and satisfy sj > 0 almost surely.
(f) The scale factor matrix S, measurement matrix A, vector x and noise w are independent.
3 Review of the Replica MMSE Claim
We begin by reviewing the Replica MMSE Claim of Guo and Verdú [6]. Suppose one is given a "postulated" prior distribution ppost and a postulated noise level σpost² that may be different from the true values p0 and σ0². We define the minimum postulated MSE (MPMSE) estimate of x as

    x̂mpmse(y) = E( x | y ; ppost, σpost² ) = ∫ x px|y(x | y ; ppost, σpost²) dx,

where px|y(x | y ; q, σ²) is the conditional distribution of x given y under the x distribution and noise variance specified as parameters after the semicolon:

    px|y(x | y ; q, σ²) = C⁻¹ exp( −‖y − AS^(1/2)x‖² / (2σ²) ) q(x),   q(x) = ∏_{j=1}^{n} q(xj),    (5)

where C is a normalization constant.
The Replica MMSE Claim describes the asymptotic behavior of the postulated MMSE estimator via an equivalent scalar estimator. Let q(x) be a probability distribution defined on some set X ⊆ R. Given μ > 0, let px|z(x | z ; q, μ) be the conditional distribution

    px|z(x | z ; q, μ) = [ ∫_{x∈X} φ(z − x ; μ) q(x) dx ]⁻¹ φ(z − x ; μ) q(x)    (6)

where φ(·) is the Gaussian distribution

    φ(v ; μ) = (1/√(2πμ)) e^(−|v|²/(2μ)).    (7)

The distribution px|z(x | z ; q, μ) is the conditional distribution of the scalar random variable x ∼ q(x) from an observation of the form

    z = x + √μ v,    (8)

where v ∼ N(0, 1). Using this distribution, we can define the scalar conditional MMSE estimate,

    x̂scalar^mmse(z ; q, μ) = ∫_{x∈X} x px|z(x | z ; q, μ) dx.    (9)
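As an illustration, the scalar conditional MMSE estimate (9) can be computed by straightforward numerical integration over a grid; the discrete three-point prior below is a hypothetical example, not one used in the paper.

import numpy as np

def mmse_scalar(z, q, mu, xs):
    # E[x | z] for the scalar channel z = x + sqrt(mu) v of (8):
    # posterior (6) is prior times Gaussian likelihood (7), normalized.
    lik = np.exp(-(z - xs) ** 2 / (2.0 * mu))   # phi(z - x; mu) up to a constant
    post = q * lik
    post = post / post.sum()
    return float(np.dot(xs, post))

xs = np.array([-1.0, 0.0, 1.0])                 # support of a three-point prior
q = np.array([0.1, 0.8, 0.1])                   # sparse-like prior weights
print(mmse_scalar(0.9, q, mu=0.25, xs=xs))      # estimate shrinks toward 0 and 1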
Also, given two distributions, p0(x) and p1(x), and two noise levels, μ0 > 0 and μ1 > 0, define

    mse(p1, p0, μ1, μ0, z) = ∫_{x∈X} |x − x̂scalar^mmse(z ; p1, μ1)|² px|z(x | z ; p0, μ0) dx,    (10)

which is the mean-squared error in estimating the scalar x from the variable z in (8) when x has a true distribution x ∼ p0(x) and the noise level is μ = μ0, but the estimator assumes a distribution x ∼ p1(x) and noise level μ = μ1.
Replica MMSE Claim [6]. Consider the estimation problem in Section 2. Let x̂mpmse(y) be the MPMSE estimator based on a postulated prior ppost and postulated noise level σpost². For each n, let j = j(n) be some deterministic component index with j(n) ∈ {1, . . . , n}. Then there exist effective noise levels σeff² and σp-eff² such that:

(a) As n → ∞, the random vectors (xj, sj, x̂j^mpmse) converge in distribution to the random vector (x, s, x̂) where x, s, and v are independent with x ∼ p0(x), s ∼ pS(s), v ∼ N(0, 1), and

    x̂ = x̂scalar^mmse(z ; ppost, μp),   z = x + √μ v,    (11)

where μ = σeff²/s and μp = σp-eff²/s.

(b) The effective noise levels satisfy the equations

    σeff² = σ0² + β E[ s · mse(ppost, p0, μp, μ, z) ]    (12a)
    σp-eff² = σpost² + β E[ s · mse(ppost, ppost, μp, μp, z) ],    (12b)

where the expectations are taken over s ∼ pS(s) and z generated by (11).
The Replica MMSE Claim asserts that the asymptotic behavior of the joint estimation of the n-dimensional vector x can be described by n equivalent scalar estimators. In the scalar estimation problem, a component x ∼ p0(x) is corrupted by additive Gaussian noise yielding a noisy measurement z. The additive noise variance is μ = σeff²/s, which is the effective noise divided by the scale factor s. The estimate of that component is then described by the (generally nonlinear) scalar estimator x̂(z ; ppost, μp).

The effective noise levels σeff² and σp-eff² are described by the solutions to the fixed-point equations (12). Note that σeff² and σp-eff² appear implicitly on the left- and right-hand sides of these equations via the terms μ and μp. When there are multiple solutions to these equations, the true solution is the minimizer of a certain Gibbs' function [6].
4 Replica MAP Claim
We now turn to MAP estimation. Let X ⊆ R be some (measurable) set and consider an estimator of the form

    x̂map(y) = arg min_{x∈Xⁿ} (1/(2γ)) ‖y − AS^(1/2)x‖₂² + ∑_{j=1}^{n} f(xj),    (13)

where γ > 0 is an algorithm parameter and f : X → R is some scalar-valued, non-negative cost function. We will assume that the objective function in (13) has a unique essential minimizer for almost all y.

The estimator (13) can be interpreted as a MAP estimator. Specifically, for any u > 0, it can be verified that x̂map(y) is the MAP estimate

    x̂map(y) = arg max_{x∈Xⁿ} px|y(x | y ; pu, σu²),

where pu(x) and σu² are the prior and noise level

    pu(x) = [ ∫_{x∈X} exp(−u f(x)) dx ]⁻¹ exp(−u f(x)),   σu² = γ/u,    (14)

where f(x) = ∑_j f(xj). To analyze this MAP estimator, we consider a sequence of MMSE estimators

    x̂u(y) = E( x | y ; pu, σu² ).    (15)

The proof of the Replica MAP Claim below (see [1]) uses a standard large deviations argument to show that

    lim_{u→∞} x̂u(y) = x̂map(y)

for all y. Under the assumption that the behaviors of the MMSE estimators are described by the Replica MMSE Claim, we can then extrapolate the behavior of the MAP estimator.
To state the claim, define the scalar MAP estimator

    x̂scalar^map(z ; λ) = arg min_{x∈X} F(x, z, λ),   F(x, z, λ) = (1/(2λ)) |z − x|² + f(x),    (16)

where, again, we assume that (16) has a unique essential minimizer for almost all λ and almost all z. We also assume that the limit

    σ²(z, λ) = lim_{x→x̂} |x − x̂|² / ( 2 (F(x, z, λ) − F(x̂, z, λ)) )    (17)

exists, where x̂ = x̂scalar^map(z ; λ). We make the following additional assumptions:
Assumption 1 Consider the MAP estimator (13) applied to the estimation problem in Section 2. Assume:

(a) For all u > 0 sufficiently large, assume the postulated prior pu and noise level σu² satisfy the Replica MMSE Claim. Also, assume that for the corresponding effective noise levels, σeff²(u) and σp-eff²(u), the following limits exist:

    σeff,map² = lim_{u→∞} σeff²(u),   γp = lim_{u→∞} u σp-eff²(u).

(b) Suppose for each n, x̂j^u(n) is the MMSE estimate of the component xj for some index j ∈ {1, . . . , n} based on the postulated prior pu and noise level σu². Then, assume that the following limits can be interchanged:

    lim_{u→∞} lim_{n→∞} x̂j^u(n) = lim_{n→∞} lim_{u→∞} x̂j^u(n),

where the limits are in distribution.

(c) Assume that f(x) is non-negative and satisfies f(x)/log|x| → ∞ as |x| → ∞.

Item (a) is stated to reiterate that we are assuming the Replica MMSE Claim is valid. See [1, Sect. IV] for additional discussion of technical assumptions.
Replica MAP Claim [1]. Consider the estimation problem in Section 2. Let x̂map(y) be the MAP estimator (13) defined for some f(x) and γ > 0 satisfying Assumption 1. For each n, let j = j(n) be some deterministic component index with j(n) ∈ {1, . . . , n}. Then:

(a) As n → ∞, the random vectors (xj, sj, x̂j^map) converge in distribution to the random vector (x, s, x̂) where x, s, and v are independent with x ∼ p0(x), s ∼ pS(s), v ∼ N(0, 1), and

    x̂ = x̂scalar^map(z, λp),   z = x + √μ v,    (18)

where μ = σeff,map²/s and λp = γp/s.

(b) The limiting effective noise levels σeff,map² and γp satisfy the equations

    σeff,map² = σ0² + β E[ s |x − x̂|² ]    (19a)
    γp = γ + β E[ s σ²(z, λp) ],    (19b)

where the expectations are taken over x ∼ p0(x), s ∼ pS(s), and v ∼ N(0, 1), with x̂ and z defined in (18).
Analogously to the Replica MMSE Claim, the Replica MAP Claim asserts that the asymptotic behavior
of the MAP estimate of any single component of x is described by a simple equivalent scalar estimator. In the equivalent scalar model, the component of the true vector x is corrupted by Gaussian
noise and the estimate of that component is given by a scalar MAP estimate of the component from
the noise-corrupted version.
5 Analysis of Compressed Sensing
Our results thus far hold for any separable distribution for x and under mild conditions on the cost
function f . The role of f is to determine the estimator. In this section, we first consider choices of
f that yield MAP estimators relevant to compressed sensing. We then additionally impose a sparse
prior for x for numerical evaluations of asymptotic performance.
Lasso Estimation. We first consider the lasso or basis pursuit estimate [13, 14] given by

    \hat{x}^{\mathrm{lasso}}(y) = \arg\min_{x \in \mathbb{R}^n} \frac{1}{2\gamma} \| y - A S^{1/2} x \|_2^2 + \| x \|_1,     (20)

where \gamma > 0 is an algorithm parameter. This estimator is identical to the MAP estimator (13) with the cost function f(x) = |x|.
With this cost function, the scalar MAP estimator in (16) is given by

    \hat{x}^{\mathrm{map}}_{\mathrm{scalar}}(z; \lambda) = T^{\mathrm{soft}}_{\lambda}(z),     (21)

where T^{\mathrm{soft}}_{\lambda}(z) is the soft thresholding operator

    T^{\mathrm{soft}}_{\lambda}(z) = \begin{cases} z - \lambda, & \text{if } z > \lambda; \\ 0, & \text{if } |z| \le \lambda; \\ z + \lambda, & \text{if } z < -\lambda. \end{cases}     (22)
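As a concreteness check, the soft thresholding operator (22) is one line of code. A minimal Python/NumPy sketch follows; the vectorized form and the function name are our own choices.

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft thresholding operator T_lam^soft of Eq. (22), applied elementwise.

    Shrinks each entry of z toward zero by lam and zeroes out entries
    whose magnitude is at most lam.
    """
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
```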
The Replica MAP Claim now states that there exist effective noise levels \sigma_{\mathrm{eff,map}}^2 and \gamma_p such that for any component index j, the random vector (x_j, s_j, \hat{x}_j) converges in distribution to the vector (x, s, \hat{x}) where x \sim p_0(x), s \sim p_S(s), and \hat{x} is given by

    \hat{x} = T^{\mathrm{soft}}_{\lambda_p}(z), \qquad z = x + \sqrt{\lambda} \, v,     (23)

where v \sim \mathcal{N}(0, 1), \lambda_p = \gamma_p / s, and \lambda = \sigma_{\mathrm{eff,map}}^2 / s. Hence, the asymptotic behavior of lasso has a remarkably simple description: the asymptotic distribution of the lasso estimate \hat{x}_j of the component x_j is identical to x_j being corrupted by Gaussian noise and then soft-thresholded to yield the estimate \hat{x}_j.
To calculate the effective noise levels, one can perform a simple calculation to show that \sigma^2(z, \lambda) in (17) is given by

    \sigma^2(z, \lambda) = \begin{cases} \lambda, & \text{if } |z| > \lambda; \\ 0, & \text{if } |z| \le \lambda. \end{cases}     (24)

Hence,

    \mathbb{E}\big[ s \, \sigma^2(z, \lambda_p) \big] = \mathbb{E}\big[ s \lambda_p \Pr(|z| > \lambda_p) \big] = \gamma_p \Pr(|z| > \gamma_p / s),     (25)

where we have used the fact that \lambda_p = \gamma_p / s. Substituting (21) and (25) into (19), we obtain the fixed-point equations

    \sigma_{\mathrm{eff,map}}^2 = \sigma_0^2 + \beta \, \mathbb{E}\big[ s \, |x - T^{\mathrm{soft}}_{\lambda_p}(z)|^2 \big],     (26a)
    \gamma_p = \gamma + \beta \gamma_p \Pr(|z| > \gamma_p / s),     (26b)

where the expectations are taken with respect to x \sim p_0(x), s \sim p_S(s), and z in (23). Again, while these fixed-point equations do not have a closed-form solution, they can be relatively easily solved numerically given distributions of x and s.
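For instance, under the Bernoulli-Gaussian prior used in the Numerical Simulation below and a deterministic scale s = 1, equations (26a)-(26b) can be solved by a damped Monte Carlo iteration. The sketch below is our own illustration, not the authors' code; the damping factor, sample size, iteration count, and initialization are arbitrary choices, and convergence is not guaranteed in general.

```python
import numpy as np

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_fixed_point(beta, gamma, sigma0_sq, sparsity=0.1,
                      n_samples=200_000, n_iters=500, damp=0.5, seed=0):
    """Solve Eqs. (26a)-(26b) by damped Monte Carlo iteration, assuming
    a Bernoulli-Gaussian prior on x and s = 1 (so lambda = sigma_eff_sq
    and lambda_p = gamma_p). Returns (sigma_eff_map_sq, gamma_p)."""
    rng = np.random.default_rng(seed)
    # Samples from the prior p0: N(0,1) with probability `sparsity`, else 0.
    x = rng.standard_normal(n_samples) * (rng.random(n_samples) < sparsity)
    v = rng.standard_normal(n_samples)

    sig_sq, gam_p = sigma0_sq, gamma  # crude initialization
    for _ in range(n_iters):
        z = x + np.sqrt(sig_sq) * v           # Eq. (23) with s = 1
        xhat = soft_threshold(z, gam_p)       # threshold lambda_p = gamma_p
        new_sig_sq = sigma0_sq + beta * np.mean((x - xhat) ** 2)       # (26a)
        new_gam_p = gamma + beta * gam_p * np.mean(np.abs(z) > gam_p)  # (26b)
        sig_sq = (1 - damp) * sig_sq + damp * new_sig_sq
        gam_p = (1 - damp) * gam_p + damp * new_gam_p
    return sig_sq, gam_p
```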
Zero Norm-Regularized Estimation. Lasso can be regarded as a convex relaxation of zero norm-regularized estimation

    \hat{x}^{\mathrm{zero}}(y) = \arg\min_{x \in \mathbb{R}^n} \frac{1}{2\gamma} \| y - A S^{1/2} x \|_2^2 + \| x \|_0,     (27)

where \|x\|_0 is the number of nonzero components of x. For certain strictly sparse priors, zero norm-regularized estimation may provide better performance than lasso. While computing the zero norm-regularized estimate is generally very difficult, we can use the replica analysis to provide a simple characterization of its performance. This analysis can provide a bound on the performance achievable by practical algorithms.

The zero norm-regularized estimator is identical to the MAP estimator (13) with the cost function

    f(x) = \begin{cases} 0, & \text{if } x = 0; \\ 1, & \text{if } x \ne 0. \end{cases}     (28)

Technically, this cost function does not satisfy the conditions of the Replica MAP Claim. To avoid this problem, we can consider an approximation of (28),

    f_{\delta, M}(x) = \begin{cases} 0, & \text{if } |x| < \delta; \\ 1, & \text{if } |x| \in [\delta, M], \end{cases}

which is defined on the set X = \{x : |x| \le M\}. We can then take the limits \delta \to 0 and M \to \infty. To simplify the presentation, we will just apply the Replica MAP Claim with f(x) in (28) and omit the details in taking the appropriate limits.
With f(x) given by (28), the scalar MAP estimator in (16) is given by

    \hat{x}^{\mathrm{map}}_{\mathrm{scalar}}(z; \lambda) = T^{\mathrm{hard}}_{t}(z), \qquad t = \sqrt{2\lambda},     (29)

where T^{\mathrm{hard}}_{t} is the hard thresholding operator,

    T^{\mathrm{hard}}_{t}(z) = \begin{cases} z, & \text{if } |z| > t; \\ 0, & \text{if } |z| \le t. \end{cases}     (30)

Now, similar to the case of lasso estimation, the Replica MAP Claim states there exist effective noise levels \sigma_{\mathrm{eff,map}}^2 and \gamma_p such that for any component index j, the random vector (x_j, s_j, \hat{x}_j) converges in distribution to the vector (x, s, \hat{x}) where x \sim p_0(x), s \sim p_S(s), and \hat{x} is given by

    \hat{x} = T^{\mathrm{hard}}_{t}(z), \qquad z = x + \sqrt{\lambda} \, v,     (31)

where v \sim \mathcal{N}(0, 1), \lambda_p = \gamma_p / s, \lambda = \sigma_{\mathrm{eff,map}}^2 / s, and

    t = \sqrt{2\lambda_p} = \sqrt{2\gamma_p / s}.     (32)
Thus, the zero norm-regularized estimation of a vector x is equivalent to n scalar components corrupted by some effective noise level \sigma_{\mathrm{eff,map}}^2 and hard-thresholded based on an effective noise level \gamma_p.

The fixed-point equations for the effective noise levels \sigma_{\mathrm{eff,map}}^2 and \gamma_p can be computed similarly to the case of lasso. Specifically, one can verify that (24) and (25) are both satisfied for the hard thresholding operator as well. Substituting (25) and (29) into (19), we obtain the fixed-point equations

    \sigma_{\mathrm{eff,map}}^2 = \sigma_0^2 + \beta \, \mathbb{E}\big[ s \, |x - T^{\mathrm{hard}}_{t}(z)|^2 \big],     (33a)
    \gamma_p = \gamma + \beta \gamma_p \Pr(|z| > t),     (33b)

where the expectations are taken with respect to x \sim p_0(x), s \sim p_S(s), z in (31), and t given by (32). These fixed-point equations can be solved numerically.
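The same damped iteration sketched above for lasso carries over with only the thresholding rule changed. Again, this is a sketch of ours under the same assumptions (Bernoulli-Gaussian prior, s = 1), not code from the paper:

```python
import numpy as np

def hard_threshold(z, t):
    """Hard thresholding operator T_t^hard of Eq. (30), applied elementwise."""
    return np.where(np.abs(z) > t, z, 0.0)

# Inside the fixed-point loop of lasso_fixed_point above, replace the
# soft-threshold update with (s = 1 throughout):
#     t = np.sqrt(2.0 * gam_p)                                    # Eq. (32)
#     xhat = hard_threshold(z, t)
#     new_sig_sq = sigma0_sq + beta * np.mean((x - xhat) ** 2)    # Eq. (33a)
#     new_gam_p = gamma + beta * gam_p * np.mean(np.abs(z) > t)   # Eq. (33b)
```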
[Figure: median squared error (dB, from 0 down to -18) versus measurement ratio \beta = n/m (0.5 to 3), with curves for linear MMSE (replica and sim.), lasso (replica and sim.), zero norm-regularized, and optimal MMSE estimation.]

Figure 1: MSE performance prediction with the Replica MAP Claim. Plotted is the median normalized SE for various sparse recovery algorithms: linear MMSE estimation, lasso, zero norm-regularized estimation, and optimal MMSE estimation. Solid lines show the asymptotic predicted MSE from the Replica MAP Claim. For the linear and lasso estimators, the circles and triangles show the actual median SE over 1000 Monte Carlo simulations.
Numerical Simulation. To validate the predictive power of the Replica MAP Claim for finite dimensions, we performed numerical simulations where the components of x are a zero-mean Bernoulli-Gaussian process. Specifically,

    x_j \sim \begin{cases} \mathcal{N}(0, 1), & \text{with prob. } 0.1; \\ 0, & \text{with prob. } 0.9. \end{cases}

We took the vector x to have n = 100 i.i.d. components, and we used ten values of m to vary \beta = n/m from 0.5 to 3. For each problem size, we simulated the lasso and linear MMSE estimators over 1000 independent instances with noise levels chosen such that the SNR with perfect side information is 10 dB. Each set of trials is represented by its median squared error in Fig. 1.
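A minimal version of this Monte Carlo experiment can be reproduced along the following lines. This is our own sketch, not the authors' code: it generates the Bernoulli-Gaussian signal and a Gaussian measurement matrix with S = I, solves the lasso problem (20) with plain ISTA, and reports the median squared error. The values of gamma, the noise level, the trial count, and the iteration budget are illustrative stand-ins rather than the settings used for Fig. 1.

```python
import numpy as np

def ista_lasso(A, y, gamma, n_iters=2000):
    """Solve min_x (1/(2*gamma))||y - A x||^2 + ||x||_1 by ISTA.

    This is Eq. (20) with S = I. The step size 1/L uses the Lipschitz
    constant L = ||A||_2^2 / gamma of the smooth term.
    """
    L = np.linalg.norm(A, 2) ** 2 / gamma
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y) / gamma
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - 1.0 / L, 0.0)
    return x

rng = np.random.default_rng(0)
n, m, sigma0, gamma = 100, 50, 0.1, 0.01   # illustrative values, not the paper's
errors = []
for _ in range(100):                        # 1000 trials in the paper
    x = rng.standard_normal(n) * (rng.random(n) < 0.1)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x + sigma0 * rng.standard_normal(m)
    xhat = ista_lasso(A, y, gamma)
    errors.append(np.sum((xhat - x) ** 2))
print("median SE (dB):", 10 * np.log10(np.median(errors)))
```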
The simulated performance is matched very closely by the asymptotic values predicted by the replica
analysis. (Analysis of the linear MMSE estimator using the Replica MAP Claim is detailed in [1];
the Replica MMSE Claim is also applicable to this estimator.) In addition, the replica analysis can be
applied to zero norm-regularized and optimal MMSE estimators that are computationally infeasible
for large problems. These results are also shown in Fig. 1, illustrating the potential of the replica
method to quantify the precise performance losses of practical algorithms.
Additional numerical simulations in [1] illustrate convergence to the replica MAP limit, applicability
to discrete distributions for x, effects of power variations in the components, and accurate prediction
of the probability of sparsity pattern recovery.
6 Conclusions
We have shown that the behavior of vector MAP estimators with large random measurement matrices and Gaussian noise asymptotically matches that of a set of decoupled scalar estimation problems.
We believe that this equivalence to a simple scalar model will open up numerous doors for analysis,
particularly in problems of interest in compressed sensing. One can use the model to dramatically
improve upon existing performance analyses for sparsity pattern recovery and MSE. Also, the technique is sufficiently general to study effects of dynamic range.
References
[1] S. Rangan, A. K. Fletcher, and V. K. Goyal. Asymptotic analysis of MAP estimation via the replica method and applications to compressed sensing. arXiv:0906.3234v1 [cs.IT], June 2009.
[2] S. F. Edwards and P. W. Anderson. Theory of spin glasses. J. Phys. F: Metal Physics, 5:965–974, 1975.
[3] H. Nishimori. Statistical physics of spin glasses and information processing: An introduction. International Series of Monographs on Physics. Oxford Univ. Press, Oxford, UK, 2001.
[4] T. Tanaka. A statistical-mechanics approach to large-system analysis of CDMA multiuser detectors. IEEE Trans. Inform. Theory, 48(11):2888–2910, November 2002.
[5] R. R. Müller. Channel capacity and minimum probability of error in large dual antenna array systems with binary modulation. IEEE Trans. Signal Process., 51(11):2821–2828, November 2003.
[6] D. Guo and S. Verdú. Randomly spread CDMA: Asymptotics via statistical physics. IEEE Trans. Inform. Theory, 51(6):1983–2010, June 2005.
[7] M. Talagrand. Spin Glasses: A Challenge for Mathematicians. Springer, New York, 2003.
[8] A. Montanari and D. Tse. Analysis of belief propagation for non-linear problems: The example of CDMA (or: How to prove Tanaka's formula). arXiv:cs/0602028v1 [cs.IT], February 2006.
[9] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, February 2006.
[10] D. L. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289–1306, April 2006.
[11] E. J. Candès and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inform. Theory, 52(12):5406–5425, December 2006.
[12] B. K. Natarajan. Sparse approximate solutions to linear systems. SIAM J. Computing, 24(2):227–234, April 1995.
[13] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Sci. Comp., 20(1):33–61, 1999.
[14] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Stat. Soc., Ser. B, 58(1):267–288, 1996.
[15] D. L. Donoho, M. Elad, and V. N. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory, 52(1):6–18, January 2006.
[16] J. A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inform. Theory, 50(10):2231–2242, October 2004.
[17] J. A. Tropp. Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Trans. Inform. Theory, 52(3):1030–1051, March 2006.
[18] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (lasso). IEEE Trans. Inform. Theory, 55(5):2183–2202, May 2009.
[19] D. L. Donoho and J. Tanner. Counting faces of randomly-projected polytopes when the projection radically lowers dimension. J. Amer. Math. Soc., 22(1):1–53, January 2009.
2,908 | 3,636 | Optimal context separation of spiking haptic signals by second-order somatosensory neurons
Romain Brasselet
CNRS - UPMC Univ Paris 6, UMR 7102
F 75005, Paris, France
[email protected]
Roland S. Johansson
UMEA Univ, Dept Integr Medical Biology
SE-901 87 Umea, Sweden
[email protected]
Angelo Arleo
CNRS - UPMC Univ Paris 6, UMR 7102
F 75005, Paris, France
[email protected]
Abstract
We study an encoding/decoding mechanism accounting for the relative spike timing of the signals propagating from peripheral nerve fibers to second-order somatosensory neurons in the cuneate nucleus (CN). The CN is modeled as a population of spiking neurons receiving as inputs the spatiotemporal responses of real
mechanoreceptors obtained via microneurography recordings in humans. The efficiency of the haptic discrimination process is quantified by a novel definition of
entropy that takes into full account the metrical properties of the spike train space.
This measure proves to be a suitable decoding scheme for generalizing the classical Shannon entropy to spike-based neural codes. It permits an assessment of
neurotransmission in the presence of a large output space (i.e. hundreds of spike
trains) with 1 ms temporal precision. It is shown that the CN population code
performs a complete discrimination of 81 distinct stimuli already within 35 ms
of the first afferent spike, whereas a partial discrimination (80% of the maximum
information transmission) is possible as rapidly as 15 ms. This study suggests that
the CN may not constitute a mere synaptic relay along the somatosensory pathway but, rather, it may convey optimal contextual accounts (in terms of fast and
reliable information transfer) of peripheral tactile inputs to downstream structures
of the central nervous system.
1 Introduction
During haptic exploration tasks, forces are applied to the skin of the hand, and in particular to the
fingertips, which constitute the most sensitive parts of the hand and are prominently involved in object manipulation/recognition tasks. Due to the visco-elastic properties of the skin, forces applied to
the fingertips generate complex non-linear deformation dynamics, which makes it difficult to predict
how these forces can be transduced into percepts by the somatosensory system. Mechanoreceptors
innervate the epidermis and respond to the mechanical indentations and deformations of the skin.
They send direct projections to the spinal cord and to the cuneate nucleus (CN), which constitutes
an important synaptic relay of the ascending somatosensory pathway. The CN projects to several
areas of the central nervous system (CNS), including the cerebellum and the thalamic ventrolateral
posterior nucleus, which in turn projects to the primary somatosensory cortex. The main objective of
this study is to investigate the role of the CN in mediating optimal feed-forward encoding/decoding
of somatosensory information.
1
[Figure: schematic of the pathway from fingertip mechanoreceptors through peripheral nerve fibers to 2nd order neurones in the cuneate nucleus (brainstem), which projects to the cerebellum and thalamus (CNS); an inset histogram shows the number of afferents (up to 16) versus conduction velocity (0-100 m/s), and first-spike waves are sketched for stimulus A and stimulus B.]

Figure 1: Overview of the ascending pathway from primary tactile receptors of the fingertip to 2nd order somatosensory neurons in the cuneate nucleus of the brainstem.
Recent microneurography studies in humans [9] suggest that the relative timing of impulses from
ensembles of mechanoreceptor afferents can convey information about contact parameters faster
than the fastest possible rate code, and fast enough to account for the use of tactile signals in natural
manipulation. Even under the most favorable conditions, discrimination based on firing rates takes
on average 15 to 20 ms longer than discrimination based on first spike latency [9, 10]. Estimates of
how early the sequence in which afferents are recruited conveys information needed for the discrimination of contact parameters indicate that, among mechanoreceptors, the FA-I population provides
the fastest reliable discrimination of both surface curvature and force direction. Reliable discrimination can take place after as few as some five FA-I afferents are recruited, which can occur a few
milliseconds after the first impulse in the population response [10].
Encoding and decoding of sensory information based on the timing of neural discharges, rather
than (or in addition to) their rate, has received increasing attention in the past decade [7, 22]. In
particular, the high information content in the timing of the first spikes in ensembles of central
neurons has been emphasized in several sensory modalities, including the auditory [3, 16], visual
[4, 6], and somatosensory [17] systems. If relative spike timing is fundamental for rapid encoding
and transfer of tactile events in manipulation, then how do neurons read out information carried by
a temporal code? Various decoding schemes have been proposed to discriminate between different
spatiotemporal sequences of incoming spike patterns [8, 13, 1, 7].
Here, we investigate an encoding/decoding mechanism accounting for the relative spike timing of
signals propagating from primary tactile afferents to 2nd order neurons in the CN (Fig. 1). The
population coding properties of a model CN network are studied by employing as peripheral signals
the responses of real mechanoreceptors obtained via microneurography recordings in humans. We
focus on the first spike of each mechanoreceptor, according to the hypothesis that the variability in
the first-spike latency domain with respect to stimulus feature (e.g. the direction of the force) is
larger than the variability within repetitions of the same stimulus [9]. Thus, each tactile stimulus
consists of a single volley of spikes (black and gray waves in Fig. 1) forming a spatiotemporal
response pattern defined by the first-spike latencies across the afferent population (Fig. S1).
2 Methods
2.1 Human microneurography data
In order to investigate fast encoding/decoding mechanisms of haptic signals, we concentrate on the
responses of FA-I mechanoreceptors only [9]. The stimulus state space is defined according to a set
of four primary contact parameters:
• the curvature of the probe (C = {0, 100, 200} m⁻¹, |C| = 3),
• the magnitude of the applied force (F = {1, 2, 4} N, |F| = 3),
• the direction of the force (O = {Ulnar, Radial, Distal, Proximal, Normal}, |O| = 5),
• the angle of the force relative to the normal direction (A = {5, 10, 20}°, |A| = 3).
In total, we consider the responses of 42 FA-I mechanoreceptors to 81 distinct stimuli. The propagation velocity distribution across the set of primary projections onto 2nd order CN neurons is
considered by fitting experimental observations [11, 21] (see Fig. 1, upper-left inset). Each primary
afferent is assigned a conduction speed equal to the mean of the experimental distribution. An average peripheral nerve length of 1 m (from the fingertip to the CN) is then taken to compute the
corresponding conduction delay.
2.2 Cuneate nucleus model and synaptic plasticity rule
Single unit discharges at the CN level are modeled according to the spike-response model (SRM)
[5] (see Supporting Material Sec. A.1). The parameters determining the response of the CN single neuron model are set according to in vivo electrophysiological recordings by H. Jörntell (unpublished data). Fig. 2A shows a sample firing pattern that illustrates the spike timing reliability property [14] of the model CN neuron. We assume that the stochasticity governing the entire
mechanoreceptors-CN pathway can be represented by the probability function that determines the
electro-responsiveness properties of the SRM.
The CN network is modeled as a population of SRM units. The connectivity layout of the
mechanoreceptor-to-CN projections is based on neuroanatomical data [12], which suggests an average divergence/convergence ratio of 1700/300. This asymmetric coupling is in favor of a fast
feed-forward encoding/decoding process occurring at the CN network level. Based on this divergence/convergence data, and given that there are around 2000 mechanoreceptors at each fingertip
(and that the CN is somatotopically organized at least to the precision of the finger), there must exist
around 12000 CN neurons coding for the tactile information coming from each fingertip. These
data suggest a probability of connection between a mechanoreceptor and a CN cell of 0.15. In order
to test the hypothesis of a purely feed-forward information transfer at the CN level, no collateral
projections between CN neurons are considered in the current version of the model.
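To make this connectivity assumption concrete, the sketch below (our own Python illustration, not the authors' code) draws a feed-forward mechanoreceptor-to-CN coupling with independent Bernoulli(0.15) connections, using the population counts quoted above; the uniform [0, 1] initialization of existing synapses anticipates the training setup described at the end of this section. The full-size matrix is large, so smaller shapes may be preferable for experimentation.

```python
import numpy as np

def mechanoreceptor_to_cn_weights(n_receptors=2000, n_cn=12000,
                                  p_connect=0.15, seed=0):
    """Random feed-forward connectivity with connection probability 0.15.

    Returns a weight matrix of shape (n_cn, n_receptors): existing synapses
    get uniform initial weights in [0, 1], absent ones are zero.
    """
    rng = np.random.default_rng(seed)
    mask = rng.random((n_cn, n_receptors)) < p_connect
    return mask * rng.random((n_cn, n_receptors))
```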
We put forth the hypothesis that the efficacy of the mechanoreceptor-CN synapses is regulated according to spike-timing-dependent plasticity (STDP, [1, 15]). In particular, we employ a STDP rule
specifically developed for the SRM [20]. This learning rule optimizes the information transmission
property of a single SRM neuron, accounts for coincidence detection across multiple afferents and
provides a biologically-plausible principle that generalizes the Bienenstock-Cooper-Munro (BCM)
rule [2] for spiking neurons. In order to focus on the first spike latencies of the mechanoreceptor
signals, we adapt the learning rule developed by Toyoizumi et al. 2005 [20] to very short transient
stimuli, and we apply it to maximize the information transfer at the level of the CN neural population.
See Supporting Material Sec. A.2 for details on the learning rule. The weights of mechanoreceptor-CN synapses are initialized randomly between 0 and 1 according to a uniform distribution. The
training phase consists of 200 presentations of the sequence of 81 stimuli.
2.3 Metrical information transfer measure
An information-theoretical approach is employed to assess the efficiency of the haptic discrimination process. Classical literature solutions based on Shannon's mutual information (MI) [19] consist of using either a binning procedure (which reduces the response space and relaxes the temporal constraint) or a clustering method (e.g. k-nearest neighbors based on spike-train metrics) coupled to a confusion matrix to estimate a lower bound on MI. Yet, none of these techniques allows the information transmission to be assessed by taking into full account the metrics of the spike response space. Furthermore, a decoding scheme accounting for precise temporal discrimination while maintaining the combinatorial properties of the output space within suitable boundaries – even in the presence of hundreds of CN spike trains – is needed.
A novel definition of entropy is set forth to provide a suitable measure for the encoding/decoding of
spiking signals, and to quantify the information transmission in the presence of large populations of
spike trains with a 1 ms temporal precision. The following definition of entropy is taken:

    H^*(R) = -\sum_{r \in R} \frac{1}{|R|} \log \sum_{r' \in R} \frac{\langle r | r' \rangle}{|R|}     (1)

where R is the set of responses elicited by all the stimuli, |R| is the cardinal of R, and ⟨r|r'⟩ is a similarity measure between any two responses r and r'. The similarity measure ⟨r|r'⟩ depends on Victor-Purpura (VP) spike train metrics [23] (see below).

It is worth noting that, in contrast to the Shannon definition of entropy, in which the sum is over different response clusters, here the sum is over all the |R| responses, no matter if they are identical or different (i.e. a cluster-less entropy definition). Also, the similarity measure ⟨r|r'⟩ allows the computation of the probability of getting a given response (i.e. p(r|s)) to be avoided, which usually implies grouping responses into clusters. These aspects make H*(R) suitable for taking into account the metric properties of the responses. Notice that if the similarity measure were defined as ⟨r|r'⟩ = \delta(r, r') (with \delta being the Dirac function), then H*(R) would be exactly the same as the Shannon entropy.

[Figure: panel A shows a raster plot of spike times over 25 trials, the corresponding PSTH, and the input current over 0-200 ms; panel B plots intra- and inter-stimulus distances D_VP over time (ms) together with the critical distance D_critic and, at right, histograms of the two distance distributions.]

Figure 2: (A) Example of discharge patterns of a model CN neuron evoked by a constant depolarizing current (bottom). Responses are shown as a raster plot of spike times during 25 trials (center), and as the corresponding PSTH (top). (B) Example of intra- and inter-stimulus distances D_VP (red and blue curves, respectively) over time for a VP cost parameter C_VP = 0.15. The optimal discrimination condition is met after about 110 ms, when the distributions of intra- and inter-stimulus distances (right plot) stop overlapping. Fig. S2 in the Supporting Material shows an example of two distance distributions that never become disjoint (i.e. perfect discrimination never occurs).

The conditional entropy is then taken as:

    H^*(R|S) = \sum_{s \in S} p(s) H^*(R|s) = -\sum_{s \in S} p(s) \sum_{r \in R_s} \frac{1}{|R_s|} \log \sum_{r' \in R_s} \frac{\langle r | r' \rangle}{|R_s|}     (2)

where R_s is the set of responses elicited by the stimulus s.

Finally, the metrical information measure is given by:

    I^*(R; S) = H^*(R) - H^*(R|S)     (3)

The similarity measure ⟨r|r'⟩ is defined as a function of the VP distance D_VP(r, r') between two population responses r and r'. The distance D_VP(r, r') depends on the VP cost parameter C_VP [23], which determines the time scale of the analysis by regulating the influence of spike timing vs. spike count when calculating the distance between r and r'.

There is an infinite number of ways to obtain a scalar product from a distance. We take a very simple one, defined as:

    \langle r | r' \rangle = 1 \iff D_{VP}(r, r') < D_{critic}     (4)

where the critical distance D_critic is a free parameter. According to Eq. 4, whenever D_VP(r, r') < D_critic the responses r, r' are considered to be identical; otherwise they are classified as different. If D_critic = 0 one recovers the Shannon entropy from Eq. 1.
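For reference, the single-train VP distance underlying D_VP admits a standard dynamic-programming implementation. The Python sketch below follows the published definition of the metric [23] rather than any code from this paper; extending it to population responses (for instance by summing per-afferent distances, one of the labelled-line variants of the metric) is an assumption of ours.

```python
def vp_distance(t1, t2, q):
    """Victor-Purpura spike train distance [23] between two sorted lists of
    spike times, with cost parameter q (cost per unit of time shift).

    Dynamic program: deleting or inserting a spike costs 1; moving a
    spike by dt costs q * |dt|.
    """
    n, m = len(t1), len(t2)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)
    for j in range(1, m + 1):
        D[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j] + 1.0,                               # delete
                          D[i][j - 1] + 1.0,                               # insert
                          D[i - 1][j - 1] + q * abs(t1[i - 1] - t2[j - 1]))  # shift
    return D[n][m]
```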
In order to determine the optimal value for Dcritic , we consider two sets of VP distances:
• the intra-stimulus distances D_VP(r(s), r'(s)) between responses r, r' elicited by the same stimulus s;
• the inter-stimulus distances D_VP(r(s), r'(s'')) between responses r, r' elicited by two different stimuli s, s''.
Then, we compute the minimum and maximum intra-stimulus distances as well as the minimum
and maximum inter-stimulus distances. The optimal coding condition, corresponding to maximum
I*(R;S) and zero H*(R|S), occurs when the maximum intra-stimulus distance becomes smaller
than the minimum inter-stimulus distance.
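Given the matrix of pairwise population distances and the stimulus label of each response, Eqs. (1)-(4) and this optimal coding test reduce to a few lines. The Python sketch below is our own illustration; it assumes equiprobable stimuli and reports entropies in bits (base-2 logarithm), neither of which is stated explicitly at this point in the text.

```python
import numpy as np

def metrical_information(D, labels, d_critic):
    """H*(R), H*(R|S), and I*(R;S) of Eqs. (1)-(3), computed from an N x N
    matrix D of pairwise VP distances and a length-N array of stimulus
    labels, with the similarity of Eq. (4): <r|r'> = 1 iff D(r,r') < d_critic."""
    sim = (D < d_critic).astype(float)
    N = len(labels)
    H_R = -np.mean(np.log2(sim.sum(axis=1) / N))             # Eq. (1)
    H_RgS = 0.0
    stimuli = np.unique(labels)
    for s in stimuli:                                        # p(s) = 1/|S|
        idx = np.where(labels == s)[0]
        sub = sim[np.ix_(idx, idx)]
        H_RgS += -np.mean(np.log2(sub.sum(axis=1) / len(idx)))
    H_RgS /= len(stimuli)                                    # Eq. (2)
    return H_R, H_RgS, H_R - H_RgS                           # Eq. (3)

def perfect_discrimination(D, labels):
    """Test the optimal coding condition: maximum intra-stimulus distance
    smaller than minimum inter-stimulus distance. When it holds, the
    returned minimum inter-stimulus distance is a valid choice of d_critic."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    max_intra = D[same & off_diag].max()
    min_inter = D[~same].min()
    return max_intra < min_inter, min_inter
```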
In the case of spike train neurotransmission, the relationship between intra- and inter-stimulus distance distributions tends to evolve over time, as the input spike wave across multiple afferents flows
in. Fig. 2B shows an example of intra- and inter-stimulus distance distributions evolving over time.
The two distributions separate from each other after about 110 ms. The critical parameter Dcritic can
then be taken as the distance at which the maximum intra-stimulus distance becomes smaller than
the minimum inter-stimulus distance (dashed line in Fig. 2B). The time at which the critical distance
Dcritic can be determined (i.e. the time at which the two distributions stop overlapping) indicates
when the perfect discrimination condition is reached (i.e. maximum I*(R;S) and zero H*(R|S)).
To summarize, perfect discrimination calls upon the following rule:
• if all intra-stimulus distances are smaller than the critical distance D_critic, then all the responses elicited by any stimulus are considered identical. The conditional entropy H*(R|S) is therefore nil.
• if all inter-stimulus distances are greater than D_critic, then two responses elicited by two different stimuli are always discriminated. The information I*(R;S) is therefore maximum.
As mentioned, the critical distance D_critic is interdependent with the VP cost parameter C_VP [23]. We define the optimum VP cost C*_VP as the one that leads to the earliest perfect discrimination (in the example of Fig. 2B, a cost C_VP = 0.15 leads to perfect discrimination after 110 ms).
3 Results
3.1 Decoding of spiking haptic signals upstream from the cuneate nucleus
First, we validate the information theoretical analysis described above to decode a limited set of
microneurography data upstream from the CN network [18]. Only the 5 force directions (ulnar,
radial, distal, proximal, normal) are considered as variable primary features [9]. Each of the 5
stimuli is presented 100 times, and the VP distances D_VP are computed across the population of 42 mechanoreceptor afferents. Fig. 3A shows that the critical distance D_critic = 8 can be set 72 ms
after the stimulus onset. As shown in Fig. 3B, that ensures that the perfect discrimination condition
is met within 30 ms of the first mechanoreceptor discharge. Fig. 3C displays two samples of distance
matrices indicating how the input spike waves across the 42 mechanoreceptor afferents are clustered
by the decoding system over time. Before the occurrence of the perfect discrimination condition
(left matrix) different stimuli can have relatively small distances (e.g. P and N force directions),
which means that some interferences are affecting the decoding process. After 72 ms (right matrix),
all the initially overlapping contexts become pulled apart, which removes all interferences across
inputs and leads to a 100% accuracy in the discrimination process.
3.2 Optimal haptic context separation downstream from the cuneate nucleus
Second, the entire set of microneurography recordings (81 stimuli) is employed to analyze the information transmission properties of a network of 50 CN neurons in the presence of synaptic plasticity
(i.e. LTP/LTD based on the learning rule detailed in Sec. A.2). To compute I*(R;S), the VP distances D_VP(r, r') between any two CN population responses r, r' are considered. Again, the distance D_critic is used to identify the perfect discrimination condition, and the VP cost parameter C*_VP = 0.1 yielding the fastest perfect discrimination is selected. Fig. 4A shows that the CN population achieves optimal context separation within 35 ms of the arrival of the first afferent spikes.

[Figure: panel A plots intra- and inter-stimulus distances D_VP over time (ms) with the threshold D_critic = 8; panel B plots the information I*(R;S) and conditional entropy H*(R|S) in bits over time, reaching I* = 100% and H* = 0; panel C shows distance matrices over the five force directions (N, R, D, U, P) before and after perfect discrimination.]

Figure 3: Discrimination capacity upstream from the CN for a set of 5 stimuli (obtained by varying the orientation parameter only) presented 100 times each. (A) Intra- and inter-stimulus distances over time for a VP cost parameter C_VP = 0.15. The perfect discrimination condition is met 72 ms after the stimulus onset and 30 ms after the arrival of the first spike. (B) Metrical information and conditional entropy obtained with D_critic = 8. (C) Distance matrices before and after the occurrence of perfect discrimination.
Selecting the optimal value of the critical distance, as done for Fig. 4A, corresponds to the situation
in which a readout system downstream from the CN would need a complete separation of haptic
percepts (e.g. for highly precise feature recognition). Relaxing this optimality constraint (e.g. to the
extent of very rapid, though less precise, reactions) can further speed up the discrimination process.
For instance, Fig. 4B indicates that setting Dcritic to a suboptimal value would lead to a partial
discrimination condition in which 80% of the maximum I*(R;S) (with non-zero H*(R|S)) can be
achieved within 15 ms of the arrival of the first pre-synaptic spike.
Figs. 4C-D illustrate the distributions of intra- and inter-stimulus distances 100 ms after stimulus
onset before and after learning. It is shown that while the distributions are well-separated after
learning, they are still largely overlapping before training (implying the impossibility of perfect
discrimination). It is also interesting to note that after (resp. before) learning the CN fired on
average n = 217 (resp. 39) spikes, and that the maximum intra-stimulus distance was about D_VP^max = 14 (resp. 45). The average uncertainty on the timing of a single spike can be expressed by \Delta t = D_VP^max / (C_VP \cdot n). Since C_VP = 0.1, \Delta t = 0.6 ms after learning and about 12 ms before. This shows that
the plasticity rule helped to reduce the jitter on CN spikes, thus reducing the metrical conditional
entropy compared to the pre-learning condition.
Fig. 4E suggests that the plasticity rule leads to stable weight distributions that are invariant with respect to initial random conditions (uniform distribution between [0, 1]). After learning, the synaptic
efficacies of the mechanoreceptor-to-CN projections converge towards a bimodal distribution with one peak close to zero and the other peak close to the maximum weight.

[Figure: panels A and B plot the information I*(R;S) and conditional entropy H*(R|S) in bits over time (ms); panels C and D show histograms of the intra- and inter-stimulus distances D_VP before and after training; panel E shows the distribution of synaptic weights (log10 number of synapses versus weight in [0, 1]).]

Figure 4: Information I*(R;S) and conditional entropy H*(R|S) over time. The CN population consists of 50 cells. The 81 tactile stimuli are presented 100 times each. (A) Optimal discrimination is reached 35 ms after the first afferent spike. (B) If the perfect discrimination constraint is relaxed by reducing the critical distance, then the system can perform partial discrimination (i.e. 80% of maximum I*(R;S) and non-zero H*(R|S)) already within 15 ms of the first spike time. (C-D) Distributions of intra- and inter-stimulus distances (computed 100 ms after stimulus onset) before and after training, respectively. (E) Distribution of CN synaptic weights after learning. In this example, a network of 10000 cuneate neurons has been trained.
Finally, Sec. A.3 and Fig. S3 report some supplementary results obtained by using a classical STDP rule [1, 15] – rather than the learning rule described in Secs. 2.2 and A.2 – to train the CN network.
3.3 How does the size of the cuneate nucleus network influence discrimination?
An additional analysis was performed to study the relationship between the size of the CN population and the optimality of the encoding/decoding process. This analysis reveals that a lower bound
on the number of CN neurons exists in order to perform optimal (i.e. both very rapid and reliable)
discrimination of the 81 microneurography spike trains. As shown in Fig. 5, the perfect discrimination condition cannot be met with a population of less than 50 CN neurons. This result corroborates
the hypothesis that a spatiotemporal population code is a necessary condition for performing effective context separation of complex spiking signals [3, 6]. By increasing the number of neurons, the
discrimination becomes faster and saturates at 72 ms (which corresponds to the time at which the
first spike from the slowest volley of pulses arrives at the CN). It is also shown that the number of
spikes emitted on average by CN cells under the optimal discrimination condition decreases from
2.1 to 1.3 with the size of the CN population, supporting the idea that one spike per neuron is enough
to convey a significant amount of information.
[Figure: time (ms) to perfect discrimination versus CN population size (50 to 2000), decreasing from about 85 ms and saturating at 72 ms; average spike counts per CN neuron of 2.1 and 1.3 are annotated at the two extremes.]

Figure 5: Time necessary to perfectly discriminate the entire set of 81 stimuli as a function of the size of the CN population. Each stimulus is presented 100 times. The numbers of spikes emitted on average by each CN neuron when optimal discrimination occurs are also indicated in the diagram.
4 Discussion
This study focuses on how a population of 2nd order somatosensory neurons in the cuneate nucleus
(CN) can encode incoming spike trains – obtained via microneurography recordings in humans – by
separating them in an abstract metrical space. The main contribution is the prediction concerning
a significant role of the CN in conveying optimal contextual accounts of peripheral tactile inputs to
downstream structures of the CNS.
It is shown that an encoding/decoding mechanism based on relative spike timing can account for
rapid and reliable transmission of tactile information at the level of the CN. In addition, it is emphasized that the variability of the CN conditioned responses to tactile stimuli constitutes a fundamental
measure when examining neurotransmission at this stage of the ascending somatosensory pathway.
More generally, the number of responses elicited by a stimulus is a critical issue when information
has to be transferred through multiple synaptic relays. If a single stimulus can possibly elicit millions of different responses on a neural layer, how can this plethora of data be effectively decoded
by downstream networks? Thus, neural information processing requires encoding mechanisms capable of producing as few responses as possible to a given stimulus while keeping these responses
different between stimuli.
A corollary contribution of this work consists in putting forth a novel definition of entropy, H*(R), to assess neurotransmission in the presence of large spike train spaces and with high temporal precision. An information theoretical analysis – based on this novel definition of entropy – is used to measure the ability of the CN network to perform haptic context discrimination. The optimality condition corresponds to maximum information I*(R;S) and (simultaneously) minimum conditional entropy H*(R|S) (which quantifies the variability of the CN conditioned responses).
Finally, the proposed information theoretical measure accounts for the metrical properties of the
response space explicitly and estimates the optimality of the encoding/decoding process based on
its context separation capability (which minimizes destructive interference over learning and maximizes memory capacity). The method does not call upon an a priori decoding analysis to build
predefined response clusters (e.g. as the confusion matrix method does to compute conditional
probabilities and then Shannon MI). Rather, the evaluation of the clustering process is embedded in
the entropy measure and, when the condition of optimal discrimination is reached, the existence of
well-defined clusters is ensured.
Acknowledgments. Granted by the EC Project SENSOPAC, IST-027819-IP.
References
[1] G. Bi and M. Poo. Distributed synaptic modification in neural networks induced by patterned stimulation. Nature, 401:792–796, 1999.
[2] E. Bienenstock, L. Cooper, and P. Munro. Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J Neurosci, 2:32–48, 1982.
[3] S. Furukawa, L. Xu, and J.C. Middlebrooks. Coding of sound-source location by ensembles of cortical neurons. J Neurosci, 20:1216–1228, 2000.
[4] T.J. Gawne, T.W. Kjaer, and B.J. Richmond. Latency: another potential code for feature binding in the striate cortex. J Neurophysiol, 76:1356–1360, 1996.
[5] W. Gerstner and W. Kistler. Spiking Neuron Models. Cambridge University Press, 2002.
[6] T. Gollisch and M. Meister. Rapid neural coding in the retina with relative spike latencies. Science, 319:1108–1111, 2008.
[7] P. Heil. First-spike latency of auditory neurons revisited. Curr Opin Neurobiol, 14:461–467, 2004.
[8] J.J. Hopfield. Pattern recognition computation using action potential timing for stimulus representation. Nature, 376:33–36, 1995.
[9] R.S. Johansson and I. Birznieks. First spikes in ensembles of human tactile afferents code complex spatial fingertip events. Nat Neurosci, 7:170–177, 2004.
[10] R.S. Johansson and J.R. Flanagan. Coding and use of tactile signals from the fingertips in object manipulation tasks. Nat Rev Neurosci, 10:345–359, 2009.
[11] R.S. Johansson and A. Vallbo. Tactile sensory coding in the glabrous skin of the human hand. Trends Neurosci, 6:27–32, 1983.
[12] E. Jones. Cortical and subcortical contributions to activity-dependent plasticity in primate somatosensory cortex. Annu Rev Neurosci, 23:1–37, 2000.
[13] P. Koenig, A.K. Engel, and W. Singer. Integrator or coincidence detector? The role of the cortical neuron revisited. Trends Neurosci, 19:130–137, 1996.
[14] Z.F. Mainen and T.J. Sejnowski. Reliability of spike timing in neocortical neurons. Science, 268:1503–1506, 1995.
[15] H. Markram, J. Luebke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213–215, 1997.
[16] I. Nelken, G. Chechik, T.D. Mrsic-Flogel, A.J. King, and J.W. Schnupp. Encoding stimulus information by spike numbers and mean response time in primary auditory cortex. J Comput Neurosci, 19:199–221, 2005.
[17] S. Panzeri, R.S. Petersen, S.R. Schultz, M. Lebedev, and M.E. Diamond. The role of spike timing in the coding of stimulus location in rat somatosensory cortex. Neuron, 29:769–777, 2001.
[18] H.P. Saal, S. Vijayakumar, and R.S. Johansson. Information about complex fingertip parameters in individual human tactile afferent neurons. J Neurosci, 29:8022–8031, 2009.
[19] C.E. Shannon. A mathematical theory of communication. Bell Sys Tech J, 27:379–423, 1948.
[20] T. Toyoizumi, J.-P. Pfister, K. Aihara, and W. Gerstner. Generalized Bienenstock-Cooper-Munro rule for spiking neurons that maximizes information transmission. Proc Natl Acad Sci U S A, 102(14):5239–5244, 2005.
[21] A. Vallbo and R.S. Johansson. Properties of cutaneous mechanoreceptors in the human hand related to touch sensation. Hum Neurobiol, 3:3–14, 1984.
[22] R. VanRullen, R. Guyonneau, and S.J. Thorpe. Spike times make sense. Trends Neurosci, 28:1–4, 2005.
[23] J.D. Victor and K.P. Purpura. Nature and precision of temporal coding in visual cortex: a metric-space analysis. J Neurophysiol, 76:1310–1326, 1996.
2,909 | 3,637 | Nonparametric Bayesian Models for Unsupervised Event Coreference Resolution
Cosmin Adrian Bejan1 , Matthew Titsworth2 , Andrew Hickl2 , & Sanda Harabagiu1
1 Human Language Technology Research Institute, University of Texas at Dallas
2 Language Computer Corporation, Richardson, Texas
[email protected]
Abstract
We present a sequence of unsupervised, nonparametric Bayesian models for clustering complex linguistic objects. In this approach, we consider a potentially infinite number of features and categorical outcomes. We evaluated these models for
the task of within- and cross-document event coreference on two corpora. All the
models we investigated show significant improvements when compared against an
existing baseline for this task.
1 Introduction
In Natural Language Processing (NLP), the task of event coreference has numerous applications,
including question answering, multi-document summarization, and information extraction. Two
event mentions are coreferential if they share the same participants and spatio-temporal groundings.
Moreover, two event mentions are identical if they have the same causes and effects. For example,
the three documents listed in Table 1 contains four mentions of identical events but only the arrested,
apprehended, and arrest mentions from the documents 1 and 2 are coreferential. These definitions
were used in the tasks of Topic Detection and Tracking (TDT), as reported in [24].
Previous approaches to event coreference resolution [3] used the same lexeme or synonymy of the
verb describing the event to decide coreference. Event coreference was also tried by using the
semantic types of an ontology [17]. However, the features used by these approaches are hard to select
and require the design of domain specific constraints. To address this problems, we have explored
a sequence of unsupervised, nonparametric Bayesian models that are used to probabilistically infer
coreference clusters of event mentions from a collection of unlabeled documents. Our approach
is motivated by the recent success of unsupervised approaches for entity coreference resolution
[16, 22, 25] and by the advantages of using a large amount of data at no cost.
One model was inspired by the fully generative Bayesian model proposed by Haghighi and Klein
[16] (henceforth, H&K). However, to employ H&K's model for tasks that require clustering
objects with rich linguistic features (such as event coreference resolution), or to extend this model in
order to enclose additional observable properties is a challenging task [22, 25]. In order to counter
this limitation, we make a conditional independence assumption between the observable features
and propose a generalized framework (Section 3) that is able to easily incorporate new features.
During the process of learning the model described in Section 3, it was observed that a large amount
of time was required to incorporate and tune new features. This lead us to the challenge of creating a
framework which considers an unbounded number of features where the most relevant are selected
automatically. To accomplish this new goal, we propose two novel approaches (Section 4). The
first incorporates a Markov Indian Buffet Process (mIBP) [30] into a Hierarchical Dirichlet Process
(HDP) [28]. The second uses an Infinite Hidden Markov Model (iHMM) [5] coupled to an Infinite
Factorial Hidden Markov Model (iFHMM) [30].
In this paper, we focus on event coreference resolution, though adaptations for event identity resolution can be easily made. We evaluated the models on the ACE 2005 event corpus [18] and on a new
annotated corpus encoding within- and cross-document event coreference information (Section 5).
1
Document 1: San Diego Chargers receiver Vincent Jackson was arrested on suspicion of drunk driving on
Tuesday morning, five days before a key NFL playoff game.
...
Police apprehended Jackson in San Diego at 2:30 a.m. and booked him for the misdemeanour before his
release.
Document 2: Despite his arrest on suspicion of driving under the influence yesterday, Chargers receiver
Vincent Jackson will play in Sunday's AFC divisional playoff game at Pittsburgh.
Document 3: In another anti-piracy operation, Navy warship on Saturday repulsed an attack on a merchant
vessel in the Gulf of Aden and nabbed 23 Somali and Yemeni sea brigands.
Table 1: Examples of coreferential and identical events.
2 Event Coreference Resolution
Models for solving event coreference and event identity can lead to the generation of ad-hoc event
hierarchies from text. A sample of a hierarchy capturing coreferring and identical events, including
those from the example presented in Section 1, is illustrated in Figure 1.
[Figure 1 shows a three-level hierarchy: generic events at the top, events in the middle, and event mentions at the leaves. One arrest event node has the properties (Suspect: Vincent Jackson; Authorities: police; Time: Tuesday; Location: San Diego) and covers the arrested and apprehended mentions from Document 1 and the arrest mention from Document 2. A second arrest event node has the properties (Suspect: sea brigands; Authorities: Navy warship; Time: Saturday; Location: Gulf of Aden) and covers the nabbed mention from Document 3.]
Figure 1: A portion of the event hierarchy.
First, we introduce some basic notation.¹ Next, to cluster the mentions that share common event
properties (as shown in Figure 1), we briefly describe the linguistic features of event mentions.
2.1 Notation
As input for our models, we consider a collection of I documents, each document i having Ji event
mentions. Each event mention is characterized by L feature types, FT, and each feature type is
represented by a finite number of feature values, f v. Therefore, we can represent the observable
properties of an event mention, em, as a vector of pairs ⟨(FT_1 : fv_{1i}), …, (FT_L : fv_{Li})⟩, where each
feature value index i ranges in the feature value space associated with a feature type.
2.2 Linguistic Features
We consider the following set of features associated with an event mention:²
Lexical Features (LF) To capture the lexical context of an event mention, we extract the following
features: the head word of the mention (HW), the lemma of the HW (HL), lemmas of left and right
words of the mention (LHL , RHL), and lemmas of left and right mentions (LHE , RHE).
Class Features (CF) These features aim to classify mentions into several types of classes: the
mention HW's part-of-speech (POS), the word class of the HW (HWC), which can take one of the
following values ⟨verb, noun, adjective, other⟩, and the event class of the mention (EC). To extract
the event class associated to every event mention, we employed the event identifier described in [6].
WordNet Features (WF) We build three types of clusters over all the words from WordNet [9]
and use them as features for the mention HW. The first cluster type associates a unique id with each
(word:HWC) pair (WNW). The second cluster type uses the transitive closure of the synonymy
relations to group words from WordNet (WNS). Finally, the third cluster type uses as grouping
criterion the category from the WordNet lexicographer's files that is associated with each word (WNL). For
cases when a new word does not belong to any of these WordNet clusters, we create a new cluster
with a new id for each of the three cluster types.
Semantic Features (SF) To extract features that characterize participants and properties of event
mentions, we use a semantic parser [8] trained on PropBank (PB) [23] and FrameNet (FN) [4] corpora. (For instance, for the apprehended mention from our example, Jackson is the feature value for the A0 PB argument³ and the SUSPECT frame element (FEA0) of the ARREST frame.) Another semantic feature is the semantic frame (FR) that is evoked by an event mention. (For instance, all the
emphasized mentions from our example evoke the ARREST frame from FN.)
¹ For consistency, we try to preserve the notation of the original models.
² In this subsection and the following section, the feature term is used in the context of a feature type.
Feature Combinations (FC) We also explore various combinations of features presented above.
Examples include HW+POS, HL+FR, FE+A1, etc.
3 Finite Feature Models
In this section, we present a sequence of HDP mixture models for solving event coreference. For this
type of approach, a Dirichlet Process (DP) [10] is associated with each document, and each mixture
component, which in our case corresponds to an event, is shared across documents. To describe
these models, we consider Z the set of indicator random variables for indices of events, φ_z the set
of parameters associated with an event z, θ a notation for all model parameters, and X a notation for
all random variables that represent observable features.
Given a document collection annotated with event mentions, the goal is to find the best assignment
of event indices, Z*, which maximizes the posterior probability P(Z | X). In a Bayesian approach,
this probability is computed by integrating out all model parameters:

P(Z | X) = ∫ P(Z, θ | X) dθ = ∫ P(Z | X, θ) P(θ | X) dθ
In order to describe our modifications, we first revisit a basic model from the set of models described
in H&K's paper.
3.1 The One Feature Model
The one feature model, HDP1f , constitutes the simplest representation of an HDP model. In this
model, which is depicted graphically in Figure 2(a), the observable components are characterized
by only one feature. The distribution over events associated to each document ? is generated by a
Dirichlet process with a concentration parameter ? > 0. Since this setting enables a clustering of
event mentions at the document level, it is desirable that events are shared across documents and
the number of events K is inferred from data. To ensure this flexibility, a global nonparametric
DP prior with a hyperparameter ? and a global base measure H can be considered for ? [28]. The
global distribution drawn from this DP prior, denoted as ? 0 in Figure 2(a), encodes the event mixing
weights. Thus, same global events are used for each document, but each event has a document
specific distribution ?i that is drawn from a DP prior centered on ?0 .
To infer the true posterior probability P(Z | X), we follow [28] in using a Gibbs sampling algorithm [12] based on the direct assignment sampling scheme. In this sampling scheme, the β and θ
parameters are integrated out analytically. The formula for sampling an event index for mention j
from document i, Z_{i,j}, is given by:⁴

P(Z_{i,j} | Z^{−i,j}, HL) ∝ P(Z_{i,j} | Z^{−i,j}) P(HL_{i,j} | Z, HL^{−i,j})

where HL_{i,j} is the head lemma of the event mention j from the document i.
First, in the generative process of an event mention, an event index z is sampled by using a mechanism that facilitates sampling from a prior for infinite mixture models called the Chinese Restaurant
Franchise (CRF) representation [28]:

P(Z_{i,j} = z | Z^{−i,j}, β_0) ∝ { α · β_0^u,         if z = z_new
                                 { n_z + α · β_0^z,   otherwise

Here, n_z is the number of event mentions with the event index z, z_new is a new event index not used
already in Z^{−i,j}, β_0^z are the global mixing proportions associated with the K events, and β_0^u is the
weight for the unknown mixture component.
Then, to generate the mention head lemma (in this model, X = ⟨HL⟩), the event z is associated with
a multinomial emission distribution over the HL feature values having the parameters θ = ⟨θ_Z^{hl}⟩.
We assume that this emission distribution is drawn from a symmetric Dirichlet distribution with
concentration λ_HL:

P(HL_{i,j} = hl | Z, HL^{−i,j}) ∝ n_{hl,z} + λ_HL

where HL_{i,j} is the head lemma of mention j from document i, and n_{hl,z} is the number of times
the feature value hl has been associated with the event index z in (Z, HL^{−i,j}). We also apply
Lidstone's smoothing method to this distribution.
³ A0 annotates in PB a specific type of semantic role which represents the AGENT, the DOER, or the ACTOR
of a specific event. Another PB argument is A1, which plays the role of the PATIENT, the THEME, or the
EXPERIENCER of an event.
⁴ Z^{−i,j} represents a notation for Z \ {Z_{i,j}}.
[Figure 2 residue: four plate diagrams over the global parameters (H, γ, α, β_0, β), the event indicators Z_i, and observed features such as HL_i, FR_i, POS_i and X_i, replicated over J_i mentions and I documents; only the caption is recoverable.]
Figure 2: Graphical representation of four HDP models. Each node corresponds to a random variable. In
particular, shaded nodes denote observable variables. Each rectangle captures the replication of the structure
it contains. The number of replications is indicated in the bottom-right corner of the rectangle. The model
depicted in (a) is an HDP model using one feature; the model in (b) employs HL and FR features; (c) illustrates
a flat representation of a limited number of features in a generalized framework (henceforth, HDPflat); and (d)
captures a simple example of a structured network topology of three feature variables (henceforth, HDPstruct).
The dependencies involving parameters θ and λ in models (b), (c), and (d) are omitted for clarity.
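To make the sampling step of this one feature model concrete, here is a minimal Python sketch of the collapsed Gibbs update, combining the CRF prior with the smoothed emission predictive. The data structures (n_z, beta0, emiss) and the treatment of β_0 as fixed between updates are our own illustrative assumptions, not the implementation used in the experiments; a full sampler would also resample β_0 as in [28].

import random

def sample_event_index(hl, n_z, beta0, beta0_u, emiss, alpha, lam, vocab_size):
    """One collapsed Gibbs draw of Z_{i,j} in the HDP1f model (sketch).

    n_z[z]    -- number of mentions currently assigned to event z
    beta0[z]  -- global mixing proportion of event z; beta0_u is the
                 weight of the unrepresented component
    emiss[z]  -- dict mapping head lemmas to emission counts for event z
    """
    candidates, weights = [], []
    for z in n_z:
        prior = n_z[z] + alpha * beta0[z]          # CRF term for an existing event
        total = sum(emiss[z].values())
        likelihood = (emiss[z].get(hl, 0) + lam) / (total + lam * vocab_size)
        candidates.append(z)
        weights.append(prior * likelihood)
    # a brand-new event: prior alpha * beta0_u, uniform smoothed emission
    candidates.append("new")
    weights.append(alpha * beta0_u / vocab_size)
    return random.choices(candidates, weights=weights)[0]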
3.2 Adding More Features
A model in which observable components are represented only by one feature has the tendency to
cluster these components based on their feature value. To address this limitation, H&K proposed
a more complex model that is strictly customized for entity coreference resolution. On the other
hand, event coreference involves clustering complex objects characterized by richer features than
entity coreference (or topic detection), and therefore it is desirable to extend the HDP1f model with
a generalized model where additional features can be easily incorporated.
To facilitate this extension, we assume that feature variables are conditionally independent given Z.
This assumption considerably reduces the complexity of computing P (Z | X). For example, if we
want to incorporate another feature (e.g., FR) in the previous model, the formula becomes:

P(Z_{i,j} | HL, FR) ∝ P(Z_{i,j}) P(HL_{i,j}, FR_{i,j} | Z) = P(Z_{i,j}) P(HL_{i,j} | Z) P(FR_{i,j} | Z)
In this formula, we omit the conditioning components of Z, HL, and FR for clarity. The graphical
representation corresponding to this model is illustrated in Figure 2(b). In general, if X consists of
L feature variables, the inference formula for the Gibbs sampler is defined as:

P(Z_{i,j} | X) ∝ P(Z_{i,j}) ∏_{FT ∈ X} P(FT_{i,j} | Z)
The graphical model for this general setting is depicted in Figure 2(c). Drawing an analogy, the
graphical representation involving feature variables and Z variables resembles the graphical representation of a Naive Bayes classifier.
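The conditional independence assumption makes the likelihood a simple product over feature types, exactly as in a Naive Bayes classifier. A minimal sketch follows; the count dictionaries and the reuse of Lidstone smoothing per feature type are our own assumptions.

def mention_likelihood(mention, z, emiss, lam, vocab_sizes):
    """P(features of a mention | event z) under the conditional
    independence assumption of the generalized HDP model (sketch).

    mention      -- dict mapping feature type -> observed feature value
    emiss[ft][z] -- dict of feature value counts for type ft under event z
    """
    p = 1.0
    for ft, value in mention.items():
        counts = emiss[ft][z]
        total = sum(counts.values())
        p *= (counts.get(value, 0) + lam) / (total + lam * vocab_sizes[ft])
    return p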
When dependencies between feature variables exist (e.g., in our case, frame elements are dependent
on the semantic frames that define them, and frames are dependent on the words that evoke them),
various global distributions are involved in computing P(Z | X). For instance, for the model
depicted in Figure 2(d) the posterior probability is given by:

P(Z_{i,j} | X) ∝ P(Z_{i,j}) P(FR_{i,j} | HL_{i,j}, λ) ∏_{FT ∈ X} P(FT_{i,j} | Z)

In this model, P(FR_{i,j} | HL_{i,j}, λ) is a global distribution parameterized by λ, and the feature
variables considered are X = ⟨HL, POS, FR⟩.
For all these extended models, we compute the prior and likelihood factors as described in the one
feature model. Also, following H&K, in the inference mechanism we assign soft counts for missing
features (e.g., unspecified PB argument).
4 Unbounded Feature Models
First, we present a generative model called the Markov Indian Buffet Process (mIBP) that provides a
mechanism in which each object can be represented by a sparse subset of a potentially unbounded set
of latent features [15, 14, 30].5 Then, to overcome the limitations regarding the number of mixture
components and the number of features associated with objects, we combine this mechanism with
an HDP model to form an mIBP-HDP hybrid. Finally, to account for temporal dependencies, we
employ an mIBP extension, called the Infinite Factorial Hidden Markov Model (iFHMM) [30], in
combination with an Infinite Hidden Markov Model (iHMM) to form the iFHMM-iHMM model.
4.1 The Markov Indian Buffet Process
As described in [30], the mIBP defines a distribution over an unbounded set of binary Markov chains,
where each chain can be associated to a binary latent feature that evolves over time according to
Markov dynamics. Specifically, if we denote by M the total number of feature chains and by T
the number of observable components (event mentions), the mIBP defines a probability distribution
over a binary matrix F with T rows, which correspond to observations, and an unbounded number
of columns (M → ∞), which correspond to features. An observation y_t contains a subset from
the unbounded set of features {f^1, f^2, …, f^M} that is represented in the matrix by a binary vector
F_t = ⟨F_t^1, F_t^2, …, F_t^M⟩, where F_t^i = 1 indicates that f^i is associated with y_t.
Therefore, F decomposes the observations and represents them as feature factors, which can then
be associated with hidden variables in an iFHMM as depicted in Figure 3(a). The transition matrix of
a binary Markov chain associated with a feature f^m is defined as

W^(m) = ( 1 − a_m   a_m )
        ( 1 − b_m   b_m )

where W_{ij}^{(m)} = P(F_{t+1}^m = j | F_t^m = i), the parameters a_m ∼ Beta(α′/M, 1) and b_m ∼ Beta(γ′, δ′),
and the initial state F_0^m = 0. In the generative process, the hidden variable of feature f^m for an
object y_t is drawn as F_t^m ∼ Bernoulli(a_m^{1−F_{t−1}^m} · b_m^{F_{t−1}^m}).
To compute the probability of the feature matrix F,⁶ in which the parameters a and b are integrated
out analytically, we use the counting variables c_m^{00}, c_m^{01}, c_m^{10}, and c_m^{11} to record the 0→0, 0→1,
1→0, and 1→1 transitions f^m has made in the binary chain m. The stochastic process that derives
the probability distribution in terms of these variables is defined as follows. The first component
samples a number of Poisson(α′) features. In general, depending on the value that was sampled in
the previous step (t−1), a feature f^m is sampled for the t-th component according to the following
probabilities:

P(F_t^m = 1 | F_{t−1}^m = 0) = c_m^{01} / (c_m^{00} + c_m^{01})
P(F_t^m = 1 | F_{t−1}^m = 1) = (c_m^{11} + γ′) / (γ′ + δ′ + c_m^{10} + c_m^{11})
The t-th component then repeats the same mechanism for sampling the next features until it finishes
the current number of sampled features M. After all features are sampled for the t-th component,
a number of Poisson(α′/t) new features are assigned for this component and M gets incremented
accordingly.
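The following sketch illustrates one step of this stochastic process for a single component; the count bookkeeping, the guard for chains with no observed 0→ transitions, and starting new chains in state 1 are our own simplifications of [30].

import numpy as np

def sample_features(t, F_prev, counts, alpha_p, gamma_p, delta_p, rng):
    """Sample F_t, the feature vector of the t-th (1-indexed) component (sketch).

    F_prev    -- binary vector F_{t-1} over the M chains seen so far
    counts[m] -- transition counts {'c00','c01','c10','c11'} of chain m
    """
    F_t = []
    for m, prev in enumerate(F_prev):
        c = counts[m]
        if prev == 0:
            denom = c["c00"] + c["c01"]
            p1 = c["c01"] / denom if denom > 0 else 0.0   # guard: our assumption
        else:
            p1 = (c["c11"] + gamma_p) / (gamma_p + delta_p + c["c10"] + c["c11"])
        F_t.append(int(rng.random() < p1))
    F_t += [1] * rng.poisson(alpha_p / t)                 # Poisson(alpha'/t) new chains
    return F_t

# usage: rng = np.random.default_rng(); F_t = sample_features(3, F_prev, counts, 1.0, 0.5, 0.5, rng)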
4.2 The mIBP-HDP Model
One direct application of the mIBP is to integrate it into the HDP models proposed in Section 3. In
this way, the new nonparametric extension will have the benefits of capturing uncertainty regarding
the number of mixture components that are characterized by a potentially infinite number of features.
Since one observable component is associated with an unbounded countable set of features, we have
to provide a mechanism in which only a finite set of features will represent the component in the
HDP inference process.
⁵ In this section, a feature is represented by a (feature type : feature value) pair.
⁶ Technical details for computing this probability are described in [30].
[Figure 3 residue: two lattice diagrams over hidden states S_0, S_1, …, S_T, binary feature chains F^1, F^2, …, F^M, and observations Y_1, …, Y_T; only the caption is recoverable.]
Figure 3: (a) The Infinite Factorial Hidden Markov Model. (b) The iFHMM-iHMM model. (M → ∞)
The idea behind this mechanism is to use slice sampling⁷ [21] in order to derive a finite set of
features for y_t. Letting q_m be the number of times feature f^m was sampled in the mIBP, and v_t an
auxiliary variable for y_t such that v_t ∼ Uniform(1, max{q_m | F_t^m = 1}), we define the finite feature
set B_t for the observation y_t as:

B_t = {f^m | F_t^m = 1 ∧ q_m ≥ v_t}

The finiteness of this feature set is based on the observation that, in the generative process of the
mIBP, only a finite set of features is sampled for a component. Another observation worth mentioning
regarding the way this set is constructed is that only the most representative features of y_t
get selected in B_t.
⁷ The idea of using this procedure is inspired from [29], where a slice variable was used to sample a finite
number of state trajectories in the iHMM.
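A minimal sketch of this selection step, assuming the observation has at least one active feature:

import random

def representative_features(F_t, q):
    """Slice-sampled finite feature set B_t for one observation (sketch).

    F_t  -- binary feature vector of observation y_t
    q[m] -- number of times feature f^m was sampled in the mIBP
    """
    active = [m for m, on in enumerate(F_t) if on]
    v_t = random.uniform(1, max(q[m] for m in active))    # auxiliary variable v_t
    return {m for m in active if q[m] >= v_t}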
4.3 The iFHMM-iHMM Model
The iFHMM is a nonparametric Bayesian factor model that extends the Factorial Hidden Markov
Model (FHMM) [13] by letting the number of parallel Markov chains M be learned from data.
Although the iFHMM allows a more flexible representation of the latent structure, it cannot be
used as a framework where the number of clustering components K is infinite. On the other hand,
the iHMM represents a nonparametric extension of the Hidden Markov Model (HMM) [27] that
allows performing inference on an infinite number of states K. In order to further increase the
representational power for modeling discrete time series data, we propose a nonparametric extension
that combines the best of the two models, and lets the parameters M and K be learned from data.
Each step in the new generative process, whose graphical representation is depicted in Figure 3(b),
is performed in two phases: (i) the latent feature variables from the iFHMM framework are sampled
using the mIBP mechanism; and (ii) the features sampled so far, which become observable during
this second phase, are used in an adapted beam sampling algorithm [29] to infer the clustering
components (or, in our case, latent events).
To describe the beam sampler for event coreference resolution, we introduce additional notation.
We denote by (s_1, …, s_T) the sequence of hidden states corresponding to the sequence of event
mentions (y_1, …, y_T), where each state s_t belongs to one of the K events, s_t ∈ {1, …, K}, and
each mention y_t is represented by a sequence of latent features ⟨F_t^1, F_t^2, …, F_t^M⟩. One element of
the transition probability matrix π is defined as π_{ij} = P(s_t = j | s_{t−1} = i), and a mention y_t is generated
according to a likelihood model F that is parameterized by a state-dependent parameter θ_{s_t} (y_t |
s_t ∼ F(θ_{s_t})). The observation parameters θ are drawn iid from a prior base distribution H.
The beam sampling algorithm combines the ideas of slice sampling and dynamic programming for
an efficient sampling of state trajectories. Since in time series models the transition probabilities
have independent priors [5], Van Gael and colleagues [29] also used the HDP mechanism to allow couplings across transitions. For sampling the whole hidden state trajectory s, this algorithm
employs a forward filtering-backward sampling technique.
In the forward step of our implementation, we sample the feature variables using the mIBP as
described in Section 4.1, and the auxiliary variable u_t ∼ Uniform(0, π_{s_{t−1} s_t}) for each mention y_t.
As explained in [29], the auxiliary variables u are used to filter only those trajectories s for which
π_{s_{t−1} s_t} ≥ u_t for all t. Also, in this step, we compute the probabilities P(s_t | y_{1:t}, u_{1:t}) for all t as
described in [29]:

P(s_t | y_{1:t}, u_{1:t}) ∝ P(y_t | s_t) Σ_{s_{t−1} : u_t < π_{s_{t−1} s_t}} P(s_{t−1} | y_{1:t−1}, u_{1:t−1})

Here, the dependencies involving parameters π and θ are omitted for clarity.
In the backward step, we first sample the event for the last state s_T directly from P(s_T | y_{1:T}, u_{1:T})
and then, for all t = T−1, …, 1, we sample each state s_t given s_{t+1} by using the formula
P(s_t | s_{t+1}, y_{1:T}, u_{1:T}) ∝ P(s_t | y_{1:t}, u_{1:t}) P(s_{t+1} | s_t, u_{t+1}).
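Under an assumed data layout (nested Python lists for π and the per-state likelihoods), the forward pass can be sketched as follows; the uniform initial distribution and the fixed truncation of K events are our own assumptions.

def forward_filter(K, pi, like, u):
    """Forward pass of the beam sampler conditioned on slice variables u (sketch).

    pi[i][j]   -- transition probability between events i and j
    like[t][k] -- P(y_t | s_t = k) under the current emission parameters
    """
    probs = [[1.0 / K] * K]                    # P(s_0): uniform, an assumption
    for t in range(len(u)):
        row = []
        for j in range(K):
            # only predecessors whose transition survives the slice contribute
            s = sum(probs[-1][i] for i in range(K) if pi[i][j] > u[t])
            row.append(like[t][j] * s)
        norm = sum(row) or 1.0
        probs.append([r / norm for r in row])
    return probs[1:]                           # P(s_t | y_{1:t}, u_{1:t}) for t = 1..T

The backward pass then samples s_T, …, s_1 from these filtered distributions exactly as in the formula above.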
To sample the emission distribution θ efficiently, and to ensure that each mention is characterized
by a finite set of representative features, we set the base distribution H to be conjugate with the
data distribution F in a Dirichlet-multinomial model with the sufficient statistics of the multinomial
distribution (o_1, …, o_K) defined as:

o_k = Σ_{t=1}^T Σ_{f^m ∈ B_t} n_{mk}

where n_{mk} counts how many times feature f^m was sampled for event k, and B_t stores a finite set
of features for y_t as defined in Section 4.2.
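Computed literally, the statistic sums the feature-event counts over every mention's representative set; a short sketch (the dictionary layout is our assumption):

def emission_stats(B, n, K):
    """Sufficient statistics (o_1, ..., o_K) of the Dirichlet-multinomial
    emission model, computed literally from the formula above (sketch).

    B[t]      -- representative feature set B_t of mention t
    n[(m, k)] -- how many times feature f^m was sampled for event k
    """
    o = [0] * K
    for feats in B:
        for k in range(K):
            o[k] += sum(n.get((m, k), 0) for m in feats)
    return o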
5 Evaluation
Event Coreference Data One corpus used for evaluation is ACE 2005 [18]. This corpus annotates
within-document coreference information of specific types of events (such as Conflict, Justice, and
Life). After an initial processing phase, we extracted from ACE 6553 event mentions and 4946
events. To increase the diversity of events and to evaluate the models for both within- and cross-document event coreference, we created the EventCorefBank corpus (ECB).⁸ This new corpus contains 43 topics, 1744 event mentions, 1302 within-document events, and 339 cross-document events.
For a more realistic approach, we trained the models on all the event mentions from the two corpora
and not only on the mentions manually annotated for event coreference (the true event mentions). In
this regard, we ran the event identifier described in [6] on the ACE and ECB corpora, and extracted
45289 and 21175 system mentions respectively.
The Experimental Setup Table 2 lists the recall (R), precision (P), and F-score (F) of our experiments averaged over 5 runs of the generative models. Since there is no agreement on the best
coreference resolution metric, we employed four metrics for our evaluation: the link-based MUC
metric [31], the mention-based B³ metric [2], the entity-based CEAF metric [19], and the pairwise
F1 (PW) metric. In the evaluation process, we considered only the true mentions of the ACE test
dataset and of the test sets of a 5-fold cross validation scheme on the ECB dataset. For evaluating
the cross-document coreference annotations, we adopted the same approach as described in [3] by
merging all the documents from the same topic into a meta-document and then scoring this document as performed for within-document evaluation. Also, for both corpora, we considered a set of
132 feature types, where each feature type consists on average of 3900 distinct feature values.
The Baseline A simple baseline for event coreference consists in grouping events by their event
classes [1]. To extract event classes, we employed the event identifier described in [6]. Therefore,
this baseline will categorize events into a small number of clusters, since the event identifier is
trained to predict the five event classes annotated in TimeBank [26]. As it was already observed
[20, 11], considering very few categories for coreference resolution tasks will result in overestimates
of the MUC scorer. For instance, a baseline that groups all entity mentions into the same entity
achieves a higher MUC score than any published system for the task of entity coreference. Similar
behaviour of the MUC metric is observed for event coreference resolution. For example, for cross-document evaluation on ECB, a baseline that clusters all mentions into one event achieves 73.2%
MUC F-score, while the baseline listed in Table 2 achieves 72.9% MUC F-score.
HDP Extensions Due to memory limitations, we evaluated the HDPflat and HDPstruct models
only on a restricted subset of manually selected feature types. In general, as shown in Table 2,
the HDPflat model achieved the best performance results on the ACE test dataset, whereas the
⁸ This resource is available at http://www.hlt.utdallas.edu/~ady. The annotation process is described in [7].
Model        |      MUC       |       B3       |      CEAF      |       PW
             |  R    P    F   |  R    P    F   |  R    P    F   |  R    P    F
ACE (within-document event coreference)
Baseline     | 94.3 33.1 49.0 | 97.9 25.0 39.9 | 14.7 64.4 24.0 | 93.5  8.2 15.2
HDP1f (HL)   | 62.2 43.1 50.9 | 86.0 70.6 77.5 | 62.3 76.4 68.6 | 50.5 27.7 35.8
HDPflat      | 53.5 54.2 53.9 | 83.4 84.2 83.8 | 76.9 76.5 76.7 | 43.3 47.1 45.1
HDPstruct    | 61.9 49.0 54.7 | 86.2 76.9 81.3 | 69.0 77.5 73.0 | 53.2 38.1 44.4
mIBP-HDP     | 48.7 41.9 45.1 | 81.7 76.4 79.0 | 68.8 73.8 71.2 | 37.4 28.9 32.6
iFHMM-iHMM   | 48.7 48.8 48.7 | 81.9 82.2 82.1 | 74.6 74.5 74.5 | 37.2 39.0 38.1
ECB (within-document event coreference)
Baseline     | 92.2 39.8 55.6 | 97.7 55.8 71.0 | 44.5 80.1 57.2 | 93.7 25.4 39.8
HDP1f (HL)   | 46.9 54.8 50.4 | 84.3 89.0 86.5 | 83.4 79.6 81.4 | 36.6 53.4 42.6
HDPflat      | 37.8 92.9 53.4 | 82.1 99.2 89.8 | 93.9 78.2 85.3 | 27.0 92.4 41.3
HDPstruct    | 47.4 82.7 60.1 | 84.3 97.1 90.2 | 92.7 81.1 86.5 | 34.4 83.0 48.6
mIBP-HDP     | 38.2 68.8 48.9 | 82.1 95.3 88.2 | 90.3 78.5 84.0 | 26.5 67.9 37.7
iFHMM-iHMM   | 39.5 85.2 53.9 | 82.5 98.1 89.6 | 93.1 78.8 85.3 | 29.4 86.6 43.7
ECB (cross-document event coreference)
Baseline     | 90.5 61.1 72.9 | 93.8 49.6 64.9 | 36.6 72.7 48.7 | 90.7 28.6 43.3
HDP1f (HL)   | 47.7 70.5 56.8 | 67.0 86.2 75.3 | 76.2 57.1 65.2 | 34.9 58.9 43.5
HDPflat      | 44.4 95.3 60.5 | 65.0 98.7 78.3 | 86.9 56.0 68.0 | 29.2 95.1 44.4
HDPstruct    | 51.9 89.5 65.7 | 69.3 95.8 80.4 | 86.2 60.1 70.8 | 37.5 85.6 52.1
mIBP-HDP     | 40.0 79.8 53.2 | 63.1 94.1 75.5 | 82.7 54.6 65.7 | 26.1 77.0 38.9
iFHMM-iHMM   | 48.4 89.0 62.7 | 67.0 96.4 79.0 | 85.5 58.0 69.1 | 33.3 88.3 48.2
Table 2: Evaluation results for within- and cross-document event coreference resolution.
HDPstruct model, which also considers dependencies between feature types, proved to be more
effective on the ECB dataset for both within- and cross-document event coreference evaluation. The
set of feature types used to achieve these results consists of combinations of types from all feature
categories described in Section 2.2. For the results of the HDPstruct model listed in Table 2, we also
explored the conditional dependencies between the HL, FR, and FEA types.
As can be observed from Table 2, the results of the HDPflat and HDPstruct models show an F-score
increase of 4-10% over the HDP1f model, and therefore prove that the HDP extensions provide a
more flexible representation for clustering objects characterized by rich properties.
mIBP-HDP In spite of its advantage of working with a potentially infinite number of features in an
HDP framework, the mIBP-HDP model did not achieve a satisfactory performance in comparison
with the other proposed models. However, the results were obtained by automatically selecting
only 2% of distinct feature values from the entire set of values extracted from both corpora. When
compared with the restricted set of features considered by the HDPflat and HDPstruct models, the
percentage of values selected by mIBP-HDP is only 6%. A future research area for improving this
model is to consider other distributions for automatic selection of salient feature values.
iFHMM-iHMM In spite of the automatic feature selection employed for the iFHMM-iHMM model,
its results remain competitive against the results of the HDP extensions (where the feature types
were hand-tuned). As shown in Table 2, most of the iFHMM-iHMM results fall in between the
HDPflat and HDPstruct models. Also, these results indicate that the iFHMM-iHMM model is a
better framework than HDP for capturing the event mention dependencies simulated by the mIBP
feature sampling scheme. Similar to the mIBP-HDP model, to achieve these results, the iFHMM-iHMM model uses only 2% of the values from the entire set of distinct feature values. For the experiments
of the iFHMM-iHMM results reported in Table 2, we set α′ = 50, γ′ = 0.5, and δ′ = 0.5.
6 Conclusion
In this paper, we have described how a sequence of unsupervised, nonparametric Bayesian models
can be employed to cluster complex linguistic objects that are characterized by a rich set of features.
The experimental results proved that these models are able to solve real data applications in which
the feature and cluster numbers are treated as free parameters, and the selection of features is performed automatically. While the results of event coreference resolution are promising, we believe
that the classes of models proposed in this paper have a real utility for a wide range of applications.
References
[1] David Ahn. 2006. The stages of event extraction. In Proceedings of the Workshop on Annotating and Reasoning about Time and Events, pages 1-8.
[2] Amit Bagga and Breck Baldwin. 1998. Algorithms for Scoring Coreference Chains. In Proc. of LREC.
[3] Amit Bagga and Breck Baldwin. 1999. Cross-Document Event Coreference: Annotations, Experiments, and Observations. In Proceedings of the ACL-99 Workshop on Coreference and its Applications.
[4] Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of COLING-ACL.
[5] Matthew J. Beal, Zoubin Ghahramani, and Carl Edward Rasmussen. 2002. The Infinite Hidden Markov Model. In Proceedings of NIPS.
[6] Cosmin Adrian Bejan. 2007. Deriving Chronological Information from Texts through a Graph-based Algorithm. In Proceedings of FLAIRS-2007.
[7] Cosmin Adrian Bejan and Sanda Harabagiu. 2008. A Linguistic Resource for Discovering Event Structures and Resolving Event Coreference. In Proceedings of LREC-2008.
[8] Cosmin Adrian Bejan and Chris Hathaway. 2007. UTD-SRL: A Pipeline Architecture for Extracting Frame Semantic Structures. In Proceedings of SemEval-2007.
[9] Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press.
[10] Thomas S. Ferguson. 1973. A Bayesian Analysis of Some Nonparametric Problems. The Annals of Statistics, 1(2):209-230.
[11] Jenny Rose Finkel and Christopher D. Manning. 2008. Enforcing Transitivity in Coreference Resolution. In Proceedings of ACL/HLT-2008, pages 45-48.
[12] Stuart Geman and Donald Geman. 1984. Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721-741.
[13] Z. Ghahramani and M. Jordan. 1997. Factorial Hidden Markov Models. Machine Learning, 29:245-273.
[14] Zoubin Ghahramani, T. L. Griffiths, and Peter Sollich. 2007. Bayesian Statistics 8, chapter Bayesian nonparametric latent feature models, pages 201-225. Oxford University Press.
[15] Tom Griffiths and Zoubin Ghahramani. 2006. Infinite Latent Feature Models and the Indian Buffet Process. In Proceedings of NIPS, pages 475-482.
[16] Aria Haghighi and Dan Klein. 2007. Unsupervised Coreference Resolution in a Nonparametric Bayesian Model. In Proceedings of the ACL.
[17] Kevin Humphreys, Robert Gaizauskas, and Saliha Azzam. 1997. Event Coreference for Information Extraction. In Proceedings of the Workshop on Operational Factors in Practical, Robust Anaphora Resolution for Unrestricted Texts, 35th Meeting of ACL, pages 75-81.
[18] LDC-ACE05. 2005. ACE (Automatic Content Extraction) English Annotation Guidelines for Events.
[19] X. Luo. 2005. On Coreference Resolution Performance Metrics. In Proceedings of EMNLP.
[20] X. Luo, A. Ittycheriah, H. Jing, N. Kambhatla, and S. Roukos. 2004. A Mention-Synchronous Coreference Resolution Algorithm Based On the Bell Tree. In Proceedings of ACL-2004.
[21] Radford M. Neal. 2003. Slice Sampling. The Annals of Statistics, 31:705-741.
[22] Vincent Ng. 2008. Unsupervised Models for Coreference Resolution. In Proceedings of EMNLP.
[23] Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An Annotated Corpus of Semantic Roles. Computational Linguistics, 31(1):71-105.
[24] Ron Papka. 1999. On-line New Event Detection, Clustering and Tracking. Ph.D. thesis, Department of Computer Science, University of Massachusetts.
[25] Hoifung Poon and Pedro Domingos. 2008. Joint Unsupervised Coreference Resolution with Markov Logic. In Proceedings of EMNLP.
[26] J. Pustejovsky, P. Hanks, R. Sauri, A. See, R. Gaizauskas, A. Setzer, D. Radev, B. Sundheim, D. Day, L. Ferro, and M. Lazo. 2003. The TimeBank Corpus. In Corpus Linguistics, pages 647-656.
[27] Lawrence R. Rabiner. 1989. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. In Proceedings of the IEEE, pages 257-286.
[28] Yee Whye Teh, Michael Jordan, Matthew Beal, and David Blei. 2006. Hierarchical Dirichlet Processes. Journal of the American Statistical Association, 101(476):1566-1581.
[29] Jurgen Van Gael, Yunus Saatci, Yee Whye Teh, and Zoubin Ghahramani. 2008. Beam Sampling for the Infinite Hidden Markov Model. In Proceedings of ICML, pages 1088-1095.
[30] Jurgen Van Gael, Yee Whye Teh, and Zoubin Ghahramani. 2008. The Infinite Factorial Hidden Markov Model. In Proceedings of NIPS.
[31] Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A Model-Theoretic Coreference Scoring Scheme. In Proceedings of MUC-6, pages 45-52.
2,910 | 3,638 | Indian Buffet Processes with Power-law Behavior
Yee Whye Teh and Dilan Görür
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR, United Kingdom
{ywteh,dilan}@gatsby.ucl.ac.uk
Abstract
The Indian buffet process (IBP) is an exchangeable distribution over binary matrices used in Bayesian nonparametric featural models. In this paper we propose
a three-parameter generalization of the IBP exhibiting power-law behavior. We
achieve this by generalizing the beta process (the de Finetti measure of the IBP) to
the stable-beta process and deriving the IBP corresponding to it. We find interesting relationships between the stable-beta process and the Pitman-Yor process (another stochastic process used in Bayesian nonparametric models with interesting
power-law properties). We derive a stick-breaking construction for the stable-beta
process, and find that our power-law IBP is a good model for word occurrences in
document corpora.
1 Introduction
The Indian buffet process (IBP) is an infinitely exchangeable distribution over binary matrices with
a finite number of rows and an unbounded number of columns [1, 2]. It has been proposed as a
suitable prior for Bayesian nonparametric featural models, where each object (row) is modeled with
a potentially unbounded number of features (columns). Applications of the IBP include Bayesian
nonparametric models for ICA [3], choice modeling [4], similarity judgements modeling [5], dyadic
data modeling [6] and causal inference [7].
In this paper we propose a three-parameter generalization of the IBP with power-law behavior. Using
the usual analogy of customers entering an Indian buffet restaurant and sequentially choosing dishes
from an infinitely long buffet counter, our generalization with parameters α > 0, c > −σ and
σ ∈ [0, 1) is simply as follows:
- Customer 1 tries Poisson(α) dishes.
- Subsequently, customer n + 1:
    - tries dish k with probability (m_k − σ)/(n + c), for each dish that has previously been tried;
    - tries Poisson(α · Γ(1+c)Γ(n+c+σ) / (Γ(n+1+c)Γ(c+σ))) new dishes.
where m_k is the number of previous customers who tried dish k. The dishes and the customers
correspond to the columns and the rows of the binary matrix respectively, with an entry of the matrix
being one if the corresponding customer tried the dish (and zero otherwise). The mass parameter α
controls the total number of dishes tried by the customers, the concentration parameter c controls
the number of customers that will try each dish, and the stability exponent σ controls the power-law
behavior of the process. When σ = 0 the process does not exhibit power-law behavior and reduces
to the usual two-parameter IBP [2].
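The process above is straightforward to simulate. Below is a minimal Python sketch of the generative process; the function and variable names, and the use of numpy's random generator, are implementation conveniences, not part of the model.

import math
import numpy as np

def stable_beta_ibp(n_customers, alpha, c, sigma, rng=None):
    """Sample dish counts from the three-parameter IBP above (sketch).

    Returns m, where m[k] is the number of customers who have tried dish k.
    """
    rng = rng or np.random.default_rng()
    lg = math.lgamma
    m = []
    for n in range(n_customers):                 # customer n+1 arrives, n previous
        for k in range(len(m)):                  # previously tried dishes
            if rng.random() < (m[k] - sigma) / (n + c):
                m[k] += 1
        rate = alpha * math.exp(lg(1 + c) + lg(n + c + sigma)
                                - lg(n + 1 + c) - lg(c + sigma))
        m += [1] * rng.poisson(rate)             # new dishes
    return m

For σ = 0 this reduces to the usual two-parameter IBP; increasing σ shifts mass towards dishes tried by few customers.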
Many naturally occurring phenomena exhibit power-law behavior, and it has been argued that using
models that can capture this behavior can improve learning [8]. Recent examples where this has led
to significant improvements include unsupervised morphology learning [8], language modeling [9]
and image segmentation [10]. These examples are all based on the Pitman-Yor process [11, 12, 13],
a generalization of the Dirichlet process [14] with power-law properties. Our generalization of the
IBP extends the ability to model power-law behavior to featural models, and we expect it to lead to
a wealth of novel applications not previously well handled by the IBP.
The approach we take in this paper is to first define the underlying de Finetti measure, then to derive
the conditional distributions of Bernoulli process observations with the de Finetti measure integrated
out. This automatically ensures that the resulting power-law IBP is infinitely exchangeable. We call
the de Finetti measure of the power-law IBP the stable-beta process. It is a novel generalization of
the beta process [15] (which is the de Finetti measure of the normal two-parameter IBP [16]) with
characteristics reminiscent of the stable process [17, 11] (in turn related to the Pitman-Yor process).
We will see that the stable-beta process has a number of properties similar to the Pitman-Yor process.
In the following section we first give a brief description of completely random measures, a class of
random measures which includes the stable-beta and the beta processes. In Section 3 we introduce
the stable-beta process, a three-parameter generalization of the beta process, and derive the power-law IBP based on the stable-beta process. Based on the proposed model, in Section 4 we construct
a model of word occurrences in a document corpus. We conclude with a discussion in Section 5.
2 Completely Random Measures
In this section we give a brief description of completely random measures [18]. Let Θ be a measure
space with Ω its σ-algebra. A random variable whose values are measures on (Θ, Ω) is referred
to as a random measure. A completely random measure (CRM) μ over (Θ, Ω) is a random measure
such that μ(A) ⊥ μ(B) for all disjoint measurable subsets A, B ∈ Ω. That is, the (random)
masses assigned to disjoint subsets are independent. An important implication of this property is
that the whole distribution over μ is determined (with usually satisfied technical assumptions) once
the distributions of μ(A) are given for all A ∈ Ω.
CRMs can always be decomposed into a sum of three independent parts: a (non-random) measure,
an atomic measure with fixed atoms but random masses, and an atomic measure with random atoms
and masses. CRMs in this paper will only contain the second and third components. In this case we
can write μ in the form,

μ = Σ_{k=1}^N u_k δ_{φ_k} + Σ_{l=1}^M v_l δ_{ψ_l},   (1)
where u_k, v_l > 0 are the random masses, φ_k ∈ Θ are the fixed atoms, ψ_l ∈ Θ are the random atoms,
and N, M ∈ ℕ ∪ {∞}. To describe μ fully it is sufficient to specify N and {φ_k}, and to describe the
joint distribution over the random variables {u_k}, {v_l}, {ψ_l} and M. Each u_k has to be independent
from everything else and has some distribution F_k. The random atoms and their weights {v_l, ψ_l}
are jointly drawn from a 2D Poisson process over (0, ∞] × Θ with some nonatomic rate measure
Λ called the Lévy measure. The rate measure Λ has to satisfy a number of technical properties; see
[18, 19] for details. If ∫∫_{(0,∞]×Θ} Λ(du × dθ) = M* < ∞ then the number of random atoms M in μ
is Poisson distributed with mean M*, otherwise there are an infinite number of random atoms. If μ
is described by Λ and {φ_k, F_k}_{k=1}^N as above, we write,

μ ∼ CRM(Λ, {φ_k, F_k}_{k=1}^N).   (2)

3 The Stable-beta Process
In this section we introduce a novel CRM called the stable-beta process (SBP). It has no fixed atoms
while its Lévy measure is defined over (0, 1) × Θ:

Λ_0(du × dθ) = α · Γ(1+c) / (Γ(1−σ)Γ(c+σ)) · u^{−σ−1} (1−u)^{c+σ−1} du H(dθ)   (3)
where the parameters are: a mass parameter α > 0, a concentration parameter c > −σ, a stability
exponent 0 ≤ σ < 1, and a smooth base distribution H. The mass parameter controls the overall
mass of the process and the base distribution gives the distribution over the random atom locations.
The mean of the SBP can be shown to be E[μ(A)] = αH(A) for each A ∈ Ω, while var(μ(A)) =
α(1−σ)/(1+c) · H(A). Thus the concentration parameter and the stability exponent both affect the variability
of the SBP around its mean. The stability exponent also governs the power-law behavior of the SBP.
When σ = 0 the SBP does not have power-law behavior and reduces to a normal two-parameter beta
process [15, 16]. When c = 1 − σ the stable-beta process describes the random atoms with masses
< 1 in a stable process [17, 11]. The SBP is so named as it can be seen as a generalization of both
the stable and the beta processes. Both the concentration parameter and the stability exponent can
be generalized to functions over Θ though we will not deal with this generalization here.
3.1 Posterior Stable-beta Process
Consider the following hierarchical model:

μ ∼ CRM(Λ_0, {}),
Z_i | μ ∼ BernoulliP(μ)   iid, for i = 1, …, n.   (4)
The random measure μ is a SBP with no fixed atoms and with Lévy measure (3), while Z_i ∼
BernoulliP(μ) is a Bernoulli process with mean μ [16]. This is also a CRM: in a small neighborhood
dθ around θ ∈ Θ it has a probability μ(dθ) of having a unit mass atom in dθ; otherwise it does not
have an atom in dθ. If μ has an atom at θ the probability of Z_i having an atom at θ as well is μ({θ}).
If μ has a smooth component, say μ_0, Z_i will have random atoms drawn from a Poisson process
with rate measure μ_0. In typical applications to featural models the atoms in Z_i give the features
associated with data item i, while the weights of the atoms in μ give the prior probabilities of the
corresponding features occurring in a data item.
We are interested in both the posterior of μ given Z_1, …, Z_n, as well as the conditional distribution
of Z_{n+1} | Z_1, …, Z_n with μ marginalized out. Let θ*_1, …, θ*_K be the K unique atoms among
Z_1, …, Z_n with atom θ*_k occurring m_k times. Theorem 3.3 of [20] shows that the posterior of μ
given Z_1, …, Z_n is still a CRM, but now including fixed atoms given by θ*_1, …, θ*_K. Its updated
Lévy measure and the distribution of the mass at each fixed atom θ*_k are,

μ | Z_1, …, Z_n ∼ CRM(Λ_n, {θ*_k, F_{nk}}_{k=1}^K),   (5)

where

Λ_n(du × dθ) = α · Γ(1+c) / (Γ(1−σ)Γ(c+σ)) · u^{−σ−1} (1−u)^{n+c+σ−1} du H(dθ),   (6a)

F_{nk}(du) = Γ(n+c) / (Γ(m_k−σ)Γ(n−m_k+c+σ)) · u^{m_k−σ−1} (1−u)^{n−m_k+c+σ−1} du.   (6b)
Intuitively, the posterior is obtained as follows. Firstly, the posterior of μ must be a CRM since
both the prior of μ and the likelihood of each Z_i | μ factorize over disjoint subsets of Θ. Secondly,
μ must have fixed atoms at each θ*_k since otherwise the probability that there will be atoms among
Z_1, …, Z_n at precisely θ*_k is zero. The posterior mass at θ*_k is obtained by multiplying a Bernoulli
"likelihood" u^{m_k}(1−u)^{n−m_k} (since there are m_k occurrences of the atom θ*_k among Z_1, …, Z_n)
to the "prior" Λ_0(du × dθ*_k) in (3) and normalizing, giving us (6b). Finally, outside of these K atoms
there are no other atoms among Z_1, …, Z_n. We can think of this as n observations of 0 among n
iid Bernoulli variables, so a "likelihood" of (1−u)^n is multiplied into Λ_0 (without normalization),
giving the updated Lévy measure in (6a).
Let us inspect the distributions (6) of the fixed and random atoms in the posterior μ in turn. The
random mass at θ*_k has a distribution F_{nk} which is simply a beta distribution with parameters
(m_k − σ, n − m_k + c + σ). This differs from the usual beta process in the subtraction of σ from m_k and
addition of σ to n − m_k + c. This is reminiscent of the Pitman-Yor generalization to the Dirichlet
process [11, 12, 13], where a discount parameter is subtracted from the number of customers seated
around each table, and added to the chance of sitting at a new table. On the other hand, the Lévy
measure of the random atoms of μ is still a Lévy measure corresponding to an SBP with updated
parameters

α′ ← α · Γ(1+c)Γ(n+c+σ) / (Γ(n+1+c)Γ(c+σ)),   c′ ← c + n,   σ′ ← σ,   H′ ← H.   (7)
Note that the update depends only on n, not on Z_1, …, Z_n. In summary, the posterior of μ is simply
an independent sum of an SBP with updated parameters and of fixed atoms with beta distributed
masses. Observe that the posterior μ is not itself a SBP. In other words, the SBP is not conjugate
to Bernoulli process observations. This is different from the beta process and again reminiscent
of Pitman-Yor processes, where the posterior is also a sum of a Pitman-Yor process with updated
parameters and fixed atoms with random masses, but not a Pitman-Yor process [11]. Fortunately,
the non-conjugacy of the SBP does not preclude efficient inference. In the next subsections we describe an Indian buffet process and a stick-breaking construction corresponding to the SBP. Efficient
inference techniques based on both representations for the beta process can be straightforwardly
generalized to the SBP [1, 16, 21].
3.2 The Stable-beta Indian Buffet Process
We can derive an Indian buffet process (IBP) corresponding to the SBP by deriving, for each n,
the distribution of Z_{n+1} conditioned on Z_1, …, Z_n, with μ marginalized out. This derivation is
straightforward and follows closely that for the beta process [16]. For each of the atoms θ*_k the
posterior of μ(θ*_k) given Z_1, …, Z_n is beta distributed with mean (m_k − σ)/(n + c). Thus

p(Z_{n+1}(θ*_k) = 1 | Z_1, …, Z_n) = E[μ(θ*_k) | Z_1, …, Z_n] = (m_k − σ)/(n + c)   (8)
Metaphorically speaking, customer n + 1 tries dish k with probability (m_k − σ)/(n + c). Now for the random
atoms. Let θ ∈ Θ \ {θ*_1, …, θ*_K}. In a small neighborhood dθ around θ, we have:

p(Z_{n+1}(dθ) = 1 | Z_1, …, Z_n) = E[μ(dθ) | Z_1, …, Z_n] = ∫_0^1 u Λ_n(du × dθ)
  = ∫_0^1 u · α Γ(1+c) / (Γ(1−σ)Γ(c+σ)) · u^{−1−σ} (1−u)^{n+c+σ−1} du H(dθ)
  = α Γ(1+c) / (Γ(1−σ)Γ(c+σ)) · H(dθ) ∫_0^1 u^{−σ} (1−u)^{n+c+σ−1} du
  = α · Γ(1+c)Γ(n+c+σ) / (Γ(n+1+c)Γ(c+σ)) · H(dθ)   (9)

Since Z_{n+1} is completely random and H is smooth, the above shows that on Θ \ {θ*_1, …, θ*_K}
Z_{n+1} is simply a Poisson process with rate measure α · Γ(1+c)Γ(n+c+σ) / (Γ(n+1+c)Γ(c+σ)) · H. In particular, it will have
Poisson(α · Γ(1+c)Γ(n+c+σ) / (Γ(n+1+c)Γ(c+σ))) new atoms, each independently and identically distributed according to
H. In the IBP metaphor, this corresponds to customer n+1 trying new dishes, with each dish associated with a new draw from H. The resulting Indian buffet process is as described in the introduction.
It is automatically infinitely exchangeable since it was derived from the conditional distributions of
the hierarchical model (4).
Multiplying the conditional probabilities of each Z_n given previous ones together, we get the joint
probability of Z_1, …, Z_n with μ marginalized out:

p(Z_1, …, Z_n) = exp( −α Σ_{i=1}^n Γ(1+c)Γ(i−1+c+σ) / (Γ(i+c)Γ(c+σ)) ) ∏_{k=1}^K α · Γ(m_k−σ)Γ(n−m_k+c+σ)Γ(1+c) / (Γ(1−σ)Γ(c+σ)Γ(n+c)) · h(θ*_k),   (10)

where there are K atoms (dishes) θ*_1, …, θ*_K among Z_1, …, Z_n with atom k appearing m_k times,
and h is the density of H. (10) is to be contrasted with (4) in [1]. The K_h! terms in [1] are absent
as we have to distinguish among these K_h dishes in assigning each of them a distinct atom (this
also contributes the h(θ*_k) terms). The fact that (10) is invariant to permuting the ordering among
Z_1, …, Z_n also indicates the infinite exchangeability of the stable-beta IBP.
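Equation (10) is an explicit function of α, c and σ, so it can be evaluated stably with log-gamma functions; this is also all that is needed for the maximum likelihood fitting used later in Section 4. A minimal sketch, dropping the h(θ*_k) terms since they do not depend on the parameters:

import math

def log_joint(m, n, alpha, c, sigma):
    """log p(Z_1, ..., Z_n) from equation (10), up to the fixed h(theta_k*)
    terms (a sketch; function and argument names are our own).

    m -- list of dish counts m_k for the K dishes present in Z_1..Z_n
    """
    lg = math.lgamma
    # sum of the Poisson rates of new dishes for customers i = 1..n
    total_rate = sum(math.exp(lg(1 + c) + lg(i - 1 + c + sigma)
                              - lg(i + c) - lg(c + sigma)) for i in range(1, n + 1))
    ll = -alpha * total_rate
    for mk in m:
        ll += (math.log(alpha) + lg(mk - sigma) + lg(n - mk + c + sigma)
               + lg(1 + c) - lg(1 - sigma) - lg(c + sigma) - lg(n + c))
    return ll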
3.3 Stick-breaking constructions
In this section we describe stick-breaking constructions for the SBP generalizing those for the beta
process. The first is based on the size-biased ordering of atoms induced by the IBP [16], while
the second is based on the inverse Lévy measure method [22], and produces a sequence of random
atoms of strictly decreasing masses [21].
The size-biased construction is straightforward: we use the IBP to generate the atoms (dishes) in the
SBP; each time a dish is newly generated the atom is drawn from H and its mass from F_{nk}. This
leads to the following procedure:

for n = 1, 2, …:
    J_n ∼ Poisson(α · Γ(1+c)Γ(n−1+c+σ) / (Γ(n+c)Γ(c+σ)))
    for k = 1, …, J_n:
        v_{nk} ∼ Beta(1−σ, n−1+c+σ),   θ_{nk} ∼ H,

μ = Σ_{n=1}^∞ Σ_{k=1}^{J_n} v_{nk} δ_{θ_{nk}}.   (11)
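A truncated version of this construction is easy to sample; in the sketch below the base distribution H is taken to be uniform on [0, 1] purely for illustration, and the truncation level n_max is our own assumption.

import math
import numpy as np

def size_biased_sbp(n_max, alpha, c, sigma, rng=None):
    """Truncated size-biased construction of the SBP following eq. (11) (sketch)."""
    rng = rng or np.random.default_rng()
    lg = math.lgamma
    atoms = []
    for n in range(1, n_max + 1):
        rate = alpha * math.exp(lg(1 + c) + lg(n - 1 + c + sigma)
                                - lg(n + c) - lg(c + sigma))
        for _ in range(rng.poisson(rate)):
            v = rng.beta(1 - sigma, n - 1 + c + sigma)    # mass v_nk
            atoms.append((v, rng.random()))               # atom theta_nk ~ H
    return atoms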
The inverse Lévy measure method is a general method of generating from a Poisson process with a
non-uniform rate measure. It essentially transforms the Poisson process into one with uniform rate,
generates a sample, and transforms the sample back. This method is more involved for the
SBP because the inverse transform has no analytically tractable form. The Lévy measure Λ_0 of
the SBP factorizes into a product Λ_0(du × dθ) = L(du)H(dθ) of a σ-finite measure L(du) =
α · Γ(1+c) / (Γ(1−σ)Γ(c+σ)) · u^{−σ−1} (1−u)^{c+σ−1} du over (0, 1) and a probability measure H over Θ. This implies
that we can generate a sample {v_l, θ_l}_{l=1}^∞ of the random atoms of μ and their masses by first sampling
the masses {v_l}_{l=1}^∞ ∼ PoissonP(L) from a Poisson process on (0, 1) with rate measure L, and
associating each v_l with an iid draw θ_l ∼ H [19]. Now consider the mapping T : (0, 1) → (0, ∞)
given by

T(u) = ∫_u^1 L(du′) = ∫_u^1 α · Γ(1+c) / (Γ(1−σ)Γ(c+σ)) · u′^{−σ−1} (1−u′)^{c+σ−1} du′.   (12)
T is bijective and monotonically decreasing. The Mapping Theorem for Poisson processes [19]
shows that {v_l}_{l=1}^∞ ∼ PoissonP(L) if and only if {T(v_l)}_{l=1}^∞ ∼ PoissonP(Leb) where Leb is
Lebesgue measure on (0, ∞). A sample {t_l}_{l=1}^∞ ∼ PoissonP(Leb) can be easily drawn by letting
e_l ∼ Exponential(1) and setting t_l = Σ_{i=1}^l e_i for all l. Transforming back with v_l = T^{−1}(t_l),
we have {v_l}_{l=1}^∞ ∼ PoissonP(L). As t_1, t_2, … is an increasing sequence and T is decreasing,
v_1, v_2, … is a decreasing sequence of masses. Deriving the density of v_l given v_{l−1}, we get:

p(v_l | v_{l−1}) = |dt_l/dv_l| p(t_l | t_{l−1}) = α · Γ(1+c) / (Γ(1−σ)Γ(c+σ)) · v_l^{−σ−1} (1−v_l)^{c+σ−1} exp{ −∫_{v_l}^{v_{l−1}} L(du) }.   (13)
In general these densities do not simplify and we have to resort to solving for T ?1 (tl ) numerically.
There are two cases for which they do simplify. For c = 1, ? = 0, the density function reduces to
?
p(vl |vl?1 ) = ?vl??1 /vl?1
, leading to the stick-breaking construction of the single parameter IBP
[21]. In the stable process case when c = 1 ? ? and ? 6= 0, the density of vl simplifies to:
p(v_l | v_{l−1}) = α Γ(2−σ) / [Γ(1−σ)Γ(1)] v_l^{−σ−1} exp( −∫_{v_l}^{v_{l−1}} α Γ(2−σ) / [Γ(1−σ)Γ(1)] u^{−σ−1} du )
               = α(1−σ) v_l^{−σ−1} exp( −[α(1−σ)/σ] ( v_l^{−σ} − v_{l−1}^{−σ} ) ).   (14)

Doing a change of variables to y_l = v_l^{−σ}, we get:

p(y_l | y_{l−1}) = [α(1−σ)/σ] exp( −[α(1−σ)/σ] ( y_l − y_{l−1} ) ).   (15)

That is, each y_l is exponentially distributed with rate α(1−σ)/σ and offset by y_{l−1}. For general values of the parameters we do not have an analytic stick-breaking form. However, note that the weights generated using this method are still going to be strictly decreasing.
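In the stable case the construction can therefore be sampled in closed form. The sketch below is ours; it starts the random walk at the boundary value y_0 = v_0^{−σ} = 1 (i.e., v_0 = 1, the edge of the unit interval), which is an assumption of this illustration.

import numpy as np

def sbp_stable_sticks(alpha, sigma, L, seed=0):
    """L strictly decreasing masses for the stable case c = 1 - sigma, via (15)."""
    rng = np.random.default_rng(seed)
    rate = alpha * (1 - sigma) / sigma            # Exponential rate of y-increments
    y = 1.0 + np.cumsum(rng.exponential(scale=1.0 / rate, size=L))
    return y ** (-1.0 / sigma)                    # invert y_l = v_l^{-sigma}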
3.4  Power-law Properties

The SBP has a number of appealing power-law properties. In this section we shall assume σ > 0, since the case σ = 0 reduces the SBP to the usual beta process, with less interesting power-law properties. Derivations are given in the appendix.
Figure 1: Power-law properties of the stable-beta Indian buffet process. (Left: mean number of dishes tried versus number of customers, for α = 1, c = 1 and σ = 0, 0.2, 0.5, 0.8. Right: number of dishes versus number of customers trying each dish, for α = 1, c = 1, σ = 0.5. Both panels on log-log axes.)
Firstly, the total number of dishes tried by n customers is O(n^σ). The left panel of Figure 1 shows this for varying σ. Secondly, the number of customers trying each dish follows a Zipf's law [23]. This is shown in the right panel of Figure 1, which plots the number of dishes K_m versus the number of customers m trying each dish (that is, K_m is the number of dishes k for which m_k = m). Asymptotically we can show that the proportion of dishes tried by m customers is O(m^{−1−σ}). Note that these power-laws are similar to those observed for Pitman-Yor processes. One aspect of the SBP which is not power-law is the number of dishes each customer tries, which is simply Poisson(α) distributed. It seems difficult to obtain power-law behavior in this aspect within a CRM framework, because of the fundamental role played by the Poisson process.
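Since the expected total number of dishes is simply the sum of the Poisson rates appearing in (11), the O(n^σ) growth is easy to check numerically; the following small sketch is our own:

import numpy as np
from scipy.special import gammaln

def expected_dishes(alpha, c, sigma, n):
    """E[K] after n customers: sum of the Poisson rates of new dishes in (11)."""
    i = np.arange(1, n + 1)
    log_rates = (np.log(alpha) + gammaln(1 + c) + gammaln(i - 1 + c + sigma)
                 - gammaln(i + c) - gammaln(c + sigma))
    return np.exp(log_rates).sum()

# For sigma = 0.5 the ratio expected_dishes(1, 1, 0.5, 4 * n) /
# expected_dishes(1, 1, 0.5, n) approaches 4 ** 0.5 = 2 as n grows.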
4  Word Occurrence Models with Stable-beta Processes

In this section we use the SBP as a model for word occurrences in document corpora. Let n be the number of documents in a corpus. Let Z_i({θ}) = 1 if word type θ occurs in document i and 0 otherwise, and let μ({θ}) be the occurrence probability of word type θ among the documents in the corpus. We use the hierarchical model (4) with a SBP prior¹ on μ and with each document modeled as a conditionally independent Bernoulli process draw. The joint distribution over the word occurrences Z_1, ..., Z_n, with μ integrated out, is given by the IBP joint probability (10).

We applied the word occurrence model to the 20newsgroups dataset. Following [16], we modeled the training documents in each of the 20 newsgroups as a separate corpus with a separate SBP. We use the popularity of each word type across all 20 newsgroups as the base distribution²: for each word type θ, let n_θ be the number of documents containing θ and let H({θ}) ∝ n_θ.

In the first experiment we compared the SBP to the beta process by fitting the parameters α, c and σ of both models to each newsgroup by maximum likelihood (in the beta process case σ is fixed at 0). We expect the SBP to perform better, as it is better able to capture the power-law statistics of the document corpora (see Figure 2). The ML values of the parameters across classes did not vary much, taking values α = 142.6 ± 40.0, c = 4.1 ± 0.9 and σ = 0.47 ± 0.1. In comparison, the parameter values obtained by the beta process are α = 147.3 ± 41.4 and c = 25.9 ± 8.4. Note that the estimated values for c are significantly larger than for the SBP, to allow the beta process to model the fact that many words occur in a small number of documents (a consequence of the power-law statistics of word occurrences; see Figure 2). We also plotted the characteristics of data simulated from the models using the estimated ML parameters. The SBP has a much better fit than the beta process to the power-law properties of the corpora.
¹ Words are discrete objects. To get a smooth base distribution we imagine appending each word type with a U[0, 1] variate. This does not affect the modelling that follows.
² The appropriate technique, as proposed by [16], would be to use a hierarchical SBP to tie the word occurrence probabilities across the newsgroups. However, due to difficulties dealing with atomic base distributions, we cannot define a hierarchical SBP easily (see discussion).
Figure 2: Power-law properties of the 20newsgroups dataset. (Left: cumulative number of words versus number of documents; right: number of words versus number of documents per word, on log-log axes; curves for BP, SBP, and the data.) The faint dashed lines are the distributions of words in the documents in each class, the solid curve is the mean of these lines. The dashed lines are the means of the word distributions generated by the ML parameters for the beta process (pink) and the SBP (green).
Table 1: Classification performance of SBP and beta process (BP). The jth column (denoted 1:j) shows the cumulative rank j classification accuracy of the test documents. The three numbers after the models are the percentages of training, validation and test sets respectively.

assigned to classes:    1            1:2          1:3          1:4          1:5
BP  - 20/20/60          78.7(±0.5)   87.4(±0.2)   91.3(±0.2)   95.1(±0.2)   96.2(±0.2)
SBP - 20/20/60          79.9(±0.5)   87.6(±0.1)   91.5(±0.2)   93.7(±0.2)   95.1(±0.2)
BP  - 60/20/20          85.5(±0.6)   91.6(±0.3)   94.2(±0.3)   95.6(±0.4)   96.6(±0.3)
SBP - 60/20/20          85.5(±0.4)   91.9(±0.4)   94.4(±0.2)   95.6(±0.3)   96.6(±0.3)
In the second experiment we tested the two models on categorizing test documents into one of the 20 newsgroups. Since this is a discriminative task, we optimized the parameters in both models to maximize the cumulative ranked classification performance. The rank j classification performance is defined to be the percentage of documents where the true label is among the top j predicted classes (as determined by the IBP conditional probabilities of the documents under each of the 20 newsgroup classes). As the cost function is not differentiable, we did a grid search over the parameter space, using 20 values of α, c and σ each, and found the parameters maximizing the objective function on a validation set separate from the test set. To see the effect of sample size on model performance we tried splitting the documents in each newsgroup into 20% training, 20% validation and 60% test sets, and into 60% training, 20% validation and 20% test sets. We repeated the experiment five times with different random splits of the dataset. The ranked classification rates are shown in Table 1. Figure 3 shows that the SBP model has generally higher classification performance than the beta process.
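For concreteness, the cumulative rank-j accuracy used above can be computed as follows (our sketch; log_probs and labels are hypothetical arrays holding the per-class IBP conditional log probabilities and the true newsgroup indices):

import numpy as np

def cumulative_rank_accuracy(log_probs, labels, j_max=5):
    """Fraction of documents whose true class is among the top j predictions."""
    order = np.argsort(-log_probs, axis=1)               # classes, best first
    rank = np.argmax(order == labels[:, None], axis=1)   # 0-based rank of truth
    return np.array([(rank < j).mean() for j in range(1, j_max + 1)])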
5  Discussion

We have introduced a novel stochastic process called the stable-beta process. The stable-beta process is a generalization of the beta process, and can be used in nonparametric Bayesian featural models with an unbounded number of features. As opposed to the beta process, the stable-beta process has a number of appealing power-law properties. We developed both an Indian buffet process and a stick-breaking construction for the stable-beta process and applied it to modeling word occurrences in document corpora. We expect the stable-beta process to find uses modeling a range of natural phenomena with power-law properties.
Figure 3: Differences between the classification rates of the SBP and the beta process (on the order of 10^{−3}), by class order. The performance of the SBP was consistently higher than that of the beta process for each of the five runs.
We derived the stable-beta process as a completely random measure with Lévy measure (3). It would be interesting and illuminating to try to derive it as an infinite limit of finite models; however, we were not able to do so in our initial attempts. A related question is whether there is a natural definition of the stable-beta process for non-smooth base distributions. Until this is resolved in the positive, we are not able to define hierarchical stable-beta processes generalizing the hierarchical beta processes [16].

Another avenue of research we are currently pursuing is in deriving better stick-breaking constructions for the stable-beta process. The current construction requires inverting the integral (12), which is expensive as it requires an iterative method which evaluates the integral numerically within each iteration.
Acknowledgement
We thank the Gatsby Charitable Foundation for funding, Romain Thibaux, Peter Latham and Tom
Griffiths for interesting discussions, and the anonymous reviewers for help and feedback.
A  Derivation of Power-law Properties

We will make large n and K assumptions here, and make use of Stirling's approximation Γ(n+1) ≈ √(2πn) (n/e)^n, which is accurate in the larger n regime. The expected number of dishes is,

E[K] = Σ_{i=1}^n α Γ(1+c)Γ(i−1+c+σ) / [Γ(i+c)Γ(c+σ)]
     ≈ O( Σ_{i=1}^n √(2π(i+c+σ−1)) ((i+c+σ−1)/e)^{i+c+σ−1} / [ √(2π(i+c)) ((i+c)/e)^{i+c} ] )
     = O( Σ_{i=1}^n e^{−σ+1} (1 + (σ−1)/(i+c))^{i+c} (i+c+σ−1)^{σ−1} )
     = O( Σ_{i=1}^n e^{−σ+1} e^{σ−1} i^{σ−1} ) = O(n^σ).   (16)
We are interested in the joint distribution of the statistics (K_1, ..., K_n), where K_m is the number of dishes tried by exactly m customers and where there are a total of n customers in the restaurant. As there are K! / (Π_{m=1}^n K_m!) · Π_{m=1}^n ( n! / [m!(n−m)!] )^{K_m} configurations of the IBP with the same statistics (K_1, ..., K_n), we have (ignoring constant terms and collecting terms in (10) with m_k = m),

p(K_1, ..., K_n | n) ∝ K! / (Π_{m=1}^n K_m!) · Π_{m=1}^n ( n!/[m!(n−m)!] · α Γ(m−σ)Γ(n−m+c+σ)Γ(1+c) / [Γ(1−σ)Γ(c+σ)Γ(n+c)] )^{K_m}.   (17)

Conditioning on K = Σ_{m=1}^n K_m as well, we see that (K_1, ..., K_n) is multinomial, with the probability of a dish having m customers being proportional to the term in large parentheses. For large m (and even larger n), this probability simplifies to,

O( Γ(m−σ) / Γ(m+1) ) = O( √(2π(m−1−σ)) ((m−1−σ)/e)^{m−1−σ} / [ √(2πm) (m/e)^m ] ) = O( m^{−1−σ} ).   (18)
References

[1] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems, volume 18, 2006.
[2] Z. Ghahramani, T. L. Griffiths, and P. Sollich. Bayesian nonparametric latent feature models (with discussion and rejoinder). In Bayesian Statistics, volume 8, 2007.
[3] D. Knowles and Z. Ghahramani. Infinite sparse factor analysis and infinite independent components analysis. In International Conference on Independent Component Analysis and Signal Separation, volume 7 of Lecture Notes in Computer Science. Springer, 2007.
[4] D. Görür, F. Jäkel, and C. E. Rasmussen. A choice model with infinitely many latent features. In Proceedings of the International Conference on Machine Learning, volume 23, 2006.
[5] D. J. Navarro and T. L. Griffiths. Latent features in similarity judgment: A nonparametric Bayesian approach. Neural Computation, in press 2008.
[6] E. Meeds, Z. Ghahramani, R. M. Neal, and S. T. Roweis. Modeling dyadic data with binary latent factors. In Advances in Neural Information Processing Systems, volume 19, 2007.
[7] F. Wood, T. L. Griffiths, and Z. Ghahramani. A non-parametric Bayesian method for inferring hidden causes. In Proceedings of the Conference on Uncertainty in Artificial Intelligence, volume 22, 2006.
[8] S. Goldwater, T. L. Griffiths, and M. Johnson. Interpolating between types and tokens by estimating power-law generators. In Advances in Neural Information Processing Systems, volume 18, 2006.
[9] Y. W. Teh. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 985–992, 2006.
[10] E. Sudderth and M. I. Jordan. Shared segmentation of natural scenes using dependent Pitman-Yor processes. In Advances in Neural Information Processing Systems, volume 21, 2009.
[11] M. Perman, J. Pitman, and M. Yor. Size-biased sampling of Poisson point processes and excursions. Probability Theory and Related Fields, 92(1):21–39, 1992.
[12] J. Pitman and M. Yor. The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. Annals of Probability, 25:855–900, 1997.
[13] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. Journal of the American Statistical Association, 96(453):161–173, 2001.
[14] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1(2):209–230, 1973.
[15] N. L. Hjort. Nonparametric Bayes estimators based on beta processes in models for life history data. Annals of Statistics, 18(3):1259–1294, 1990.
[16] R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet process. In Proceedings of the International Workshop on Artificial Intelligence and Statistics, volume 11, pages 564–571, 2007.
[17] M. Perman. Random Discrete Distributions Derived from Subordinators. PhD thesis, Department of Statistics, University of California at Berkeley, 1990.
[18] J. F. C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78, 1967.
[19] J. F. C. Kingman. Poisson Processes. Oxford University Press, 1993.
[20] Y. Kim. Nonparametric Bayesian estimators for counting processes. Annals of Statistics, 27(2):562–588, 1999.
[21] Y. W. Teh, D. Görür, and Z. Ghahramani. Stick-breaking construction for the Indian buffet process. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 11, 2007.
[22] R. L. Wolpert and K. Ickstadt. Simulations of Lévy random fields. In Practical Nonparametric and Semiparametric Bayesian Statistics, pages 227–242. Springer-Verlag, 1998.
[23] G. Zipf. Selective Studies and the Principle of Relative Frequency in Language. Harvard University Press, Cambridge, MA, 1932.
with minimal penalties
Sylvain Arlot ?
CNRS ; Willow Project-Team
Laboratoire d?Informatique de
l?Ecole Normale Superieure
(CNRS/ENS/INRIA UMR 8548)
23, avenue d?Italie, F-75013 Paris, France
[email protected]
Francis Bach ?
INRIA ; Willow Project-Team
Laboratoire d?Informatique de
l?Ecole Normale Superieure
(CNRS/ENS/INRIA UMR 8548)
23, avenue d?Italie, F-75013 Paris, France
[email protected]
Abstract
This paper tackles the problem of selecting among several linear estimators in
non-parametric regression; this includes model selection for linear regression, the
choice of a regularization parameter in kernel ridge regression or spline smoothing, and the choice of a kernel in multiple kernel learning. We propose a new
algorithm which first estimates consistently the variance of the noise, based upon
the concept of minimal penalty which was previously introduced in the context of
model selection. Then, plugging our variance estimate in Mallows' C_L penalty
is proved to lead to an algorithm satisfying an oracle inequality. Simulation experiments with kernel ridge regression and multiple kernel learning show that the
proposed algorithm often improves significantly existing calibration procedures
such as 10-fold cross-validation or generalized cross-validation.
1  Introduction
Kernel-based methods are now well-established tools for supervised learning, allowing to perform
various tasks, such as regression or binary classification, with linear and non-linear predictors [1, 2].
A central issue common to all regularization frameworks is the choice of the regularization parameter: while most practitioners use cross-validation procedures to select such a parameter, data-driven
procedures not based on cross-validation are rarely used. The choice of the kernel, a seemingly
unrelated issue, is also important for good predictive performance: several techniques exist, either
based on cross-validation, Gaussian processes or multiple kernel learning [3, 4, 5].
In this paper, we consider least-squares regression and cast these two problems as the problem of
selecting among several linear estimators, where the goal is to choose an estimator with a quadratic
risk which is as small as possible. This problem includes for instance model selection for linear
regression, the choice of a regularization parameter in kernel ridge regression or spline smoothing,
and the choice of a kernel in multiple kernel learning (see Section 2).
The main contribution of the paper is to extend the notion of minimal penalty [6, 7] to all discrete
classes of linear operators, and to use it for defining a fully data-driven selection algorithm satisfying
a non-asymptotic oracle inequality. Our new theoretical results presented in Section 4 extend similar results which were limited to unregularized least-squares regression (i.e., projection operators).
Finally, in Section 5, we show that our algorithm improves the performances of classical selection
procedures, such as GCV [8] and 10-fold cross-validation, for kernel ridge regression or multiple
kernel learning, for moderate values of the sample size.
* http://www.di.ens.fr/~arlot/
† http://www.di.ens.fr/~fbach/

2  Linear estimators
In this section, we define the problem we aim to solve and give several examples of linear estimators.
2.1  Framework and notation
Let us assume that one observes

Y_i = f(x_i) + ε_i ∈ R,   for i = 1, ..., n,

where ε_1, ..., ε_n are i.i.d. centered random variables with E[ε_i²] = σ² unknown, f is an unknown measurable function X → R and x_1, ..., x_n ∈ X are deterministic design points. No assumption is made on the set X. The goal is to reconstruct the signal F = (f(x_i))_{1≤i≤n} ∈ R^n, with some estimator F̂ ∈ R^n, depending only on (x_1, Y_1), ..., (x_n, Y_n), and having a small quadratic risk n^{−1} ‖F̂ − F‖_2^2, where for all t ∈ R^n we denote by ‖t‖_2 the ℓ_2-norm of t, defined as ‖t‖_2^2 := Σ_{i=1}^n t_i².

In this paper, we focus on linear estimators F̂ that can be written as a linear function of Y = (Y_1, ..., Y_n) ∈ R^n, that is, F̂ = AY, for some (deterministic) n × n matrix A. Here and in the rest of the paper, vectors such as Y or F are assumed to be column vectors. We present in Section 2.2 several important families of estimators of this form. The matrix A may depend on x_1, ..., x_n (which are known and deterministic), but not on Y, and may be parameterized by certain quantities, usually a regularization parameter or kernel combination weights.
2.2  Examples of linear estimators

In this paper, our theoretical results apply to matrices A which are symmetric positive semi-definite, such as the ones defined below.

Ordinary least-squares regression / model selection. If we consider linear predictors from a design matrix X ∈ R^{n×p}, then F̂ = AY with A = X(X^⊤X)^{−1}X^⊤, which is a projection matrix (i.e., A^⊤A = A); F̂ = AY is often called a projection estimator. In the variable selection setting, one wants to select a subset J ⊂ {1, ..., p}, and matrices A are parameterized by J.
Kernel ridge regression / spline smoothing. We assume that a positive definite kernel k : X × X → R is given, and we are looking for a function f : X → R in the associated reproducing kernel Hilbert space (RKHS) F, with norm ‖·‖_F. If K denotes the n × n kernel matrix, defined by K_ab = k(x_a, x_b), then the ridge regression estimator (a.k.a. spline smoothing estimator for spline kernels [9]) is obtained by minimizing with respect to f ∈ F [2]:

(1/n) Σ_{i=1}^n (Y_i − f(x_i))² + λ ‖f‖_F².

The unique solution is equal to f̂ = Σ_{i=1}^n α_i k(·, x_i), where α = (K + nλI)^{−1} Y. This leads to the smoothing matrix A_λ = K(K + nλI_n)^{−1}, parameterized by the regularization parameter λ ∈ R_+.
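As an illustration (ours, not from the paper), the smoothing matrix and the associated degrees of freedom df(λ) = tr(A_λ), which play a central role below, can be computed directly:

import numpy as np

def ridge_smoother(K, lam):
    """A_lambda = K (K + n*lambda*I_n)^{-1} for a kernel matrix K."""
    n = K.shape[0]
    return K @ np.linalg.inv(K + n * lam * np.eye(n))

def degrees_of_freedom(A):
    """df = tr(A_lambda), the effective dimensionality used in Section 3."""
    return np.trace(A)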
Multiple kernel learning / Group Lasso / Lasso. We now assume that we have p different kernels k_j, feature spaces F_j and feature maps Φ_j : X → F_j, j = 1, ..., p. The group Lasso [10] and multiple kernel learning [11, 5] frameworks consider the following objective function:

J(f_1, ..., f_p) = (1/n) Σ_{i=1}^n ( y_i − Σ_{j=1}^p ⟨f_j, Φ_j(x_i)⟩ )² + 2λ Σ_{j=1}^p ‖f_j‖_{F_j} = L(f_1, ..., f_p) + 2λ Σ_{j=1}^p ‖f_j‖_{F_j}.

Note that when Φ_j(x) is simply the j-th coordinate of x ∈ R^p, we get back the penalization by the ℓ_1-norm and thus the regular Lasso [12].
Using a^{1/2} = min_{b>0} (1/2){ a/b + b }, we obtain a variational formulation of the sum of norms: 2 Σ_{j=1}^p ‖f_j‖ = min_{η ∈ R_+^p} Σ_{j=1}^p { ‖f_j‖²/η_j + η_j }. Thus, minimizing J(f_1, ..., f_p) with respect to (f_1, ..., f_p) is equivalent to minimizing with respect to η ∈ R_+^p (see [5] for more details):

min_{f_1,...,f_p} L(f_1, ..., f_p) + λ Σ_{j=1}^p ‖f_j‖²/η_j + λ Σ_{j=1}^p η_j = λ y^⊤ ( Σ_{j=1}^p η_j K_j + nλ I_n )^{−1} y + λ Σ_{j=1}^p η_j,

where I_n is the n × n identity matrix. Moreover, given η, this leads to a smoothing matrix of the form

A_{λ,η} = ( Σ_{j=1}^p η_j K_j ) ( Σ_{j=1}^p η_j K_j + nλ I_n )^{−1},   (1)

parameterized by the regularization parameter λ ∈ R_+ and the kernel combinations η in R_+^p; note that it depends only on λ^{−1}η, which can be grouped in a single parameter in R_+^p.
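The multiple kernel smoothing matrix of Eq. (1) is assembled the same way; a minimal sketch (ours):

import numpy as np

def mkl_smoother(Ks, eta, lam):
    """Eq. (1): A = (sum_j eta_j K_j)(sum_j eta_j K_j + n*lambda*I_n)^{-1}."""
    K_eta = sum(e * K for e, K in zip(eta, Ks))
    n = K_eta.shape[0]
    return K_eta @ np.linalg.inv(K_eta + n * lam * np.eye(n))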
Thus, the Lasso/group Lasso can be seen as particular (convex) ways of optimizing over η. In this paper, we propose a non-convex alternative with better statistical properties (oracle inequality in Theorem 1). Note that in our setting, finding the solution of the problem is hard in general since the optimization is not convex. However, while the model selection problem is by nature combinatorial, our optimization problems for multiple kernels are all differentiable and are thus amenable to gradient descent procedures, which only find local optima.

Non-symmetric linear estimators. Other linear estimators are commonly used, such as nearest-neighbor regression or the Nadaraya-Watson estimator [13]; those however lead to non-symmetric matrices A, and are not entirely covered by our theoretical results.
3  Linear estimator selection

In this section, we first describe the statistical framework of linear estimator selection and introduce the notion of minimal penalty.

3.1  Unbiased risk estimation heuristics
Usually, several estimators of the form F̂ = AY can be used. The problem that we consider in this paper is then to select one of them, that is, to choose a matrix A. Let us assume that a family of matrices (A_λ)_{λ∈Λ} is given (examples are shown in Section 2.2), hence a family of estimators (F̂_λ)_{λ∈Λ} can be used, with F̂_λ := A_λ Y. The goal is to choose from data some λ̂ ∈ Λ, so that the quadratic risk of F̂_λ̂ is as small as possible.

The best choice would be the oracle:

λ* ∈ arg min_{λ∈Λ} { n^{−1} ‖F̂_λ − F‖_2^2 },

which cannot be used since it depends on the unknown signal F. Therefore, the goal is to define a data-driven λ̂ satisfying an oracle inequality

n^{−1} ‖F̂_λ̂ − F‖_2^2 ≤ C_n inf_{λ∈Λ} { n^{−1} ‖F̂_λ − F‖_2^2 } + R_n,   (2)

with large probability, where the leading constant C_n should be close to 1 (at least for large n) and the remainder term R_n should be negligible compared to the risk of the oracle.

Many classical selection methods are built upon the "unbiased risk estimation" heuristics: If λ̂ minimizes a criterion crit(λ) such that

∀λ ∈ Λ,   E[crit(λ)] ≈ E[ n^{−1} ‖F̂_λ − F‖_2^2 ],

then λ̂ satisfies an oracle inequality such as in Eq. (2) with large probability. For instance, cross-validation [14, 15] and generalized cross-validation (GCV) [8] are built upon this heuristics.

One way of implementing this heuristics is penalization, which consists in minimizing the sum of the empirical risk and a penalty term, i.e., using a criterion of the form:

crit(λ) = n^{−1} ‖F̂_λ − Y‖_2^2 + pen(λ).

The unbiased risk estimation heuristics, also called Mallows' heuristics, then leads to the ideal (deterministic) penalty

pen_id(λ) := E[ n^{−1} ‖F̂_λ − F‖_2^2 ] − E[ n^{−1} ‖F̂_λ − Y‖_2^2 ].
When F̂_λ = A_λ Y, we have:

‖F̂_λ − F‖_2^2 = ‖(A_λ − I_n)F‖_2^2 + ‖A_λ ε‖_2^2 + 2⟨A_λ ε, (A_λ − I_n)F⟩,   (3)
‖F̂_λ − Y‖_2^2 = ‖F̂_λ − F‖_2^2 + ‖ε‖_2^2 − 2⟨ε, A_λ ε⟩ + 2⟨ε, (I_n − A_λ)F⟩,   (4)

where ε = Y − F ∈ R^n and, for all t, u ∈ R^n, ⟨t, u⟩ = Σ_{i=1}^n t_i u_i. Since ε is centered with covariance matrix σ² I_n, Eq. (3) and Eq. (4) imply that

pen_id(λ) = 2σ² tr(A_λ) / n,   (5)

up to the term −E[n^{−1}‖ε‖_2^2] = −σ², which can be dropped off since it does not vary with λ.

Note that df(λ) = tr(A_λ) is called the effective dimensionality or degrees of freedom [16], so that the ideal penalty in Eq. (5) is proportional to the dimensionality associated with the matrix A_λ; for projection matrices, we get back the dimension of the subspace, which is classical in model selection.

The expression of the ideal penalty in Eq. (5) led to several selection procedures, in particular Mallows' C_L (called C_p in the case of projection estimators) [17], where σ² is replaced by some estimator σ̂². The estimator of σ² usually used with C_L is based upon the value of the empirical risk at some λ_0 with df(λ_0) large; it has the drawback of overestimating the risk, in a way which depends on λ_0 and F [18]. GCV, which implicitly estimates σ², has the drawback of overfitting if the family (A_λ)_{λ∈Λ} contains a matrix too close to I_n [19]; GCV also overestimates the risk even more than C_L for most A_λ (see (7.9) and Table 4 in [18]).

In this paper, we define an estimator of σ² directly related to the selection task which does not have similar drawbacks. Our estimator relies on the concept of minimal penalty, introduced by Birgé and Massart [6] and further studied in [7].
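For reference, Mallows' C_L with a known (or separately estimated) σ² amounts to the following selection rule; this is our sketch, with smoothers a hypothetical list of precomputed matrices A_λ:

import numpy as np

def mallows_cl_choice(Y, smoothers, sigma2):
    """Minimize the empirical risk plus the ideal penalty 2*sigma^2*tr(A)/n."""
    n = len(Y)
    crit = [np.sum((A @ Y - Y) ** 2) / n + 2 * sigma2 * np.trace(A) / n
            for A in smoothers]
    return int(np.argmin(crit))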
3.2  Minimal and optimal penalties

We deduce from Eq. (3) the bias-variance decomposition of the risk:

E[ n^{−1} ‖F̂_λ − F‖_2^2 ] = n^{−1} ‖(A_λ − I_n)F‖_2^2 + tr(A_λ^⊤ A_λ) σ² / n = bias + variance,   (6)

and from Eq. (4) the expectation of the empirical risk:

E[ n^{−1} ( ‖F̂_λ − Y‖_2^2 − ‖ε‖_2^2 ) ] = n^{−1} ‖(A_λ − I_n)F‖_2^2 − [ 2 tr(A_λ) − tr(A_λ^⊤ A_λ) ] σ² / n.   (7)
Note that the variance term in Eq. (6) is not proportional to the effective dimensionality df(λ) = tr(A_λ) but to tr(A_λ^⊤ A_λ). Although several papers argue these terms are of the same order (for instance, they are equal when A_λ is a projection matrix), this may not hold in general. If A_λ is symmetric with a spectrum Sp(A_λ) ⊂ [0, 1], as in all the examples of Section 2.2, we only have

0 ≤ tr(A_λ^⊤ A_λ) ≤ tr(A_λ) ≤ 2 tr(A_λ) − tr(A_λ^⊤ A_λ) ≤ 2 tr(A_λ).   (8)
In order to give a first intuitive interpretation of Eq. (6) and Eq. (7), let us consider the kernel ridge
regression example and assume that the risk and the empirical risk behave as their expectations
in Eq. (6) and Eq. (7); see also Fig. 1. Completely rigorous arguments based upon concentration
inequalities are developed in [20] and summarized in Section 4, leading to the same conclusion as
the present informal reasoning.
First, as proved in [20], the bias n^{−1} ‖(A_λ − I_n)F‖_2^2 is a decreasing function of the dimensionality df(λ) = tr(A_λ), and the variance tr(A_λ^⊤ A_λ) σ² n^{−1} is an increasing function of df(λ), as well as 2 tr(A_λ) − tr(A_λ^⊤ A_λ). Therefore, Eq. (6) shows that the optimal λ realizes the best trade-off between bias (which decreases with df(λ)) and variance (which increases with df(λ)), which is a classical fact in model selection.

Second, the expectation of the empirical risk in Eq. (7) can be decomposed into the bias and a negative variance term which is the opposite of

pen_min(λ) := [ 2 tr(A_λ) − tr(A_λ^⊤ A_λ) ] σ² / n.   (9)
Figure 1: Bias-variance decomposition of the generalization error, and minimal/optimal penalties. (Curves plotted against the degrees of freedom tr(A): bias; variance = σ² tr(A²); generalization error ≈ bias + σ² tr(A²); empirical error − σ² ≈ bias + σ² tr(A²) − 2σ² tr(A).)
As suggested by the notation pen_min, we will show it is a minimal penalty in the following sense. If

∀C ≥ 0,   λ̂_min(C) ∈ arg min_{λ∈Λ} { n^{−1} ‖F̂_λ − Y‖_2^2 + C pen_min(λ) },

then, up to concentration inequalities that are detailed in Section 4.2, λ̂_min(C) behaves like a minimizer of

g_C(λ) = E[ n^{−1} ‖F̂_λ − Y‖_2^2 + C pen_min(λ) ] − n^{−1} σ² = n^{−1} ‖(A_λ − I_n)F‖_2^2 + (C − 1) pen_min(λ).

Therefore, two main cases can be distinguished:

- if C < 1, then g_C(λ) decreases with df(λ), so that df(λ̂_min(C)) is huge: λ̂_min(C) overfits.
- if C > 1, then g_C(λ) increases with df(λ) when df(λ) is large enough, so that df(λ̂_min(C)) is much smaller than when C < 1.

As a conclusion, pen_min(λ) is the minimal amount of penalization needed so that a minimizer λ̂ of a penalized criterion is not clearly overfitting.

Following an idea first proposed in [6] and further analyzed or used in several other papers such as [21, 7, 22], we now propose to use the fact that pen_min(λ) is a minimal penalty for estimating σ², and to plug this estimator into Eq. (5). This leads to the algorithm described in Section 4.1.

Note that the minimal penalty given by Eq. (9) is new; it generalizes previous results [6, 7] where pen_min(A_λ) = n^{−1} tr(A_λ) σ², because all A_λ were assumed to be projection matrices, i.e., A_λ^⊤ A_λ = A_λ. Furthermore, our results generalize the slope heuristics pen_id ≈ 2 pen_min (only valid for projection estimators [6, 7]) to general linear estimators, for which pen_id / pen_min ∈ (1, 2].
4  Main results
In this section, we first describe our algorithm and then present our theoretical results.
4.1  Algorithm

The following algorithm first computes an estimator Ĉ of σ² using the minimal penalty in Eq. (9), then considers the ideal penalty in Eq. (5) for selecting λ.

Input: Λ a finite set with Card(Λ) ≤ K n^κ for some K, κ ≥ 0, and matrices A_λ.
1. ∀C > 0, compute λ̂_0(C) ∈ arg min_{λ∈Λ} { ‖F̂_λ − Y‖_2^2 + C [ 2 tr(A_λ) − tr(A_λ^⊤ A_λ) ] }.
2. Find Ĉ such that df(λ̂_0(Ĉ)) ∈ [ n^{3/4}, n/10 ].
3. Select λ̂ ∈ arg min_{λ∈Λ} { ‖F̂_λ − Y‖_2^2 + 2 Ĉ tr(A_λ) }.
In the steps 1 and 2 of the above algorithm, in practice, a grid in log-scale is used, and our theoretical results from the next section suggest to use a step-size of order n^{−1/4}. Note that it may not be possible in all cases to find a C such that df(λ̂_0(C)) ∈ [n^{3/4}, n/10]; therefore, our condition in step 2 could be relaxed to finding a Ĉ such that for all C > Ĉ + δ, df(λ̂_0(C)) < n^{3/4}, and for all C < Ĉ − δ, df(λ̂_0(C)) > n/10, with δ = n^{−1/4+γ}, where γ > 0 is a small constant.

Alternatively, using the same grid in log-scale, we can select Ĉ with maximal jump between successive values of df(λ̂_0(C)); note that our theoretical result then does not entirely hold, as we show the presence of a jump around σ², but do not show the absence of similar jumps elsewhere.
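A compact sketch of the whole procedure, written by us rather than taken from the authors' code; it assumes the matrices A_λ are precomputed and that C_grid is an increasing log-scale grid, and it simply picks the first C whose selected df falls in the prescribed window:

import numpy as np

def minimal_penalty_select(Y, smoothers, C_grid):
    """Estimate sigma^2 via the minimal penalty, then select lambda (Sec. 4.1)."""
    n = len(Y)
    emp = np.array([np.sum((A @ Y - Y) ** 2) for A in smoothers])
    pen_min = np.array([2 * np.trace(A) - np.trace(A.T @ A) for A in smoothers])
    df = np.array([np.trace(A) for A in smoothers])
    # steps 1-2: df of the minimizer for each C; C_hat is where df enters the window
    df_of_C = np.array([df[np.argmin(emp + C * pen_min)] for C in C_grid])
    lo, hi = sorted((n ** 0.75, n / 10))
    C_hat = C_grid[np.argmax((df_of_C >= lo) & (df_of_C <= hi))]
    # step 3: plug C_hat into the ideal penalty 2*C*tr(A_lambda)
    return int(np.argmin(emp + 2 * C_hat * df)), C_hat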
4.2  Oracle inequality

Theorem 1. Let Ĉ and λ̂ be defined as in the algorithm of Section 4.1, with Card(Λ) ≤ K n^κ for some K, κ ≥ 0. Assume that for every λ ∈ Λ, A_λ is symmetric with Sp(A_λ) ⊂ [0, 1], that the ε_i are i.i.d. Gaussian with variance σ² > 0, and that there exist λ_1, λ_2 ∈ Λ with

df(λ_1) ≥ n/2,   df(λ_2) ≤ √n,   and   ∀i ∈ {1, 2},   n^{−1} ‖(A_{λ_i} − I_n)F‖_2^2 ≤ σ² √( ln(n)/n ).   (A1–2)

Then, a numerical constant C_a and an event of probability at least 1 − 8K n^{−2} exist on which, for every n ≥ C_a,
( 1 − 91(κ + 2) √( ln(n)/n ) ) σ² ≤ Ĉ ≤ ( 1 + 44(κ + 2) √(ln(n)) n^{−1/4} ) σ².   (10)

Furthermore, if

∃θ ≥ 1, ∀λ ∈ Λ,   n^{−1} tr(A_λ) σ² ≤ θ E[ n^{−1} ‖F̂_λ − F‖_2^2 ],   (A3)

then, a constant C_b depending only on θ exists such that for every n ≥ C_b, on the same event,

n^{−1} ‖F̂_λ̂ − F‖_2^2 ≤ ( 1 + 40θ/ln(n) ) inf_{λ∈Λ} { n^{−1} ‖F̂_λ − F‖_2^2 } + 36(κ + θ + 2) ln(n) σ² / n.   (11)
Theorem 1 is proved in [20]. The proof mainly follows from the informal arguments developed in Section 3.2, completed with the following two concentration inequalities: If ξ ∈ R^n is a standard Gaussian random vector, α ∈ R^n and M is a real-valued n × n matrix, then for every x ≥ 0,

P( |⟨α, ξ⟩| ≤ √(2x) ‖α‖_2 ) ≥ 1 − 2e^{−x},   (12)
P( ∀θ > 0,  | ‖Mξ‖_2^2 − tr(M^⊤ M) | ≤ θ tr(M^⊤ M) + 2(1 + θ^{−1}) ‖M‖² x ) ≥ 1 − 2e^{−x},   (13)

where ‖M‖ is the operator norm of M. A proof of Eq. (12) and (13) can be found in [20].
4.3  Discussion of the assumptions of Theorem 1

Gaussian noise. When ε is sub-Gaussian, Eq. (12) and Eq. (13) can be proved for ξ = σ^{−1}ε at the price of additional technicalities, which implies that Theorem 1 is still valid.

Symmetry. The assumption that matrices A_λ must be symmetric can certainly be relaxed, since it is only used for deriving from Eq. (13) a concentration inequality for ⟨A_λ ε, ε⟩. Note that Sp(A_λ) ⊂ [0, 1] barely is an assumption since it means that A_λ actually shrinks Y.

Assumptions (A1–2). (A1–2) holds if max_{λ∈Λ} { df(λ) } ≥ n/2 and the bias is smaller than c df(λ)^{−d} for some c, d > 0, a quite classical assumption in the context of model selection. Besides, (A1–2) is much less restrictive and can even be relaxed, see [20].

Assumption (A3). The upper bound (A3) on tr(A_λ) is certainly the strongest assumption of Theorem 1, but it is only needed for Eq. (11). According to Eq. (6), (A3) holds with θ = 1 when A_λ is a projection matrix, since tr(A_λ^⊤ A_λ) = tr(A_λ). In the kernel ridge regression framework, (A3) holds as soon as the eigenvalues of the kernel matrix K decrease like j^{−α}; see [20]. In general, (A3) means that F̂_λ should not have a risk smaller than the parametric convergence rate associated with a model of dimension df(λ) = tr(A_λ).

When (A3) does not hold, selecting among estimators whose risks are below the parametric rate is a rather difficult problem and it may not be possible to attain the risk of the oracle in general.
Figure 2: Selected degrees of freedom vs. penalty strength log(C/σ²); curves for the minimal penalty and for half the optimal penalty (and, in the multiple kernel case, for discrete and continuous optimization of the minimal penalty). Note that when penalizing by the minimal penalty, there is a strong jump at C = σ², while when using half the optimal penalty, this is not the case. Left: single kernel case. Right: multiple kernel case.
Nevertheless, an oracle inequality can still be proved without (A3), at the price of enlarging Ĉ slightly and adding a small fraction of σ² n^{−1} tr(A_λ) in the right-hand side of Eq. (11), see [20]. Enlarging Ĉ is necessary in general: if tr(A_λ^⊤ A_λ) ≪ tr(A_λ) for most λ ∈ Λ, the minimal penalty is very close to 2σ² n^{−1} tr(A_λ), so that according to Eq. (10), overfitting is likely as soon as Ĉ underestimates σ², even by a very small amount.
4.4  Main consequences of Theorem 1 and comparison with previous results

Consistent estimation of σ². The first part of Theorem 1 shows that Ĉ is a consistent estimator of σ² in a general framework and under mild assumptions. Compared to classical estimators of σ², such as the one usually used with Mallows' C_L, Ĉ does not depend on the choice of some model assumed to have almost no bias, which can lead to overestimating σ² by an unknown amount [18].
Oracle inequality. Our algorithm satisfies an oracle inequality with high probability, as shown by Eq. (11): the risk of the selected estimator F̂_λ̂ is close to the risk of the oracle, up to a remainder term which is negligible when the dimensionality df(λ*) grows with n faster than ln(n), a typical situation when the bias is never equal to zero, for instance in kernel ridge regression.

Several oracle inequalities have been proved in the statistical literature for Mallows' C_L with a consistent estimator of σ², for instance in [23]. Nevertheless, except for the model selection problem (see [6] and references therein), all previous results were asymptotic, meaning that n is implicitly assumed to be large compared to each parameter of the problem. This assumption can be problematic for several learning problems, for instance in multiple kernel learning when the number p of kernels may grow with n. On the contrary, Eq. (11) is non-asymptotic, meaning that it holds for every fixed n as soon as the assumptions explicitly made in Theorem 1 are satisfied.

Comparison with other procedures. According to Theorem 1 and previous theoretical results [23, 19], C_L, GCV, cross-validation and our algorithm satisfy similar oracle inequalities in various frameworks. This should not lead to the conclusion that these procedures are completely equivalent. Indeed, second-order terms can be large for a given n, while they are hidden in asymptotic results and not tightly estimated by non-asymptotic results. As shown by the simulations in Section 5, our algorithm yields statistical performances as good as existing methods, and often quite better.

Furthermore, our algorithm never overfits too much because df(λ̂) is by construction smaller than the effective dimensionality of λ̂_0(Ĉ) at which the jump occurs. This is a quite interesting property compared for instance to GCV, which is likely to overfit if it is not corrected, because GCV minimizes a criterion proportional to the empirical risk.
Simulations
Qd
Throughout this section, we consider exponential kernels on Rd , k(x, y) = i=1 e?|xi ?yi | , with the
x?sP
sampled i.i.d. from a standard multivariate Gaussian. The functions f are then selected randomly
m
as i=1 ?i k(?, zi ) , where both ? and z are i.i.d. standard Gaussian (i.e., f belongs to the RKHS).
7
10?fold CV
GCV
min. penalty
2.5
2
1.5
1
0.5
4
5
6
log(n)
7
mean( error / errorMallows )
mean( error / errororacle )
3
3.5
MKL+CV
GCV
kernel sum
min. penalty
3
2.5
2
1.5
1
3.5
4
4.5 5
log(n)
5.5
Figure 3: Comparison of various smoothing parameter selection (minikernel, GCV, 10-fold cross
validation) for various values of numbers of observations, averaged over 20 replications. Left: single
kernel, right: multiple kernels.
Jump. In Figure 2 (left), we consider data x_i ∈ R^6, n = 1000, and study the size of the jump in Figure 2 for kernel ridge regression. With half the optimal penalty (which is used in traditional variable selection for linear regression), we do not get any jump, while with the minimal penalty we always do. In Figure 2 (right), we plot the same curves for the multiple kernel learning problem with two kernels on two different 4-dimensional variables, with similar results. In addition, we show two ways of optimizing over λ ∈ Λ = R_+², by discrete optimization with n different kernel matrices (a situation covered by Theorem 1) or with continuous optimization with respect to η in Eq. (1), by gradient descent (a situation not covered by Theorem 1).

Comparison of estimator selection methods. In Figure 3, we plot model selection results for 20 replications of data (d = 4, n = 500), comparing GCV [8], our minimal penalty algorithm, and cross-validation methods. In the left part (single kernel), we compare to the oracle (which can be computed because we can enumerate Λ), and use for cross-validation all possible values of λ. In the right part (multiple kernel), we compare to the performance of Mallows' C_L when σ² is known (i.e., penalty in Eq. (5)), and since we cannot enumerate all λ's, we use the solution obtained by MKL with CV [5]. We also compare to using our minimal penalty algorithm with the sum of kernels.
6  Conclusion

A new light on the slope heuristics. Theorem 1 generalizes some results first proved in [6] where all A_λ are assumed to be projection matrices, a framework where assumption (A3) is automatically satisfied. To this extent, Birgé and Massart's slope heuristics has been modified in a way that sheds a new light on the "magical" factor 2 between the minimal and the optimal penalty, as proved in [6, 7]. Indeed, Theorem 1 shows that for general linear estimators,

pen_id(λ) / pen_min(λ) = 2 tr(A_λ) / [ 2 tr(A_λ) − tr(A_λ^⊤ A_λ) ],   (14)

which can take any value in (1, 2] in general; this ratio is only equal to 2 when tr(A_λ) = tr(A_λ^⊤ A_λ), hence mostly when A_λ is a projection matrix.
Future directions. In the case of projection estimators, the slope heuristics still holds when the design is random and data are heteroscedastic [7]; we would like to know whether Eq. (14) is still valid for heteroscedastic data with general linear estimators. In addition, the good empirical performances of elbow heuristics based algorithms (i.e., based on the sharp variation of a certain quantity around good hyperparameter values) suggest that Theorem 1 can be generalized to many learning frameworks (and potentially to non-linear estimators), probably with small modifications in the algorithm, but always relying on the concept of minimal penalty.

Another interesting open problem would be to extend the results of Section 4, where Card(Λ) ≤ K n^κ is assumed, to continuous sets Λ such as the ones appearing naturally in kernel ridge regression and multiple kernel learning. We conjecture that Theorem 1 is valid without modification for a "small" continuous Λ, such as in kernel ridge regression where taking a grid of size n in log-scale is almost equivalent to taking Λ = R_+. On the contrary, in applications such as the Lasso with p ≫ n variables, the natural set Λ cannot be well covered by a grid of cardinality n^κ with κ small, and our minimal penalty algorithm and Theorem 1 certainly have to be modified.
References

[1] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[2] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, 2001.
[3] O. Chapelle and V. Vapnik. Model selection for support vector machines. In Advances in Neural Information Processing Systems (NIPS), 1999.
[4] C. E. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[5] F. Bach. Consistency of the group Lasso and multiple kernel learning. Journal of Machine Learning Research, 9:1179–1225, 2008.
[6] L. Birgé and P. Massart. Minimal penalties for Gaussian model selection. Probab. Theory Related Fields, 138(1-2):33–73, 2007.
[7] S. Arlot and P. Massart. Data-driven calibration of penalties for least-squares regression. J. Mach. Learn. Res., 10:245–279, 2009.
[8] P. Craven and G. Wahba. Smoothing noisy data with spline functions. Estimating the correct degree of smoothing by the method of generalized cross-validation. Numer. Math., 31(4):377–403, 1978/79.
[9] G. Wahba. Spline Models for Observational Data. SIAM, 1990.
[10] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of The Royal Statistical Society Series B, 68(1):49–67, 2006.
[11] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. J. Mach. Learn. Res., 5:27–72, 2003/04.
[12] R. Tibshirani. Regression shrinkage and selection via the Lasso. Journal of The Royal Statistical Society Series B, 58(1):267–288, 1996.
[13] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, 2001.
[14] D. M. Allen. The relationship between variable selection and data augmentation and a method for prediction. Technometrics, 16:125–127, 1974.
[15] M. Stone. Cross-validatory choice and assessment of statistical predictions. J. Roy. Statist. Soc. Ser. B, 36:111–147, 1974.
[16] T. Zhang. Learning bounds for kernel regression using effective data dimensionality. Neural Comput., 17(9):2077–2098, 2005.
[17] C. L. Mallows. Some comments on C_p. Technometrics, 15:661–675, 1973.
[18] B. Efron. How biased is the apparent error rate of a prediction rule? J. Amer. Statist. Assoc., 81(394):461–470, 1986.
[19] Y. Cao and Y. Golubev. On oracle inequalities related to smoothing splines. Math. Methods Statist., 15(4):398–414 (2007), 2006.
[20] S. Arlot and F. Bach. Data-driven calibration of linear estimators with minimal penalties, September 2009. Long version. arXiv:0909.1884v1.
[21] É. Lebarbier. Detecting multiple change-points in the mean of a Gaussian process by model selection. Signal Proces., 85:717–736, 2005.
[22] C. Maugis and B. Michel. Slope heuristics for variable selection and clustering via Gaussian mixtures. Technical Report 6550, INRIA, 2008.
[23] K.-C. Li. Asymptotic optimality for C_p, C_L, cross-validation and generalized cross-validation: discrete index set. Ann. Statist., 15(3):958–975, 1987.
Matt Melton Tan Phan Doug Reeves
Electrical and Computer Engineering Dept.
North Carolina State University
Raleigh, NC 27695-7911
Dave Van den Bout
Abstract
A massively parallel, all-digital, stochastic architecture - TlnMAN N - is
described which performs competitive and Kohonen types of learning. A
VLSI design is shown for a TlnMANN neuron which fits within a small,
inexpensive MOSIS TinyChip frame, yet which can be used to build larger
networks of several hundred neurons. The neuron operates at a speed of
15 MHz which allows the network to process 290,000 training examples
per second. Use of level sensitive scan logic provides the chip with 100%
fault coverage, permitting very reliable neural systems to be built.
1
INTRODUCTION
Uniprocessor simulation of neural networks has been the norm, but benefiting from
the parallelism in neural networks is impossible without specialized hardware. Most
hardware-based neural network simulators use a single high-speed AL U or multiple
DSP chips connected through communication buses. The first approach does not
allow exploration of the effects of parallelism, while the complex processors used in
the second approach hinder investigations into the minimal hardware needs of an
implementation. Such knowledge can be gained only if an implementation possess
the same characteristics as a neural network - i.e. that it be built from many
simple, cooperating processing elements. However, constructing and connecting
large numbers of processing elements (or neuron,,) is difficult. Highly-connected,
densely-packed analog neurons can be practically realized on a single VLSI chip,
but interconnecting several such chips into a larger system would require many I/O
pins. In addition, external parasitic capacitances and noise can affect the reliable
transfer of data between the chips. These problems are avoided in neural systems
1046
VLSI Implementation of TInMANN
based on noise-resistant digital signals that can be multiplexed over a small number
of wires.
The next section ofthis paper describes the basic theory, algorithm, and architecture
of the TlnMANN digital neural network. The third section illustrates the VLSI
design of a TlnMANN neuron that operates at 15 MHz, is completely testable, and
can be cascaded to form large Kohonen or competitive networks.
2
TlnMANN ALGORITHM AND ARCHITECTURE
In the competitive learning algorithm (Rumelhart, 1986), training vectors oflength
W, V= (Vi, V2,"" vw), are presented to a winner-take-all network of N neurons.
Each neuron i possesses a weight vector of length W, Wi
(Wil' Wi2, ... , WiW),
and a winning neuron k is selected as the one whose weight vector is closest to the
current training vector. Neuron k is then moved closer to the training vector by
modifying its weights as follows
=
W1cj ?= Wlcj
+ f ? (Vj
- W1cj)
0<f
< I,
1 ~ j ~ W.
H the network is trained with a set of vectors that are naturally clustered into
N groups, then each neural weight vector will eventually reside in the center of a
different group. Thereafter, an input vector applied to the network is encoded by
the neuron that has been sensitized to the cluster containing the input.
Kohonen's self-organizing feature maps (Kohonen, 1982) are trained using a generalization of competitive learning where each neuron i is provided with an additional
X-element vector, Xi = (Zit, Z'2, ... , ZiX), that defines its topological position with
relation to the other neurons in the network. As before, neuron k of the N neurons
wins if it is the closest to the current training vector, but the weight adjustment
now affects all neurons as determined by a decreasing function f of their topological
distance from neuron k and a threshold distance dr:
Wij ?= Wij
+ ? ? f( II X1c
-
Xi
II, dr) . (Vj
- Wij)
0<f
< I,
1
< j < W,
1~i
<N
.
This function allows the winning neuron to drag its neighbors toward a given section
of the input space so that topologically close neurons will eventually react similarly
to closely spaced input vectors.
The integer Markovian learning algorithm of Figure 1 simplifies the Kohonen learning procedure by noting that the neuron weights slowly integrate the effects of
stimuli. This integration can be done by stochastically updating the weights with
a probability proportional to the neural input. The stochastic update of the neural
weights is done by generating two uncorrelated random numbers, Ri and R 2 , on the
interval [0, dr] that each neuron compares to its distance from the current training
vector and its topological distance from the winning neuron, respectively. A neuron
will try to increment or decrement the elements of its weight vector closer to the
training vector if the absolute value of the intervening distance is greater than R i ,
thus creating a total movement proportional to the distance when averaged over
many cycles. This movement is inversely modulated by the topological distance
to the winning neuron k via a comparison with R2. The total effect produced by
these two stochastic processes is equivalent to that produced in Kohonen's original
algorithm, but only simple additive operations are now needed. Figure 2 shows
1047
1048
Melton, Phan, Reeves, and \an den Bout
1 j i :s; N j i ?= i + 1 )
for( i ?= 1i i =5 Wi j ?= j + 1 )
Wi; ?= random()
for( vE {training set} )
parallelfor( all neurons i )
for( i
?=
di ?= Ci
for(
i
?=
1; j =5 Wi j ?= j + 1 )
~?=di+lvi-Wiil
k?=1
for( i ?= 1i i =5 N i i ?= i + 1 )
if( di < die )
k?=i
parallelfor( all neurons i )
di ?= 0
for( j ?= 1i j ~ X; j
~
for( j
?=
?= ~
j ?= j
?=
+ IZii -
j
+1 )
zleil
1i j ~ Wi
+1)
Rl ?= random( ch)
R2 ?= random( ch)
parallelfor( all neurons i )
/* lItochalltic weight update */
if( Iv; - Wiil > Rl and ds =5 R2 )
wii ?= wii+ sign(vi - Wi;)
Figure 1: The integer Markovian learning algorithm.
our simplified algorithm operates correctly on a problem that has often been solved
using Kohonen networks.
The integer Markovian learning algorithm is practical to implement since only simple neurons are needed to do the additive operations and a single global bus can
handle all the broadcast transmissions. The high-level architecture for such an implementation is shown in Figure 3. TlnMANN consists of a global controller that
coordinates the actions of a linear array of neurons. The neurons contain circuitry
for comparing and updating their weights, and for enabling and disabling themselves during the conditional portions of the algorithm. The network topology is
configured by arranging the neurons in an X-dimensional space rather than by
storing a graph structure in the hardware. This allows the calculation of the topological distance between neurons using the same circuitry as is used in the weight
calculations. TlnMANN performs the following operations for each training vector:
1. The global controller broadcasts the W elements of v while each neuron accu-
mulates in A the absolute value of the difference between the elements of its
weight vector (stored in the small, local RAM) and those of the training vector.
2. The global controller does a binary search for the neuron closest to the training
VLSI Implementation of TInMANN
r
I
II
Figure 2: The evolution of 100 TlnMANN neurons when learning a twodimensional vector quantization.
vector by broadcasting distance values bisecting the range containing the winning neuron. The neurons do a comparison and signal on the wired-OR status
line if their distance is less than the broadcast value (i.e. the carry bit c is set).
Neurons with distances greater than the broadcast value are disabled by resetting their e flags. However, if no neuron is left enabled, the controller restores
the enable bits and adjusts its search region (this action is needed on ~ M /2
of the search steps, where M is the machine word length used by TlnMANN).
The last neuron left enabled is the winner of the competition (ties are resolved
by the conditional logic in each neuron).
3. The topological vector of the winning neuron is broadcast to the other neurons
through gate G. The other neurons accumulate into A and store into Tl the
absolute value of the difference between their topological vectors and that of
the winning neuron.
4. Random number R2 is broadcast by the global controller and those neurons
having topological distances in Tl greater than R2 are disabled. The remaining
neurons each compute the distance between a component of their weight vector
and that of the training vector broadcast by the global controller. All neurons
whose calculated distances are greater than random number Rl broadcast by
the controller will increment or decrement their weight elements depending
on the carry bits left in the c flags during the distance calculations. Then
all neurons are re-enabled and this step is repeated for the remaining W - 1
elements of the training vector.
A single training vector can be processed in 11 W + X + 2.5M + 15 clock cycles
(Van den Bout, 1989). A word-width of 10 bits and a clock cycle of 15 MHz would
allow TlnMANN to learn at a rate of 200,000 three-dimensional vectors per second
or 290,000 one-dimensional vectors per second.
3
THE VLSI IMPLEMENTATION OF TlnMANN
Figure 4 is a block diagram for the VLSI TlnMANN neuron built from the components listed in Table 1. The design was driven by the following requirements:
Size: The TlnMANN neuron had to fit within a MOSIS TinyChip frame, so we
used small, dense, ripple-carry adders. A 10-bit word size was selected as a
1049
1050
Melton, Phan, Reeves, and \an den Bout
Figure 3: The TlnMANN architecture.
Table 1: Components of the VLSI TlnMANN neuron.
I
Component
ABDiff
P
CFLAG
PASum
A
8-word memory
MUX
EFLAG
FUnction
10-bit, two's-complement, npple-borrow subtractor that calculates
differences between data in the neuron and data broadcast on the
global bus (B_Bus).
10-bit pipeline register that temporarily stores the difference output by ABDitf.
Records the sign bit of the difference stored in P.
10-bit, two's-complement, ripple-carry adder/subtract or that adds
or subtracts P from the accumulator depending on the sign bit in
CFLAG. This implements the absolute value function.
Accumulates the absolute values from PASum to form the Manhattan distance between a neuron and a training vector.
Stores the weight and topology vectors, the con6cience register (DeSieno, 1988), and one working register.
Steers the output of A or the memory to the input of ABDitf.
Stores the enable bit used for conditionally controlling the neuron
function during the binary search and weight update phases.
VLSI Implementation of TInMANN
!Len
ramAl
a..,path
err
Figure 4: Block Diagram of the VLSI TlnMANN neuron.
compromise between saving area and retaining numeric precision. The multiplexer was added so that A could be used as another temporary register. The
neuron logic was built with the OASIS silicon compiler (Kedem, 1990), but the
memory was hand-crafted to reduce its area. In the final TlnMANN neuron,
4000 transistors are divided between the arithmetic logic (7701' x 13001') and
the memory (7101' x 11601')'
Expandability: The use of broadcast communications reduces the total
TlnMANN chip I/O to only 35 pins. This low connectivity makes it practical to build large Kohonen networks. At the chip level, the use of a silicon
compiler lets us expand the design if more silicon area becomes available. For
example, the word-size could be readily expanded and the layout automatically
regenerated by changing a single-statement in the hardware description. Also,
higher-dimensional vector spaces could be supported by adding more memory.
Speed: In the worst case, the memory access time is 12 ns, each adder delay is
45 ns, and the write time for A is 10 ns. This would have limited TlnMANN
to a top speed of 9 MHz. P was added to break the critical path through the
adders and bring the clock frequency to 15 MHz. At the board level, the ripple
of status information through the OR gates is sped up by connecting the status
lines through an OR-tree.
Testability: To speed the diagnosis of system failures caused by defective chips,
the TlnMAN N neuron was made 100% testable by building EFLAG, CFLAG, P,
and A from level-sensitive scannable latches. Test patterns are shifted into the
chip through the scanJn pin and the results are shifted out through scan_out.
All faults are covered by only 27 test patterns. A 100% testable neural system
is built by concatenating the scan-in and scan_out pins of all the chips.
1051
1052
Melton, Phan, Reeves, and \an den Bout
Figure 5: Layout of the TlnMANN neuron.
Each component of the TlnMANN neuron was extensively simulated to check for
correct operation. To test the chip I/O, we performed a detailed circuit simulation
of two TlnMAN N neurons organized as a competitive network. The simulation
demonstrated the movement of the two neurons towards the centroids of two data
clusters used to provide training vectors.
Four of the TlnMANN neurons in Figure 5 were fabricated by MOSIS. Using the
built-in scan path, each was found to function at 20 MHz (the maximum speed of
our tester) . These chips are now being connected into a linear neural array and
attached to a global controller.
References
D. E. Van den Bout and T. K. Miller m. "TInMANN: The Integer Markovian
Artificial Neural Network". In IJCNN, pages II:205-II:211, 1989.
D. DeSieno. "Adding a Conscience to Competitive Learning". In IEEE International Conference on Neural NetworklJ, pages 1:117-1:124, 1988.
G. Kedem, F. Brglez, and K. Kozminski. "OASIS: A Silicon Compiler for
Rapid Implementation of Semi-custom Designs". In International WorklJhop on
Rapid SYlJtemlJ Proto typing, June 1990.
T. Kohonen. "Self-Organized Formation of Topologically Correct Feature Maps" .
Biological CyberneticlJ, 43:56-69, 1982.
D. Rumelhart and J. McClelland. Parallel Dilltributed ProcelJlJing: Ezplorations
in the Microstructure of Cognition, chapter 5. MIT Press, 1986.
| 364 |@word norm:1 simulation:3 carolina:1 carry:4 err:1 current:3 comparing:1 yet:1 readily:1 additive:2 update:3 selected:2 record:1 conscience:1 provides:1 consists:1 rapid:2 themselves:1 simulator:1 decreasing:1 automatically:1 becomes:1 provided:1 circuit:1 fabricated:1 tie:1 before:1 engineering:1 local:1 accumulates:1 path:3 drag:1 limited:1 range:1 averaged:1 practical:2 accumulator:1 block:2 implement:2 procedure:1 area:3 word:5 close:1 twodimensional:1 impossible:1 equivalent:1 map:2 demonstrated:1 center:1 layout:2 react:1 adjusts:1 array:2 borrow:1 enabled:3 handle:1 coordinate:1 increment:2 arranging:1 controlling:1 tan:1 element:8 rumelhart:2 updating:2 melton:4 kedem:2 electrical:1 solved:1 worst:1 region:1 connected:3 cycle:3 movement:3 wil:1 testability:1 hinder:1 trained:2 compromise:1 completely:1 bisecting:1 resolved:1 chip:12 chapter:1 artificial:1 formation:1 whose:2 encoded:1 larger:2 final:1 transistor:1 kohonen:9 organizing:1 tinychip:2 benefiting:1 intervening:1 moved:1 description:1 competition:1 cluster:2 transmission:1 requirement:1 ripple:3 wired:1 generating:1 depending:2 disabling:1 x1c:1 zit:1 coverage:1 tester:1 closely:1 correct:2 modifying:1 stochastic:3 exploration:1 enable:2 require:1 microstructure:1 clustered:1 generalization:1 investigation:1 biological:1 practically:1 cognition:1 circuitry:2 sensitive:2 mit:1 rather:1 june:1 dsp:1 check:1 centroid:1 vlsi:11 relation:1 expand:1 wij:3 izii:1 retaining:1 restores:1 integration:1 subtractor:1 saving:1 having:1 regenerated:1 stimulus:1 densely:1 ve:1 phase:1 highly:1 custom:1 multiplexed:1 closer:2 tree:1 iv:1 re:1 minimal:1 steer:1 markovian:4 mhz:6 hundred:1 delay:1 stored:2 international:2 connecting:2 connectivity:1 containing:2 broadcast:10 slowly:1 dr:3 external:1 stochastically:1 creating:1 multiplexer:1 north:1 configured:1 register:4 caused:1 vi:2 performed:1 try:1 break:1 portion:1 competitive:6 len:1 compiler:3 parallel:2 desieno:2 characteristic:1 resetting:1 miller:1 spaced:1 produced:2 dave:1 processor:1 uniprocessor:1 inexpensive:1 failure:1 frequency:1 naturally:1 di:4 knowledge:1 organized:2 higher:1 done:2 clock:3 d:1 working:1 hand:1 adder:4 defines:1 disabled:2 building:1 matt:1 effect:3 contain:1 evolution:1 conditionally:1 latch:1 during:3 self:2 width:1 die:1 performs:2 bring:1 specialized:1 sped:1 rl:3 winner:2 attached:1 analog:1 oflength:1 accumulate:1 silicon:4 reef:4 similarly:1 had:1 resistant:1 access:1 add:1 closest:3 driven:1 massively:1 store:4 binary:2 fault:2 additional:1 greater:4 signal:2 ii:5 arithmetic:1 multiple:1 semi:1 reduces:1 wiil:2 calculation:3 divided:1 permitting:1 wiw:1 calculates:1 basic:1 controller:8 addition:1 interval:1 diagram:2 posse:2 integer:4 tinmann:5 vw:1 noting:1 affect:2 fit:2 ezplorations:1 architecture:5 topology:2 reduce:1 simplifies:1 action:2 covered:1 listed:1 detailed:1 extensively:1 hardware:5 processed:1 mcclelland:1 shifted:2 sign:3 per:3 correctly:1 diagnosis:1 write:1 group:2 thereafter:1 four:1 threshold:1 changing:1 ram:1 mosis:3 graph:1 cooperating:1 topologically:2 bit:11 topological:8 ijcnn:1 ri:1 speed:6 expanded:1 describes:1 wi:6 den:6 pipeline:1 bus:3 pin:4 eventually:2 needed:3 available:1 operation:4 wii:2 v2:1 gate:2 original:1 top:1 remaining:2 testable:3 build:2 mux:1 capacitance:1 added:2 realized:1 win:1 distance:16 simulated:1 accu:1 toward:1 length:2 nc:1 difficult:1 statement:1 implementation:9 design:5 packed:1 neuron:64 wire:1 enabling:1 communication:2 frame:2 complement:2 bout:6 temporary:1 parallelism:2 
pattern:2 wi2:1 built:6 reliable:2 memory:6 critical:1 typing:1 cascaded:1 inversely:1 doug:1 manhattan:1 proportional:2 digital:3 integrate:1 uncorrelated:1 storing:1 supported:1 last:1 raleigh:1 allow:2 neighbor:1 absolute:5 expandability:1 van:3 calculated:1 numeric:1 reside:1 made:1 avoided:1 simplified:1 subtracts:1 status:3 logic:4 global:8 xi:2 search:4 table:2 learn:1 transfer:1 complex:1 constructing:1 vj:2 lvi:1 dense:1 decrement:2 noise:2 repeated:1 defective:1 crafted:1 tl:2 board:1 n:3 interconnecting:1 precision:1 position:1 winning:7 concatenating:1 third:1 r2:5 ofthis:1 quantization:1 adding:2 gained:1 ci:1 illustrates:1 phan:4 subtract:1 broadcasting:1 adjustment:1 temporarily:1 ch:2 oasis:2 conditional:2 towards:1 determined:1 operates:3 flag:2 total:3 parasitic:1 scan:3 modulated:1 dept:1 proto:1 |
2,913 | 3,640 | Manifold Embeddings for Model-Based
Reinforcement Learning under Partial Observability
Keith Bush
School of Computer Science
McGill University
Montreal, Canada
[email protected]
Joelle Pineau
School of Computer Science
McGill University
Montreal, Canada
[email protected]
Abstract
Interesting real-world datasets often exhibit nonlinear, noisy, continuous-valued
states that are unexplorable, are poorly described by first principles, and are only
partially observable. If partial observability can be overcome, these constraints
suggest the use of model-based reinforcement learning. We experiment with manifold embeddings to reconstruct the observable state-space in the context of offline, model-based reinforcement learning. We demonstrate that the embedding of
a system can change as a result of learning, and we argue that the best performing
embeddings well-represent the dynamics of both the uncontrolled and adaptively
controlled system. We apply this approach to learn a neurostimulation policy that
suppresses epileptic seizures on animal brain slices.
1
Introduction
The accessibility of large quantities of off-line discrete-time dynamic data?state-action sequences
drawn from real-world domains?represents an untapped opportunity for widespread adoption of
reinforcement learning. By real-world we imply domains that are characterized by continuous state,
noise, and partial observability. Barriers to making use of this data include: 1) goals (rewards) are
not well-defined, 2) exploration is expensive (or not permissible), and 3) the data does not preserve
the Markov property. If we assume that the reward function is part of the problem description, then
to learn from this data we must ensure the Markov property is preserved before we approximate the
optimal policy with respect to the reward function in a model-free or model-based way.
For many domains, particularly those governed by differential equations, we may leverage the inductive bias of locality during function approximation to satisfy the Markov property. When applied to model-free reinforcement learning, function approximation typically assumes that the value
function maps nearby states to similar expectations of future reward. As part of model-based reinforcement learning, function approximation additionally assumes that similar actions map to nearby
future states from nearby current states [10]. Impressive performance and scalability of local modelbased approaches [1, 2] and global model-free approaches [6, 17] have been achieved by exploiting
the locality of dynamics in fully observable state-space representations of challenging real-world
problems.
In partially observable systems, however, locality is not preserved without additional context. First
principle models offer some guidance in defining local dynamics, but the existence of known first
principles cannot always be assumed. Rather, we desire a general framework for reconstructing
state-spaces of partially observable systems which guarantees the preservation of locality. Nonlinear
dynamic analysis has long used manifold embeddings to reconstruct locally Euclidean state-spaces
of unforced, partially observable systems [24, 18] and has identified ways of finding these embeddings non-parametrically [7, 12]. Dynamicists have also used embeddings as generative models of
partially observable unforced systems [16] by numerically integrating over the resultant embedding.
1
Recent advances have extended the theory of manifold embeddings to encompass deterministically
and stochastically forced systems [21, 22].
A natural next step is to apply these latest theoretical tools to reconstruct and control partially observable forced systems. We do this by first identifying an appropriate embedding for the system
of interest and then leveraging the resultant locality to perform reinforcement learning in a modelbased way. We believe it may be more practical to address reinforcement learning under partial
observability in a model-based way because it facilitates reasoning about domain knowledge and
off-line validation of the embedding parameters.
The primary contribution of this paper is to formally combine and empirically evaluate these existing, but not well-known, methods by incorporating them in off-line, model-based reinforcement
learning of two domains. First, we study the use of embeddings to learn control policies in a partially observable variant of the well-known Mountain Car domain. Second, we demonstrate the
embedding-driven, model-based technique to learn an effective and efficient neurostimulation policy for the treatment of epilepsy. The neurostimulation example is important because it resides
among the hardest classes of learning domain?a continuous-valued state-space that is nonlinear,
partially observable, prohibitively expensive to explore, noisy, and governed by dynamics that are
currently not well-described by mathematical models drawn from first principles.
2
Methods
In this section we combine reinforcement learning, partial observability, and manifold embeddings
into a single mathematical formalism. We then describe non-parametric means of identifying the
manifold embedding of a system and how the resultant embedding may be used as a local model.
2.1
Reinforcement Learning
Reinforcement learning (RL) is a class of problems in which an agent learns an optimal solution to
a multi-step decision task by interacting with its environment [23]. Many RL algorithms exist, but
we will focus on the Q-learning algorithm.
Consider an environment (i.e. forced system) having a state vector, s ? RM , which evolves according to a nonlinear differential equation but is discretized in time and integrated numerically
according to the map, f . Consider an agent that interacts with the environment by selecting action,
a, according to a policy function, ?. Consider also that there exists a reward function, g, which informs the agent of the scalar goodness of taking an action with respect to the goal of some multi-step
decision task. Thus, for each time, t,
a(t)
s(t + 1)
r(t + 1)
= ?(s(t)),
= f (s(t), a(t)), and
= g(s(t), a(t)).
(1)
(2)
(3)
RL is the process of learning the optimal policy function, ? ? , that maximizes the expected sum of
future rewards, termed the optimal action-value function or Q-function, Q? , such that,
Q? (s(t), a(t)) = r(t + 1) + ? max Q? (s(t + 1), a),
a
(4)
where ? is the discount factor on [0, 1). Equation 4 assumes that Q? is known. Without a priori
knowledge of Q? an approximation, Q, must be constructed iteratively. Assume the current Qfunction estimate, Q, of the optimal, Q? , contains error, ?,
?(t) = r(t + 1) + ? max Q (s(t + 1), a) ? Q (s(t), a(t)) ,
a
where ?(t) is termed the temporal difference error or TD-error. The TD-error can be used to improve
the approximation of Q by
Q (s(t), a(t)) = Q (s(t), a(t)) + ??(t),
(5)
where ? is the learning rate. By selecting action a that maximizes the current estimate of Q, Qlearning specifies that over many applications of Equation 5, Q approaches Q? .
2
2.2
Manifold Embeddings for Reinforcement Learning Under Partial Observability
Q-learning relies on complete state observability to identify the optimal policy. Nonlinear dynamic
systems theory provides a means of reconstructing complete state observability from incomplete
state via the method of delayed embeddings, formalized by Takens? Theorem [24]. Here we present
the key points of Takens? Theorem utilizing the notation of Huke [8] in a deterministically forced
system.
Assume s is an M -dimensional, real-valued, bounded vector space and a is a real-valued action input
to the environment. Assuming that the state update f and the policy ? are deterministic functions,
Equation 1 may be substituted into Equation 2 to compose a new function, ?,
s(t + 1)
= f (s(t), ?(s(t))) ,
= ?(s(t)),
(6)
which specifies the discrete time evolution of the agent acting on the environment. If ? is a smooth
map ? : RM ? RM and this system is observed via function, y, such that
s?(t)
M
= y(s(t)),
?1
(7)
?1
where y : R ? R, then if ? is invertible, ? exists, and ?, ? , and y are continuously differentiable we may apply Takens? Theorem [24] to reconstruct the complete state-space of the observed
system. Thus, for each s?(t), we can construct a vector sE (t),
sE (t)
=
[?
s(t), s?(t ? 1), ..., s?(t ? (E ? 1))], E > 2M,
(8)
such that sE lies on a subset of RE which is an embedding of s. Because embeddings preserve the
connectivity of the original vector-space, in the context of RL the mapping ?,
sE (t + 1)
= ?(sE (t)),
(9)
may be substituted for f (Eqn. 6) and vectors sE (t) may be substituted for corresponding vectors
s(t) in Equations 1?5 without loss of generality.
2.3
Non-parametric Identification of Manifold Embeddings
Takens? Theorem does not define how to compute the embedding dimension of arbitrary sequences
of observations, nor does it provide a test to determine if the theorem is applicable. In general.
the intrinsic dimension, M , of a system is unknown. Finding high-quality embedding parameters
of challenging domains, such as chaotic and noise-corrupted nonlinear signals, occupy much of
the fields of subspace identification and nonlinear dynamic analysis. Numerous methods of note
exist, drawn from both disciplines. We employ a spectral approach [7]. This method, premised by
the singular value decomposition (SVD), is non-parametric, computationally efficient, and robust
to additive noise?all of which are useful in practical application. As will be seen in succeeding
sections, this method finds embeddings which are both accurate in theoretical tests and useful in
practice.
We summarize the spectral parameter selection algorithm as follows. Given a sequence of state ob? we choose a sufficiently large fixed embedding dimension, E.
? Sufficiently
servations ?s of length S,
large refers to a cardinality of dimension which is certain to be greater than twice the dimension
? ..., S},
? we:
in which the actual state-space resides. For each embedding window size, T?min ? {E,
?
?
1) define a matrix SE? having row vectors, sE? (t), t ? {Tmin , ..., S}, constructed according to the
rule,
sE? (t)
=
? ? 1)? )],
[?
s(t), s?(t ? ? ), ..., s?(t ? (E
(10)
? ? 1), 2) compute the SVD of the matrix S ? , and 3) record the vector of
where ? = T?min /(E
E
?
singular values, ?(Tmin ). Embedding parameters of ?s are found by analysis of the second singular
? ..., S}.
? The T?min value of the first local maxima of this sequence
values, ?2 (T?min ), T?min ? {E,
is the approximate embedding window, Tmin , of ?s. The approximate embedding dimension, E, is
the number of non-trivial singular values of ?(Tmin ) where we define non-trivial as a value greater
than the long-term trend of ?E? with respect to T?min . Embedding ?s according to Equation 10 via
?
parameters E and Tmin yields the matrix SE of row vectors, sE (t), t ? {Tmin , ..., S}.
3
2.4 Generative Local Models from Embeddings
The preservation of locality and dynamics afforded by the embedding allows an approximation of
the underlying dynamic system. To model this space we assume that the derivative of the Voronoi
region surrounding each embedded point is well-approximated by the derivative at the point itself,
a nearest-neighbors derivative [16]. Using this, we simulate trajectories as iterative numerical integration of the local state and gradient. We define the model and integration process formally.
Consider a dataset D as a set of temporally aligned sequences of state observations s?(t), action
? Applying the spectral embedding
observations a(t), and reward observations r(t), t ? {1, ..., S}.
E
? A local model
method to D yields a sequence of vectors sE (t) in R indexed by t ? {Tmin , ..., S}.
?
M of D is the set of 3-tuples, m(t) = {sE (t), a(t), r(t)}, t ? {Tmin , ..., S}, as well as operations
on these tuples, A(m(t)) ? a(t), S(m(t)) ? sE (t), Z(m(t)) ? z(t) where z(t) = [s(t), a(t)],
and U(M, a) ? Ma where Ma is the subset of tuples in M containing action a.
Consider a state vector x(i) in RE indexed by simulation time, i. To numerically integrate this
state we define the gradient according to our definition of locality, namely the nearest neighbor.
This step is defined differently for models having discrete and continuous actions. The model?s
nearest neighbor of x(i) when taking action a(i) is defined in the case of a discrete set of actions,
A, according to Equation 11 and in the continuous case it is defined by Equation 12,
m(tx(i) )
=
argmin
kS(m(t)) ? x(i)k, a ? A,
(11)
m(t)?U(M,a(i))
m(tx(i) )
=
argmin kZ(m(t)) ? [x(i), ?a(i)] k, a ? R.
(12)
m(t)?M
where ? is a scaling parameter on the action space. The model gradient and numerical integration
are defined, respectively, as,
?x(i)
= S(m(tx(i) + 1)) ? S(m(tx(i) )) and
?
?
x(i + 1) = x(i) + ?i ?x(i) + ? ,
(13)
(14)
where ? is a vector of noise and ?i is the integration step-size. Applying Equations 11?14 iteratively
simulates a trajectory of the underlying system, termed a surrogate trajectory. Surrogate trajectories
are initialized from state x(0). Equation 14 assumes that dataset D contains noise. This noise biases
the derivative estimate in RE , via the embedding rule (Eqn. 10). In practice, a small amount of
additive noise facilitates generalization.
2.5 Summary of Approach
Our approach is to combine the practices of dynamic analysis and RL to construct useful policies in
partially observable, real-world domains via off-line learning. Our meta-level approach is divided
into two phases: the modeling phase and the learning phase.
We perform the modeling phase in steps: 1) record a partially observable system (and its rewards)
under the control of a random policy or some other policy or set of policies that include observations
of high reward value; 2) identify good candidate parameters for the embedding via the spectral
embedding method; and 3) construct the embedding vectors and define the local model of the system.
During the learning phase, we identify the optimal policy on the local model with respect to the
rewards, R(m(t)) ? r(t), via batch Q-learning. In this work we consider strictly local function
approximation of the model and Q-function, thus, we define the Q-function as a set of values, Q,
indexed by the model elements, Q(m), m ? M. For a state vector x(i) in RE at simulation time
i, and an associated action, a(i), the reward and Q-value of this state can be indexed by either
Equation 11 or 12, depending on whether the action is discrete or continuous. Note, our technique
does not preclude the use of non-local function approximation, but here we assume a sufficient
density of data exists to reconstruct the embedded state-space with minimal bias.
3
Case Study: Mountain Car
The Mountain Car problem is a second-order, nonlinear dynamic system with low-dimensional,
continuous-valued state and action spaces. This domain is perhaps the most studied continuousvalued RL domain in the literature, but, surprisingly, there is little study of the problem in the case
where the velocity component of state is unobserved. While not a real-world domain as imagined in
the introduction, Mountain Car provides a familiar benchmark to evaluate our approach.
4
(a)
20
0.5
?1.0
0
?5
0
2.5
5.0
7.5
10.0
?1.0
?0.5
(d)
100000
200000
150000
(c) Embedding Performance, E=3
0.5
x(t?? )
?0.5
10
?1.0
5
?5
2.5
5.0
Tmin (sec)
7.5
10.0
?1.0
?0.5
0.0
x(t)
0.5
1000
Path?to?goal Length
Tmin
?2
0
50000
Training Samples
0.0
15
0.5
Learned Policy
?1
0
Singular Values
0.0
x(t)
Tmin (sec)
0.20
0.70
1.20
1.70
2.20
Max
Best
Random
0.20
0.70
1.20
1.70
2.20
100
5
?3
1000
0.0
x(t?? )
?0.5
10
?2
Max
Best
Random
100
Path?to?goal Length
Tmin
15
?1
Singular Values
(b) Embedding Performance, E=2
Random Policy
50000
100000
150000
200000
Training Samples
Figure 1: Learning experiments on Mountain Car under partial observability. (a) Embedding spectrum and accompanying trajectory (E = 3, Tmin = 0.70 sec.) under random policy. (b) Learning
performance as a function of embedding parameters and quantity of training data. (c) Embedding
spectrum and accompanying trajectory (E = 3, Tmin = 0.70 sec.) for the learned policy.
We use the Mountain Car dynamics and boundaries of Sutton and Barto [23]. We fix the initial state
for all experiments (and resets) to be the lowest point of the mountain domain with zero velocity,
which requires the longest path-to-goal in the optimal policy. Only the position element of the
state is observable. During the modeling phase, we record this domain under a random control
policy for 10,000 time-steps (?t = 0.05 seconds), where the action is changed every ?t = 0.20
seconds. We then compute the spectral embedding of the observations (Tmin = [0.20, 9.95] sec.,
? = 5). The resulting spectrum is presented in Figure 1(a). We conclude
?Tmin = 0.25 sec., and E
that the embedding of Mountain Car under the random policy requires dimension E = 3 with a
maximum embedding window of Tmin = 1.70 seconds.
To evaluate learning phase outcomes with respect to modeling phase outcomes, we perform an experiment where we model the randomly collected observations using embedding parameters drawn
from the product of the sets Tmin = {0.20, 0.70, 1.20, 1.70, 2.20} seconds and E = {2, 3}. While
we fix the size of the local model to 10,000 elements we vary the total amount of training samples
observed from 10,000 to 200,000 at intervals of 10,000. We use batch Q-learning to identify the
optimal policy in a model-based way?in Equation 5 the transition between state-action pair and
the resulting state-reward pair is drawn from the model (? = 0.001). After learning converges, we
execute the learned policy on the real system for 10,000 time-steps, recording the mean path-to-goal
length over all goals reached. Each configuration is executed 30 times.
We summarize the results of these experiments by log-scale plots, Figures 1(b) and (c), for embeddings of dimension two and three, respectively. We compare learning performance against three
measures: the maximum performing policy achievable given the dynamics of the system (path-togoal = 63 steps), the best (99th percentile) learned policy for each quantity of training data for each
embedding dimension, and the random policy. Learned performance is plotted as linear regression
fits of the data.
Policy performance results of Figures 1(b) and (c) may be summarized by the following observations. Performance positively relates to the quantity of off-line training data for all embedding
parameters. Except for the configuration (E = 2, Tmin = 0.20), influence of Tmin on learning
performance relative to E is small. Learning performance of 3-dimensional embeddings dominate
5
all but the shortest 2-dimensional embeddings. These observations indicate that the parameters of
the embedding ultimately determine the effectiveness of RL under partial observability. This is not
surprising. What is surprising is that the best performing parameter configurations are linked to
dynamic characteristics of the system under both a random policy and the learned policy.
To support this claim we collected 1,000 sample observations of the best policy (E = 3, Tmin =
0.70 sec., Ntrain = 200, 000) during control of the real Mountain Car domain (path-to-goal = 79
steps). We computed and plotted the embedding spectrum and first two dimensions of the embedding
in Figure 1(d). We compare these results to similar plots for the random policy in Figure 1(a).
We observe that the spectrum of the learned system has shifted such that the optimal embedding
parameters require a shorter embedding window, Tmin = 0.70?1.20 sec. and a lower embedding
dimension E = 2 (i.e., ?3 peaks at Tmin = 0.70?1.20 and ?3 falls below the trend of ?5 at this
window length). We confirm this by observing the embedding directly, Figure 1(d). Unlike the
random policy, which includes both an unstable spiral fixed point and limit cycle structure and
requires a 3-dimensional embedding to preserve locality, the learned policy exhibits a 2-dimensional
unstable spiral fixed point. Thus, the fixed-point structure (embedding structure) of the combined
policy-environment system changes during learning.
To reinforce this claim, we consider the difference between a 2-dimensional and 3-dimensional embedding. An agent may learn to project into a 2-dimensional plane of the 3-dimensional space, thus
decreasing its embedding dimension if the training data supports a 2-dimensional policy. We believe
it is no accident that (E = 3, Tmin = 0.70) is the best performing configuration across all quantities
of training data. This configuration can represent both 3-dimensional and 2-dimensional policies,
depending on the amount of training data available. It can also select between 2-dimensional embeddings having window sizes of Tmin = {0.35, 0.70} sec., depending on whether the second or
third dimension is projected out. One resulting parameter configuration (E = 2, Tmin = 0.35) is
near the optimal 2-dimensional configuration of Figure 1(b).
4
Case Study: Neurostimulation Treatment of Epilepsy
Epilepsy is a common neurological disorder which manifests itself, electrophysiologically, in the
form of intermittent seizures?intense, synchronized firing of neural populations. Researchers now
recognize seizures as artifacts of abnormal neural dynamics and rely heavily on the nonlinear dynamic systems analysis and control literature to understand and treat seizures [4]. Promising techniques have emerged from this union. For example, fixed frequency electrical stimulation of slices
of the rat hippocampus under artificially induced epilepsy have been demonstrated to suppress the
frequency, duration, or amplitude of seizures [9, 5]. Next generation epilepsy treatments, derived
from machine learning, promise maximal seizure suppression via minimal electrical stimulation by
adapting control policies to patients? unique neural dynamics. Barriers to constructing these treatments arise from a lack of first principles understanding of epilepsy. Without first principles, neuroscientists have only vague notions of what effective neurostimulation treatments should look like.
Even if effective policies could be envisioned, exploration of the vast space of policy parameters is
impractical without computational models.
Our specific control problem is defined as follows. Given labeled field potential recordings of brain
slices under fixed-frequency electrical stimulation policies of 0.5, 1.0, and 2.0 Hz, as well as unstimulated control data, similar to the time-series depicted in Figure 2(a), we desire to learn a stimulation
policy that suppresses seizures of a real, previously unseen, brain slice with an effective mean frequency (number of stimulations divided by the time the policy is active) of less than 1.0 Hz (1.0 Hz
is currently known to be the most robust suppression policy for the brain slice model we use [9, 5]).
As a further complication, on-line exploration is extremely expensive because the brain slices are
experimentally viable for periods of less than 2 hours.
Again, we approach this problem as separate modeling and learning phases. We first compute the
? = 15, presented in Figure 2(b). Using our knowlembedding spectrum of our dataset assuming E
edge of the interaction between embedding parameters and learning we select the embedding dimension E = 3 and embedding window Tmin = 1.05 seconds. Note, the strong maxima of ?2 at
Tmin = 110 seconds is the result of periodicity of seizures in our small training dataset. Periodicity
of spontaneous seizure formation, however, varies substantially between slices. We select a shorter
embedding window and rely on integration of the local model to unmask long-term dynamics.
6
(b) Neurostimulation Embedding Spectrum
(a) Example Field Potentials
60
100
150
50
0.5
1.0
2.0
1.5
Tmin (s)
?3
?2
?1
1st Principal Component
0
0.4
0.2
0.4
2nd Principal Component
0.6
0.6
Neurostimulation Model
0.2
2nd Principal Component
0.0
?0.6 ?0.4 ?0.2 0.0
(c)
?0.6 ?0.4 ?0.2 0.0
1 mV
200 sec
40
10
0
50
Tmin (s)
0.5 Hz
?3
30
?3
*
0
2 Hz
* ?2
20
Singular Values
200
150
?2
0
50
1 Hz
?1
100
Singular Values
250
Control
?0.6 ?0.4 ?0.2
0.0
0.2
0.4
0.6
3rd Principal Component
Figure 2: Graphical summary of the modeling phase of our adaptive neurostimulation study.
(a) Sample observations from the fixed-frequency stimulation dataset. Seizures are labeled with
horizontal lines. (b) The embedding spectrum of the fixed-frequency stimulation dataset. The large
maximum of ?2 at approximately 100 sec. is an artifact of the periodicity of seizures in the dataset.
*Detail of the embedding spectrum for Tmin = [0.05, 2.0] depicting a maximum of ?2 at the timescale of individual stimulation events. (c) The resultant neurostimulation model constructed from
embedding the dataset with parameters (E = 3, Tmin = 1.05 sec.). Note, the model has been
desampled 5? in the plot.
In this complex domain we apply the spectral method differently than described in Section 2. Rather
than building the model directly from the embedding (E = 3, Tmin = 1.05), we perform a change
? = 15, Tmin = 1.05), using the first three columns of the right sinof basis on the embedding (E
gular vectors, analogous to projecting onto the principal components. This embedding is plotted in
Figure 2(c). Also, unlike the previous case study, we convert stimulation events in the training data
from discrete frequencies to a continuous scale of time-elapsed-since-stimulation. This allows us to
combine all of the data into a single state-action space and then simulate any arbitrary frequency.
Based on exhaustive closed-loop simulations of fixed-frequency suppression efficacy across a spectrum of [0.001, 2.0] Hz, we constrain the model?s action set to discrete frequencies a = {2.0, 0.25}
Hz in the hopes of easing the learning problem. We then perform batch Q-learning over the model
(?t = 0.05, ? = 0.1, and ? = 0.00001), using discount factor ? = 0.9. We structure the reward
function to penalize each electrical stimulation by ?1 and each visited seizure state by ?20.
Without stimulation, seizure states comprise 25.6% of simulation states. Under a 1.0 Hz fixedfrequency policy, stimulation events comprise 5.0% and seizures comprise 6.8% of the simulation
states. The policy learned by the agent also reduces the percent of seizure states to 5.2% of simulation states while stimulating only 3.1% of the time (effective frequency equals 0.62 Hz). In
simulation, therefore, the learned policy achieves the goal.
We then deployed the learned policy on real brain slices to test on-line seizure suppression performance. The policy was tested over four trials on two unique brain slices extracted from the same
animal. The effective frequencies of these four trials were {0.65, 0.64, 0.66, 0.65} Hz. In all trials
seizures were effectively suppressed after a short transient period, during which the policy and slice
achieved equilibrium. (Note: seizures occurring at the onset of stimulation are common artifacts
of neurostimulation). Figure 3 displays two of these trials spaced over four sequential phases: (a)
a control (no stimulation) phase used to determine baseline seizure activity, (b) a learned policy
trial lasting 1,860 seconds, (c) a recovery phase to ensure slice viability after stimulation and to
recompute baseline seizure activity, and (d) a learned policy trial lasting 2,130 seconds.
7
(b) Policy Phase 1
2 mV
(a) Control Phase
60 sec
Stimulations
(c) Recovery Phase
*
(d) Policy Phase 2
Figure 3: Field potential trace of a real seizure suppression experiment using a policy learned from
simulation. Seizures are labeled as horizontal lines above the traces. Stimulation events are marked
by vertical bars below the traces. (a) A control phase used to determine baseline seizure activity.
(b) The initial application of the learned policy. (c) A recovery phase to ensure slice viability after
stimulation and recompute baseline seizure activity. (d) The second application of the learned policy.
*10 minutes of trace are omitted while the algorithm was reset.
5
Discussion and Related Work
The RL community has long studied low-dimensional representations to capture complex domains.
Approaches for efficient function approximation, basis function construction, and discovery of embeddings has been the topic of significant investigations [3, 11, 20, 15, 13]. Most of this work has
been limited to the fully observable (MDP) case and has not been extended to partially observable
environments. The question of state space representation in partially observable domains was tackled under the POMDP framework [14] and recently in the PSR framework [19]. These methods
address a similar problem but have been limited primarily to discrete action and observation spaces.
The PSR framework was extended to continuous (nonlinear) domains [25]. This method is significantly different from our work, both in terms of the class of representations it considers and in the
criteria used to select the appropriate representation. Furthermore, it has not yet been applied to
real-world domains. An empirical comparison with our approach is left for future consideration.
The contribution of our work is to integrate embeddings with model-based RL to solve real-world
problems. We do this by leveraging locality preserving qualities of embeddings to construct dynamic
models of the system to be controlled. While not improving the quality of off-line learning that
is possible, these models permit embedding validation and reasoning over the domain, either to
constrain the learning problem or to anticipate the effects of the learned policy on the dynamics of the
controlled system. To demonstrate our approach, we applied it to learn a neurostimulation treatment
of epilepsy, a challenging real-world domain. We showed that the policy learned off-line from an
embedding-based, local model can be successfully transferred on-line. This is a promising step
toward widespread application of RL in real-world domains. Looking to the future, we anticipate
the ability to adjust the embedding a priori using a non-parametric policy gradient approach over
the local model. An empirical investigation into the benefits of this extension are also left for future
consideration.
Acknowledgments
The authors thank Dr. Gabriella Panuccio and Dr. Massimo Avoli of the Montreal Neurological
Institute for generating the time-series described in Section 4. The authors also thank Arthur Guez,
Robert Vincent, Jordan Frank, and Mahdi Milani Fard for valuable comments and suggestions. The
authors gratefully acknowledge financial support by the Natural Sciences and Engineering Research
Council of Canada and the Canadian Institutes of Health Research.
8
References
[1] Christopher G. Atkeson, Andrew W. Moore, and Stefan Schaal. Locally weighted learning for control.
Artificial Intelligence Review, 11:75?113, 1997.
[2] Christopher G. Atkeson and Jun Morimoto. Nonparametric representation of policies and value functions:
A trajectory-based approach. In Advances in Neural Information Processing, 2003.
[3] M. Bowling, A. Ghodsi, and D. Wilkinson. Action respecting embedding. In Proceedings of ICML, 2005.
[4] F. Lopes da Silva, W. Blanes, S. Kalitzin, J. Parra, P. Suffczynski, and D. Velis. Dynamical diseases
of brain systems: Different routes to epileptic seizures. IEEE Transactions on Biomedical Engineering,
50(5):540?548, 2003.
[5] G. D?Arcangelo, G. Panuccio, V. Tancredi, and M. Avoli. Repetitive low-frequency stimulation reduces
epileptiform synchronization in limbic neuronal networks. Neurobiology of Disease, 19:119?128, 2005.
[6] Damien Ernst, Pierre Guerts, and Louis Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6:503?556, 2005.
[7] A. Galka. Topics in Nonlinear Time Series Analysis: with implications for EEG Analysis. World Scientific,
2000.
[8] J.P. Huke. Embedding nonlinear dynamical systems: A guide to Takens? Theorem. Technical report,
Manchester Institute for Mathematical Sciences, University of Manchester, March, 2006.
[9] K. Jerger and S. Schiff. Periodic pacing and in vitro epileptic focus. Journal of Neurophysiology,
73(2):876?879, 1995.
[10] Nicholas K. Jong and Peter Stone. Model-based function approximation in reinforcement learning. In
Proceedings of AAMAS, 2007.
[11] P.W. Keller, S. Mannor, and D. Precup. Automatic basis function construction for approximate dynamic
programming and reinforcement learning. In Proceedings of ICML, 2006.
[12] M. Kennel and H. Abarbanel. False neighbors and false strands: A reliable minimum embedding dimension algorithm. Physical Review E, 66:026209, 2002.
[13] S. Mahadevan and M. Maggioni. Proto-value functions: A Laplacian framework for learning representation and control in Markov decision processes. Journal of Machine Learning Research, 8:2169?2231,
2007.
[14] A. K. McCallum. Reinforcement Learning with Selective Perception and Hidden State. PhD thesis,
University of Rochester, 1996.
[15] R. Munos and A. Moore. Variable resolution discretization in optimal control. Machine Learning, 49:291?
323, 2002.
[16] U. Parlitz and C. Merkwirth. Prediction of spatiotemporal time series based on reconstructed local states.
Physical Review Letters, 84(9):1890?1893, 2000.
[17] Jan Peters, Sethu Vijayakumar, and Stefan Schaal. Natural actor-critic. In Proceedings of ECML, 2005.
[18] Tim Sauer, James A. Yorke, and Martin Casdagli.
65:3/4:579?616, 1991.
Embedology.
Journal of Statistical Physics,
[19] S. Singh, M. L. Littman, N. K. Jong, D. Pardoe, and P. Stone. Learning predictive state representations.
In Proceedings of ICML, 2003.
[20] W. Smart. Explicit manifold representations for value-functions in reinforcement learning. In Proceedings
of ISAIM, 2004.
[21] J. Stark. Delay embeddings for forced systems. I. Deterministic forcing. Journal of Nonlinear Science,
9:255?332, 1999.
[22] J. Stark, D.S. Broomhead, M.E. Davies, and J. Huke. Delay embeddings for forced systems. II. Stochastic
forcing. Journal of Nonlinear Science, 13:519?577, 2003.
[23] R. Sutton and A. Barto. Reinforcement learning: An introduction. The MIT Press, Cambridge, MA, 1998.
[24] F. Takens. Detecting strange attractors in turbulence. In D. A. Rand & L. S. Young, editor, Dynamical
Systems and Turbulence, volume 898, pages 366?381. Warwick, 1980.
[25] D. Wingate and S. Singh. On discovery and learning of models with predictive state representations of
state for agents with continuous actions and observations. In Proceedings of AAMAS, 2007.
9
2,914 | 3,641 | Hierarchical Mixture of Classification Experts
Uncovers Interactions between Brain Regions
Bangpeng Yao1
Dirk B. Walther2
Diane M. Beck2,3?
Li Fei-Fei1?
1
Computer Science Department, Stanford University, Stanford, CA 94305
2
Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801
3
Psychology Department, University of Illinois at Urbana-Champaign, Champaign, IL 61820
{bangpeng,feifeili}@cs.stanford.edu {walther,dmbeck}@illinois.edu
Abstract
The human brain can be described as containing a number of functional regions.
These regions, as well as the connections between them, play a key role in information processing in the brain. However, most existing multi-voxel pattern
analysis approaches either treat multiple regions as one large uniform region or
several independent regions, ignoring the connections between them. In this paper
we propose to model such connections in a Hidden Conditional Random Field
(HCRF) framework, where the classifier of one region of interest (ROI) makes
predictions based on not only its voxels but also the predictions from ROIs that it
connects to. Furthermore, we propose a structural learning method in the HCRF
framework to automatically uncover the connections between ROIs. We illustrate this approach with fMRI data acquired while human subjects viewed images
of different natural scene categories and show that our model can improve the
top-level (the classifier combining information from all ROIs) and ROI-level prediction accuracy, as well as uncover some meaningful connections between ROIs.
1 Introduction
In recent years, machine learning approaches for analyzing fMRI data have become increasingly
popular [15, 24, 18, 16]. In these multi-voxel pattern analysis (MVPA) approaches, patterns of
voxels are associated with particular stimuli, leading to verifiable predictions about independent
test data. Voxels are extracted from previously known regions of interest (ROIs) [15, 31], selected
from the brain by some statistical criterion [24], or defined by a sliding window ("searchlight")
positioned at each location in the brain in turn [20]. All of these methods, however, ignore the
highly interconnected nature of the brain.
Neuroanatomical evidence from macaque monkeys [10] indicates that brain regions involved in
visual processing are indeed highly interconnected. Since research on human subjects is largely
limited to non-invasive procedures, considerably less is known about interactions between visual
areas in the human brain. Here we demonstrate a method of learning the interactions between
regions from fMRI data acquired while human subjects view images of natural scenes.
Determining the category of a natural scene (e.g. classifying a scene as a beach, or a forest) is important for many human activities such as navigation or object perception [30]. Despite the large variety
of images within and across categories, humans are very good at categorizing natural scenes [27, 9].
In our recent study of natural scene categorization in humans with functional magnetic resonance
imaging (fMRI), we discovered that information about natural scene categories is represented in patterns of activity in the parahippocampal place area (PPA), the retrosplenial cortex (RSC), the lateral
occipital complex (LOC), and the primary visual cortex (V1) [31]. We demonstrated that this information can be read out from fMRI activity with a linear support vector machine (SVM) classifier.
?
Diane M. Beck and Li Fei-Fei contributed equally to this work.
1
Given the highly interconnected nature of the brain, however, it is unlikely that these regions encode
natural scene categories independently of each other.
As in previous ROI-based MVPA studies, in [31] we built predictors for each ROI independently, ignoring their interactions. The method in [31] neither explores connections among the ROIs
nor uses the connections to build a classifier on top of all ROIs. In this work, we propose a method
for simultaneously learning the voxel patterns associated with natural scene categories in several ROIs and their interactions in a Hidden Conditional Random Field (HCRF) [28] framework. In our model, the classifier of each ROI makes predictions based on not only its voxels, but
also the prediction results of the ROIs that it connects to. Using the same fMRI data set, we also explore a mutual information based method to discover functional connectivity [5]. Our current model
differs from [5], however, by applying a generative model to concurrently estimate the structure of
connectivity as well as maximize the end behavioral task (in this case, a scene classification task).
Furthermore, we propose a structural learning method to automatically uncover the structure
of the interactions between ROIs for natural scene categorization, i.e. to decide which ROIs
should be and which ones should not be connected. Unlike existing models for functional connectivity, which mostly rely on the correlation of time courses of voxels [23], our approach makes use of
the patterns of activity in ROIs as well as the category labels of the images presented to the subjects.
Built in the hierarchical framework of HCRF, our structural learning method utilizes information in
the voxel values at the bottom layer of the network as well as categorical labels at the top layer. In
our method, the connections between each pair of ROIs are evaluated for their potential to improve
prediction accuracy, and only those that show improvement will be added to the final structural map.
In the remaining part of this paper, we first elaborate on our model and structural learning approach
in Section 2. We discuss related work on MVPA and connectivity analysis in Section 3. Finally, we
present experimental results in Section 4 and conclude the paper in Section 5.
2 Modeling Interactions of Brain Regions: an HCRF Representation
The brain is highly interconnected, and the nature of the connections determines to a large extent
how information is processed in the brain. We model the connections of brain regions in a Hidden Conditional Random Field (HCRF) framework for the task of natural scene categorization and
propose a structural learning method to uncover the pattern of connectivity. In the first part of this
section we assume that the structural connections between brain regions are already known. We will
discuss in Section 2.2 how these connections are automatically learned.
2.1 Integrating Information across Brain Regions
Suppose we are given a set of regions of interest (ROIs) and connections between these regions (see
the intermediate layer of Fig.1). Existing ROI-based MVPA approaches build a classifier for each
ROI independently [15, 24, 18, 16, 31], neglecting the connections between ROIs. It is our objective
here to explore the structure of the connections between ROIs to improve prediction accuracy for
decoding viewed scene category from fMRI data.
In order to achieve these goals, we propose a Hidden Conditional Random Field (HCRF) model
(Fig.1) to allow each ROI to be influenced by the ROIs that it connects to and build a top-level
classifier which makes use of information in all ROIs. In this framework, the classifier for one ROI
makes prediction based on the voxels in this region as well as the results of the classifiers of its
connected ROIs, thereby improving the accuracy of each ROI. In the absence of evidence about
the directionality of connections, we assume them to be symmetric, i.e., to allow the information
between two ROIs to go in both directions to the same extent. On the technical side, using an
undirected model avoids the difficulties of defining a coherent generative process for graph structures
in directed models, thereby giving us more flexibility in representing complex patterns [29].
Our model starts with independently trained classifiers for each ROI as in [31] (the bottom layer of Fig.1). Consider an fMRI data set whose individual brain acquisitions are associated with one of $C$ class labels. For an acquisition sample $s$, the decision values of the $R$ independent classifiers are represented as $\mathbf{X}^s = \{X^s_1, \dots, X^s_R\}$, where $R$ is the number of ROIs. $X^s_i = \{x^s_{i,1}, \dots, x^s_{i,C}\}$ are the decision values for the $i$-th ROI, where $x^s_{i,c}$ is the probability that region $i$ assigns sample $s$ to the $c$-th class, irrespective of the information in any other ROI.
2
[Figure 1: Illustration of the HCRF model for modeling connections between ROIs. Four ROIs, placed figuratively on a schematic brain, are shown here for illustration of the model; the bottom layer holds the observations $X_i$ (type-I potentials), the intermediate layer the hidden ROI predictions $Y_i$ (type-II potentials between connected ROIs), and the top layer the category label $Z$ (type-III potentials). Superscripts indexing different samples are omitted in the figure. $Z$ is the category label predicted from all ROIs. $Y_i$, the hidden variable of the model, is the prediction result of the classifier of ROI $i$. $X_i$ is the output of an independently trained classifier for ROI $i$. Section 2.1 gives details about the three types of connections. In the figure thicker lines represent stronger connections, thinner lines weaker connections. The weights of all connections and the connectivity pattern of the type-II potentials are estimated by the model.]
Given $X^s_i$ as input, the classifier for ROI $i$ can directly predict sample $s$ as belonging to the $c$-th class if $x^s_{i,c}$ is the largest component of $X^s_i$. However, this method ignores the dependencies between ROIs. To remedy this, our model allows collaborative error-correction over the ROIs by using the given structure of connections (the intermediate layer of Fig.1). Denoting the prediction results of the ROI classifiers as $\mathbf{Y} = \{Y_1, \dots, Y_R\}$, where $Y_i \in \{1, \dots, C\}$ is the classifier output for ROI $i$, our model allows for the predictions $Y_i$ and $Y_j$ to interact if ROIs $i$ and $j$ are connected in the given structure (the intermediate layer in Fig.1).
Based on the ROI-level prediction results $\mathbf{Y}$, our model outputs the category label of sample $s$: $Z^s \in \{1, \dots, C\}$ (the top layer of Fig.1). Furthermore, because we cannot directly observe the prediction of each ROI when acquiring the fMRI data, we treat $\mathbf{Y}$ as hidden variables. The underlying graphical model is shown in Fig.1. To estimate the overall classification probability given the observed voxel values, we marginalize over all possible values of $\mathbf{Y}$. The HCRF model is therefore defined as
\[
P(Z^s \mid \mathbf{X}^s; \theta) = \sum_{\mathbf{Y}} P(Z^s, \mathbf{Y} \mid \mathbf{X}^s; \theta) = \frac{\sum_{\mathbf{Y}} \exp\left(\Psi(Z^s, \mathbf{Y}, \mathbf{X}^s; \theta)\right)}{\sum_{Z, \mathbf{Y}} \exp\left(\Psi(Z, \mathbf{Y}, \mathbf{X}^s; \theta)\right)} \tag{1}
\]
where $\theta$ are the parameters of the model, and $\Psi(Z, \mathbf{Y}, \mathbf{X}; \theta)$ is a potential function parameterized by $\theta$. We define the potential function $\Psi(Z, \mathbf{Y}, \mathbf{X}; \theta)$ as the weighted sum of edge potential functions defined on every edge $e$ (2-clique) of the model:
\[
\Psi(Z, \mathbf{Y}, \mathbf{X}; \theta) = \sum_{e} \theta_e \, \psi_e(Z, \mathbf{Y}, \mathbf{X}) \tag{2}
\]
As shown in Fig.1, there are three types of potentials which describe different edges in the model:
Type-I Potential $e = (X_i, Y_i)$. Such edges model the distribution of class labels of different ROIs conditioned on the observations $X_i$. The edge connects an $X_i$ node and a $Y_i$ node, where $i = 1, \dots, R$. The edge potential function is defined by:
\[
\psi_e(Z, \mathbf{Y}, \mathbf{X}) = \psi_e(Y_i, X_i) = x_{i, Y_i} \tag{3}
\]
where $x_{i, Y_i}$ is the $Y_i$-th component of the vector $X_i$. A large weight for $(X_i, Y_i)$ implies that the independent classifier trained on voxels of ROI $i$ is effective in giving correct predictions.
Type-II Potential $e = (Y_i, Y_j)$. Such edges model the dependencies between the ROIs. Note that not all pairs of ROIs are connected. The edge potential function is defined by:
\[
\psi_e(Z, \mathbf{Y}, \mathbf{X}) = \psi_e(Y_i, Y_j) = \begin{cases} \mu, & Y_i = Y_j \\ 0, & Y_i \ne Y_j \end{cases} \tag{4}
\]
where $\mu > 0$. If two ROIs are connected, they tend to make similar predictions. A large weight for $(Y_i, Y_j)$ means the connection between $Y_i$ and $Y_j$ is strong.
Type-III Potential $e = (Z, Y_i)$. Such edges define a joint distribution over the class label and the prediction result of each ROI. The edge connects a $Y_i$ node and the $Z$ node, where $i = 1, \dots, R$. The edge potential function is defined by:
\[
\psi_e(Z, \mathbf{Y}, \mathbf{X}) = \psi_e(Y_i, Z) = \begin{cases} \nu, & Y_i = Z \\ 0, & Y_i \ne Z \end{cases} \tag{5}
\]
where $\nu > 0$. A large weight for $(Z, Y_i)$ means ROI $i$ has a big contribution to the top-level prediction of the brain.
Allowing connected ROIs to interact with each other makes our model significantly different from existing MVPA methods [15, 24, 18, 16], and can improve the prediction accuracy of each ROI. Intuitively, if the values of all components in $X^s_i$ are similar, then ROI $i$ is likely to make incorrect predictions if its classifier merely relies on $X^s_i$. In such situations it is possible for the classifier of one ROI to make better predictions if it can use the information in its connected ROIs.
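To make the model concrete, here is a minimal sketch of Eqs.(1)-(5), assuming brute-force enumeration of the hidden labels $\mathbf{Y}$, which is feasible at the scale considered later in the paper (7 ROIs and 6 classes give $6^7 \approx 2.8 \times 10^5$ configurations). The variable names and the dictionary-based parameterization are illustrative assumptions, not the authors' implementation.

import itertools
import numpy as np

def log_potential(z, y, x, theta1, theta2, theta3, mu, nu, edges):
    # Psi(Z, Y, X; theta): weighted sum of edge potentials, Eqs.(2)-(5).
    # x: (R, C) array, x[i, c] = probability that ROI i's classifier assigns class c
    # y: length-R tuple of hidden ROI-level labels; z: top-level label
    # theta1/theta3: per-ROI weights of the type-I/type-III edges
    # theta2: dict mapping a type-II edge (i, j) to its weight
    # mu, nu: potential values of Eq.(4)/Eq.(5) (both set to 0.5 in Sec. 4.1)
    R = len(y)
    psi = sum(theta1[i] * x[i, y[i]] for i in range(R))                  # type-I
    psi += sum(theta2[(i, j)] * mu for (i, j) in edges if y[i] == y[j])  # type-II
    psi += sum(theta3[i] * nu for i in range(R) if y[i] == z)            # type-III
    return psi

def class_posterior(x, theta1, theta2, theta3, mu, nu, edges, n_classes):
    # P(z | x; theta) of Eq.(1): marginalize the hidden labels Y in log-space.
    R = x.shape[0]
    log_num = np.full(n_classes, -np.inf)
    for y in itertools.product(range(n_classes), repeat=R):
        for z in range(n_classes):
            log_num[z] = np.logaddexp(
                log_num[z],
                log_potential(z, y, x, theta1, theta2, theta3, mu, nu, edges))
    return np.exp(log_num - np.logaddexp.reduce(log_num))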
2.2 Learning the Structural Connections of the Hidden Layer in the HCRF Model
We have described a method that models the connections between ROIs to build a classification predictor on top of all ROIs. However, for many tasks (e.g. scene categorization), one critical scientific goal is to uncover which ROIs are functionally connected for that task. Automatic learning of the structure of graphical models is a difficult problem in machine learning. To illustrate the difficulty, let us assume that we have 4 ROIs and that we want to explore all possible models of connectivity between them. There are 6 possible connections between the ROIs, so in order to investigate whether all possible combinations of connections are present, we need to evaluate $2^6 = 64$ different models. For 5 ROIs we have 10 potential connections, leading to $2^{10} = 1024$. In general, given $R$ ROIs, there are $2^{R(R-1)/2}$ possible combinations of connections. In situations with many ROIs, evaluating all possible structures quickly becomes impractical because of the computational constraints. Approximate approaches to learning the structure of directed graphs use the generative process in the model [21, 19, 32]. For undirected graphs, it is usually assumed that the structures are pre-defined [29]. Some incremental approaches [26, 22] were proposed for random field construction. However, the computational complexity of these approaches is still high.
In our model shown in Fig.1, the potentials represented by solid lines are fixed (type-I and type-III). That is to say, each ROI always makes predictions based on the information in its voxels, and the response at the top level is always influenced by the prediction results of all ROIs. That leaves the dependencies between ROIs (type-II edges, the dashed lines in Fig.1) to be learned. Therefore, our structural learning starts from a graphical model containing only type-I and type-III potentials, without any interactions between ROIs. Based on this initial model, we evaluate each type-II potential in turn to decide whether it should be added to the model.
As we have described in Section 1, connections among ROIs play a key role in information processing. Executing a specific task (e.g., scene categorization) activates certain ROIs and also relies on connections between some of them. Inspired by this fact, we evaluate whether two ROIs, say ROIs $i$ and $j$, should be connected by comparing two models with and without an edge between $Y_i$ and $Y_j$.
[Figure 2: An illustration of evaluating whether ROIs 2 and 4 should be connected; all other ROIs are omitted. We compare the performance of two models, one with (left) and one without (right) an interaction between ROIs 2 and 4: the model with the edge $(Y_2, Y_4)$ achieves training accuracy $P_c$, the model without it $P_n$, and the edge is kept if and only if $P_c > P_n$.]
Input: $R$ ROIs and their feature vectors $\mathbf{X} = \{X_1, \dots, X_R\}$. An HCRF model $\mathcal{G}$ with nodes $Z$, $Y_1, \dots, Y_R$, $X_1, \dots, X_R$, and edges $(Y_1, X_1), \dots, (Y_R, X_R)$, $(Z, Y_1), \dots, (Z, Y_R)$.
foreach pair of ROIs $i$ and $j$ do
    Train an HCRF model with nodes $Z$, $Y_i$, $Y_j$, $X_i$, $X_j$, and edges $(Y_i, X_i)$, $(Y_j, X_j)$, $(Z, Y_i)$, $(Z, Y_j)$, $(Y_i, Y_j)$. Obtain training accuracy $P_c$;
    Train an HCRF model with nodes $Z$, $Y_i$, $Y_j$, $X_i$, $X_j$, and edges $(Y_i, X_i)$, $(Y_j, X_j)$, $(Z, Y_i)$, $(Z, Y_j)$. Obtain training accuracy $P_n$;
    if $P_c > P_n$ then add edge $(Y_i, Y_j)$ to the input model $\mathcal{G}$;
Output: The updated model $\mathcal{G}$.
Algorithm 1: The algorithm for uncovering structural connections between ROIs in the HCRF model.
If allowing interactions between ROIs $i$ and $j$ helps to improve top-level recognition performance, thus more closely approximating human performance, then $i$ and $j$ should be connected. Furthermore, we ignore the information in all other ROIs when evaluating the connection between ROIs $i$ and $j$ (Fig.2). So the reduced model only contains 5 nodes: $Z$, $Y_i$, $Y_j$, $X_i$, and $X_j$. Although some useful information might be lost compared to evaluating all possible combinations of connections, approximating the algorithm in this way enables the evaluation of many possible connections in a reasonable amount of time, making the algorithm much more practical.
The structural learning algorithm is shown in Algorithm 1, and an illustration of evaluating the connection between ROIs 2 and 4 is given in Fig.2.
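In code, the pairwise test of Algorithm 1 amounts to a double loop over ROI pairs; the helpers train_hcrf and accuracy below are hypothetical placeholders for training the five-node model (with or without the type-II edge) and evaluating it on the training data, not the authors' code.

def learn_structure(X, labels, n_rois):
    # Greedy pairwise test of Algorithm 1: keep the edge (Y_i, Y_j) only if it
    # improves training accuracy of the reduced model {Z, Y_i, Y_j, X_i, X_j}.
    # train_hcrf(...) and accuracy(...) are hypothetical helpers.
    edges = set()
    for i in range(n_rois):
        for j in range(i + 1, n_rois):
            p_c = accuracy(train_hcrf(X, labels, rois=(i, j), connect=True))
            p_n = accuracy(train_hcrf(X, labels, rois=(i, j), connect=False))
            if p_c > p_n:
                edges.add((i, j))
    return edges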
2.3 Model Learning and Inference
Learning In the step of structural learning, we need to estimate model parameters to compare the models with or without a type-II connection (see Fig.2 for an illustration). Once we have determined which ROIs should interact, i.e. which type-II potentials should be set, we would like to find the strength of these connections as well as of the type-I and type-III potentials. Here the parameters $\theta = \{\theta_e\}_e$ are learned by maximizing the conditional log-likelihood of the class labels $Z$ on the training data:
\[
\theta^* = \arg\max_\theta L(\theta) = \arg\max_\theta \sum_s \log P(Z^s \mid \mathbf{X}^s; \theta) = \arg\max_\theta \sum_s \log \frac{\sum_{\mathbf{Y}} \exp\left(\Psi(Z^s, \mathbf{Y}, \mathbf{X}^s; \theta)\right)}{\sum_{Z, \mathbf{Y}} \exp\left(\Psi(Z, \mathbf{Y}, \mathbf{X}^s; \theta)\right)} \tag{6}
\]
The objective function is not concave due to the hidden variables $\mathbf{Y}$. Although finding the global optimum is difficult, we can still find a local optimum by iteratively updating the values of $\theta$ with a gradient method. To be specific, we first set $\theta$ to initial values $\theta^{(0)}$, and in each iteration we adopt the following formula to update $\theta^{(t)}$ to $\theta^{(t+1)}$:
\[
\theta^{(t+1)} = \theta^{(t)} - \frac{G(\theta^{(t)})^\top G(\theta^{(t)})}{G(\theta^{(t)})^\top H(\theta^{(t)})\, G(\theta^{(t)})} \, G(\theta^{(t)}) \tag{7}
\]
where $G(\theta)$ and $H(\theta)$ are the gradient vector and Hessian matrix of $L(\theta)$, respectively. This iterative updating continues until a maximum number of iterations is reached or $\|G(\theta)\|$ falls below a threshold. When the number of ROIs is large, marginalizing over all possible values of $\mathbf{Y}$ is time-consuming; in such situations we can use Gibbs sampling to compute the gradient vector and Hessian matrix of $L(\theta)$. In the case of natural scene categorization, evidence from neuroscience studies suggests that 7 regions are likely to play critical roles in this task [31]. We therefore consider 7 ROIs in our experiment, allowing us to marginalize over all possible values of $\mathbf{Y}$ exactly.
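As a numerical illustration, the update of Eq.(7) is an ordinary gradient step whose length comes from a one-dimensional Newton step along the gradient direction; grad_L and hess_L below are hypothetical callables returning the gradient and Hessian of $L(\theta)$, a sketch rather than the authors' optimizer.

import numpy as np

def newton_along_gradient(theta, grad_L, hess_L, max_iter=100, tol=1e-6):
    # Eq.(7): theta <- theta - (g'g / g'Hg) g. Near a maximum of L the Hessian
    # is negative definite, so the scalar factor is negative and the step
    # actually moves in the ascent direction +g.
    for _ in range(max_iter):
        g = grad_L(theta)
        if np.linalg.norm(g) < tol:        # stopping rule from the text
            break
        H = hess_L(theta)
        theta = theta - ((g @ g) / (g @ H @ g)) * g
    return theta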
Inference Given the model parameters $\theta^*$ and a sample with observations $\mathbf{X}$, the top-level prediction result is
\[
Z^* = \arg\max_{Z} P(Z \mid \mathbf{X}; \theta^*) \tag{8}
\]
After $Z^*$ is obtained, we can get the prediction results corresponding to each ROI by
\[
\mathbf{Y}^* = \arg\max_{\mathbf{Y}} P(Z^*, \mathbf{Y} \mid \mathbf{X}; \theta^*) \tag{9}
\]
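Reusing log_potential and class_posterior from the sketch in Section 2.1, inference by Eqs.(8)-(9) reduces to two argmax operations; brute-force enumeration is again only a sketch that works at the 7-ROI scale of this paper.

import itertools
import numpy as np

def predict(x, theta1, theta2, theta3, mu, nu, edges, n_classes):
    # Eq.(8): top-level label maximizing P(z | x; theta)
    post = class_posterior(x, theta1, theta2, theta3, mu, nu, edges, n_classes)
    z_star = int(np.argmax(post))
    # Eq.(9): ROI-level labels maximizing P(z*, y | x), i.e. the joint potential
    y_star = max(itertools.product(range(n_classes), repeat=x.shape[0]),
                 key=lambda y: log_potential(z_star, y, x, theta1, theta2,
                                             theta3, mu, nu, edges))
    return z_star, y_star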
3 Related Work
In this paper, we model the dependencies between ROIs in an HCRF framework, which improves
the ROI-level as well as the top-level decoding accuracy by allowing ROIs to exchange information.
Other approaches to inferring connections between brain regions from fMRI data can be broadly
separated into effective connectivity and functional connectivity [11]. Models for effective connectivity, such as Granger causality mapping [14] and dynamic causal modeling [13], model directed
connections between brain regions. These approaches were developed to account for biological temporal dependencies, which is not the case in this work. Functional connectivity refers to undirected
connections, which can be either model-driven or data-driven [23]. Model-driven methods usually
test a prior hypothesis by correlating the time courses of a seed voxel and a target voxel [12]. Data-driven methods, such as Independent Component Analysis [8], are typically used to identify spatial
modes of coherent activity in the brain at rest.
None of these methods, however, has the ability to use the specific relation between the patterns
of voxel activations inside ROIs and the ground truth of the experimental condition. The structural
learning method proposed in this paper offers an entirely new way to assess the interactions between
brain regions based on the exchange of information between ROIs so that the accuracy of decoding
experimental conditions from the data is improved. Furthermore, in contrast with the conventional
model comparison approaches of trying to optimize the evidence of each model [2], our method
relates the connectivity structure to observed brain activities as well as the classes of stimuli that
elicited the activities. Therefore the model proposed here provides a novel and natural way to model
the implicit dependencies between different ROIs.
4 Experimental Evaluation
4.1 Data Set and Experimental Design
In order to evaluate the proposed method we re-analyze the fMRI data set from our work in [31].
In this experiment, 5 subjects were presented with color images of 6 scene categories: beaches,
buildings, forests, highways, industry, and mountains. Photographs were chosen to capture the high
variability within each scene category. Images were presented in blocks of 10 images of the same
category lasting for 16 seconds (8 brain acquisitions). Each subject performed 12 runs, with each
run containing one block for each of the six categories. Please refer to [31] for more details.
We use 7 ROIs that are likely to play critical roles for natural scene categorization. They were
determined in separate localizer scans: V1, left/right LOC, left/right PPA, left/right RSC. The data
for two subjects were excluded, because not all of the ROIs could be found in the localizer scans
for these subjects. For the analysis we use two nested cross validations over the 12 runs for each
subject. In the outer loop we cross-validate on each subject to test the performance of the proposed
method. For each subject, 11 runs out of 12 are selected as training samples and the remaining
run is used as the testing set. For each subject this procedure is repeated 12 times, in turn leaving
each run out for testing once. Average accuracy of the 36 experiments across all subjects is used to
evaluate the performance of the model. In the inner loop, we use 10 of the 11 training runs to train
an SVM classifier for each ROI and each subject, and the remaining run to learn the connections
between ROIs and train the HCRF model by using outputs of the SVM classifiers. We repeat this
procedure 11 times, giving us 11 models. Results of the 11 models on the test data in the inner loop
are combined using bagging [4]. We empirically set both ? in Equ.(4) and ? in Equ.(5) to 0.5.
4.2 Scene Classification Results and Analysis
In order to comprehensively evaluate the performance of the proposed structural learning and modeling approach, we consider different settings of the intermediate layer of our HCRF model. While
always keeping all type-I and type-III potentials connected, we consider five different dependencies
between the ROIs as shown in Fig.3. The setting in Fig.3(e) possesses all properties of our method:
the connections between ROIs are determined by structural learning, and the weights of the connections are obtained by estimating model parameters in Equ.(6). In order to estimate the effectiveness
of our structural learning method, we compare this setting with the situations where no connections
exist between any of the ROIs (Fig.3(a)), and where all ROIs are fully connected (Fig.3(b,c)). In each connectivity situation, we either use the same (Fig.3(b,d)) or different (Fig.3(c,e)) weights for the type-II potentials.
[Figure 3: Various settings of the intermediate layer of our model (ROI nodes $Y_1, \dots, Y_4$ shown schematically). Dashed lines represent type-II potentials. In each setting we keep all type-I and III potentials connected; for simplicity, we omit the visualizations of type-I and III potentials here. Different line widths represent different potential weights. (a) No connection exists between any pair of ROIs. (b,c) The ROIs are fully connected. (d,e) The connections between ROIs are obtained by structural learning. (b,d) All type-II potentials have equal weights. (c,e) The weights of different type-II potentials can be different. Note that (e) is the full model in this paper.]
Table 1: Recognition accuracy for predicting natural scene categories with different methods (chance is 1/6). "Overall classification" means the accuracy for predicting the categories by the top-level node in Fig.1. We carry out experiments on the HCRF models with different settings of the type-II potentials, as shown in Fig.3. Note that we always learn the weights of the type-I and type-III potentials. We also list classification results of the SVM classifiers independently trained on each ROI as the baseline. The best accuracy for each ROI is achieved (or tied) by the full model in the bottom row. * p < 0.01; ** p < 0.005.

Method     Overall classification   V1     left LOC   right LOC   left PPA   right PPA   left RSC   right RSC
SVM        N/A                      21%    22%        25%         27%        26%         30%        26%*
Fig.3(a)   31%*                     22%    23%        24%         27%        28%*        30%*       27%
Fig.3(b)   29%*                     25%    27%        27%         26%        28%*        30%*       29%*
Fig.3(c)   33%**                    24%    29%*       30%*        28%*       31%*        32%*       30%*
Fig.3(d)   34%**                    27%    31%*       29%*        31%*       31%*        33%**      30%*
Fig.3(e)   36%**                    28%*   32%**      33%**       31%*       32%**       35%**      32%**
Note that the type-II potentials of the models in Fig.3(b,d) are also obtained by learning.
Classification accuracy of the five different HCRF models, along with individual SVM classification
accuracy for each ROI, is shown in Tbl.1. Note that the model with no type-II potentials (Fig.3(a))
is different from independent SVM classifiers because of the type-I potentials.
From Table 1 it becomes clear that learning both the structure of the connections and their strengths
leads to more improvement in decoding accuracy than either one of these alone. The overall, top-level classification rate increases
(Fig.3(a)) to 36% for the variant with the structure of the model as well as the connection strengths
learned (Fig.3(e)). We see similar improvements for the individual ROIs: 4-5% for PPA and RSC,
6% for V1, and 9% for LOC. The fact that decoding from LOC benefits most from interacting with
other ROIs is interesting and significant. We will discuss this finding in more detail below.
4.3 Structural Learning Results and Analysis
Having established that our full HCRF model outperforms other comparison models in the recognition task, we now investigate how our model can shed light on learning connectivity between brain
regions. In the nested cross-validation procedure, 12 × 11 = 132 structural maps are learned for each subject. Tbl.2 reports for each subject which connections are present in what fraction of these structural maps. A connection is regarded as a strong connection for a subject if it is present in at least half of the models learned for this subject. In Tbl.2, connections that are strong for all subjects are marked with an asterisk.
We see that both LOC and PPA show strong interactions between the contralateral counterparts,
which makes sense for integrating information across the visual hemifields. We also observe strong
interactions between PPA and RSC across hemispheres, which underscores the importance of across-hemifield integration of visual information. We see a similar effect in the interactions between LOC
and PPA: strong contralateral interactions. Left LOC also interacts strongly with right RSC.
7
Table 2: Statistics of structural connections. For each subject we have 132 learned structural maps (12-fold cross-validation, each one has 11 models). This table shows the percentage of the times that the corresponding connection is learned in the 132 experiments. Connections that are strong on all subjects are marked with an asterisk (*).

Connection            Sbj.1   Sbj.2   Sbj.3
V1-leftLOC            0.67    0.25    0.33
V1-rightLOC           0.50    0.29    0.54
V1-leftPPA            0.44    0.29    0.36
V1-rightPPA           0.38    0.33    0.69
V1-leftRSC            0.29    0.30    0.23
V1-rightRSC           0.36    0.29    0.59
leftLOC-rightLOC *    0.66    0.88    0.71
leftLOC-leftPPA       0.46    0.64    0.76
leftLOC-rightPPA *    0.75    0.96    0.65
leftLOC-leftRSC       0.41    0.78    0.61
leftLOC-rightRSC *    0.75    0.83    0.76
rightLOC-leftPPA *    0.58    0.58    0.66
rightLOC-rightPPA     0.36    0.58    0.89
rightLOC-leftRSC      0.63    0.38    0.31
rightLOC-rightRSC     0.36    0.30    0.87
leftPPA-rightPPA *    0.99    0.56    0.78
leftPPA-leftRSC       0.97    0.34    0.46
leftPPA-rightRSC      0.61    0.53    0.40
rightPPA-leftRSC *    0.67    0.74    0.51
rightPPA-rightRSC     0.93    0.74    0.41
leftRSC-rightRSC      0.65    0.20    0.45
The strong interactions between PPA and RSC are not surprising, since both are typically associated
with the processing of natural scenes [25], albeit with slightly different roles [7]. The interactions
between LOC and PPA are somewhat more surprising, since LOC is usually associated with the
processing of isolated objects. Together with the strong improvement of decoding accuracy for
natural scene categories from LOC when it is allowed to interact with other ROIs (see above), this
suggests a role for LOC in scene categorization. It is conceivable that the detection of typical objects
(e.g., a car) helps with determining the scene category (e.g., highway), as has been shown in [17,
6]. On the other hand, it is also possible that information flows the other way, that scene-specific
information in PPA and RSC feeds into LOC to bias object detection based on the scene category
(see [3, 1]), and that the classifier decodes this bias signal in LOC. Fig.4 shows the connections
which are strong on at least two subjects.
[Figure 4: Schematic illustration of the connections between the seven ROIs (V1, left/right LOC, left/right PPA, left/right RSC) obtained by our structural learning method. Activated regions for the seven ROIs are marked in red. The connections shown in this figure are strong on at least two of the three subjects. Connections that are strong for all three subjects (marked with an asterisk in Table 2) are marked with thicker lines in this figure.]
5 Conclusion
In this paper we modeled the interactions between brain regions in an HCRF framework. We also
presented a structural learning method to automatically uncover the connections between ROIs.
Experimental results showed that our approach can improve the top-level as well as ROI-level prediction accuracy, as well as uncover some meaningful connections between ROIs. One direction for
future work is to use an exploratory ?searchlight? approach [20] to automatically discover ROIs, and
apply our structural learning and modeling method to those ROIs.
Acknowledgements
This work is funded by National Institutes of Health Grant 1 R01 EY019429 (to L.F.-F., D.M.B.,
D.B.W.), a Beckman Postdoctoral Fellowship (to D.B.W.), a Microsoft Research New Faculty Fellowship (to L.F.-F.), and the Frank Moss Gift Fund (to L.F-F.). The authors would like to thank
Barry Chai, Linjie Luo, and Hao Su for helpful comments and discussions.
8
References
[1] M. Bar. Visual objects in context. Nature Rev Neurosci, 5(8):617–629, 2004.
[2] D. Barber and C. M. Bishop. Bayesian model comparison by Monte Carlo chaining. In NIPS, 1997.
[3] I. Biederman. Perceiving real-world scenes. Science, 177(4043):77–80, 1972.
[4] L. Breiman. Bagging predictors. Mach Learn, 24:123–140, 1996.
[5] B. Chai*, D. B. Walther*, D. M. Beck†, and L. Fei-Fei†. Exploring functional connectivities of the human brain using multivariate information analysis. In NIPS, 2009. (*, † indicate equal contribution).
[6] J. L. Davenport and M. C. Potter. Scene consistency in object and background perception. Psychol Sci, 15(8):559–564, 2004.
[7] R. A. Epstein and J. S. Higgins. Differential parahippocampal and retrosplenial involvement in three types of scene recognition. Cereb Cortex, 17:1680–1693, 2007.
[8] F. Esposito, E. Formisano, E. Seifritz, R. Goebel, R. Morrone, G. Tedeschi, and F. D. Salle. Spatial independent component analysis of functional MRI time-series: To what extent do results depend on the algorithm used. Hum Brain Mapp, 16:146–157, 2002.
[9] L. Fei-Fei, A. Iyer, C. Koch, and P. Perona. What do we perceive in a glance of a real-world scene? J Vision, 7(1):1–29, 2007.
[10] D. J. Felleman and D. C. van Essen. Distributed hierarchical processing in the primate cerebral cortex. Cereb Cortex, 1:1–47, 1991.
[11] K. J. Friston. Functional and effective connectivity in neuroimaging: a synthesis. Hum Brain Mapp, 2:56–78, 1995.
[12] K. J. Friston, C. Frith, F. P. Liddle, and R. Frackowiak. Functional connectivity: The principal-component analysis of large (PET) data sets. J Cerebr Blood F Met, 13:5–14, 1993.
[13] K. J. Friston, L. Harrison, and W. Penny. Dynamic causal modeling. NeuroImage, 19:1273–1302, 2003.
[14] R. Goebel, A. Roebroeck, D.-S. Kim, and E. Formisano. Investigating directed cortical interactions in time-resolved fMRI data using vector autoregressive modeling and Granger causality mapping. Magn Reson Imaging, 21:1251–1261, 2003.
[15] J. V. Haxby, M. I. Gobbini, M. L. Furey, A. Ishai, J. Schouten, and P. Pietrini. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539):2425–2430, 2001.
[16] J.-D. Haynes and G. Rees. Predicting the orientation of invisible stimuli from activity in human primary visual cortex. Nat Neurosci, 8:686–691, 2005.
[17] A. Hollingworth and J. M. Henderson. Accurate visual memory for previously attended objects in natural scenes. J Exp Psychol Human, 28:113–136, 2002.
[18] Y. Kamitani and F. Tong. Decoding the visual and subjective contents of the human brain. Nat Neurosci, 8:679–685, 2005.
[19] C. Kemp and J. B. Tenenbaum. The discovery of structural form. P Natl Acad Sci USA, 105(31):10687–10692, 2008.
[20] N. Kriegeskorte, R. Goebel, and P. Bandettini. Information-based functional brain mapping. P Natl Acad Sci USA, 103(10):3863–3868, 2006.
[21] W. Lam and F. Bacchus. Learning Bayesian belief networks: An approach based on the MDL principle. Comput Intell, 10(4):269–293, 1994.
[22] S. Lee, V. Ganapathi, and D. Koller. Efficient structure learning of Markov networks using $\ell_1$ regularization. In NIPS, 2006.
[23] K. Li, L. Guo, J. Nie, G. Li, and T. Liu. Review of methods for functional brain connectivity detection using fMRI. Comput Med Imag Grap, 33:131–139, 2009.
[24] D. Neill, A. Moore, F. Pereira, and T. Mitchell. Detecting significant multidimensional spatial clusters. In NIPS, 2004.
[25] K. O'Craven and N. Kanwisher. Mental imagery of faces and places activates corresponding stimulus-specific brain regions. J Cognitive Neurosci, 12:1013–1023, 2000.
[26] S. D. Pietra, V. D. Pietra, and J. Lafferty. Inducing features of random fields. IEEE T Pattern Anal, 19(4):380–393, 1997.
[27] M. C. Potter. Short-term conceptual memory for pictures. J Exp Psychol - Hum L, 2(5):509–522, 1976.
[28] A. Quattoni, S. Wang, L.-P. Morency, M. Collins, and T. Darrell. Hidden conditional random fields. IEEE T Pattern Anal, 29(10):1848–1852, 2007.
[29] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In UAI, 2002.
[30] B. Tversky and K. Hemenway. Categories of scenes. Cognitive Psychol, 15:121–149, 1983.
[31] D. B. Walther, E. Caddigan, L. Fei-Fei†, and D. M. Beck†. Natural scene categories revealed in distributed patterns of activity in the human brain. J Neurosci, 29(34):10573–10581, 2009. († indicates equal contribution).
[32] M. L. Wong, W. Lam, and K. S. Leung. Using evolutionary programming and minimum description length principle for data mining of Bayesian networks. IEEE T Pattern Anal, 21(2):174–178, 1999.
2,915 | 3,642 | Hierarchical Modeling of Local Image Features
through Lp-Nested Symmetric Distributions
Fabian Sinz
Max Planck Institute for Biological Cybernetics
Spemannstra?e 41
72076 T?ubingen, Germany
[email protected]
Eero P. Simoncelli
Center for Neural Science, and Courant Institute
of Mathematical Sciences, New York University
New York, NY 10003
[email protected]
Matthias Bethge
Max Planck Institute for Biological Cybernetics
Spemannstra?e 41
72076 T?ubingen, Germany
[email protected]
Abstract
We introduce a new family of distributions, called Lp -nested symmetric distributions, whose densities are expressed in terms of a hierarchical cascade of Lp norms. This class generalizes the family of spherically and Lp -spherically symmetric distributions which have recently been successfully used for natural image modeling. Similar to those distributions it allows for a nonlinear mechanism
to reduce the dependencies between its variables. With suitable choices of the
parameters and norms, this family includes the Independent Subspace Analysis
(ISA) model as a special case, which has been proposed as a means of deriving filters that mimic complex cells found in mammalian primary visual cortex.
Lp -nested distributions are relatively easy to estimate and allow us to explore the
variety of models between ISA and the Lp-spherically symmetric models. By fitting the generalized Lp-nested model to 8 × 8 image patches, we show that the
subspaces obtained from ISA are in fact more dependent than the individual filter coefficients within a subspace. When first applying contrast gain control as
preprocessing, however, there are no dependencies left that could be exploited by
ISA. This suggests that complex cell modeling can only be useful for redundancy
reduction in larger image patches.
1 Introduction
Finding a precise statistical characterization of natural images is an endeavor that has concerned
research for more than fifty years now and is still an open problem. A thorough understanding of
natural image statistics is desirable from an engineering as well as a biological point of view. It
forms the basis not only for the design of more advanced image processing algorithms and compression schemes, but also for a better comprehension of the operations performed by the early visual
1
system and how they relate to the properties of the natural stimuli that are driving it. From both
perspectives, redundancy reducing algorithms such as Principal Component Analysis (PCA), Independent Component Analysis (ICA), Independent Subspace Analysis (ISA) and Radial Factorization
[11; 21] have received considerable interest since they yield image representations that are favorable
for compression and image processing and at the same time resemble properties of the early visual
system. In particular, ICA and ISA yield localized, oriented bandpass filters which are reminiscent
of receptive fields of simple and complex cells in primary visual cortex [4; 16; 10]. Together with the
Redundancy Reduction Hypothesis by Barlow and Attneave [3; 1], those observations have given
rise to the idea that these filters represent an important aspect of natural images which is exploited
by the early visual system.
Several result, however, show that the density model of ICA is too restricted to provide a good model
for natural images patches. Firstly, several authors have demonstrated that filter responses of ICA
filters on natural images are not statistically independent [20; 23; 6]. Secondly, after whitening, the
optimum of ICA in terms of statistical independence is very shallow or, in other words, all whitening
filters yield almost the same redundancy reduction [5; 2]. A possible explanation for that finding is
that, after whitening, densities of local image features are approximately spherical [24; 23; 12; 6].
This implies that those densities cannot be made independent by ICA because (i) all whitening filters
differ only by an orthogonal transformation, (ii) spherical densities are invariant under orthogonal
transformations, and (iii) the only spherical and factorial distribution is the Gaussian. Once local
image features become more distant from each other, the contour lines of the density deviates from
spherical and become more star-shaped. In order to capture this star-shaped contour lines one can
use the more general Lp -spherically symmetric distributions which are characterized by densities of
the form ?(y) = g(yp ) with yp = ( | yi | p )1/ p and p > 0 [9; 10; 21].
[Figure 1: Scatter plots and marginal histograms of neighboring (left) and distant (right) symmetric whitening filters, which are shown at the top. The dashed contours indicate the unit sphere for the optimal p of the best fitting non-factorial (dashed line) and factorial (solid line) Lp-spherically symmetric distribution, respectively. While close filters exhibit p = 2 (spherically symmetric distribution), the value of p decreases for more distant filters.]
As illustrated in Figure 1, the relationship between local bandpass filter responses undergoes a gradual transition from L2 -spherical for nearby to star-shaped (Lp -spherical with p < 2) for more distant
features [12; 21]. Ultimately, we would expect extremely distant features to become independent,
having a factorial density with $p \approx 0.8$. When using a single Lp-spherically symmetric model for
the joint distribution of nearby and more distant features, a single value of p can only represent a
compromise for the whole variety of iso-probability contours. This raises the question whether a
combination of local spherical models, as opposed to a single Lp -spherical model, yields a better
characterization of the statistics of natural image patches. Possible ways to join several local models
are Independent Subspace Analysis (ISA) [10], which uses a factorial combination of locally Lp spherical densities, or Markov Random Fields (MRFs) [18; 13]. Since MRFs have the drawback
of being implicit density models and computationally very expensive for inference, we will focus
on ISA and our model. In principle, ISA could choose its subspaces such that nearby features are
grouped into a joint subspace which can then be well described by a spherical symmetric model
(p = 2) while more distant pixels, living in different subspaces, are assumed to be independent. In
fact, previous studies have found ISA to perform better than ICA for image patches as small as 8 × 8 and to yield an optimal $p \approx 2$ for the local density models [10]. On the other hand, the ISA model
assumes a binary partition into either a Lp -spherical or a factorial distribution which does not seem
to be fully justified considering the gradual transition described above.
2
Here, we propose a new family of hierarchical models by replacing the Lp -norms in the Lp -spherical
models by Lp -nested functions, which consist of a cascade of nested Lp -norms and therefore allow
for different values of p for different groups of filters. While this family includes the Lp -spherical
family and ISA models, it also includes densities that avoid the hard partition into either factorial
or Lp -spherical. At the same time, parameter estimation for these models can still be similarly
efficient and robust as for Lp -spherically symmetric models. We find that this family (i) fits the data
significantly better than ISA and (ii) generates interesting filters which are grouped in a sensible way
within the hierarchy. We also find that, although the difference in performance between Lp -spherical
and Lp -nested models is significant, it is small on 8 ? 8 patches, suggesting that within this limited
spatial range, the iso-probability contours of the joint density can still be reasonably approximated
by a single Lp-norm. Preliminary results on 16 × 16 patches exhibit a more pronounced difference
between the Lp -nested and the Lp -spherically symmetric distribution, suggesting that the change in
p becomes more important for modelling densities over a larger spatial range.
2 Models
Lp -Nested Symmetric Distributions Consider the function
\[
f(\mathbf{y}) = \left( \left( \sum_{i=1}^{n_1} |y_i|^{p_1} \right)^{\frac{p_\emptyset}{p_1}} + \; \cdots \; + \left( \sum_{i = n_1 + \dots + n_{\ell-1} + 1}^{n} |y_i|^{p_\ell} \right)^{\frac{p_\emptyset}{p_\ell}} \right)^{\frac{1}{p_\emptyset}} \tag{1}
\]
\[
\phantom{f(\mathbf{y})} = \left\| \left( \|\mathbf{y}_{1:n_1}\|_{p_1}, \; \dots, \; \|\mathbf{y}_{n - n_\ell + 1 : n}\|_{p_\ell} \right)^\top \right\|_{p_\emptyset}.
\]
We call this type of function Lp-nested and the resulting class of distributions Lp-nested symmetric. Lp-nested symmetric distributions are a special case of the $\nu$-spherical distributions, which have a density characterized by the form $\rho(\mathbf{y}) = g(\nu(\mathbf{y}))$ where $\nu : \mathbb{R}^n \to \mathbb{R}$ is a positively homogeneous function of degree one, i.e. it fulfills $\nu(a\mathbf{y}) = a\,\nu(\mathbf{y})$ for any $a \in \mathbb{R}_+$ and $\mathbf{y} \in \mathbb{R}^n$ [7]. Lp-nested functions are obviously positively homogeneous. Of course, Lp-nested functions of Lp-nested functions are again Lp-nested. Therefore, an Lp-nested function $f$ in its general form can be visualized by a tree in which each inner node corresponds to an Lp-norm while the leaves stand for the coefficients of the vector $\mathbf{y}$.
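As an illustration, an Lp-nested function can be evaluated by a simple recursion over such a tree; the nested-tuple representation below (a leaf is an index into $\mathbf{y}$, an inner node is a pair of an exponent and a list of children) is our own assumption, not the authors' code.

import numpy as np

def lp_nested(node, y):
    # Evaluate f(y) of Eq.(1) by recursing over the tree of Lp-norms.
    if isinstance(node, int):                 # leaf: a single coefficient
        return abs(y[node])
    p, children = node                        # inner node: Lp-norm of children
    vals = np.array([lp_nested(c, y) for c in children])
    return float((vals ** p).sum() ** (1.0 / p))

# Example: two 2-dimensional subspaces joined by an outer L1.5-norm,
# f(y) = || ( ||(y0, y1)||_2 , ||(y2, y3)||_0.8 ) ||_1.5  (an ISA-like layout)
tree = (1.5, [(2.0, [0, 1]), (0.8, [2, 3])])
y = np.array([0.3, -1.2, 0.5, 0.1])
r = lp_nested(tree, y)       # radius f(y)
u = y / r                    # direction y / f(y), as in the next paragraph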
Because of the positive homogeneity it is possible to normalize a vector $\mathbf{y}$ with respect to $\nu$ and obtain a coordinate representation $\mathbf{x} = r \cdot \mathbf{u}$ where $r = \nu(\mathbf{y})$ and $\mathbf{u} = \mathbf{y}/\nu(\mathbf{y})$. This implies that the random variable $\mathbf{Y}$ has the stochastic representation $\mathbf{Y} \overset{d}{=} R\,\mathbf{U}$ with independent $\mathbf{U}$ and $R$ [7], which makes it a generalization of the Gaussian Scale Mixture model [23]. It can be shown that for a given $\nu$, $\mathbf{U}$ always has the same distribution while the distribution $\varrho(r)$ of $R$ determines the specific $\rho(\mathbf{y})$ [7]. For a general $\nu$, it is difficult to determine the distribution of $\mathbf{U}$ since the partition function involves the surface area of the $\nu$-unit sphere, which is not analytically tractable in most cases. Here, we show that Lp-nested functions allow for an analytical expression of the partition function. Therefore, the corresponding distributions constitute a flexible yet tractable subclass of $\nu$-spherical distributions.
In the remaining paper we adopt the following notational convention: we use multi-indices to index single nodes of the tree. This means that $I = \emptyset$ denotes the root node, $I = (\emptyset, i) = i$ denotes its $i$-th child, $I = (i, j)$ the $j$-th child of $i$, and so on. The function values at individual inner nodes $I$ are denoted by $f_I$, the vector of function values of the children of an inner node $I$ by $f_{I, 1:\ell_I} = (f_{I,1}, \dots, f_{I,\ell_I})^\top$. By definition, parents and children are related via $f_I = \|f_{I, 1:\ell_I}\|_{p_I}$. The number of children of a particular node $I$ is denoted by $\ell_I$.
Lp-nested symmetric distributions are a very general class of densities. For instance, since every Lp-norm $\|\cdot\|_p$ is an Lp-nested function, the Lp-nested class includes the family of Lp-spherically symmetric distributions, including (for $p = 2$) the family of spherically symmetric distributions. When e.g. setting $f = \|\cdot\|_2$ or $f = \left(\|\cdot\|_2^p\right)^{1/p}$, and choosing the radial distribution $\varrho$ appropriately, one can recover the Gaussian $\rho(\mathbf{y}) = Z^{-1} \exp\left(-\|\mathbf{y}\|_2^2\right)$ or the generalized spherical Gaussian $\rho(\mathbf{y}) = Z^{-1} \exp\left(-\|\mathbf{y}\|_2^p\right)$, respectively. On the other hand, when choosing the Lp-nested function $f$ as in equation (1) and $\varrho$ to be the radial distribution of a $p$-generalized Normal distribution $\varrho(r) = Z^{-1} r^{n-1} \exp\left(-r^{p_\emptyset}/s\right)$ [8; 22], the inner nodes $f_{1:\ell_\emptyset}$ become independent and we can recover an ISA model. Note, however, that not all ISA models are also Lp-nested, since Lp-nested symmetry requires the radial distribution to be that of a $p$-generalized Normal.
In general, for a given radial distribution ϱ on the Lp-nested radius f(y), an Lp-nested symmetric
distribution has the form
ρ(y) = (1 / S_f(f(y))) · ϱ(f(y)) = (1 / (S_f(1) · f^{n-1}(y))) · ϱ(f(y)),   (2)
where S_f(f(y)) = S_f(1) · f^{n-1}(y) is the surface area of the Lp-nested sphere with the radius f(y).
This means that the partition function of a general Lp -nested symmetric distribution is the partition
function of the radial distribution normalized by the surface area of the Lp -nested sphere with radius
f(y). For a given f and a radius f̂ = f(y), this surface area is given by the equation
S_f(f̂) = f̂^{n-1} 2^n ∏_{I∈𝕀} ( ∏_{k=1}^{ℓ_I} Γ[n_{I,k}/p_I] ) / ( p_I^{ℓ_I-1} Γ[n_I/p_I] )
        = f̂^{n-1} 2^n ∏_{I∈𝕀} (1 / p_I^{ℓ_I-1}) ∏_{k=1}^{ℓ_I-1} B[ (Σ_{i=1}^{k} n_{I,i}) / p_I , n_{I,k+1} / p_I ],
where 𝕀 denotes the set of all multi-indices of inner nodes, n_I the number of leaves of the subtree
under I, and B[a, b] the beta function. Therefore, if the partition function of the radial distribution
can be computed easily, so can the partition function of the multivariate Lp -nested distribution.
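As an illustration, the following sketch (ours) accumulates log S_f(1) by a single pass over the inner nodes, using the Beta-function form above; the tree is the hypothetical example from the previous sketch:

    import numpy as np
    from scipy.special import betaln

    # Inner nodes: (p_I, leaf counts n_{I,k} of each child), root first.
    inner_nodes = [(0.8, [2, 2]), (1.5, [1, 1]), (2.0, [1, 1])]

    def log_surface_area(inner_nodes, radius=1.0):
        """log S_f(radius), accumulating the Beta-function product over inner nodes."""
        n = sum(inner_nodes[0][1])                   # total number of leaf coefficients
        log_s = (n - 1) * np.log(radius) + n * np.log(2.0)
        for p_I, kids in inner_nodes:
            log_s -= (len(kids) - 1) * np.log(p_I)
            cum = 0.0
            for k in range(len(kids) - 1):
                cum += kids[k]
                log_s += betaln(cum / p_I, kids[k + 1] / p_I)
        return log_s

    print(np.exp(log_surface_area(inner_nodes)))     # S_f(1)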
Since the only part of equation (2) that includes free parameters is the radial distribution ϱ, maximum
likelihood estimation of those parameters θ can be carried out on the univariate distribution ϱ only,
because
argmax_θ log ρ(y|θ) = argmax_θ ( -log S_f(f(y)) + log ϱ(f(y)|θ) ) = argmax_θ log ϱ(f(y)|θ),
where the first equality uses (2). This means that parameter estimation can be done efficiently and robustly on the values of the Lp-nested function.
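For example, under the assumption of a gamma radial model (which the paper itself uses during filter optimization), the fit reduces to a one-dimensional ML problem on the radii f(y_i); a minimal sketch:

    import numpy as np
    from scipy.stats import gamma

    # radii: the scalar values f(y_i), e.g. computed with lp_nested from above.
    radii = np.abs(np.random.randn(1000)) + 0.1      # placeholder data
    shape, loc, scale = gamma.fit(radii, floc=0.0)   # univariate ML fit of the radial model
    print(shape, scale)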
Since, for a given f , an Lp -nested distribution is fully specified by a radial distribution, changing
the radial distribution also changes the Lp -nested distribution. This suggests an image decomposition constructed from a cascade of nonlinear, gain-control-like mappings reducing the dependence
between the filter coefficients. Similar to Radial Gaussianization or Lp -Radial Factorization algorithms [12; 21], the radial distribution %? of the root node is mapped into the radial distribution of
a p-generalized Normal via histogram equalization, thereby making its children exponential power
distributed and statistically independent [22]. This procedure is then repeated recursively for each
of the children until the leaves of the tree are reached.
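A minimal sketch of one such equalization step (our illustration; the dimensionality n and exponent p are assumed values): map the radii through their empirical CDF and then through the inverse CDF of the target radial law, here that of a p-generalized Normal, for which r^p is gamma-distributed:

    import numpy as np
    from scipy.stats import rankdata, gamma

    def radial_equalize(radii, target_ppf):
        """Histogram equalization: empirical CDF followed by the target inverse CDF."""
        u = rankdata(radii) / (len(radii) + 1.0)     # empirical CDF values in (0, 1)
        return target_ppf(u)

    n, p = 63, 1.3                                   # assumed dimensionality and exponent
    target_ppf = lambda u: gamma.ppf(u, a=n / p) ** (1.0 / p)
    radii = np.abs(np.random.randn(500)) + 0.1       # placeholder radii
    r_new = radial_equalize(radii, target_ppf)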
Below, we estimate the multi-information (MI) between the filters or subtrees at different levels of
the hierarchy. In order to do that robustly, we need to know the joint distribution of their values. In
particular, we are interested in the joint distribution of the children f_{I,1:ℓ_I} of a node I (e.g. layer 2
in Figure 2). Just from the form of an Lp-nested function one might guess that those children are
Lp-spherically symmetric distributed. However, this is not the case. For example, the children f_{1:ℓ_∅}
of the root node (assuming that none of them is a leaf) follow the distribution
ρ(f_{1:ℓ_∅}) = ( ϱ_∅(‖f_{1:ℓ_∅}‖_{p_∅}) / S_{‖·‖_{p_∅}}(‖f_{1:ℓ_∅}‖_{p_∅}) ) ∏_{i=1}^{ℓ_∅} f_i^{n_i-1}.   (3)
This implies that f_{1:ℓ_∅} can be represented as a product of two independent random variables,
u = f_{1:ℓ_∅}/‖f_{1:ℓ_∅}‖_{p_∅} ∈ ℝ_+^{ℓ_∅} and r = ‖f_{1:ℓ_∅}‖_{p_∅} ∈ ℝ_+, with r ∼ ϱ_∅ and (u_1^{p_∅}, ..., u_{ℓ_∅}^{p_∅}) ∼
Dir[n_1/p_∅, ..., n_{ℓ_∅}/p_∅] following a Dirichlet distribution (see Additional Material). We call this
distribution a Dirichlet Scale Mixture (DSM). A similar form can be shown for the joint distribution
of leaves and inner nodes (summarizing the whole subtree below them). Unfortunately, only the
children f_{1:ℓ_∅} of the root node are really DSM distributed. We were not able to analytically calculate the marginal distribution of an arbitrary node's children f_{I,1:ℓ_I}, but we suspect it to have a
similar form. For that reason we fit DSMs to those children f_{I,1:ℓ_I} in the experiments below and
use the estimated model to assess the dependencies between them. We also use it for measuring the
dependencies between the subspaces of ISA.
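To make the DSM construction concrete, a sampling sketch (ours; the gamma radial law is purely illustrative, since ϱ_∅ is whatever radial distribution the tree induces):

    import numpy as np

    rng = np.random.default_rng(0)
    p0 = 0.8                                  # exponent at the root (assumed)
    n_leaves = np.array([32, 31])             # leaves below each child of the root

    def sample_dsm(m):
        """f_{1:l} = r * u with (u_1^p0, ..., u_l^p0) ~ Dir(n_k / p0) and r ~ radial law."""
        u = rng.dirichlet(n_leaves / p0, size=m) ** (1.0 / p0)   # so that ||u||_{p0} = 1
        r = rng.gamma(shape=2.0, scale=1.0, size=m)              # illustrative radial law
        return u * r[:, None]

    print(sample_dsm(3))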
Fitting DSMs via maximum likelihood can be carried out similarly to estimating Lp-nested distributions: since the variables u and r are independent, the Dirichlet and the radial distribution
can be estimated independently on the normalized data points {u_i}_{i=1}^{m} and their respective norms {r_i}_{i=1}^{m}.
Lp-Spherically Symmetric Distributions and Independent Subspace Analysis  The family of
Lp-spherically symmetric distributions is a special case of Lp-nested distributions, for which
f(y) = ‖y‖_p [9]. We use the ISA model of [10], in which the filter responses y are modelled by
a factorial combination of Lp-spherically symmetric distributions, one on each subspace:
ρ(y) = ∏_{k=1}^{K} ρ_k(‖y_{I_k}‖_{p_k}).
3 Experiments
Given an image patch x, all models used in this paper define densities over filter responses y = Wx
of linear filters; that is, all models have the form ρ(y) = |det W| · ρ̃(Wx). The (n-1)×n
matrix W has the form W = QSP, where P ∈ ℝ^{(n-1)×n} has mutually orthogonal rows and projects
onto the orthogonal complement of the DC filter (the filter with equal coefficients), S ∈ ℝ^{(n-1)×(n-1)}
is a whitening matrix, and Q ∈ SO_{n-1} is an orthogonal matrix determining the final filter shapes
of W. When we speak of optimizing the filters according to a model, we mean optimizing Q over
SO_{n-1}. The reason for projecting out the DC component is that it can behave quite differently
depending on the dataset; it is therefore usually removed and modelled separately. Since the DC
component is the same for all models and would only add a constant offset to the measures we use
in our experiments, we ignore it in the experiments below.
Data  We use ten pairs of independently sampled training and test sets of 8 × 8 (16 × 16) patches
from the van Hateren dataset, each containing 100,000 (500,000) examples. Hyvärinen and Köster
[10] report that ISA already finds several subspaces for 8 × 8 image patches. We perform all experiments with two different types of preprocessing: either we only whiten the data (WO-data), or we
whiten it and apply an additional contrast gain control step (CGC-data), for which we use the radial
factorization method described in [12; 21] with p = 2 in the symmetric whitening basis.
We use the same whitening procedure as in [21; 6]: each dataset is centered on the mean over
examples and dimensions and rescaled such that whitening becomes volume conserving. Similarly,
we use the same orthogonal matrix to project out the DC component of each patch (matrix P above).
On the remaining n - 1 dimensions, we perform symmetric whitening (SYM) with S = C^{-1/2}, where
C denotes the covariance matrix of the DC-corrected data, C = cov[PX].
Evaluation Measures  We use the Average Log Loss per component (ALL) for assessing the quality of the different models, which we estimate by taking the empirical average over a large ensemble
of test points: ALL = -(1/(n-1)) ⟨log ρ(y)⟩_Y ≈ -(1/(m(n-1))) Σ_{i=1}^{m} log ρ(y_i). The ALL equals the entropy
if the model distribution equals the true distribution and is larger otherwise. For the CGC-data, we
adjust the ALL by the log-determinant of the CGC transformation [11]. In contrast to [10], this allows us to quantitatively compare models across the two different types of preprocessing (WO and
CGC), which was not possible in [10].
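Concretely, the estimator is just the average negative log-density per component; a minimal sketch (ours; the Gaussian log-density is a stand-in for any fitted model):

    import numpy as np

    def average_log_loss(log_density, Y):
        """ALL: empirical mean of -log rho(y), divided by the dimensionality n - 1."""
        return -np.mean([log_density(y) for y in Y]) / Y.shape[1]

    # Illustrative check with a standard Gaussian as the "model":
    log_density = lambda y: -0.5 * (y @ y) - 0.5 * len(y) * np.log(2.0 * np.pi)
    Y = np.random.randn(100, 63)
    print(average_log_loss(log_density, Y))   # close to the Gaussian entropy per component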
In order to measure the dependence between different random variables, we use the multi-information per component (MI), (1/(n-1)) ( Σ_{i=1}^{n-1} H[Y_i] - H[Y] ), which is the difference between the
sum of the marginal entropies and the joint entropy. The MI is a positive quantity which is zero
if and only if the joint distribution is factorial. We estimate the marginal entropies by a jackknifed
MLE entropy estimator [17] (corrected for the log of the bin width in order to estimate the differential entropy), where we adjust the bin width of the histograms as suggested by Scott [19]. Instead of the
joint entropy, we use the ALL of an appropriate model distribution. Since the ALL is theoretically
always larger than the true joint entropy (ignoring estimation errors), using the ALL instead of the
joint entropy should underestimate the true MI, which is still sufficient for our purpose.
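A simplified sketch of this recipe (ours; it uses a plain plug-in histogram estimator with Scott's rule rather than the jackknifed estimator of [17], and takes a model ALL as the joint-entropy surrogate):

    import numpy as np

    def marginal_entropy(x):
        """Plug-in differential entropy of one component from a histogram."""
        m = len(x)
        width = 3.5 * x.std() * m ** (-1.0 / 3.0)    # Scott's rule bin width
        bins = max(int(np.ptp(x) / width), 1)
        counts, edges = np.histogram(x, bins=bins)
        p = counts[counts > 0] / m
        return -(p * np.log(p)).sum() + np.log(edges[1] - edges[0])

    def mi_per_component(Y, joint_entropy_surrogate):
        """MI: (sum of marginal entropies - joint entropy) / dimensionality."""
        d = Y.shape[1]
        h_marg = sum(marginal_entropy(Y[:, i]) for i in range(d))
        return (h_marg - joint_entropy_surrogate) / d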
Parameter Estimation  For all models (ISA, DSM, Lp-spherical and Lp-nested), we estimate the
parameters θ of the radial distribution as described above in Section 2. For a given filter matrix
W, the values of the exponents p are estimated by minimizing the ALL at the ML estimates θ̂
over p = (p_1, ..., p_q)^⊤. For the Lp-nested distributions, we use the Nelder-Mead method [15] for
the optimization over p = (p_1, ..., p_q)^⊤, and for the Lp-spherically symmetric distributions we use
Golden Search over the single p. For the ISA model, we carry out a Golden Search over p for
each subspace independently. For the Lp-spherical and the single models on the ISA subspaces,
we use a search range of p ∈ [0.1, 2.1]. For estimating the Dirichlet Scale Mixtures, we use
the fastfit package by Tom Minka to estimate the parameters of the Dirichlet distribution. The
radial distribution is estimated independently as described above.
When fitting the filters W to the different models (ISA, Lp-spherical and Lp-nested), we use
gradient ascent on the log-likelihood over the orthogonal group, alternating between optimizing
the parameters p and θ and optimizing for W. For the gradient ascent, we compute the standard
Euclidean gradient ∇W with respect to W ∈ ℝ^{(n-1)×(n-1)} and project it back onto the tangent space of
SO_{n-1}. Using the gradient ∇W obtained in that manner, we perform a line search with respect to
t using the backprojections of W + t·∇W onto SO_{n-1}. This method is a simplified version of the
one proposed by [14].
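A minimal sketch of one such step (ours; the backprojection uses the SVD polar factor, one common choice):

    import numpy as np

    def tangent_gradient(W, G):
        """Project a Euclidean gradient G onto the tangent space of SO(n) at W."""
        A = W.T @ G
        return W @ (A - A.T) / 2.0

    def back_project(M):
        """Map a matrix back onto the orthogonal group via its polar factor."""
        U, _, Vt = np.linalg.svd(M)
        return U @ Vt

    def ascent_trial_point(W, G, t):
        """Candidate point for the line search along the projected gradient."""
        return back_project(W + t * tangent_gradient(W, G))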
Experiments with Independent Subspace Analysis and Lp-Spherically Symmetric Distributions  We optimized filters for ISA models with K = 2, 4, 8, 16 subspaces comprising 32, 16, 8, 4
components each (one subspace always had one dimension less due to the removal of the DC component),
and for an Lp-spherically symmetric model. When optimizing for W, we use a radial γ-distribution
for the Lp-spherically symmetric models and a radial γ_p distribution (‖y_{I_k}‖_{p_k}^{p_k} is γ-distributed) for
the models on the single subspaces of ISA, which is closer to the one used by [10]. After
optimization, we perform a final optimization of p and θ using a mixture of log-normal distributions
(log N) with K = 6 mixture components for the radial distribution(s).
Lp-Nested Symmetric Distributions  As for the Lp-spherically symmetric models, we use a radial
γ-distribution for the optimization of W and a mixture of log N distributions for the final fit. We use
two different kinds of tree structures in our experiments with Lp-nested symmetric distributions. In
the deep tree (DT) structure, we first group 2×2 blocks of four neighboring SYM filters; we then
group those blocks again in a quadtree manner until we reach the root node (see Figure 2A).
The second tree structure (PND_k) was motivated by ISA: here, we simply group the filters within
each subspace and join them at the root node (see Figure 2B). In order to speed up
parameter estimation, each layer of the tree shares the same value of p.
Multi-Information Measurements  For the ISA models, we estimated the MI between the filter
responses within each subspace and between the Lp-radii ‖y_{I_k}‖_{p_k}, 1 ≤ k ≤ K. In the former case
we used the ALL of an Lp-spherically symmetric distribution with specially optimized p and θ; in
the latter, a DSM with optimized radial and Dirichlet distribution served as a surrogate for the joint entropy.
For the Lp-nested distributions, we estimate the MI between the children f_{I,1:ℓ_I} of all inner nodes
I. In case the children are leaves, we use the ALL of an Lp-spherically symmetric distribution as
surrogate for the joint entropy; in case the children are inner nodes themselves, we use the ALL of
a DSM. The red arrows in Figure 2A exemplarily depict the entities between which the MI was
estimated.
4 Results and Discussion
Figure (2) shows the optimized filters for the DT and the PND16 tree structure (we included the
filters optimized on the first of ten datasets for all tree structures in the Additional Material). For
both tree structures, the filters on the lowest level are grouped according to spatial frequency and
orientation, whereas the variation in orientation is larger for the PND16 tree structure and some
filters are unoriented. The next layer of inner nodes, which is only present in the DT tree structure,
roughly joins spatial location, although each of those inner nodes has one child whose leaves are
global filters.
When looking at the various values of p at the inner nodes, we can observe that nodes which are
higher up in the tree usually exhibit a smaller value of p. Surprisingly, as can be seen in Figure 3
B and C, a smaller value of p does not correspond to a larger independence between the subtrees,
which are even more correlated because almost every subtree contains global filters. The small value
of p is caused by the fact that the DSM (the distribution of the subtree values) has to account for
this correlation which it can only do by decreasing the value of p (see Figure 3 and the DSM in
[Figure 2 graphics: (A) the DT tree with optimized inner-node values p1 = 0.8413, p2 = 1.693, p3 = 2.276; (B) the PND16 tree with p1 = 0.77071, p2 = 0.8438; layers 1-3 indicated.]
Figure 2: Examples for the tree structures of Lp -nested distributions used in the experiments: (A) shows
the DT structure with the corresponding optimized values. The red arrows display examples of groups of filters
or inner nodes, respectively, for which we estimated the MI. (B) shows the PND16 tree structure with the
corresponding values of p at the inner nodes and the optimized filters.
the Additional Material). Note that this finding is exactly opposite to the assumptions in the ISA
model which can usually not generate such a behavior (Figure 3A) as it models the two subtrees to
be independent. This is likely to be one reason for the higher ALL of the ISA models (see Table 1).
[Figure 3 graphics: four scatter plots with axes running from 0 to 50; (A) ‖y_{1:32}‖_{p_1} sampled vs. ‖y_{32:63}‖_{p_2} sampled, (B) ‖y_{1:32}‖_{p_1} vs. ‖y_{32:63}‖_{p_2} for the data, (C) f_1 vs. f_2 for the data, (D) f_1 sampled vs. f_2 sampled.]
Figure 3: Independence of subspaces for WO-data not justified: (A) subspace radii sampled from ISA2, (B)
subspace radii of natural image patches in the ISA2 basis, (C) subtree values of the PND2 in the PND2 basis, and
(D) samples from the PND2 model. While the ISA2 model spreads out the radii almost over the whole positive
quadrant due to the independence assumption, the samples from the Lp-nested subtrees are more concentrated
around the diagonal, like the true data. The Lp-nested model can achieve this behavior since (i) it does not
assume a radial distribution that leads to independent radii on the subtrees and (ii) the subtree values f_1 and f_2
are DSM[n_1/p_∅, n_2/p_∅] distributed. By changing the value of p_∅, the DSM model can put more mass towards
the diagonal, which produces the "beam-like" behavior shown in the plot.
Table 1 shows the ALL and the MI measurements for all models. Except for the ISA models on
WO-data, all performances are similar, whereas the Lp -nested models usually achieve the lowest
ALL independent of the particular tree structure used. For the WO-data, the Lp -spherical and the
ISA2 model come close to the performance of the Lp -nested models. For the other ISA models on
WO-data the ALL gets worse with increasing number of subspaces (see the ISA columns in Table 1).
This reflects the effect described above: Contrary to the assumptions of the ISA model, the responses
of the different subspaces become in fact more correlated than the single filter responses. This can
also be seen in the MI measurements discussed below.
When looking at the ALL for CGC data, on the other hand, ISA suddenly becomes competitive.
This importance of CGC for ISA has already been noted in [10]. The small differences between all
the models in the CGC case show that the contour change of the joint density for 8 × 8 patches is too
small to allow for a large advantage of the Lp-nested model, because contrast gain control (CGC)
directly corresponds to modeling the distribution with an Lp-spherically symmetric distribution [21].
Preliminary results on 16 × 16 data (1.39 ± 0.003 for the Lp-nested and 1.45 ± 0.003 for the Lp-spherical model on WO-data), however, show a more pronounced improvement for the Lp-nested model, indicating that a single p no longer suffices to capture all dependencies when
going to larger patch sizes.
When looking at the MI measurements between the filters/subtrees at different levels of the hierarchy
in the Lp -nested, Lp -spherically symmetric and ISA models, we can observe that for the WO-data,
the MI actually increases when going from lower to higher layers. This means that the MI between
the direct filter responses (layer 3 for DT and layer 2 for all others) is in fact lower than the MI
between the subspace radii or the inner nodes of the Lp -nested tree (layer 1-2 for DT, layer 1 for all
others). The highest MI is achieved between the children of the root node for the DT tree structure
(DT layer 1). As explained above, this observation contradicts the assumptions of the ISA model and
probably causes its worse performance on the WO-data.
For the CGC-data, on the other hand, the MI has been substantially decreased by CGC over all levels
of the hierarchy. Furthermore, the single filter responses inside a particular subspace or subtree are
now more dependent than the subtrees or subspaces themselves. This suggests that the competitive
performance of ISA is not due to the model but only due to the fact that CGC made the data already
independent. In order to double check this result, we fitted an ICA model to the CGC-data [21] and
found an ALL of 1.41 ± 0.004, which is very close to the performance of ISA and the Lp-nested
distributions (which would not be the case for WO-data [21]).
Taken together, the ALL and the MI measurements suggest that ISA is not the best way to join
multiple local models into a single joint model. The basic assumption of the ISA model for natural
images is that filter coefficients can either be dependent within a subspace or must be independent
between different subspaces. However, the increasing ALL for an increasing number of subspaces
and the fact that the MI between subspaces is actually higher than within the subspaces, demonstrates
that this hard partition is not justified when the data is only whitened.
Family           Lp-nested
Model            Deep Tree      PND2           PND4           PND8           PND16
ALL              1.39 ± 0.004   1.39 ± 0.004   1.39 ± 0.004   1.40 ± 0.004   1.39 ± 0.004
ALL CGC          1.39 ± 0.005   1.40 ± 0.004   1.40 ± 0.005   1.40 ± 0.004   1.39 ± 0.004
MI Layer 1       0.84 ± 0.019   0.48 ± 0.008   0.7 ± 0.002    0.75 ± 0.003   0.61 ± 0.0036
MI Layer 1 CGC   0.0 ± 0.004    0.10 ± 0.002   0.02 ± 0.003   0.0 ± 0.009    0.0 ± 0.01
MI Layer 2       0.42 ± 0.021   0.35 ± 0.017   0.33 ± 0.017   0.28 ± 0.019   0.25 ± 0.025
MI Layer 2 CGC   0.002 ± 0.005  0.01 ± 0.0008  0.01 ± 0.004   0.01 ± 0.006   0.02 ± 0.008
MI Layer 3       0.28 ± 0.036   -              -              -              -
MI Layer 3 CGC   0.04 ± 0.005   -              -              -              -

Family           Lp-spherical   ISA
Model                           ISA2           ISA4           ISA8           ISA16
ALL              1.41 ± 0.004   1.40 ± 0.005   1.43 ± 0.006   1.46 ± 0.006   1.55 ± 0.006
ALL CGC          1.41 ± 0.004   1.41 ± 0.008   1.39 ± 0.007   1.40 ± 0.005   1.41 ± 0.007
MI Layer 1       0.34 ± 0.004   0.47 ± 0.01    0.69 ± 0.012   0.7 ± 0.018    0.63 ± 0.0039
MI Layer 1 CGC   0.00 ± 0.005   0.00 ± 0.09    0.00 ± 0.06    0.00 ± 0.04    0.00 ± 0.02
MI Layer 2       -              0.36 ± 0.017   0.33 ± 0.019   0.31 ± 0.032   0.24 ± 0.024
MI Layer 2 CGC   -              0.004 ± 0.003  0.03 ± 0.012   0.02 ± 0.018   0.0006 ± 0.013
Table 1: ALL and MI for all models: The upper part shows the results for the Lp-nested models; the lower
part shows the results for the Lp-spherical and the ISA models. The ALL for the Lp-nested models is almost
equal for all tree structures and a bit lower compared to the Lp-spherical and the ISA models. For the whitened-only data, the ALL increases significantly with the number of subspaces (ISA4 to ISA16). For the CGC data, most
models perform similarly well. When looking at the MI, we can see that higher layers for whitened-only data
are in fact more dependent than lower ones. For CGC data, the MI has dropped substantially over all layers due
to CGC. In that case, the lower layers are more independent.
In summary, our results show that Lp-nested symmetric distributions yield a good performance on
natural image patches, although the advantage over Lp-spherically symmetric distributions is fairly
small, suggesting that the distribution within these small patches (8 × 8) is captured reasonably well
by a single Lp-norm. Furthermore, our results demonstrate that, at least for 8 × 8 patches, the
assumptions of ISA are too rigid for WO-data and are trivially fulfilled for the CGC-data, since
CGC already removed most of the dependencies. We are currently working to extend this study to
larger patches, which we expect will reveal a more significant advantage for Lp-nested models.
References
[1] F. Attneave. Informational aspects of visual perception. Psychological Review, 61:183-193, 1954.
[2] R. Baddeley. Searching for filters with "interesting" output distributions: an uninteresting direction to explore? Network: Computation in Neural Systems, 7(2):409-421, 1996.
[3] H. B. Barlow. Sensory mechanisms, the reduction of redundancy, and intelligence. 1959.
[4] Anthony J. Bell and Terrence J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7(6):1129-1159, November 1995.
[5] Matthias Bethge. Factorial coding of natural images: how effective are linear models in removing higher-order dependencies? Journal of the Optical Society of America A, 23(6):1253-1268, June 2006.
[6] Jan Eichhorn, Fabian Sinz, and Matthias Bethge. Natural image coding in V1: How much use is orientation selectivity? PLoS Computational Biology, 5(4):e1000336, April 2009.
[7] Carmen Fernandez, Jacek Osiewalski, and Mark F. J. Steel. Modeling and inference with ν-spherical distributions. Journal of the American Statistical Association, 90(432):1331-1340, December 1995.
[8] Irwin R. Goodman and Samuel Kotz. Multivariate θ-generalized normal distributions. Journal of Multivariate Analysis, 3(2):204-219, June 1973.
[9] A. K. Gupta and D. Song. Lp-norm spherical distribution. Journal of Statistical Planning and Inference, 60:241-260, 1997.
[10] A. Hyvärinen and U. Köster. Complex cell pooling and the statistics of natural images. Network: Computation in Neural Systems, 18(2):81-100, 2007.
[11] S. Lyu and E. P. Simoncelli. Nonlinear extraction of "independent components" of natural images using radial Gaussianization. Neural Computation, 21(6):1485-1519, June 2009.
[12] S. Lyu and E. P. Simoncelli. Reducing statistical dependencies in natural signals using radial Gaussianization. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1009-1016, Cambridge, MA, May 2009. MIT Press.
[13] Siwei Lyu and E. P. Simoncelli. Modeling multiscale subbands of photographic images with fields of Gaussian scale mixtures. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(4):693-706, 2009.
[14] J. H. Manton. Optimization algorithms exploiting unitary constraints. IEEE Transactions on Signal Processing, 50:635-650, 2002.
[15] J. A. Nelder and R. Mead. A simplex method for function minimization. The Computer Journal, 7(4):308-313, January 1965.
[16] Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607-609, June 1996.
[17] Liam Paninski. Estimation of entropy and mutual information. Neural Computation, 15(6):1191-1253, June 2003.
[18] S. Roth and M. J. Black. Fields of experts: a framework for learning image priors. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), volume 2, pages 860-867, 2005.
[19] David W. Scott. On optimal and data-based histograms. Biometrika, 66(3):605-610, December 1979.
[20] E. P. Simoncelli. Statistical models for images: compression, restoration and synthesis. In Conference Record of the Thirty-First Asilomar Conference on Signals, Systems & Computers, volume 1, pages 673-678, 1997.
[21] F. Sinz and M. Bethge. The conjoint effect of divisive normalization and orientation selectivity on redundancy reduction. In Advances in Neural Information Processing Systems 2008, 2009.
[22] F. H. Sinz, S. Gerwinn, and M. Bethge. Characterization of the p-generalized normal distribution. Journal of Multivariate Analysis, 100(5):817-820, May 2009.
[23] M. J. Wainwright and E. P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In Advances in Neural Information Processing Systems, volume 12, pages 855-861, 2000.
[24] Christoph Zetzsche, Gerhard Krieger, and Bernhard Wegmann. The atoms of vision: Cartesian or polar? Journal of the Optical Society of America A, 16(7):1554-1565, July 1999.
Modeling the spacing effect in sequential category
learning
Hongjing Lu
Department of Psychology & Statistics
[email protected]
Matthew Weiden
Department of Psychology
[email protected]
Alan Yuille
Department of Statistics, Computer Science & Psychology
University of California, Los Angeles
Los Angeles, CA 90095
[email protected]
Abstract
We develop a Bayesian sequential model for category learning. The sequential
model updates two category parameters, the mean and the variance, over time. We
define conjugate temporal priors to enable closed form solutions to be obtained.
This model can be easily extended to supervised and unsupervised learning involving multiple categories. To model the spacing effect, we introduce a generic
prior in the temporal updating stage to capture a learning preference, namely, less
change for repetition and more change for variation. Finally, we show how this approach can be generalized to efficiently perform model selection to decide whether
observations are from one or multiple categories.
1 Introduction
Inductive learning - the process by which a new concept or category is acquired through observation
of exemplars - poses a fundamental theoretical problem for cognitive science. When exemplars are
encountered sequentially, as is typical in everyday learning, then learning is influenced in systematic
ways by presentation order. One pervasive phenomenon is the spacing effect, manifested in the
finding that given a fixed amount of total study time with a given item, learning is facilitated when
presentations of the item are spread across a longer time interval rather than massed into a continuous
study period. In category learning, for example, exemplars of two categories can be spaced by
presenting them in an interleaved manner (e.g., A1 B1 A2 B2 A3 B3 ), or massed by presenting them
in consecutive blocks (e.g., A1 A2 A3 B1 B2 B3 ). Kornell & Bjork [1] show that when tested later on
classification of novel category members, spaced presentation yields superior performance relative
to massed presentation. Similar spacing effects have been obtained in studies of item learning [2]
and motor learning [3]. Moreover, spacing effects are found not only in human learning, but also in
various types of learning in other species, including rats and Aplysia [4][5].
In the present paper we will focus on spacing effects in the context of sequential category learning.
Standard statistical methods based on summary information are unable to deal with order effects,
including the performance difference between spaced and massed conditions. From a computational perspective, a sequential learning model is needed to construct category representations from
training examples and dynamically update parameters of these representations from trial to trial.
Bayesian sequential models have been successfully applied to model causal learning and animal
conditioning [6] [7]. In the context of category learning, if we assume that the representation for
each category can be specified by a Gaussian distribution where the mean µ and the variance σ² are
both random variables [8], then the learning model must aim to compute the posterior distribution
of the parameters for each category given all the observations x_t from trial 1 to trial t, P(µ, σ²|X_t).
However, given that both the mean and the variance of a category are random variables, standard
Kalman filtering [9] is not directly applicable in this case since it assumes a known variance, which
is not warranted in the current application.
In this paper, we extend traditional Kalman filtering in order to update two category parameters, the
mean and the variance, over time in the context of category learning. We define conjugate temporal priors to enable closed form solutions to be obtained in this learning model with two unknown
parameters. We will illustrate how the learning model can be easily extended to learning situations
involving multiple categories either with supervision (i.e., learners are informed of category membership for each training observation) or without supervision (i.e., category membership of each
training observation is not provided to learners). Surprisingly, we can also derive closed form solutions in the latter case. This reduces the need for employing particle filters as an approximation
to exact inference, commonly used in the case of unsupervised learning [10]. To model the spacing
effect, we introduce a generic prior in the temporal updating stage. Finally, we will show how this
approach can be generalized to efficiently perform model selection.
The organization of the present paper is as follows. In Section 2 we introduce the Bayesian sequential learning framework in the context of category learning, and discuss the conjugacy property
of the model. Section 3 and 4 demonstrate how to develop supervised and unsupervised learning
models, which can be compared with human performance. We draw general conclusions in section
5.
2 Bayesian sequential model
We adopt the framework of Bayesian sequential learning [11], termed Bayes-Kalman, a probabilistic
model in which learning is assumed to be a Markov process with unobserved states. The exemplars
in training are directly observable, but the representations of categories are hidden and unobservable. In this paper, we assume that categories can be represented as Gaussian distributions with two
unknown parameters, means and variances. These two unknown parameters need to be learned from
a limited number of exemplars (e.g., less than ten exemplars).
We now state the general framework and give the update rule for the simplest situation where the
training data is generated by a single category specified by a mean m and precision r; the precision
is the inverse of the variance and is used to simplify the algebra. Our model assumes that the mean
can change over time and is denoted by mt , where t is the time step. The model is specified by the
prior distribution P (m0 , r), the likelihood function P (x|mt , r) for generating the observations, and
the temporal prior P (mt+1 |mt ) specifying how mt can vary over time. Note that the precision r is
estimated over time, which differs from standard Kalman filtering where it is assumed to be known.
Bayes-Kalman [11] gives iterative equations to determine the posterior P(m_t, r|X_t) after a sequence
of observations X_t = {x_1, ..., x_t}. The update equations are divided into two stages, prediction and
correction:
P(m_{t+1}, r|X_t) = ∫_{-∞}^{∞} dm_t P(m_{t+1}|m_t) P(m_t, r|X_t),   (1)
P(m_{t+1}, r|X_{t+1}) = P(m_{t+1}, r|x_{t+1}, X_t) = P(x_{t+1}|m_{t+1}, r) P(m_{t+1}, r|X_t) / P(x_{t+1}|X_t).   (2)
Intuitively, the Bayes-Kalman first predicts the distribution P (mt+1 , r|Xt ) and then uses this as a
prior to correct for the new observation xt+1 and determine the new posterior P (mt+1 , r|Xt+1 ).
Note that the temporal prior P (mt+1 |mt ) implies that the model automatically pays most attention
to recent data and does not memorize the data, thus exhibiting sensitivity to the data ordering.
2.1 Conjugate priors
The distributions P (m0 , r), P (x|mt , r), P (mt+1 |mt ) are chosen to be conjugate, so that the distribution P (mt , r|Xt ) takes the same functional form as P (m0 , r). As shown in the following section,
this reduces the Bayes-Kalman equations to closed form update rules for the parameters of the distributions. The distributions are specified in terms of Gamma and Gaussian distributions:
g(r : α, β) = (β^α / Γ(α)) r^{α-1} exp{-βr},  r ≥ 0.  (Gamma)   (3)
G(x : µ, τ) = (τ / 2π)^{1/2} exp{-(τ/2)(x - µ)²}.  (Gaussian)   (4)
We specify the prior P(m_0, r) as the product of a Gaussian P(m_0|r) and a Gamma P(r):
P(m_0|r) = G(m_0 : µ, λr),    P(r) = g(r : α, β),   (5)
where α, β, µ, λ are the parameters of the distribution. For simplicity, we call this a Gamma-Gaussian distribution with parameters α, β, µ, λ.
The likelihood function and temporal prior are both Gaussians:
P(x_t|m_t, r) = G(x_t : m_t, ψr),    P(m_{t+1}|m_t) = G(m_{t+1} : m_t, γr),   (6)
where ψ, γ are constants.
The conjugacy of the distributions ensures that the posterior distribution P(m_t, r|X_t) will also be
a Gamma-Gaussian distribution with parameters α_t, β_t, µ_t, λ_t, where the update rules for these parameters are specified in the next section.
2.2 Update rules for the model parameters
The update rules for the model parameters follow from substituting the distributions into the Bayes-Kalman equations (1, 2). We sketch how these update rules are obtained, assuming that P(m_t, r|X_t)
is a Gamma-Gaussian with parameters α_t, β_t, µ_t, λ_t, which is true for t = 0 using equations (5,6).
The form of the prediction equation and the temporal prior, see equations (1,6), ensures that
P(m_{t+1}, r|X_t) is also a Gamma-Gaussian distribution with parameters α_t, β_t, µ_t, λ_t^p, where
λ_t^p = λ_t γ / (λ_t + γ).   (7)
The correction equation and the likelihood function, see equations (2,6), ensure that
The correction equation and the likelihood function, see equations (2,6), ensure that
P(m_{t+1}, r|X_{t+1}) is also Gamma-Gaussian with parameters α_{t+1}, β_{t+1}, µ_{t+1}, λ_{t+1}, given by:
α_{t+1} = α_t + 1/2,    µ_{t+1} = (ψ x_{t+1} + λ_t^p µ_t) / (ψ + λ_t^p),
β_{t+1} = β_t + ψ λ_t^p (x_{t+1} - µ_t)² / (2(ψ + λ_t^p)),    λ_{t+1} = ψ + λ_t^p.   (8)
Intuitively, the prediction only reduces the precision of m but makes no change to its mean or to the
distribution over r. By contrast, the new observation alters the mean of m (moving it closer to the
new observation xt+1 ), and also increases its precision, which sharpens the distribution on r.
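The two stages translate directly into code; a minimal Python sketch (ours; the prior values and the constants ψ, γ are illustrative):

    def predict(params, gamma_):
        """Prediction stage (Eq. 7): only the precision scale lambda shrinks."""
        alpha, beta, mu, lam = params
        return alpha, beta, mu, lam * gamma_ / (lam + gamma_)

    def correct(params, x, psi):
        """Correction stage (Eq. 8): absorb observation x into the posterior."""
        alpha, beta, mu, lam = params
        mu_new = (psi * x + lam * mu) / (psi + lam)
        beta_new = beta + psi * lam * (x - mu) ** 2 / (2.0 * (psi + lam))
        return alpha + 0.5, beta_new, mu_new, psi + lam

    params = (1.0, 1.0, 0.0, 1.0)            # illustrative (alpha, beta, mu, lambda) prior
    params = correct(predict(params, gamma_=1.0), x=0.4, psi=1.0)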
2.3 Model evidence
We also need to compute the probability of the observation sequence Xt from the model (which will
be used later for model selection). This can be expressed recursively as:
P(X_t) = P(x_t|X_{t-1}) P(x_{t-1}|X_{t-2}) ... P(x_1).   (9)
This computation is also simplified because we use conjugate distributions. The terms in equation (9) can be expressed as P(x_{t+1}|X_t) = ∫ dm_{t+1} dr P(x_{t+1}|m_{t+1}, r) P(m_{t+1}, r|X_t), and these
integrals can be calculated analytically, yielding:
P(x_{t+1}|X_t) = { β_t + ψλ_t (x - µ_t)² / (2(ψ + λ_t)) }^{-(α_t + 1/2)} · { (1/2π) ψλ_t / (ψ + λ_t) }^{1/2} · β_t^{α_t} Γ(α_t + 1/2) / Γ(α_t).   (10)
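A sketch of the corresponding log-predictive computation (ours; params packs (α_t, β_t, µ_t, λ_t)):

    import numpy as np
    from scipy.special import gammaln

    def log_predictive(params, x, psi):
        """log P(x_{t+1} | X_t) from Eq. (10)."""
        alpha, beta, mu, lam = params
        s = psi * lam / (psi + lam)
        return (0.5 * np.log(s / (2.0 * np.pi)) + alpha * np.log(beta)
                + gammaln(alpha + 0.5) - gammaln(alpha)
                - (alpha + 0.5) * np.log(beta + 0.5 * s * (x - mu) ** 2))

    # The sequence evidence of Eq. (9) is the running sum of these terms,
    # evaluated at the predicted parameters before each correction step.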
3 Supervised category learning
Although the learning model is presented for one category, it can easily be extended to learning
multiple categories with known category membership for training data (i.e., under supervision). In
this section, we will first describe an experiment with two categories to show how the category
representations change over time; then we will simulate learning with six categories and compare
predictions with human data in psychological experiments.
3.1 Two-category learning with supervision
We first conduct a synthetic experiment with two categories under supervision. We generate six
training observations from one of two one-dimensional Gaussian distributions (representing categories A and B, respectively) with means [-0.4, 0.4] and standard deviation of 0.4. Two training
conditions are included: a massed condition with the data presentation order AAABBB, and a
spaced condition with the order ABABAB.
To model the acquisition of category representations during training, we employ the Bayesian learning model as described in the previous section. In the correction stage of each trial, the model
updates the parameters corresponding to the category that produced the observation based on the
supervision (i.e., known category membership), following equation (8).
In the prediction stage, however, different values of the fixed model parameter γ are introduced to
incorporate a generic prior that controls how much the learner is willing to update category representations from one trial to the next. The basic hypothesis is that learners will have greater confidence in their knowledge of a category presented on trial t than of a category absent on trial t. As a
consequence, the learner will be willing to accept more change in a category representation if the
observation on the previous trial was drawn from a different category. This generic prior shares
some conceptual similarity with a model developed by Kording et al. [?], which assumes that the
moment-to-moment variance of the states is higher for faster timescales (p. 779).
More specifically, if the observation on trial t is from the first category, in the prediction phase we
update the λ_t parameters of the two categories, λ_t^1 and λ_t^2, as
λ_t^1 ↦ λ_t^1 γ_s / (λ_t^1 + γ_s),    λ_t^2 ↦ λ_t^2 γ_d / (λ_t^2 + γ_d),   (11)
in which γ_s > γ_d. In the simulations, we used γ_s = 50 and γ_d = 0.5.
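A minimal sketch of this prediction-stage update (ours), using the γ_s and γ_d values quoted above:

    def predict_two_categories(lam1, lam2, observed, gamma_s=50.0, gamma_d=0.5):
        """Eq. (11): the just-observed category keeps a large gamma (little change);
        the other category gets a small gamma (more change allowed)."""
        g1 = gamma_s if observed == 1 else gamma_d
        g2 = gamma_s if observed == 2 else gamma_d
        return lam1 * g1 / (lam1 + g1), lam2 * g2 / (lam2 + g2)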
Figure 1: Posterior distributions of means P (mt |Xt ) and precisions P (rt |Xt ) updated on training
trials in two-category supervised learning. Blue lines indicate parameters of the first category and red lines indicate parameters of the second category. The top panel shows the results
for the massed condition (i.e., AAABBB), and the bottom panel shows the results for the spaced
condition (i.e., ABABAB). Please see in colour. We show the distributions only on even trials to
save space. See section 3.1.
Figure (1) shows the change of posterior distributions of the two unknown category parameters,
means P (mt |Xt ) and precisions P (rt |Xt ), over training trials. Figure (2) shows the category representation in the form of the posterior distribution of P (xt |Xt ). In the massed condition (i.e.,
AAABBB), the variance of the first category decreases over the first three trials, and then increases
over the second three trials because the observations are from the second category. The increase of
category variance reflects the forgetting that occurs if no new observations are provided for a particular category after a long interval. This type of forgetting does not occur in the spaced condition, as
the interleaved presentation order ABABAB ensured that each category recurs after a short interval.
Based upon the learned category representations, we can compute accuracy (the ability to discriminate between the two learnt distributions) using the posterior distributions of the two categories.
After 100 simulations, the average accuracy in the massed condition is 0.78, which is lower than the
0.84 accuracy in the spaced condition. Thus our model is able to predict the spacing effect found in
two-category supervised learning.
Figure 2: Posterior distribution of each category, P(x_t|X_t), updated on training trials in the two-category supervised learning. Same conventions as in figure (1). See section 3.1.
3.2 Modeling the spacing effect in six-category learning
Kornell and Bjork [1] asked human subjects to study six paintings by each of six different artists, with a
given artist's paintings presented consecutively (massed) or interleaved with other artists' paintings
(spaced). In the training phase, subjects were informed which artist created each training painting.
The same 36 paintings were studied in the training phase, but with different presentation orders
in the massed and spaced conditions. In the subsequent test phase, six new paintings (one from
each artist) were presented and subjects had to identify which artist painted each of a series of new
paintings. Four test blocks were administered, with random display order of the artists. In each test block,
participants were given feedback after making an identification response. Paintings presented in one
test block thus served as training examples for the subsequent test block. Human results are shown
in figure (4). Human subjects showed significantly better test performance after spaced than massed
training. Given that feedback was provided and one painting from each artist appeared in one test
block, it is not surprising that test performance increased across test blocks and the spacing effect
decreased with more test blocks.
To simulate the data, we generated training and test data from six one-dimensional Gaussian distributions with means [-2, -1.2, -0.4, 0.4, 1.2, 2] and standard deviation of 0.4. Figure (3) shows the
learned category representations in terms of posterior distributions. Depending on the presentation
order of training data (massed or spaced), the learned distributions differ in terms of means and variances for each category. To compare with human performance reported by Kornell and Bjork, the
model estimates accuracy in terms of discrimination between the two categories based upon learned
distributions. Figure (4) shows average accuracy from 1000 simulations. The result plot illustrates
that the model predictions match human performance well.
4 Unsupervised category learning
Both humans and animals can learn without supervision. For example, in the animal conditioning
literature, various studies have shown that exposing two stimuli in blocks (equivalent to a massed
condition) is less effective in producing generalization [12]. Balleine et al. [4] found that with rats,
preexposure to two stimuli A and B (massed or spaced) determines the degree to which backward
blocking is subsequently obtained: backward blocking occurs if the preexposure is spaced but not
Figure 3: Posterior distribution of each category, P(x_t|X_t), updated on training trials in the six-category supervised learning. Same conventions as in figure (1). See section 3.2.
Figure 4: Human performance (left) and model prediction (right). Proportion correct as a function
of presentation training conditions (massed and spaced) and test block. See section 3.2.
if it is massed. They conclude that in the massed preexposure the rats are unable to distinguish
two separate categories for A and B, and therefore treat them as members of a single category. By
contrast, they conclude that rats can distinguish the categories A and B in the spaced preexposure.
In this section, we generalize the sequential category model to unsupervised learning, when the category membership of each training example is not provided to observers. We first derive the extension
of the sequential model to this case (surprisingly, showing we can obtain all results in closed form).
Then we determine whether massed and spaced stimuli (as in Balleine et al.'s experiment [4]) are
most likely to have been generated by a single category or by two categories. We also assess the
importance of supervision in training by comparing performance after unsupervised learning with
that after supervised learning.
We consider a model with two hidden categories. Each category can be represented as a Gaussian
distribution with a mean and precision, (m^1, r_1) and (m^2, r_2). The likelihood function assumes that the
data is generated by either category with equal probability, since the category membership is not
provided,
P(x|m^1, r_1, m^2, r_2) = (1/2) P(x|m^1, r_1) + (1/2) P(x|m^2, r_2),   (12)
with P(x|m^1, r_1) = G(x : m^1, ψr_1), P(x|m^2, r_2) = G(x : m^2, ψr_2).   (13)
We specify prior distributions and temporal priors as before:
P(m_0^1, r_1) = G(m_0^1 : µ_1, λr_1),    P(m_0^2, r_2) = G(m_0^2 : µ_2, λr_2),   (14)
P(m_{t+1}^1 | m_t^1) = G(m_{t+1}^1 : m_t^1, γr_1),    P(m_{t+1}^2 | m_t^2) = G(m_{t+1}^2 : m_t^2, γr_2).   (15)
The joint posterior distribution P(m_t^1, r_1, m_t^2, r_2|X_t) after observations X_t can be formally obtained by applying the Bayes-Kalman update rules to the joint distribution (i.e., replacing (m_t, r) by
(m_t^1, r_1, m_t^2, r_2) in equations (1,2)). But this update is more complicated because we do not know
whether the new observation xt should be assigned to category 1 or category 2. Instead we have to
sum over all the possible assignments of the observations to the categories, which gives 2^t possible
assignments at time t. This can be performed efficiently in a recursive manner. Let At denote the set
of possible assignments at time t, where each assignment is a string (a_1, ..., a_t) of binary variables
of length t: (1, ..., 1) is the assignment where all the observations are assigned to category 1,
(2, 1, ..., 1) assigns the first observation to category 2 and the remainder to category 1, and so on.
By substituting equations (12,14,15) into Bayes-Kalman we can obtain an iterative update equation
for P (m1t , r1 , m2t , r2 |Xt ). At time t we represent:
P(m_t^1, r_1, m_t^2, r_2 | X_t) = Σ_{(a_1,...,a_t) ∈ A_t} P(m^1, r | θ^1_{(a_1,...,a_t)}) P(m^2, r | θ^2_{(a_1,...,a_t)}) P(a_1, ..., a_t | X_t),   (16)
where θ^i_{(a_1,...,a_t)} denotes the values of the parameters θ = (α, β, µ, λ) for category i (i ∈ {1, 2})
under observation-assignment sequence (a_1, ..., a_t), and P(a_1, ..., a_t) is the probability of assignment (a_1, ..., a_t).
At t = 0 there is no observation sequence and P(m_0^1, r_1, m_0^2, r_2 | X_0) = P(m^1, r | θ^1) P(m^2, r | θ^2),
which corresponds to A_0 containing a single element which has probability one.
The prediction stage updates the λ component of θ^i_{(a_1,...,a_t)} by:
λ^i_{(a_1,...,a_t)} ↦ γ^i(a_t) λ^i_{(a_1,...,a_t)} / (γ^i(a_t) + λ^i_{(a_1,...,a_t)}).   (17)
We define γ^i(a_t) to be larger if i = a_t and smaller if i ≠ a_t, as specified in equation (11), to incorporate
the generic prior described in Section 3.1.
The correction stage at time t + 1 introduces another observation, which must be assigned to the
two categories. This gives a new set A_{t+1} of 2^{t+1} assignments of the form (a_1, ..., a_{t+1}) and a new
posterior:
P(m_{t+1}^1, r_1, m_{t+1}^2, r_2 | X_{t+1}) = Σ_{(a_1,...,a_{t+1}) ∈ A_{t+1}} P(m^1, r | θ^1_{(a_1,...,a_{t+1})}) P(m^2, r | θ^2_{(a_1,...,a_{t+1})}) P(a_1, ..., a_{t+1} | X_{t+1}),   (18)
where we compute θ^i_{(a_1,...,a_{t+1})} for i ∈ {1, 2} by:
α^i_{(a_1,...,a_{t+1})} = α^i_{(a_1,...,a_t)} + 1/2,
µ^i_{(a_1,...,a_{t+1})} = (ψ x_{t+1} + λ^i_{(a_1,...,a_t)} µ^i_{(a_1,...,a_t)}) / (ψ + λ^i_{(a_1,...,a_t)}),
β^i_{(a_1,...,a_{t+1})} = β^i_{(a_1,...,a_t)} + ψ λ^i_{(a_1,...,a_t)} (x_{t+1} - µ^i_{(a_1,...,a_t)})² / (2(ψ + λ^i_{(a_1,...,a_t)})),
λ^i_{(a_1,...,a_{t+1})} = ψ + λ^i_{(a_1,...,a_t)},   (19)
and we compute P(a_1, ..., a_{t+1}) by:
P(a_1, ..., a_{t+1} | X_{t+1}) = P(x_{t+1} | θ^{a_{t+1}}_{(a_1,...,a_t)}) P(a_1, ..., a_t) / Σ_{(a_1,...,a_{t+1})} P(x_{t+1} | θ^{a_{t+1}}_{(a_1,...,a_t)}) P(a_1, ..., a_t),   (20)
where
P(x_{t+1} | θ^{a_{t+1}}_{(a_1,...,a_t)}) = ∫ dm^{a_{t+1}} dr^{a_{t+1}} P(x_{t+1} | m^{a_{t+1}}, r^{a_{t+1}}) P(m^{a_{t+1}}, r^{a_{t+1}} | θ^{a_{t+1}}_{(a_1,...,a_t)}).   (21)
The evidence used for model selection can, as before, be expressed as P(x_t|X_{t-1}) P(x_{t-1}|X_{t-2}) ... P(x_1), where
P(x_{t+1} | X_t) = Σ_{(a_1,...,a_t) ∈ A_t} P(x_{t+1} | θ^{a_{t+1}}_{(a_1,...,a_t)}) P(a_1, ..., a_t).   (22)
We can now address the problem posed by Balleine et al.'s preexposure experiments [4]: why
do rats identify a single category for the massed stimuli but two categories for the spaced stimuli?
We treat this as a model selection problem. We compare the evidence for the sequential model
with one category, see equations (9,10), versus the evidence for the model with two categories, see
equations (9,22), for the two cases AAABBB (massed) and ABABAB (spaced).
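A brute-force sketch of this comparison (ours; it reuses the predict, correct and log_predictive helpers from the sketches above, and is exponential in the number of trials, which is feasible for the six observations used here):

    import numpy as np
    from itertools import product
    from scipy.special import logsumexp

    def one_category_log_evidence(xs, prior, psi, gamma_):
        """log P(x_1..x_T) under a single category (Eqs. 9-10)."""
        params, total = prior, 0.0
        for x in xs:
            params = predict(params, gamma_)
            total += log_predictive(params, x, psi)
            params = correct(params, x, psi)
        return total

    def two_category_log_evidence(xs, prior1, prior2, psi, gamma_s, gamma_d):
        """log P(x_1..x_T) under two categories, enumerating all 2^T assignments."""
        log_ws = []
        for assign in product((1, 2), repeat=len(xs)):
            p1, p2 = prior1, prior2
            log_w = -len(xs) * np.log(2.0)           # uniform assignment prior
            for x, a in zip(xs, assign):
                p1 = predict(p1, gamma_s if a == 1 else gamma_d)
                p2 = predict(p2, gamma_s if a == 2 else gamma_d)
                log_w += log_predictive(p1 if a == 1 else p2, x, psi)
                if a == 1:
                    p1 = correct(p1, x, psi)
                else:
                    p2 = correct(p2, x, psi)
            log_ws.append(log_w)
        return logsumexp(log_ws)

Comparing the two quantities for an AAABBB sequence versus an ABABAB sequence then gives the evidence ratio plotted in Figure 5.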
We use the same data as described in section (3.1) but without providing category membership for
any of the training data. The left plot in figure (5) shows the result obtained by comparing model
evidence for the one-category model with model evidence for the two-category model. A greater
ratio value indicates greater support for the one-category account. As shown in figure (5), the model
decides that all training observations are from one category in the massed condition, but from two
different categories in the spaced condition (using zero as the decision threshold). These predictions
agree with Balleine et al.'s findings.
Figure 5: Model selection and accuracy results. Left, model selection results as a function of presentation training conditions (massed and spaced). A greater ratio indicates more support for the
one-category account. Error bars indicate the standard error from 100 simulations. See section 4.2.
Right, comparison of supervised and unsupervised learning in terms of accuracy. See section 4.3.
To assess the influence of supervision on learning, we compare the performance of supervised
learning (described in section (3.1)) with that of unsupervised learning (described in this section). To make
the comparison, we assume that learners are provided with the same training data and are informed
that the data are from two different categories, either with known category membership (supervised)
or unknown category membership (unsupervised) for each training observation. Accuracy, measured
by discrimination between the two categories, is compared in the right plot of figure (5). The model
predicts higher accuracy for supervised than for unsupervised learning. Furthermore, the model predicts a spacing effect for both types of learning, although the effect is reduced with unsupervised
learning.
5 Conclusions
In this paper, we develop a Bayesian sequential model for category learning by updating category
representations over time based on two category parameters, the mean and the variance. Analytic
updating rules are obtained by defining conjugate temporal priors to enable closed form solutions.
A generic prior in the temporal updating stage is introduced to model the spacing effect. Parameter
estimation and model selection can be performed on the basis of updating rules. The current work
extends standard Kalman filtering, and is able to predict learning phenomena that have been observed
for humans and other animals.
In addition to explaining the spacing effect, our model predicts that subjects will become less certain
about their knowledge of learned categories as time passes, see the increase in category variance in
Figure 2. But our model is not a standard Kalman filter (since the measurement variance is unknown),
so we do not predict exponential decay. Instead, as shown in Equation 10, our model predicts the
pattern of power-law forgetting that is fairly universal in human memory [14].
For small numbers of observations, our model is extremely efficient because we can derive analytic
solutions. For example, the analytic solutions for unsupervised learning require only 0.2 seconds
for six observations, while numerical integration takes 18 minutes. However, our model will scale
exponentially with the number of observations in unsupervised learning. Future work will include
a pruning strategy to keep the complexity practical.
Acknowledgement
This research was supported by a grant from the Air Force (FA 9550-08-1-0489).
References
[1] Kornell, N., & Bjork, R. A. (2008a). Learning concepts and categories: Is spacing the "enemy of
induction"? Psychological Science, 19, 585–592.
[2] Bahrick, H.P., Bahrick, L.E., Bahrick, A.S., & Bahrick, P.E. (1993). Maintenance of foreign language
vocabulary and the spacing effect. Psychological Science, 4, 316–321.
[3] Shea, J.B., & Morgan, R.L. (1979). Contextual interference effects on the acquisition, retention, and
transfer of a motor skill. Journal of Experimental Psychology: Human Learning and Memory, 5, 179–187.
[4] Balleine, B. W., Espinet, A., & Gonzalez, F. (2005). Perceptual learning enhances retrospective revaluation of conditioned flavor preferences in rats. Journal of Experimental Psychology: Animal Behavior
Processes, 31(3), 341–350.
[5] Carew, T.J., Pinsker, H.M., & Kandel, E.R. (1972). Long-term habituation of a defensive withdrawal
reflex in Aplysia. Science, 175, 451–454.
[6] Daw, N., Courville, A. C., & Dayan, P. (2007). Semi-rational models of conditioning: The case of trial
order. In M. Oaksford and N. Chater (Eds.), The probabilistic mind: Prospects for rational models of
cognition. Oxford: Oxford University Press.
[7] Dayan, P., & Kakade, S. (2000). Explaining away in weight space. In T. K. Leen et al. (Eds.), Advances
in neural information processing systems (Vol. 13, pp. 451–457). Cambridge, MA: MIT Press.
[8] Fried, L. S., & Holyoak, K. J. (1984). Induction of category distributions: A framework for classification
learning. Journal of Experimental Psychology: Learning, Memory and Cognition, 10, 234–257.
[9] Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Transactions of the
ASME–Journal of Basic Engineering, 82, 35–45.
[10] Schubert, J., & Sidenbladh, H. (2005). Sequential clustering with particle filters: Estimating the number
of clusters from data. 7th International Conference on Information Fusion (FUSION).
[11] Ho, Y-C., & Lee, R.C.K. (1964). A Bayesian approach to problems in stochastic estimation and control.
IEEE Transactions on Automatic Control, 9, 333–339.
[12] Honey, R. C., Bateson, P., & Horn, G. (1994). The role of stimulus comparison in perceptual learning:
An investigation with the domestic chick. Quarterly Journal of Experimental Psychology: Comparative
and Physiological Psychology, 47(B), 83–103.
[13] Kording, K. P., Tenenbaum, J. B., & Shadmehr, R. (2007). The dynamics of memory as a consequence
of optimal adaptation to a changing body. Nature Neuroscience, 10, 779–786.
[14] Anderson, J. R., & Schooler, L. J. (1991). Reflections of the environment in memory. Psychological
Science, 2, 395–408.
2,917 | 3,644 | Whose Vote Should Count More:
Optimal Integration of Labels from Labelers of
Unknown Expertise
Jacob Whitehill, Paul Ruvolo, Tingfan Wu, Jacob Bergsma, and Javier Movellan
Machine Perception Laboratory
University of California, San Diego
La Jolla, CA, USA
{ jake, paul, ting, jbergsma, movellan }@mplab.ucsd.edu
Abstract
Modern machine learning-based approaches to computer vision require very large
databases of hand labeled images. Some contemporary vision systems already
require on the order of millions of images for training (e.g., Omron face detector
[9]). New Internet-based services allow for a large number of labelers to collaborate around the world at very low cost. However, using these services brings
interesting theoretical and practical challenges: (1) The labelers may have wide
ranging levels of expertise which are unknown a priori, and in some cases may
be adversarial; (2) images may vary in their level of difficulty; and (3) multiple
labels for the same image must be combined to provide an estimate of the actual
label of the image. Probabilistic approaches provide a principled way to approach
these problems. In this paper we present a probabilistic model and use it to simultaneously infer the label of each image, the expertise of each labeler, and the
difficulty of each image. On both simulated and real data, we demonstrate that
the model outperforms the commonly used "Majority Vote" heuristic for inferring
image labels, and is robust to both noisy and adversarial labelers.
1 Introduction
In recent years machine learning-based approaches to computer vision have helped to greatly accelerate progress in the field. However, it is now becoming clear that many practical applications
require very large databases of hand labeled images. The labeling of very large datasets is becoming
a bottleneck for progress. One approach to addressing this looming problem is to make use of the vast
human resources on the Internet. Indeed, projects like the ESP game [17], the Listen game [16], Soylent Grid [15], and reCAPTCHA [18] have revealed the possibility of harnessing human resources to
solve difficult machine learning problems. While these approaches use clever schemes to obtain data
from humans for free, a more direct approach is to hire labelers online. Recent Web tools such as
Amazon's Mechanical Turk [1] provide ideal solutions for high-speed, low cost labeling of massive
databases.
Due to the distributed and anonymous nature of these tools, interesting theoretical and practical
challenges arise. For example, principled methods are needed to combine the labels from multiple
experts and to estimate the certainty of the current labels. Which image should be labeled (or
relabeled) next must also be decided: it may be prudent, for example, to collect many labels for
each image in order to increase one?s confidence in that image?s label. However, if an image is easy
and the labelers of that image are reliable, a few labels may be sufficient and valuable resources may
be used to label other images. In practice, combining the labels of multiple coders is a challenging
process due to the fact that: (1) The labelers may have wide ranging levels of expertise which are
unknown a priori, and in some cases may be adversarial; (2) images may also vary in their level of
difficulty, in a manner that may also be unknown a priori.
Probabilistic methods provide a principled way to approach this problem using standard inference
tools. We explore one such approach by formulating a probabilistic model of the labeling process,
which we call GLAD (Generative model of Labels, Abilities, and Difficulties), and using inference
methods to simultaneously infer the expertise of each labeler, the difficulty of each image, and the
most probable label for each image. On both simulated and real-life data, we demonstrate that the
model outperforms the commonly used "Majority Vote" heuristic for inferring image labels, and is
robust to both adversarial and noisy labelers.
2 Modeling the Labeling Process
Consider a database of n images, each of which belongs to one of two possible categories of interest
(e.g., face/non-face; male/female; smile/non-smile; etc.). We wish to determine the class label Zj
(0 or 1) of each image j by querying from m labelers. The observed labels depend on several causal
factors: (1) the difficulty of the image; (2) the expertise of the labeler; and (3) the true label. We
model the difficulty of image j using the parameter 1/βj ∈ [0, ∞), where βj is constrained to be
positive. Here 1/βj = ∞ means the image is very ambiguous and hence even the most proficient
labeler has a 50% chance of labeling it correctly. 1/βj = 0 means the image is so easy that even the
most obtuse labeler will always label it correctly.
The expertise of each labeler i is modeled by the parameter αi ∈ (−∞, +∞). Here αi = +∞
means the labeler always labels images correctly; αi = −∞ means the labeler always labels the images
incorrectly, i.e., he/she can distinguish between the two classes perfectly but always inverts the label,
either maliciously or because of a consistent misunderstanding. In this case (αi < 0), the labeler
is said to be adversarial. Finally, αi = 0 means that the labeler cannot discriminate between the
two classes; his/her labels carry no information about the true image label Zj. Note that we do not
require the labelers to be human; labelers can also be, for instance, automatic classifiers. Hence,
the proposed approach will provide a principled way of combining labels from any combination of
human and previously existing machine-based classifiers.
The labels given by labeler i to image j (which we call the given labels) are denoted as Lij and,
under the model, are generated as follows:

p(Lij = Zj | αi, βj) = 1 / (1 + e^{−αi βj})    (1)

Thus, under the model, the log odds for the obtained labels being correct are a bilinear function
of the difficulty of the image and the expertise of the labeler, i.e.,

log [ p(Lij = Zj) / (1 − p(Lij = Zj)) ] = αi βj    (2)
More skilled labelers (higher αi) have a higher probability of labeling correctly. As the difficulty
1/βj of an image increases, the probability of the label being correct moves toward 0.5. Similarly,
as the labeler's expertise decreases (lower αi), the chance of correctness likewise drops to 0.5.
Adversarial labelers are simply labelers with negative α.
Figure 1 shows the causal structure of the model. True image labels Zj, labeler accuracy values αi,
and image difficulty values βj are sampled from a known prior distribution. These determine the
observed labels according to Equation 1. Given a set of observed labels l = {lij}, the task is to infer
simultaneously the most likely values of Z = {Zj} (the true image labels) as well as the labeler
accuracies α = {αi} and the image difficulty parameters β = {βj}. In the next section we derive
the Maximum Likelihood algorithm for inferring these values.
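As a concrete illustration of this generative process, the following sketch samples labeler abilities, image difficulties, and observed labels from the model. It is our own minimal simulation, not code from the authors; the Gaussian draws for α and log β mirror the simulation setup of Section 4, and the variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_labelers, n_images = 20, 2000

alpha = rng.normal(1.0, 1.0, size=n_labelers)        # labeler expertise
beta = np.exp(rng.normal(1.0, 1.0, size=n_images))   # inverse image difficulty
z = rng.integers(0, 2, size=n_images)                # true labels Z_j

# Probability that labeler i reports the true label of image j (Equation 1)
p_correct = 1.0 / (1.0 + np.exp(-np.outer(alpha, beta)))

# Observed labels: the true label with probability p_correct, flipped otherwise
correct = rng.random((n_labelers, n_images)) < p_correct
labels = np.where(correct, z[None, :], 1 - z[None, :])
```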
3 Inference
The observed labels are samples from the {Lij} random variables. The unobserved variables are
the true image labels Zj, the different labeler accuracies αi, and the image difficulty parameters
1/βj. Our goal is to efficiently search for the most probable values of the unobservable variables
Z, α and β given the observed data.
[Figure 1 appears here: a graphical model with image difficulties β1,...,βn, true labels Z1,...,Zn,
observed labels Lij, and labeler accuracies α1,...,αm.]
Figure 1: Graphical model of image difficulties, true image labels, observed labels, and labeler
accuracies. Only the shaded variables are observed.
Here we can use the Expectation-Maximization (EM) approach
to obtain maximum likelihood estimates of the parameters of interest (the full derivation is in the
Supplementary Materials):

E step: Let the set of all given labels for an image j be denoted as lj = {lij′ | j′ = j}. Note
that not every labeler must label every single image. In this case, the index variable i in lij′ refers
only to those labelers who labeled image j. We need to compute the posterior probabilities of all
zj ∈ {0, 1} given the α, β values from the last M step and the observed labels:

p(zj | l, α, β) = p(zj | lj, α, βj) ∝ p(zj | α, βj) p(lj | zj, α, βj) ∝ p(zj) ∏_i p(lij | zj, αi, βj),

where we noted that p(zj | α, βj) = p(zj) using the conditional independence assumptions from the
graphical model.
M step: We maximize the standard auxiliary function Q, which is defined as the expectation of the
joint log-likelihood of the observed and hidden variables (l, Z) given the parameters (α, β), w.r.t. the
posterior probabilities of the Z values computed during the last E step:

Q(α, β) = E[ln p(l, z | α, β)]
        = E[ln ∏_j ( p(zj) ∏_i p(lij | zj, αi, βj) )]    (since the lij are cond. indep. given z, α, β)
        = Σ_j E[ln p(zj)] + Σ_{ij} E[ln p(lij | zj, αi, βj)],

where the expectation is taken over z given the old parameter values α_old, β_old as estimated during
the last E-step. Using gradient ascent, we find values of α and β that locally maximize Q.
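A compact implementation of this EM loop might look as follows. This is a sketch under our own assumptions (a uniform prior p(zj) = 0.5, every labeler labeling every image, a fixed number of gradient steps in place of a converged inner optimization, and no priors on α, β), intended only to make the E and M steps above concrete.

```python
import numpy as np

def glad_em(labels, n_iters=50, lr=0.01, m_steps=25):
    """EM for GLAD. labels: (n_labelers, n_images) array of 0/1 given labels."""
    m, n = labels.shape
    alpha, log_beta = np.ones(m), np.zeros(n)
    post1 = np.full(n, 0.5)
    for _ in range(n_iters):
        beta = np.exp(log_beta)
        logit = np.outer(alpha, beta)                 # alpha_i * beta_j
        log_sig = -np.logaddexp(0.0, -logit)          # log p(label correct)
        log_one_minus = -np.logaddexp(0.0, logit)     # log p(label incorrect)
        # E step: posterior probability that z_j = 1
        ll1 = np.where(labels == 1, log_sig, log_one_minus).sum(axis=0)
        ll0 = np.where(labels == 0, log_sig, log_one_minus).sum(axis=0)
        post1 = 1.0 / (1.0 + np.exp(ll0 - ll1))
        # M step: a few gradient ascent steps on Q(alpha, beta)
        for _ in range(m_steps):
            beta = np.exp(log_beta)                   # optimize log(beta) so beta > 0
            sigma = 1.0 / (1.0 + np.exp(-np.outer(alpha, beta)))
            # expected indicator that labeler i was correct on image j
            e_correct = post1[None, :] * labels + (1 - post1[None, :]) * (1 - labels)
            grad = e_correct - sigma                  # dQ / d(alpha_i * beta_j)
            d_alpha = (grad * beta[None, :]).sum(axis=1)
            d_log_beta = (grad * alpha[:, None]).sum(axis=0) * beta
            alpha += lr * d_alpha
            log_beta += lr * d_log_beta
    return alpha, np.exp(log_beta), post1
```

Optimizing log(β) rather than β anticipates the re-parameterization discussed in Section 3.1, which keeps the difficulty parameters positive.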
3.1 Priors on α, β
The Q function can be modified straightforwardly to handle a prior over each αi and βj by adding a
log-prior term for each of these variables. These priors may be useful, for example, if we know that
most labelers are not adversarial. In this case, the prior for α can be made very low for α < 0.
The prior probabilities are also useful when the ground-truth Z value of particular images is (somehow) known for certain. By "clamping" the Z values (using the prior) for the images on which the
true label is known for sure, the model may be able to better estimate the other parameters. The Z
values for such images can be clamped by setting the prior probability p(zj) (used in the E-step) for
these images to be very high towards one particular class. In our implementation we used Gaussian
priors (μ = 1, σ = 1) for α. For β, we need a prior that does not generate negative values. To do so,
we re-parameterized β = e^{β′} and imposed a Gaussian prior (μ = 1, σ = 1) on β′.
3.2 Computational Complexity
The computational complexity of the E-Step is linear in the number of images and the total number
of labels. For the M-step, the values of Q and ∇Q must be computed repeatedly until convergence.¹
Computing each function is linear in the number of images, number of labelers, and total number of
image labels.
Empirically when using the approach on a database of 1 million images that we recently collected
and labeled we found that the EM procedure converged in about 10 minutes using a single core of
a Xeon 2.8 GHz processor. The algorithm is parallelizable and hence this running time could be
reduced substantially using multiple cores. Real time inference may also be possible if we maintain
parameters close to the solution that are updated as new labels become available. This would allow
using the algorithm in an active manner to choose in real-time which images should be labeled next
so as to minimize the uncertainty about the image labels.
4 Simulations
Here we explore the performance of the model using a set of image labels generated by the model
itself. Since, in this case, we know the parameters Z, α, and β that generated the observed labels,
we can compare them with corresponding parameters estimated using the EM procedure.
In particular, we simulated between 4 and 20 labelers, each labeling 2000 images, whose true labels
Z were either 0 or 1 with equal probability. The accuracy αi of each labeler was drawn from a normal
distribution with mean 1 and variance 1. The inverse-difficulty βj for each image was generated
by exponentiating a draw from a normal distribution with mean 1 and variance 1. Given these
labeler abilities and image difficulties, the observed labels lij were sampled according to Equation
1 using Z. Finally, the EM inference procedure described above was executed to estimate α, β, Z.
This procedure was repeated 40 times to smooth out variability between trials. On each trial we
computed the correlation between the parameter estimates α̂, β̂ and the true parameter values α, β.
The results (averaged over all 40 experimental runs) are shown in Figure 2. As expected, as the
number of labelers grows, the parameter estimates converge to the true values.
We also computed the proportion of label estimates Ẑ that matched the true image labels Z. We
compared the maximum likelihood estimates of the GLAD model to estimates obtained by taking
the majority vote as the predicted label. The predictions of the proposed GLAD model were obtained by thresholding at 0.5 the posterior probability of the label of each image being of class 1
given the accuracy and difficulty parameters returned by EM (see Section 3). Results are shown
in Figure 2. GLAD makes fewer errors than the majority vote heuristic. The difference between
the two approaches is particularly pronounced when the number of labelers per image is small. On
many images, GLAD correctly infers the true image label Z even when that Z value was the minority opinion. In essence, GLAD is exploiting the fact that some labelers are experts (which it
infers automatically), and hence their votes should count more on these images than the votes of less
skilled labelers.
Modeling Image Difficulty: To explore the importance of estimating image difficulty we performed
a simple simulation: Image labels (0 or 1) were assigned randomly (with equal probability) to 1000
images. Half of the images were "hard", and half were "easy". Fifty simulated labelers labeled all
1000 images. The proportion of "good" to "bad" labelers is 25:1. The probability of correctness for
each image difficulty and labeler quality combination was given by the table below:
¹ The libgsl conjugate gradient descent optimizer we used requires both Q and ∇Q.
[Figure 2 appears here: the left panel ("Effect of Number of Labelers on Accuracy") plots the
proportion of labels correct for GLAD and Majority Vote against the number of labelers; the right
panel ("Effect of Number of Labelers on Parameter Estimates") plots the Pearson correlation for
alpha and the Spearman correlation for beta between estimated and true parameters.]
Figure 2: Left: The accuracies of the GLAD model versus simple voting for inferring the underlying
class labels on simulation data. Right: The ability of GLAD to recover the true alpha and beta
parameters on simulation data.
                          Image Type
                          Hard    Easy
Labeler type    Good      0.95    1
                Bad       0.54    1
We measured performance in terms of proportion of correctly estimated labels. We compared three
approaches: (1) our proposed method, GLAD; (2) the method proposed in [5], which models labeler
ability but not image difficulty; and (3) Majority Vote. The simulations were repeated 20 times
and average performance calculated for the three methods. The results shown below indicated that
modeling image difficulty can result in significant performance improvements.
Method               Error
GLAD                 4.5%
Majority Vote        11.2%
Dawid & Skene [5]    8.4%

4.1 Stability of EM under Various Starting Points
Empirically we found that the EM procedure was fairly insensitive to varying the starting point of the
parameter values. In a simulation study of 2000 images and 20 labelers, we randomly selected each
αi ∼ U[0, 4] and log(βj) ∼ U[0, 3], and EM was run until convergence. Over the 50 simulation
runs, the average percent-correct of the inferred labels was 85.74%, and the standard deviation of
the percent-correct over all the trials was only 0.024%.
5 Empirical Study I: Greebles
As a first test-bed for GLAD using real data obtained from the Mechanical Turk, we posted pictures
of 100 "Greebles" [6], which are synthetically generated images that were originally created to study
human perceptual expertise. Greebles somewhat resemble human faces and have a "gender": Males
have horn-like organs that point up, whereas for females the horns point down. See Figure 3 (left)
for examples. Each of the 100 Greeble images was labeled by 10 different human coders on the Turk
for gender (male/female). Four Greebles of each gender (separate from the 100 labeled images) were
given as examples of each class. Shown at a resolution of 48x48 pixels, the task required careful
inspection of the images in order to label them correctly. The ground-truth gender values were all
known with certainty (since they are rendered objects) and thus provided a means of measuring the
accuracy of inferred image labels.
[Figure 3 (right panel) appears here: "Inferred Label Accuracy of Greeble Images", plotting
accuracy (% correct) of GLAD and Majority Vote against the number of labels per image (2 to 8).]
Figure 3: Left: Examples of Greebles. The top two are "male" and the bottom two are "female."
Right: Accuracy of the inferred labels, as a function of the number of labels M obtained for each
image, of the Greeble images using either GLAD or Majority Vote. Results were averaged over 100
experimental runs.
We studied the effect of varying the number of labels M obtained from different labelers for each
image, on the accuracy of the inferred Z. Hence, from the 10 labels total we obtained per Greeble
image, we randomly sampled 2 ≤ M ≤ 8 labels over all labelers during each experimental trial. On
each trial we compared the accuracy of labels Z as estimated by GLAD (using a threshold of 0.5
on p(Z)) to labels as estimated by the Majority Vote heuristic. For each value of M we averaged
performance for each method over 100 trials.
Results are shown in Figure 3 (right). For all values of M we tested, the accuracy of the labels
inferred by GLAD is significantly higher than for Majority Vote (p < 0.01). This means that, in order
to achieve the same level of accuracy, fewer labels are needed. Moreover, the variance in accuracy was less for
GLAD than for Majority Vote for all M that were tested, suggesting that the quality of GLAD's
outputs is more stable than that of the heuristic method. Finally, notice how, for the even values of M,
the Majority Vote accuracy decreases. This may stem from the lack of an optimal decision rule under
Majority Vote when as many labelers say an image is Male as say it is Female.
GLAD, since it makes its decisions by also taking ability and difficulty into account, does not suffer
from this problem.
6 Empirical Study II: Duchenne Smiles
As a second experiment, we used the Mechanical Turk to label face images containing smiles as
either Duchenne or Non-Duchenne. A Duchenne smile ("enjoyment" smile) is distinguished from a
Non-Duchenne ("social" smile) through the activation of the Orbicularis Oculi muscle around the
eyes, which the former exhibits and the latter does not (see Figure 4 for examples). Distinguishing
the two kinds of smiles has applications in various domains including psychology experiments,
human-computer interaction, and marketing research. Reliable coding of Duchenne smiles is a
difficult task even for certified experts in the Facial Action Coding System.
We obtained Duchenne/Non-Duchenne labels for 160 images from 20 different Mechanical Turk
labelers; in total, there were 3572 labels. (Hence, labelers labeled each image a variable number of
times.) For ground truth, these images were also labeled by two certified experts in the Facial Action
Coding System. According to the expert labels, 58 out of 160 images contained Duchenne smiles.
Using the labels obtained from the Mechanical Turk, we inferred the image labels using either
GLAD or the Majority Vote heuristic, and then compared them to ground truth.
[Figure 4 appears here: example face images, Duchenne smiles on the left and Non-Duchenne
smiles on the right.]
Figure 4: Examples of Duchenne (left) and Non-Duchenne (right) smiles. The distinction lies in
the activation of the Orbicularis Oculi muscle around the eyes, and is difficult to discriminate even for
experts.
[Figure 5 appears here: accuracy of GLAD and Majority Vote as a function of the number of noisy
labels (left panel, "Accuracy under Noise", 0 to 5000) and the number of adversarial labels (right
panel, "Accuracy under Adversarialness", 0 to 800).]
Figure 5: Accuracy (percent correct) of inferred Duchenne/Non-Duchenne labels using either
GLAD or Majority Vote under (left) noisy labelers or (right) adversarial labelers. As the number of
noise/adversarial labels increases, the performance of labels inferred using Majority Vote decreases.
GLAD, in contrast, is robust to these conditions.
Results: Using just the raw labels obtained from the Mechanical Turk, the labels inferred using
GLAD matched the ground-truth labels on 78.12% of the images, whereas labels inferred using
Majority Vote were only 71.88% accurate. Hence, GLAD resulted in about a 6% performance gain.
Simulated Noisy and Adversarial Labelers: We also simulated noisy and adversarial labeler conditions. It is to be expected, for example, that in some cases labelers may just try to complete the task
in a minimum amount of time, disregarding accuracy. In other cases labelers may misunderstand the
instructions, or may be adversarial, thus producing labels that tend to be opposite to the true labels.
Robustness to such noisy and adversarial labelers is important, especially as the popularity of Webbased labeling tools increases, and the quality of labelers becomes more diverse. To investigate the
robustness of the proposed approaches we generated data from virtual "labelers" whose labels were
completely uninformative, i.e., uniformly random. We also added artificial "adversarial" labelers
whose labels tended to be the opposite of the true label for each image.
The number of noisy labels was varied from 0 to 5000 (in increments of 500), and the number of
adversarial labels was varied from 0 to 750 (in increments of 250). For each setting, label inference
accuracy was computed for both GLAD and the Majority Vote method. As shown in Figure 5, the
accuracy of GLAD-based label inference is much less affected from labeling noise than is Majority
Vote. When adversarial labels are introduced, GLAD automatically inferred that some labelers were
purposely giving the opposite label and automatically flipped their labels. The Majority Vote heuristic, in contrast, has no mechanism to recover from this condition, and the accuracy falls steeply.
7 Related Work
To our knowledge GLAD is the first model in the literature to simultaneously estimate the true label,
item difficulty, and coder expertise in an unsupervised and efficient manner.
Our work is related to the literature on standardized tests, particularly the Item Response Theory
(IRT) community (e.g., Rasch [10], Birnbaum [3]). The GLAD model we propose in this paper can
be seen as an unsupervised version of previous IRT models for the case in which the correct answers
(i.e., labels) are unknown.
Snow et al. [14] used a probabilistic model similar to Naive Bayes to show that by averaging multiple naive labelers (≤ 10) one can obtain labels as accurate as a few expert labelers. Two key
differences between their model and GLAD are that: (1) they assume a significant proportion of
images have been pre-labeled with ground truth values, and (2) all the images have equal difficulty.
As we show in this paper, modeling image difficulty may be very important in some cases. Sheng
et al. [12] examine how to identify which images of an image dataset to label again in order to reduce
uncertainty in the posterior probabilities of latent class labels.
Dawid and Skene [5] developed a method to handle polytomous latent class variables. In their case
the notion of "ability" is handled using full confusion matrices for each labeler. Smyth et al. [13]
used a similar approach to combine labels from multiple experts for items with homogeneous levels
of difficulty. Batchelder and Romney [2] infer test answers and test-takers? abilities simultaneously,
but do not estimate item difficulties and do not admit adversarial labelers.
Other approaches employ a Bayesian model of the labeling process that considers both variability
in labeler accuracies as well as item difficulty (e.g. [8, 7, 11]). However, inference in these models
is based on MCMC, which is likely to suffer from high computational expense and the need to wait
(arbitrarily long) for parameters to "burn in" during sampling.
8 Summary and Further Research
An important bottleneck facing the machine learning community is the need for very large datasets
with hand-labeled data. Datasets whose scale was unthinkable a few years ago are becoming commonplace today. The Internet makes it possible for people around the world to cooperate on the
labeling of these datasets. However, this makes it unrealistic for individual researchers to obtain the
ground truth of each label with absolute certainty. Algorithms are needed to automatically estimate
the reliability of ad-hoc anonymous labelers, the difficulty of the different items in the dataset, and
the probability of the true labels given the currently available data.
We proposed one such system, GLAD, based on standard probabilistic inference on a model of the
labeling process. The approach can handle the millions of parameters (one difficulty parameter per
image, and one expertise parameter per labeler) needed to process large datasets, at little computational cost. The model can be used seamlessly to combine labels from both human labelers and
automatic classifiers. Experiments show that GLAD can recover the true data labels more accurately
than the Majority Vote heuristic, and that it is highly robust to both noisy and adversarial labelers.
Active Sampling: One advantage of probabilistic models is that they lend themselves to implementing active methods (e.g., Infomax [4]) for selecting which images should be re-labeled next. We are
currently pursuing the development of control policies for optimally choosing whether to obtain
more labels for a particular item (so that the inferred Z label for that item becomes more certain)
versus obtaining more labels from a particular labeler (so that his/her accuracy α may be better
estimated, and all the images that he/she labeled can have their posterior probability estimates of Z
improved).
A software implementation of GLAD is available at http://mplab.ucsd.edu/~jake.
References
[1] Amazon. Mechanical turk. http://www.mturk.com.
[2] W. H. Batchelder and A. K. Romney. Test theory without an answer key. Psychometrika, 53(1):71–92,
1988.
[3] A. Birnbaum. Some latent trait models and their use in inferring an examinee's ability. Statistical theories
of mental test scores, 1968.
[4] N. Butko and J. Movellan. I-POMDP: An infomax model of eye movement. In Proceedings of the
International Conference on Development and Learning, 2008.
[5] A. Dawid and A. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm.
Applied Statistics, 28(1):20–28, 1979.
[6] I. Gauthier and M. Tarr. Becoming a "greeble" expert: Exploring mechanisms for face recognition. Vision
Research, 37(12), 1997.
[7] V. Johnson. On Bayesian analysis of multi-rater ordinal data: An application to automated essay grading.
Journal of the American Statistical Association, 91:42–51, 1996.
[8] G. Karabatsos and W. H. Batchelder. Markov chain estimation for test theory without an answer key.
Psychometrika, 68(3):373–389, 2003.
[9] Omron. OKAO vision brochure, July 2008.
[10] G. Rasch. Probabilistic Models for Some Intelligence and Attainment Tests. Denmark, 1960.
[11] S. Rogers, M. Girolami, and T. Polajnar. Semi-parametric analysis of multi-rater data. Statistics and
Computing, 2009.
[12] V. Sheng, F. Provost, and P. Ipeirotis. Get another label? Improving data quality and data mining using
multiple noisy labelers. In Knowledge Discovery and Data Mining, 2008.
[13] P. Smyth, U. Fayyad, M. Burl, P. Perona, and P. Baldi. Inferring ground truth from subjective labelling of
Venus images. In Advances in Neural Information Processing Systems, 1994.
[14] R. Snow, B. O'Connor, D. Jurafsky, and A. Y. Ng. Cheap and fast - but is it good? Evaluating non-expert
annotations for natural language tasks. In Proceedings of the 2008 Conference on Empirical Methods in
Natural Language Processing, 2008.
[15] S. Steinbach, V. Rabaud, and S. Belongie. Soylent grid: it's made of people! In International Conference
on Computer Vision, 2007.
[16] D. Turnbull, R. Liu, L. Barrington, and G. Lanckriet. A game-based approach for collecting semantic
annotations of music. In 8th International Conference on Music Information Retrieval (ISMIR), 2007.
[17] L. von Ahn and L. Dabbish. Labeling images with a computer game. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems, pages 319–326. ACM Press, New York, NY, USA,
2004.
[18] L. von Ahn, B. Maurer, C. McMillen, D. Abraham, and M. Blum. reCAPTCHA: Human-based character
recognition via Web security measures. Science, 321(5895):1465, 2008.
2,918 | 3,645 | Bayesian estimation of orientation preference maps
Sebastian Gerwinn
MPI for Biological Cybernetics
and University of Tübingen
Computational Vision and Neuroscience
Spemannstrasse 41, 72076 Tübingen
[email protected]
Jakob H. Macke
MPI for Biological Cybernetics
and University of Tübingen
Computational Vision and Neuroscience
Spemannstrasse 41, 72076 Tübingen
[email protected]
Leonard E. White
Duke Institute for Brain Sciences
Duke University
Durham, NC 27705, USA
[email protected]
Matthias Kaschube
Lewis-Sigler Institute for Integrative Genomics
and Department of Physics
Princeton University
Princeton, NJ 08544, USA
[email protected]
Matthias Bethge
MPI for Biological Cybernetics
and University of Tübingen
Computational Vision and Neuroscience Group
Spemannstrasse 41,
72076 Tübingen
Abstract
Imaging techniques such as optical imaging of intrinsic signals, 2-photon calcium
imaging and voltage sensitive dye imaging can be used to measure the functional
organization of visual cortex across different spatial and temporal scales. Here, we
present Bayesian methods based on Gaussian processes for extracting topographic
maps from functional imaging data. In particular, we focus on the estimation of
orientation preference maps (OPMs) from intrinsic signal imaging data. We model
the underlying map as a bivariate Gaussian process, with a prior covariance function that reflects known properties of OPMs, and a noise covariance adjusted to
the data. The posterior mean can be interpreted as an optimally smoothed estimate of the map, and can be used for model based interpolations of the map from
sparse measurements. By sampling from the posterior distribution, we can get error bars on statistical properties such as preferred orientations, pinwheel locations
or pinwheel counts. Finally, the use of an explicit probabilistic model facilitates
interpretation of parameters and quantitative model comparisons. We demonstrate
our model both on simulated data and on intrinsic signaling data from ferret visual
cortex.
1 Introduction
Neurons in the visual cortex of primates and many other mammals are organized according to their
tuning properties. The most prominent example of such a topographic organization is the layout of
neurons according to their preferred orientation, the orientation preference map (OPM) [1, 2, e.g.].
The statistical structure of OPMs [3, 4] and other topographic maps has been the focus of extensive
research, as have been the relationships between different maps [5]. Orientation preference maps
can be measured using optical imaging of intrinsic signals, voltage sensitive dye imaging, functional
magnetic resonance imaging [6], or 2-photon calcium imaging [2, 7]. For most of these methods
the signal-to-noise ratio is low, i.e. the stimulus specific part of the response is small compared to
non-specific background fluctuations. Therefore, statistical pre-processing of the data is required in
order to extract topographic maps from the raw experimental data. Here, we propose to use Gaussian
process methods [8] for estimating topographic maps from noisy imaging data. While we will focus
on the case of OPMs, the methods used will be applicable more generally.
The most common analysis method for intrinsic signaling data is to average the data within each
stimulus condition, and report differences between conditions. In the case of OPMs, this amounts
to estimating the preferred orientation at each pixel by vector averaging the different stimulus orientations weighted according to the evoked responses. In a second step, spatial bandpass filtering
is usually applied in order to obtain smoother maps. One disadvantage of this approach is that the
frequency characteristics of the bandpass filters are free parameters which are often set ad-hoc, and
may have a substantial impact on the statistics of the obtained map [9, 10]. In addition, the approach
ignores the effect of anisotropic and correlated noise [11, 10], which might result in artifacts.
Methods aimed at overcoming these limitations include analysis techniques based on principal component analysis, linear discriminant analysis, oriented PCA [12] (and extensions thereof [11]) as
well as variants of independent component analysis [9]. Finally, paradigms employing periodically
changing stimuli [13, 14] use differences in their temporal characteristics to separate signal and
noise components. These methods have in common that they do not make any parametric assumptions about the relationship between stimulus and response, between different stimuli, or about the
smoothness of the maps. Rather, they attempt to find ?good? maps by searching for filters which are
maximally discriminative between different stimulus conditions. In particular, they differ from the
classical approach in that they do not assume the noise to be isotropic and uncorrelated, but make it
hard to incorporate prior knowledge about the structure of maps, and can therefore be data-intensive.
Here, we attempt to combine the strengths of the classical and discriminative models by combining
prior knowledge about maps with flexible noise models into a common probabilistic model.
We encode prior knowledge about the statistical structure of OPMs in the covariance function of a
Gaussian Process prior over maps. By combining the prior with the data through an explicit generative model of the measurement process, we obtain a posterior distribution over maps. Compared
to previously proposed methods for analyzing multivariate imaging data, the GP approach has
a number of advantages:
• Optimal smoothing: The mean of the posterior distribution can be interpreted as an optimally smoothed map. The filtering is adaptive, i.e. it will adjust to the amount and quality
of the data observed at any particular location.
• Non-isotropic and correlated noise: In contrast to the standard smoothing approach, noise
with correlations across pixels as well as non-constant variances can be modelled.
• Interpolations: The model returns an estimate of the preferred orientation at any location,
not only at those at which measurements were obtained. This can be used, e.g., for artifact
removal, or for inferring maps from multi-electrode recordings.
• Explicit probabilistic model: The use of an explicit, generative model of the data facilitates
both the interpretation and setting of parameters, and quantitative model comparisons.
• Model based uncertainty estimates: The posterior variances at each pixel can be used to
compute point-wise error bars at each pixel location [9, 11]. By sampling from the posterior
(using the full posterior covariance), we can also get error bars on topological or global
properties of the map, such as pinwheel counts or locations.
Mathematically speaking, we are interested in inferring a vector field (the 2-dimensional vector
encoding preferred orientation) across the cortical surface from noisy measurements. Related problems have been studied in spatial statistics, e.g. in the estimation of wind-fields in geo-statistics [15],
where GP methods for this problem are often referred to as co-kriging methods [16, 17].
2 Methods
2.1 Encoding Model
We model an imaging experiment, where at each of N trials, the activity at n pixels is measured.
The response r_i(x) at trial i to a stimulus parameterised by v_i is given by

r_i(x) = Σ_{k=1}^{d} v_{ki} m_k(x) + ε_i(x) = v_i^T m(x) + ε_i(x),    (1)

i.e. the mean response at each pixel is modelled to be a linear function of some stimulus parameters
v_{ki}.
This can be written compactly as r_i = M v_i + ε_i or r_i = V_i^T m + ε_i. Here, r_i and ε_i are n-dimensional vectors, M is an n × d dimensional matrix, V_i = v_i ⊗ I_n, ⊗ is the Kronecker product,
and m = vec(M) is an nd-dimensional vector.
We refer to the coefficients m_k(x) as feature maps, as they indicate the selectivity of pixel x to
stimulus feature k. In the specific case of modelling an orientation preference map, we have d = 2
and v_i = (cos(2θ_i), sin(2θ_i))^T. Then, the argument of the complex number m_0(x) = m_1(x) +
i m_2(x) is the preferred orientation at location x, whereas the absolute value of m_0(x) is a measure
of its selectivity. While this approach assumes cosine-tuning curves at each measurement location,
it can be generalized to arbitrary tuning curves by including terms corresponding to cosines with
different frequencies.
We assume that the noise residuals ε are normally distributed with covariance Σ_ε, and a Gaussian
prior with covariance K_m for the feature map vector m. Then, the posterior distribution over m is
Gaussian with posterior covariance Σ_post and mean μ_post:

Σ_post = ( K_m^{-1} + Σ_i v_i v_i^T ⊗ Σ_ε^{-1} )^{-1}    (2)

μ_post = Σ_post ( Σ_i V_i Σ_ε^{-1} r_i ) = Σ_post ( I_d ⊗ Σ_ε^{-1} ) Σ_i ( v_i ⊗ r_i )    (3)
We note that the posterior covariance will have block structure provided that the prior covariance
K_m has block structure, i.e. if different feature maps are statistically independent a priori, and the
stimuli are un-correlated on average, i.e. Σ_i v_i v_i^T = D_v is diagonal. Hence, inference for different
maps "de-couples", and we do not have to store the full joint covariance over all d maps.
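For illustration, a direct (dense) implementation of equations (2) and (3) might look as follows. This is our own sketch, ignoring the low-rank and structured-noise machinery of Section 2.3 and therefore only practical for small maps; the variable names are assumptions.

```python
import numpy as np

def posterior_maps(R, V, K_m, Sigma_eps):
    """Posterior mean and covariance of the feature maps (equations 2 and 3).
    R: (N, n) responses; V: (N, d) stimulus parameters;
    K_m: (n*d, n*d) prior covariance; Sigma_eps: (n, n) noise covariance.
    The map vector m is ordered as d stacked blocks of n pixels each."""
    N, n = R.shape
    d = V.shape[1]
    Sigma_eps_inv = np.linalg.inv(Sigma_eps)
    Dv = V.T @ V                                       # sum_i v_i v_i^T
    precision = np.linalg.inv(K_m) + np.kron(Dv, Sigma_eps_inv)
    Sigma_post = np.linalg.inv(precision)              # equation (2)
    # sum_i v_i (kron) r_i, stacked as an (n*d,) vector
    vr = (V.T @ R).reshape(d * n)
    mu_post = Sigma_post @ (np.kron(np.eye(d), Sigma_eps_inv) @ vr)  # equation (3)
    return mu_post.reshape(d, n), Sigma_post
```

When K_m is block-diagonal and D_v is diagonal, the precision matrix above is block-diagonal as well, so the d map components can be computed independently, which is exactly the decoupling noted in the text.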
2.2 Choosing a prior
We need to specify the covariance function K(m(x), m(x0 )) of the prior distribution over maps.
As cortical maps, and in particular orientation preference maps, have been studied extensively in
the past [5], we actually have prior knowledge (rather than just prior assumptions) to guide the
choice of a prior. It is known that orientation preference maps are smooth [2] and that they have a
semi-periodic structure of regularly spaced columns. Hence, filtering white noise with appropriately
chosen filters [18] yields maps which visually look like measured OPMs (see Fig. 1). While it is
known that real OPMs differ from Gaussian random fields in their higher order statistics [3], use
of a Gaussian prior can be motivated by the maximum entropy principle: We assume a prior with
minimal higher-order correlations, with the goal of inferring them from the experimental data [3].
For simplicity, we take the prior to be isotropic, i.e. not to favour any direction over others. (For real
maps, there is a slight anisotropy [19]).
We assume that each prior sample is generated by convolving a two-dimensional Gaussian white noise image with a Difference-of-Gaussians filter

    f(x) = \sum_{k=1}^{2} \frac{\alpha_k}{2\pi\sigma_k^2} \exp\left( -\frac{x^2}{2\sigma_k^2} \right), \quad \alpha_2 = -\alpha_1/2, \quad \sigma_2 = 2\sigma_1.

This will result in a prior which is uncorrelated in the different map components, i.e.
Cov(m_1(x), m_2(x')) = 0, and a stationary covariance function given by

    K_c(\tau) = K_c(\|x - x'\|) = \mathrm{Cov}(m_1(x), m_1(x')) = \sum_{k,l=1}^{2} \frac{\alpha_k \alpha_l}{2\pi(\sigma_k^2 + \sigma_l^2)} \exp\left( -\frac{\tau^2}{2(\sigma_k^2 + \sigma_l^2)} \right).    (4)
Then, the prior covariance matrix K_m can be written as K_m = I_2 ⊗ K_c. This prior has two hyper-parameters, namely the absolute magnitude α_1 and the kernel width σ_1. In principle, optimization of the marginal likelihood can be used to set hyper-parameters. In practice, it turned out to be computationally more efficient to select them by matching the radial component of the empirically observed auto-correlation function of the map [16], see Fig. 1 B).
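A short sketch of drawing one prior sample by filtering white noise with this Difference-of-Gaussians kernel; α_2 = −α_1/2 and σ_2 = 2σ_1 follow our reading of the (garbled) formula above, and the numeric settings are placeholders:

```python
import numpy as np

def dog_kernel(size, alpha1=1.0, sigma1=4.0):
    """Difference-of-Gaussians filter on a size x size grid."""
    x = np.arange(size) - size // 2
    r2 = x[:, None] ** 2 + x[None, :] ** 2
    g = lambda a, s: a / (2 * np.pi * s ** 2) * np.exp(-r2 / (2 * s ** 2))
    return g(alpha1, sigma1) + g(-alpha1 / 2, 2 * sigma1)

size = 100
F = np.fft.fft2(np.fft.ifftshift(dog_kernel(size)))   # filter in Fourier space
rng = np.random.default_rng(1)
m1, m2 = (np.real(np.fft.ifft2(F * np.fft.fft2(rng.standard_normal((size, size)))))
          for _ in range(2))                          # two independent maps
angle_map = np.mod(np.angle(m1 + 1j * m2), 2 * np.pi) / 2  # orientation in [0, pi)
```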
Figure 1: Prior covariance: A) Covariance function derived from the Difference-of-Gaussians. B) Radial component of prior covariance function and of covariance of raw data. C) Angle-map of one sample from the prior, with σ_1 = 4. Each color corresponds to an angle in [0, 180°].
2.3 Approximate inference
The formulas for the posterior mean and covariance involve covariance matrices over all pixels. On a map of size n_x × n_y, there are n = n_x n_y pixels, so we would have to store and compute with matrices of size n × n, which would limit this approach to maps of relatively small size. A number of approximation techniques have been proposed to make large scale inference feasible in models with Gaussian process priors (see [8] for an overview). Here, we utilize the fact that the spectrum of eigenvalues drops off quickly for many kernel functions [20, 21], including the Difference-of-Gaussians used here. This means that the covariance matrix K_c can be approximated well by a low-rank matrix product K_c ≈ GG^⊤, where G is of size n × q, q ≪ n (see [17] for a related idea). To find G, we perform an incomplete Cholesky factorization on the matrix K_c. This can be done without having to store K_c in memory explicitly.
In this case, the posterior covariance can be calculated without ever having to store (or even invert) the full prior covariance:

    \Sigma_{post} = I_d \otimes \left[ K_c - \beta^{-1} K_c \Sigma_\epsilon^{-1} K_c + \beta^{-1} K_c \Sigma_\epsilon^{-1} G \left( \beta I_q + G^\top \Sigma_\epsilon^{-1} G \right)^{-1} G^\top \Sigma_\epsilon^{-1} K_c \right],    (5)
where β = 2/N. We restrict the form of the noise covariance either to be diagonal (i.e. assume uncorrelated noise), or more generally to be of the form Σ_ε = D + G_ε R G_ε^⊤. Here, G_ε is of size n × q_ε, q_ε ≪ n, and D is a diagonal matrix. In other words, the functional form of the covariance matrix is assumed to be the same as in factor analysis models [22, 23]: the low-rank term G_ε models correlation across pixels, whereas the diagonal matrix D models independent noise. We assume this model to regularize the noise covariance, ensuring that the noise covariance has full rank even when the number of data points is less than the number of pixels [22]. The matrices G_ε and D can be fit using expectation maximization without ever having to calculate the full noise covariance across all pixels. We initialize the noise covariance by calculating the noise variances for each stimulus condition, and averaging this initial estimate across stimulus conditions. We iterate between calculating the posterior mean (using the current estimate of Σ_ε), and obtaining a point estimate of the most likely noise covariance given the mean [24]. In all cases, a very small number of iterations lead to convergence.
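The following NumPy sketch evaluates our reconstruction of the per-map posterior covariance in (5), assuming a diagonal noise covariance; for clarity it forms K_c = GG^⊤ densely, whereas a real implementation would keep everything in factored form:

```python
import numpy as np

def posterior_cov_component(G, noise_var, beta):
    """Per-map posterior covariance via the low-rank form of Eq. (5).

    G: (n, q) incomplete-Cholesky factor, Kc ~= G @ G.T;
    noise_var: (n,) diagonal of Sigma_eps; beta = 2 / N.
    """
    q = G.shape[1]
    Si_G = G / noise_var[:, None]                  # Sigma^{-1} G
    A = beta * np.eye(q) + G.T @ Si_G              # beta I_q + G^T Sigma^{-1} G
    Kc = G @ G.T                                   # dense only for this sketch
    Si_Kc = Kc / noise_var[:, None]                # Sigma^{-1} Kc
    corr = Si_G @ np.linalg.solve(A, Si_G.T @ Kc)  # Sigma^{-1} G A^{-1} G^T Sigma^{-1} Kc
    return Kc - (Kc @ Si_Kc) / beta + (Kc @ corr) / beta
```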
Figure 2: Illustration on synthetic data: A) Ground truth map used to generate the data. B) Raw
map, estimated using 10 trials of each direction. C) GP-reconstruction of the map. D) Posterior
variance of GP, visualized as size of 95% confidence intervals on preferred orientations. Superimposed are the zero-crossings of the GP map. E) Reconstruction by smoothing with fixed Gaussian
filter, filter-width optimized by maximizing correlation with ground truth. F) Reconstruction performance as a function of stimulus presentations used, for GP with noise-correlations, GP without
noise-correlations, and simple smoothing.
3 Results
3.1 Illustration on synthetic data
To illustrate the ability of our method to recover maps from noisy recordings, we generated a synthetic map (a sample from the prior distribution, the 'true map', see Fig. 2 A), and simulated responses to each of 8 different oriented gratings by sampling from the likelihood (1). The parameters were chosen to be roughly comparable with the experimental data (see below). We reconstructed the map using our GP method (low-rank approximation of rank q = 1600, noise correlations of rank q_ε = 5) on data sets of different sizes (N = 8 × (2, 5, 10, 20, 30, 40, 80)). Figure 2 C) shows the angular components of the posterior mean of the GP, our reconstruction of the map. We use the posterior variances to also calculate a pointwise 95% confidence interval on the preferred orientation at each location, shown in Fig. 2 D). As expected, the confidence intervals are biggest near pinwheels, where the orientation selectivity of pixels is low, and therefore the preferred orientation is not well defined.

To evaluate the performance of the model, we quantified its reconstruction performance by computing the correlation coefficient of the posterior mean and the true map, each represented as a long vector with 2n elements. We compared the GP map against a map obtained by filtering the raw map (Fig. 2 B) with a Gaussian kernel (Fig. 2 E), where the kernel width was chosen by maximizing the similarity with the true map. This yields an optimistic estimate of the performance of the smoothed map, as setting the optimal filter size requires access to the ground truth. We can see that the GP map converges to the true map more quickly than the smoothed map (Fig. 2 F). For example, using 16 stimulus presentations, the smoothed map has a correlation with the ground truth of 0.45, whereas the correlation of the GP map is 0.77. For the simple smoothing method, about 120 presentations would be required to achieve this performance level. When we ignore noise-correlations (i.e. assume Σ_ε to be diagonal), GP still outperforms simple smoothing, although by a much smaller amount (Fig. 2 F).
3.2 Application to data from ferret visual cortex
To see how well the method works on real data, we used it to analyze data from an intrinsic signal optical imaging experiment. The central portion of the visuotopic map in visual areas V1 and V2 of an anesthetized ferret was imaged with red light while square wave gratings (spatial frequency 0.1 cycles/degree) were presented on a screen. Gratings were presented in 4 different orientations (0°, 45°, 90° and 135°), and moving along one of the two directions orthogonal to its orientation (temporal frequency 3.2 Hz). Each of the 8 possible directions was presented 100 times in a pseudorandom order for a duration of 5 seconds each, with an interstimulus interval of 8 seconds. Intrinsic signals were collected using a digital camera with pixel-size 30 µm. The response r_i was taken to be the average activity in a 5 second window relative to baseline. Each response vector r_i was normalized to have mean 0 and standard deviation 1; no spatial filtering was performed. For all analyses in this paper, we concentrated on a region of size 100 by 100 pixels. The large data set with a total of 800 stimulus presentations made it possible to quantify the performance of our model by comparing it to unsmoothed maps. Figure 3 A) shows the map estimated by vector averaging all 800 presentations, without any smoothing. However, the GP method itself is designed to also work robustly on smaller data sets, and we are primarily interested in its performance in estimating maps using only few stimulus presentations.
3.3 Bayesian estimation of orientation preference maps
For real measured data, we do not know ground truth to estimate the performance of our model. Therefore, we used 5% of the data for estimating the map, and compared this map with the (unsmoothed) map estimated on the other 95% of data, which served as our proxy for ground truth. As above, we compared the GP map against one obtained by smoothing with a Gaussian kernel, where the kernel width of the smoothing kernel was chosen by maximizing its correlation with (our proxy for) the ground truth. The GP map outperformed the smoothing map consistently: for 18 out of 20 different splits into training and test data, the correlation of the GP map was higher (p = 2 × 10^{-4}, average correlations c = 0.84 ± 0.01 for GP, c = 0.79 ± 0.015 for smoothing). The same held true when we smoothed maps with a Difference-of-Gaussians filter rather than a Gaussian (19 out of 20, average correlation c = 0.71 ± 0.08).
Figure 3: OPMs in ferret V1. A) Raw map, estimated from 720 out of 800 stimuli. B) Smoothed map estimated from the other 80 stimuli, filter width obtained by maximizing the correlation to map A. C) GP reconstruction of map. The GP has a correlation with the map shown in A) of 0.87; the performance of the smoothed map is 0.74.
One of the strengths of the GP model is that the filter parameters are inferred by the model, and do not have to be set ad hoc. The analysis above shows that, even when we optimized the filter width for smoothing (which would not be possible in a real experiment), the GP still outperforms the approach of smoothing with a Gaussian window. In addition, it is important to keep in mind that using the posterior mean as a clean estimate of the map is only one feature of our model. In the following, we will use the GP model to optimally interpolate a sparsely sampled map, and use the posterior distribution to obtain error bars over the pinwheel counts and locations of the map.
3.4 Interpolating the map
The posterior mean μ(x) of the model can be evaluated for any x. This makes it possible to extend the map to locations at which no data was recorded. We envisage this to be useful in two kinds of applications: First, if the measurement is corrupted in some pixels (e.g. because of a vessel artifact), we attempt to recover the map in this region by model-based interpolation. We explored this scenario by cutting out a region of the map described above (inside of ellipse in Fig. 4 A), and using the GP to fill in the map. The correlation between the true map and the GP map in the filled-in region was 0.77. As before, we compared to smoothing with a Gaussian filter, for which the correlation was 0.59.

In addition, multi-electrode arrays [25] can be used to measure neural activity at multiple locations simultaneously. Provided that the electrode spacing is small enough, it should be possible to reconstruct at least a rough estimate of the map from such discrete measurements. We simulated a multi-electrode recording by only using the measured activity at 49 pixel locations which were chosen to be spaced 400 µm apart. Then, we attempted to infer the full map using only these 49 measurements, and our prior knowledge about OPMs encoded in the prior covariance. The reconstruction is shown in Fig. 4 C. As before, the GP map outperforms the smoothing approach (c = 0.81 for the GP vs. c = 0.78 for smoothing). Discriminative analysis methods for imaging data cannot be used for such interpolations.
Figure 4: Interpolations: A) Filling in: The region inside the white ellipse was reconstructed by the GP using only the data outside the ellipse. B) Map estimated from all 800 stimulus presentations, with 'electrode locations' superimposed. C) GP reconstruction of the map, estimated only from the 49 pixels colored in gray in B). D) Smoothing reconstruction of the map.
3.5 Posterior uncertainty
As both our prior and the likelihood are Gaussian, the posterior distribution is also Gaussian, with mean μ_post and covariance Σ_post. By sampling from this posterior distribution, we can get error bars not only on the preferred orientations in individual pixels (as we did for Fig. 2 D), but also for global properties of the map. For example, the location [10] and total number [3, 4] of pinwheels (singularities at which both map components vanish) has received considerable attention in the past. Figure 5 A)-C) shows three samples from the posterior distribution, which differ both in their pinwheel locations and counts (A: 39, B: 28, C: 31). To evaluate our certainty in the pinwheel locations, we calculate a two-dimensional histogram of pinwheel locations across samples (Fig. 5 D and E). One can see that the histogram gets more peaked with increasing data-set size. We illustrate this effect by calculating the entropy of the (slightly smoothed) histograms, which seems to keep decreasing for larger data-set sizes, indicating that we are more confident in the exact locations of the pinwheels.
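As an illustration of how such pinwheel statistics can be computed from posterior samples, here is a simple winding-number pinwheel counter in NumPy (our own sketch; the original analysis may have used a different detector):

```python
import numpy as np

def pinwheel_count(m1, m2):
    """Count pinwheels as plaquettes where the angle of m1 + i*m2 winds by 2*pi."""
    theta = np.arctan2(m2, m1)
    wrap = lambda d: (d + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    d1 = wrap(theta[:-1, 1:] - theta[:-1, :-1])   # top edge, left -> right
    d2 = wrap(theta[1:, 1:] - theta[:-1, 1:])     # right edge, top -> bottom
    d3 = wrap(theta[1:, :-1] - theta[1:, 1:])     # bottom edge, right -> left
    d4 = wrap(theta[:-1, :-1] - theta[1:, :-1])   # left edge, bottom -> top
    winding = np.round((d1 + d2 + d3 + d4) / (2 * np.pi))
    return int(np.abs(winding).sum())
```

Applying this to many draws m ~ N(μ_post, Σ_post) yields a distribution over pinwheel counts, and accumulating the nonzero-winding plaquettes across draws gives location histograms of the kind shown in Fig. 5 D, E.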
4 Discussion
We introduced Gaussian process methods for estimating orientation preference maps from noisy
imaging data. By integrating prior knowledge about the spatial structure of OPMs with a flexible
noise model, we aimed to combine the strengths of classical analysis methods with discriminative
approaches. While we focused on the analysis of intrinsic signal imaging data, our methods are
also expected to be applicable to other kinds of imaging data. For example, functional magnetic resonance imaging is widely used as a non-invasive means of measuring brain activity, and has been reported to be able to estimate orientation preference maps in human subjects [6].

Figure 5: Posterior uncertainty: A) B) C) Three samples from the posterior distribution, using 80 stimuli (zoomed in for better visibility). D) E) Density-plot of pinwheel locations when map is estimated with 40 and 800 stimuli, respectively. F) Entropy of pinwheel-density as a measure of confidence in the pinwheel locations.
In contrast to previously used analysis methods for intrinsic signal imaging, ours is based on a
generative model of the data. This can be useful for quantitative model comparisons, and for investigating the coding properties of the map. For example, it can be used to investigate the relative
impact of different model-properties on decoding performance. We assumed a GP prior over maps,
i.e. assumed the higher-order correlations of the maps to be minimal. However, it is known that the
statistical structure of OPMs shows systematic deviations from Gaussian random fields [3, 4], which
implies that there could be room for improvement in the definition of the prior. For example, using
priors which are sparse [26] (in an appropriately chosen basis) could lead to superior reconstruction
ability, and facilitate reconstructions which go beyond the auto-correlation length of the GP-prior
[27]. Finally, one could use generalized linear models rather than a Gaussian noise model [26, 28].
However, it is unclear how general noise correlation structures can be integrated in these models in a
flexible manner, and whether the additional complexity of using a more involved noise model would
lead to a substantial increase in performance.
Acknowledgements
This work is supported by the German Ministry of Education, Science, Research and Technology through the Bernstein award to MB (BMBF; FKZ: 01GQ0601), the Werner-Reichardt Centre for Integrative Neuroscience Tübingen, and the Max Planck Society.
References

[1] G G Blasdel and G Salama. Voltage-sensitive dyes reveal a modular organization in monkey striate cortex. Nature, 321(6070):579-85, Jan 1986.
[2] Kenichi Ohki, Sooyoung Chung, Yeang H Ch'ng, Prakash Kara, and R Clay Reid. Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex. Nature, 433(7026):597-603, 2005.
[3] F Wolf and T Geisel. Spontaneous pinwheel annihilation during visual development. Nature, 395(6697):73-8, 1998.
[4] M. Kaschube, M. Schnabel, and F. Wolf. Self-organization and the selection of pinwheel density in visual cortical development. New Journal of Physics, 10(1):015009, 2008.
[5] Naoum P Issa, Ari Rosenberg, and T Robert Husson. Models and measurements of functional maps in V1. J Neurophysiol, 99(6):2745-2754, 2008.
[6] Essa Yacoub, Noam Harel, and Kâmil Uğurbil. High-field fMRI unveils orientation columns in humans. P Natl Acad Sci USA, 105(30):10607-12, Jul 2008.
[7] Ye Li, Stephen D Van Hooser, Mark Mazurek, Leonard E White, and David Fitzpatrick. Experience with moving visual stimuli drives the early development of cortical direction selectivity. Nature, 456(7224):952-6, Dec 2008.
[8] C.E. Rasmussen and C.K.I. Williams. Gaussian Processes for Machine Learning. Springer, 2006.
[9] M Stetter, I Schiessl, T Otto, F Sengpiel, M Hübener, T Bonhoeffer, and K Obermayer. Principal component analysis and blind separation of sources for optical imaging of intrinsic signals. Neuroimage, 11(5 Pt 1):482-90, May 2000.
[10] Jonathan R Polimeni, Domhnull Granquist-Fraser, Richard J Wood, and Eric L Schwartz. Physical limits to spatial resolution of optical recording: clarifying the spatial structure of cortical hypercolumns. Proc Natl Acad Sci U S A, 102(11):4158-4163, 2005 Mar 15.
[11] T. Yokoo, BW Knight, and L. Sirovich. An optimization approach to signal extraction from noisy multivariate data. Neuroimage, 14(6):1309-1326, 2001.
[12] R Everson, B W Knight, and L Sirovich. Separating spatially distributed response to stimulation from background. I. Optical imaging. Biological Cybernetics, 77(6):407-17, Dec 1997.
[13] Valery A Kalatsky and Michael P Stryker. New paradigm for optical imaging: temporally encoded maps of intrinsic signal. Neuron, 38(4):529-545, 2003 May 22.
[14] A Sornborger, C Sailstad, E Kaplan, and L Sirovich. Spatiotemporal analysis of optical imaging data. Neuroimage, 18(3):610-21, Mar 2003.
[15] D. Cornford, L. Csató, D.J. Evans, and M. Opper. Bayesian analysis of the scatterometer wind retrieval inverse problem: some new approaches. Journal of the Royal Statistical Society. Series B, Statistical Methodology, pages 609-652, 2004.
[16] N. Cressie. Statistics for spatial data. Terra Nova, 4(5):613-617, 1992.
[17] N. Cressie and G. Johannesson. Fixed rank kriging for very large spatial data sets. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):209-226, 2008.
[18] A S Rojer and E L Schwartz. Cat and monkey cortical columnar patterns modeled by bandpass-filtered 2D white noise. Biol Cybern, 62(5):381-391, 1990.
[19] D M Coppola, L E White, D Fitzpatrick, and D Purves. Unequal representation of cardinal and oblique contours in ferret visual cortex. P Natl Acad Sci USA, 95(5):2621-3, Mar 1998.
[20] Francis R Bach and Michael I Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1-48, 2002.
[21] C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In International Conference on Machine Learning, volume 17, 2000.
[22] Donald Robertson and James Symons. Maximum likelihood factor analysis with rank-deficient sample covariance matrices. J. Multivar. Anal., 98(4):813-828, 2007.
[23] Byron M Yu, John P Cunningham, Gopal Santhanam, Stephen I Ryu, Krishna V Shenoy, and Maneesh Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J Neurophysiol, 102(1):614-635, 2009 Jul.
[24] K. Kersting, C. Plagemann, P. Pfaff, and W. Burgard. Most likely heteroscedastic Gaussian process regression. In Proceedings of the 24th International Conference on Machine Learning, pages 393-400. ACM New York, NY, USA, 2007.
[25] Ian Nauhaus, Andrea Benucci, Matteo Carandini, and Dario L Ringach. Neuronal selectivity and local map structure in visual cortex. Neuron, 57(5):673-679, 2008 Mar 13.
[26] H. Nickisch and M. Seeger. Convex variational Bayesian inference for large scale generalized linear models. In International Conference on Machine Learning, 2009.
[27] F. Wolf, K. Pawelzik, T. Geisel, DS Kim, and T. Bonhoeffer. Optimal smoothness of orientation preference maps. Network: Computation in Neural Systems, pages 97-101, 1994.
[28] K. Rahnama Rad and L. Paninski. Efficient estimation of two-dimensional firing rate surfaces via Gaussian process methods. Network: Computation in Neural Systems, under review, 2009.
On the Convergence of the Concave-Convex Procedure
Bharath K. Sriperumbudur
Department of Electrical and Computer Engineering
University of California, San Diego
La Jolla, CA 92093
[email protected]
Gert R. G. Lanckriet
Department of Electrical and Computer Engineering
University of California, San Diego
La Jolla, CA 92093
[email protected]
Abstract
The concave-convex procedure (CCCP) is a majorization-minimization algorithm
that solves d.c. (difference of convex functions) programs as a sequence of convex
programs. In machine learning, CCCP is extensively used in many learning algorithms like sparse support vector machines (SVMs), transductive SVMs, sparse
principal component analysis, etc. Though widely used in many applications, the
convergence behavior of CCCP has not gotten a lot of specific attention. Yuille and
Rangarajan analyzed its convergence in their original paper, however, we believe
the analysis is not complete. Although the convergence of CCCP can be derived
from the convergence of the d.c. algorithm (DCA), its proof is more specialized
and technical than actually required for the specific case of CCCP. In this paper,
we follow a different reasoning and show how Zangwill's global convergence theory of iterative algorithms provides a natural framework to prove the convergence
of CCCP, allowing a more elegant and simple proof. This underlines Zangwill's
theory as a powerful and general framework to deal with the convergence issues of
iterative algorithms, after also being used to prove the convergence of algorithms
like expectation-maximization, generalized alternating minimization, etc. In this
paper, we provide a rigorous analysis of the convergence of CCCP by addressing
these questions: (i) When does CCCP find a local minimum or a stationary point
of the d.c. program under consideration? (ii) When does the sequence generated by CCCP converge? We also present an open problem on the issue of local
convergence of CCCP.
1 Introduction
The concave-convex procedure (CCCP) [30] is a majorization-minimization algorithm [15] that is
popularly used to solve d.c. (difference of convex functions) programs of the form,
    \min_x \; f(x) \quad \text{s.t.} \quad c_i(x) \le 0, \; i \in [m]; \quad d_j(x) = 0, \; j \in [p],    (1)

where f(x) = u(x) - v(x) with u, v and c_i being real-valued convex functions, d_j being an affine function, all defined on R^n. Here, [m] := {1, ..., m}. Suppose v is differentiable.
The CCCP algorithm is an iterative procedure that solves the following sequence of convex programs,

    x^{(l+1)} \in \arg\min_x \; u(x) - x^\top \nabla v(x^{(l)}) \quad \text{s.t.} \quad c_i(x) \le 0, \; i \in [m]; \quad d_j(x) = 0, \; j \in [p].    (2)
As can be seen from (2), the idea of CCCP is to linearize the concave part of f, which is -v, around a solution obtained in the current iterate, so that u(x) - x^⊤∇v(x^(l)) is convex in x, and therefore the non-convex program in (1) is solved as a sequence of convex programs as shown in (2). The original formulation of CCCP by Yuille and Rangarajan [30] deals with unconstrained and linearly constrained problems. However, the same formulation can be extended to handle any constraints (both convex and non-convex). CCCP has been extensively used in solving many non-convex programs (of the form in (1)) that appear in machine learning. For example, [3] proposed a successive linear approximation (SLA) algorithm for feature selection in support vector machines, which can be seen as a special case of CCCP. Other applications where CCCP has been used include sparse principal component analysis [27], transductive SVMs [11, 5, 28], feature selection in SVMs [22], structured estimation [10], missing data problems in Gaussian processes and SVMs [26], etc.
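The following self-contained Python sketch runs CCCP on a toy one-dimensional d.c. program of our own construction (not an example from the paper): u(x) = x^2 and v(x) = sqrt(1 + (x - 2)^2), both convex with v differentiable, so each convex subproblem u(x) - x v'(x^(l)) has the closed-form minimizer v'(x^(l))/2:

```python
import numpy as np

# Toy CCCP for f(x) = u(x) - v(x), with u(x) = x**2 and v(x) = sqrt(1 + (x-2)**2).
u = lambda x: x ** 2
v = lambda x: np.sqrt(1 + (x - 2.0) ** 2)
dv = lambda x: (x - 2.0) / np.sqrt(1 + (x - 2.0) ** 2)   # v'(x)

x = 5.0                                  # arbitrary starting point
for l in range(100):
    x_new = dv(x) / 2.0                  # argmin_x of u(x) - x * dv(x_l)
    if abs(x_new - x) < 1e-12:
        break
    x = x_new

# At convergence 2*x = v'(x), i.e. grad u(x) = grad v(x): a stationary point of f.
print(x, u(x) - v(x))
```

Each iterate decreases u(x) - v(x) monotonically, which is exactly the descent property analyzed in the remainder of the paper.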
The algorithm in (2) starts at some random point x^(0) ∈ {x : c_i(x) ≤ 0, i ∈ [m]; d_j(x) = 0, j ∈ [p]}, solves the program in (2) and therefore generates a sequence {x^(l)}_{l=0}^∞. The goal of this paper is to study the convergence of {x^(l)}_{l=0}^∞: (i) When does CCCP find a local minimum or a stationary point^1 of the program in (1)? (ii) Does {x^(l)}_{l=0}^∞ converge? If so, to what and under what conditions? From a practical perspective, these questions are highly relevant, given that CCCP is widely applied in machine learning.
In their original CCCP paper, Yuille and Rangarajan [30, Theorem 2] analyzed its convergence, but we believe the analysis is not complete. They showed that {x^(l)}_{l=0}^∞ satisfies the monotonic descent property, i.e., f(x^(l+1)) ≤ f(x^(l)), and argued that this descent property ensures the convergence of {x^(l)}_{l=0}^∞ to a minimum or saddle point of the program in (1). However, a rigorous proof is not provided to ensure that their claim holds for all u, v, {c_i} and {d_j}. Answering the previous questions, however, requires a rigorous proof of the convergence of CCCP that explicitly mentions the conditions under which it can happen.
In the d.c. programming literature, Pham Dinh and Hoai An [8] proposed a primal-dual subdifferential method called DCA (d.c. algorithm) for solving a general d.c. program of the form min{u(x) - v(x) : x ∈ R^n}, where it is assumed that u and v are proper lower semi-continuous convex functions, which form a larger class of functions than the class of differentiable functions. It can be shown that if v is differentiable, then DCA exactly reduces to CCCP. Unlike in CCCP, DCA involves constructing two sets of convex programs (called the primal and dual programs) and solving them iteratively in succession, such that the solution of the primal is the initialization to the dual and vice-versa. See [8] for details. [8, Theorem 3] proves the convergence of DCA for general d.c. programs. The proof is specialized and technical. It fundamentally relies on d.c. duality; however, outlining the proof in any more detail requires a substantial discussion which would lead us too far here. In this work, we follow a fundamentally different approach and show that the convergence of CCCP, specifically, can be analyzed in a more simple and elegant way, by relying on Zangwill's global convergence theory of iterative algorithms. We make some simple assumptions on the functions involved in (1), which are not too restrictive and therefore applicable to many practical situations. The tools employed in our proof are of completely different flavor than the ones used in the proof of DCA convergence: DCA convergence analysis exploits d.c. duality, while we use the notion of point-to-set maps as introduced by Zangwill. Zangwill's theory is a powerful and general framework to deal with the convergence issues of iterative algorithms. It has also been used to prove the convergence of the expectation-maximization (EM) algorithm [29], generalized alternating minimization algorithms [12], multiplicative updates in non-negative quadratic programming [25], etc., and is therefore a natural framework to analyze the convergence of CCCP in a more direct way.
The paper is organized as follows. In Section 2, we provide a brief introduction to majorization-minimization (MM) algorithms and show that CCCP is obtained as a particular form of majorization-minimization. The goal of this section is also to establish the literature on MM algorithms and show where CCCP fits in it. In Section 3, we present Zangwill's theory of global convergence, which is a general framework to analyze the convergence behavior of iterative algorithms. This theory is used to address the global convergence of CCCP in Section 4. This involves analyzing the fixed points of the CCCP algorithm in (2) and then showing that the fixed points are the stationary points of the program in (1). The results in Section 4 are extended in Section 4.1 to analyze the convergence of the constrained concave-convex procedure that was proposed by [26] to deal with d.c. programs with d.c. constraints. We briefly discuss the local convergence issues of CCCP in Section 5 and conclude the section with an open question.

^1 x* is said to be a stationary point of a constrained optimization problem if it satisfies the corresponding Karush-Kuhn-Tucker (KKT) conditions. Assuming constraint qualification, KKT conditions are necessary for the local optimality of x*. See [2, Section 11.3] for details.
2 Majorization-minimization
MM algorithms can be thought of as a generalization of the well-known EM algorithm [7]. The
general principle behind MM algorithms was first enunciated by the numerical analysts, Ortega
and Rheinboldt [23] in the context of line search methods. The MM principle appears in many
places in statistical computation, including multidimensional scaling [6], robust regression [14],
correspondence analysis [13], variable selection [16], sparse signal recovery [4], etc. We refer the
interested reader to a tutorial on MM algorithms [15] and the references therein.
The general idea of MM algorithms is as follows. Suppose we want to minimize f over Ω ⊆ R^n. The idea is to construct a majorization function g over Ω × Ω such that

    f(x) \le g(x, y), \; \forall x, y \in \Omega; \qquad f(x) = g(x, x), \; \forall x \in \Omega.    (3)
Thus, g as a function of x is an upper bound on f and coincides with f at y. The majorization algorithm corresponding with this majorization function g updates x at iteration l by

    x^{(l+1)} \in \arg\min_{x \in \Omega} g(x, x^{(l)}),    (4)
unless we already have x^(l) ∈ arg min_{x∈Ω} g(x, x^(l)), in which case the algorithm stops. The majorization function g is usually constructed by using Jensen's inequality for convex functions, the first-order Taylor approximation, or the quadratic upper bound principle [1]. However, any other method can also be used to construct g as long as it satisfies (3). It is easy to show that the above iterative scheme decreases the value of f monotonically in each iteration, i.e.,

    f(x^{(l+1)}) \le g(x^{(l+1)}, x^{(l)}) \le g(x^{(l)}, x^{(l)}) = f(x^{(l)}),    (5)

where the first inequality and the last equality follow from (3), while the sandwiched inequality follows from (4).
Note that MM algorithms can be applied equally well to the maximization of f by simply reversing the inequality sign in (3) and changing the 'min' to 'max' in (4). In this case, the word MM refers to minorization-maximization, where the function g is called the minorization function. To put things in perspective, the EM algorithm can be obtained by constructing the minorization function g using Jensen's inequality for concave functions. The construction of such a g is referred to as the E-step, while (4) with the 'min' replaced by 'max' is referred to as the M-step. The algorithm in (3) and (4) is also referred to as the auxiliary function method, e.g., for non-negative matrix factorization [18]. [17] studied this algorithm under the name optimization transfer, while [19] referred to it as the SM algorithm, where 'S' stands for the surrogate step (same as the majorization/minorization step) and 'M' stands for the minimization/maximization step, depending on the problem at hand. g is called the surrogate function. In the following example, we show that CCCP is an MM algorithm for a particular choice of the majorization function g.
Example 1 (Linear Majorization). Let us consider the optimization problem min_{x∈Ω} f(x), where f = u - v, with u and v both real-valued, convex, defined on R^n, and v differentiable. Since v is convex, we have v(x) ≥ v(y) + (x - y)^⊤∇v(y), ∀ x, y ∈ Ω. Therefore,

    f(x) \le u(x) - v(y) - (x - y)^\top \nabla v(y) =: g(x, y).    (6)

It is easy to verify that g is a majorization function of f. Therefore, we have

    x^{(l+1)} \in \arg\min_{x \in \Omega} g(x, x^{(l)}) = \arg\min_{x \in \Omega} u(x) - x^\top \nabla v(x^{(l)}).    (7)
If Ω is a convex set, then the above procedure reduces to CCCP, which solves a sequence of convex programs. As mentioned before, CCCP is proposed for unconstrained and linearly constrained non-convex programs. This example shows that the same idea can be extended to any constraint set. Suppose u and v are strictly convex; then a strict descent can be achieved in (5) unless x^(l+1) = x^(l), i.e., if x^(l+1) ≠ x^(l), then

    f(x^{(l+1)}) < g(x^{(l+1)}, x^{(l)}) < g(x^{(l)}, x^{(l)}) = f(x^{(l)}).    (8)

The first strict inequality follows from (6). The strict convexity of u leads to the strict convexity of g, and therefore g(x^(l+1), x^(l)) < g(x^(l), x^(l)) unless x^(l+1) = x^(l).
3 Global convergence theory of iterative algorithms
For an iterative procedure like CCCP to be useful, it must converge to a local optimum or a stationary point from all or at least a significant number of initialization states, and not exhibit other nonlinear system behaviors, such as divergence or oscillation. This behavior can be analyzed by using the global convergence theory of iterative algorithms developed by Zangwill [31]. Note that the term 'global convergence' is a misnomer. We will clarify it below and also introduce some notation and terminology.
To understand the convergence of an iterative procedure like CCCP, we need to understand the notion of a set-valued mapping, or point-to-set mapping, which is central to the theory of global convergence.^2 A point-to-set map Φ from a set X into a set Y is defined as Φ : X → P(Y), which assigns a subset of Y to each point of X, where P(Y) denotes the power set of Y. We introduce a few definitions related to the properties of point-to-set maps that will be used later. Suppose X and Y are two topological spaces. A point-to-set map Φ is said to be closed at x_0 ∈ X if x_k → x_0 as k → ∞, x_k ∈ X, and y_k → y_0 as k → ∞, y_k ∈ Φ(x_k), imply y_0 ∈ Φ(x_0). This concept of closure generalizes the concept of continuity for ordinary point-to-point mappings. A point-to-set map Φ is said to be closed on S ⊆ X if it is closed at every point of S. A fixed point of the map Φ : X → P(X) is a point x for which {x} = Φ(x), whereas a generalized fixed point of Φ is a point for which x ∈ Φ(x). Φ is said to be uniformly compact on X if there exists a compact set H independent of x such that Φ(x) ⊆ H for all x ∈ X. Note that if X is compact, then Φ is uniformly compact on X. Let φ : X → R be a continuous function. Φ is said to be monotonic with respect to φ whenever y ∈ Φ(x) implies that φ(y) ≤ φ(x). If, in addition, y ∈ Φ(x) and φ(y) = φ(x) imply that y = x, then we say that Φ is strictly monotonic.
Many iterative algorithms in mathematical programming can be described using the notion of point-to-set maps. Let X be a set and x_0 ∈ X a given point. Then an algorithm A with initial point x_0 is a point-to-set map A : X → P(X) which generates a sequence {x_k}_{k=1}^∞ via the rule x_{k+1} ∈ A(x_k), k = 0, 1, .... A is said to be globally convergent if, for any chosen initial point x_0, the sequence {x_k}_{k=0}^∞ generated by x_{k+1} ∈ A(x_k) (or a subsequence) converges to a point for which a necessary condition of optimality holds. The property of global convergence expresses, in a sense, the certainty that the algorithm works. It is very important to stress the fact that it does not imply (contrary to what the term might suggest) convergence to a global optimum for all initial points x_0.

With the above mentioned concepts, we now state Zangwill's global convergence theorem [31, Convergence theorem A, page 91].
Theorem 2 ([31]). Let A : X → P(X) be a point-to-set map (an algorithm) that, given a point x_0 ∈ X, generates a sequence {x_k}_{k=0}^∞ through the iteration x_{k+1} ∈ A(x_k). Also let a solution set Γ ⊆ X be given. Suppose

(1) All points x_k are in a compact set S ⊆ X.
(2) There is a continuous function φ : X → R such that:
    (a) x ∉ Γ ⟹ φ(y) < φ(x), ∀ y ∈ A(x),
    (b) x ∈ Γ ⟹ φ(y) ≤ φ(x), ∀ y ∈ A(x).
(3) A is closed at x if x ∉ Γ.

Then the limit of any convergent subsequence of {x_k}_{k=0}^∞ is in Γ. Furthermore, lim_{k→∞} φ(x_k) = φ(x*) for all limit points x*.

^2 Note that depending on the objective and constraints, the minimizer of the CCCP algorithm in (2) need not be unique. Therefore, the algorithm takes x^(l) as its input and returns a set of minimizers, from which an element x^(l+1) is chosen. Hence the notion of point-to-set maps appears naturally in such iterative algorithms.
The general idea in showing the global convergence of an algorithm A is to invoke Theorem 2 by appropriately defining Γ and φ. For an algorithm A that solves the minimization problem min{f(x) : x ∈ Ω}, the solution set Γ is usually chosen to be the set of corresponding stationary points, and φ can be chosen to be the objective function itself, i.e., f, if f is continuous. In Theorem 2, the convergence of φ(x_k) to φ(x*) does not automatically imply the convergence of x_k to x*. However, if A is strictly monotone with respect to φ, then Theorem 2 can be strengthened by using the following result due to Meyer [20, Theorem 3.1, Corollary 3.2].
Theorem 3 ([20]). Let A : X → P(X) be a point-to-set map such that A is uniformly compact, closed and strictly monotone on X, where X is a closed subset of R^n. If {x_k}_{k=0}^∞ is any sequence generated by A, then all limit points will be fixed points of A, φ(x_k) → φ(x*) =: φ* as k → ∞, where x* is a fixed point, ‖x_{k+1} - x_k‖ → 0, and either {x_k}_{k=0}^∞ converges or the set of limit points of {x_k}_{k=0}^∞ is connected. Define F(a) := {x ∈ F : φ(x) = a}, where F is the set of fixed points of A. If F(φ*) is finite, then any sequence {x_k}_{k=0}^∞ generated by A converges to some x* in F(φ*).
Both these results just use basic facts of analysis and are simple to prove and understand. Using
these results on the global convergence of algorithms, [29] has studied the convergence properties
of the EM algorithm, while [12] analyzed the convergence of generalized alternating minimization
procedures. In the following section, we use these results to analyze the convergence of CCCP.
4 Convergence theorems for CCCP

Let us consider the CCCP algorithm in (2) pertaining to the d.c. program in (1). Let A_cccp be the point-to-set map, x^(l+1) ∈ A_cccp(x^(l)), such that

    A_{cccp}(y) = \arg\min \{ u(x) - x^\top \nabla v(y) : x \in \Omega \},    (9)

where Ω := {x : c_i(x) ≤ 0, i ∈ [m]; d_j(x) = 0, j ∈ [p]}. Let us assume that {c_i} are differentiable convex functions defined on R^n. We now present the global convergence theorem for CCCP.
Theorem 4 (Global convergence of CCCP-I). Let u and v be real-valued differentiable convex functions defined on R^n. Suppose ∇v is continuous. Let {x^(l)}_{l=0}^∞ be any sequence generated by A_cccp defined by (9). Suppose A_cccp is uniformly compact^3 on Ω and A_cccp(x) is nonempty for any x ∈ Ω. Then, assuming suitable constraint qualification, all the limit points of {x^(l)}_{l=0}^∞ are stationary points of the d.c. program in (1). In addition, lim_{l→∞} (u(x^(l)) - v(x^(l))) = u(x*) - v(x*), where x* is some stationary point of A_cccp.
Before we proceed with the proof of Theorem 4, we need a few additional results. The idea of the proof is to show that any generalized fixed point of A_cccp is a stationary point of (1), which is shown below in Lemma 5, and then use Theorem 2 to analyze the generalized fixed points.

Lemma 5. Suppose x* is a generalized fixed point of A_cccp and assume that the constraints in (9) are qualified at x*. Then, x* is a stationary point of the program in (1).
Proof. We have x* ∈ A_cccp(x*) and the constraints in (9) are qualified at x*. Then, there exist Lagrange multipliers {λ_i*}_{i=1}^m ⊆ R_+ and {ν_j*}_{j=1}^p ⊆ R such that the following KKT conditions hold:

    \nabla u(x_*) - \nabla v(x_*) + \sum_{i=1}^m \lambda_i^* \nabla c_i(x_*) + \sum_{j=1}^p \nu_j^* \nabla d_j(x_*) = 0,
    c_i(x_*) \le 0, \; \lambda_i^* \ge 0, \; \lambda_i^* c_i(x_*) = 0, \; \forall i \in [m],
    d_j(x_*) = 0, \; \nu_j^* \in R, \; \forall j \in [p].    (10)

(10) is exactly the KKT conditions of (1), which are satisfied by (x*, {λ_i*}, {ν_j*}), and therefore x* is a stationary point of (1).
^3 Assuming that for every x ∈ Ω, the set H(x) := {y : u(y) - u(x) ≤ v(y) - v(x), y ∈ A_cccp(Ω)} is bounded is also sufficient for the result to hold.
Before proving Theorem 4, we need a result to test the closure of A_cccp. The following result from [12, Proposition 7] shows that the minimization of a continuous function forms a closed point-to-set map. A similar sufficient condition is also provided in [29, Equation 10].

Lemma 6 ([12]). Given a real-valued continuous function h on X × Y, define the point-to-set map Ψ : X → P(Y) by

    \Psi(x) = \arg\min_{y' \in Y} h(x, y') = \{ y : h(x, y) \le h(x, y'), \; \forall y' \in Y \}.    (11)

Then, Ψ is closed at x if Ψ(x) is nonempty.
We are now ready to prove Theorem 4.

Proof of Theorem 4. The assumption of A_cccp being uniformly compact on Ω ensures that condition (1) in Theorem 2 is satisfied. Let Γ be the set of all generalized fixed points of A_cccp and let φ = f = u - v. Because of the descent property in (5), condition (2) in Theorem 2 is satisfied. By our assumption on u and v, we have that g(x, y) = u(x) - v(y) - (x - y)^⊤∇v(y) is continuous in x and y. Therefore, by Lemma 6, the assumption of non-emptiness of A_cccp(x) for any x ∈ Ω ensures that A_cccp is closed on Ω, and so satisfies condition (3) in Theorem 2. Therefore, by Theorem 2, all the limit points of {x^(l)}_{l=0}^∞ are the generalized fixed points of A_cccp and lim_{l→∞} (u(x^(l)) - v(x^(l))) = u(x*) - v(x*), where x* is some generalized fixed point of A_cccp. By Lemma 5, since the generalized fixed points of A_cccp are stationary points of (1), the result follows.
Remark 7. If Ω is compact, then A_cccp is uniformly compact on Ω. In addition, since u is continuous on Ω, by the Weierstrass theorem^4 [21], it is clear that A_cccp(x) is nonempty for any x ∈ Ω and therefore is also closed on Ω. This means, when Ω is compact, the result in Theorem 4 follows trivially from Theorem 2.

In Theorem 4, we considered the generalized fixed points of A_cccp. The disadvantage with this case is that it does not rule out 'oscillatory' behavior [20]. To elaborate, we considered {x*} ⊆ A_cccp(x*). For example, let Ω_0 = {x_1, x_2} and let A_cccp(x_1) = A_cccp(x_2) = Ω_0 and u(x_1) - v(x_1) = u(x_2) - v(x_2) = 0. Then the sequence {x_1, x_2, x_1, x_2, ...} could be generated by A_cccp, with the convergent subsequences converging to the generalized fixed points x_1 and x_2. Such an oscillatory behavior can be avoided if we allow A_cccp to have fixed points instead of generalized fixed points. With appropriate assumptions on u and v, the following stronger result can be obtained on the convergence of CCCP through Theorem 3.
on the convergence of CCCP through Theorem 3.
Theorem 8 (Global convergence of CCCP?II). Let u and v be strictly convex, differentiable functions defined on Rn . Also assume ?v be continuous. Let {x(l) }?
l=0 be any sequence generated by
Acccp defined by (9). Suppose Acccp is uniformly compact on ? and Acccp (x) is nonempty for
any x ? ?. Then, assuming suitable constraint qualification, all the limit points of {x(l) }?
l=0
are stationary points of the d.c. program in (1), u(x(l) ) ? v(x(l) ) ? u(x? ) ? v(x? ) =: f ?
as l ? ?, for some stationary point x? , kx(l+1) ? x(l) k ? 0, and either {x(l) }?
l=0 con?
verges or the set of limit points of {x(l) }?
is
a
connected
and
compact
subset
of
S
(f
), where
l=0
S (a) := {x ? S : u(x) ? v(x) = a} and S is the set of stationary points of (1). If S (f ? ) is
?
finite, then any sequence {x(l) }?
l=0 generated by Acccp converges to some x? in S (f ).
Proof. Since u and v are strictly convex, the strict descent property in (8) holds and therefore A_cccp is strictly monotonic with respect to f. Under the assumptions made about A_cccp, Theorem 3 can be invoked, which says that all the limit points of {x^(l)}_{l=0}^∞ are fixed points of A_cccp, which either converge or form a connected compact set. From Lemma 5, the set of fixed points of A_cccp are already in the set of stationary points of (1), and the desired result follows from Theorem 3.
Theorems 4 and 8 answer the questions that we raised in Section 1. These results explicitly provide sufficient conditions on u, v, {c_i} and {d_j} under which the CCCP algorithm finds a stationary point of (1), along with the convergence of the sequence generated by the algorithm. From Theorem 8, it should be clear that convergence of f(x^(l)) to f* does not automatically imply the convergence of x^(l) to x*. Convergence in the latter sense requires more stringent conditions, like the finiteness of the set of stationary points of (1) that assume the value f*.
^4 The Weierstrass theorem states: If f is a real continuous function on a compact set K ⊆ R^n, then the problem min{f(x) : x ∈ K} has an optimal solution x* ∈ K.
4.1 Extensions

So far, we have considered d.c. programs where the constraint set is convex. Let us consider a general d.c. program given by

    \min_x \; u_0(x) - v_0(x) \quad \text{s.t.} \quad u_i(x) - v_i(x) \le 0, \; i \in [m],    (12)
where {u_i}, {v_i} are real-valued convex and differentiable functions defined on R^n. While dealing with kernel methods for missing variables, [26] encountered a problem of the form in (12), for which they proposed a constrained concave-convex procedure given by

    x^{(l+1)} \in \arg\min_x \; u_0(x) - \hat{v}_0(x; x^{(l)}) \quad \text{s.t.} \quad u_i(x) - \hat{v}_i(x; x^{(l)}) \le 0, \; i \in [m],    (13)

where \hat{v}_i(x; x^{(l)}) := v_i(x^{(l)}) + (x - x^{(l)})^\top \nabla v_i(x^{(l)}). Note that, similar to CCCP, the algorithm in (13) solves a sequence of convex programs; a minimal sketch of one such iteration is given below.
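Here is a small Python sketch of the iteration (13) on a toy problem of our own construction (u_0(x) = ||x||^2, v_0(x) = x_1, u_1(x) = ||x - (2, 0)||^2, v_1(x) = ||x||^2 / 2), with each convexified subproblem handed to SciPy's SLSQP solver; it is meant only to show the structure of the iteration, not to reproduce any experiment from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Toy constrained CCP: minimize u0(x) - v0(x) s.t. u1(x) - v1(x) <= 0.
u0 = lambda x: np.sum(x ** 2)
v0 = lambda x: x[0];                  dv0 = lambda x: np.array([1.0, 0.0])
u1 = lambda x: np.sum((x - np.array([2.0, 0.0])) ** 2)
v1 = lambda x: 0.5 * np.sum(x ** 2);  dv1 = lambda x: x

x = np.array([2.0, 0.0])              # feasible start: u1 - v1 = -2 <= 0
for l in range(50):
    xl = x.copy()
    obj = lambda z: u0(z) - (v0(xl) + (z - xl) @ dv0(xl))           # convex majorizer
    con = {'type': 'ineq',                                          # SLSQP wants fun >= 0
           'fun': lambda z: -(u1(z) - (v1(xl) + (z - xl) @ dv1(xl)))}
    x = minimize(obj, xl, method='SLSQP', constraints=[con]).x
    if np.linalg.norm(x - xl) < 1e-8:
        break
```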
Though [26, Theorem 1] have provided a convergence analysis for the algorithm in (13), it is not complete, due to the fact that the convergence of {x^(l)}_{l=0}^∞ is assumed. In this subsection, we provide its convergence analysis, following an approach similar to what we did for CCCP, by considering a point-to-set map B_ccp associated with the iterative algorithm in (13), where x^(l+1) ∈ B_ccp(x^(l)). In Theorem 10, we provide the global convergence result for the constrained concave-convex procedure, which is an equivalent version of Theorem 4 for CCCP. We do not provide the stronger version of the result as in Theorem 8, as it can be obtained by assuming strict convexity of u_0 and v_0. Before proving Theorem 10, we need an equivalent version of Lemma 5, which we provide below.
Lemma 9. Suppose x* is a generalized fixed point of B_ccp and assume that the constraints in (13) are qualified at x*. Then, x* is a stationary point of the program in (12).

Proof. Based on the assumptions x* ∈ B_ccp(x*) and the constraint qualification at x* in (13), there exist Lagrange multipliers {λ_i*}_{i=1}^m ⊆ R_+ (for simplicity, we assume all the constraints to be inequality constraints) such that the following KKT conditions hold:

    \nabla u_0(x_*) + \sum_{i=1}^m \lambda_i^* \left( \nabla u_i(x_*) - \nabla v_i(x_*) \right) = \nabla v_0(x_*),
    u_i(x_*) - v_i(x_*) \le 0, \; \lambda_i^* \ge 0, \; i \in [m],
    \left( u_i(x_*) - v_i(x_*) \right) \lambda_i^* = 0, \; i \in [m],    (14)

which are exactly the KKT conditions for (12), satisfied by (x*, {λ_i*}), and therefore x* is a stationary point of (12).
Theorem 10 (Global convergence of constrained CCP). Let {u_i}, {v_i} be real-valued differentiable convex functions on R^n. Assume ∇v_0 to be continuous. Let {x^(l)}_{l=0}^∞ be any sequence generated by B_ccp defined in (13). Suppose B_ccp is uniformly compact on Ω := {x : u_i(x) - v_i(x) ≤ 0, i ∈ [m]} and B_ccp(x) is nonempty for any x ∈ Ω. Then, assuming suitable constraint qualification, all the limit points of {x^(l)}_{l=0}^∞ are stationary points of the d.c. program in (12). In addition, lim_{l→∞} (u_0(x^(l)) - v_0(x^(l))) = u_0(x*) - v_0(x*), where x* is some stationary point of B_ccp.
Proof. The proof is very similar to that of Theorem 4, wherein we check whether B_ccp satisfies the conditions of Theorem 2 and then invoke Lemma 9. The assumptions mentioned in the statement of the theorem ensure that conditions (1) and (3) in Theorem 2 are satisfied. [26, Theorem 1] has proved the descent property, similar to that of (5), which simply follows from the linear majorization idea, and therefore the descent property in condition (2) of Theorem 2 holds. Therefore, the result follows from Theorem 2 and Lemma 9.
5 On the local convergence of CCCP: An open problem
The study so far has been devoted to the global convergence analysis of CCCP and the constrained concave-convex procedure. As mentioned before, we say an algorithm is globally convergent if, for any chosen starting point x_0, the sequence {x_k}_{k=0}^∞ generated by x_{k+1} ∈ A(x_k) converges to a point for which a necessary condition of optimality holds. In the results so far, we have shown that all the limit points of any sequence generated by CCCP (resp. its constrained version) are the stationary points (local extrema or saddle points) of the program in (1) (resp. (12)). Suppose x_0 is chosen such that it lies in an ε-neighborhood around a local minimum x*; will the CCCP sequence then converge to x*? If so, what is the rate of convergence? This is the question of local convergence that needs to be addressed.
[24] studied the local convergence of bound optimization algorithms (of which CCCP is an example) in order to compare their rate of convergence to that of gradient and second-order methods. They considered the unconstrained version of CCCP with A_cccp a differentiable point-to-point map, and showed that, depending on the curvature of u and v, CCCP will exhibit either quasi-Newton behavior with fast, typically superlinear convergence, or extremely slow, first-order convergence behavior. However, extending these results to the constrained setup in (13) is not obvious. The following result, due to Ostrowski and found in [23, Theorem 10.1.3], provides a way to study the local convergence of iterative algorithms.

Proposition 11 (Ostrowski). Suppose that Φ : U ⊂ R^n → R^n has a fixed point x* ∈ int(U) and Φ is Fréchet-differentiable at x*. If the spectral radius of Φ′(x*) satisfies ρ(Φ′(x*)) < 1, and if x_0 is sufficiently close to x*, then the iterates {x_k} defined by x_{k+1} = Φ(x_k) all lie in U and converge to x*.

A few remarks are in order regarding the use of Proposition 11 to study the local convergence of CCCP. Note that Proposition 11 treats Φ as a point-to-point map, which can be obtained by choosing u and v to be strictly convex so that x^(l+1) is the unique minimizer of (2). The point x* in Proposition 11 can be chosen to be a local minimum. Therefore, the desired result of local convergence with at least a linear rate is obtained if we show that ρ(Φ′(x*)) < 1. However, we are currently not aware of a way to compute the differential of Φ, nor of conditions to impose on the functions in (2) so that Φ is a differentiable map. This is an open question coming out of this work.
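Even without an analytic handle on Φ′, Proposition 11 can be probed numerically for specific instances. The sketch below (an illustrative construction of ours, not a result of this work) builds the unconstrained CCCP map for a strictly convex pair (u, v), locates a fixed point, estimates Φ′(x*) by central finite differences, and checks that its spectral radius is below 1.

import numpy as np
from scipy.optimize import minimize

# Illustrative strictly convex pair: u(x) = x^4/4, v(x) = x^2/2, so the CCCP
# map is Phi(x) = argmin_z u(z) - z * v'(x) = x^(1/3) (for x > 0).
u  = lambda z: 0.25 * z[0] ** 4
dv = lambda x: np.array([x[0]])           # gradient of v

def phi(x):
    obj = lambda z: u(z) - z @ dv(x)
    return minimize(obj, x, method="BFGS", options={"gtol": 1e-10}).x

x = np.array([0.5])
for _ in range(100):                       # run CCCP to locate a fixed point x*
    x = phi(x)

eps = 1e-4                                 # finite-difference Jacobian of Phi at x*
J = np.array([(phi(x + eps) - phi(x - eps)) / (2 * eps)])
rho = np.max(np.abs(np.linalg.eigvals(J)))
print(x, rho)   # x* = 1 and rho = 1/3 < 1: linear local convergence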
On the other hand, the local convergence behavior of DCA has been established for two important classes of d.c. programs: (i) the trust region subproblem [9] (minimization of a quadratic function over a Euclidean ball) and (ii) nonconvex quadratic programs [8]. We are not aware of local optimality results for general d.c. programs using DCA.
6 Conclusion & Discussion

The concave-convex procedure (CCCP) is widely used in machine learning. In this work, we analyze its global convergence behavior by using results from the global convergence theory of iterative algorithms. We explicitly state the conditions under which any sequence generated by CCCP converges to a stationary point of a d.c. program with convex constraints. The proposed approach allows an elegant and direct proof and is fundamentally different from the highly technical proof for the convergence of DCA, which implies convergence for CCCP. It illustrates the power and generality of Zangwill's global convergence theory as a framework for proving the convergence of iterative algorithms. We also briefly discuss the local convergence of CCCP and present an open question, the settlement of which would address the local convergence behavior of CCCP.
Acknowledgments

The authors thank the anonymous reviewers for their constructive comments. They wish to acknowledge support from the National Science Foundation (grant DMS-MSPA 0625409), the Fair Isaac Corporation, and the University of California MICRO program.
References
[1] D. Böhning and B. G. Lindsay. Monotonicity of quadratic-approximation algorithms. Annals of the Institute of Statistical Mathematics, 40(4):641–663, 1988.
[2] J. F. Bonnans, J. C. Gilbert, C. Lemaréchal, and C. A. Sagastizábal. Numerical Optimization: Theoretical and Practical Aspects. Springer-Verlag, 2006.
[3] P. S. Bradley and O. L. Mangasarian. Feature selection via concave minimization and support vector machines. In Proc. 15th International Conf. on Machine Learning, pages 82–90. Morgan Kaufmann, San Francisco, CA, 1998.
[4] E. J. Candès, M. Wakin, and S. Boyd. Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl., 2007. To appear.
[5] R. Collobert, F. Sinz, J. Weston, and L. Bottou. Large scale transductive SVMs. Journal of Machine Learning Research, 7:1687–1712, 2006.
[6] J. de Leeuw. Applications of convex analysis to multidimensional scaling. In J. R. Barra, F. Brodeau, G. Romier, and B. Van Cutsem, editors, Recent Developments in Statistics, pages 133–146, Amsterdam, The Netherlands, 1977. North Holland Publishing Company.
[7] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Roy. Stat. Soc. B, 39:1–38, 1977.
[8] T. Pham Dinh and L. T. Hoai An. Convex analysis approach to d.c. programming: Theory, algorithms and applications. Acta Mathematica Vietnamica, 22(1):289–355, 1997.
[9] T. Pham Dinh and L. T. Hoai An. D.c. optimization algorithms for solving the trust region subproblem. SIAM Journal of Optimization, 8:476–505, 1998.
[10] C. B. Do, Q. V. Le, C. H. Teo, O. Chapelle, and A. J. Smola. Tighter bounds for structured estimation. In Advances in Neural Information Processing Systems 21, 2009. To appear.
[11] G. Fung and O. L. Mangasarian. Semi-supervised support vector machines for unlabeled data classification. Optimization Methods and Software, 15:29–44, 2001.
[12] A. Gunawardana and W. Byrne. Convergence theorems for generalized alternating minimization procedures. Journal of Machine Learning Research, 6:2049–2073, 2005.
[13] W. J. Heiser. Correspondence analysis with least absolute residuals. Comput. Stat. Data Analysis, 5:337–356, 1987.
[14] P. J. Huber. Robust Statistics. John Wiley, New York, 1981.
[15] D. R. Hunter and K. Lange. A tutorial on MM algorithms. The American Statistician, 58:30–37, 2004.
[16] D. R. Hunter and R. Li. Variable selection using MM algorithms. Annals of Statistics, 33:1617–1642, 2005.
[17] K. Lange, D. R. Hunter, and I. Yang. Optimization transfer using surrogate objective functions with discussion. Journal of Computational and Graphical Statistics, 9(1):1–59, 2000.
[18] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems 13, pages 556–562. MIT Press, Cambridge, 2001.
[19] X.-L. Meng. Discussion on "optimization transfer using surrogate objective functions". Journal of Computational and Graphical Statistics, 9(1):35–43, 2000.
[20] R. R. Meyer. Sufficient conditions for the convergence of monotonic mathematical programming algorithms. Journal of Computer and System Sciences, 12:108–121, 1976.
[21] M. Minoux. Mathematical Programming: Theory and Algorithms. John Wiley & Sons Ltd., 1986.
[22] J. Neumann, C. Schnörr, and G. Steidl. Combined SVM-based feature selection and classification. Machine Learning, 61:129–150, 2005.
[23] J. M. Ortega and W. C. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York, 1970.
[24] R. Salakhutdinov, S. Roweis, and Z. Ghahramani. On the convergence of bound optimization algorithms. In Proc. 19th Conference on Uncertainty in Artificial Intelligence, pages 509–516, 2003.
[25] F. Sha, Y. Lin, L. K. Saul, and D. D. Lee. Multiplicative updates for nonnegative quadratic programming. Neural Computation, 19:2004–2031, 2007.
[26] A. J. Smola, S. V. N. Vishwanathan, and T. Hofmann. Kernel methods for missing variables. In Proc. of the Tenth International Workshop on Artificial Intelligence and Statistics, 2005.
[27] B. K. Sriperumbudur, D. A. Torres, and G. R. G. Lanckriet. Sparse eigen methods by d.c. programming. In Proc. of the 24th Annual International Conference on Machine Learning, 2007.
[28] L. Wang, X. Shen, and W. Pan. On transductive support vector machines. In J. Verducci, X. Shen, and J. Lafferty, editors, Prediction and Discovery. American Mathematical Society, 2007.
[29] C. F. J. Wu. On the convergence properties of the EM algorithm. Annals of Statistics, 11(1):95–103, 1983.
[30] A. L. Yuille and A. Rangarajan. The concave-convex procedure. Neural Computation, 15:915–936, 2003.
[31] W. I. Zangwill. Nonlinear Programming: A Unified Approach. Prentice-Hall, Englewood Cliffs, N.J., 1969.
| 3646 |@word version:5 briefly:2 stronger:2 underline:1 open:5 closure:2 heiser:1 mention:2 initial:3 bradley:1 current:1 must:1 john:2 numerical:2 happen:1 hofmann:1 update:3 stationary:24 intelligence:2 xk:29 weierstrass:2 provides:2 iterates:1 successive:1 minorization:4 mathematical:4 along:1 constructed:1 direct:2 differential:1 prove:5 introduce:2 x0:11 huber:1 behavior:11 salakhutdinov:1 relying:1 globally:2 automatically:2 company:1 considering:1 provided:3 notation:1 bounded:1 moreover:1 what:5 developed:1 unified:1 extremum:1 corporation:1 sinz:1 certainty:1 every:2 multidimensional:2 concave:12 exactly:3 grant:1 appear:4 before:5 engineering:2 local:19 qualification:5 treat:1 limit:11 analyzing:1 cliff:1 meng:1 might:1 initialization:2 therein:1 studied:3 acta:1 appl:1 minoux:1 factorization:2 practical:3 unique:2 acknowledgment:1 zangwill:10 procedure:15 thought:1 pointto:2 boyd:1 word:2 refers:1 suggest:1 ccp:1 superlinear:1 selection:6 close:1 unlabeled:1 put:1 context:1 prentice:1 gilbert:1 equivalent:2 map:18 reviewer:1 missing:3 attention:1 starting:1 convex:40 shen:2 simplicity:1 recovery:1 assigns:1 rule:2 proving:3 handle:1 gert:2 notion:4 resp:2 diego:2 suppose:13 construction:1 lindsay:1 annals:3 programming:9 lanckriet:2 element:1 roy:1 subproblem:2 electrical:2 solved:1 wang:1 region:2 ensures:3 connected:3 decrease:1 yk:2 substantial:1 mentioned:4 dempster:1 convexity:3 ui:7 seung:1 solving:4 yuille:4 completely:1 fast:1 pertaining:1 artificial:2 neighborhood:1 choosing:1 widely:3 solve:1 valued:7 larger:1 say:3 statistic:7 transductive:4 itself:1 laird:1 sequence:22 differentiable:12 advantage:1 coming:1 fr:1 relevant:1 roweis:1 convergence:83 rangarajan:4 optimum:2 extending:1 neumann:1 converges:6 depending:3 linearize:1 stat:2 solves:5 soc:1 auxiliary:1 involves:2 implies:2 kuhn:1 radius:1 popularly:1 gotten:1 stringent:1 argued:1 bonnans:1 karush:1 generalization:1 anonymous:1 proposition:5 tighter:1 strictly:8 extension:1 clarify:1 hold:8 pham:3 around:2 mm:12 considered:4 sufficiently:1 hall:1 mapping:3 claim:1 abal:1 estimation:2 proc:4 applicable:1 currently:1 teo:1 vice:1 tool:1 minimization:14 mit:1 gaussian:1 corollary:1 derived:1 check:1 likelihood:1 rigorous:3 sense:2 minimizers:1 i0:1 typically:1 quasi:1 interested:1 issue:4 arg:8 dual:3 classification:2 constrained:10 special:1 raised:1 construct:2 aware:2 fundamentally:3 micro:1 few:3 divergence:1 national:1 replaced:1 statistician:1 englewood:1 highly:2 analyzed:5 primal:3 behind:1 devoted:1 necessary:3 unless:3 incomplete:1 taylor:1 euclidean:1 desired:2 theoretical:1 disadvantage:1 maximization:4 ordinary:1 addressing:1 subset:3 too:2 answer:1 combined:1 international:3 siam:1 lee:2 invoke:2 central:1 satisfied:5 gunawardana:1 conf:1 verge:1 american:2 return:1 li:1 orr:1 north:1 int:1 explicitly:3 vi:8 collobert:1 multiplicative:2 later:1 lot:1 closed:10 analyze:6 start:1 candes:1 hoai:3 majorization:12 minimize:1 kaufmann:1 succession:1 hunter:3 bharath:1 oscillatory:2 whenever:1 definition:1 sriperumbudur:2 pp:1 involved:1 tucker:1 isaac:1 dm:1 obvious:1 associated:1 echet:1 naturally:1 proof:18 di:1 con:1 stop:1 proved:2 subsection:1 organized:1 actually:1 dca:10 appears:1 supervised:1 follow:3 verducci:1 wherein:1 formulation:2 leen:1 though:2 generality:1 misnomer:1 furthermore:1 just:1 smola:2 hand:2 trust:2 nonlinear:3 continuity:1 believe:2 usage:1 name:1 dietterich:1 verify:1 concept:3 multiplier:2 byrne:1 equality:1 hence:1 alternating:4 iteratively:1 deal:4 reweighted:1 coincides:1 
generalized:16 ortega:2 stress:1 complete:3 reasoning:1 consideration:1 invoked:1 mangasarian:2 specialized:2 dinh:3 refer:1 significant:1 versa:1 cambridge:1 unconstrained:3 trivially:1 pm:1 mathematics:1 dj:8 chapelle:1 v0:6 etc:5 curvature:1 showed:2 recent:1 perspective:2 jolla:2 verlag:1 nonconvex:2 inequality:7 seen:2 minimum:5 additional:1 morgan:1 impose:1 employed:1 converge:6 monotonically:1 signal:1 semi:2 ii:4 u0:6 reduces:2 technical:3 academic:1 long:1 lin:1 cccp:60 equally:1 converging:1 prediction:1 regression:1 basic:1 enhancing:1 expectation:2 iteration:3 kernel:2 achieved:1 subdifferential:1 want:1 whereas:1 addition:4 addressed:1 finiteness:1 appropriately:1 unlike:1 limk:1 strict:6 comment:1 elegant:3 thing:1 contrary:1 point1:1 lafferty:1 yang:1 easy:2 iterate:1 fit:1 lange:2 idea:7 regarding:1 whether:1 ltd:1 proceed:1 york:2 remark:2 useful:1 theorem4:1 clear:2 netherlands:1 extensively:2 svms:6 exist:1 tutorial:2 sign:1 express:1 terminology:1 vietnamica:1 sla:1 changing:1 vbi:2 tenth:1 monotone:2 powerful:2 uncertainty:1 place:2 reader:1 wu:1 oscillation:1 maximation:1 scaling:2 bound:5 convergent:4 correspondence:2 quadratic:6 topological:1 encountered:1 nonnegative:1 annual:1 constraint:16 vishwanathan:1 x2:7 software:1 generates:3 aspect:1 fourier:1 min:14 optimality:4 extremely:1 department:2 structured:2 fung:1 ball:1 em:6 y0:2 son:1 pan:1 ostrowski:2 equation:2 discus:2 nonempty:5 generalizes:1 appropriate:1 spectral:1 eigen:1 original:3 denotes:1 include:1 ensure:2 publishing:1 graphical:2 wakin:1 newton:1 exploit:1 restrictive:1 ghahramani:1 prof:1 establish:1 sandwiched:1 society:1 objective:4 question:8 already:2 liml:3 sha:1 surrogate:4 said:6 exhibit:2 minx:2 gradient:1 thank:1 assuming:6 analyst:1 setup:1 statement:1 negative:3 anal:1 bharathsv:1 proper:1 allowing:1 upper:2 sm:1 finite:2 acknowledge:1 descent:7 situation:1 extended:3 defining:1 rn:13 ucsd:2 introduced:1 required:1 schn:1 california:3 steidl:1 address:2 usually:2 below:3 sparsity:1 program:34 rheinboldt:2 including:1 max:2 power:2 suitable:3 natural:2 residual:1 scheme:1 brief:1 imply:5 ready:1 tresp:1 literature:2 discovery:1 outlining:1 foundation:1 affine:1 sufficient:4 rubin:1 principle:3 editor:3 echal:1 sagastiz:1 last:1 majorizationminimization:1 qualified:3 allow:1 understand:3 institute:1 saul:1 absolute:1 sparse:5 van:1 stand:2 settlement:1 author:1 made:1 san:3 avoided:1 ohning:1 far:4 compact:15 dealing:1 monotonicity:1 global:22 kkt:6 assumed:2 conclude:1 francisco:1 subsequence:3 continuous:12 iterative:18 search:1 transfer:3 robust:2 ca:3 bottou:1 constructing:2 did:1 linearly:2 fair:1 x1:7 referred:4 elaborate:1 strengthened:1 torres:1 slow:1 wiley:2 meyer:2 wish:1 comput:1 lie:2 answering:1 mspa:1 theorem:47 specific:2 xt:4 showing:2 jensen:2 svm:1 exists:2 workshop:1 ci:9 mathematica:1 illustrates:1 kx:1 flavor:1 simply:2 saddle:2 lagrange:2 kxk:1 amsterdam:1 holland:1 monotonic:5 springer:1 minimizer:2 satisfies:6 relies:1 weston:1 goal:2 lemar:1 specifically:1 uniformly:8 reversing:1 principal:2 lemma:10 called:4 ece:1 duality:2 la:2 support:6 latter:1 constructive:1 |
2,920 | 3,647 | Beyond Categories: The Visual Memex Model for
Reasoning About Object Relationships
Tomasz Malisiewicz, Alexei A. Efros
Robotics Institute
Carnegie Mellon University
{tmalisie,efros}@cs.cmu.edu
Abstract
The use of context is critical for scene understanding in computer vision, where
the recognition of an object is driven by both local appearance and the object?s relationship to other elements of the scene (context). Most current approaches rely
on modeling the relationships between object categories as a source of context.
In this paper we seek to move beyond categories to provide a richer appearancebased model of context. We present an exemplar-based model of objects and their
relationships, the Visual Memex, that encodes both local appearance and 2D spatial
context between object instances. We evaluate our model on Torralba?s proposed
Context Challenge against a baseline category-based system. Our experiments
suggest that moving beyond categories for context modeling appears to be quite
beneficial, and may be the critical missing ingredient in scene understanding systems.
1 Introduction

Image understanding is one of the Holy Grail problems in computer vision. Understanding a scene arguably requires parsing the image into its constituent objects. In real scenes composed of many different objects, the spatial configuration of one object can facilitate recognition of related objects [1], and quite often ambiguities in recognition cannot be resolved without looking beyond the spatial extent of the object in question. Thus, algorithms which jointly recognize many objects at once by taking account of contextual relationships have been quite popular. While early systems relied on hand-coded rules for inter-object context (e.g. [2, 3]), more modern approaches typically perform inference in a probabilistic graphical model with respect to categories, where object interactions are modeled as higher-order potentials [4, 5, 6, 7, 8, 9, 10]. One important implicit assumption made by all such models is that interactions between object instances can be adequately modeled as relationships between human-defined object categories.

In this paper we challenge this "category assumption" for object-object interactions and propose a novel category-free approach for modeling object relationships. We propose a new framework, the Visual Memex Model, for representing and reasoning about object identities and their contextual relationships in an exemplar-based, non-parametric way. We evaluate our model on Antonio Torralba's proposed Context Challenge [11] against a baseline category-based system.
2 Motivation

The use of categories (classes) to represent concepts (e.g. visual objects) is so prevalent in computer vision and machine learning that most researchers don't give it a second thought. Faced with a new task, one simply carves up the solution space into classes (e.g. cars, people, buildings), assigns class labels to training examples and applies one of the many popular classifiers to arrive at a solution. However, we believe that it is worthwhile to re-examine the basic assumption behind categorization, and especially its role in modeling relationships between objects.

Theories of categorization date back to the ancient Greeks. Aristotle defined categories as discrete entities characterized by a set of properties shared by all their members [12]. His categories are mutually exclusive, and every member of a category is equal. This classical view is still the most widely accepted way of reasoning about categories and taxonomies in the hard sciences. However, as pointed out by Wittgenstein, this is almost certainly not the way most of our everyday concepts work (e.g. what is the set of properties that defines the concept "game" and nothing else? [13]). Empirical evidence for typicality (e.g. a robin is a more commonly cited example of "bird" than a chicken) and multiple category memberships (e.g. chicken is both "bird" and "food") further complicates the Aristotelian view.

The ground-breaking work of cognitive psychologist Eleanor Rosch [14] demonstrated that humans do not cut up the world into neat categories defined by shared properties, but instead use similarity as the basis of categorization. Her Prototype Theory postulates that an object's class is determined by its similarity to (a set of) prototypes which define each category, allowing for varying degrees of membership. Such prototype models have been successfully used for object recognition [15, 16]. Going even further, Exemplar Theory [17, 18] rejects the need for an explicit category representation, arguing instead that a concept can be implicitly formed via all its observed instances. This allows for a dynamic definition of categories based on data availability and task (e.g. an object can be a vehicle, a car, a Volvo, or Bob's Volvo). A recent operationalization of the exemplar model in the visual domain can be found in [19].

But it might not be too productive to concentrate on the various categorization theories without considering the final aim: what do we need categories for? One argument is that categorization is a tool to facilitate knowledge transfer. E.g. having been attacked once by a tiger, it is critically important to determine if a newly observed object belongs to the tiger category so as to utilize the information from the previous encounter. Note that here recognizing the explicit category is unimportant, as long as the two tigers can be associated with each other. Guided by this intuition and evidence from cognitive neuroscience, Bar [20] outlined the importance of analogies, associations, and prediction in the human brain. He argues that the goal of visual perception is not to recognize an object in the traditional sense of categorizing it (i.e. asking "what is this?"), but instead linking the input with an analogous representation in memory (i.e. asking "what is this like?"). Once a novel input is linked with analogous representations, associated representations are activated rapidly and predict the representations of what is most likely to occur next.

These ideas regarding analogies, associations, and prediction are surprisingly similar to Vannevar Bush's 1945 concept of the Memex [21], which was seen decades later as pioneering hypertext and the World Wide Web. Concerned with the transmission and accessibility of scientific ideas, Bush faulted the "artificiality of systems of indexing" and proposed the Memory Extender (Memex), a physical device which would help find information based on association instead of strict categorical indexing. The associative links were to be entered manually by the user and could be of several different types. Chains of links would form into longer "associative trails," creating new narratives in the concept space. For Bush, "the process of tying two items together is the important thing."

Inspired by these diverse ideas that are, nonetheless, all pointing in the same general direction, we have been motivated to evaluate them on a concrete problem, to see if they can offer benefits over the more traditional classification framework. One particular area where we feel these ideas might prove very useful is in modeling relationships between objects within an image. Therefore, in this paper we propose, in an homage to Bush, the Visual Memex Model, as a first step towards operationalizing the direct modeling of associations between visual objects, and compare it with more standard tools for the same task.
3 The Visual Memex Model

Our starting point is Vannevar Bush's observation that strict categorical indexing of concepts has severe limitations. Abandoning rigid object categories, we embrace Bush's and Bar's belief in the primary role of associations, but unlike Bush, we aim to discover these associations automatically from the data. At the core of our model is an exemplar-based representation of objects [18, 19]. The Visual Memex can then be thought of as a vast graph, with nodes representing all the object instances in the dataset, and arcs representing the different types of associations between them (Figure 1).

Figure 1: The Visual Memex graph encodes object similarity (solid black edges) and spatial context (dotted red edges) between pairs of object exemplars. A spatial context feature is stored for each context edge. The Memex graph can be used to interpret a new image (left) by associating image segments with exemplars in the graph (orange edges) and propagating the information. Figure best viewed in color.

There are two types of arcs in our model, encoding two different relationships between objects: 1) visual similarity (e.g. this car looks like that car), and 2) contextual associations (e.g. this car is next to this building).

Once the graph is built, it can be used to interpret a novel image (Figure 1, left) by first connecting segments within the image with similar stored exemplars, and then propagating contextual information between these exemplars through the graph. When an exemplar gets activated, visually similar exemplars as well as other contextually relevant objects get activated as well. This way, exemplar-to-exemplar similarity in the Memex graph can serve as Bush's "trails" to link concepts together in a non-parametric, query-dependent way, without the use of predefined categories. For example, in Figure 1, we should be able to infer that a car seen from the rear often co-occurs with an oblique building wall (but not a frontal wall), something which category-based models would be hard-pressed to achieve.

Formally, we define the Visual Memex Model as a graph G = (V, E_S, E_C, {D}, {f}) consisting of N object exemplar nodes V, similarity edges E_S, context edges E_C, N per-exemplar similarity functions {D}, and the spatial features {f} associated with each context edge. We now describe how to learn the similarity functions {D} from data to create the structure of the Visual Memex.
3.1 Similarity Edges

We use the per-exemplar distance-function learning algorithm of Malisiewicz et al. [19] to learn the object similarity edges. For each exemplar, the algorithm learns which other exemplars it is similar to, as well as a distance function. A distance function is a linear combination of elementary distances used to measure similarity to the exemplar. We use the same 14 color, shape, texture, and location features as used in [19]. For the j-th exemplar, w_j is the vector of 14 weights, b_j is a scalar bias, and α_j ∈ {0, 1}^{|C|} is a binary indicator vector which encodes which other exemplars the current exemplar is similar to. We solve [w_j*, b_j*, α_j*] = arg min_{w,b,α} f_j(w, b, α), but since the exemplars' optimization problems are independent, we drop the subscript j for clarity. Let d_i be the vector of 14 Euclidean distances between the exemplar whose similarity we are learning (the focal exemplar) and the i-th exemplar. C is the set of exemplars that have the same label as the focal exemplar. Let L(x) = max(1 − x, 0)² be the hinge-squared loss function. A different w, b, and α are learned per-exemplar by optimizing the following functional:

f(w, b, α) = ||w||² + (λ/2) Σ_{i∈C} α_i L(−(wᵀ d_i + b)) + Σ_{i∉C} L(wᵀ d_i + b) − μ ||α||²     (1)

We minimize the above SVM-like objective function via an alternating optimization strategy as in [19]. The algorithm uses labels (see Section 3.3) during learning, where the regularization term favors connecting to many similarly-labeled exemplars and the loss term favors separability in distance space.
[Figure 2 shows an example image with one region hidden, surrounded by category-estimation outputs such as tree, window, door, car, wheel, person, road, fence, building, and sidewalk.]

Figure 2: Torralba's Context Challenge: "How far can you go without running a local object detector?" The task is to reason about the identity of the hidden object (denoted by a "?") without local information. In our category-free Visual Memex model, object predictions are generated in the form of exemplar associations for the hidden object. In a category-based model, the category of the hidden object is directly estimated.
We create a similarity edge between two exemplars if they are deemed similar by each other's distance functions. We use a fixed μ = 0.00001 and λ = 100 for all exemplars.
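As an illustration, the functional (1) is straightforward to evaluate. The Python sketch below uses synthetic data and our own variable names, and assumes our reading of (1), in which λ weighs the in-class loss and the small μ acts as a tie-breaking reward for selecting neighbors; the full procedure of [19] would alternate this α step with updates of (w, b).

import numpy as np

L = lambda x: np.maximum(1.0 - x, 0.0) ** 2   # hinge-squared loss of (1)

def f_objective(w, b, alpha, D, same, lam=100.0, mu=1e-5):
    """Functional (1) for one focal exemplar.
    D[i]    : 14-dim vector of elementary distances to exemplar i
    same[i] : True if exemplar i shares the focal exemplar's label
    alpha   : binary vector, alpha[i] = 1 if i is deemed 'similar'."""
    s = D @ w + b
    in_class  = (lam / 2.0) * np.sum(alpha[same] * L(-s[same]))
    out_class = np.sum(L(s[~same]))
    return w @ w + in_class + out_class - mu * np.sum(alpha ** 2)

def update_alpha(w, b, D, same, lam=100.0, mu=1e-5):
    """Alternating step for alpha with (w, b) fixed: include exemplar i
    exactly when doing so decreases (1)."""
    s = D @ w + b
    return (same & ((lam / 2.0) * L(-s) < mu)).astype(float)

rng = np.random.default_rng(0)
D = rng.random((200, 14)); same = rng.random(200) < 0.3
w = rng.random(14); b = -3.0
alpha = update_alpha(w, b, D, same)
print(f_objective(w, b, alpha, D, same), alpha.sum())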
3.2 Context Edges

When two objects occur inside a single image, we encode their 2-D spatial relationship into a context feature vector f ∈ R^10 (visualized as red dotted edges in Figure 1). The context feature vector encodes relative overlap, relative displacement, relative scale, and relative height of the bottom-most pixel between two exemplar regions in a single image. This feature captures the spatial relationship between two regions and does not take into account any appearance information; it is a generalization of the spatial features used in [8]. We measure the similarity between two context features using a Gaussian kernel: K(f, f′) = exp(−γ₁ ||f − f′||²) with γ₁ = 1.0.
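A sketch of this construction follows. The kernel matches the formula above; the feature computation is only illustrative (a 5-dimensional stand-in built from bounding boxes, since the exact 10-dimensional encoding is not fully specified here).

import numpy as np

def context_kernel(f, f_prime, gamma1=1.0):
    """Gaussian kernel on spatial context features: K(f, f') = exp(-gamma1 ||f - f'||^2)."""
    d = np.asarray(f) - np.asarray(f_prime)
    return np.exp(-gamma1 * (d @ d))

def context_feature(box_a, box_b):
    """Illustrative spatial feature between two regions given as
    (x_min, y_min, x_max, y_max) boxes; a hypothetical stand-in for the
    paper's 10-dim encoding of overlap, displacement, scale, and height."""
    ax0, ay0, ax1, ay1 = box_a; bx0, by0, bx1, by1 = box_b
    inter = max(0, min(ax1, bx1) - max(ax0, bx0)) * max(0, min(ay1, by1) - max(ay0, by0))
    area_a = (ax1 - ax0) * (ay1 - ay0); area_b = (bx1 - bx0) * (by1 - by0)
    overlap = inter / (area_a + area_b - inter)                 # relative overlap
    dx = (bx0 + bx1 - ax0 - ax1) / 2.0                          # relative displacement (x)
    dy = (by0 + by1 - ay0 - ay1) / 2.0                          # relative displacement (y)
    scale = np.log(area_b / area_a)                             # relative scale
    dh = ay1 - by1                                              # rel. height of bottom pixel
    return np.array([overlap, dx, dy, scale, dh])

f1 = context_feature((0, 0, 10, 10), (12, 0, 22, 10))
f2 = context_feature((0, 0, 10, 10), (11, 1, 21, 11))
print(context_kernel(f1, f2))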
3.3 Building the Visual Memex

We extract a large database of exemplar objects and their ground-truth segmentation masks from the LabelMe [22] dataset and learn the structure of the Visual Memex in an offline setting. We use objects from the 30 most frequently occurring categories in LabelMe. Similarity edges are created using the per-exemplar distance function learning framework of [19], and context edges are created each time two exemplars are observed in the same image. We have a total of N = 87,802 exemplars in the Visual Memex, |E_S| = 276,782 similarity edges, and |E_C| = 989,106 context edges.
4 Evaluating on the Context Challenge

The intuition that we would like to evaluate is that many useful regularities of the visual world are lost when dealing solely with categories (e.g. the side view of a building should associate more with a side view of a car than with a frontal view of a car). The key motivation behind the Visual Memex is that context should depend on the appearance of an object and not just the category it belongs to. In order to test this hypothesis against the commonly held practice of abstracting away appearance into categories, we need a rich evaluation dataset as well as a meaningful evaluation task.

We found that the Context Challenge [11] recently proposed by Antonio Torralba fits our needs perfectly. The evaluation task is inspired by the question: "How far can you go without running an object detector?" The goal is to recognize a single object in the image without peeking at pixels belonging to that object. Torralba presented an algorithm for predicting the category and scale of an object using only contextual information [23], but his notion of context is scene-centered (the appearance of the entire image is used for prediction). Since the context we wish to study in this paper is object-centered, we use an object-centered formulation of the Context Challenge. While it is not clear whether the absolute performance numbers on the Context Challenge are very meaningful in themselves, we feel that it is an ideal task for studying object-centered context and the role of categorization assumptions in such models.

In our variant of the Context Challenge, the goal is to predict the category of a hidden object y_i solely based on its spatial relationships to some provided objects, without using the pixels belonging to the hidden object at all. For our study, we use manually provided regions and category labels of K supporting objects inside a single image. We refer to the identities of the K supporting objects in the image as {y_1, . . . , y_K} (where y ∈ {1, . . . , |C|}) and to the set of K 2D spatial relationship features between each supporting object and the hidden object as {f_{i1}, . . . , f_{iK}}.
4.1 Inference in the Visual Memex Model

In this section, we explain how to use the Visual Memex graph (automatically constructed from data) to perform inference for the Context Challenge hidden-object prediction task. Since it does not make the "category assumption," the model is defined with respect to exemplar associations for the hidden object. Inference in the model returns a compatibility score between every exemplar and the hidden object, and can be thought of as returning an ordered list of exemplar associations. Due to the nature of exemplar associations, as opposed to category assignments, a supporting object can be associated with multiple exemplars rather than a single category. We create soft exemplar associations between each of the supporting objects and the exemplars in the Visual Memex using the similarity functions {D} (see Section 3.1).

{S_1, . . . , S_K} are the appearance features for the K supporting objects. A_{aj} is the affinity between exemplar a in the Visual Memex and the j-th supporting object, created by evaluating S_j under a's distance function: A_{aj} = exp(−D_a(S_j)). ψ(e_i, e_j, f_{ij}) is the pairwise compatibility between exemplars e_i and e_j under the spatial feature f_{ij}. Let W_{ab} be the adjacency matrix representation of the similarity edges (W_{uv} = [(u, v) ∈ E_S]). Inference in the Visual Memex Model is done by optimizing the following conditional distribution, which scores the assignment of an arbitrary exemplar e_i to the hidden object based on contextual relations:

p(e_i | A_1, . . . , A_K, f_{i1}, . . . , f_{iK}) ∝ Π_{j=1}^{K} Σ_{a=1}^{N} A_{aj} ψ(e_i, e_a, f_{ij})     (2)

log ψ(e_i, e_j, f_{ij}) = [ Σ_{(u,v)∈E_C} W_{iu} W_{jv} K(f_{ij}, f_{uv}) ] / [ Σ_{(u,v)∈E_C} W_{iu} W_{jv} ]     (3)

The reason for the summation inside Equation 3 is that it aggregates contextual interactions from similar exemplars. By doing this, we effectively "densify" the contextual interactions in the Visual Memex. An interpretation of this densification procedure is that we are creating a kernel density estimator for an arbitrary pair of exemplars (e_i, e_j) via a weighted sum of kernels placed at the context features in the dataset, {f_{uv} : (u, v) ∈ E_C}, where the weights W_{iu} W_{jv} measure the visual similarity between the pairs (e_i, e_j) and (e_u, e_v).

We experimented with using a single kernel, ψ(e_i, e_j | f_{ij}) = K(f_{ij}, f_{e_i,e_j}), and found that the integration of multiple features via the densification described above is a key ingredient for successful Visual Memex inference.
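For concreteness, here is a naive Python sketch (our own data layout, not the paper's implementation) that computes the log of the unnormalized score (2) for one candidate exemplar, with log ψ evaluated as the kernel-weighted average of (3) and then exponentiated. The triple loop is O(K·N·|E_C|); a practical implementation would precompute the kernel sums.

import numpy as np

def log_score(i, A, F_hid, W, ctx_pairs, ctx_feats, gamma1=1.0):
    """Log of (2) for assigning exemplar i to the hidden object.
    A[a, j]    : affinity of exemplar a to supporting object j
    F_hid[j]   : spatial feature f_ij between the hidden object and object j
    W          : (N, N) binary similarity adjacency matrix
    ctx_pairs  : (E, 2) endpoints (u, v) of the context edges
    ctx_feats  : (E, 10) stored spatial features f_uv"""
    N, K = A.shape
    total = 0.0
    for j in range(K):
        s = 0.0
        for a in range(N):
            # densified potential (3): average kernels over context edges
            # (u, v) whose endpoints are similar to (i, a)
            wgt = W[i, ctx_pairs[:, 0]] * W[a, ctx_pairs[:, 1]]
            if wgt.sum() > 0:
                k = np.exp(-gamma1 * np.sum((ctx_feats - F_hid[j]) ** 2, axis=1))
                log_psi = (wgt * k).sum() / wgt.sum()
                s += A[a, j] * np.exp(log_psi)
        total += np.log(s + 1e-12)
    return total

rng = np.random.default_rng(0)
N, K, E = 5, 3, 20
A = rng.random((N, K)); F_hid = rng.random((K, 10))
W = (rng.random((N, N)) < 0.5).astype(float)
ctx_pairs = rng.integers(0, N, size=(E, 2)); ctx_feats = rng.random((E, 10))
print(log_score(0, A, F_hid, W, ctx_pairs, ctx_feats))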
Finally, after performing inference in the Visual Memex Model, we are left with a score for each exemplar. At this stage, as far as our model is concerned, the recognition has already been performed. However, since the task we are evaluated on is category-based, we combine the returned exemplars into a vote for categories using Luce's Axiom of Choice [17], which averages the exemplar responses per-category.
4.2 CoLA-based Parametric Model

We would like to evaluate the Visual Memex model against a more traditional, category-based framework with parametric inter-category relationships. One of the most recent and successful approaches is the CoLA model [8]. CoLA learns a set of parameters for each pair of categories which correspond to the relative strengths of four different spatial relationships (top, above, below, inside). When dealing with categories directly, we consider a conditional distribution over the category of the hidden object y_i that factors as a star graph with K leaves (the hidden object being connected to all the supporting objects). Θ are the model parameters, φ is a pairwise potential that measures the compatibility of two categories with a specified spatial relationship, and Z is a normalization constant such that the conditional distribution sums to 1:

p(y_i | y_1, . . . , y_K, f_{i1}, . . . , f_{iK}, Θ) = (1/Z) Π_{j=1}^{K} φ(y_i, y_j, f_{ij}, Θ)     (4)

Following [8], we use a feature function h(f) that computes the affinity between a feature f and a set of prototypical spatial relationships. We automatically find P prototypical spatial relationships by clustering all spatial feature vectors {f} in the training set via the popular k-means algorithm. Let h(f) ∈ R^P be the normalized vector of affinities to the cluster centers {c_1, . . . , c_P}. Θ is the set of all parameters in this model, with Θ(y_i, y_j) ∈ R^P being the parameters associated with the pair of categories (y_i, y_j):

log φ(y_i, y_j, f_{ij}, Θ) = h(f_{ij})ᵀ Θ(y_i, y_j)     (5)

h_i(f) = exp(−β ||f − c_i||²)     (6)

We tried using the four prototypical relationships corresponding to above, below, inside, and outside as in [8], but found that using k-means with a significantly larger number of prototypes, P = 30, produced superior results. For learning Θ, we found the maximum-likelihood Θ using gradient descent. The training objective function was optimized to mimic what happens during testing on the Context Challenge task. Since the distributions for the Context Challenge task are defined with respect to a single category variable (see Equation 4), we could compute the partition function directly and did not need any approximations as in [8] (which required training in a loopy graph).
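A minimal sketch of the feature map (5)-(6) is shown below, with k-means prototypes from SciPy standing in for the clustering step; the tensor Θ and the parameter β are placeholders, not learned values.

import numpy as np
from scipy.cluster.vq import kmeans2

def make_h(train_feats, P=30, beta=1.0):
    """Build h(f) of (5)-(6): normalized affinities to P prototypical
    spatial relationships found by k-means (settings are illustrative)."""
    centers, _ = kmeans2(train_feats, P, minit="++")
    def h(f):
        a = np.exp(-beta * np.sum((centers - f) ** 2, axis=1))
        return a / a.sum()            # normalized affinity vector in R^P
    return h, centers

def log_phi(yi, yj, f, h, Theta):
    """Log potential (5): h(f)^T Theta(yi, yj)."""
    return h(f) @ Theta[yi, yj]

rng = np.random.default_rng(0)
train = rng.random((500, 10))                 # synthetic spatial features
h, _ = make_h(train, P=30)
Theta = rng.normal(size=(30, 30, 30))         # |C| x |C| x P parameter tensor
print(log_phi(2, 5, rng.random(10), h, Theta))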
4.3 Reduced KDE Memex Model

Since the Visual Memex Model and the CoLA-inspired model make different assumptions with respect to both objects (category-based vs. exemplar-based) and context (parametric vs. nonparametric), we feel it would also be useful to examine a hybrid model, dubbed the Reduced KDE Memex Model, which uses a nonparametric model of context but operates on object categories. The Reduced KDE Memex Model is created by collapsing all exemplars belonging to a single category into fully-connected components, which can be thought of as adding categories into the Visual Memex graph. Identities of individual exemplars are lost, and thus we lose the fine details of spatial context. By forming categories, we can no longer say that a particular spatial relationship is between a blue side view of a car and an oblique brick building; we can only say it is a relationship between a car and a building. Now that we are left with an unordered bag of spatial relationships {f} between two categories, we need a way to measure compatibility between a newly observed f and the stored relationships.

We use the same form of the Context Challenge conditional distribution as in Equation 4. We use a kernel density estimator (KDE) for every pair of categories, and the potential φ can be thought of as a matrix of such estimators. The use of nonparametric potentials in graphical models has already been explored in the domain of texture analysis [24]. Here δ_{ij} denotes the Kronecker delta function:

log φ(y_i, y_j, f_{ij}) = [ Σ_{(u,v)∈E_C} δ_{y_i y_u} δ_{y_j y_v} K(f_{ij}, f_{uv}) ] / [ Σ_{(u,v)∈E_C} δ_{y_i y_u} δ_{y_j y_v} ]     (7)

The Reduced Memex model, being category-based and nonparametric, aggregates the spatial relationships across many different pairs of exemplars from two categories. While we used a fixed kernel K which measures distance isotropically across the dimensions of f, the advantage of such a nonparametric approach is that with enough data the particularities of K do not matter. We also experimented with a nearest-neighbor based model, but found the kernel density estimation approach to be superior.
[Figure 3 (visual content): (a) confusion matrices over the 30 categories (person, car, tree, window, head, building, sky, wall, road, sidewalk, sign, chair, door, mountain, table, floor, streetlight, lamp, plant, pole, balcony, wheel, text, grass, column, pane, trash, blind, ground, arm) for the Visual Memex, KDE, and CoLA methods; (b) a precision-versus-recall plot titled "Context Challenge Prediction Confidence"; (c) a per-category bar chart titled "Context Challenge Recognition Accuracy for 30 Categories".]
Figure 3: a) Context Challenge confusion matrices for the 3 methods: Visual Memex, KDE, and CoLA. b) Recognition precision versus recall when thresholding the output based on confidence. c) Side-by-side comparison of the 3 methods' accuracies for 30 categories.
5 Results and Discussion

For the Context Challenge evaluation, we use 200 randomly selected densely labeled images from LabelMe [22]. Our test set contains 3048 total objects from 30 different categories. For an image with K objects, we solve K Context Challenge problems, each with one hidden object and K−1 supporting objects. Qualitative results on this prediction task can be seen in Figure 4.

We evaluate the performance of our Visual Memex model, the Reduced Memex KDE model, and the CoLA-inspired model with respect to categorization performance (confusion matrices can be seen in the top left of Figure 3). The overall recognition accuracies of the Visual Memex Model, Reduced Memex Model, and CoLA are .527, .430, and .457, respectively. Note that the Visual Memex Model performs significantly better than the baselines. Taking a closer look at the per-category accuracies of the three methods (see bottom of Figure 3), we see that the CoLA-based method fails on many categories. The average per-category recognition accuracies of the three methods are .534, .454, and .213. The Visual Memex Model still performs the best, but we see a significant drop in performance for the category-based CoLA method. CoLA is biased towards the popular categories, returning the most frequently occurring category (window) quite often. Overall, the Visual Memex Model achieves the best performance for 21 out of the 30 categories.

In addition, we plot precision-recall curves for each of the three methods to determine whether high confidence returned by each model is correlated with high recognition rates (top right of Figure 3). The Visual Memex model has the most significant high-precision low-recall regime, suggesting that its confidence is a good measure of success. The relatively flat curve for the CoLA method is related to the problem of overcompensation for popular classes mentioned above. The distributions returned by CoLA tend to degenerate to a single non-zero value (most often on one of the popular categories such as window). This is why the maximum probability returned by CoLA is not a good measure of confidence.

We also demonstrate the power of the Visual Memex to predict appearance solely based on contextual interactions with other objects and their visual appearance. The middle row of Figure 4 demonstrates some of these associations. Note how in row 1 a plausible viewpoint is selected rather than just a random car. In row 3 we see that the appearance of snow on one mountain suggests that the other portion of the image also contains a snowy mountain. In summary, we presented a category-free Visual Memex Model and applied it to the task of contextual object recognition within the experimental framework of the Context Challenge. Our experiments confirm our intuition that moving beyond categories is beneficial for improved modeling of relationships between objects.

Acknowledgements. This research was in part funded by NSF CAREER award IIS-0546547, an NSF Graduate Research Fellowship, a Guggenheim Fellowship, as well as a generous gift from Google. A. Efros thanks the WILLOW team at ENS, Paris for their hospitality.
[Figure 4 (visual content): for each input image with a hidden region, the middle column shows Visual Memex exemplar predictions and the right column shows categorization results from the Visual Memex, KDE, and CoLA models, over categories such as table, floor, wall, door, sidewalk, car, road, and person.]
Figure 4: Qualitative results on the Context Challenge. Exemplar predictions are from the Visual Memex model; categorization results are from the Visual Memex model, the KDE model, and CoLA [8].
References
[1] Moshe Bar and Shimon Ullman. Spatial context in recognition. Perception, 25:343–352, 1996.
[2] A. R. Hanson and E. M. Riseman. Visions: A computer system for interpreting scenes. Computer Vision Systems, pages 303–333, 1978.
[3] T. M. Strat and M. A. Fischler. Context-based vision: Recognizing objects using information from both 2-d and 3-d imagery. PAMI, 13:1050–1065, 1991.
[4] Xuming He, Richard S. Zemel, and Miguel Á. Carreira-Perpiñán. Multiscale conditional random fields for image labeling. CVPR, pages 695–702, 2004.
[5] Sanjiv Kumar and Martial Hebert. A hierarchical field framework for unified context-based classification. ICCV, 2005.
[6] Jamie Shotton, John M. Winn, Carsten Rother, and Antonio Criminisi. TextonBoost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. ECCV, 2006.
[7] Andrew Rabinovich, Andrea Vedaldi, Carolina Galleguillos, Eric Wiewiora, and Serge Belongie. Objects in context. ICCV, 2007.
[8] Carolina Galleguillos, Andrew Rabinovich, and Serge Belongie. Object categorization using co-occurrence, location and appearance. ECCV, 2008.
[9] Devi Parikh, C. Lawrence Zitnick, and Tsuhan Chen. From appearance to context-based recognition: Dense labeling in small images. CVPR, 2008.
[10] Bryan C. Russell, Antonio Torralba, Ce Liu, Rob Fergus, and William T. Freeman. Object recognition by scene alignment. NIPS, 2007.
[11] Antonio Torralba. The context challenge. http://web.mit.edu/torralba/www/carsAndFacesInContext.html.
[12] Aristotle. Categories.
[13] Ludwig Wittgenstein. Philosophical Investigations. Blackwell Publishing, 1953.
[14] Eleanor Rosch. Principles of categorization. Cognition and Categorization, pages 27–48, 1978.
[15] Shimon Edelman. Representation, similarity and the chorus of prototypes. Minds and Machines, 1995.
[16] Ariadna Quattoni, M. Collins, and Trevor Darrell. Transfer learning for image classification with sparse prototype representations. CVPR, 2008.
[17] D. L. Medin and M. M. Schaffer. Context theory of classification learning. Psychological Review, 85:207–238, 1978.
[18] Robert M. Nosofsky. Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115(1):39–57, 1986.
[19] Tomasz Malisiewicz and Alexei A. Efros. Recognition by association via learning per-exemplar distances. CVPR, 2008.
[20] Moshe Bar. The proactive brain: memory for predictions. Philosophical Transactions of the Royal Society B, 364:1235–1243, 2009.
[21] Vannevar Bush. As we may think. The Atlantic Monthly, 1945.
[22] Bryan Russell, Antonio Torralba, Kevin Murphy, and William T. Freeman. LabelMe: a database and web-based tool for image annotation. International Journal of Computer Vision, 77:157–173, 2008.
[23] Antonio Torralba. Contextual priming for object detection. International Journal of Computer Vision, 53:169–191, 2003.
[24] Rupert Paget and I. D. Longstaff. Texture synthesis via a noncausal nonparametric multiscale markov random field. IEEE Transactions on Image Processing, 1998.
| 3647 |@word middle:1 nd:2 cola:17 seek:1 tried:1 carolina:2 textonboost:1 pressed:1 tr:6 solid:1 holy:1 configuration:1 contains:2 score:3 liu:1 atlantic:1 current:2 contextual:11 nt:4 parsing:1 john:1 sanjiv:1 partition:1 shape:2 drop:2 plot:1 v:2 grass:3 leaf:1 device:1 item:1 selected:2 lamp:3 core:1 oblique:2 node:2 location:2 height:1 constructed:1 direct:1 ik:3 roa:1 qualitative:2 prove:1 edelman:1 combine:1 aristotle:2 fullyconnected:1 inside:5 pairwise:2 inter:2 mask:1 themselves:1 examine:2 frequently:2 multi:1 brain:2 inspired:4 freeman:2 automatically:3 food:1 tex:1 window:9 considering:1 str:4 etl:3 provided:2 discover:1 gift:1 overcompensation:1 didn:1 snowy:1 what:6 tying:1 mountain:5 balcony:3 unified:1 dubbed:1 sky:3 every:3 returning:2 classifier:1 ro:3 demonstrates:1 arguably:1 local:4 wuv:1 encoding:1 ak:1 solely:3 pami:1 might:2 black:1 bird:2 suggests:1 co:13 contextually:1 graduate:1 malisiewicz:3 abandoning:1 medin:1 pla:4 arguing:1 yj:8 testing:1 lost:2 practice:1 procedure:1 displacement:1 area:1 empirical:1 axiom:1 thought:4 reject:1 significantly:2 vedaldi:1 confidence:5 road:6 suggest:1 get:2 cannot:1 wheel:6 context:53 www:1 demonstrated:1 missing:1 center:1 go:2 attention:1 starting:1 typicality:1 assigns:1 rule:1 estimator:3 his:2 notion:1 analogous:2 feel:3 user:1 trail:2 us:2 hypothesis:1 sig:1 associate:1 element:1 pa:4 recognition:16 cut:1 labeled:2 database:2 observed:4 role:3 bottom:2 capture:1 hypertext:1 wj:2 region:4 connected:1 eu:1 russell:2 yk:2 mentioned:1 intuition:3 fischler:1 productive:1 dynamic:1 depend:1 segment:2 serve:1 eric:1 basis:1 resolved:1 po:4 joint:1 various:1 streetlight:3 describe:1 query:1 zemel:1 labeling:2 aggregate:2 kevin:1 outside:1 quite:4 richer:1 widely:1 solve:2 whose:1 aaj:3 larger:1 say:2 particularity:1 s:3 favor:2 cvpr:4 plausible:1 think:1 jointly:1 peeking:1 final:1 associative:2 advantage:1 propose:3 lam:1 interaction:6 jamie:1 relevant:1 date:1 rapidly:1 entered:1 degenerate:1 achieve:1 ludwig:1 flo:4 everyday:1 constituent:1 regularity:1 transmission:1 cluster:1 darrell:1 categorization:13 object:87 help:1 andrew:2 propagating:2 miguel:1 exemplar:59 nearest:1 ij:13 c:1 concentrate:1 greek:1 guided:1 direction:1 snow:1 criminisi:1 centered:4 human:3 extender:1 adjacency:1 require:1 trash:3 generalization:1 wall:7 investigation:1 elementary:1 summation:1 ground:5 visually:1 lawrence:1 cognition:1 predict:3 bj:1 mo:1 pointing:1 efros:4 achieves:1 torralba:10 early:1 generous:1 narrative:1 estimation:2 lose:1 label:4 bag:1 create:3 successfully:1 tool:3 weighted:1 mit:1 hospitality:1 gaussian:1 aim:2 rather:1 ej:7 varying:1 categorizing:1 encode:1 fence:1 prevalent:1 likelihood:1 baseline:3 sense:1 inference:7 dependent:1 el:3 rigid:1 membership:2 suffix:1 typically:1 rear:1 entire:1 hidden:14 her:1 relation:1 going:1 willow:1 i1:3 compatibility:4 pixel:3 arg:1 classification:4 overall:2 denoted:1 html:1 spatial:22 integration:1 orange:1 equal:1 once:4 field:3 having:1 manually:2 look:2 yu:2 mimic:1 others:1 richard:1 modern:1 randomly:1 composed:1 recognize:3 faulted:1 individual:1 densely:1 murphy:1 consisting:1 william:2 detection:1 alexei:2 evaluation:4 certainly:1 severe:1 alignment:1 sh:1 behind:2 activated:3 held:1 chain:1 predefined:1 noncausal:1 edge:17 closer:1 minw:1 tree:6 euclidean:1 ancient:1 re:1 psychological:1 instance:4 brick:1 modeling:8 soft:1 asking:2 column:3 ar:3 assignment:2 rabinovich:2 loopy:1 pole:3 wiu:3 recognizing:2 successful:2 gr:8 too:1 stored:3 person:5 cited:1 density:3 
thanks:1 international:2 bu:4 probabilistic:1 together:2 connecting:2 concrete:1 nosofsky:1 na:1 synthesis:1 squared:1 ambiguity:1 postulate:1 imagery:1 opposed:2 collapsing:1 cognitive:2 creating:2 return:1 ullman:1 account:2 potential:4 suggesting:1 star:1 unordered:1 availability:1 matter:1 tra:1 vi:7 blind:3 ad:1 vehicle:1 view:6 later:1 try:1 performed:1 linked:1 doing:1 red:2 yv:2 relied:1 tab:1 portion:1 tomasz:2 annotation:1 minimize:1 formed:1 ir:3 il:3 accuracy:5 air:1 correspond:1 serge:2 sid:4 identification:1 critically:1 produced:1 lu:3 researcher:1 bob:1 wab:1 detector:2 explain:1 quattoni:1 complicate:1 trevor:1 definition:1 against:4 nonetheless:1 associated:5 di:3 newly:2 dataset:3 popular:6 wh:1 recall:4 knowledge:1 car:18 color:2 segmentation:2 ou:1 ea:1 back:1 operationalizing:1 appears:1 higher:1 ta:3 strat:1 wittgenstein:2 response:1 improved:1 formulation:1 done:1 though:1 evaluated:1 just:2 implicit:1 stage:1 hand:1 tre:1 web:3 ei:10 su:7 ild:1 multiscale:2 google:1 scientific:1 believe:1 building:11 facilitate:2 concept:8 normalized:1 adequately:1 regularization:1 galleguillos:2 din:3 alternating:1 ll:1 game:1 during:2 demonstrate:1 confusion:2 argues:1 cp:1 performs:2 fj:1 interpreting:1 reasoning:3 image:23 novel:3 recently:1 parikh:1 superior:2 functional:1 physical:1 association:16 he:9 linking:1 interpretation:1 interpret:2 mellon:1 refer:1 significant:2 monthly:1 uv:3 outlined:1 focal:2 similarly:1 pointed:1 funded:1 moving:2 similarity:21 longer:2 something:1 recent:2 optimizing:2 belongs:2 driven:1 binary:1 success:1 proactive:1 yi:11 seen:4 floor:5 determine:2 eleanor:2 ii:1 multiple:3 infer:1 ing:1 characterized:1 offer:1 long:1 award:1 coded:1 a1:1 prediction:10 variant:1 basic:1 vision:8 cmu:1 represent:1 kernel:7 normalization:1 robotics:1 chicken:2 c1:1 addition:1 fellowship:2 fine:1 chorus:1 winn:1 else:1 source:1 biased:1 unlike:1 strict:2 tend:1 thing:1 member:2 ee:5 door:7 ideal:1 shotton:1 enough:1 concerned:2 fit:1 psychology:1 associating:1 perfectly:1 idea:4 prototype:6 regarding:1 luce:1 motivated:1 alk:1 tain:1 returned:4 bli:4 antonio:7 useful:3 clear:1 unimportant:1 nonparametric:6 visualized:1 category:74 reduced:6 http:1 nsf:2 dotted:2 sign:3 neuroscience:1 estimated:1 per:8 delta:1 bryan:2 blue:1 diverse:1 carnegie:1 discrete:1 key:2 four:2 clarity:1 ce:1 ht:4 utilize:1 vast:1 graph:12 sum:2 you:2 arrive:1 almost:1 hi:1 strength:1 occur:2 kronecker:1 scene:8 flat:1 encodes:4 argument:1 chair:3 pane:3 performing:1 kumar:1 relatively:1 embrace:1 combination:1 guggenheim:1 belonging:3 kd:7 beneficial:2 across:2 son:3 separability:1 em:7 rob:1 making:1 s1:1 happens:1 psychologist:1 iccv:2 indexing:3 grail:1 equation:3 mutually:1 mind:1 studying:1 sidewalk:5 worthwhile:1 away:1 hierarchical:1 occurrence:1 encounter:1 top:3 running:2 clustering:1 publishing:1 graphical:2 hinge:1 carves:1 especially:1 classical:1 society:1 move:1 objective:2 question:2 rosch:2 occurs:1 already:2 parametric:5 primary:1 exclusive:1 strategy:1 traditional:3 moshe:2 affinity:3 gradient:1 win:1 distance:11 link:3 entity:1 accessibility:1 riseman:1 extent:1 reason:2 rother:1 modeled:2 relationship:28 tsuhan:1 robert:1 taxonomy:1 kde:10 rupert:1 ba:4 perform:2 allowing:1 observation:1 markov:1 arc:2 descent:1 attacked:1 lco:1 supporting:9 looking:1 head:3 team:1 y1:2 arbitrary:2 schaffer:1 pair:7 required:1 specified:1 paris:1 optimized:1 philosophical:2 hanson:1 blackwell:1 learned:1 nip:1 beyond:5 bar:4 able:1 below:2 perception:2 ev:1 regime:1 challenge:21 
pioneering:1 built:1 max:1 memory:3 royal:1 belief:1 power:1 critical:2 overlap:1 rely:1 hybrid:1 predicting:1 indicator:1 arm:4 representing:3 ne:1 lk:3 created:4 deemed:1 martial:1 categorical:2 extract:1 isn:1 lum:1 faced:1 text:3 understanding:4 acknowledgement:1 review:1 relative:5 loss:2 plant:3 abstracting:1 prototypical:3 limitation:1 xuming:1 analogy:2 versus:1 ingredient:2 degree:1 thresholding:1 viewpoint:1 principle:1 row:3 eccv:2 summary:1 surprisingly:1 placed:1 free:3 neat:1 hebert:1 ariadna:1 offline:1 bias:1 side:5 institute:1 wide:1 neighbor:1 taking:2 absolute:1 sparse:1 benefit:1 curve:2 dimension:1 world:3 evaluating:2 rich:1 computes:1 made:1 commonly:2 ig:3 testset:1 ec:8 far:3 transaction:2 sj:2 implicitly:1 dealing:2 confirm:1 belongie:2 fergus:1 don:1 un:4 decade:1 sk:5 why:1 robin:1 table:5 learn:3 transfer:2 nature:1 ca:4 correlated:1 career:1 priming:1 domain:2 da:1 zitnick:1 dense:1 motivation:2 nothing:1 en:1 ny:1 lc:3 precision:4 fails:1 explicit:2 wish:1 pe:4 breaking:1 learns:2 shimon:2 perpi:1 densification:2 list:1 experimented:2 svm:1 explored:1 evidence:2 operationalization:1 adding:1 effectively:1 importance:1 ci:1 texture:3 te:3 occurring:2 chen:1 simply:1 appearance:13 likely:1 forming:1 devi:1 visual:48 ordered:1 scalar:1 isotropically:1 applies:1 ch:4 truth:1 conditional:5 identity:4 goal:3 viewed:1 memex:50 kmeans:2 towards:2 carsten:1 shared:2 labelme:4 tiger:3 hard:2 carreira:1 determined:1 operates:1 wt:2 total:2 accepted:1 e:4 la:10 rso:1 vote:1 meaningful:2 experimental:2 ew:1 formally:1 people:1 collins:1 bush:9 frontal:2 evaluate:6 ex:7 |
2,921 | 3,648 | Sensitivity analysis in HMMs
with application to likelihood maximization
Pierre-Arnaud Coquelin,
Vekia, Lille, France
Romain Deguest?
Columbia University, New York City, NY 10027
[email protected]
[email protected]
R?mi Munos
INRIA Lille - Nord Europe, Sequel Project, France
[email protected]
Abstract

This paper considers a sensitivity analysis in Hidden Markov Models with continuous state and observation spaces. We propose an Infinitesimal Perturbation Analysis (IPA) of the filtering distribution with respect to some parameters of the model. We describe a methodology for using any algorithm that estimates the filtering density, such as Sequential Monte Carlo methods, to design an algorithm that estimates its gradient. The resulting IPA estimator is proven to be asymptotically unbiased, consistent, and has computational complexity linear in the number of particles.

We consider an application of this analysis to the problem of identifying unknown parameters of the model given a sequence of observations. We derive an IPA estimator for the gradient of the log-likelihood, which may be used in a gradient method for the purpose of likelihood maximization. We illustrate the method with several numerical experiments.
1 Introduction
We consider a parameterized hidden Markov model (HMM) defined on continuous state and observation spaces. The HMM is defined by a state process $(X_t)_{t \ge 0} \in X$ and an observation process $(Y_t)_{t \ge 1} \in Y$ that are parameterized by a continuous parameter $\theta = (\theta_1, \dots, \theta_d) \in \Theta$, where $\Theta$ is a compact subset of $\mathbb{R}^d$.
The state process is a Markov chain taking its values in a (measurable) state space $X$, with initial probability measure $\mu \in \mathcal{M}(X)$ (i.e. $X_0 \sim \mu$) and Markov transition kernel $K(\theta, x_t, dx_{t+1})$. We assume that we can sample this Markov chain using a transition function $F$ and independent random numbers, i.e. for all $t \ge 0$,
$$X_{t+1} = F(\theta, X_t, U_t), \quad \text{with } U_t \overset{i.i.d.}{\sim} \nu, \qquad (1)$$
where $F : \Theta \times X \times U \to X$ and $(U, \sigma(U), \nu)$ is a probability space. In many practical situations $U = [0,1]^p$ and $\nu$ is uniform, thus $U_t$ is a $p$-tuple of uniform random numbers. For simplicity, we adopt the notation $F(\theta, x_{-1}, u) \triangleq F_\mu(\theta, u)$, where $F_\mu$ is the first transition function (i.e. $X_0 = F_\mu(\theta, U_{-1})$ with $U_{-1} \sim \nu$).
The observation process $(Y_t)_{t \ge 1}$ lies in a (measurable) space $Y$ and is linked with the state process by the conditional probability measure $P(Y_t \in dy_t \mid X_t = x_t) = g(\theta, x_t, y_t)\, dy_t$, where $g : \Theta \times X \times Y \to [0,1]$ is the marginal density function of $Y_t$ given $X_t$. We assume that observations are conditionally independent given the state.
Since the transition and observation processes are parameterized by the parameter $\theta$, the state $X_t$ and the observation $Y_t$ processes depend explicitly on $\theta$. For notational simplicity we will omit to write the dependence on $\theta$ (in $K$, $F$, $g$, $X_t$, $Y_t$, ...) when there is no possible ambiguity.
∗Also affiliated with CMAP, École Polytechnique, France.
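To make the model concrete, here is a minimal sketch (not from the paper) of how an HMM of the form (1) can be simulated once the user supplies a transition function F and an observation sampler. The function names and the Gaussian observation channel are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def sample_hmm(theta, F, sample_obs, n, rng):
    """Simulate (X_1..X_n, Y_1..Y_n) from an HMM as in Eq. (1):
    X_{t+1} = F(theta, X_t, U_t) with U_t ~ Uniform[0,1]."""
    x = F(theta, None, rng.random())          # X_0 = F_mu(theta, U_{-1})
    xs, ys = [], []
    for _ in range(n):
        x = F(theta, x, rng.random())         # state transition
        xs.append(x)
        ys.append(sample_obs(theta, x, rng))  # Y_t ~ g(theta, x, .) dy
    return np.array(xs), np.array(ys)

# Hypothetical example: AR(1)-type dynamics with Gaussian observation noise.
def F(theta, x, u):
    alpha, sigma, beta = theta
    eps = norm.ppf(u)                         # map uniform U_t to N(0,1)
    return sigma * eps if x is None else alpha * x + sigma * eps

def sample_obs(theta, x, rng):
    alpha, sigma, beta = theta
    return x + beta * rng.standard_normal()

rng = np.random.default_rng(0)
xs, ys = sample_hmm((0.8, 1.0, 1.0), F, sample_obs, n=500, rng=rng)
```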
One of the main interests in HMMs is to recover the state at time $n$ given a sequence of past observations $(y_1, \dots, y_n)$ (written $y_{1:n}$). The filtering distribution (or belief state)
$$\pi_n(dx_n) \triangleq P(X_n \in dx_n \mid Y_{1:n} = y_{1:n})$$
is the distribution of $X_n$ conditioned on the information $y_{1:n}$. We define analogously the predictive distribution
$$\pi_{n+1|n}(dx_{n+1}) \triangleq P(X_{n+1} \in dx_{n+1} \mid Y_{1:n} = y_{1:n}).$$
Our contribution is an Infinitesimal Perturbation Analysis (IPA) that estimates the gradient $\nabla \pi_n$ (where $\nabla$ refers to the derivative with respect to the parameter $\theta$) of the filtering distribution $\pi_n$. More precisely, we estimate $\nabla \pi_n(f)$ (where $\pi(f) \triangleq \int_X f(x)\, \pi(dx)$) for any integrable function $f$ under the filtering distribution $\pi_n$.
We also consider as application the problem of parameter identification in HMMs, which consists in estimating the (unknown) parameter $\theta^*$ of the model that has served to generate the sequence of observations. In a Maximum Likelihood (ML) approach, one searches for the parameter $\theta$ that maximizes the likelihood (or its logarithm) given the sequence of observations. The log-likelihood of parameter $\theta$ is defined by $l_n(\theta) \triangleq \log p_\theta(y_{1:n})$, where $p_\theta(y_{1:n})\, dy_{1:n} \triangleq P(Y_{1:n}(\theta) \in dy_{1:n})$. The Maximum Likelihood (ML) estimator $\hat{\theta}_n \triangleq \arg\max_{\theta \in \Theta} l_n(\theta)$ is asymptotically consistent (in the sense that $\hat{\theta}_n$ converges almost surely to the true parameter $\theta^*$ when $n \to \infty$, under identifiability conditions and mild assumptions on the model; see Theorem 2 of [DM01]). Thus, using the ML approach, the parameter identification problem reduces to an optimization problem.
Our second contribution is a sensitivity analysis of the predictive distribution, $\nabla \pi_{t+1|t}$ for $t < n$, which enables us to estimate the gradient $\nabla l_n(\theta)$ of the log-likelihood function, which may be used in a (stochastic) gradient method for the purpose of optimizing the likelihood. The approach is numerically illustrated on two parameter identification problems (an autoregressive model and a stochastic volatility model) and compared to other approaches (the EM algorithm, the Kalman filter, and the likelihood ratio approach) when these latter apply.
2 Links with other works
First, let us mention that we are interested in the continuous state case, since numerous applications in signal processing, finance, robotics, or telecommunications naturally fit in this framework. In the general setting there exists no closed-form expression of the filtering distribution (unlike in finite spaces, where the Viterbi algorithm may apply, or in linear-Gaussian models, where the Kalman filter can be used). Thus, in this paper, we will make use of the so-called Sequential Monte Carlo methods (SMC) (also known as Particle Filters), which are numerical tools that can be applied to a large class of models, see e.g. [DFG01]. For illustration, a challenging example in finance is the problem of parameter estimation in the stochastic volatility model, which is a non-linear non-Gaussian continuous space HMM parameterized by three continuous parameters (see e.g. [ME07]) and which will be described in the experimental section.
A usual approach for parameter estimation consists in performing a maximum likelihood estimation (MLE), i.e. searching for the most likely value of the parameter given the observed data. For finite state space problems, the Expectation Maximization (EM) algorithm is a popular method for solving the MLE problem. However, in continuous space problems, see [CM05], the EM algorithm is difficult to use, mainly because the Expectation part relies on the estimation of the posterior path measure, which is intractable in many situations. The Maximization part may also be very complicated and time-consuming when the model does not belong to a linear or exponential family. An alternative method consists in using brute-force optimization methods based on the evaluation of the likelihood, such as grid-based or simulated annealing methods. These approaches, which can be seen as black-box optimization, are not very efficient in high dimensional parameter spaces.
Another approach is to treat the parameter as part of the state variable and then compute the optimal filter (see [DFG01] and [Sto02]). In this case, the Bayesian posterior distribution of the parameter is a marginal of the optimal filter. It is well known that those methods are stable only under certain conditions, see [Pap07], and do not perform well in practice for a large number of time steps.
A last solution consists in using an optimization procedure based on the evaluation of the gradient of the log-likelihood function with respect to the parameter. These approaches have been studied in the field of continuous space HMMs, e.g. in [DT03, FLM03, PDS05, Poy06]. The idea was to use a likelihood ratio approach (also called the score method) to evaluate the gradient of the likelihood. This approach suffers from high variance of the estimator, in particular for problems with small noise in the dynamics. To tackle this issue, [PDS05] proposed to use a marginal particle filter instead of a simple path-based particle filter as the Monte Carlo approximation method. This approach is efficient in terms of variance reduction, but its computational complexity becomes quadratic in the number of particles instead of being linear, as in path-based particle methods.
The IPA approach proposed in this paper is an alternative gradient-based maximum likelihood approach. Compared with the works on gradient approaches cited previously, IPA usually provides lower variance estimators than the likelihood ratio methods, and its numerical complexity is linear in the number of particles.
Other works related to ours are the so-called tangent filter approach described in [CGN01] for dynamics coming from a discretization of a diffusion process, and the Finite-Difference (FD) approach described in a different setting (policy gradient in Partially Observable Markov Decision Processes) in [CDM08]. A similar FD estimator could be designed in our setting too, but the resulting FD estimator would be biased (like usual FD schemes), whereas the IPA estimator is not.
3 Sequential Monte Carlo methods (SMC)
Given a measurable test function $f : X \to \mathbb{R}$, we have:
$$\pi_n(f) \triangleq E[f(X_n) \mid Y_{1:n} = y_{1:n}] = \frac{E[f(X_n) \prod_{t=0}^n G_t(X_t)]}{E[\prod_{t=0}^n G_t(X_t)]} = \frac{\int f(x_n) \prod_{t=0}^n K(x_{t-1}, dx_t)\, G_t(x_t)}{\int \prod_{t=0}^n K(x_{t-1}, dx_t)\, G_t(x_t)}, \qquad (2)$$
where we used the simplified notation $G_t(x_t) \triangleq g(x_t, y_t)$ and $G_0(x_0) \triangleq 1$.
In general, it is impossible to write $\pi_n(f)$ analytically except for specific cases (such as linear/Gaussian models with Kalman filtering). In this paper, we consider a numerical approximation of $\pi_n(f)$ based on an SMC method. But it should be mentioned that other methods (such as the Extended Kalman filter, quantization methods, or Markov Chain Monte Carlo methods) may be used as well to build the IPA estimator that we propose in the next section.
The basic SMC method, called the Bootstrap Filter (see [DFG01] for details), approximates $\pi_n(f)$ by an empirical distribution $\pi_n^N(f) \triangleq \frac{1}{N} \sum_{i=1}^N f(x_n^i)$ made of $N$ particles $x_n^{1:N}$.
Algorithm 1 Generic Sequential Monte Carlo
for t = 1 to n do
  Sampling: Sample $u_{t-1}^i \overset{iid}{\sim} \nu$ and set $\tilde{x}_t^i = F(x_{t-1}^i, u_{t-1}^i)$, $\forall i \in \{1, \dots, N\}$. Then define the importance sampling weights $w_t^i = \frac{G_t(\tilde{x}_t^i)}{\sum_{j=1}^N G_t(\tilde{x}_t^j)}$.
  Resampling: Set $x_t^i = \tilde{x}_t^{k_t^i}$, $\forall i \in \{1, \dots, N\}$, where $k_t^{1:N}$ are indices selected from the weights $w_t^{1:N}$.
end for
RETURN: $\pi_n^N(f) = \frac{1}{N} \sum_{i=1}^N f(x_n^i)$
The sampling (or transition) step generates a successor particle population $\tilde{x}_t^{1:N}$ according to the state dynamics from the previous population $x_{t-1}^{1:N}$. The importance sampling weights $w_t^{1:N}$ are evaluated, and the resampling (or selection) step resamples (with replacement) $N$ particles $x_t^{1:N}$ from the set $\tilde{x}_t^{1:N}$ according to the weights $w_t^{1:N}$. Resampling is used to avoid the problem of degeneracy of the algorithm, i.e. that most of the weights decrease to zero. It consists in selecting new particle positions such as to preserve a consistency property (i.e. $\sum_{i=1}^N w_t^i\, \phi(\tilde{x}_t^i) = E[\frac{1}{N} \sum_{i=1}^N \phi(x_t^i)]$). The simplest version, introduced in [GSS93], chooses the selection indices $k_t^{1:N}$ by independent sampling from the set $\{1, \dots, N\}$ according to a multinomial distribution with parameters $w_t^{1:N}$, i.e. $P(k_t^i = j) = w_t^j$ for all $1 \le i \le N$. The idea is to replicate the particles in proportion to their weights. Many variants have been proposed in the literature, among which the stratified resampling method [Kit96], which is optimal in terms of variance minimization.
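For concreteness, the following is a minimal NumPy sketch of Algorithm 1 with multinomial resampling. It assumes the user supplies a vectorized transition sampler and the likelihood $G_t(x) = g(x, y_t)$; the example model parameters anticipate the AR(1) benchmark used later in the experiments, and none of this is an optimized reference implementation:

```python
import numpy as np

def bootstrap_filter(ys, transition, likelihood, N, rng):
    """Generic SMC (Algorithm 1): returns the final particle population
    x_n^{1:N} approximating the filtering distribution pi_n."""
    x = transition(None, N, rng)             # X_0^i = F_mu(U_{-1}^i)
    for y in ys:
        x_tilde = transition(x, N, rng)      # sampling step
        g = likelihood(x_tilde, y)           # unnormalized weights G_t
        w = g / g.sum()
        idx = rng.choice(N, size=N, p=w)     # multinomial resampling
        x = x_tilde[idx]
    return x

def pi_n(f, particles):
    """Empirical estimate pi_n^N(f) = (1/N) sum_i f(x_n^i)."""
    return np.mean(f(particles))

# Hypothetical example: scalar AR(1) dynamics, Gaussian observations.
alpha, sigma, beta = 0.8, 1.0, 1.0
trans = lambda x, N, rng: (sigma * rng.standard_normal(N) if x is None
                           else alpha * x + sigma * rng.standard_normal(N))
lik = lambda x, y: np.exp(-0.5 * ((y - x) / beta) ** 2)
```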
Convergence issues of $\pi_n^N(f)$ to $\pi_n(f)$ (e.g. Laws of Large Numbers or Central Limit Theorems) are discussed in [Del04] or [DM08]. For our purpose we note that, under mild conditions on $f$, $\pi_n^N(f)$ is an asymptotically unbiased (see [DMDP07] for the asymptotic expression of the bias) and consistent estimator of $\pi_n(f)$.
4 Infinitesimal Perturbation Analysis in HMMs
4.1 Sensitivity analysis of the filtering distribution
The following decomposition of the gradient of the filtering distribution $\pi_n$ applied to a function $f$:
$$\nabla[\pi_n(f)] = \frac{\nabla E[f(X_n) \prod_{t=0}^n G_t(X_t)]}{E[\prod_{t=0}^n G_t(X_t)]} - \frac{E[f(X_n) \prod_{t=0}^n G_t(X_t)]}{E[\prod_{t=0}^n G_t(X_t)]} \cdot \frac{\nabla E[\prod_{t=0}^n G_t(X_t)]}{E[\prod_{t=0}^n G_t(X_t)]} \qquad (3)$$
shows that the problem of finding an estimator of $\nabla \pi_n(f)$ is reduced to the problem of finding an estimator of $\nabla E[f(X_n) \prod_{t=0}^n G_t(X_t)]$. There are two dominant infinitesimal methods for estimating the gradient of an expectation in a Markov chain: the Infinitesimal Perturbation Analysis (IPA) method and the Score Function (SF) method (also called the likelihood ratio method); see for instance [Gla91] and [Pfl96] for a detailed presentation of both methods. SF has been used in [DT03, FLM03] to estimate $\nabla \pi_n$. Although IPA is known for having a lower variance than SF in general, as far as we know it has never been used in this context. This is therefore the object of this section.
Under appropriate smoothness assumptions (see Proposition 1 below), the gradient of an expectation over a random variable $X$ is equal to an expectation involving the pair of random variables $(X, \nabla X)$:
$$\nabla E[f(X)] = E[\nabla[f(X)]] = E[f'(X)\, \nabla X]$$
(where $'$ refers to the derivative with respect to the state variable). Applying this property to estimate $\nabla E[f(X_n) \prod_{t=0}^n G_t(X_t)]$, we deduce
$$\nabla E\Big[f(X_n) \prod_{t=0}^n G_t(X_t)\Big] = E\Big[\nabla\Big(f(X_n) \prod_{t=0}^n G_t(X_t)\Big)\Big] = E\Big[\Big(\nabla[f(X_n)] + f(X_n) \sum_{t=0}^n \frac{\nabla[G_t(X_t)]}{G_t(X_t)}\Big) \prod_{t=0}^n G_t(X_t)\Big]$$
$$= E\Big[\Big(f'(X_n)\, \nabla X_n + f(X_n) \sum_{t=0}^n \frac{G_t'(X_t)\, \nabla X_t + \nabla G_t(X_t)}{G_t(X_t)}\Big) \prod_{t=0}^n G_t(X_t)\Big]. \qquad (4)$$
Now we define an augmented Markov chain $(X_t, Z_t, R_t)_{t \ge 0}$ (where $Z_t \triangleq \nabla X_t$) by $X_0 = F_\mu(U_{-1})$ with $U_{-1} \sim \nu$, $Z_0 = \nabla F_\mu(U_{-1})$, $R_0 = 0$, and for all $t \ge 0$ the recursive relations
$$X_{t+1} = F(X_t, U_t) \ \text{with } U_t \sim \nu, \qquad Z_{t+1} = \nabla F(X_t, U_t) + F'(X_t, U_t)\, Z_t, \qquad R_{t+1} = R_t + \frac{G_{t+1}'(X_{t+1})\, Z_{t+1} + \nabla G_{t+1}(X_{t+1})}{G_{t+1}(X_{t+1})}.$$
By introducing this augmented Markov chain in Equation (4) and using Equation (3), we can rewrite $\nabla \pi_n(f)$ as:
$$\nabla \pi_n(f) = \frac{E[(f'(X_n) Z_n + f(X_n) R_n) \prod_{t=0}^n G_t(X_t)]}{E[\prod_{t=0}^n G_t(X_t)]} - \pi_n(f)\, \frac{E[R_n \prod_{t=0}^n G_t(X_t)]}{E[\prod_{t=0}^n G_t(X_t)]} = \frac{E[(f'(X_n) Z_n + R_n (f(X_n) - \pi_n(f))) \prod_{t=0}^n G_t(X_t)]}{E[\prod_{t=0}^n G_t(X_t)]}. \qquad (5)$$
We now state some sufficient conditions under which the previous derivations are sound.
Proposition 1. Equation (5) is valid on $\Theta$ whenever the following conditions are satisfied:
- for all $\theta \in \Theta$, the path $\theta \mapsto (X_0, X_1, \dots, X_n)(\theta)$ is almost surely (a.s.) differentiable,
- for all $\theta \in \Theta$, $f$ is a.s. continuously differentiable at $X_n(\theta)$, and for all $1 \le t \le n$, $G_t$ is a.s. continuously differentiable at $(\theta, X_t(\theta))$,
- $\theta \mapsto f(X_n(\theta))$ and, for all $1 \le t \le n$, $\theta \mapsto G_t(\theta, X_t(\theta))$ are a.s. continuous and piecewise differentiable throughout $\Theta$,
- let $D$ be the random subset of $\Theta$ at which $f(X_n(\theta))$ or one of the $G_t(\theta, X_t(\theta))$ fails to be differentiable; we require that $E[\sup_{\theta \notin D} |f'(X_n) Z_n + R_n (f(X_n) - \pi_n(f))| \prod_{t=0}^n G_t(X_t)] < \infty$.
The proof of this Proposition is a direct application of Theorem 1.2 from [Gla91]. We notice that requiring the a.s. differentiability of the path $\theta \mapsto (X_0, X_1, \dots, X_n)(\theta)$ is equivalent to requiring that, for all $\theta \in \Theta$, the transition function $F$ is a.s. continuously differentiable with respect to $\theta$.
From Equation (5), we can derive the IPA estimator of $\nabla \pi_n(f)$ by using an SMC algorithm:
$$I_n^N \triangleq \frac{1}{N} \sum_{i=1}^N \Big[f'(x_n^i)\, z_n^i + f(x_n^i)\Big(r_n^i - \frac{1}{N} \sum_{j=1}^N r_n^j\Big)\Big], \qquad (6)$$
where $(x_n^i, z_n^i, r_n^i)$ are particles derived by using an SMC algorithm on the augmented Markov chain $(X_t, Z_t, R_t)$, as described in Algorithm 2.
Algorithm 2 IPA estimation of $\nabla \pi_n$
for t = 1 to n do
  For all $i \in \{1, \dots, N\}$ do
    Sample $u_{t-1}^i \overset{iid}{\sim} \nu$ and set $\tilde{x}_t^i = F(x_{t-1}^i, u_{t-1}^i)$,
    Set $\tilde{z}_t^i = \nabla F(x_{t-1}^i, u_{t-1}^i) + F'(x_{t-1}^i, u_{t-1}^i)\, z_{t-1}^i$,
    Set $\tilde{r}_t^i = r_{t-1}^i + \frac{G_t'(\tilde{x}_t^i)\, \tilde{z}_t^i + \nabla G_t(\tilde{x}_t^i)}{G_t(\tilde{x}_t^i)}$, and compute the weights $w_t^i = \frac{G_t(\tilde{x}_t^i)}{\sum_j G_t(\tilde{x}_t^j)}$,
    Set $(x_t^i, z_t^i, r_t^i) = (\tilde{x}_t^{k_t^i}, \tilde{z}_t^{k_t^i}, \tilde{r}_t^{k_t^i})$, where $k_t^{1:N}$ are the indices selected from $w_t^{1:N}$.
end for
RETURN: $I_n^N = \frac{1}{N} \sum_{i=1}^N \big[f'(x_n^i)\, z_n^i + f(x_n^i)\big(r_n^i - \frac{1}{N} \sum_{j=1}^N r_n^j\big)\big]$
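The following is a compact NumPy sketch of Algorithm 2 for a scalar parameter and scalar state. It is an illustrative rendering of the pseudocode, not the authors' reference implementation; the user supplies $F$, its derivatives $\nabla F$ (w.r.t. $\theta$) and $F'$ (w.r.t. $x$), and likewise $G_t$, $\nabla G_t$, $G_t'$ (the bundle `ar1` below is a hypothetical example):

```python
import numpy as np
from types import SimpleNamespace

def ipa_filter_gradient(ys, m, f, fprime, N, rng):
    """IPA estimator I_n^N of grad_theta pi_n(f), Eq. (6)."""
    u0 = rng.standard_normal(N)
    x, z, r = m.F0(u0), m.dF0(u0), np.zeros(N)        # (X_0, Z_0, R_0)
    for y in ys:
        u = rng.standard_normal(N)
        x_t = m.F(x, u)                                # X recursion
        z_t = m.dFtheta(x, u) + m.dFx(x, u) * z        # Z recursion
        g = m.G(x_t, y)
        r_t = r + (m.dGx(x_t, y) * z_t + m.dGtheta(x_t, y)) / g  # R recursion
        idx = rng.choice(N, size=N, p=g / g.sum())     # resample the triple jointly
        x, z, r = x_t[idx], z_t[idx], r_t[idx]
    return np.mean(fprime(x) * z + f(x) * (r - r.mean()))

# Hypothetical example: AR(1) model, differentiating w.r.t. theta = alpha.
alpha, sigma, beta = 0.8, 1.0, 1.0
ar1 = SimpleNamespace(
    F0=lambda u: sigma * u,                dF0=lambda u: 0.0 * u,
    F=lambda x, u: alpha * x + sigma * u,
    dFtheta=lambda x, u: x,                dFx=lambda x, u: alpha + 0.0 * x,
    G=lambda x, y: np.exp(-0.5 * ((y - x) / beta) ** 2),
    dGtheta=lambda x, y: 0.0 * x,
    dGx=lambda x, y: np.exp(-0.5 * ((y - x) / beta) ** 2) * (y - x) / beta**2,
)
```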
Proposition 2. Under the assumptions of Proposition 1, the estimator $I_n^N$ defined by (6) has a bias $O(N^{-1})$ and is consistent with $\nabla \pi_n(f)$, i.e. $E[I_n^N] = \nabla \pi_n(f) + O(N^{-1})$ and $\lim_{N \to \infty} I_n^N = \nabla \pi_n(f)$ almost surely. In addition, its (asymptotic) variance is $O(N^{-1})$.
Proof. We use the general SMC convergence properties for Feynman-Kac (FK) models (see [Del04] or [DM08]) which, applied to a FK flow with Markov chain $X_{0:n}$, (random) potential functions $G(X_{0:n})$, and test function $H(X_{0:n})$, state that the SMC estimate $\frac{1}{N} \sum_{i=1}^N H(x_{0:n}^i)$ is consistent with $\frac{E[H(X_{0:n}) \prod_{t=0}^n G(X_t)]}{E[\prod_{t=0}^n G(X_t)]}$. Moreover, an asymptotic expression of the bias, given in [DMDP07], shows that it is of order $O(N^{-1})$. Applying those results to the test function $H \triangleq f'(X_n) Z_n + R_n (f(X_n) - \pi_n(f))$, using the representation (5) of the gradient, we deduce that the SMC estimator (6) is asymptotically unbiased and consistent with $\nabla \pi_n(f)$. Now the asymptotic variance is $O(N^{-1})$, since the Central Limit Theorem (see e.g. [Del04, DM08]) applies to the IPA estimator (6) of (5).
Remark 1. Notice that the computation of the gradient estimator requires $O(nNmd)$ elementary operations (where $m$ is the dimension of $X$), which is linear in the number of particles $N$ and linear in the number of parameters $d$, and has memory requirement $O(Nmd)$.
4.2 Gradient of the log-likelihood
In the Maximum Likelihood approach for the problem of parameter identification, one may follow a stochastic gradient method for maximizing the log-likelihood $l_n(\theta)$, where the gradient
$$\nabla l_n(\theta) = \sum_{t=0}^{n-1} \frac{\nabla \pi_{t+1|t}(G_{t+1})}{\pi_{t+1|t}(G_{t+1})}$$
is obtained by estimating each term $\nabla \pi_{t+1|t}(G_{t+1})$ of the sum, using a decomposition similar to (5) and (4) for the predictive distribution applied to $G_{t+1}$:
$$\nabla \pi_{t+1|t}(G_{t+1}) = \nabla\Big[\frac{E[G_{t+1}(X_{t+1}) \prod_{k=0}^t G_k(X_k)]}{E[\prod_{k=0}^t G_k(X_k)]}\Big] = \frac{\nabla E[G_{t+1}(X_{t+1}) \prod_{k=0}^t G_k(X_k)]}{E[\prod_{k=0}^t G_k(X_k)]} - \pi_{t+1|t}(G_{t+1})\, \frac{\nabla E[\prod_{k=0}^t G_k(X_k)]}{E[\prod_{k=0}^t G_k(X_k)]}$$
with
$$\nabla E\Big[G_{t+1}(X_{t+1}) \prod_{k=0}^t G_k(X_k)\Big] = E\Big[\Big(\nabla G_{t+1}(X_{t+1}) + G_{t+1}'(X_{t+1})\, \nabla X_{t+1} + G_{t+1}(X_{t+1}) \sum_{k=0}^t \frac{G_k'(X_k)\, \nabla X_k + \nabla G_k(X_k)}{G_k(X_k)}\Big) \prod_{k=0}^t G_k(X_k)\Big].$$
We deduce the IPA estimator of $\nabla l_n(\theta)$:
$$J_n^N \triangleq \sum_{t=1}^n \frac{\frac{1}{N} \sum_{i=1}^N \big(\nabla G_t(\tilde{x}_t^i) + G_t'(\tilde{x}_t^i)\, \tilde{z}_t^i + G_t(\tilde{x}_t^i)\big(r_{t-1}^i - \frac{1}{N} \sum_j r_{t-1}^j\big)\big)}{\frac{1}{N} \sum_{i=1}^N G_t(\tilde{x}_t^i)},$$
where $(x_n^i, z_n^i, r_n^i)$ (and $(\tilde{x}_n^i, \tilde{z}_n^i, \tilde{r}_n^i)$) are particles derived by using an SMC algorithm on the augmented Markov chain $(X_t, Z_t, R_t)$ described in the previous subsection. Using arguments similar to those detailed in the proofs of Propositions 1 and 2, we have that this estimator is asymptotically unbiased and consistent with $\nabla l_n(\theta)$.
The resulting gradient algorithm is described in Algorithm 3. The steps $\gamma_k$ are chosen appropriately so that local convergence occurs (e.g. such that $\sum_{k \ge 1} \gamma_k = \infty$ and $\sum_{k \ge 1} \gamma_k^2 < \infty$); see e.g. [KY97] for a detailed analysis of Stochastic Approximation algorithms.
Algorithm 3 Likelihood maximization by gradient ascent using the IPA estimator of $\nabla l_n(\theta)$
for k = 1, 2, ..., number of gradient steps do
  Initialize $J_0^N = 0$
  for t = 1 to n do
    For all $i \in \{1, \dots, N\}$ do
      Sample $u_{t-1}^i \overset{iid}{\sim} \nu$ and set $\tilde{x}_t^i = F(x_{t-1}^i, u_{t-1}^i)$,
      Set $\tilde{z}_t^i = \nabla F(x_{t-1}^i, u_{t-1}^i) + F'(x_{t-1}^i, u_{t-1}^i)\, z_{t-1}^i$,
      Set $\tilde{r}_t^i = r_{t-1}^i + \frac{G_t'(\tilde{x}_t^i)\, \tilde{z}_t^i + \nabla G_t(\tilde{x}_t^i)}{G_t(\tilde{x}_t^i)}$,
    Set $J_t^N = J_{t-1}^N + \frac{\frac{1}{N} \sum_{i=1}^N \big(\nabla G_t(\tilde{x}_t^i) + G_t'(\tilde{x}_t^i)\, \tilde{z}_t^i + G_t(\tilde{x}_t^i)(r_{t-1}^i - \frac{1}{N} \sum_j r_{t-1}^j)\big)}{\frac{1}{N} \sum_{i=1}^N G_t(\tilde{x}_t^i)}$
    and compute the weights $w_t^i = \frac{G_t(\tilde{x}_t^i)}{\sum_j G_t(\tilde{x}_t^j)}$.
    Set $(x_t^i, z_t^i, r_t^i) = (\tilde{x}_t^{k_t^i}, \tilde{z}_t^{k_t^i}, \tilde{r}_t^{k_t^i})$, where $k_t^{1:N}$ are indices selected from $w_t^{1:N}$.
  end for
  Perform a gradient ascent step: $\theta_k = \theta_{k-1} + \gamma_k J_n^N(\theta_{k-1})$
end for
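A schematic outer loop for Algorithm 3 is sketched below. It assumes a routine `ipa_loglik_gradient(theta, ys, N, rng)` returning $J_n^N(\theta)$ (e.g. built from the recursions above); the schedule $\gamma_k = \gamma_0 / k$ satisfies the Robbins-Monro conditions quoted above, and the projection onto positive parameters is an assumption of this sketch, not part of the paper's algorithm:

```python
import numpy as np

def maximize_likelihood(theta0, ys, ipa_loglik_gradient, K=150, N=100,
                        gamma0=0.1, seed=0):
    """Stochastic gradient ascent on the log-likelihood l_n(theta)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, K + 1):
        J = ipa_loglik_gradient(theta, ys, N, rng)  # IPA estimate of grad l_n
        theta = theta + (gamma0 / k) * J            # gamma_k = gamma0 / k
        theta = np.maximum(theta, 1e-6)             # keep theta in (R_+)^d
    return theta
```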
Figure 1: Box-and-whiskers plots of the three parameter $(\alpha, \sigma, \beta)$ estimates for the AR1 model with $\theta^* = (0.8, 1.0, 1.0)$. We compare three methods: (1) Kalman, (2) EM and (3) IPA. Here we used $n = 500$ observations and $N = 10^2$ particles.
5 Numerical experiments
We consider two typical problems and report our results, focusing on the variance of the estimator.
Autoregressive model. AR1 is a simple linear-Gaussian HMM, thus it may be solved by other methods (such as Kalman filtering and EM algorithms), which enables us to compare the performances of several algorithms for parameter identification. The dynamics are
$$X_0 \sim \mathcal{N}(0, \sigma^2), \quad \text{and for } t \ge 1, \quad X_t = \alpha X_{t-1} + \sigma U_t, \quad Y_t = X_t + \beta V_t, \qquad (7)$$
where $U_t \overset{i.i.d.}{\sim} \mathcal{N}(0,1)$ and $V_t \overset{i.i.d.}{\sim} \mathcal{N}(0,1)$ are independent sequences of random variables, and $\theta = (\alpha, \sigma, \beta)$ is a three-dimensional parameter in $(\mathbb{R}^+)^3$.
Stochastic volatility model. This model is very popular in the field of quantitative finance [ME07] to evaluate derivative securities, such as options. It is a non-linear non-Gaussian model, so the Kalman method cannot be used anymore. The dynamics are
$$X_0 \sim \mathcal{N}(0, \sigma^2), \quad \text{and for } t \ge 1, \quad X_t = \alpha X_{t-1} + \sigma U_t, \quad Y_t = \beta \exp(X_t / 2)\, V_t, \qquad (8)$$
where again $U_t \overset{i.i.d.}{\sim} \mathcal{N}(0,1)$ and $V_t \overset{i.i.d.}{\sim} \mathcal{N}(0,1)$, and the parameter $\theta = (\alpha, \sigma, \beta) \in (\mathbb{R}^+)^3$.
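Both benchmark models can be simulated in a few lines; the following is a sketch under the assumption of NumPy and the parameterization $\theta = (\alpha, \sigma, \beta)$ above:

```python
import numpy as np

def simulate(theta, n, model="ar1", seed=0):
    """Draw (X_{1:n}, Y_{1:n}) from the AR(1) model (7) or the
    stochastic volatility model (8)."""
    alpha, sigma, beta = theta
    rng = np.random.default_rng(seed)
    x = sigma * rng.standard_normal()              # X_0 ~ N(0, sigma^2)
    xs, ys = np.empty(n), np.empty(n)
    for t in range(n):
        x = alpha * x + sigma * rng.standard_normal()
        if model == "ar1":
            y = x + beta * rng.standard_normal()             # Eq. (7)
        else:
            y = beta * np.exp(x / 2) * rng.standard_normal() # Eq. (8)
        xs[t], ys[t] = x, y
    return xs, ys

xs, ys = simulate((0.8, 1.0, 1.0), n=500, model="ar1")
```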
5.1 Parameter identification
Figure 1 shows the results of our IPA gradient estimator for the AR1 parameter identification problem and compares them with two other methods: the Kalman filter and EM (which apply since the model is linear-Gaussian). The unknown parameter used is $\theta^* = (0.8, 1.0, 1.0)$. Notice the apparent bias of the three methods in the estimation of $\theta^*$ (even for Kalman, which provides here the exact filtering distribution), since the number of observations $n = 500$ is finite. For IPA, we used $N = 10^2$ particles and 150 gradient iterations. Algorithm 3 was run 50 times with random starting points uniformly drawn between $[\underline{\theta}, \bar{\theta}]$, where $\underline{\theta} = (0.5, 0.5, 0.5)$ and $\bar{\theta} = (1.0, 1.5, 1.5)$, in order to illustrate that the method is not sensitive to the starting point.
We observe that in terms of estimation accuracy, IPA is very competitive with the other methods, Kalman and EM, which are designed for specific models (here linear-Gaussian). The IPA method applies to general models, for example to the stochastic volatility model. Figure 2 shows the sets of estimates of $\theta^* = (0.8, 1.0, 1.0)$ using IPA with $n = 10^3$ observations and $N = 10^2$ particles (no comparison is made here, since Kalman does not apply and EM becomes more complicated).
5.2 Variance study for Score and IPA algorithms
IPA and Score methods provide gradient estimators for general models. We compare the variance of the corresponding estimators of the gradient $\nabla l_n$ for the AR1 model, since for this model we know its exact value (using Kalman).
Figure 2: Box-and-whiskers plots of the three parameter $(\alpha, \sigma, \beta)$ estimates for the IPA method applied to the stochastic volatility model with $\theta^* = (0.8, 1.0, 1.0)$. We used $n = 10^3$ observations and $N = 10^2$ particles.
Figure 3 shows the variance of the IPA and Score estimators of the partial derivative $\partial_\sigma l_n$ (we focused our study on $\sigma$ since the problem of volatility estimation is challenging, and also because the value of $\sigma$ influences the respective performances of the two algorithms, which is not the case for the other parameters $\alpha, \beta$). We used $n = N = 10^3$. The IPA estimator performs better than the Score estimator for small values of $\sigma$. On the other hand, in case of huge variance in the state model, it is better to use the Score estimator.
Figure 3: Variance of the log-likelihood derivative $\partial_\sigma l_n$ computed with both the IPA and Score methods. The true parameter is $\theta^* = (\alpha^*, \sigma^*, \beta^*) = (0.8, 1.0, 1.0)$ and the estimations are computed at $\theta = (0.7, \sigma, 0.9)$. [Plot: variance $v_n^N$ versus $\sigma \in [0.2, 1.4]$ for the two methods.]
Let us mention that the variance of the IPA (as well as the Score) estimator increases when the number of observations $n$ increases. However, under weak conditions on the HMM [LM00], the filtering distribution and its gradient forget exponentially fast the initial distribution. This property has already been used for EM estimators in [CM05] to show that fixed-lag smoothing drastically reduces the variance without significantly raising the bias. Similar smoothing (either fixed-lag or discounted) would provide efficient variance reduction techniques for the IPA estimator as well.
6 Conclusions
We proposed a sensitivity analysis in HMMs based on an Infinitesimal Perturbation Analysis and provided a computationally efficient gradient estimator that offers an interesting alternative to the usual Score method. We showed how this analysis may be used for estimating the gradient of the log-likelihood in a gradient-based likelihood maximization approach for the purpose of parameter identification. Finally, let us mention that estimators of higher-order derivatives (e.g. the Hessian) could be derived as well along this IPA approach, which would enable the use of more sophisticated optimization techniques (e.g. Newton's method).
References
[CDM08] P.A. Coquelin, R. Deguest, and R. Munos. Particle filter-based policy gradient in POMDPs. In Neural Information Processing Systems, 2008.
[CGN01] F. Cérou, F. Le Gland, and N. J. Newton. Stochastic particle methods for linear tangent filtering equations. In J.-L. Menaldi, E. Rofman, and A. Sulem, editors, Optimal Control and PDEs - Innovations and Applications, in honor of Alain Bensoussan's 60th anniversary, pages 231-240. IOS Press, 2001.
[CM05] O. Cappé and E. Moulines. On the use of particle filtering for maximum likelihood parameter estimation. European Signal Processing Conference, 2005.
[Del04] P. Del Moral. Feynman-Kac Formulae, Genealogical and Interacting Particle Systems with Applications. Springer, 2004.
[DFG01] A. Doucet, N. De Freitas, and N. Gordon. Sequential Monte Carlo Methods in Practice. Springer, 2001.
[DM01] R. Douc and C. Matias. Asymptotics of the maximum likelihood estimator for general hidden Markov models. Bernoulli, 7:381-420, 2001.
[DM08] R. Douc and E. Moulines. Limit theorems for weighted samples with applications to sequential Monte Carlo methods. Annals of Statistics, 36(5):2344-2376, 2008.
[DMDP07] P. Del Moral, A. Doucet, and G.W. Peters. Sharp propagation of chaos estimates for Feynman-Kac particle models. SIAM Theory of Probability and its Applications, 51(3):459-485, 2007.
[DT03] A. Doucet and V.B. Tadic. Parameter estimation in general state-space models using particle methods. Ann. Inst. Stat. Math., 2003.
[FLM03] J. Fichoud, F. LeGland, and L. Mevel. Particle-based methods for parameter estimation and tracking: numerical experiments. Technical Report 1604, IRISA, 2003.
[Gla91] P. Glasserman. Gradient Estimation via Perturbation Analysis. Kluwer, 1991.
[GSS93] N. Gordon, D. Salmond, and A. F. M. Smith. Novel approach to nonlinear and non-Gaussian Bayesian state estimation. In Proceedings IEE-F, volume 140, pages 107-113, 1993.
[Kit96] G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. J. Comput. Graph. Stat., 5:1-25, 1996.
[KY97] H. J. Kushner and G. Yin. Stochastic Approximation Algorithms and Applications. Springer-Verlag, Berlin and New York, 1997.
[LM00] F. LeGland and L. Mevel. Exponential forgetting and geometric ergodicity in hidden Markov models. Mathematics of Control, Signals and Systems, 13:63-93, 2000.
[ME07] R. Mamon and R.J. Elliott. Hidden Markov models in finance. International Series in Operations Research and Management Science, 104, 2007.
[Pap07] A. Papavasiliou. A uniformly convergent adaptive particle filter. Journal of Applied Probability, 42(4):1053-1068, 2007.
[PDS05] G. Poyadjis, A. Doucet, and S.S. Singh. Particle methods for optimal filter derivative: Application to parameter estimation. In IEEE ICASSP, 2005.
[Pfl96] G. Pflug. Optimization of Stochastic Models: The Interface Between Simulation and Optimization. Kluwer Academic Publishers, 1996.
[Poy06] G. Poyiadjis. Particle Methods for Parameter Estimation in General State Space Models. PhD thesis, University of Cambridge, 2006.
[Sto02] G. Storvik. Particle filters for state-space models with the presence of unknown static parameters. IEEE Transactions on Signal Processing, 50:281-289, 2002.
2,922 | 3,649 | Robust Nonparametric Regression with Metric-Space
valued Output
Matthias Hein
Department of Computer Science, Saarland University
Campus E1 1, 66123 Saarbrücken, Germany
[email protected]
Abstract
Motivated by recent developments in manifold-valued regression we propose a
family of nonparametric kernel-smoothing estimators with metric-space valued
output including several robust versions. Depending on the choice of the output
space and the metric the estimator reduces to partially well-known procedures for
multi-class classification, multivariate regression in Euclidean space, regression
with manifold-valued output and even some cases of structured output learning.
In this paper we focus on the case of regression with manifold-valued input and
output. We show pointwise and Bayes consistency for all estimators in the family
for the case of manifold-valued output and illustrate the robustness properties of
the estimators with experiments.
1 Introduction
In recent years there has been an increasing interest in learning with output which differs from
the case of standard classification and regression. The need for such approaches arises in several
applications which possess more structure than the standard scenarios can model. In structured
output learning, see [1, 2, 3] and references therein, one generalizes multiclass classification to
more general discrete output spaces, in particular incorporating structure of the joint input and
output space. These methods have been successfully applied in areas like computational biology,
natural language processing and information retrieval. On the other hand there has been a recent
series of work which generalizes regression with multivariate output to the case where the output
space is a Riemannian manifold, see [4, 5, 6, 7], with applications in signal processing, computer
vision, computer graphics and robotics. One can also see this branch as structured output learning
if one thinks of a Riemannian manifold as isometrically embedded in a Euclidean space. Then the
restriction that the output has to lie on the manifold can be interpreted as constrained regression in
Euclidean space, where the constraints couple several output features together.
In this paper we propose a family of kernel estimators for regression with metric-space valued input
and output motivated by estimators proposed in [6, 8] for manifold-valued regression. We discuss
loss functions and the corresponding Bayesian decision theory for this general regression problem.
Moreover, we show that this family of estimators has several well known estimators as special
cases for certain choices of the output space and its metric. However, our main emphasis lies on
the problem of regression with manifold-valued input and output which includes the multivariate
Euclidean case. In particular, we show for all our proposed estimators their pointwise and Bayes
consistency, that is in the limit as the sample size goes to infinity the estimated mapping converges
to the Bayes optimal mapping. This includes estimators implementing several robust loss functions
like the L1 -loss, Huber loss or the ?-insensitive loss. This generality is possible since our proof
considers directly the functional which is minimized instead of its minimizer as it is usually done in
consistency proofs of the Nadaraya-Watson estimator. Finally, we conclude with a toy experiment
illustrating the robustness properties and difference of the estimators.
2 Bayesian decision theory and loss functions for metric-space valued output
We consider the structured output learning problem where the task is to learn a mapping $\phi : M \to N$ between two metric spaces $M$ and $N$, where $d_M$ denotes the metric of $M$ and $d_N$ the metric of $N$. We assume that both metric spaces $M$ and $N$ are separable¹. In general, we are in a statistical setting where the given input/output pairs $(X_i, Y_i)$ are i.i.d. samples from a probability measure $P$ on $M \times N$.
In order to prove later on consistency of our metric-space valued estimator, we first have to define the Bayes optimal mapping $\phi^* : M \to N$ in the case where $M$ and $N$ are general metric spaces, which depends on the employed loss function. In multivariate regression the most common loss function is $L(y, f(x)) = \|y - f(x)\|_2^2$. However, it is well known that this loss is sensitive to outliers. In univariate regression one therefore uses the $L_1$-loss or other robust loss functions like the Huber or $\epsilon$-insensitive loss. For the $L_1$-loss the Bayes optimal function $f^*$ is given as $f^*(x) = \mathrm{Med}[Y \mid X = x]$, where Med denotes the median of $P(Y \mid X = x)$, which is a robust location measure. Several generalizations of the median for multivariate output have been proposed, see e.g. [9]. In this paper we refer to the minimizer of the loss function $L(y, f(x)) = \|y - f(x)\|_{\mathbb{R}^n}$, resp. $L(y, f(x)) = d_N(y, f(x))$, as the (generalized) median, since this seems to be the only generalization of the univariate median which has a straightforward extension to metric spaces. In analogy to the Euclidean case, we will therefore use loss functions penalizing the distance between predicted output and desired output:
$$L(y, \phi(x)) = \Gamma\big(d_N(y, \phi(x))\big), \qquad y \in N, \ x \in M,$$
where $\Gamma : \mathbb{R}_+ \to \mathbb{R}_+$. We will later on restrict $\Gamma$ to a certain family of functions. The associated risk (or expected loss) is $R_\Gamma(\phi) = E[L(Y, \phi(X))]$, and its Bayes optimal mapping $\phi_\Gamma^* : M \to N$ can then be determined by
$$\phi_\Gamma^* := \arg\min_{\phi : M \to N,\ \phi \text{ measurable}} R_\Gamma(\phi) = \arg\min_{\phi : M \to N,\ \phi \text{ measurable}} E\big[\Gamma\big(d_N(Y, \phi(X))\big)\big] = \arg\min_{\phi : M \to N,\ \phi \text{ measurable}} E_X\big[E_{Y|X}\big[\Gamma\big(d_N(Y, \phi(X))\big) \,\big|\, X\big]\big]. \qquad (1)$$
In the second step we used a result of [10] which states that a joint probability measure on the product of two separable metric spaces can always be factorized into a conditional probability measure and the marginal. In order for the risk to be well-defined, we assume that there exists a measurable mapping $\phi : M \to N$ such that $E[\Gamma(d_N(Y, \phi(X)))] < \infty$. This always holds once $N$ has bounded diameter.
Apart from the global risk $R_\Gamma(\phi)$, we analyze for each $x \in M$ the pointwise risk $R_\Gamma^0(x, \phi(x))$,
$$R_\Gamma^0(x, \phi(x)) = E_{Y|X}\big[\Gamma\big(d_N(Y, \phi(X))\big) \,\big|\, X = x\big],$$
which measures the loss suffered by predicting $\phi(x)$ for the input $x \in M$. The total loss $R_\Gamma(\phi)$ of the mapping $\phi$ is then $R_\Gamma(\phi) = E[R_\Gamma^0(X, \phi(X))]$. As in standard regression, the factorization allows us to find the Bayes optimal mapping $\phi^*$ pointwise,
$$\phi_\Gamma^*(x) = \arg\min_{p \in N} R_\Gamma^0(x, p) = \arg\min_{p \in N} E\big[\Gamma\big(d_N(Y, p)\big) \,\big|\, X = x\big] = \arg\min_{p \in N} \int_N \Gamma\big(d_N(y, p)\big)\, d\mu_x(y),$$
where $d\mu_x$ is the conditional probability of $Y$ conditioned on $X = x$. Later on we prove consistency for a set of kernel estimators, each using a different loss function $\Gamma$ from the following class of functions.
functions.
Definition 1 A convex function ? : R+ ? R+ is said to be (?, s)-bounded if
? ? : R+ ? R+ is continuously differentiable, monotonically increasing and ?(0) = 0,
? ?(2x) ? ? ?(x) for x ? s and ?(s) > 0 and ?0 (s) > 0.
Several functions ? corresponding to standard loss functions in regression are (?, s)-bounded:
? Lp -type loss: ?(x) = x? for ? ? 1 is (2? , 1)-bounded,
? Huber-loss: ?(x) =
1
2x2
?
for x ?
?
2
and ?(x) = 2x ?
?
2
A metric space is separable if it contains a countable dense subset.
2
for x >
?
2
is (3, 2? )-bounded.
? ?-insensitive loss: ?(x) = 0 for x ? ? and ?(x) = x ? ? if x > ? is (3, 2?)-bounded.
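The three example losses can be written down directly; the following sketch also checks the $(\gamma, s)$-bound numerically on a grid. The exact Huber parametrization follows the reconstruction above and should be treated as an assumption:

```python
import numpy as np

def lp_loss(x, a=2.0):        # Gamma(x) = x^a, (2^a, 1)-bounded
    return np.asarray(x, dtype=float) ** a

def huber_loss(x, eps=0.1):   # C^1 with linear growth; (3, eps/2)-bounded
    x = np.asarray(x, dtype=float)
    return np.where(x <= eps / 2, 2 * x**2 / eps, 2 * x - eps / 2)

def eps_insensitive(x, eps=0.1):   # (3, 2*eps)-bounded
    x = np.asarray(x, dtype=float)
    return np.maximum(x - eps, 0.0)

# Numerical sanity check of Gamma(2x) <= gamma * Gamma(x) for x >= s:
x = np.linspace(0.05, 5.0, 1000)       # s = eps/2 = 0.05 for the Huber loss
assert np.all(huber_loss(2 * x) <= 3 * huber_loss(x) + 1e-12)
```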
While uniqueness of the minimizer of the pointwise loss functional $R_\Gamma^0(x, \cdot)$ cannot be guaranteed anymore in the case of metric-space valued output, the following lemma shows that $R_\Gamma^0(x, \cdot)$ has reasonable properties (all longer proofs can be found in Section 7 or in the supplementary material). It generalizes a result provided in [11] for $\Gamma(x) = x^2$ to all $(\gamma, s)$-bounded losses.
Lemma 1 Let $N$ be a complete and separable metric space such that $d(x, y) < \infty$ for all $x, y \in N$ and every closed and bounded set is compact. If $\Gamma$ is $(\gamma, s)$-bounded and $R_\Gamma^0(x, q) < \infty$ for some $q \in N$, then
- $R_\Gamma^0(x, p) < \infty$ for all $p \in N$,
- $R_\Gamma^0(x, \cdot)$ is continuous on $N$,
- the set of minimizers $Q^* = \arg\min_{q \in N} R_\Gamma^0(x, q)$ exists and is compact.
It is interesting to have a look at one special loss, the case $\Gamma(x) = x^2$. The minimizer of the pointwise risk,
$$F(p) = \arg\min_{p \in N} \int_N d_N^2(y, p)\, d\mu_x(y),$$
is called the Fréchet mean² or Karcher mean in the case where $N$ is a manifold. It is the generalization of the mean in Euclidean space to a general metric space. Unfortunately, it need no longer be unique as in the Euclidean case. A simple example is the sphere as the output space together with a uniform probability measure on it. In this case every point $p$ on the sphere attains the same value $F(p)$ and thus the global minimum is non-unique. We refer to [12, 13, 11] for more information under which conditions one can prove uniqueness of the global minimizer if $N$ is a Riemannian manifold. The generalization of the median to Riemannian manifolds, that is $\Gamma(x) = x$, is discussed in [9, 4, 8]. For a discussion of the computation of the median in general metric spaces see [14].
3 A family of kernel estimators with metric-space valued input and output
In the following we provide the definition of the kernel estimator with metric-space valued output, motivated by the two estimators proposed in [6, 8] for manifold-valued output. We use in the following the notation $k_h(x) = \frac{1}{h^m}\, k(x/h)$.
Definition 2 Let $(X_i, Y_i)_{i=1}^l$ be the sample with $X_i \in M$ and $Y_i \in N$. The metric-space-valued kernel estimator $\phi_l : M \to N$ from metric space $M$ to metric space $N$ is defined for all $x \in M$ as
$$\phi_l(x) = \arg\min_{q \in N} \frac{1}{l} \sum_{i=1}^l \Gamma\big(d_N(q, Y_i)\big)\, k_h\big(d_M(x, X_i)\big), \qquad (2)$$
where $\Gamma : \mathbb{R}_+ \to \mathbb{R}_+$ is $(\gamma, s)$-bounded and $k : \mathbb{R}_+ \to \mathbb{R}_+$.
If the data contains a large fraction of outliers, one should use a robust loss function $\Gamma$, see Section 6. Usually the kernel function should be monotonically decreasing, since the interpretation of $k_h(d_M(x, X_i))$ is to measure the similarity between $x$ and $X_i$ in $M$, which should decrease as the distance increases. The computational complexity to determine $\phi_l(x)$ is quite high, as for each test point one has to solve an optimization problem, but it is comparable to structured output learning (see the discussion below), where one maximizes for each test point a score function over the output space. For manifold-valued output we will describe in the next section a simple gradient-descent type optimization scheme to determine $\phi_l(x)$.
It is interesting to see that several well-known nonparametric estimators for classification and regression can be seen as special cases of this estimator (or of a slightly more general form) for different choices of the output space, its metric and the loss function. In particular, the approach shows a certain analogy between a generalization of regression into a continuous space (manifold-valued regression) and regression into a discrete space (structured output learning).
²In some cases the set of all local minimizers is denoted as the Fréchet mean set, and the Fréchet mean is called unique if there exists only one global minimizer.
Multiclass classification: Let $N = \{1, \dots, K\}$, where $K$ denotes the number of classes. If there is no special class structure, then using the discrete metric on $N$, $d_N(q, q') = 1$ if $q \ne q'$ and $0$ else, leads for any $\Gamma$ to the standard multiclass classification scheme using a majority vote. Cost-sensitive multiclass classification can be done by using $d_N(q, q')$ to model the cost of misclassifying class $q$ by class $q'$. Since general costs can in general not be modeled by a metric, it should be noted that the estimator can be modified using a similarity function $s : N \times N \to \mathbb{R}$,
$$\phi_l(x) = \arg\max_{q \in N} \frac{1}{l} \sum_{i=1}^l s(q, Y_i)\, k_h\big(d_M(x, X_i)\big). \qquad (3)$$
The consistency result below can be generalized to this case given that $N$ has finite cardinality.
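For a finite label set the arg max in (3) is a weighted vote that can be computed exactly. A minimal sketch with the discrete metric, i.e. $s(q, y) = 1$ if $q = y$ and $0$ else (the vectorized distance helper `d_M` is a hypothetical assumption):

```python
import numpy as np

def kernel_majority_vote(x, X, Y, K, h, d_M):
    """Estimator (3) for N = {0, ..., K-1} with s(q, y) = 1_{q == y}.
    Y holds integer labels; d_M(x, X) returns distances to all X_i."""
    w = np.maximum(1 - d_M(x, X) / h, 0.0)         # triangular kernel k_h
    scores = np.bincount(Y, weights=w, minlength=K)
    return int(np.argmax(scores))                  # weighted majority vote
```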
Multivariate regression: Let $N = \mathbb{R}^n$ and let $M$ be a metric space. Then for $\Gamma(x) = x^2$ one gets
$$\phi_l(x) = \arg\min_{q \in N} \frac{1}{l} \sum_{i=1}^l \|q - Y_i\|^2\, k_h\big(d_M(x, X_i)\big),$$
which has the solution
$$\phi_l(x) = \frac{\frac{1}{l} \sum_{i=1}^l k_h\big(d_M(x, X_i)\big)\, Y_i}{\frac{1}{l} \sum_{i=1}^l k_h\big(d_M(x, X_i)\big)}.$$
This is the well-known Nadaraya-Watson estimator, see [15, 16], on a metric space. In [17] a related estimator is discussed when $M$ is a closed Riemannian manifold, and [18] discusses the Nadaraya-Watson estimator when $M$ is a metric space.
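In this Euclidean-output case the minimization is closed-form; a minimal sketch of the Nadaraya-Watson estimator over a metric input space (the vectorized `d_M` and the fallback for an empty neighborhood are assumptions of this sketch):

```python
import numpy as np

def nadaraya_watson(x, X, Y, h, d_M):
    """phi_l(x) = sum_i k_h(d_M(x, X_i)) Y_i / sum_i k_h(d_M(x, X_i)),
    with Y of shape (l, n)."""
    w = np.maximum(1 - d_M(x, X) / h, 0.0)   # the kernel used in Section 6
    if w.sum() == 0.0:                       # no sample within bandwidth h
        w = np.ones_like(w)
    return (w[:, None] * Y).sum(axis=0) / w.sum()
```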
Manifold-valued regression: In [6] the estimator $\phi_l(x)$ has been proposed for the case where $N$ is a Riemannian manifold and $\Gamma(x) = x^2$, in particular with the emphasis on $N$ being the manifold of shapes. A robust median-type estimator, that is $\Gamma(x) = x$, has been discussed recently in [8]. While it has been shown in [7] that an approach using a global smoothness regularizer outperforms the estimator $\phi_l(x)$, it is a well-working baseline with a simple implementation, see Section 4.
Structured output: Structured output learning, see [1, 2, 3] and references therein, can be formulated using kernels $k\big((x_1, q_1), (x_2, q_2)\big)$ on the product $M \times N$ of input and output space, which are supposed to measure similarity jointly and thus can capture non-trivial dependencies between input and output. Using such kernels, [1, 2, 3] learn a score function $s : M \times N \to \mathbb{R}$, with
$$\phi(x) = \arg\max_{q \in N} s(x, q)$$
being the final prediction for $x \in M$. The similarity to our estimator $\phi_l(x)$ in (2) becomes more obvious when we use that, in the framework of [1], the learned score function can be written as
$$\phi_l(x) = \arg\max_{q \in N} \frac{1}{l} \sum_{i=1}^l \alpha_i\, k\big((x, q), (X_i, Y_i)\big), \qquad (4)$$
where $\alpha \in \mathbb{R}^l$ is the learned coefficient vector. Apart from the coefficient vector $\alpha$, this has almost the form of the previously discussed estimator in Equation (3), using a joint similarity function on input and output space. Clearly, a structured output method where the coefficients $\alpha$ have been optimized should perform better than $\alpha_i = \text{const}$. In cases where training time is prohibitive, the estimator without $\alpha$ is an alternative; at least it provides a useful baseline for structured output learning. Moreover, if the joint kernel factorizes, $k\big((x_1, q_1), (x_2, q_2)\big) = k_M(x_1, x_2)\, k_N(q_1, q_2)$ on $M$ and $N$, and $k_N(q, q) = \text{const}$, then one can rewrite the problem in (4) as
$$\phi_l(x) = \arg\min_{q \in N} \frac{1}{l} \sum_{i=1}^l \alpha_i\, k_M(x, X_i)\, d_N^2(q, Y_i),$$
where $d_N$ is the induced (semi-)metric³ of $k_N$. Apart from the learned coefficients, this is basically equivalent to $\phi_l(x)$ in (2) for $\Gamma(x) = x^2$.
In the following we restrict ourselves to the case where $M$ and $N$ are Riemannian manifolds. In this case the optimization to obtain $\phi_l(x)$ can still be done very efficiently, as the next section shows.
³The kernel $k_N$ induces a (semi-)metric $d_N$ on $N$ via $d_N^2(p, q) = k_N(p, p) + k_N(q, q) - 2 k_N(p, q)$.
4 Implementation of the kernel estimator for manifold-valued output
For fixed $x \in M$, the functional $F(q)$, $q \in N$, which is optimized in the kernel estimator $\phi_l(x)$, can be rewritten with $w_i = k_h(d_M(x, X_i))$ as
$$F(q) = \sum_{i=1}^l w_i\, \Gamma\big(d_N(q, Y_i)\big).$$
The covariant gradient of $F(q)$ is given as $\nabla F|_q = \sum_{i=1}^l w_i\, \Gamma'\big(d_N(q, Y_i)\big)\, v_i$, where $v_i \in T_q N$ is a tangent vector at $q$ with $\|v_i\|_{T_q N} = 1$, given by the tangent vector at $q$ of the minimizing⁴ geodesic from $Y_i$ to $q$ (pointing "away" from $Y_i$). Denoting by $\exp_q : T_q N \to N$ the exponential map at $q$, the simple gradient descent based optimization scheme can be written as
- choose a random point $q_0$ from $N$,
- while the stopping criterion is not fulfilled,
  1. compute the gradient $\nabla F$ at $q_k$,
  2. set $q_{k+1} = \exp_{q_k}\big(-\lambda\, \nabla F|_{q_k}\big)$,
  3. determine the stepsize $\lambda$ by the Armijo rule [19].
As stopping criterion we use either the norm of the gradient or a threshold on the change of $F$. For the experiments in Section 6 we get convergence in 5 to 40 steps.
⁴The set of points where the minimizing geodesic is not unique, the so-called cut locus, has measure zero and therefore plays no role in the optimization.
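For the unit sphere in $\mathbb{R}^3$ the exponential and logarithm maps are explicit, so the descent scheme above takes only a few lines. The following is a sketch with a fixed step size instead of the Armijo rule, restricted to $\Gamma(x) = x^2$ and $\Gamma(x) = x$; initialization at a data point is an assumption of this sketch:

```python
import numpy as np

def log_map(q, y):
    """Log map on the unit sphere: tangent vector at q pointing towards y,
    with norm equal to the geodesic distance d_N(q, y)."""
    c = np.clip(np.dot(q, y), -1.0, 1.0)
    theta = np.arccos(c)
    v = y - c * q
    n = np.linalg.norm(v)
    return theta * v / n if n > 1e-12 else np.zeros_like(q)

def exp_map(q, v):
    """Exponential map on the unit sphere."""
    t = np.linalg.norm(v)
    return q if t < 1e-12 else np.cos(t) * q + np.sin(t) * v / t

def weighted_minimizer(Ys, w, loss="L2", steps=40, lam=0.5):
    """Minimize F(q) = sum_i w_i Gamma(d_N(q, Y_i)) on S^2 by gradient descent."""
    q = Ys[np.argmax(w)].copy()                 # start at the heaviest data point
    for _ in range(steps):
        grad = np.zeros(3)
        for y, wi in zip(Ys, w):
            v = log_map(q, y)                   # points towards y, |v| = d_N(q, y)
            d = np.linalg.norm(v)
            if d < 1e-12:
                continue
            gp = 2 * d if loss == "L2" else 1.0  # Gamma'(d) for x^2 resp. x
            grad += -wi * gp * (v / d)           # unit vector away from Y_i
        q = exp_map(q, -lam * grad)              # descent step on the sphere
    return q
```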
5 Consistency of the kernel estimator for manifold-valued input and output
In this section we show the pointwise and Bayes consistency of the kernel estimator $\phi_l$ in the case where $M$ and $N$ are Riemannian manifolds. This case already subsumes several of the interesting applications discussed in [6, 8]. The proof of consistency of the general metric-space valued kernel estimator (for a restricted class of metric spaces including all Riemannian manifolds) requires high technical overhead, which is interesting in itself but would make the paper hard to access. The consistency of $\phi_l$ will be proven under the following assumptions:
Assumptions (A1):
1. the loss $\Gamma : \mathbb{R}_+ \to \mathbb{R}_+$ is $(\gamma, s)$-bounded,
2. $(X_i, Y_i)_{i=1}^l$ is an i.i.d. sample of $P$ on $M \times N$,
3. $M$ and $N$ are compact $m$- and $n$-dimensional manifolds,
4. the data-generating measure $P$ on $M \times N$ is absolutely continuous with respect to the natural volume element,
5. the marginal density on $M$ fulfills $p(x) \ge p_{\min} > 0$ for all $x \in M$,
6. the density $p(\cdot, y)$ is continuous on $M$ for all $y \in N$,
7. the kernel fulfills $a\, \mathbf{1}_{s \le r_1} \le k(s) \le b\, e^{-\gamma s^2}$ and $\int_{\mathbb{R}^m} \|x\|\, k(\|x\|)\, dx < \infty$.
Note that the existence of a density is not necessary for consistency. However, in order to keep the proofs simple, we restrict ourselves to this setting. In the following $dV = \sqrt{\det g}\, dx$ denotes the natural volume element of a Riemannian manifold with metric $g$, and $\mathrm{vol}(S)$ and $\mathrm{diam}(N)$ are the volume and diameter of the set $S$. For the proof of our main theorem we need the following two propositions. The first one summarizes two results from [20].
Proposition 1 Let $M$ be a compact $m$-dimensional Riemannian manifold. Then there exist $r_0 > 0$ and $S_1, S_2 > 0$ such that for all $x \in M$ the volume of the balls $B(x, r)$ with radius $r \le r_0$ satisfies
$$S_1 r^m \le \mathrm{vol}\big(B(x, r)\big) \le S_2 r^m.$$
Moreover, the cardinality $K$ of a $\delta$-covering of $M$ is upper bounded as $K \le \frac{\mathrm{vol}(M)}{S_1}\big(\frac{2}{\delta}\big)^m$.
Moreover, we need a result about convolutions on manifolds.
Proposition 2 Let the assumptions A1 hold. If $f$ is continuous, then for any $x \in M \setminus \partial M$,
$$\lim_{h \to 0} \int_M k_h\big(d_M(x, z)\big)\, f(z)\, dV(z) = C_x f(x),$$
where $C_x = \lim_{h \to 0} \int_M k_h(d_M(x, z))\, dV(z) > 0$. If moreover $f$ is Lipschitz continuous with Lipschitz constant $L$, then there exists $h_0 > 0$ such that for all $h < h_0(x)$,
$$\int_M k_h\big(d_M(x, z)\big)\, f(z)\, dV(z) = C_x f(x) + O(h).$$
The following main theorem proves the almost sure pointwise convergence of the manifold-valued kernel estimator for all $(\gamma, s)$-bounded loss functions $\Gamma$.
Theorem 1 Suppose the assumptions in A1 hold. Let $\phi_l(x)$ be the estimate of the kernel estimator for sample size $l$. If $h \to 0$ and $l h^m / \log l \to \infty$, then for any $x \in M \setminus \partial M$,
$$\lim_{l \to \infty} \big|R_\Gamma^0(x, \phi_l(x)) - \min_{q \in N} R_\Gamma^0(x, q)\big| = 0, \quad \text{almost surely.}$$
If additionally $p(\cdot, y)$ is Lipschitz continuous for any $y \in N$, then
$$\big|R_\Gamma^0(x, \phi_l(x)) - \min_{q \in N} R_\Gamma^0(x, q)\big| = O(h) + O\big(\sqrt{\log l / (l h^m)}\big), \quad \text{almost surely.}$$
The optimal rate is given by $h = O\big((\log l / l)^{\frac{1}{2+m}}\big)$, so that
$$\big|R_\Gamma^0(x, \phi_l(x)) - \min_{q \in N} R_\Gamma^0(x, q)\big| = O\big((\log l / l)^{\frac{1}{2+m}}\big), \quad \text{almost surely.}$$
Note that the condition $l h^m / \log l \to \infty$ for convergence is the same as for the Nadaraya-Watson estimator on an $m$-dimensional Euclidean space. This had to be expected, as this condition still holds if one considers multivariate output, see [15, 16]. Thus, doing regression with manifold-valued output is not more "difficult" than standard regression with multivariate output.
Next, we show Bayes consistency of the manifold-valued kernel estimator.
Theorem 2 Let the assumptions A1 hold. If $h \to 0$ and $l h^m / \log l \to \infty$, then
$$\lim_{l \to \infty} R_\Gamma(\phi_l) - R_\Gamma(\phi^*) = 0, \quad \text{almost surely.}$$
Proof: We have $R_\Gamma(\phi_l) - R_\Gamma(\phi^*) \le E\big[|R_\Gamma^0(X, \phi_l(X)) - R_\Gamma^0(X, \phi^*(X))|\big]$. Moreover, we have almost everywhere $\lim_{l \to \infty} R_\Gamma^0(x, \phi_l(x)) = R_\Gamma^0(x, \phi^*(x))$ almost surely. Since $E[R_\Gamma^0(X, \phi_l(X))] < \infty$ and $E[R_\Gamma^0(X, \phi^*(X))] < \infty$, an extension of the dominated convergence theorem proven by Glick, see [21], provides the result.
Experiments
We illustrate the differences of median and mean type estimator on a synthetic dataset with the task
of estimating a curve on the sphere, that is M = [0, 1] and N = S 1 . The kernel used had the form,
k |x ? y|/h = 1 ? |x ? y|/h. The parameter h was found by 5-fold cross validation from the set
[5, 10, 20, 40] ? 10?3 . The results are summarized for different levels of outliers and different levels
of van-Mises noise (note that the parameter k is inverse to the variance of the distribution) in Table
1. As expected the the L1 -loss and the Huber loss as robust loss functions outperform the L2 -loss
in the presence of outliers, whereas the L2 -loss outperforms the robust versions when no outliers
are present. Note, that the Huber loss as a hybrid version between L1 - and L2 -loss is even slightly
better than the L1 -loss in the presence of outliers as well as in the outlier free case. Thus for a given
dataset it makes sense not only to do cross-validation of the parameter h of the kernel function but
also over different loss functions in order to adapt to possible outliers in the data.
6
Figure 1: Regression problem on the sphere with 1000 training points (black points). The blue points are the ground truth disturbed by von Mises noise with parameter $k = 100$ and 20% outliers with $k = 3$. The estimated curves are shown in green. Left: result of $L_1$-loss, mean error (ME) 0.256, mean squared error (MSE) 0.165. Middle: result of $L_2$-loss: ME = 0.265, MSE = 0.169. Right: result of Huber loss with $\epsilon = 0.1$: ME = 0.255, MSE = 0.165. In particular, the curves found using the $L_1$ and Huber losses are very close to the ground truth.
Table 1: Mean squared error (unit $10^{-1}$) for regression on the sphere, for different noise levels $k$ and numbers of labeled points, without and with outliers. Results are averaged over 10 runs.

                                          no outliers                                   20% outliers
Loss                        k       100            500             1000           100          500           1000
L1 (Gamma(x) = x)           100     0.63 ± 0.11    0.260 ± 0.027   0.219 ± 0.003  2.1 ± 0.2    1.57 ± 0.05   1.521 ± 0.015
                            1000    0.43 ± 0.12    0.043 ± 0.005   0.030 ± 0.001  2.1 ± 0.5    1.45 ± 0.03   1.400 ± 0.008
L2 (Gamma(x) = x^2)         100     0.43 ± 0.10    0.230 ± 0.007   0.208 ± 0.001  2.0 ± 0.2    1.59 ± 0.02   1.549 ± 0.021
                            1000    0.28 ± 0.16    0.032 ± 0.003   0.025 ± 0.001  2.0 ± 0.4    1.51 ± 0.03   1.447 ± 0.015
Huber (epsilon = 0.1)       100     0.61 ± 0.11    0.257 ± 0.026   0.218 ± 0.003  2.1 ± 0.2    1.57 ± 0.05   1.520 ± 0.021
                            1000    0.42 ± 0.12    0.040 ± 0.005   0.028 ± 0.001  2.1 ± 0.5    1.44 ± 0.02   1.397 ± 0.008
Proofs
Lemma 2 Let ? : R+ ? R be convex, differentiable and monotonically increasing. Then
min{?0 (x), ?0 (y)}|y ? x| ? |?(y) ? ?(x)| ? max{?0 (x), ?0 (y)}|y ? x|.
1
Pl
?(d (q,Y )) k (d
(x,X ))
i
M
h
0
Proof of Theorem 1 We define R?,l
(x, q) = l i=1 E[kNh (dMi(x,X))]
. Note that ?l (x) =
0
arg min R?,l (x, q) as we have only divided by a constant factor. We use the standard technique for
q?N
the pointwise estimate,
0
0
R?0 (x, ?l (x)) ? min R?0 (x, q) ? R?0 (x, ?l (x)) ? R?,l
(x, ?l (x)) + R?,l
(x, ?l (x)) ? min R?0 (x, q)
q?N
q?N
? 2 sup
q?N
0
|R?,l
(x, q)
?
R?0 (x, q)|.
In Porder to bound the supremum, we will work on the event E, where we assume,
1 l kh (dM (x,Xi ))
m
l i=1
? 1 < 12 , which holds with probability 1 ? 2 e?C l h for some constant C.
E[kh (dM (x,X))]
K
Moreover, we assume to
have
a ?-covering of N with centers N? = {q? }?=1 where using Lemma
1 we have K ?
Introducing
vol(N )
S1
R?E (x, q)
=
n
2
. Thus for each q ? N there exists q? ? N? such that
?
E[?(dN (q,Y ))kh (dM (x,X))]
and using the decomposition,
E[kh (dM (x,X))]
dN (q, q? ) ? ?.
0
0
0
0
R?,l
(x, q) ? R?0 (x, q) =R?,l
(x, q) ? R?,l
(x, q? ) + R?,l
(x, q? ) ? R?E (x, q? )
+ R?E (x, q? ) ? R?E (x, q) + R?E (x, q) ? R?0 (x, q),
we have to control four terms,
P
0
1l li=1 ? dN (q, Yi ) ? ? dN (q? , Yi ) kh (dM (x, Xi ))
0
R?,l (x, q)?R?,l
(x, q? ) =
E[kh (dM (x, X))]
P
1 l kh (dM (x, Xi ))
? 2 dN (q, q? ) ?0 diam(N ) l i=1
? 3 ?0 diam(N ) ?.
E[kh (dM (x, X))]
7
where we have used Lemma 2 and the fact that E holds. Then, there exists a constant C such that
vol(N ) 2 n ?C l hm ?2
0
P max |R?,l
(x, q? ) ? R?E (x, q? )| > ? ? 2
e
,
1???K
S1
?
Pl
which can be shown using Bernstein?s inequality for 1l i=1 Wi ? E[Wi ] where Wi =
?(dN (q? ,Yi ))kh (dM (x,Xi ))
together with a union bound over the elements in the covering N? using
E[kh (dM (x,X))]
|Wi | ?
b ?(diam(N ))
,
a hm S1 r1m pmin
Var Wi ?
?(diam(N ))2 E[kh2 (dM (x, X))]
b ?(diam(N ))2
?
,
(E[kh (dM (x, X))])2
a hm S1 r1m pmin
where we used Proposition 1 to lower bound vol(B(x, h r1 )) for small enough h. Third, we get for
the third term using again Lemma 2,
|R?E (x, q? ) ? R?E (x, q)| ? 2?0 (diam(N ))dN (q, q? ) ? 2?0 (diam(N ))?.
Last, we have to bound the approximation error R?E (x, q) ? R?0 (x, q), Under the continuity assumption on the joint density p(x, y) we can use Proposition 2. For every x ? M \?M we get,
Z
Z
lim
kh (dM (x, z))p(z, y)dV (z) = Cx p(x, y), lim
kh (dM (x, z))p(z)dV (z) = Cx p(x),
h?0
h?0
M
where Cx > 0. Thus with
Z
fh =
kh (dM (x, z))p(z, y)dV (z),
M
M
Z
gh =
kh (dM (x, z))p(z)dV (z),
M
we get for every x ? M \?M ,
f
f
|fh ? f |
|gh ? g|
h
lim ? ? lim
+ lim f
= 0,
h?0 gh
h?0
h?0
g
gh
g gh
where we have used gh ? aS1 r1 pmin > 0 and g = Cx p(x) > 0. Moreover, using results from
the proof of Proposition 2 one can show fh < C for some constant C. Thus fh /gh < C for some
constant and fh /gh ? f /g as h ? 0. Using the dominated convergence theorem we thus get
Z
p(x, y)
E[?(dN (q, Y ))kh (dM (x, X))]
E
=
? dN (q, y)
dy = R?0 (x, q).
lim R? (x, q) = lim
h?0
h?0
E[kh (dM (x, X))]
p(x)
N
For the case where the joint density is Lipschitz continuous one gets using Proposition 2, R?E (x, q) =
R?0 (x, q) + O(h).
In total, there exist constants A, B, C, D1 , D2 , such that for sufficiently small h one has with probm 2
1
ability 1 ? AeB n log( ? )?Clh ? ,
0
sup |R?,l
(x, q) ? R?E (x, q)| ? 2D1 ? + ?.
q?N
m
lh
? ? together with
With ? = l?s for some s > 0 one gets convergence if log
l
E
0
limh?0 R? (x, q) = R? (x, q). For the case where p(?, y) is Lipschitz continuous for all
y ? N we have R?E (x, q) = R?0 (x, q) + O(h) and can choose s large enough so that the bound
lhm
from the approximation error dominates the one of the covering. Under the condition log
l ? ? the
probabilistic bound is summable in l which yields almost sure convergence by the Borel-CantelliLemma. The optimal rate in the Lipschitz continuous case is then determined by fixing h such that
both terms of the bound are of the same order.
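As a concrete sketch of that final balancing step (in LaTeX), assuming the estimation term has exactly the order suggested by the probabilistic bound:

% Rate calculation in the Lipschitz case, under the stated order assumptions.
\[
  \sup_{q \in N} \bigl| R'_{\varphi,l}(x,q) - R_{\varphi}(x,q) \bigr|
  \;\lesssim\; \underbrace{C_1\, h}_{\text{approximation}}
  \;+\; \underbrace{C_2 \sqrt{\frac{\log l}{l\, h^m}}}_{\text{estimation}} .
\]
% Equating the two terms, C_1 h = C_2 \sqrt{(\log l)/(l h^m)}, gives
\[
  h \;\asymp\; \Bigl( \frac{\log l}{l} \Bigr)^{1/(m+2)},
  \qquad\text{and hence a rate of order}\qquad
  \Bigl( \frac{\log l}{l} \Bigr)^{1/(m+2)} .
\]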
Acknowledgments
We thank Florian Steinke for helpful discussions about relations between generalized kernel estimators and structured output learning. This work has been partially supported by the Cluster of
Excellence MMCI at Saarland University.
Planning with an Adaptive World Model
Sebastian B. Thrun
German National Research
Center for Computer
Science (GMD)
D-5205 St. Augustin, FRG
Knut Möller
University of Bonn
Department of
Computer Science
D-5300 Bonn, FRG
Alexander Linden
German National Research
Center for Computer
Science (GMD)
D-5205 St. Augustin, FRG
Abstract
We present a new connectionist planning method [TML90]. By interaction
with an unknown environment, a world model is progressively constructed using gradient descent. For deriving optimal actions with respect to
future reinforcement, planning is applied in two steps: an experience network proposes a plan which is subsequently optimized by gradient descent
with a chain of world models, so that an optimal reinforcement may be
obtained when it is actually run. The appropriateness of this method is
demonstrated by a robotics application and a pole balancing task.
1 INTRODUCTION
Whenever decisions are to be made with respect to some events in the future,
planning has been proved to be an important and powerful concept in problem
solving. Planning is applicable if an autonomous agent interacts with a world, and
if a reinforcement is available which measures only the over-all performance of the
agent. Then the problem of optimizing actions yields the temporal credit assignment
problem [Sut84], i.e. the problem of assigning particular reinforcements to particular
actions in the past. The problem becomes more complicated if no knowledge about
the world is available in advance.
Many connectionist approaches so far solve this problem directly, using techniques
based on the interaction of an adaptive world model and an adaptive controller
[Bar89, Jor89, Mun87]. Although such controllers are very fast after training, training itself is rather complex, mainly because of two reasons: a) Since future is not
considered explicitly, future effects must be directly encoded into the world model.
This complicates model training. b) Since the controller is trained with the world
model, training of the former lags behind the latter. Moreover, if there do exist
Figure 1: The training of the model network is a system identification task.
Internal parameters are estimated by gradient descent, e.g. by backpropagation.
several optimal actions, such controllers will only generate at most one regardless of
all others, since they represent many-to-one functions. E.g., changing the objective
function implies the need of an expensive retraining.
In order to overcome these problems, we applied a planning technique to reinforcement learning problems. A model network which approximates the behavior of the
world is used for looking ahead into future and optimizing actions by gradient descent with respect to future reinforcement. In addition, an experience network is
trained in order to accelerate and improve planning.
2 LOOK-AHEAD PLANNING
2.1 SYSTEM IDENTIFICATION
Planning needs a world model. Training of the world model is adopted from
[Bar89, Jor89, Mun87]. Formally, the world maps actions to subsequent states and
reinforcements (Fig. 1). The world model used here is a standard non-recurrent or
a recurrent connectionist network which is trained by backpropagation or related
gradient descent algorithms [WZ88, TS90]. Each time an action is performed on the
world their resulting state and reinforcement is compared with the corresponding
prediction by the model network. The difference is used for adapting the internal parameters of the model in small steps, in order to improve its accuracy. The resulting
model approximates the world's behavior.
Our planning technique relies mainly on two fundamental steps: Firstly, a plan is
proposed either by some heuristic or by a so-called experience network. Secondly,
this plan is optimized progressively by gradient descent in action space. First, we
will consider the second step.
2.2 PLAN OPTIMIZATION
In this section we show the optimization of plans by means of gradient descent. For
that purpose, let us assume an initial plan, i.e. a sequence of N actions, is given. The
first action of this plan together with the current state (and, in case of a recurrent
model network, its current context activations) are fed into the model network (Fig.
2). This gives us a prediction for the subsequent state and reinforcement of the world.
If we assume that the state prediction is a good estimation for the next state, we can
proceed by predicting the immediate next state and reinforcement from the second
action of the plan correspondingly. This procedure is repeated for each of the N
stages of the plan. The final output is a sequence of N reinforcement predictions,
which represents the quality of the plan. In order to maximize reinforcement, we
[Figure 2 diagram: a stack of model networks (1), (2), ..., (N); the plan's 1st, 2nd, ..., Nth actions feed the successive models, context units link the recurrent networks, and the top of the chain yields the planning result.]
Figure 2: Looking ahead by the chain of model networks.
establish a differentiable reinforcement energy function Ereinf, which measures the
deviation of predicted and desired reinforcement. The problem of optimizing plans
is transformed to the problem of minimizing Ereinf. Since both Ereinf and the chain
of model networks are differentiable, the gradients of the plan with respect to Ereinf
can be computed. These gradients are used for changing the plan in small steps,
which completes the gradient descent optimization.
The whole update procedure is repeated either until convergence is observed or,
which makes it more convenient for real-time applications, a predefined number of
iterations - note that in the latter case the computational effort is linear in N. From
the planning procedure we obtain the optimized plan, the first action¹ of which is
then performed on the world. Now the whole procedure is repeated.
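A minimal sketch of this inner loop with a modern autodiff library (the original work predates such tools); the model interface, the quadratic energy, and the SGD step size are assumptions for illustration, and the g_k weighting of Eq. (3) below is omitted:

import torch

def optimize_plan(model, state, init_plan, iters=9, lr=0.05):
    """Refine an action sequence by gradient descent through a chain of
    world-model evaluations; `model(state, action) -> (next_state, reinf)`."""
    plan = init_plan.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([plan], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        s, energy = state, 0.0
        for t in range(plan.shape[0]):              # look ahead over the N stages
            s, reinf = model(s, plan[t])
            energy = energy + (1.0 - reinf) ** 2    # deviation from desired reinf = 1
        energy.backward()                           # backprop through the model chain
        opt.step()
    return plan.detach()                            # execute only the first action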
The gradients of the plan with respect to Ereinf can be computed either by backpropagation through the chain of models or by a feed-forward algorithm which is
related to [WZ88, TS90]:
Hand in hand with the activations we propagate also the gradients

e_{ij}^s(r) = ∂ activation_j(r) / ∂ action_i(s)    (1)

through the chain of models. Here i labels all action input units and j all units of
the whole model network, r (1 ≤ r ≤ N) is the time associated with the r-th model of
the chain, and s (1 ≤ s ≤ r) is the time of the s-th action. Thus, for each action (∀i, s)
its influence on later activations (∀j, ∀r > s) of the chain of networks, including all
predictions, is measured by e_{ij}^s(r).
It has been shown in an earlier paper that this gradient can easily be propagated
forward through the network [TML90]:
e_{ij}^s(r) =
  δ_{ij} δ_{rs}                                                if j is an action input unit
  0                                                            if r = 1 and j is a state/context input unit
  e_{ij′}^s(r − 1)                                             if r > 1 and j is a state/context input unit
                                                               (j′ the corresponding output unit of the preceding model)
  logistic′(net_j(r)) · Σ_{l∈pred(j)} weight_{jl} e_{il}^s(r)  otherwise    (2)
¹If an unknown world is to be explored, this action might be disturbed by adding a
small random variable.
The reinforcement energy to be minimized is defined as

E_reinf = ½ Σ_{τ=1}^{N} Σ_k g_k(τ) · (reinf*_k − activation_k(τ))²    (3)

(k numbers the reinforcement output units, reinf*_k is the desired reinforcement value, usually ∀k: reinf*_k = 1, and g_k weights the reinforcement with respect to τ and k; in the simplest case g_k(τ) = 1.) Since E_reinf is differentiable, we can compute the gradient of E_reinf with respect to each particular reinforcement prediction. From these
gradients and the gradients e_{ik}^s of the reinforcement prediction units, the gradients

∂E_reinf / ∂action_i(s) = − Σ_{τ=s}^{N} Σ_k g_k(τ) · (reinf*_k − activation_k(τ)) · e_{ik}^s(τ)    (4)

are derived, which indicate how to change the plan in order to minimize E_reinf.
Variable plan lengths: The feed-forward manner of the propagation allows it
to vary the number of look-ahead steps due to the current accuracy of the model
network. Intuitively, if a model network has a relatively large error, looking far
into future makes little sense. A good heuristic is to avoid further look-ahead if the
current linear error (due to the training patterns) of the model network is larger
than the effect of the first action of the plan on the current predictions. This effect
is exactly the gradients e_{ik}^1(τ). Using variable plan lengths might overcome the
difficulties in finding an appropriate plan length N a priori.
2.3 INITIAL PLANS - THE EXPERIENCE NETWORK
It remains to show how to obtain initial plans. There are several basic strategies
which are more or less problem-dependent, e.g. random, average over previous actions etc. Obviously, if some planning took place before, the problem of finding an
initial plan reduces to the problem of finding a simple action, since the rest of the
previous plan is a good candidate for the next initial plan.
A good way of finding this action is the experience network. This network is trained to predict the result of the planning procedure by observing the world's state
and, in the case of recurrent networks, the temporal context information from the
model network. The target values are the results of the planning procedure. Although the experience network is trained like a controller [Bar89], it is used in a
different way, since outcoming actions are further optimized by the planning procedure. Thus, even if the knowledge of the experience network lags behind the model
network's, the derived actions are optimized with respect to the "knowledge" of
the model network rather than the experience network. On the other hand, while
the optimization is gradually shifted into the experience network, planning can be
progressively shortened.
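A sketch of how such an experience network could be trained in this setup; the architecture, sizes, and learning rate are hypothetical:

import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 4, 2                        # example sizes
exp_net = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.Tanh(), nn.Linear(32, ACTION_DIM))
opt = torch.optim.SGD(exp_net.parameters(), lr=1e-2)

def update_experience(state, optimized_first_action):
    """After each planning episode, the optimized first action becomes a
    supervised target for the observed state."""
    opt.zero_grad()
    loss = ((exp_net(state) - optimized_first_action) ** 2).mean()
    loss.backward()
    opt.step()
    return loss.item()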
3 APPROACHING A ROLLING BALL WITH A ROBOT ARM
We applied planning with an adaptive world model to a simulation of a real-time
robotics task: A robot arm in 3-dimensional space was to approach a rolling ball.
Both hand position (i.e. x, y, z and hand angle) and ball position (i.e. x′, y′) were
observed by a camera system in workspace. Conversely, actions were defined as
angular changes of the robot joints in configuration space. Model and experience
[Figure 3 diagram: (a) network schematic; (b) X-Y-space plot marked with the current hand position, current ball position, previous ball position, and plans 1-10.]
Figure 3: (a) The recurrent model network (white) and the experience network (grey) at
the robotics task. (b) Planning: Starting with the initial plan 1, the approximation leads
finally to plan 10. The first action of this plan is then performed on the world.
networks are shown in Fig. 3a. Note that the ball movement was predicted by
a recurrent Elman-type network, since only the current ball position was visible
at any time. The arm prediction is mathematically more sophisticated, because
kinematics and inverse kinematics are required to solve it analytically.
The reason why planning makes sense at this task is that we did not want the robot
arm to minimize the distance between hand and ball at each step - this would
obviously yield trajectories in which the hand follows the ball, e.g.:
Figure 4: Basic strategy, the arm "follows" the ball.
Instead, we wanted the system to find short cuts by making predictions about the
ball's next movement. Thus, the reinforcement measured the distance in workspace.
Fig. 3b illustrates a "typical" planning process with look-ahead N = 4, 9 iterations,
g_k(τ) = 1.3^τ (c.f. (3))², a weighted stepsize η = 0.05 · 0.9^τ, and well-trained model
and experience networks. Starting with an initial plan 1 by the experience network,
the optimization led to plan 10. It is clear to see that the resulting action surpassed
the initial plan, which demonstrates the appropriateness of the optimization.
²This exponential function is crucial for minimizing later distances rather than sooner ones.
The final trajectory was:
Figure 5: Planning: the arm finds the short cut.
We were now interested in modifying the behavior of the arm. Without further
learning of either the model or the experience network, we wanted the arm to
approach the ball from above. For this purpose we changed the energy function (3):
Before the arm was to approach the ball, the energy was minimal if the arm reached
a position exactly above the ball. Since the experience network was not trained for
that task, we doubled the number of iteration steps. This led to:
Figure 6: The arm approaches from above due to a modified energy function.
A first implementation on a real robot arm with a camera system showed similar
results.
4 POLE BALANCING
Next, we applied our planning method to the pole balancing task adopted from
[And89]. One main difference to the task described above is the fact that gradient
descent is not applicable with binary reinforcement, since the better the approximation by the world model, the more the gradient vanishes. This effect can be
prevented by using a second model network with weight decay, which is trained
with the same training patterns. Weight decay smoothes the binary mapping. By
using the model network for prediction only and the smoothed network for gradient
propagation, the pole balancing problem became solvable. We see this as a general
technique for applying gradient descent to binary reinforcement tasks.
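One way to realize this two-network trick with automatic differentiation, as a sketch: states are rolled forward with the accurate model while the gradient path runs through the weight-decayed copy (a straight-through-style construction; the paper does not prescribe this exact mechanism):

import torch

def plan_gradient_binary(pred_model, smooth_model, state, plan):
    """Gradient of the reinforcement energy w.r.t. the plan, using the accurate
    model for predicted values and the smoothed model for the backward pass."""
    plan = plan.detach().clone().requires_grad_(True)
    s, energy = state, 0.0
    for t in range(plan.shape[0]):
        s_pred, _ = pred_model(s, plan[t])           # accurate (near-binary) forward
        s_smooth, reinf = smooth_model(s, plan[t])   # smooth surrogate carries gradient
        s = s_pred.detach() + (s_smooth - s_smooth.detach())
        energy = energy + (1.0 - reinf) ** 2
    energy.backward()
    return plan.grad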
We were especially interested in the dependency of look-ahead and the duration of
balance. It turned out that in most randomly chosen initial configurations of pole
and cart the look-ahead N = 4 was sufficient to balance the pole more than 20000
steps. If the cart is moved randomly, after on average 10 movements the pole falls.
5 DISCUSSION
The planning procedure presented in this paper has two crucial limitations. By using
a bounded look-ahead, effects of actions to reinforcement beyond this bound can
not be taken into account. Even if the plan lengths are kept variable (as described
above), each particular planning process must use a finite plan. Moreover, using
gradient descent as search heuristic implies the danger of getting stuck in local
minima. It might be interesting to investigate other search heuristics.
On the other hand this planning algorithm overcomes certain problems of adaptive controller networks, namely: a) The training is relatively fast, since the model
network does not include temporal effects. b) Decisions are optimized due to the
current "knowledge" in the system, and no controller lags behind the model network. c) The incorporation of additional constraints to the objective function at
runtime is possible, as demonstrated. d) By using a probabilistic experience network the planning algorithm is able to act as a non-deterministic many-to-many
controller. Anyway, we have not investigated the latter point yet.
Acknowledgements
The authors thank Jörg Kindermann and Frank Smieja for many fruitful discussions
and Michael Contzen and Michael Faßbender for their help with the robot arm.
References
[And89] C.W. Anderson. Learning to control an inverted pendulum using neural
networks. IEEE Control Systems Magazine, 9(3):31-37, 1989.
[Bar89] A. G. Barto. Connectionist learning for control: An overview. Technical
Report COINS TR 89-89, Dept. of Computer and Information Science,
University of Massachusetts, Amherst, MA, September 1989.
[Jor89] M. I. Jordan. Generic constraints on unspecified target constraints. In
Proceedings of the First International Joint Conference on Neural Networks, Washington, DC, San Diego, 1989. IEEE TAB NN Committee.
[Mun87] P. Munro. A dual backpropagation scheme for scalar-reward learning. In
Ninth Annual Conference of the Cognitive Science Society, pages 165-176,
Hillsdale, NJ, 1987. Cognitive Science Society, Lawrence Erlbaum.
[Sut84] R. S. Sutton. Temporal Credit Assignment in Reinforcement Learning.
PhD thesis, University of Massachusetts, 1984.
[TML90] S. Thrun, K. Möller, and A. Linden. Adaptive look-ahead planning. In
G. Dorffner, editor, Proceedings KONNAI/ÖGAI, Springer, Sept. 1990.
[TS90] S. Thrun and F. Smieja. A general feed-forward algorithm for gradient-descent in connectionist networks. TR 483, GMD, FRG, Nov. 1990.
[WZ88] R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. TR ICS Report 8805, Institute for
Cognitive Science, University of California, San Diego, CA, 1988.
2,924 | 3,650 | Zero-Shot Learning with Semantic Output Codes
Dean Pomerleau
Intel Labs
Pittsburgh, PA 15213
[email protected]
Mark Palatucci
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Tom M. Mitchell
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Geoffrey Hinton
Computer Science Department
University of Toronto
Toronto, Ontario M5S 3G4, Canada
[email protected]
Abstract
We consider the problem of zero-shot learning, where the goal is to learn a classifier f : X → Y that must predict novel values of Y that were omitted from
the training set. To achieve this, we define the notion of a semantic output code
classifier (SOC) which utilizes a knowledge base of semantic properties of Y to
extrapolate to novel classes. We provide a formalism for this type of classifier
and study its theoretical properties in a PAC framework, showing conditions under which the classifier can accurately predict novel classes. As a case study, we
build a SOC classifier for a neural decoding task and show that it can often predict
words that people are thinking about from functional magnetic resonance images
(fMRI) of their neural activity, even without training examples for those words.
1 Introduction
Machine learning algorithms have been successfully applied to learning classifiers in many domains
such as computer vision, fraud detection, and brain image analysis. Typically, classifiers are trained
to approximate a target function f : X → Y, given a set of labeled training data that includes all
possible values for Y , and sometimes additional unlabeled training data.
Little research has been performed on zero-shot learning, where the possible values for the class
variable Y include values that have been omitted from the training examples. This is an important
problem setting, especially in domains where Y can take on many values, and the cost of obtaining
labeled examples for all values is high. One obvious example is computer vision, where there are
tens of thousands of objects which we might want a computer to recognize.
Another example is in neural activity decoding, where the goal is to determine the word or object
a person is thinking about by observing an image of that person?s neural activity. It is intractable
to collect neural training images for every possible word in English, so to build a practical neural
decoder we must have a way to extrapolate to recognizing words beyond those in the training set.
This problem is similar to the challenges of automatic speech recognition, where it is desirable to
recognize words without explicitly including them during classifier training. To achieve vocabulary
independence, speech recognition systems typically employ a phoneme-based recognition strategy
(Waibel, 1989). Phonemes are the component parts which can be combined to construct the words
of a language. Speech recognition systems succeed by leveraging a relatively small set of phoneme
recognizers in conjunction with a large knowledge base representing words as combinations of
phonemes.
To apply a similar approach to neural activity decoding, we must discover how to infer the component parts of a word's meaning from neural activity. While there is no clear consensus as to how the
brain encodes semantic information (Plaut, 2002), there are several proposed representations that
might serve as a knowledge base of neural activity, thus enabling a neural decoder to recognize a
large set of possible words, even when those words are omitted from a training set.
The general question this paper asks is:
Given a semantic encoding of a large set of concept classes, can we build a classifier to
recognize classes that were omitted from the training set?
We provide a formal framework for addressing this question and a concrete example for the task
of neural activity decoding. We show it is possible to build a classifier that can recognize words a
person is thinking about, even without training examples for those particular words.
1.1 Related Work
The problem of zero-shot learning has received little attention in the machine learning community.
Some work by Larochelle et al. (2008) on zero-data learning has shown the ability to predict novel
classes of digits that were omitted from a training set. In computer vision, techniques for sharing
features across object classes have been investigated (Torralba & Murphy, 2007; Bart & Ullman,
2005) but relatively little work has focused on recognizing entirely novel classes, with the exception
of Lampert et al. (2009) predicting visual properties of new objects and Farhadi et al. (2009) using
visual property predictions for object recognition.
In the neural imaging community, Kay et al. (2008) has shown the ability to decode (from visual
cortex activity) which novel visual scenes a person is viewing from a large set of possible images,
but without recognizing the image content per se.
The work most similar to our own is Mitchell (2008). They use semantic features derived from
corpus statistics to generate a neural activity pattern for any noun in English. In our work, by
contrast, we focus on word decoding, where given a novel neural image, we wish to predict the
word from a large set of possible words. We also consider semantic features that are derived from
human labeling in addition to corpus statistics. Further, we introduce a formalism for a zero-shot
learner and provide theoretical guarantees on its ability to recognize novel classes omitted from a
training set.
2 Classification with Semantic Knowledge
In this section we formalize the notion of a zero-shot learner that uses semantic knowledge to extrapolate to novel classes. While a zero-shot learner could take many forms, we present one such model
that utilizes an intermediate set of features derived from a semantic knowledge base. Intuitively,
our goal is to treat each class not as simply an atomic label, but instead represent it using a vector
of semantic features characterizing a large number of possible classes. Our models will learn the
relationship between input data and the semantic features. They will use this learned relationship in
a two step prediction procedure to recover the class label for novel input data. Given new input data,
the models will predict a set of semantic features corresponding to that input, and then find the class
in the knowledge base that best matches that set of predicted features. Significantly, this procedure
will even work for input data from a novel class if that class is included in the semantic knowledge
base (i.e. even if no input space representation is available for the class, but a feature encoding of it
exists in the semantic knowledge base).
2.1 Formalism
Definition 1. Semantic Feature Space
A semantic feature space of p dimensions is a metric space in which each of the p dimensions
encodes the value of a semantic property. These properties may be categorical in nature or may
contain real-valued data.
As an example, consider a semantic space for describing high-level properties of animals. In this
example, we'll consider a small space with only p = 5 dimensions. Each dimension encodes a
binary feature: is it furry? does it have a tail? can it breathe underwater? is it carnivorous? is it
slow moving? In this semantic feature space, the prototypical concept of dog might be represented
as the point {1, 1, 0, 1, 0}.
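As a toy illustration of such an encoding (the entries besides dog are made up for the example), the second-stage lookup reduces to a nearest-neighbor search over the knowledge base:

# Features: (furry, has tail, breathes underwater, carnivorous, slow moving)
knowledge_base = {
    "dog":    (1, 1, 0, 1, 0),
    "shark":  (0, 1, 1, 1, 0),
    "turtle": (0, 1, 1, 0, 1),
}

def nearest_label(predicted):
    # 1-nearest neighbor under Hamming distance over the knowledge base
    return min(knowledge_base,
               key=lambda y: sum(a != b for a, b in zip(knowledge_base[y], predicted)))

print(nearest_label((1, 1, 0, 1, 1)))  # -> dog, despite one mispredicted feature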
Definition 2. Semantic Knowledge Base
A semantic knowledge base K of M examples is a collection of pairs {f, y}1:M such that f ∈ F^p
is a point in a p-dimensional semantic space F^p and y ∈ Y is a class label from a set Y. We assume
a one-to-one encoding between class labels and points in the semantic feature space.
A knowledge base of animals would contain the semantic encoding and label for many animals.
Definition 3. Semantic Output Code Classifier
A semantic output code classifier H : X^d → Y maps points from some d-dimensional raw-input
space X^d to a label from a set Y such that H is the composition of two other functions, S and L,
such that:

H = L(S(·)),    S : X^d → F^p,    L : F^p → Y.
This model of a zero-shot classifier first maps from a d-dimensional raw-input space X^d into a
semantic space of p dimensions F^p, and then maps this semantic encoding to a class label. For
example, we may imagine some raw-input features from a digital image of a dog first mapped into
the semantic encoding of a dog described earlier, which is then mapped to the class label dog. As
a result, our class labels can be thought of as a semantic output code, similar in spirit to the error-correcting output codes of Dietterich and Bakiri (1995).
As part of its training input, this classifier is given a set of N examples D that consists of pairs
{x, y}1:N such that x ∈ X^d and y ∈ Y. The classifier is also given a knowledge base K of M
examples that is a collection of pairs {f, y}1:M such that f ∈ F^p and y ∈ Y. Typically, M ≫ N,
meaning that data in semantic space is available for many more class labels than in the raw-input
space. Thus,
A semantic output code classifier can be useful when the knowledge base K covers more of the
possible values for Y than are covered by the input data D.
To learn the mapping S, the classifier first builds a new set of N examples {x, f }1:N by replacing
each y with the respective semantic encoding f according to its knowledge base K.
The intuition behind using this two-stage process is that the classifier may be able to learn the
relationship between the raw-input space and the individual dimensions of the semantic feature
space from a relatively small number of training examples in the input space. When a new example
is presented, the classifier will make a prediction about its semantic encoding using the learned S
map. Even when a new example belongs to a class that did not appear in the training set D, if the
prediction produced by the S map is close to the true encoding of that class, then the L map will
have a reasonable chance of recovering the correct label. As a concrete example, if the model can
predict the object has fur and a tail, it would have a good chance of recovering the class label dog,
even without having seen images of dogs during training. In short:
By using a rich semantic encoding of the classes, the classifier may be able to extrapolate and
recognize novel classes.
3 Theoretical Analysis
In this section we consider theoretical properties of a semantic output code classifier that determine
its ability to recognize instances of novel classes. In other words, we will address the question:
Under what conditions will the semantic output code classifier recognize examples from classes
omitted from its training set?
In answering this question, our goal is to obtain a PAC-style bound: we want to know how much
error can be tolerated in the prediction of the semantic properties while still recovering the novel
class with high probability. We will then use this error bound to obtain a bound on the number of
examples necessary to achieve that level of error in the first stage of the classifier. The idea is that
if the first stage S(·) of the classifier can predict the semantic properties well, then the second stage
L(·) will have a good chance of recovering the correct label for instances from novel classes.
As a first step towards a general theory of zero-shot learning, we will consider one instantiation
of a semantic output code classifier. We will assume that semantic features are binary labels, the
first stage S(?) is a collection of PAC-learnable linear classifiers (one classifier per feature), and the
second stage L(?) is a 1-nearest neighbor classifier using the Hamming distance metric. By making
these assumptions, we can leverage existing PAC theory for linear classifiers as well as theory for
approximate nearest neighbor search. Much of our nearest-neighbor analysis parallels the work of
Ciaccia and Patella (2000).
We first want to bound the amount of error we can tolerate given a prediction of semantic features.
To find this bound, we define F to be the distribution in semantic feature space of points from the
knowledge base K. Clearly points (classes) in semantic space may not be equidistant from each
other. A point might be far from others, which would allow more room for error in the prediction of
semantic features for this point, while maintaining the ability to recover its unique identity (label).
Conversely, a point close to others in semantic space will have lower tolerance of error. In short, the
tolerance for error is relative to a particular point in relation to other points in semantic space.
We next define a prediction q to be the output of the S(·) map applied to some raw-input example
x ∈ X^d. Let d(q, q′) be the distance between the prediction q and some other point q′ representing
a class in the semantic space. We define the relative distribution R_q for a point q as the probability
that the distance from q to q′ is less than some distance z:

R_q(z) = P(d(q, q′) ≤ z).

This empirical distribution depends on F and is just the fraction of sampled points from F that are
less than some distance z away from q. Using this distribution, we can also define a distribution on
the distance to the nearest neighbor of q, defined as τ_q:

G_q(z) = P(τ_q ≤ z),

which is given in Ciaccia (2000) as:

G_q(z) = 1 − (1 − R_q(z))^n,
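These quantities are easy to estimate empirically. A sketch with Hamming distances and an assumed uniform binary feature distribution (the sizes are illustrative):

import numpy as np

rng = np.random.default_rng(0)
p, n = 218, 1000                              # feature dimension, knowledge-base size
F = rng.integers(0, 2, size=(n, p))           # sampled class encodings from F
q = rng.integers(0, 2, size=p)                # a prediction in semantic space

dists = (F != q).sum(axis=1)                  # Hamming distances d(q, q')
R_q = lambda z: (dists <= z).mean()           # empirical relative distribution
G_q = lambda z: 1 - (1 - R_q(z)) ** n         # nearest-neighbor distance distribution
G_inv = lambda delta: max((z for z in range(p + 1) if G_q(z) <= delta), default=0)
print(G_inv(0.05))                            # largest tolerable prediction error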
where n is the number of actual points drawn from the distribution F. Now suppose that we define
ε_q to be the distance a prediction q for raw-input example x is from the true semantic encoding of
the class to which x belongs. Intuitively, the class we infer for input x is going to be the point closest
to prediction q, so we want a small probability δ that the distance ε_q to the true class is larger than
the distance between q and its nearest neighbor, since that would mean there is a spurious neighbor
closer to q in semantic space than the point representing q's true class:

P(ε_q ≥ τ_q) ≤ δ.

Rearranging, we can put this in terms of the distribution G_q and then solve for ε_q:

P(ε_q ≥ τ_q) ≤ δ  ⟹  G_q(ε_q) ≤ δ.

If G_q(·) were invertible, we could immediately recover the value ε_q for a desired δ. For some
distributions, G_q(·) may not be a 1-to-1 function, so there may not be an inverse. But G_q(·) will
never decrease since it is a cumulative distribution function. We will therefore define a function G_q⁻¹
such that:

G_q⁻¹(δ) = argmax_{τ_q} [ G_q(τ_q) ≤ δ ].

So using nearest neighbor for L(·), if ε_q ≤ G_q⁻¹(δ), then we will recover the correct class with
at least 1 − δ probability. To ensure that we achieve this error bound, we need to make sure the
total error of S(·) is less than G_q⁻¹(δ), which we define as ε_q^max. We assume in this analysis
that we have p binary semantic features and a Hamming distance metric, so ε_q^max defines the total
number of mistakes we can make predicting the binary features. Note with our assumptions, each
semantic feature is PAC-learnable using a linear classifier from a d-dimensional raw input space. To
simplify the analysis, we will treat each of the p semantic features as independently learned. By the
PAC assumption, the true error (i.e. probability of the classifier making a mistake) of each of the p
learned hypotheses is ε; then the expected number of mistakes over the p semantic features will be
ε_q^max if we set ε = ε_q^max/p. Further, the probability of making at most ε_q^max mistakes is given by
the binomial distribution: BinoCDF(ε_q^max; p, ε_q^max/p).
We can obtain the desired error rate for each hypothesis by utilizing the standard PAC bound for
VC-dimension¹ (Mitchell, 1997). To obtain a hypothesis with (1 − γ) probability that has true error
at most ε = ε_q^max/p = G_q⁻¹(δ)/p, the classifier requires a number of examples M_{q,δ}:

M_{q,δ} ≥ (p / ε_q^max) · [ 4 log(2/γ) + 8(d + 1) log(13p/ε_q^max) ].    (1)
If each of the p classifiers (feature predictors) is learned with this many examples, then with probability (1 − γ)^p, all feature predictors will achieve the desired error rate. But note that this is only
the probability of achieving p hypotheses with the desired true error rate. The binomial CDF yields
the probability of making at most ε_q^max mistakes total, and the (1 − δ) term above specifies the
probability of recovering the true class if a maximum of this many mistakes were made. Therefore,
there are three probabilistic events required for the semantic output code classifier to predict a novel
class, and the total (joint) probability of these events is:

P(there are p feature predictors with true error ≤ ε_q^max/p) ×
P(at most ε_q^max mistakes made | there are p feature predictors with true error ≤ ε_q^max/p) ×
P(recovering true class | at most ε_q^max mistakes made),

and since ε_q^max = G_q⁻¹(δ), the total probability is given by:

(1 − γ)^p · BinoCDF(G_q⁻¹(δ); p, G_q⁻¹(δ)/p) · (1 − δ).    (2)

In summary, given desired error parameters (1 − γ) and (1 − δ) for the two classifier stages, Equation
2 provides the total probability of correctly predicting a novel class. Given the value for δ we can
compute the necessary ε for each feature predictor. We are guaranteed to obtain the total probability
if the feature predictors were trained with M_{q,δ} raw-input examples as specified in Equation 1.
To our knowledge, Equations 1 and 2 specify the first formal guarantee that provides conditions
under which a classifier can predict novel classes.
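Plugging illustrative numbers into Equations 1 and 2 shows how the pieces combine; ε_q^max = 10, γ = 0.001, and δ = 0.05 are assumptions for the example, not values from the analysis above:

import numpy as np
from scipy.stats import binom

p, d = 218, 500                          # semantic features, raw-input dimensions
gamma, delta, eps_max = 1e-3, 0.05, 10   # assume G_q^{-1}(delta) = 10 mistakes

# Equation 1: examples needed per feature predictor (log base follows the text)
M = (p / eps_max) * (4 * np.log(2 / gamma) + 8 * (d + 1) * np.log(13 * p / eps_max))
# Equation 2: total probability of predicting the novel class
total = (1 - gamma) ** p * binom.cdf(eps_max, p, eps_max / p) * (1 - delta)
print(f"M_q,delta >= {M:,.0f} examples, total success probability = {total:.2f}")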
4 Case Study: Neural Decoding of Novel Thoughts
In this section we empirically evaluate a semantic output code classifier on a neural decoding task.
The objective is to decode novel words a person is thinking about from fMRI images of the person's
neural activity, without including example fMRI images of those words during training.
4.1 Datasets
We utilized the same fMRI dataset from Mitchell (2008). This dataset contains the neural activity
observed from nine human participants while viewing 60 different concrete words (5 examples from
12 different categories). Some examples include animals: bear, dog, cat, cow, horse
and vehicles: truck, car, train, airplane, bicycle. Each participant was shown a
word and a small line drawing of the concrete object the word represents. The participants were
asked to think about the properties of these objects for several seconds while images of their brain
activity were recorded.
Each image measures the neural activity at roughly 20,000 locations (i.e. voxels) in the brain. Six
fMRI scans were taken for each word. We used the same time-averaging described in Mitchell
(2008) to create a single average brain activity pattern for each of the 60 words, for each participant.
¹The VC dimension of linear classifiers in d dimensions is d + 1.
In the language of the semantic output code classifier, this dataset represents the collection D of
raw-input space examples.
We also collected two semantic knowledge bases for these 60 words. In the first semantic knowledge base, corpus5000, each word is represented as a co-occurrence vector with the 5000 most
frequent words from the Google Trillion-Word-Corpus².
The second semantic knowledge base, human218, was created using the Mechanical Turk human
computation service from Amazon.com. There were 218 semantic features collected for the 60
words, and the questions were selected to reflect psychological conjectures about neural activity
encoding. For example, the questions related to size, shape, surface properties, and typical usage.
Example questions include is it manmade? and can you hold it?. Users of the Mechanical Turk
service answered these questions for each word on a scale of 1 to 5 (definitely not to definitely yes).
4.2 Model
In our experiments, we use multiple output linear regression to learn the S(·) map of the semantic
output code classifier. Let X ∈ ℝ^{N×d} be a training set of fMRI examples where each row is the
image for a particular word and d is the number of dimensions of the fMRI image. During training,
we use the voxel-stability-criterion that does not use the class labels described in Mitchell (2008) to
reduce d from about 20,000 voxels to 500. Let Y ∈ ℝ^{N×p} be a matrix of semantic features for those
words (obtained from the knowledge base K) where p is the number of semantic features for that
word (e.g. 218 for the human218 knowledge base). We learn a matrix of weights Ŵ ∈ ℝ^{d×p}. In
this model, each output is treated independently, so we can solve all of them quickly in one matrix
operation (even with thousands of semantic features):

Ŵ = (XᵀX + λI)⁻¹ XᵀY,    (3)

where I is the identity matrix and λ is a regularization parameter chosen automatically using the
cross-validation scoring function (Hastie et al., 2001).³ Given a novel fMRI image x, we can obtain
a prediction f̂ of the semantic features for this image by multiplying the image by the weights:

f̂ = x Ŵ.

For the second stage of the semantic output code classifier, L(·), we simply use a 1-nearest neighbor
classifier. In other words, L(f̂) will take the prediction of features and return the closest point in a
given knowledge base according to the Euclidean distance (L₂) metric.
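The whole two-stage pipeline fits in a few lines of numpy; this is a sketch of the computation just described, not the authors' code:

import numpy as np

def fit_weights(X, Y, lam=1.0):
    """Stage S: multiple-output ridge regression, Eq. (3)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def decode(x, W, kb_features, kb_labels):
    """Stage L: 1-nearest neighbor on predicted semantic features (Euclidean)."""
    f_hat = x @ W
    dists = ((kb_features - f_hat) ** 2).sum(axis=1)
    return kb_labels[int(np.argmin(dists))]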
4.3 Experiments
Using the model and datasets described above, we now pose and answer three important questions.
1. Can we build a classifier to discriminate between two classes, where neither class appeared in
the training set?
To answer this question, we performed a leave-two-out-cross-validation. Specifically, we trained the
model in Equation 3 to learn the mapping between 58 fMRI images and the semantic features for
their respective words. For the first held out image, we applied the learned weight matrix to obtain
a prediction of the semantic features, and then we used a 1-nearest neighbor classifier to compare
the vector of predictions to the true semantic encodings of the two held-out words. The label was
chosen by selecting the word with the encoding closest to the prediction for the fMRI image. We
then performed the same test using the second held-out image. Thus, for each iteration of the cross-validation, two separate comparisons were made. This process was repeated for all (60 choose 2) = 1,770
possible leave-two-out combinations, leading to 3,540 total comparisons.
Table 1 shows the results for two different semantic feature encodings. We see that the human218
semantic features significantly outperformed the corpus5000 features, with mean accuracies over
the nine participants of 80.9% and 69.7% respectively. But for both feature sets, we see that it is
possible to discriminate between two novel classes for each of the nine participants.
²Vectors are normalized to unit length and do not include 100 stop words like a, the, is.
³We compute the cross-validation score for each task and choose the parameter that minimizes the average loss across all output tasks.
Table 1: Percent accuracies for leave-two-out-cross-validation for 9 fMRI participants (labeled P1-P9). The values represent classifier percentage accuracy over 3,540 trials when discriminating between two fMRI images, both of which were omitted from the training set.
            P1    P2    P3    P4    P5    P6    P7    P8    P9    Mean
corpus5000  79.6  67.0  69.5  56.2  77.7  65.5  71.2  72.9  67.9  69.7
human218    90.3  82.9  86.6  71.9  89.5  75.3  78.0  77.7  76.2  80.9

[Figure 1 chart: answer values (-0.10 to 0.10) for Bear Target, Bear Predicted, Dog Target, and Dog Predicted on ten questions: is it an animal? is it man-made? do you see it daily? is it helpful? can you hold it? would you find it in a house? do you love it? does it stand on two legs? is it wild? does it provide protection?]
Figure 1: Ten semantic features from the human218 knowledge base for the words bear and dog.
The true encoding is shown along with the predicted encoding when fMRI images for bear and dog
were left out of the training set.
2. How is the classifier able to discriminate between closely related novel classes?
Figure 1 shows ten semantic questions (features) from the human218 dataset. The graph shows the
true values along with the predicted feature values for both bear and dog when trained on the other
58 words. We see the model is able to learn to predict many of the key features that bears and dogs
have in common such as is it an animal? as well as those that differentiate between the two, such as
do you see it daily? and can you hold it? For both of these novel words, the features predicted from
the neural data were closest to the true word.
3. Can we decode the word from a large set of possible words?
Given the success of the semantic output code classifier at discriminating between the brain images
for two novel words, we now consider the much harder problem of discriminating a novel word from
a large set of candidate words. To test this ability, we performed a leave-one-out-cross-validation,
where we trained using Equation 3 on images and semantic features for 59 words. We then predicted the features for the held-out image of the 60th word, and then performed a 1-nearest neighbor
classification in a large set of candidate words.
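This procedure, together with the rank accuracy reported below, can be sketched as follows. The helper is illustrative: candidates holds the semantic encodings of the full candidate set (with the held-out word's encoding included), true_index[i] gives the position of fold i's held-out word in that set, and rank accuracy is taken as 1 - (rank - 1)/(N - 1), one common rescaling of the 1-based rank to [0, 1].

    # Leave-one-out decoding against a large candidate set: predict the
    # held-out word's features, then rank all candidates by distance.
    import numpy as np
    from sklearn.linear_model import Ridge

    def leave_one_out_rank_accuracy(X, S, candidates, true_index):
        n = X.shape[0]
        accs = []
        for i in range(n):
            train = [k for k in range(n) if k != i]
            model = Ridge(alpha=1.0).fit(X[train], S[train])
            pred = model.predict(X[i:i + 1])[0]
            dists = np.linalg.norm(candidates - pred, axis=1)
            rank = 1 + np.sum(dists < dists[true_index[i]])   # 1 = closest
            accs.append(1.0 - (rank - 1) / (candidates.shape[0] - 1))
        return np.mean(accs), np.median(accs)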
We tested two different word sets. The first was mri60 which is the collection of all 60 concrete
nouns for which we collected fMRI data, including the 59 training words and the single held out
word. The second set was noun940, a collection of 940 English nouns with high familiarity,
concreteness, and imageability, compiled from Wilson (1988) and Snodgrass (1980). For this set
of words, we added the true held-out word to the set of 940 on each cross-validation iteration. We
performed this experiment using both the corpus5000 and human218 feature sets. The rank
accuracy results (over 60 cross-validation iterations) of the four experiments are shown in Figure 2.
The human218 features again significantly outperform corpus5000 on both mean and median
rank accuracy measures, and both feature sets perform well above chance. On 12 of 540 total
presentations of the mri60 words (60 presentations for each of nine participants), the human218
features predicted the single held-out word above all 59 other words in its training set. While just
a bit above chance level (9/540), the fact that the model ever chooses the held-out word over all
the training words is noteworthy since the model is undoubtedly biased towards predicting feature
values similar to the words on which it was trained. On the noun940 words, the model predicted
the correct word from the set of 941 alternatives a total of 26 times for the human218 features and
22 times for the corpus5000 features. For some subjects, the model correctly picked the right
word from the set of 941 more than 10% of the time. The chance accuracy of predicting a word
correctly is only 0.1%, meaning we would expect less than one correct prediction across all 540
presentations.
[Figure 2 bar chart: rank accuracy (vertical axis, 40%-100%) for the corpus5000 and human218
feature sets on the mri60 and noun940 word sets, showing mean rank and median rank, with the
chance level marked.]
Figure 2: The mean and median rank accuracies across nine participants for two different semantic
feature sets. Both the original 60 fMRI words and a set of 940 nouns were considered.
Table 2: The top five predicted words for a novel fMRI image taken for the word in bold (all fMRI
images taken from participant P1). The number in parentheses is the rank of the correct
word when selected from the 941 concrete nouns in English.
Bear (1)    Foot (1)   Screwdriver (1)   Train (1)   Truck (2)   Celery (5)   House (6)      Pants (21)
bear        foot       screwdriver       train       jeep        beet         supermarket    clothing
fox         feet       pin               jet         truck       artichoke    hotel          vest
wolf        ankle      nail              jail        minivan     grape        theater        t-shirt
yak         knee       wrench            factory     bus         cabbage      school         clothes
gorilla     face       dagger            bus         sedan       celery       factory        panties
As Figure 2 shows, the median rank accuracies are often significantly higher than the mean rank
accuracies. Using the human218 features on the noun940 words, the median rank accuracy is
above 90% for each participant, while the mean is typically about 10% lower. This is because
several words are consistently predicted poorly. Words in the categories animals, body parts,
foods, tools, and vehicles are typically predicted well, while words in the categories furniture,
man-made items, and insects are often predicted poorly.
Even when the correct word is not the closest match, the words that best match the predicted features
are often very similar to the held-out word. Table 2 shows the top five predicted words for eight
different held-out fMRI images for participant P1 (i.e. the 5 closest words in the set of 941 to the
predicted vector of semantic features).
5 Conclusion
We presented a formalism for a zero-shot learning algorithm known as the semantic output code
classifier. This classifier can predict novel classes that were omitted from a training set by leveraging
a semantic knowledge base that encodes features common to both the novel classes and the training
set. We also proved the first formal guarantee that shows conditions under which this classifier will
predict novel classes.
We demonstrated this semantic output code classifier on the task of neural decoding using semantic
knowledge bases derived from both human labeling and corpus statistics. We showed this classifier
can predict the word a person is thinking about from a recorded fMRI image of that person's neural
activity with accuracy much higher than chance, even when training examples for that particular
word were omitted from the training set and the classifier was forced to pick the word from among
nearly 1,000 alternatives.
We have shown that training images of brain activity are not required for every word we would like
a classifier to recognize. These results significantly advance the state-of-the-art in neural decoding
and are a promising step towards a large vocabulary brain-computer interface.
References
Bart, E., & Ullman, S. (2005). Cross-generalization: learning novel classes from a single example
by feature replacement. Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE
Computer Society Conference on, 1, 672-679 vol. 1.
Ciaccia, P., & Patella, M. (2000). PAC nearest neighbor queries: Approximate and controlled search
in high-dimensional and metric spaces. Data Engineering, International Conference on, 244.
Dietterich, T. G., & Bakiri, G. (1995). Solving multiclass learning problems via error-correcting
output codes. Journal of Artificial Intelligence Research.
Farhadi, A., Endres, I., Hoiem, D., & Forsyth, D. (2009). Describing objects by their attributes.
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR).
Hastie, T., Tibshirani, R., & Friedman, J. H. (2001). The elements of statistical learning. Springer.
Kay, K. N., Naselaris, T., Prenger, R. J., & Gallant, J. L. (2008). Identifying natural images from
human brain activity. Nature, 452, 352-355.
Lampert, C. H., Nickisch, H., & Harmeling, S. (2009). Learning to detect unseen object classes
by between-class attribute transfer. Proceedings of the IEEE Computer Society Conference on
Computer Vision and Pattern Recognition (CVPR).
Larochelle, H., Erhan, D., & Bengio, Y. (2008). Zero-data learning of new tasks. AAAI Conference
on Artificial Intelligence.
Mitchell, T., et al. (2008). Predicting human brain activity associated with the meanings of nouns.
Science, 320, 1191-1195.
Mitchell, T. M. (1997). Machine learning. New York: McGraw-Hill.
Mitchell, T. M., Hutchinson, R., Niculescu, R. S., Pereira, F., Wang, X., Just, M., & Newman, S.
(2004). Learning to decode cognitive states from brain images. Machine Learning, 57, 145-175.
Plaut, D. C. (2002). Graded modality-specific specialization in semantics: A computational account
of optic aphasia. Cognitive Neuropsychology, 19, 603-639.
Snodgrass, J., & Vanderwart, M. (1980). A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity and visual complexity. Journal of Experimental Psychology:
Human Learning and Memory, 174-215.
Torralba, A., & Murphy, K. P. (2007). Sharing visual features for multiclass and multiview object
detection. IEEE Trans. Pattern Anal. Mach. Intell., 29, 854-869.
van der Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine
Learning Research, 9(Nov), 2579-2605.
Waibel, A. (1989). Modular construction of time-delay neural networks for speech recognition.
Neural Computation, 1, 39-46.
Wilson, M. (1988). The MRC psycholinguistic database: Machine readable dictionary, version 2.
Behavioral Research Methods, 6-11.
2,925 | 3,651 | Learning Brain Connectivity of Alzheimer's
Disease from Neuroimaging Data
Shuai Huang 1, Jing Li 1, Liang Sun 2,3, Jun Liu 2,3, Teresa Wu 1, Kewei Chen 4,
Adam Fleisher 4, Eric Reiman 4, Jieping Ye 2,3
1
Industrial Engineering, 2Computer Science and Engineering, and 3 Center for Evolutionary
Functional Genomics, The Biodesign Institute, Arizona State University, Tempe, USA
{shuang31, jing.li.8, sun.liang, j.liu, teresa.wu, jieping.ye}@asu.edu
4
Banner Alzheimer's Institute and Banner PET Center, Banner Good Samaritan Medical
Center, Phoenix, USA
{kewei.chen , adam.fleisher, eric.reiman}@bannerhealth.com
Abstract
Recent advances in neuroimaging techniques provide great potentials for
effective diagnosis of Alzheimer's disease (AD), the most common form of
dementia. Previous studies have shown that AD is closely related to the
alteration of the functional brain network, i.e., the functional connectivity
among different brain regions. In this paper, we consider the problem of
learning functional brain connectivity from neuroimaging, which holds
great promise for identifying image-based markers used to distinguish
Normal Controls (NC), patients with Mild Cognitive Impairment (MCI),
and patients with AD.
More specifically, we study sparse inverse
covariance estimation (SICE), also known as exploratory Gaussian
graphical models, for brain connectivity modeling. In particular, we apply
SICE to learn and analyze functional brain connectivity patterns from
different subject groups, based on a key property of SICE, called the
"monotone property" we established in this paper. Our experimental results
on neuroimaging PET data of 42 AD, 116 MCI, and 67 NC subjects reveal
several interesting connectivity patterns consistent with literature findings,
and also some new patterns that can help the knowledge discovery of AD.
1 Introduction
Alzheimer's disease (AD) is a fatal, neurodegenerative disorder characterized by progressive
impairment of memory and other cognitive functions. It is the most common form of
dementia and currently affects over five million Americans; this number will grow to as
many as 14 million by year 2050. The current knowledge about the cause of AD is very
limited; clinical diagnosis is imprecise with definite diagnosis only possible by autopsy;
also, there is currently no cure for AD, while most drugs only alleviate the symptoms.
To tackle these challenging issues, the rapidly advancing neuroimaging techniques provide
great potentials. These techniques, such as MRI, PET, and fMRI, produce data (images) of
brain structure and function, making it possible to identify the difference between AD and
normal brains. Recent studies have demonstrated that neuroimaging data provide more
sensitive and consistent measures of AD onset and progression than conventional clinical
assessment and neuropsychological tests [1].
Recent studies have found that AD is closely related to the alteration of the functional brain
network, i.e., the functional connectivity among different brain regions [ 2]-[3]. Specifically,
it has been shown that functional connectivity substantially decreases between the
hippocampus and other regions of AD brains [3]-[4]. Also, some studies have found
increased connectivity between the regions in the frontal lobe [ 6]-[7].
Learning functional brain connectivity from neuroimaging data holds great promise for
identifying image-based markers used to distinguish among AD, MCI (Mild Cognitive
Impairment), and normal aging. Note that MCI is a transition stage from normal aging to
AD. Understanding and precise diagnosis of MCI have significant clinical value since it can
serve as an early warning sign of AD. Despite all these, existing research in functional brain
connectivity modeling suffers from limitations. A large body of functional connectivity
modeling has been based on correlation analysis [2]-[3], [5]. However, correlation only
captures pairwise information and fails to provide a complete account for the interaction of
many (more than two) brain regions. Other multivariate statistical methods have also been
used, such as Principal Component Analysis (PCA) [8], PCA-based Scaled Subprofile Model
[9], Independent Component Analysis [10]-[11], and Partial Least Squares [12]-[13], which
group brain regions into latent components. The brain regions within each component are
believed to have strong connectivity, while the connectivity between components is weak.
One major drawback of these methods is that the latent components may not correspond to
any biological entities, causing difficulty in interpretation. In addition, graphical models
have been used to study brain connectivity, such as structural equation models [14]-[15],
dynamic causal models [16], and Granger causality. However, most of these approaches are
confirmatory, rather than exploratory, in the sense that they require a prior model of brain
connectivity to begin with. This makes them inadequate for studying AD brain connectivity,
because there is little prior knowledge about which regions should be involved and how they
are connected. This makes exploratory models highly desirable.
In this paper, we study sparse inverse covariance estimation (SICE), also known as
exploratory Gaussian graphical models, for brain connectivity modeling. Inverse covariance
matrix has a clear interpretation that the off-diagonal elements correspond to partial
correlations, i.e., the correlation between each pair of brain regions given all other regions.
This provides a much better model for brain connectivity than simple correlation analysis
which models each pair of regions without considering other regions. Also, imposing
sparsity on the inverse covariance estimation ensures a reliable brain connectivity to be
modeled with limited sample size, which is usually the case in AD studies since clinical
samples are difficult to obtain. From a domain perspective, imposing sparsity is also valid
because neurological findings have demonstrated that a brain region usually only directly
interacts with a few other brain regions in neurological processes [ 2]-[3]. Various algorithms
for achieving SICE have been developed in recent years [17]-[22]. In addition, SICE has been
used in various applications [17], [21], [23]-[26].
In this paper, we apply SICE to learn functional brain connectivity from neuroimaging and
analyze the difference among AD, MCI, and NC based on a key property of SICE, called the
"monotone property" we established in this paper. Unlike the previous study, which is based
on a specific level of sparsity [26], the monotone property allows us to study the
connectivity pattern using different levels of sparsity and obtain an order for the strength of
connection between pairs of brain regions. In addition, we apply bootstrap hypothesis testing
to assess the significance of the connection. Our experimental results on PET data of 42 AD,
116 MCI, and 67 NC subjects enrolled in the Alzheimer's Disease Neuroimaging Initiative
project reveal several interesting connectivity patterns consistent with literature findings,
and also some new patterns that can help the knowledge discovery of AD.
2 SICE: Background and the Monotone Property
An inverse covariance matrix can be represented graphically. If used to represent brain
connectivity, the nodes are activated brain regions; existence of an arc between two nodes
means that the two brain regions are closely related in the brain's functional process.
Let X = (X_1, ..., X_p) denote all the brain regions under study. We assume that X follows a
multivariate Gaussian distribution with mean μ and covariance matrix Σ. Let Θ = Σ^{-1} be the
inverse covariance matrix. Suppose we have n samples (e.g., subjects with AD) for these
brain regions. Note that we will only illustrate here the SICE for AD, whereas the SICE for
MCI and NC can be achieved in a similar way.
We can formulate the SICE as an optimization problem, i.e.,

    Θ̂ = arg max_{Θ ≻ 0}  log det(Θ) − tr(S Θ) − λ ‖Θ‖_1,                      (1)

where S is the sample covariance matrix; det(·), tr(·), and ‖·‖_1 denote the
determinant, trace, and sum of the absolute values of all elements of a matrix, respectively.
The part "log det(Θ) − tr(S Θ)" in (1) is the log-likelihood, whereas the part "λ ‖Θ‖_1"
represents the "sparsity" of the inverse covariance matrix Θ. (1) aims to achieve a tradeoff
between the likelihood fit of the inverse covariance estimate and the sparsity. The tradeoff is
controlled by λ, called the regularization parameter; a larger λ will result in a more sparse estimate
of Θ. The formulation in (1) follows the same line as the L1-norm regularization that has been
introduced into the least-squares formulation to achieve model sparsity, where the resulting model
is called the Lasso [27]. We employ the algorithm in [19] in this paper. Next, we show that as λ
goes from small to large, the resulting brain connectivity models have a monotone property.
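For concreteness, the estimate in (1) can be obtained with an off-the-shelf graphical lasso solver. The sketch below uses scikit-learn's GraphicalLasso as a stand-in for the algorithm of [19]; X_group (an illustrative subjects-by-42-regions matrix of regional PET averages) and the zero-threshold are assumptions, and alpha plays the role of λ.

    # A minimal sketch of the SICE estimate in (1), using scikit-learn's
    # graphical lasso as a stand-in for the solver of [19].
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    def sice_adjacency(X_group, alpha):
        model = GraphicalLasso(alpha=alpha, max_iter=500).fit(X_group)
        theta = model.precision_          # estimated inverse covariance
        adj = np.abs(theta) > 1e-8        # an arc wherever the entry is non-zero
        np.fill_diagonal(adj, False)
        return adj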
Before introducing the monotone property, the following definitions are needed.
Definition: In the graphical representation of the inverse covariance, if node X_i is connected
to X_j by an arc, then X_j is called a "neighbor" of X_i. If X_i is connected to X_j through some
chain of arcs, then X_j is called a "connectivity component" of X_i.
Intuitively, being neighbors means that two nodes (i.e., brain regions) are directly connected,
whereas being connectivity components means that two brain regions are indirectly
connected, i.e., the connection is mediated through other regions. In other words, not being
connectivity components (i.e., two nodes completely separated in the graph) means that the
two corresponding brain regions are completely independent of each other. Connectivity
components have the following monotone property:
Monotone property of SICE: Let C(λ_1) and C(λ_2) be the sets of all the connectivity
components of X_i with λ = λ_1 and λ = λ_2, respectively. If λ_1 ≤ λ_2, then C(λ_2) ⊆ C(λ_1).
Intuitively, if two regions are connected (either directly or indirectly) at one level of
sparseness (λ = λ_2), they will be connected at all lower levels of sparseness (λ = λ_1 ≤ λ_2). Proof
of the monotone property can be found in the supplementary file [29]. This monotone
property can be used to identify how strongly each node (brain region) X_i is connected to its
connectivity components. For example, if X_j ∈ C(λ_2) while X_k ∈ C(λ_1) but X_k ∉ C(λ_2),
this means that X_i is more strongly connected to X_j than to X_k. Thus, by changing λ from small
to large, we can obtain an order for the strength of connection between pairs of brain
regions. As will be shown in Section 3, this order is different among AD, MCI, and NC.
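The λ sweep behind this ordering can be sketched as follows; the snippet reuses the hypothetical sice_adjacency helper from the earlier sketch and tracks graph components with SciPy, so the monotone property manifests as components that can only split, never merge, as alpha grows.

    # Sweep the regularization parameter and record the connectivity
    # components at each level; alphas is an increasing grid of values.
    from scipy.sparse.csgraph import connected_components

    def disconnection_order(X_group, alphas):
        history = []
        for a in sorted(alphas):
            adj = sice_adjacency(X_group, a)
            n_comp, labels = connected_components(adj, directed=False)
            history.append((a, n_comp, labels))
        return history   # inspect where each region first splits off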
3 Application in Brain Connectivity Modeling of AD
3.1 Data acquisition and preprocessing
We apply SICE on FDG-PET images for 49 AD, 116 MCI, and 67 NC subjects downloaded from
the ADNI website. We apply Automated Anatomical Labeling (AAL) [28] to extract data from
each of the 116 anatomical volumes of interest (AVOI), and derive the average of each AVOI for
every subject. The AVOIs represent different regions of the whole brain.
3.2 Brain connectivity modeling by SICE
42 AVOIs are selected for brain connectivity modeling, as they are considered to be potentially
related to AD. These regions are distributed across the frontal, parietal, occipital, and temporal lobes.
Table 1 lists the names of the AVOIs with their corresponding lobes. The number before each AVOI is
used to index the node in the connectivity models.
We apply the SICE algorithm to learn one connectivity model for AD, one for MCI, and one for
NC, for a given λ. With different λ's, the resulting connectivity models hold a monotone property,
which can help obtain an order for the strength of connection between brain regions. To show the
order clearly, we develop a tree-like plot in Fig. 1, which is for the AD group. To generate this
plot, we start at a very small λ value (i.e., the right-most of the horizontal axis), which results in a
fully-connected connectivity model. A fully-connected connectivity model is one that contains no
region disconnected from the rest of the brain. Then, we increase λ by small steps and record the
order in which regions become disconnected from the rest of the brain regions.
Table 1: Names of the AVOIs for connectivity modeling ("L" means that the brain region
is located in the left hemisphere; "R" means the right hemisphere.)

Frontal lobe               Parietal lobe          Occipital lobe          Temporal lobe
1  Frontal_Sup_L           13 Parietal_Sup_L      21 Occipital_Sup_L      27 Temporal_Sup_L
2  Frontal_Sup_R           14 Parietal_Sup_R      22 Occipital_Sup_R      28 Temporal_Sup_R
3  Frontal_Mid_L           15 Parietal_Inf_L      23 Occipital_Mid_L      29 Temporal_Pole_Sup_L
4  Frontal_Mid_R           16 Parietal_Inf_R      24 Occipital_Mid_R      30 Temporal_Pole_Sup_R
5  Frontal_Sup_Medial_L    17 Precuneus_L         25 Occipital_Inf_L      31 Temporal_Mid_L
6  Frontal_Sup_Medial_R    18 Precuneus_R         26 Occipital_Inf_R      32 Temporal_Mid_R
7  Frontal_Mid_Orb_L       19 Cingulum_Post_L                             33 Temporal_Pole_Mid_L
8  Frontal_Mid_Orb_R       20 Cingulum_Post_R                             34 Temporal_Pole_Mid_R
9  Rectus_L                                                               35 Temporal_Inf_L
10 Rectus_R                                                               36 Temporal_Inf_R
11 Cingulum_Ant_L                                                         37 Fusiform_L
12 Cingulum_Ant_R                                                         38 Fusiform_R
                                                                          39 Hippocampus_L
                                                                          40 Hippocampus_R
                                                                          41 ParaHippocampal_L
                                                                          42 ParaHippocampal_R
For example, in Fig. 1, as λ increases past λ_1 (but stays below λ_2), region "Temporal_Sup_L" is
the first one becoming disconnected from the rest of the brain. As λ increases past λ_2 (but stays
below λ_3), the rest of the brain further divides into three disconnected clusters, including the
cluster of "Cingulum_Post_R" and "Cingulum_Post_L", the cluster of "Fusiform_R" up to
"Hippocampus_L", and the cluster of the other regions. As λ continues to grow, each current
cluster will split into smaller clusters; eventually, when λ reaches a very large value, there will be
no arc in the IC model, i.e., each region is now a cluster of itself and the splitting will stop. The
sequence of the splitting gives an order for the strength of connection between brain regions.
Specifically, the earlier (i.e., at a smaller λ) a region or a cluster of regions becomes disconnected
from the rest of the brain, the more weakly it is connected with the rest of the brain. For example,
in Fig. 1, it can be seen that "Temporal_Sup_L" may be the most weakly connected region in the
brain network of AD; the second weakest ones are the cluster of "Cingulum_Post_R" and
"Cingulum_Post_L", and the cluster of "Fusiform_R" up to "Hippocampus_L". It is very
interesting to see that the weakest and second weakest brain regions in the brain network include
"Cingulum_Post_R" and "Cingulum_Post_L" as well as regions all in the temporal lobe, all of
which have been found to be affected by AD early and severely [3]-[5].
Next, to facilitate the comparison between AD and NC, a tree-like plot is also constructed for NC,
as shown in Fig. 2. By comparing the plots for AD and NC, we can observe the following two
distinct phenomena: First, in AD, between-lobe connectivity tends to be weaker than within-lobe
connectivity. This can be seen from Fig. 1 which shows a clear pattern that the lobes become
disconnected with each other before the regions within each lobe become disconnected with each
other, as goes from small to large. This pattern does not show in Fig. 2 for NC. Second, the
same brain regions in the left and right hemisphere are connected much weaker in AD than in NC.
This can be seen from Fig. 2 for NC, in which the same brain regions in the left and right
hemisphere are still connected even at a very large for NC. However, this pattern does not show
in Fig. 1 for AD.
Furthermore, a tree-like plot is also constructed for MCI (Fig. 3), and compared with the plots for
AD and NC. In terms of the two phenomena discussed previously, MCI shows patterns similar to
AD, but these patterns are not as distinct from NC as those of AD. Specifically, in terms of the first
phenomenon, MCI also shows weaker between-lobe connectivity than within-lobe connectivity,
which is similar to AD. However, the degree of weakness is not as distinctive as in AD. For
example, a few regions in the temporal lobe of MCI, including "Temporal_Mid_R" and
"Temporal_Sup_R", appear to be more strongly connected with the occipital lobe than with other
regions in the temporal lobe. In terms of the second phenomenon, MCI also shows weaker
between-hemisphere connectivity in the same brain region than NC. However, the degree of
weakness is not as distinctive as in AD. For example, several left-right pairs of the same brain
regions are still connected even at a very large λ, such as "Rectus_R" and "Rectus_L",
"Frontal_Mid_Orb_R" and "Frontal_Mid_Orb_L", "Parietal_Sup_R" and "Parietal_Sup_L", as
well as "Precuneus_R" and "Precuneus_L". All the above findings are consistent with the
knowledge that MCI is a transition stage between normal aging and AD.
[Figure 1 tree plot: horizontal axis runs from large λ (left) to small λ (right), with thresholds
λ_3 > λ_2 > λ_1 marked.]
Fig 1: Order for the strength of connection between brain regions of AD
[Figure 2 tree plot: horizontal axis from large λ to small λ.]
Fig 2: Order for the strength of connection between brain regions of NC
Fig 3: Order for the strength of connection between brain regions of MCI
Furthermore, we would like to compare how within-lobe and between-lobe connectivity is
different across AD, MCI, and NC. To achieve this, we first learn one connectivity model for AD,
one for MCI, and one for NC. We adjust the λ in the learning of each model such that the three
models, corresponding to AD, MCI, and NC, respectively, will have the same total number of
arcs. This is to "normalize" the models, so that the comparison will be more focused on how the
arcs are distributed differently across the models. By selecting different values for the total
number of arcs, we can obtain models representing the brain connectivity at different levels of
strength. Specifically, given a small value for the total number of arcs, only strong arcs will show
up in the resulting connectivity model, so the model is a model of strong brain connectivity; when
increasing the total number of arcs, mild arcs will also show up in the resulting connectivity
model, so the model is a model of mild and strong brain connectivity.
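One simple way to implement this normalization is a log-scale bisection on λ until the model reaches a target arc count. The sketch below, again reusing the hypothetical sice_adjacency helper, is only one plausible implementation, since the paper does not specify how λ was tuned.

    # Search for the alpha whose SICE model has (approximately) a target
    # number of arcs, so AD, MCI, and NC can be compared at equal density.
    import numpy as np

    def alpha_for_arc_count(X_group, target_arcs, lo=1e-4, hi=1.0, iters=30):
        mid = np.sqrt(lo * hi)
        for _ in range(iters):
            mid = np.sqrt(lo * hi)            # bisect on a log scale
            arcs = sice_adjacency(X_group, mid).sum() // 2
            if arcs > target_arcs:
                lo = mid                      # too dense: need a larger alpha
            else:
                hi = mid
        return mid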
For example, Fig. 4 shows the connectivity models for AD, MCI, and NC with the total number of
arcs equal to 50 (Fig. 4(a)), 120 (Fig. 4(b)), and 180 (Fig. 4(c)). In this paper, we use a "matrix"
representation for the SICE of a connectivity model. In the matrix, each row represents one node
and each column also represents one node. Please see Table 1 for the correspondence between the
numbering of the nodes and the brain region each number represents. The matrix contains black
and white cells: a black cell at the i-th row, j-th column of the matrix represents the existence of an
arc between nodes i and j in the SICE-based connectivity model, whereas a white cell
represents absence of an arc. According to this definition, the total number of black cells in the
matrix is equal to twice the total number of arcs in the SICE-based connectivity model. Moreover,
on each matrix, four red cubes are used to highlight the brain regions in each of the four lobes; that
is, from top-left to bottom-right, the red cubes highlight the frontal, parietal, occipital, and
temporal lobes, respectively. The black cells inside each red cube reflect within-lobe connectivity,
whereas the black cells outside the cubes reflect between-lobe connectivity.
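A matrix plot in this style can be produced directly from the binary adjacency; the sketch below assumes the node ordering of Table 1 (frontal 1-12, parietal 13-20, occipital 21-26, temporal 27-42) and is illustrative rather than the authors' plotting code.

    # Render a black/white connectivity matrix with the four lobes
    # outlined in red, following Table 1's node numbering.
    import matplotlib.pyplot as plt
    import matplotlib.patches as patches

    def plot_connectivity(adj):
        fig, ax = plt.subplots()
        ax.imshow(~adj, cmap="gray", interpolation="nearest")   # black cell = arc
        for start, stop in [(0, 12), (12, 20), (20, 26), (26, 42)]:
            ax.add_patch(patches.Rectangle((start - 0.5, start - 0.5),
                                           stop - start, stop - start,
                                           fill=False, edgecolor="red"))
        plt.show()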
While the connectivity models in Fig. 4 clearly show some connectivity difference between AD,
MCI, and NC, it is highly desirable to test if the observed difference is statistically significant.
Therefore, we further perform hypothesis testing, and the results are summarized in Table 2.
Specifically, a P-value is recorded in the sub-table if it is smaller than 0.1; such a P-value is further
highlighted if it is even smaller than 0.05; a "---" indicates that the corresponding test is not
significant (P-value > 0.1). We can observe from Fig. 4 and Table 2:
Within-lobe connectivity: The temporal lobe of AD has significantly less connectivity than NC.
This is true across different strength levels (e.g., strong, mild, and weak) of the connectivity; in
other words, even the connectivity between some strongly-connected brain regions in the temporal
lobe may be disrupted by AD. In particular, it is clearly from Fig. 4(b) that the regions
?Hippocampus? and ?ParaHippocampal? (numbered by 39-42, located at the right-bottom corner
of Fig. 4(b)) are much more separated from other regions in AD than in NC. The decrease in
connectivity in the temporal lobe of AD, especially between the Hippocampus and other regions,
has been extensively reported in the literature [3]-[5]. Furthermore, the temporal lobe of MCI does
not show a significant decrease in connectivity, compared with NC. This may be because MCI
does not disrupt the temporal lobe as badly as AD.
[Fig. 4(a) panels, left to right: AD, MCI, NC]
Fig 4(a): SICE-based brain connectivity models (total number of arcs equal to 50)
[Fig. 4(b) panels, left to right: AD, MCI, NC]
Fig 4(b): SICE-based brain connectivity models (total number of arcs equal to 120)
[Fig. 4(c) panels, left to right: AD, MCI, NC]
Fig 4(c): SICE-based brain connectivity models (total number of arcs equal to 180)
The frontal lobe of AD has significantly more connectivity than NC, which is true across different
strength levels of the connectivity. This has been interpreted as compensatory reallocation or
recruitment of cognitive resources [6]-[7]. Because the regions in the frontal lobe are typically
affected later in the course of AD (our data are early AD), the increased connectivity in the frontal
lobe may help preserve some cognitive functions in AD patients. Furthermore, the frontal lobe of
MCI does not show a significant increase in connectivity, compared with NC. This indicates that
the compensatory effect in MCI brain may not be as strong as that in AD brains.
Table 2: P-values from the statistical significance test of connectivity difference among
AD, MCI, and NC
(a) Total number of arcs = 50
(b) Total number of arcs = 120
(c) Total number of arcs = 180
[The P-value entries of the three sub-tables appear as images in the original and are not reproduced here.]
There is no significant difference among AD, MCI, and NC in terms of the connectivity within the
parietal lobe and within the occipital lobe. Another interesting finding is that all the P-values in the
third sub-table of Table 2(a) are insignificant. This implies that distribution of the strong
connectivity within and between lobes for MCI is very similar to NC; in other words, MCI has not
been able to disrupt the strong connectivity among brain regions (it disrupts some mild and weak
connectivity though).
Between-lobe connectivity: In general, human brains tend to have less between-lobe connectivity
than within-lobe connectivity. A majority of the strong connectivity occurs within lobes, but rarely
between lobes. These can be clearly seen from Fig. 4 (especially Fig. 4(a)) in which there are
much more black cells along the diagonal direction than the off-diagonal direction, regardless of
AD, MCI, and NC.
The connectivity between the parietal and occipital lobes of AD is significantly greater than that of
NC, which is true especially for mild and weak connectivity. The increased connectivity between the
parietal and occipital lobes of AD has been previously reported in [3]. It is also interpreted as a
compensatory effect in [6]-[7]. Furthermore, MCI also shows increased connectivity between the
parietal and occipital lobes, compared with NC, but the increase is not as significant as in AD.
While the connectivity between the frontal and occipital lobes shows little difference between AD
and NC, such connectivity for MCI shows a significant decrease especially for mild and weak
connectivity. Also, AD may have less temporal-occipital connectivity, less frontal-parietal
connectivity, but more parietal-temporal connectivity than NC.
Between-hemisphere connectivity: Recall that we have observed from the tree-like plots in Figs. 1
and 2 that the same brain regions in the left and right hemispheres are connected much more weakly in
AD than in NC. It is desirable to test if this observed difference is statistically significant. To
achieve this, we test the statistical significance of the difference among AD, MCI, and NC, in term
of the number of connected same-region left-right pairs. Results show that when the total number
of arcs in the connectivity models is equal to 120 or 90, none of the tests is significant. However,
when the total number of arcs is equal to 50, the P-values of the tests for "AD vs. NC", "AD vs.
MCI", and "MCI vs. NC" are 0.009, 0.004, and 0.315, respectively. We further perform tests for
the total number of arcs equal to 30 and find the P-values to be 0.0055, 0.053, and 0.158,
respectively. These results indicate that AD disrupts the strong connectivity between the same
regions of the left and right hemispheres, whereas this disruption is not significant in MCI.
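The paper does not spell out its exact resampling scheme, so the following is only one plausible instantiation of such a test: a permutation-style resampling under the pooled null, where `statistic` is any user-supplied function of a group's data matrix, such as the number of connected left-right pairs at a fixed arc count.

    # Generic resampling p-value for the difference in a connectivity
    # statistic between two groups.
    import numpy as np

    def resampled_pvalue(X_a, X_b, statistic, n_resamples=1000, seed=0):
        rng = np.random.default_rng(seed)
        observed = statistic(X_a) - statistic(X_b)
        pooled = np.vstack([X_a, X_b])
        n_a = X_a.shape[0]
        hits = 0
        for _ in range(n_resamples):
            idx = rng.permutation(pooled.shape[0])
            diff = statistic(pooled[idx[:n_a]]) - statistic(pooled[idx[n_a:]])
            hits += abs(diff) >= abs(observed)
        return hits / n_resamples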
4 Conclusion
In this paper, we applied SICE to model functional brain connectivity of AD, MCI, and NC based
on PET neuroimaging data, and analyze the patterns based on the monotone property of SICE. Our
findings were consistent with the previous literature and also showed some new aspects that may
suggest further investigation in brain connectivity research in the future.
References
[1] S. Molchan. (2005) The Alzheimer's disease neuroimaging initiative. Business Briefing: US
Neurology Review, pp.30-32, 2005.
[2] C.J. Stam, B.F. Jones, G. Nolte, M. Breakspear, and P. Scheltens. (2007) Small-world networks and
functional connectivity in Alzheimer's disease. Cerebral Cortex 17:92-99.
[3] K. Supekar, V. Menon, D. Rubin, M. Musen, M.D. Greicius. (2008)
Network Analysis of Intrinsic
Functional Brain Connectivity in Alzheimer's Disease. PLoS Comput Biol 4(6) 1-11.
[4] K. Wang, M. Liang, L. Wang, L. Tian, X. Zhang, K. Li and T. Jiang. (2007) Altered Functional
Connectivity in Early Alzheimer's Disease: A Resting-State fMRI Study, Human Brain Mapping 28, 967-978.
[5] N.P. Azari, S.I. Rapoport, C.L. Grady, M.B. Schapiro, J.A. Salerno, A. Gonzales-Aviles. (1992) Patterns
of interregional correlations of cerebral glucose metabolic rates in patients with dementia of the Alzheimer
type. Neurodegeneration 1: 101-111.
[6] R.L. Gould, B.Arroyo, R,G. Brown, A.M. Owen, E.T. Bullmore and R.J. Howard. (2006) Brain
Mechanisms of Successful Compensation during Learning in Alzheimer Disease, Neurology 67, 1011-1017.
[7] Y. Stern. (2006) Cognitive Reserve and Alzheimer Disease, Alzheimer Disease Associated Disorder 20,
69-74.
[8] K.J. Friston. (1994) Functional and effective connectivity: A synthesis. Human Brain Mapping 2, 56-78.
[9] G. Alexander, J. Moeller. (1994) Application of the Scaled Subprofile model: a statistical approach to the
analysis of functional patterns in neuropsychiatric disorders: A principal component approach to modeling
regional patterns of brain function in disease. Human Brain Mapping, 79-94.
[10] V.D. Calhoun, T. Adali, G.D. Pearlson, J.J. Pekar. (2001) Spatial and temporal independent component
analysis of functional MRI data containing a pair of task-related waveforms. Hum.Brain Mapp. 13, 43-53.
[11] V.D. Calhoun, T. Adali, J.J. Pekar, G.D. Pearlson. (2003) Latency (in)sensitive ICA. Group independent
component analysis of fMRI data in the temporal frequency domain. Neuroimage. 20, 1661-1669.
[12] A.R. McIntosh, F.L. Bookstein, J.V. Haxby, C.L. Grady. (1996) Spatial pattern analysis of functional
brain images using partial least squares. Neuroimage. 3, 143-157.
[13] K.J. Worsley, J.B. Poline, K.J. Friston, A.C. Evans. (1997) Characterizing the response of PET and
fMRI data using multivariate linear models. Neuroimage. 6, 305-319.
[14] E. Bullmore, B. Horwitz, G. Honey, M. Brammer, S. Williams, T. Sharma. (2000) How good is good
enough in path analysis of fMRI data? NeuroImage 11, 289-301.
[15] A.R. McIntosh, C.L. Grady, L.G. Ungerieider, J.V. Haxby, S.I. Rapoport, B. Horwitz. (1994) Network
analysis of cortical visual pathways mapped with PET. J. Neurosci. 14 (2), 655-666.
[16] K.J. Friston, L. Harrison, W. Penny. (2003) Dynamic causal modelling. Neuroimage 19, 1273-1302.
[17] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. (2008) Model selection through sparse maximum
likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research 9:485-516.
[18] J. Dahl, L. Vandenberghe, and V. Roycowdhury. (2008) Covariance selection for nonchordal graphs via
chordal embedding. Optimization Methods Software 23(4):501-520.
[19] J. Friedman, T. Hastie, and R. Tibshirani. (2007) Sparse inverse covariance estimation with the graphical
lasso, Biostatistics 8(1):1-10.
[20] J.Z. Huang, N. Liu, M. Pourahmadi, and L. Liu. (2006) Covariance matrix selection and estimation via
penalized normal likelihood. Biometrika, 93(1):85-98.
[21] H. Li and J. Gui. (2005) Gradient directed regularization for sparse Gaussian concentration graphs, with
applications to inference of genetic networks. Biostatistics 7(2):302-317.
[22] Y. Lin. (2007) Model selection and estimation in the Gaussian graphical model. Biometrika 94(1):19-35,
2007.
[23] A. Dobra, C. Hans, B. Jones, J.R. Nevins, G. Yao, and M. West. (2004) Sparse graphical models for
exploring gene expression data. Journal of Multivariate Analysis 90(1):196-212.
[24] A. Berge, A.C. Jensen, and A.H.S. Solberg. (2007) Sparse inverse covariance estimates for hyperspectral
image classification, Geoscience and Remote Sensing, IEEE Transactions on, 45(5):1399-1407.
[25] J.A. Bilmes. (2000) Factored sparse inverse covariance matrices. In ICASSP:1009-1012.
[26] L. Sun and et al. (2009) Mining Brain Region Connectivity for Alzheimer's Disease Study via Sparse
Inverse Covariance Estimation. In KDD: 1335-1344.
[27] R. Tibshirani. (1996) Regression shrinkage and selection via the lasso. Journal of the Royal Statistical
Society Series B 58(1):267-288.
[28] N. Tzourio-Mazoyer and et al. (2002) Automated anatomical labeling of activations in SPM using a
macroscopic anatomical parcellation of the MNI MRI single subject brain. Neuroimage 15:273-289.
[29] Supplemental information for "Learning Brain Connectivity of Alzheimer's Disease from Neuroimaging
Data". http://www.public.asu.edu/~jye02/Publications/AD-supplemental-NIPS09.pdf
2,926 | 3,652 | Semi-Supervised Learning with the Graph Laplacian:
The Limit of Infinite Unlabelled Data
Boaz Nadler
Dept. of Computer Science and Applied Mathematics
Weizmann Institute of Science
Rehovot, Israel 76100
[email protected]
Nathan Srebro
Toyota Technological Institute
Chicago, IL 60637
[email protected]
Xueyuan Zhou
Dept. of Computer Science
University of Chicago
Chicago, IL 60637
[email protected]
Abstract
We study the behavior of the popular Laplacian Regularization method for Semi-Supervised Learning in the regime of a fixed number of labeled points but a large
number of unlabeled points. We show that in R^d, d ≥ 2, the method is actually not
well-posed, and as the number of unlabeled points increases the solution degenerates to a noninformative function. We also contrast the method with the Laplacian
Eigenvector method, and discuss the "smoothness" assumptions associated with
this alternate method.
1 Introduction and Setup
In this paper we consider the limit behavior of two popular semi-supervised learning (SSL) methods
based on the graph Laplacian: the regularization approach [15] and the spectral approach [3]. We
consider the limit when the number of labeled points is fixed and the number of unlabeled points
goes to infinity. This is a natural limit for SSL as the basic SSL scenario is one in which unlabeled
data is virtually infinite. We can also think of this limit as ?perfect? SSL, having full knowledge
of the marginal density p(x). The premise of SSL is that the marginal density p(x) is informative
about the unknown mapping y(x) we are trying to learn, e.g. since y(x) is expected to be ?smooth?
in some sense relative to p(x). Studying the infinite-unlabeled-data limit, where p(x) is fully known,
allows us to formulate and understand the underlying smoothness assumptions of a particular SSL
method, and judge whether it is well-posed and sensible. Understanding the infinite-unlabeled-data
limit is also a necessary first step to studying the convergence of the finite-labeled-data estimator.
We consider the following setup: Let p(x) be an unknown smooth density on a compact domain Ω ⊂
R^d with a smooth boundary. Let y : Ω → Y be the unknown function we wish to estimate. In case of
regression Y = R, whereas in binary classification Y = {-1, 1}. The standard (transductive) semi-supervised learning problem is formulated as follows: Given l labeled points, (x_1, y_1), ..., (x_l, y_l),
with y_i = y(x_i), and u unlabeled points x_{l+1}, ..., x_{l+u}, with all points x_i sampled i.i.d. from p(x),
the goal is to construct an estimate of y(x_{l+i}) for any unlabeled point x_{l+i}, utilizing both the labeled
and the unlabeled points. We denote the total number of points by n = l + u. We are interested in
the regime where l is fixed and u → ∞.
2 SSL with Graph Laplacian Regularization
We first consider the following graph-based approach, formulated by Zhu et al. [15]:
    ŷ(x) = arg min_y I_n(y)    subject to y(x_i) = y_i,  i = 1, ..., l,         (1)

where

    I_n(y) = (1/n²) Σ_{i,j} W_{i,j} (y(x_i) − y(x_j))²                           (2)
is a Laplacian regularization term enforcing "smoothness" with respect to the n × n similarity matrix
W. This formulation has several natural interpretations in terms of, e.g., random walks and electrical
circuits [15]. These interpretations, however, refer to a fixed graph, over a finite set of points with
given similarities.
In contrast, our focus here is on the more typical scenario where the points x_i ∈ R^d are a random
sample from a density p(x), and W is constructed based on this sample. We would like to understand
the behavior of the method in terms of the density p(x), particularly in the limit where the number
of unlabeled points grows. Under what assumptions on the target labeling y(x) and on the density
p(x) is the method (1) sensible?
The answer, of course, depends on how the matrix W is constructed. We consider the common
situation where the similarities are obtained by applying some decay filter to the distances:

    W_{i,j} = G( ‖x_i − x_j‖ / σ ),                                              (3)

where G : R_+ → R_+ is some function with an adequately fast decay. Popular choices are the
Gaussian filter G(z) = e^{−z²/2} or the σ-neighborhood graph obtained by the step filter G(z) = 1_{z<1}.
For simplicity, we focus here on the formulation (1) where the solution is required to satisfy the
constraints at the labeled points exactly. In practice, the hard labeling constraints are often replaced
with a softer loss-based data term, which is balanced against the smoothness term I_n(y), e.g. [14, 6].
Our analysis and conclusions apply to such variants as well.
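As a concrete reference point, the hard-constrained problem (1) with Gaussian weights (3) reduces to solving a linear system in the unlabeled values (the harmonic solution of [15]). The following sketch is a minimal dense implementation; the variable names and the choice of a dense solver are illustrative assumptions.

    # Hard-constrained Laplacian regularization: minimize y' L y with the
    # labeled values fixed, i.e. solve L_uu y_u = -L_ul y_l.
    import numpy as np

    def laplacian_ssl(X, y_labeled, labeled_idx, sigma):
        n = X.shape[0]
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (2 * sigma ** 2))         # Gaussian filter, eq. (3)
        np.fill_diagonal(W, 0.0)
        L = np.diag(W.sum(1)) - W                  # graph Laplacian
        unlabeled = np.setdiff1d(np.arange(n), labeled_idx)
        A = L[np.ix_(unlabeled, unlabeled)]
        b = -L[np.ix_(unlabeled, labeled_idx)] @ np.asarray(y_labeled, float)
        y = np.empty(n)
        y[labeled_idx] = y_labeled
        y[unlabeled] = np.linalg.solve(A, b)
        return y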
Limit of the Laplacian Regularization Term
As the number of unlabeled examples grows the regularization term (2) converges to its expectation,
where the summation is replaced by integration w.r.t. the density p(x):
    lim_{n→∞} I_n(y) = I^{(σ)}(y) = ∫_Ω ∫_Ω G( ‖x − x′‖/σ ) (y(x) − y(x′))² p(x) p(x′) dx dx′.    (4)

In the above limit, the bandwidth σ is held fixed. Typically, one would also drive the bandwidth σ
to zero as n → ∞. There are two reasons for this choice. First, from a practical perspective, this
makes the similarity matrix W sparse so it can be stored and processed. Second, from a theoretical
perspective, this leads to a clear and well defined limit of the smoothness regularization term I_n(y),
at least when σ → 0 slowly enough^1, namely when σ = Ω((log n / n)^{1/d}). If σ → 0 as n → ∞,
and as long as n σ^d / log n → ∞, then after appropriate normalization, the regularizer converges to
a density-weighted gradient penalty term [7, 8]:
    lim_{n→∞} (d / (C σ^{d+2})) I_n(y) = lim_{σ→0} (d / (C σ^{d+2})) I^{(σ)}(y) = J(y) = ∫_Ω ‖∇y(x)‖² p(x)² dx,    (5)

where C = ∫_{R^d} ‖z‖² G(‖z‖) dz, and assuming 0 < C < ∞ (which is the case for both the Gaussian
and the step filters). This energy functional J(f) therefore encodes the notion of "smoothness" with
respect to p(x) that is the basis of the SSL formulation (1) with the graph constructions specified by
(3). To understand the behavior and appropriateness of (1) we must understand this functional and
the associated limit problem:
ŷ(x) = argmin_y J(y)   subject to y(x_i) = y_i, i = 1, ..., l        (6)

¹ When σ = o((1/n)^{1/d}), all non-diagonal weights W_{i,j} vanish (points no longer have any "close by" neighbors). We are not aware of an analysis covering the regime where σ decays roughly as (1/n)^{1/d}, but would be surprised if a qualitatively different meaningful limit is reached.

3 Graph Laplacian Regularization in ℝ¹
We begin by considering the solution of (6) for one-dimensional data, i.e. d = 1 and x ∈ ℝ. We first consider the situation where the support of p(x) is a continuous interval Ω = [a, b] ⊆ ℝ (a and/or b may be infinite). Without loss of generality, we assume the labeled data is sorted in increasing order a ≤ x_1 < x_2 < ... < x_l ≤ b. Applying the theory of variational calculus, the solution ŷ(x) satisfies inside each interval (x_i, x_{i+1}) the Euler-Lagrange equation

d/dx ( p²(x) dy/dx ) = 0.
Performing two integrations and enforcing the constraints at the labeled points yields

y(x) = y_i + ( ∫_{x_i}^{x} 1/p²(t) dt / ∫_{x_i}^{x_{i+1}} 1/p²(t) dt ) (y_{i+1} − y_i)   for x_i ≤ x ≤ x_{i+1}        (7)

with y(x) = y_1 for a ≤ x ≤ x_1 and y(x) = y_l for x_l ≤ x ≤ b. If the support of p(x) is a union of disjoint intervals, the above analysis and the form of the solution applies in each interval separately.
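To make (7) concrete, the following sketch (ours; it assumes a dense grid that contains the sorted labeled points) evaluates the interpolant by numerical quadrature of 1/p²:

    import numpy as np

    def solve_1d(x_grid, p, x_lab, y_lab):
        # Minimizer (7): between labeled points, y grows with the integral of 1/p^2.
        w = 1.0 / p(x_grid) ** 2
        cum = np.concatenate([[0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x_grid))])
        C = np.interp(x_lab, x_grid, cum)          # cumulative integral at labeled points
        y = np.empty_like(x_grid)
        for i in range(len(x_lab) - 1):
            seg = (x_grid >= x_lab[i]) & (x_grid <= x_lab[i + 1])
            y[seg] = y_lab[i] + (cum[seg] - C[i]) / (C[i + 1] - C[i]) * (y_lab[i + 1] - y_lab[i])
        y[x_grid <= x_lab[0]] = y_lab[0]           # constant outside the labeled range
        y[x_grid >= x_lab[-1]] = y_lab[-1]
        return y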
The solution (7) seems reasonable and desirable from the point of view of the "smoothness" assumptions: when p(x) is uniform, the solution interpolates linearly between labeled data points, whereas across low-density regions, where p(x) is close to zero, y(x) can change abruptly. Furthermore, the regularizer J(y) can be interpreted as a Reproducing Kernel Hilbert Space (RKHS) squared semi-norm, giving us additional insight into this choice of regularizer:
Theorem 1. Let p(x) be a smooth density on Ω = [a, b] ⊆ ℝ such that A_p = (1/4) ∫_a^b 1/p²(t) dt < ∞. Then, J(f) can be written as a squared semi-norm J(f) = ‖f‖²_{K_p} induced by the kernel

K_p(x, x′) = A_p − (1/2) ∫_x^{x′} 1/p²(t) dt        (8)

with a null-space of all constant functions. That is, ‖f‖_{K_p} is the norm of the projection of f onto the RKHS induced by K_p.

If p(x) is supported on several disjoint intervals, Ω = ∪_i [a_i, b_i], then J(f) can be written as a squared semi-norm induced by the kernel

K_p(x, x′) = (1/4) ∫_{a_i}^{b_i} 1/p²(t) dt − (1/2) ∫_x^{x′} 1/p²(t) dt   if x, x′ ∈ [a_i, b_i]
K_p(x, x′) = 0                                                            if x ∈ [a_i, b_i], x′ ∈ [a_j, b_j], i ≠ j        (9)

with a null-space spanned by indicator functions 1_{[a_i,b_i]}(x) on the connected components of Ω.
Proof. For any f(x) = Σ_i α_i K_p(x, x_i) in the RKHS induced by K_p:

J(f) = ∫ (df/dx)² p²(x) dx = Σ_{i,j} α_i α_j J_{ij}        (10)

where J_{ij} = ∫ (d/dx) K_p(x, x_i) (d/dx) K_p(x, x_j) p²(x) dx.

When x_i and x_j are in different connected components of Ω, the gradients of K_p(·, x_i) and K_p(·, x_j) are never non-zero together and J_{ij} = 0 = K_p(x_i, x_j). When they are in the same connected component [a, b], and assuming w.l.o.g. a ≤ x_i ≤ x_j ≤ b:

J_{ij} = (1/4) [ ∫_a^{x_i} 1/p²(t) dt − ∫_{x_i}^{x_j} 1/p²(t) dt + ∫_{x_j}^{b} 1/p²(t) dt ]
       = (1/4) ∫_a^b 1/p²(t) dt − (1/2) ∫_{x_i}^{x_j} 1/p²(t) dt = K_p(x_i, x_j).        (11)

Substituting J_{ij} = K_p(x_i, x_j) into (10) yields J(f) = Σ_{i,j} α_i α_j K_p(x_i, x_j) = ‖f‖²_{K_p}.
Combining Theorem 1 with the Representer Theorem [13] establishes that the solution of (6) (or of any variant where the hard constraints are replaced by a data term) is of the form:

y(x) = Σ_{j=1}^{l} α_j K_p(x, x_j) + Σ_i β_i 1_{[a_i,b_i]}(x),

where i ranges over the connected components [a_i, b_i] of Ω, and we have:

J(y) = Σ_{i,j=1}^{l} α_i α_j K_p(x_i, x_j).        (12)
Viewing the regularizer as ‖y‖²_{K_p} suggests understanding (6), and so also its empirical approximation (1), by interpreting K_p(x, x′) as a density-based "similarity measure" between x and x′. This similarity measure indeed seems sensible: for a uniform density it is simply linearly decreasing as a function of the distance. When the density is non-uniform, two points are relatively similar only if they are connected by a region in which 1/p²(x) is low, i.e. the density is high, but are much less "similar", i.e. related to each other, when connected by a low-density region. Furthermore, there is no dependence between points in disjoint components separated by zero-density regions.
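The following sketch (ours; the bimodal density and the interval [−4, 4] are arbitrary illustrative choices) evaluates the kernel (8) numerically and illustrates this similarity interpretation:

    import numpy as np

    def kernel_Kp(x1, x2, p, a, b, num=4000):
        # Density-based kernel (8) on a single interval [a, b], by trapezoid quadrature.
        t = np.linspace(a, b, num)
        w = 1.0 / p(t) ** 2
        A_p = 0.25 * np.trapz(w, t)
        lo, hi = sorted((x1, x2))
        mask = (t >= lo) & (t <= hi)
        return A_p - 0.5 * np.trapz(w[mask], t[mask])

    # Bimodal density: points across the low-density gap are far less "similar".
    p = lambda t: 0.5 * (np.exp(-(t + 2) ** 2) + np.exp(-(t - 2) ** 2)) / np.sqrt(np.pi)
    print(kernel_Kp(-2.0, -1.5, p, -4, 4))   # same cluster: relatively large
    print(kernel_Kp(-2.0, 2.0, p, -4, 4))    # across the gap: much smaller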
4 Graph Laplacian Regularization in Higher Dimensions

The analysis of the previous section seems promising, as it shows that in one dimension, the SSL method (1) is well posed and converges to a sensible limit. Regretfully, in higher dimensions this is
not the case anymore. In the following theorem we show that the infimum of the limit problem (6) is
zero and can be obtained by a sequence of functions which are certainly not a sensible extrapolation
of the labeled points.
Theorem 2. Let p(x) be a smooth density over ℝ^d, d ≥ 2, bounded from above by some constant p_max, and let (x_1, y_1), ..., (x_l, y_l) be any (non-repeating) set of labeled examples. There exist continuous functions y_ε(x), for any ε > 0, all satisfying the constraints y_ε(x_j) = y_j, j = 1, ..., l, such that J(y_ε) → 0 as ε → 0, but y_ε(x) → 0 as ε → 0 for all x ≠ x_j, j = 1, ..., l.
Proof. We present a detailed proof for the case of l = 2 labeled points. The generalization of the proof to more labeled points is straightforward. Furthermore, without loss of generality, we assume the first labeled point is at x_0 = 0 with y(x_0) = 0 and the second labeled point is at x_1 with ‖x_1‖ = 1 and y(x_1) = 1. In addition, we assume that the ball B_1(0) of radius one centered around the origin is contained in Ω = {x ∈ ℝ^d | p(x) > 0}.

We first consider the case d > 2. Here, for any ε > 0, consider the function

y_ε(x) = min( ‖x‖/ε, 1 )

which indeed satisfies the two constraints y_ε(x_i) = y_i, i = 0, 1. Then,

J(y_ε) = ∫_{B_ε(0)} p²(x)/ε² dx ≤ (p²_max/ε²) ∫_{B_ε(0)} dx = p²_max V_d ε^{d−2}        (13)

where V_d is the volume of a unit ball in ℝ^d. Hence, the sequence of functions y_ε(x) satisfies the constraints, but for d > 2, inf_ε J(y_ε) = 0.
For d = 2, a more extreme example is necessary: consider the functions

y_ε(x) = log( (‖x‖² + ε)/ε ) / log( (1 + ε)/ε )   for ‖x‖ ≤ 1

and y_ε(x) = 1 for ‖x‖ > 1. These functions satisfy the two constraints y_ε(x_i) = y_i, i = 0, 1, and:

J(y_ε) = (4 / log²((1+ε)/ε)) ∫_{B_1(0)} ( ‖x‖² / (‖x‖² + ε)² ) p²(x) dx
       ≤ (4 p²_max / log²((1+ε)/ε)) ∫_0^1 ( r² / (r² + ε)² ) 2πr dr
       ≤ 4π p²_max / log((1+ε)/ε) → 0 as ε → 0.
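As a quick sanity check on (13), the following Monte Carlo sketch (ours; it assumes a uniform density on [−1, 1]^d, so p_max is finite) estimates J(y_ε) for the spike function and exhibits the ε^{d−2} scaling:

    import numpy as np

    def J_estimate(eps, d=3, n=200_000, seed=0):
        # J(y_eps) = integral of ||grad y_eps||^2 p(x)^2 dx for p uniform on [-1, 1]^d;
        # since p = 2^-d there, this equals p * E_uniform[ ||grad y_eps||^2 ].
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, size=(n, d))
        r = np.linalg.norm(x, axis=1)
        grad_sq = np.where(r < eps, 1.0 / eps ** 2, 0.0)   # gradient of min(r/eps, 1)
        return (2.0 ** -d) * grad_sq.mean()

    for eps in (0.4, 0.2, 0.1):
        print(eps, J_estimate(eps))   # decreases roughly like eps^(d-2) = eps for d = 3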
The implication of Theorem 2 is that regardless of the values at the labeled points, as u → ∞, the solution of (1) is not well posed. Asymptotically, the solution has the form of an almost-everywhere constant function, with highly localized spikes near the labeled points, and so no learning is performed. In particular, an interpretation in terms of a density-based kernel K_p, as in the one-dimensional case, is not possible.
Our analysis also carries over to a formulation where a loss-based data term replaces the hard label constraints, as in

ŷ = argmin_{y(x)} (1/l) Σ_{j=1}^{l} (y(x_j) − y_j)² + λ I_n(y)

In the limit of infinite unlabeled data, functions of the form y_ε(x) above have a zero data penalty term (since they exactly match the labels) and also drive the regularization term J(y) to zero. Hence, it is possible to drive the entire objective functional (the data term plus the regularization term) to zero with functions that do not generalize at all to unlabeled points.
4.1 Numerical Example
We illustrate the phenomenon detailed by Theorem 2 with a simple example. Consider a density p(x) in ℝ², which is a mixture of two unit-variance spherical Gaussians, one per class, centered at the origin and at (4, 0). We sample a total of n = 3000 points, and label two points from each of the two components (four total). We then construct a similarity matrix using a Gaussian filter with σ = 0.4.
Figure 1 depicts the predictor ŷ(x) obtained from (1). In fact, two different predictors are shown, obtained by different numerical methods for solving (1). Both methods are based on the observation that the solution ŷ(x) of (1) satisfies:

ŷ(x_i) = Σ_{j=1}^{n} W_{ij} ŷ(x_j) / Σ_{j=1}^{n} W_{ij}   on all unlabeled points i = l+1, ..., l+u.        (14)
Combined with the constraints of (1), we obtain a system of linear equations that can be solved by Gaussian elimination (here invoked through MATLAB's backslash operator). This is the method used in the top panels of Figure 1. Alternatively, (14) can be viewed as an update equation for ŷ(x_i), which can be solved via the power method, or label propagation [2, 6]: start with zero labels on the unlabeled points and iterate (14), while keeping the known labels on x_1, ..., x_l. This is the method used in the bottom panels of Figure 1.
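A compact sketch of the label-propagation solver for (14) (ours; it assumes the labeled points are indexed first):

    import numpy as np

    def harmonic_solution(W, y_labeled, n_iter=1000):
        # Label propagation for (14): average over neighbors, then re-clamp the labels.
        n, l = W.shape[0], len(y_labeled)
        y = np.zeros(n)
        y[:l] = y_labeled
        row_sums = W.sum(axis=1)
        for _ in range(n_iter):
            y = (W @ y) / row_sums
            y[:l] = y_labeled
        return y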
As predicted, ŷ(x) is almost constant for almost all unlabeled points. Although all values are very close to zero, thresholding at the "right" threshold does actually produce sensible results in terms of the true −1/+1 labels. However, beyond being inappropriate for regression, a very flat predictor is still problematic even from a classification perspective. First, it is not possible to obtain a meaningful confidence measure for particular labels. Second, especially if the size of each class is not known a priori, setting the threshold between the positive and negative classes is problematic. In our example, setting the threshold to zero yields a generalization error of 45%.

The differences between the two numerical methods for solving (1) also point to another problem with the ill-posedness of the limit problem: the solution is numerically very unstable.
A more quantitative evaluation, which also validates that the effect in Figure 1 is not a result of choosing a "wrong" bandwidth σ, is given in Figure 2. We again simulated data from a mixture of two Gaussians, one Gaussian per class, this time in 20 dimensions, with one labeled point per class, and an increasing number of unlabeled points. In Figure 2 we plot the squared error and the classification error of the resulting predictor ŷ(x). We plot the classification error both when a threshold of zero is used (i.e. the class is determined by sign(ŷ(x))) and with the ideal threshold minimizing the test error. For each unlabeled sample size, we choose the bandwidth σ yielding the best test performance (this is a "cheating" approach which provides a lower bound on the error of the best method for selecting the bandwidth). As the number of unlabeled examples increases, the squared error approaches 1, indicating a flat predictor. Using a threshold of zero leads to an increase in the classification error, possibly due to numerical instability. Interestingly, although the predictors become very flat, the classification error using the ideal threshold actually improves slightly. Note that
[Figure 1 and Figure 2 graphics. Figure 1 panels: direct inversion (sign error 45%) and power method (sign error 17.1%), each showing the predictor and its sign classification. Figure 2 panels: squared error, 0-1 error (threshold = 0), 0-1 error (ideal threshold), and the optimal bandwidth, plotted against the number of unlabeled points.]
Figure 1: Left plots: Minimizer of Eq. (1). Right plots: the resulting classification according to sign(y). The four labeled points are shown by green squares. Top: minimization via Gaussian elimination (MATLAB backslash). Bottom: minimization via label propagation with 1000 iterations; the solution has not yet converged, despite small residuals of the order of 2×10⁻⁴.
Figure 2: Squared error (top), classification error with a threshold of zero (center), and minimal classification error using the ideal threshold (bottom), of the minimizer of (1) as a function of the number of unlabeled points. For each error measure and sample size, the bandwidth minimizing the test error was used, and is plotted.
ideal classification performance is achieved with a significantly larger bandwidth than the bandwidth
minimizing the squared loss, i.e. when the predictor is even flatter.
4.2 Probabilistic Interpretation, Exit and Hitting Times
As mentioned above, the Laplacian regularization method (1) has a probabilistic interpretation in terms of a random walk on the weighted graph. Let x(t) denote a random walk on the graph with transition matrix M = D⁻¹W, where D is a diagonal matrix with D_ii = Σ_j W_ij. Then, for the binary classification case with y_i = ±1 we have [15]:

ŷ(x_i) = 2 Pr[ x(t) hits a point labeled +1 before hitting a point labeled −1 | x(0) = x_i ] − 1
We present an interpretation of our analysis in terms of the limiting properties of this random walk. Consider, for simplicity, the case where the two classes are separated by a low-density region. Then, the random walk has two intrinsic quantities of interest. The first is the mean exit time from one cluster to the other, and the other is the mean hitting time to the labeled points in that cluster. As the number of unlabeled points increases and σ → 0, the random walk converges to a diffusion process [12]. While the mean exit time then converges to a finite value corresponding to its diffusion analogue, the hitting time to a labeled point increases to infinity (as these become absorbing boundaries of measure zero). With more and more unlabeled data the random walk will fully mix, forgetting where it started, before it hits any label. Thus, the probability of hitting +1 before −1 will become uniform across the entire graph, independent of the starting location x_i, yielding a flat predictor.
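This hitting-time interpretation can be checked directly by simulation; the following sketch (ours) estimates ŷ(x_i) by running random walks with transition matrix M = D⁻¹W until a labeled point is hit:

    import numpy as np

    def hitting_predictor(W, pos_idx, neg_idx, start, n_walks=2000, max_steps=10_000, seed=0):
        # y(start) = 2 Pr[hit a +1 point before a -1 point | x(0) = start] - 1,
        # estimated by simulating the walk with transition matrix M = D^{-1} W.
        rng = np.random.default_rng(seed)
        M = W / W.sum(axis=1, keepdims=True)
        pos, neg = set(pos_idx), set(neg_idx)
        wins = 0
        for _ in range(n_walks):
            i = start
            for _ in range(max_steps):
                if i in pos:
                    wins += 1
                    break
                if i in neg:
                    break
                i = rng.choice(len(M), p=M[i])   # walks that time out count as misses here
        return 2.0 * wins / n_walks - 1.0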
5 Keeping σ Finite
At this point, a reader may ask whether the problems found in higher dimensions are due to taking the limit σ → 0. One possible objection is that there is an intrinsic characteristic scale for the data, σ₀, where (with high probability) all points at a distance ‖x_i − x_j‖ < σ₀ have the same label. If this is the case, then it may not necessarily make sense to take values of σ < σ₀ in constructing W.

However, keeping σ finite while taking the number of unlabeled points to infinity does not resolve the problem. On the contrary, even the one-dimensional case becomes ill-posed in this case. To see this, consider a function y(x) which is zero everywhere except at the labeled points, where y(x_j) = y_j. With a finite number of labeled points of measure zero, I^{(σ)}(y) = 0 in any dimension
[Figure 3 graphics: three panels (50 points, 500 points, 3500 points) plotting y against x.]
Figure 3: Minimizer of (1) for a 1-D problem with a fixed σ = 0.4, two labeled points, and an increasing number of unlabeled points.
and for any fixed σ > 0. While this limiting function is discontinuous, it is also possible to construct a sequence of continuous functions y_ε that all satisfy the constraints and for which I^{(σ)}(y_ε) → 0 as ε → 0. This behavior is illustrated in Figure 3. We generated data from a mixture of two 1-D Gaussians centered at the origin and at x = 4, with one Gaussian labeled −1 and the other +1. We used two labeled points at the centers of the Gaussians and an increasing number of randomly drawn unlabeled points. As predicted, with a fixed σ, although the solution is reasonable when the number of unlabeled points is small, it becomes flatter, with sharp spikes on the labeled points, as u → ∞.
6 Fourier-Eigenvector Based Methods
Before we conclude, we discuss a different approach for SSL, also based on the graph Laplacian, suggested by Belkin and Niyogi [3]. Instead of using the Laplacian as a regularizer, constraining candidate predictors y(x) non-parametrically to those with small I_n(y) values, here the predictors are constrained to the low-dimensional space spanned by the first few eigenvectors of the Laplacian: The similarity matrix W is computed as before, and the graph Laplacian matrix L = D − W is considered (recall D is a diagonal matrix with D_ii = Σ_j W_ij). Only predictors

ŷ(x) = Σ_{j=1}^{p} a_j e_j        (15)

spanned by the first p eigenvectors e_1, ..., e_p of L (with smallest eigenvalues) are considered. The coefficients a_j are chosen by minimizing a loss function on the labeled data, e.g. the squared loss:

(â_1, ..., â_p) = argmin Σ_{j=1}^{l} (y_j − ŷ(x_j))².        (16)
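A minimal sketch of the eigenvector method (15)-(16) (ours):

    import numpy as np

    def eigenvector_predictor(W, labeled_idx, y_labeled, p):
        # Fit (15)-(16): least-squares coefficients on the p smallest eigenvectors of L.
        L = np.diag(W.sum(axis=1)) - W
        _, eigvecs = np.linalg.eigh(L)            # columns sorted by ascending eigenvalue
        E = eigvecs[:, :p]
        a, *_ = np.linalg.lstsq(E[labeled_idx], y_labeled, rcond=None)
        return E @ a                              # predictions at all n points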
Unlike the Laplacian regularization method (1), the Laplacian eigenvector method (15)-(16) is well posed in the limit u → ∞. This follows directly from the convergence of the eigenvectors of the graph Laplacian to the eigenfunctions of the corresponding Laplace-Beltrami operator [10, 4].

Eigenvector-based methods were shown empirically to provide competitive generalization performance on a variety of simulated and real-world problems. Belkin and Niyogi [3] motivate the approach by arguing that "the eigenfunctions of the Laplace-Beltrami operator provide a natural basis for functions on the manifold and the desired classification function can be expressed in such a basis". In our view, the success of the method is actually not due to data lying on a low-dimensional manifold, but rather due to the low-density separation assumption, which states that different class labels form high-density clusters separated by low-density regions. Indeed, under this assumption and with sufficient separation between the clusters, the eigenfunctions of the graph Laplace-Beltrami operator are approximately piecewise constant in each of the clusters, as in spectral clustering [12, 11], providing a basis for a labeling that is constant within clusters but variable across clusters. In other settings, such as data uniformly distributed on a manifold but without any significant cluster structure, the success of eigenvector-based methods critically depends on how well the unknown classification function can be approximated by a truncated expansion with relatively few eigenvectors.
We illustrate this issue with the following three-dimensional example: Let p(x) denote the uniform density in the box [0, 1] × [0, 0.8] × [0, 0.6], where the box lengths are different to prevent eigenvalue multiplicity. Consider learning three different functions, y₁(x) = 1_{x₁>0.5}, y₂(x) = 1_{x₁>x₂/0.8} and y₃(x) = 1_{x₂/0.8>x₃/0.6}. Even though all three functions are relatively simple, all having a linear separating boundary between the classes on the manifold, as shown in the experiment described in Figure 4, the eigenvector-based method (15)-(16) gives markedly different generalization performances on the three targets. This happens both when the number of eigenvectors p is set to p = l/5, as suggested by Belkin and Niyogi, as well as for the optimal (oracle) value of p selected on the test set (i.e. a "cheating" choice representing an upper bound on the generalization error of this method).
[Figure 4 graphics: prediction error (%) versus the number of labeled points for p = #labeled points/5 and for the optimal p, and approximation error versus the number of eigenvectors.]
Figure 4: Left three panels: Generalization performance of the eigenvector method (15)-(16) for the three different functions described in the text. All panels use n = 3000 points. Prediction counts the number of sign agreements with the true labels. Rightmost panel: best fit when many (all 3000) points are used, representing the best we can hope for with a few leading eigenvectors.
The reason for this behavior is that y₂(x), and even more so y₃(x), cannot be as easily approximated by the very few leading eigenfunctions; even though they seem "simple" and "smooth", they are significantly more complicated than y₁(x) in terms of the measure of simplicity implied by the eigenvector method. Since the density is uniform, the graph Laplacian converges to the standard Laplacian, and its eigenfunctions have the form φ_{i,j,k}(x) = cos(iπx₁) cos(jπx₂/0.8) cos(kπx₃/0.6), making it hard to represent simple decision boundaries which are not axis-aligned.
7 Discussion
Our results show that a popular SSL method, the Laplacian regularization method (1), is not well-behaved in the limit of infinite unlabeled data, despite its empirical success in various SSL tasks. The empirical success might be due to two reasons.
First, it is possible that with a large enough number of labeled points relative to the number of unlabeled points, the method is well behaved. This regime, where the number of both labeled and unlabeled points grows while l/u is fixed, has recently been analyzed by Wasserman and Lafferty [9]. However, we do not find this regime particularly satisfying, as we would expect that having more unlabeled data available should improve performance, rather than require more labeled points or make the problem ill-posed. It also places the user in a delicate situation of choosing the "just right" number of unlabeled points without any theoretical guidance.
Second, in our experiments we noticed that although the predictor ŷ(x) becomes extremely flat, in binary tasks it is still typically possible to find a threshold leading to good classification performance. We do not know of any theoretical explanation for such behavior, nor how to characterize it. Obtaining such an explanation would be very interesting, and in a sense crucial to the theoretical foundation of the Laplacian regularization method. On a very practical level, such a theoretical understanding might allow us to correct the method so as to avoid the numerical instability associated with flat predictors, and perhaps also make it appropriate for regression.
The reason that the Laplacian regularizer (1) is ill-posed in the limit is that the first-order gradient is not a sufficient penalty in high dimensions. This fact is well known in spline theory, where the Sobolev Embedding Theorem [1] indicates one must control at least (d+1)/2 derivatives in ℝ^d. In the context of Laplacian regularization, this can be done using the iterated Laplacian: replacing the graph Laplacian matrix L = D − W, where D is the diagonal degree matrix, with L^{(d+1)/2} (the matrix raised to the (d+1)/2 power). In the infinite unlabeled data limit, this corresponds to regularizing all order-(d+1)/2 (mixed) partial derivatives. In the typical case of a low-dimensional manifold in a high-dimensional ambient space, the order of iteration should correspond to the intrinsic, rather than ambient, dimensionality, which poses a practical problem of estimating this usually unknown dimensionality. We are not aware of much practical work using the iterated Laplacian, nor of a good understanding of its appropriateness for SSL.
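A sketch of the iterated-Laplacian penalty suggested here (ours; it simply raises L to the integer power ⌈(d+1)/2⌉, sidestepping fractional matrix powers):

    import numpy as np
    from numpy.linalg import matrix_power

    def iterated_laplacian_penalty(W, y, d):
        # y^T L^k y with k = ceil((d+1)/2); integer rounding is our simplification.
        L = np.diag(W.sum(axis=1)) - W
        k = int(np.ceil((d + 1) / 2))
        return y @ matrix_power(L, k) @ y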
A different approach leading to a well-posed solution is to also include an ambient regularization term [5]. However, the properties of the solution, and in particular its relation to various assumptions about the "smoothness" of y(x) relative to p(x), remain unclear.
Acknowledgments The authors would like to thank the anonymous referees for valuable suggestions. The research of BN was supported by the Israel Science Foundation (grant 432/06).
References
[1] R.A. Adams, Sobolev Spaces, Academic Press (New York), 1975.
[2] A. Azran, The rendezvous algorithm: multiclass semi-supervised learning with Markov random walks, ICML, 2007.
[3] M. Belkin, P. Niyogi, Using manifold structure for partially labelled classification, NIPS, vol. 15, 2003.
[4] M. Belkin and P. Niyogi, Convergence of Laplacian eigenmaps, NIPS, vol. 19, 2007.
[5] M. Belkin, P. Niyogi and S. Sindhwani, Manifold regularization: a geometric framework for learning from labeled and unlabeled examples, JMLR, 7:2399-2434, 2006.
[6] Y. Bengio, O. Delalleau, N. Le Roux, Label propagation and quadratic criterion, in Semi-Supervised Learning, Chapelle, Schölkopf and Zien, editors, MIT Press, 2006.
[7] O. Bousquet, O. Chapelle, M. Hein, Measure based regularization, NIPS, vol. 16, 2004.
[8] M. Hein, Uniform convergence of adaptive graph-based regularization, COLT, 2006.
[9] J. Lafferty, L. Wasserman, Statistical analysis of semi-supervised regression, NIPS, vol. 20, 2008.
[10] U. von Luxburg, M. Belkin and O. Bousquet, Consistency of spectral clustering, Annals of Statistics, vol. 36(2), 2008.
[11] M. Meila, J. Shi, A random walks view of spectral segmentation, AI and Statistics, 2001.
[12] B. Nadler, S. Lafon, I.G. Kevrekidis, R.R. Coifman, Diffusion maps, spectral clustering and eigenfunctions of Fokker-Planck operators, NIPS, vol. 18, 2006.
[13] B. Schölkopf, A. Smola, Learning with Kernels, MIT Press, 2002.
[14] D. Zhou, O. Bousquet, T. Navin Lal, J. Weston, B. Schölkopf, Learning with local and global consistency, NIPS, vol. 16, 2004.
[15] X. Zhu, Z. Ghahramani, J. Lafferty, Semi-supervised learning using Gaussian fields and harmonic functions, ICML, 2003.
Breaking Boundaries: Active Information Acquisition
Across Learning and Diagnosis
Ashish Kapoor and Eric Horvitz
Microsoft Research
1 Microsoft Way
Redmond, WA 98052
Abstract
To date, the processes employed for active information acquisition during periods
of learning and diagnosis have been considered as separate and have been applied
in distinct phases of analysis. While active learning centers on the collection of
information about training cases in order to build better predictive models, diagnosis uses fixed predictive models for guiding the collection of observations about
a specific test case at hand. We introduce a model and inferential methods that
bridge these phases of analysis into a holistic approach to information acquisition
that considers simultaneously the extension of the predictive model and the probing of a case at hand. The bridging of active learning and real-time diagnostic
feature acquisition leads to a new class of policies for learning and diagnosis.
1 Introduction
Consider a real-world problem scenario where the challenge is to diagnose a patient who presents
with several salient symptoms by performing inference with a probabilistic diagnostic model. The
diagnostic model is trained from a database of patients, where training cases may have missing features. Assume we have at our discretion an evidential budget that enables us to acquire additional
information so as to make a good diagnosis. Traditionally, such a budget has been spent solely on
performing real-time observations about the case at hand, for example, by carrying out additional
tests on a patient presenting to a physician with some previously identified complaints, signs, and
symptoms. However, there lies another opportunity to improving diagnostic models?that of allocating some or all of the evidential budget to extending some portion of the training database, and
then learning an updated diagnostic model for use in inference about the case at hand. This broader
perspective on diagnostic reasoning has real-world implications. For instance, investing efforts to
observe features that are currently missing in training cases, such as missing details on presenting
symptoms or on outcomes of prior patient cases, might preempt the need for carrying out a painful
or risky medical test on the patient at hand. We focus on the promise of developing methods that
jointly consider informational value and costs of acquiring information about both the case at hand
and about cases in the training library, and weighing the potential contributions of each of these
potential sources of information during diagnosis.
To date, the process of diagnosis has focused on the use of a fixed predictive model, which in turn is
used to generate recommendations for the observations to gather. Similarly, efforts in active learning
have focused on gathering information about the training cases in order to build better predictive
models. The active collection of the different types of missing information under a budget, spanning
methods that have been referred to separately as learning and diagnosis, is graphically depicted in
Figure 1. While diagnosis-time information acquisition methods focus on acquiring information
about the test case at hand, induction-time methods focus on collecting information about training
cases for learning a good predictive model. We shall describe methods that weave together these two
perspectives on information acquisition that have been handled separately to date, yielding a holistic
approach to evidence collection in the context of the larger learning and prediction system. The
[Figure 1 diagram: training cases (possibly incomplete) and a diagnostic challenge (possibly incomplete), connected through induction-time information acquisition, a predictive model, and diagnosis-time information acquisition.]
Figure 1: Illustration of induction-time and diagnosis-time active information acquisition.
Induction-time active learning focuses on acquiring information for the pool of data used to train
a diagnostic model; diagnosis-time information acquisition focuses on the next best observations to
acquire from the test case at hand.
methodology applies to situations where there is a single diagnostic challenge, as well as broader
conceptions of diagnosis over streams of cases over time.
We take a decision-theoretic perspective on the joint consideration of observations about the case at
hand and about options for extending the training set. We start by directly modeling how the training
data might affect the outcome of the predictions about test cases at hand, thus, relaxing the common
assumption that a predictive model is fixed during diagnosis. Real-world diagnostic applications
have made this assumption to date, often employing an information-theoretic or decision-theoretic criterion, such as value of information (VOI), during diagnosis to collect data about the
hand. The holistic method can guide the acquisition of data for training cases that are missing
arbitrary combinations of features and labels. The methodology extends active learning beyond the
situation where training is done from a case library of completely specified instances, where each
case contains a complete set of observations. We shall show how the more holistic active-learning
approach allows for a fine-grained triaging of information to acquire by deliberating in parallel about
the value of acquiring missing information from cases either in the training or the test set.
2 Related Research
As we mentioned, efforts to date on the use of active learning for training classification models have
largely focused on the task of acquiring labels, and assume that all of the features are observed in
advance. Popular heuristics for selecting unlabeled data points include uncertainty in classification
[1, 2], reduction in version space for SVMs [13], expected informativeness [9], disagreement among
a committee of classifiers [3], and expected reduction in classification [10]. There has been limited
work on methods for actively selecting missing features for instantiation. Lizotte et al. [8] tackle the
problem of selecting features in a budgeted learning scenario. Specifically, they solve a problem that
can be viewed as the inverse of traditional active learning; given class labels, they seek to determine
the best features to compute for each instance such that a good predictive model can be trained
under a budget. Even rarer are attempts to unify active acquisition of features with the acquisition
of missing class labels. Research on this more general active learning includes work with graphical
probabilistic models by Tong and Koller [14] and by Saar-Tsechansky et al. [11].
Several methods have been used for guiding data acquisition at diagnosis time. The goal is to identify
the best additional observations to acquire for making inferences and for ultimately taking actions
given inferences about the class of a test case at hand [4, 5, 6, 7, 12]. The best tests and observations
to make are computed with methods that compute or approximate the VOI. VOI for each potential
new observation is computed by considering the probability distribution over the class of the case at
focus of attention of based on observations made so far, and the uncertainties expected after making
each proposed observation. New evidence to collect is triaged by considering the expected utility
of the best immediate actions versus the actions taken after the new observations, considering the
costs of making each proposed observation. Thus, VOI balances the informational benefits and the
observational costs of the new observations under uncertainty.
3 Approach
We shall now describe a Bayesian model that smoothly combines induction-time and diagnosis-time
information acquisition. The methods move beyond the task of parameter and structure estimation
explored in the prior studies of active learning and directly model statistical relationships amongst
the data points.
Assume that we are given a training corpus with n independent training instances D_i = {(x_i, t_i)}. Here, x_i are the d-dimensional features and their labels are denoted as t_i. The training cases can be incomplete; not all of the labels and features in the training set D are observed. Hence, we represent D_i = D_i^o ∪ D_i^h, where D_i^o and D_i^h represent the mutually exclusive subsets of observed and unobserved components, respectively, in the i-th data instance.

Let us consider a test data point x_*, where our task is to recover the label t_* for the test case¹. Similar to the training cases, we again assume that x_* is not fully observed and that there are unobserved features. Given a budget for acquiring information, our goal is to determine the missing components either from the training set or among the missing features in the test case so that we make the best prediction on t_*.

¹ For simplicity, we limit our discussion to a single test point; the analysis described generalizes directly to considering a larger set of test points.
Approaches to active learning leverage the statistical relationships among sets of observations within cases with their class labels. The computation of expected value of information has been carried out with an information-theoretic method, such as with procedures that seek to minimize entropy or maximize information gain. We compute such measures by directly modeling the conditional density of the test label t_*, given all that has been observed:

p(t_* | x_*^o, D^o) = p(t_* | x_*^o, D_1^o, ..., D_n^o)        (1)

Here, x_*^o represents the observed components of the test case and we define the set of all observed variables in the training corpus as D^o = {D_1^o, ..., D_n^o} (similarly, we will use D^h = {D_1^h, ..., D_n^h}).
and the test case is a departure from most existing classification methods. Given a training corpus,
most methods try to fit a model or learn a classifier that best explains the training data and use this
learned model to classify test cases. This two-phase approach introduces a separation in information
acquisition for training and testing; consequently, active information acquisition is limited either
to real-time diagnosis or to training-time active learning and does not fully allow modeling of the
joint statistics for the training and the test data. Directly modeling the dependency of the test label
t? on the training and the test data as described in Equation 1 allows us to reason about next best
information to observe by considering how posterior distributions changes with the acquisition of
missing information. Assuming that we can compute predictive distributions as given in Equation
1, the next section describes how we can utilize such models to actively seek information.
3.1 Decision-Theoretic Selective Sampling
We are interested in selectively sampling unobserved information, either about the training set or the test case, in order to make a better prediction. If the available budget allows for multiple observations, our goal is to determine an optimal set of variables to observe. However, performing such non-myopic analyses is prohibitively expensive for many active learning heuristics [7]. In practice, the selective sampling task is performed in a greedy manner. That is, starting from an empty set, the algorithm selects one element at a time according to the active learning criterion. We note that Krause et al. [6] provide a detailed analysis of myopic and non-myopic strategies, and describe situations where losses in a greedy approach can be bounded. In this work, we adopt a greedy strategy.

The decision-theoretic selective sampling criterion we use estimates the value of acquiring information, which in turn can be used as a guiding principle in active learning. We can quantify such value in terms of information gain. Intuitively, knowing one more bit of information may tighten
a probability distribution over the class of the test case. On the other hand, observations are acquired at a price. By considering this reduction in uncertainty along with the cost of obtaining such
information, we can formulate a selective sampling criterion.
Let us assume that we have a probabilistic model and appropriate inference procedures that allow us to compute the conditional distribution of the test label t_* given all the observed entities D^o (Equation 1). Then, such computations can be used for determining the expected information gain. Expected information gain is formally defined as the expected reduction in uncertainty over t_* as we observe more evidence. In order to balance the benefit of observing a feature/label with the cost of its observation, we use expected return on information (ROI) as a selection criterion that aims to maximize information gain per unit cost:

ROI:  d* = argmax_{d ∈ D^h} [ H(t_* | D^o) − E_d[ H(t_* | d ∪ D^o) ] ] / C(d)        (2)
Here, H(·) denotes the entropy and E_d[·] is the expectation with respect to the current model. Note that here d can either be a feature value or a label, and C(·) denotes the cost associated with observing information d. This strategy differs from the VOI criterion, which aims to minimize the total operational cost of the system. Unlike VOI, the proposed criterion does not require that the gain from selective sampling and the cost of observation have the same currency; consequently, ROI can be used more generally. Note that the proposed framework for active information acquisition easily extends to scenarios where the cost and the benefits of the system can be measured in a single currency and VOI can be applied. Also note that while the ROI formulation we introduce considers a single test, similar computations can be done for a larger set of test points by considering the joint entropy over the test labels. Without the introduction of assumptions of conditional independence that are not overly restrictive (described below), the joint formulation can be computed as the sum of the ROI evaluated for each of the test cases. We now describe how we can model the joint statistics among the training and the test cases simultaneously.
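A schematic sketch of the greedy ROI selection of Equation 2 (ours; `posterior` and the candidate outcome probabilities are stand-ins for the model's inference routines described in Section 3.3):

    import numpy as np

    def entropy(p):
        # Entropy of a Bernoulli distribution over the binary test label t_*.
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return float(-(p * np.log(p) + (1 - p) * np.log(1 - p)))

    def roi_select(candidates, posterior, observed, cost):
        # Greedy ROI (Equation 2). `posterior(obs)` returns p(t_* = 1 | obs);
        # `candidates` maps each missing item d to [(value, predictive prob), ...].
        h_now = entropy(posterior(observed))
        best, best_roi = None, -float("inf")
        for d, outcomes in candidates.items():
            exp_h = sum(prob * entropy(posterior({**observed, d: v}))
                        for v, prob in outcomes)
            roi = (h_now - exp_h) / cost(d)
            if roi > best_roi:
                best, best_roi = d, roi
        return best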
3.2 Modeling Joint Dependencies
Let us consider a probabilistic model to describe the joint dependencies among the features and the label of an instance. If we denote the parameters of the model with θ, then, given the training data, the classical approach in learning the model would attempt to find a best value θ̂ according to some optimization criterion. However, in our case we are interested in modeling joint dependencies among all of the data (both training and testing). Consequently, in our analysis, we consider the model parameters θ as a random variable over which we marginalize in order to generate a posterior predictive distribution. Formally, we rewrite Equation 1 as:
p(t_* | x_*^o, D^o) = ∫_θ p(t_*, θ | x_*^o, D^o),        (3)
where the Bayesian treatment of θ allows us to marginalize over θ and model direct statistical dependencies between the different data points; consequently, we can determine how different features and labels directly affect the test prediction. Note that considering model parameters θ as random variables is consistent with principles of Bayesian modeling and is similar in spirit to prior research, such as [9] and [15].
In order to compute the integral in Equation (3), we need to characterize p(t_*, θ | x_*^o, D^o), which in turn defines a joint distribution over all of the data instances and the parameters θ of the model. First, we consider individual data instances and model the joint distribution of features and labels of the instance as a Markov Random Field (MRF)². Then, assuming conditional independence between data points³ given the model parameters, the joint distribution that includes all the instances and the parameters θ can be written as:

p(D, θ) ∝ p(θ) ∏_{i=1}^{n} (1/Z(θ)) exp[ θᵀ φ(x_i, t_i) ]

² We limit ourselves to the case where both the labels and the features are binary (0 or 1).
³ The conditional independence assumption also allows us to compute ROI for a set of test cases by summing individual ROI values.
Here, Z(θ) is the partition function that normalizes the distribution, and θ are the parameters of the model, with a zero-mean Gaussian prior p(θ) ∼ N(0, Σ). Also, φ(x, t) = [t, t·x_1, ..., t·x_d, φ(x)] is the appended feature set and is in correspondence with the underlying undirected graphical model. In theory, the features can be functions of all the individual features of x. However, we restrict ourselves to a Boltzmann machine that has individual and pairwise features only and corresponds to an undirected graphical model G_F = {V_F, E_F}, where each node in V_F corresponds to an individual feature and the edges in E_F between the nodes correspond to the pairwise features. A fully connected G_F graph can represent an arbitrary distribution. However, the computational complexity of situations involving large numbers of features may require pruning of the graph to achieve tractability.
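For concreteness, a small sketch (ours; one plausible reading of the feature map, with the pairwise terms taken as products over the graph edges) of the appended feature vector φ(x, t) and the resulting log-potential:

    import numpy as np

    def appended_features(x, t, edges):
        # phi(x, t) = [t, t*x_1, ..., t*x_d, pairwise products over the graph edges]
        pairwise = np.array([x[i] * x[j] for (i, j) in edges])
        return np.concatenate(([t], t * x, pairwise))

    def log_potential(theta, x, t, edges):
        # Unnormalized log-probability theta^T phi(x, t) of a single instance.
        return float(theta @ appended_features(x, t, edges))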
Using Bayes rule and the conditional independence assumption, Equation 3 reduces to:

p(t_* | x_*^o, D^o) = ∫_θ p(t_* | x_*^o, θ) · p(θ | D^o)        (4)

The first term p(t_* | x_*^o, θ) inside the integral can be interpreted as the likelihood of t_* given the observed components x_*^o of the test case and the parameter θ. Similarly, p(θ | D^o) is the posterior distribution over the parameter θ given all the observations in the training corpus. We review details of these computations below.
3.3 Computational Challenges
Given the set of all observations D^o, we first seek to infer the posterior distribution p(θ | D^o), which can be written as:

p(θ | D^o) ∝ p(θ) ∏_{i=1}^{n} ∫_{D_i^h} p(D_i^o, D_i^h | θ)

Computing the posterior is intractable, as it is a product of the Gaussian prior with non-Gaussian data likelihood terms. In general, the problem of inferring model parameters in an undirected graphical model is a hard one. Welling and Parise [15] propose the Bethe-Laplace approximation to infer model parameters for a Markov Random Field. In a similar spirit, we employ a Laplace approximation that uses a Bethe or a tree-structured approximation, albeit with data that is partially observed. The idea behind the Laplace approximation is to fit a Gaussian at the mode θ̂ of the exact posterior distribution, p(θ | D^o) ≈ N(θ̂, Σ̂), where:

Σ̂ = E_θ̂[ φ(x, t) φ(x, t)ᵀ ] − E_θ̂[ φ(x, t) ] E_θ̂[ φ(x, t) ]ᵀ
Here, E_θ̂[·] denotes the expectation with respect to p(x, t | θ̂). Note that it is non-trivial to find the mode θ̂ as well as the covariance matrix Σ̂, as the underlying graphical structure is complex. While the covariance Σ̂ is approximated using the linear response algorithm [15], the mode θ̂ is usually found by running a gradient descent procedure that minimizes the negative log of the posterior, L = −log p(θ | D). The gradients of this objective can be succinctly written as:

∇L = Σ⁻¹ θ − Σ_{i=1}^{n} [ E_{θ, D_i^o}[ φ(x, t) ] − E_θ[ φ(x, t) ] ]        (5)
Here, E_{θ, D_i^o}[·] is the expectation with respect to the distribution conditioned on the observed variables: p(x | θ, D_i^o). Note that computing the first expectation term is trivial for the fully observed case. However, partially observed cases require exact inference. Similarly, the computation of the second expectation term in the gradient requires exact inference. For fully connected graphs, exact inference is hard and we must rely on approximations.

One approach is to approximate G_F by a tree, which we denote as G_MI, that preserves an estimation of mutual information among variables. Specifically, G_MI is the maximal spanning tree of an undirected graphical model which has the same structure as the original graph and with edges weighted by empirical mutual information.

We have the choice of either running loopy belief propagation (BP) for approximate inference on the full graph G_F or doing exact inference on the tree approximation G_MI. As the features φ(x, t) only consist of single and pairwise variables, belief propagation directly provides the required expectations over the features of the MRF. In our work, we observed better results when using loopy BP;
however, it was much faster to run inference on the tree-structured graphs. Consequently, we used loopy BP to compute the posterior p(θ | D^o) given the training data. Also note that given the Gaussian approximation to p(θ | D^o), the required predictive distribution p(t_* | x_*, D^o) can be computed using sampling [15]. Finally, ROI computations require that for each d ∈ D^h, we infer p(t_* | d ∪ D^o) for d = 0 and d = 1 and compute the expected conditional entropy. This repeated inference for all the missing bits in the data can be time consuming; thus, the tree-structured approximation was used to do all ROI computations and to determine the next bit of information to seek.
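A sketch of the mode-finding loop implied by Equation 5 (ours; `expect_clamped` and `expect_free` are placeholders for the BP or tree-based routines that return the two feature expectations):

    import numpy as np

    def grad_neg_log_posterior(theta, observed_list, Sigma_inv, expect_clamped, expect_free):
        # Gradient (5): prior term minus, per instance, clamped-minus-free expectations.
        grad = Sigma_inv @ theta
        e_free = expect_free(theta)                      # E_theta[phi(x, t)]
        for obs in observed_list:                        # obs plays the role of D_i^o
            grad -= expect_clamped(theta, obs) - e_free
        return grad

    def fit_mode(theta0, observed_list, Sigma_inv, expect_clamped, expect_free,
                 lr=0.05, steps=200):
        # Plain gradient descent to the posterior mode theta_hat.
        theta = theta0.copy()
        for _ in range(steps):
            theta = theta - lr * grad_neg_log_posterior(theta, observed_list, Sigma_inv,
                                                        expect_clamped, expect_free)
        return theta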
4 Experiments and Results
We shall compare the proposed active information acquisition, which does not distinguish between induction-time and diagnosis-time analyses, against other alternatives on a synthetic dataset and two real-world applications. Previewing our results, we find that the proposed scheme outperforms its competitors in terms of accuracy over the test points and provides a significant boost at considerably less incurred cost. The significant gains we obtained over approaches that limit themselves to separately considering induction-time or diagnosis-time information acquisition suggest that the holistic perspective can provide broader and more efficient options to acquire information.
4.1 Experiments with Synthetic Data
We first sought to evaluate the basic operation of the proposed framework with a synthetic training
set of Boolean data generated by randomly sampling labels with a fair coin toss. The features of the
data are 14 dimensional and consist of partially informative and partially random features. Out of
the 14 features, seven are randomly generated using a fair coin toss, while the rest of the features are
generated by multiplying the label with all of the seven randomly generated features individually.
We note that, even with full observations and a perfect data model, for 0.78% of the cases the prediction cannot be better than random. This arises whenever all of the randomly generated bits are 0, which in turn blocks any information about the label from being observed. For the rest of the cases,
perfect prediction is feasible with only seven features. We considered a dataset with 100 examples
for experiments on this synthetic data. Further, we consider a 50-50 train and test split and assume
that 25% of the total bits are unobserved and that the target of the selective sampling procedure is to
determine the best next observations to make so as to best predict the labels for the test cases.
We assume that the cost of observing a label in the training data is directly proportional to the number of features that can be computed for every data point (that is, c(d) = Dim). The features, drawn from either the training or testing set, are much cheaper and have a unit cost of observation. We set the costs of observing labels of test cases to infinity; consequently, the active learning methods never observe them.
We compared the joint selection (Diagnosis+Induction) advocated in this work with 1) diagnosis-time active information acquisition (Diagnosis), where information bits are sampled only from the test case at hand, and 2) induction-time active acquisition (Induction). In addition, we considered two different flavors of induction-time active acquisition where either only features or only labels were allowed to be sampled. We refer to these two flavors as Induction (features only) and Induction (labels only), respectively. In all of the cases, we used ROI for active learning as described in Section 3.1. Finally, we compare these methods with the baseline of a random sampling strategy.
Figure 2 (left) shows the recognition results with increasing costs during active acquisition of information. We plot the overall classification accuracy over the test set on the y-axis and the cost
incurred on the x-axis. Each point on the graph signifies an average recognition on the test set over
10 random training and test splits. From the figure, we see that all sampling strategies show increases
in accuracy as the cost increases, but Diagnosis+Induction has advantages over other methods. First,
Diagnosis+Induction obtains better recognition results for a fixed incurred cost, outperforming the
diagnosis-time sampling strategy as well as all the flavors of induction-time information acquisition.
Second, the Diagnosis+Induction sampling strategy levels off to the maximum performance fairly
quickly when compared to other methods. Performance of Diagnosis only and Random sampling are
noticeably worse than the other alternatives. Also, we note that Induction (features only) stops abruptly for the synthetic case, as most of the features in the learning problem are uninformative; after initial rounds the algorithm stops sampling. In summary, all of the active methods for active
[Figure 2 graphics: three panels (Boolean, Pathfinder, Voting) plotting accuracy on the test set against incurred cost for Diagnosis+Induction, Diagnosis, Induction, Induction (features only), Induction (labels only), and Random.]
Figure 2: Comparison of various selective selection schemes (best viewed in color).
In summary, all of the active methods for information acquisition do better than random; however,
the Diagnosis+Induction strategy achieves the best combination of recognition performance and cost efficiency.
In order to analyze different sampling methods, we look at the sampling behavior of different active
learning mechanisms. Figure 3 (left) illustrates the statistics of sampled information at the termination of the active learning procedure. The bars with different shades denote the sampling distribution
amongst training labels, training features and the test features, which are generated by averaging
over the 10 runs. While Induction (features only), Induction (labels only), and Diagnosis acquire
only training features, training labels, and test features, respectively, the Diagnosis+Induction approach acquires information from several kinds of sources. We note
that the random sampling strategy also samples from both labels and features; however, as indicated
by Figure 2 (left) this strategy is not optimal as it does not take the cost structure into account. Diagnosis+Induction is the most flexible scheme and it aims to acquire information from all facets of
the classification problem by properly considering gains in predictive power and balancing it with
the cost of information acquisition.
4.2 Experiment on Pathfinder Data
The availability of and access to large medical databases enables us to build better predictive models for
various diagnostic purposes. While most efforts have focused on active data acquisition for diagnosis
only [5], our framework promises a broader set of options to a diagnostician, who can reason about
whether to perform additional tests on a patient or to seek more information about the training set.
We analyze one such scenario, where the goal is to build a predictive model to guide surgical pathologists who study the lymphatic system in diagnosing lymph-node diseases. This
dataset assigns a label of "benign" or "malignant" to lymph-node follicles from 48 subjects. The
features are sets of histological features, viewed at low and high power under the microscope,
that an expert surgical pathologist believed could be informative to that label. The proposed holistic perspective on active learning supports the scenario where pathologists in pursuit of a diagnosis
need to determine the next observations either from the test case at hand or consider querying for
historical records in order to successfully label the lymph node (or, more generally, diagnose the
disease). For this experiment, we consider random splits with 30 training examples and 18 test cases, and
again assume that 25% of the total bits are unobserved. The experimental protocol is the same as
for the synthetic data: we report results averaged over 10 runs, and the test set is used to compare
the recognition performance.
The results on the Pathfinder data are shown in Figure 2 (middle). As before, the x-axis and y-axis
denote costs incurred and overall classification accuracy on the test data over 10 random training
and test splits. Again we see that the Diagnosis+Induction performs better than the other methods
and attains high accuracy at a fairly low cost. However, one difference in this experiment is the fact
that the Random sampling strategy outperforms active Diagnosis and active Induction (features only).
This suggests that the labels in the training cases are highly informative compared to the features. This is in turn reflected by the similar performance of Diagnosis+Induction, Induction,
and Induction (labels only) towards the end of the active learning run. Upon further analysis, we found that
Diagnosis+Induction, Induction and Induction (labels only) end up selecting similar training labels,
consequently reaching similar performance towards the end. This further reinforces the validity of
the hypothesis that the training labels are very informative. On analyzing the sampling behavior of
different methods (Figure 3 (middle)), we again find that the Diagnosis+Induction approach
acquires information from different kinds of sources. However, we also note that the proportion of sampled training labels is remarkably small and very similar for both Diagnosis+Induction
[Figure 3: three bar charts (Boolean, Pathfinder, Voting) showing the distribution of sampled information over training labels, training features, and test features.]
Figure 3: Statistics of different information selected in active learning.
and Induction, hinting that there might be particular cases that are highly informative about the prediction task. In summary, Diagnosis+Induction again provides the best recognition rates at low costs,
demonstrating the effectiveness of the unified perspective on active learning.
4.3 Experiments on Congressional Voting Records
Surveys are popular information-gathering tools; however, acquiring information by surveying can
be costly, and the resulting data are often fraught with missing information. Intelligent information
acquisition with active learning promises efficient use of limited resources. The holistic perspective
on data acquisition can help avoid probing subjects for potentially risky or expensive questions by
considering accessible information (for example, information such as demographics, age, etc.) or
initially unavailable labels about the past survey takers.
We analyze a similar survey task of determining affiliation of subjects based on incomplete historical
data. This data set includes votes for each of the U.S. House of Representatives Congressmen on the
16 key votes on United States policies. There are 435 data instances, each classified as Democrat
or Republican, where each of the 16 attributes represents a Yes or No on a vote. Further, out of
the 435 × 16 feature values, 392 are missing. The presence of missing features makes this a
challenging active-learning problem. We consider 10 random splits with 100 training instances and
335 test cases and report results averaged over these splits.
Experimental results on the voting data are shown in Figure 2 (right). Each point on the graph signifies an average recognition on the test set over 10 random training and test splits. Similar to the
earlier experiments, we see improvements in recognition accuracy on the test set for the different sampling schemes. Diagnosis only, Induction (features only), and Random sampling
perform noticeably worse than the other alternatives. Diagnosis+Induction again shows superior performance, attaining high accuracy at a relatively low cost. Upon analyzing the statistics of sampled
information (Figure 3 (right)) at the termination of the active learning procedure, we see that while
the Diagnosis+Induction approach acquires information from different kinds of sources,
it differs significantly from the Random strategy, whose sampling distribution is close to the
true distribution of the available information bits. By considering information gain and the cost
structure through ROI, Diagnosis+Induction is able to achieve the best combination of recognition
performance and cost efficiency.
5 Conclusion
We introduced a scheme for active data acquisition that removes the separation between diagnosis-time and induction-time active information acquisition. The task of diagnosis changes qualitatively
with the use of methods that take a more holistic perspective on active learning, simultaneously
considering information acquisition for extending a case library as well as for identifying the next
best features to observe about the diagnostic challenge at hand. We ran several experiments that
showed the effectiveness of combining diagnosis-time and induction-time active learning. We are
pursuing several related challenges and opportunities, including analysis of approximate inference
techniques and non-myopic extensions.
References
[1] N. Cesa-Bianchi, A. Conconi and C. Gentile (2003). Learning probabilistic linear-threshold
classifiers via selective sampling. COLT.
[2] S. Dasgupta, A. T. Kalai and C. Monteleoni (2005). Analysis of perceptron-based active learning.
COLT.
[3] Y. Freund, H. S. Seung, E. Shamir and N. Tishby (1997). Selective Sampling Using the Query
by Committee Algorithm. Machine Learning Volume 28.
[4] R. Greiner, A. Grove and D. Roth (2002). Learning Cost-Sensitive Active Classifiers. Artificial
Intelligence Volume 139(2).
[5] D. Heckerman, E. Horvitz, and B. N. Nathwani (1992). Toward Normative Expert Systems: Part
I The Pathfinder Project. Methods of Information in Medicine 31:90-105.
[6] P. Kanani and P. Melville (2008). Prediction-time Active Feature-value Acquisition for Customer
Targeting. NIPS Workshop on Cost Sensitive Learning.
[7] A. Krause, A. Singh and C. Guestrin (2008). Near-optimal Sensor Placements in Gaussian
Processes: Theory, Efficient Algorithms and Empirical Studies. JMLR Volume 9(2).
[8] D. Lizotte, O. Madani and R. Greiner (2003). Budgeted Learning of Naive-Bayes Classifiers.
UAI.
[9] D. MacKay (1992a). Information-Based Objective Functions for Active Data Selection. Neural
Computation Volume 4(4).
[10] N. Roy and A. McCallum (2001). Toward Optimal Active Learning through Sampling Estimation of Error Reduction. ICML.
[11] M. Saar-Tsechansky, P. Melville and F. Provost (2008). Active Feature-value Acquisition.
Management Science.
[12] V. S. Sheng and C. X. Ling (2006). Feature value acquisition in testing: a sequential batch test
algorithm. ICML.
[13] S. Tong and D. Koller (2001). Support Vector Machine Active Learning with Applications to
Text Classification. JMLR Volume 2.
[14] S. Tong and D. Koller (2001). Active learning for parameter estimation in Bayesian networks.
NIPS.
[15] M. Welling and S. Parise (2006) Bayesian Random Fields: The Bethe-Laplace Approximation.
UAI.
2,928 | 3,654 | FACTORIE: Probabilistic Programming
via Imperatively Defined Factor Graphs
Andrew McCallum, Karl Schultz, Sameer Singh
Department of Computer Science
University of Massachusetts Amherst
Amherst, MA 01003
{mccallum, kschultz, sameer}@cs.umass.edu
Abstract
Discriminatively trained undirected graphical models have had wide empirical
success, and there has been increasing interest in toolkits that ease their application to complex relational data. The power in relational models is in their repeated
structure and tied parameters; at issue is how to define these structures in a powerful and flexible way. Rather than using a declarative language, such as SQL
or first-order logic, we advocate using an imperative language to express various
aspects of model structure, inference, and learning. By combining the traditional,
declarative, statistical semantics of factor graphs with imperative definitions of
their construction and operation, we allow the user to mix declarative and procedural domain knowledge, and also gain significant efficiencies. We have implemented such imperatively defined factor graphs in a system we call FACTORIE,
a software library for an object-oriented, strongly-typed, functional language. In
experimental comparisons to Markov Logic Networks on joint segmentation and
coreference, we find our approach to be 3-15 times faster while reducing error by
20-25%, achieving a new state of the art.
1 Introduction
Conditional random fields [1], or discriminatively trained undirected graphical models, have become
the tool of choice for addressing many important tasks across bioinformatics, natural language processing, robotics, and many other fields [2, 3, 4]. While relatively simple structures such as linear
chains, grids, or fully-connected affinity graphs have been employed successfully in many contexts,
there has been increasing interest in more complex relational structure (capturing more arbitrary
dependencies among sets of variables, in repeated patterns) and interest in models whose variable-factor structure changes during inference, as in parse trees and identity uncertainty. Implementing
such complex models from scratch in traditional programming languages is difficult and error-prone,
and hence there have been several efforts to provide a high-level language in which models can be
specified and run. For generative, directed graphical models these include BLOG [5], IBAL [6],
and Church [7]. For conditional, undirected graphical models, these include Relational Markov
Networks (RMNs) using SQL [8], and Markov Logic Networks (MLNs) using first-order logic [9].
Regarding logic, for many years there has been considerable effort in integrating first-order logic and
probability [9, 10, 11, 12, 13]. However, we contend that in many of these proposed combinations,
the ?logic? aspect is not crucial to the ultimate goal of accurate and expressive modeling. The power
of relational factor graphs is in their repeated relational structure and tied parameters. First-order
logic is one way to specify this repeated structure, but it is less than ideal because of its focus on
boolean outcomes and inability to easily and efficiently express relations such as graph reachability
and set size comparison. Logical inference is used in some of these systems, such as P RISM [12],
but in others, such as Markov Logic [9], it is largely replaced by probabilistic inference.
This paper proposes an approach to probabilistic programming that preserves the declarative statistical semantics of factor graphs, while at the same time leveraging imperative constructs (pieces
of procedural programming) to greatly aid both efficiency and natural intuition in specifying model
structure, inference, and learning, as detailed below. Our approach thus supports users in combining
both declarative and procedural knowledge. Rather than first-order logic, model authors have access
to a Turing complete language when writing their model specification. The point, however, is not
merely to have greater formal expressiveness; it is ease-of-use and efficiency.
We term our approach imperatively defined factor graphs (IDFs). Below we develop this approach
in the context of Markov chain Monte Carlo inference, and define four key imperative constructs,
arguing that they provide a natural interface to central operations in factor graph construction and
inference. These imperative constructs (1) define the structure connecting variables and factors,
(2) coordinate variable values, (3) map the variables neighboring a factor to sufficient statistics,
and (4) propose jumps from one possible world to another. A model written as an IDF is a factor
graph, with all the traditional semantics of factors, variables, possible worlds, scores, and partition
functions; we are simply providing an extremely flexible language for their succinct specification,
which also enables efficient inference and learning.
Our first embodiment of the approach is the system we call FACTORIE (loosely named for "Factor
graphs, Imperative, Extensible"; see http://factorie.cs.umass.edu), a software library for the strongly-typed, functional programming language Scala [14]. The choice of Scala stems from key inherent advantages
of the language itself, plus its full interoperability with Java, and recent growing usage in the machine learning community. By providing a library and direct access to a full programming language
(as opposed to our own, new "little language"), the model authors have familiar and extensive resources for implementing the procedural aspects of the design, as well as the ability to beneficially
mix data pre-processing, evaluation, and other book-keeping code in the same files as the probabilistic model specification. Furthermore, FACTORIE is object-oriented in that variables and factor
templates are objects, supporting inheritance, polymophism, composition, and encapsulation.
The contributions of this paper are introducing the novel IDF methodology for specifying factor
graphs, and successfully demonstrating it on a non-trivial task. We present experimental results
applying FACTORIE to the substantial task of joint inference in segmentation and coreference of
research paper citations, surpassing previous state-of-the-art results. In comparison to Markov Logic
(Alchemy) on the same data, we achieve a 20-25% reduction in error, and do so 3-15 times faster.
2
Imperatively Defined Factor Graphs
A factor graph G is a bipartite graph over factors and variables defining a probability distribution
over a set of target variables y, optionally conditioned on observed variables x. A factor $\phi_i$ computes a scalar value over the subset of variables that are its neighbors in the graph. Often this real-valued function is defined as the exponential of the dot product over sufficient statistics $\{f_{ik}(x_i, y_i)\}$
and parameters $\{\theta_{ik}\}$, where $k \in \{1, \dots, K_i\}$ and $K_i$ is the number of parameters for factor $\phi_i$.
Factor graphs often use parameter tying, i.e. the same parameters for several factors. A factor
template $T_j$ consists of parameters $\{\theta_{jk}\}$, sufficient statistic functions $\{f_{jk}\}$, and a description of
an arbitrary relationship between variables, yielding a set of satisfying tuples $\{(x_i, y_i)\}$. For each
of these variable tuples $(x_i, y_i) \in T_j$ that fulfills the relationship, the factor template instantiates
a factor that shares $\{\theta_{jk}\}$ and $\{f_{jk}\}$ with all other instantiations of $T_j$. Let $T$ be the set of factor
templates. In this case the probability distribution is defined:

$$p(\mathbf{y}|\mathbf{x}) = \frac{1}{Z(\mathbf{x})} \prod_{T_j \in T} \ \prod_{(x_i, y_i) \in T_j} \exp\left( \sum_{k=1}^{K_j} \theta_{jk} f_{jk}(x_i, y_i) \right).$$
As in all relational factor graphs, our language supports variables and factor template definitions. In
our case the variables (which can be binary, categorical, ordinal, real, etc.) are typed objects in the
object-oriented language, and can be sub-classed. Relations between variables can be represented
directly as members (instance variables) in these variable objects, rather than as indices into global
tables. In addition we allow for new variable types to be programmed by model authors via polymorphism. For example, the user can easily create new variable types such as a set-valued variable type,
Figure 1: Example of variable classes for a linear chain and a coreference model.
class Token(str:String) extends EnumVariable(str)
class Label(str:String, val token:Token) extends EnumVariable(str) with VarInSeq
class Mention(val string:String) extends PrimitiveVariable[Entity]
class Entity extends SetVariable[Mention] {
  var canonical:String = ""
  def add(m:Mention, d:DiffList) = {
    super.add(m,d); m.set(this,d)
    canonical = recomputeCanonical(members)
  }
  def remove(m:Mention, d:DiffList) = {
    super.remove(m,d); m.set(null,d)
    canonical = recomputeCanonical(members)
  }
}
representing a group of unique values, as well as traits augmenting variables to represent sequences
of elements with left and right neighbors.
Typically, IDF programming consists of two distinct stages: defining the data representation, then
defining the factors for scoring. This separation offers great flexibility. In the first stage the model
author implements infrastructure for storing a possible world: variables, their relations, and values.
Somewhat surprisingly, authors can do this with a mind-set and style they would employ for deterministic programming, including usage of standard data structures such as linked lists, hash tables
and objects embedded in other objects. In some cases authors must provide API functions for "undoing" and "redoing" changes to variables that will be tracked by MCMC, but in most cases such
functionality is already provided by the library?s wide variety of variable object implementations.
For example, in a linear-chain CRF model, a variable containing a word token can be declared as the
Token class shown in Figure 1.[1] A variable for labels can be declared similarly, with the addition
that each Label object[2] has an instance variable that points to its corresponding Token. The second
stage of our linear-chain CRF implementation is described in Section 2.2.
Consider also the task of entity resolution in which we have a set of Mentions to be co-referenced
into Entities. A Mention contains its string form, but its value as a random variable is the Entity
to which it is currently assigned. An Entity is a set-valued variable, namely the set of Mentions assigned
to it; it holds and maintains a canonical string form representative of all its Mentions (see Figure 1[3]). The add/remove methods are explained in Section 2.3.
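As a hypothetical usage sketch of the Figure 1 classes (it assumes, per the description in Section 2.1 below, that a DiffList records every change made through it and that an undo method reverts them all; these names may differ from the library's exact API):

val e1 = new Entity
val e2 = new Entity
val m  = new Mention("EM algorithm")

val d0 = new DiffList
e1.add(m, d0)            // coordination also runs m.set(e1, d0)

val d = new DiffList     // one MCMC proposal: move m from e1 to e2
e1.remove(m, d)
e2.add(m, d)
// ... score the proposal using only the variables recorded on d ...
d.undo                   // reject: m is back in e1, canonical forms restored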
2.1 Inference and Imperative Constraint Preservation
For inference, we rely on MCMC to achieve efficiency with models that not only have large treewidth but also an exponentially sized unrolled network, as is common with complex relational data
[15, 9, 5]. The key is to avoid unrolling the network over multiple hypotheses, and to represent
only one variable-value configuration at a time. As in BLOG [5], MCMC steps can adjust model
structure as necessary, and with each step the FACTORIE library automatically builds a DiffList?a
compact object containing the variables changed by the step, as well as undo and redo capabilities.
Calculating the factor graph's "score" for a step only requires the DiffList variables, their factors, and
neighboring variables, as described in Section 2.4. In fact, unlike BLOG and BLAISE [16], we
build inference and learning entirely on DiffList scores and never need to score the entire model.
This enables efficient reasoning about observed data larger than memory, or models in which the
number of factors is a high-degree polynomial of the number of variables.
A key component of many MCMC inference procedures is the proposal distribution that proposes
changes to the current configuration. This is a natural place for injecting prior knowledge about
coordination of variable values and various structural changes. In fact, in some cases we can avoid
[1] Objects of class EnumVariable hold variables with a value selected from a finite enumerated set.
[2] In Scala, var/val indicates a variable declaration; trait VarInSeq provides methods for obtaining the next and prev labels in a sequence.
[3] In Scala, def indicates a function definition where the value returned is the last line of code in the function; members is the set of variables in the superclass SetVariable.
Figure 2: Examples of FACTORIE factor templates. Some error-checking code is elided for brevity.
val crfTemplate = new TemplateWithDotStatistics3[Label,Label,Token] {
  def unroll1 (label:Label) = Factor(label, label.next, label.token)
  def unroll2 (label:Label) = Factor(label.prev, label, label.prev.token)
  def unroll3 (token:Token) = throw new Error("Token values shouldn't change")
}
val depParseTemplate = new Template1[Node] with DotStatistics2[Word,Word] {
  def unroll1(n:Node) = n.selfAndDescendants
  def statistics(n:Node) = Stat(n.word, closestVerb(n).word)
  def closestVerb(n:Node) = if (isVerb(n.word)) n else closestVerb(n.parent)
}
val corefTemplate = new Template2[Mention,Entity] with DotStatistics1[Bool] {
  def unroll1 (m:Mention) = Factor(m, m.entity)
  def unroll2 (e:Entity) = for (mention <- e.mentions) yield Factor(mention, e)
  def statistics(m:Mention,e:Entity) = Bool(distance(m.string,e.canonical)<0.5)
}
val logicTemplate1 = Forany[Person] { p => p.smokes ==> p.cancer }
val logicTemplate2 = Forany[Person] { p => p.friends.smokes <==> p.smokes }
expensive deterministic factors altogether with property-preserving proposal functions [17]. For example, coreference transitivity can be efficiently enforced by proper initialization and a transitivity-preserving proposal function; projectivity in dependency parsers can be enforced similarly. We term
this imperative constraint preservation. In FACTORIE proposal distributions may be implemented
by the model author. Alternatively, the FACTORIE library provides several default inference methods, including Gibbs sampling, as well as default proposers for many variable classes.
2.2 Imperative Structure Definition
At the heart of model structure definition is the pattern of connectivity between variables and factors,
and the DiffList must have extremely efficient access to this. Unlike BLOG, which uses a complex,
highly-indexed data structure that must be updated during inference, we instead specify this connectivity imperatively: factor template objects have methods (e.g., unroll1, unroll2, etc., one for
each factor argument) that find the factor's other variable neighbors given a single variable from the
DiffList. This is typically accomplished using a simple data structure that is already available as
part of the natural representation of the data, (e.g., as would be used by a non-probabilistic programmer). The unroll method then constructs a Factor with these neighbors as arguments, and returns
it. The unroll method may optionally return multiple Factors in response to a single changed variable. Note that this approach also efficiently supports a model structure that varies conditioned on
variable values, because the unroll methods can examine and perform calculations on these values.
Thus we now have the second stage of FACTORIE programming, in which the model author implements the factor templates that define the factors which score possible worlds. In our linear-chain CRF example, the factor between two successive Labels and a Token might be declared as
crfTemplate in Figure 2. Here unroll1 simply uses the token instance variable of each Label to
find the corresponding third argument to the factor. This simple example does not, however, show the
true expressive power of imperative structure definition. Consider instead a model for dependency
parsing (with similarly defined Word and Node variables). In the same figure, depParseTemplate
defines a template for factors that measure compatibility between a word and its closest verb as
measured through parse tree connectivity. Such arbitrary-depth graph search is awkward in first-order logic, yet it is a simple one-line recursive method in FACTORIE. The statistics method is
described below in Section 2.4.
Consider also the coreference template measuring the compatibility between a Mention and the
canonical representation of its assigned Entity. In response to a moved Mention, unroll1 returns
a factor between the Mention and its newly assigned Entity. In response to a changed Entity,
unroll2 returns a list of factors between itself and all its member Mentions. It is inherent that sometimes
different unroll methods will construct multiple copies of the same factor; they are automatically
deduplicated by the FACTORIE library. Syntactic sugar for extended first-order logic primitives is
also provided, and these can be mixed with imperative constructs; see the bottom of Figure 2 for
two small examples. Specifying templates in FACTORIE can certainly be more verbose when not
restricted to first-order logic; in this case we trade off some brevity for flexibility.
2.3 Imperative Variable Coordination
Variables' value-assignment methods can be overridden to automatically change other variable values in coordination with the assignment, an often-desirable encapsulation of domain knowledge we
term imperative variable coordination. For example, in response to a named entity label change, a
coreference mention can have its string value automatically adjusted, rather than relying on MCMC
inference to stumble upon this self-evident coordination. In Figure 1, Entity does a basic form of
coordination by re-calculating its canonical string representation whenever a Mention is added or
removed from its set.
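A minimal sketch of this pattern, with hypothetical names (the set signature for EnumVariable and the refreshStringForm helper are assumptions, not the library's exact API): overriding the value-assignment method lets a label change coordinate a dependent variable, with both diffs landing on the same DiffList.

class NerLabel(s: String, val mention: Mention) extends EnumVariable(s) {
  override def set(newValue: String, d: DiffList): Unit = {
    super.set(newValue, d)         // the label's own diff
    mention.refreshStringForm(d)   // hypothetical coordinated update,
  }                                // recorded on the same DiffList
}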
The ability to use prior knowledge for imperative variable coordination also allows the designer
to define the feasible region for the sampling. In the proposal function, users make changes by
calling value-assignment functions, and any changes made automatically through coordinating
variables are appended to the DiffList. Since a factor template's contribution to the overall score
will not change unless its neighboring variables have changed, once we know every variable that has
changed we can efficiently score the proposal.
2.4 Imperative Variable-Statistics Mapping
In a somewhat unconventional use of functional mapping, we support a separation between factor
neighbors and sufficient statistics. Neighbors are variables touching the factor whose changes imply
that the factor needs to be re-scored. Sufficient statistics are the minimal set of variable values that
determine the score contribution of the factor. These are usually the same; however, by allowing
a function to perform the mapping, we provide an extremely powerful yet simple way to allow
model designers to represent their data in natural ways, and concern themselves separately with how
to parameterize them. For example, the two neighbors of a skip-edge factor [18] may each have
cardinality equal to the number of named entities types, but we may only care to have the skip-edge
factor enforce whether or not they match. We term this imperative variable-statistics mapping.
Consider corefTemplate in Figure 2: the neighbors of the template are (Mention, Entity) pairs.
However, the sufficient statistic is simply a Boolean based on the "distance" of the unrolled Mention
from the canonical value of the Entity. This allows the template to separate the natural representation of possible worlds from the sufficient statistics needed to score its factors. Note that these
sufficient statistics can be calculated as arbitrary functions of the unrolled Mention and the Entity.
The models described in Section 3 use a number of factors whose sufficient statistics derive from the
domains of its neighbors as well as those with arbitrary feature functions based on their neighbors.
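As a sketch in the style of Figure 2, a skip-edge template might map two Label neighbors, whose domain is the full label set, down to a single Bool sufficient statistic recording agreement; skipPartners is a hypothetical helper (here a placeholder) that would return the Labels of identical capitalized tokens elsewhere in the document.

val skipEdgeTemplate = new Template2[Label,Label] with DotStatistics1[Bool] {
  def unroll1(l1: Label) = for (l2 <- skipPartners(l1)) yield Factor(l1, l2)
  def unroll2(l2: Label) = for (l1 <- skipPartners(l2)) yield Factor(l1, l2)
  // Variable-statistics mapping: many-valued neighbors, one Bool statistic.
  def statistics(l1: Label, l2: Label) = Bool(l1.value == l2.value)
  def skipPartners(l: Label): Seq[Label] = Seq.empty // placeholder
}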
An MCMC proposal is scored as follows. First, a sample is generated from the proposal distribution,
placing an initial set of variables in the DiffList. Next the value-assignment method is called
for each of the variables on the DiffList, and via imperative variable coordination other variables
may be added to the DiffList. Given the set of variables that have changed, FACTORIE iterates
over each one and calls the unroll function for factor templates matching the variable?s type. This
dynamically provides the relevant structure of the graph via imperative structure definition, resulting
in a set of factors that should be re-scored. The neighbors of each returned factor are given to the
template's statistics function, and the sufficient statistics are used to generate the factor's score
using the template's current parameter vector. These scores are summed, producing the final score
for the MCMC step.
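Schematically, the whole procedure can be written as below; the method names (variables, matches, unroll, score) are illustrative stand-ins for the library's internals rather than its exact API.

def scoreProposal(d: DiffList, model: Seq[Template]): Double = {
  val factors = scala.collection.mutable.Set[Factor]() // de-duplicates factors
  for (v <- d.variables; t <- model if t.matches(v))
    factors ++= t.unroll(v)  // imperative structure definition
  // Each factor's score is the dot product of its sufficient statistics
  // (via the template's statistics function) with the template's weights.
  factors.iterator.map(_.score).sum
}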
2.5 Learning
Maximum likelihood parameter estimation traditionally involves computing the gradient; however, for
complex models this can be prohibitively expensive, since it requires inferring marginal distributions over factors. Alternatively, some have proposed online methods, such as the perceptron, which
avoid the need for marginals but still require full decoding, which can also be computationally
expensive. We avoid both of these issues by using sample-rank [19]. This is a parameter estimation
method that learns a ranking over all possible configurations by observing the difference between
scores of proposed MCMC jumps. Parameter changes are made when the model's ranking of a
proposed jump disagrees with a ranking determined by labeled truth. When there is such a disagreement, a perceptron-style update to active parameters is performed by finding all factors whose score
has changed (i.e., factors with a neighbor in the DiffList). The active parameters are indexed by the
sufficient statistics of these factors. Sample-rank is described in detail in [20]. As with inference,
learning is efficient because it uses the DiffList and the imperative constructs described earlier.
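A schematic of one sample-rank step follows; the score-delta functions and weight update are passed in as assumed helpers, and [19, 20] give the precise algorithm.

def sampleRankStep(d: DiffList,
                   modelDelta: DiffList => Double,        // score(after) - score(before)
                   truthDelta: DiffList => Double,        // same delta under labeled truth
                   factorsTouching: DiffList => Seq[Factor],
                   update: (Factor, Double) => Unit): Unit = {
  val (m, t) = (modelDelta(d), truthDelta(d))
  // A perceptron-style step fires only when the model's ranking of the
  // jump disagrees with the ranking given by the labeled truth.
  if (t != 0 && m * t <= 0)
    for (f <- factorsTouching(d))   // factors with a neighbor on the DiffList
      update(f, math.signum(t))     // adjust the active parameters
}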
3 Joint Segmentation and Coreference
Tasks involving multiple information extraction steps are traditionally solved using a pipeline architecture, in which the output predictions of one stage are input to the next stage. This architecture
is susceptible to cascading of errors from one stage to the next. To minimize this error, there has
been significant interest in joint inference over multiple steps of an information processing pipeline
[21, 22, 23]. Full joint inference usually results in exponentially large models for which learning and
inference become intractable. One widely studied joint-inference task in information extraction is
segmentation and coreference of research paper citation strings [21, 23, 24]. This involves segmenting citation strings into author, title and venue fields (segmentation), and clustering the citations
that refer to the same underlying paper entity (coreference). Previous results have shown that joint
inference reduces error [21], and this task provides a good testbed for probabilistic programming.
We now describe an IDF for the task. For more details, see [24].
3.1 Variables and Proposal Distribution
As in the example given in Section 2, a Mention represents a citation and is a random variable that
takes a single Entity as its value. An Entity is a set-valued variable containing Mention variables.
This representation eliminates the need for an explicit transitivity constraint, since a Mention can
hold only one Entity value, and this value is coordinated with the Entity's set-value.
Variables for segmentation consist of Tokens, Labels and Fields. Each Token represents an observed word in a citation. Each Token has a corresponding Label which is an unobserved variable
that can take one of four values: author, title, venue or none. There are three Field variables associated with each Mention, one for each field type (author, venue or title), that store the contiguous
block of Tokens representing the Field; Labels and Fields are coordinated. This alternate representation of segmentation provides flexibility in specifying factor templates over predicted Fields.
The proposal function for coreference randomly selects a Mention, and with probability 0.8 moves
it to a random existing cluster, otherwise to a new singleton cluster. The proposal function for
segmentation selects a random Field and grows or shrinks it by a random amount. When jointly
performing both tasks, one of the proposal functions is randomly selected. The value-assignment
function for the Field ensures that the Labels corresponding to the affected Tokens are correctly
set when a Field is changed. This is an example of imperative variable coordination.
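A sketch of the coreference proposal, using the Figure 1 classes (the entity bookkeeping here is our own simplification): because a Mention holds exactly one Entity, transitivity is preserved by construction.

import scala.collection.mutable.ArrayBuffer
import scala.util.Random

def proposeCoref(mentions: IndexedSeq[Mention],
                 entities: ArrayBuffer[Entity],
                 rng: Random, d: DiffList): Unit = {
  val m = mentions(rng.nextInt(mentions.size))
  // With probability 0.8 move to a random existing cluster,
  // otherwise to a fresh singleton cluster.
  val target =
    if (rng.nextDouble < 0.8 && entities.nonEmpty)
      entities(rng.nextInt(entities.size))
    else { val e = new Entity; entities += e; e }
  if (m.entity ne null) m.entity.remove(m, d)
  target.add(m, d)
}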
3.2 Factor Templates
Segmentation Templates: Segmentation templates examine only Field, Label and Token variables, i.e. not using information from coreference predictions. These factor templates are IDF translations of the Markov logic rules described in [21]. There is a template between every Token and its
Label. Markov dependencies are captured by a template that examines successive Labels as well
as the Token of the earlier Label. The sufficient statistics for these factors are the tuples created
from the neighbors of the factor: e.g., the values of two Labels and one Token. We also have a
factor template examining every Field with features based on the presence of numbers, dates, and
punctuation. This takes advantage of variable-statistics mapping.
Coreference Templates: The isolated coreference factor templates use only Mention variables.
They consist of two factor templates that share the same sufficient statistics, but have separate
weight vectors and different ways of unrolling the graph. An Affinity factor is created for all pairs of
Mentions that are coreferent, while a Repulsion factor is created for all pairs that are not coreferent.
The features of these templates correspond to the SimilarTitle and SimilarVenue first-order features
in [21]. We also add SimilarDate and DissimilarDate features that look at the ?date-like? tokens.
Joint Templates: To allow the tasks to influence each other, factor templates are added that are
unrolled during both segmentation and coreference sampling. Thus these factor templates neighbor
Mentions, Fields, and Labels, and use the segmentation predictions for coreference, and vice versa. We add templates for the JntInfCandidates rule from [21]. We create this factor template
Table 1: Cora coreference and segmentation results
                  Coreference                             Segmentation F1
                  Prec/Recall    F1      Cluster Rec.     Author   Title   Venue   Total
Fellegi-Sunter    78.0/97.7      86.7    62.7             n/a      n/a     n/a     n/a
Isolated MLN      94.3/97.0      95.6    78.1             99.3     97.3    98.2    98.2
Joint MLN         94.3/97.0      95.6    75.2             99.5     97.6    98.3    98.4
Isolated IDF      97.09/95.42    96.22   86.01            99.35    97.63   98.58   98.51
Joint IDF         95.34/98.25    96.71   94.62            99.42    97.99   98.78   98.72
such that (m, m′) are unrolled only if they are in the same Entity. The neighbors include Label
and Mention. Affinity and Repulsion factor templates are also created between pairs of Fields
of the same type; for Affinity the Fields belong to coreferent mention pairs, and for Repulsion they
belong to a pair of mentions that are not coreferent. The features of these templates denote similarity
between field strings, namely: StringMatch, SubString, Prefix/SuffixMatch, TokenIntersectSize, etc.
One notable difference between the JntInfCandidate and joint Affinity/Repulsion templates is the
possible number of instantiations. JntInfCandidates can be calculated during preprocessing as there
are O(nm²) of these (where n is the maximum mention length, and m is the number of mentions).
However, preprocessing the joint Affinity/Repulsion templates is intractable, as the number of such factors is O(m²n⁴). We are able to deal with such a large set of possible factor instantiations due to
the interplay of structure definition, variable-statistics mapping, and on-the-fly feature calculation.
Our model also contains a number of factor templates that cannot be easily captured by first-order
logic. For example consider StringMatch and SubString between two fields. For arbitrary length
strings these features require the model designer to specify convoluted logic rules. The rules are even
less intuitive when considering a feature based on more complex calculations such as StringEditDistance. It is conceivable to preprocess and store all instantiations of these features, but in practice this
is intractable. Thus on-the-fly feature calculation within FACTORIE is employed to remain tractable.
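As a sketch of such on-the-fly computation (Field#phrase is an assumed accessor for the field's token string), the Boolean sufficient statistics for a Field pair can be produced at scoring time:

def fieldFeatures(f1: Field, f2: Field): Seq[Boolean] = {
  val (a, b)   = (f1.phrase, f2.phrase)    // assumed accessors
  val (ta, tb) = (a.split(" ").toSet, b.split(" ").toSet)
  Seq(a == b,                              // StringMatch
      a.contains(b) || b.contains(a),      // SubString
      a.take(4) == b.take(4),              // Prefix match
      ta.intersect(tb).size >= 2)          // TokenIntersectSize threshold
}
// Nothing is pre-instantiated: these bits are produced inside a template's
// statistics function only for the factors a proposal actually touches.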
4 Experimental Results
The joint segmentation and coreference model described above is applied to the Cora dataset [25].[4]
The dataset contains 1295 total mentions in 134 clusters, with a total of 36487 tokens. Isolated training consists of 5 loops of 100,000 samples each, and 300,000 samples for inference. For the joint
task we run training for 5 loops of 250,000 samples each, with 750,000 samples for inference. We
average the results of 10 runs of three-fold cross validation, with the same folds as [21]. Segmentation is evaluated on token precision, recall and F1. For coreference, pairwise coreference decisions
are evaluated. The fraction of clusters that are correctly predicted (cluster recall) is also calculated.
In Table 1, we see both our isolated and joint models outperform the previous state-of-the-art results
of [21] on both tasks. We see a 25.23% error reduction in pairwise coreference F1, and a 20.0%
error reduction of tokenwise segmentation F1 when comparing to the joint MLN. The improvements
of joint over isolated IDF are statistically significant at 1% using the T-test.
The experiments run very quickly, which can be attributed to sample-rank and the application of
variable coordination and structure definition of the models as described earlier. Each of the isolated
tasks finishes initialization, training and evaluation within 3 minutes, while the joint task takes 18
minutes. The running times for the MLNs reported in [21] are between 50 and 90 minutes for learning
and inference. Thus we can see that IDFs provide a significant boost in efficiency by avoiding the
need to unroll or score the entire graph. Note also that the timing result from [21] is for a model that
did not enforce transitivity constraints on the coreference predictions. Adding transitivity constraints
dramatically increases running time [26], whereas the IDF supports transitivity implicitly.
[4] Available at http://alchemy.cs.washington.edu/papers/poon07
5 Related Work
Over the years there have been many efforts to build graphical models toolkits. Many of
them are useful as teaching aids, such as the Bayes Net Toolbox and Probabilistic Modeling
Toolkit (PMTK)[27] (both in Matlab), but do not scale up to substantial real problems.
There has been growing interest in building systems that can perform as workhorses, doing real
work on large data. For example, Infer.NET (CSoft) [28] is intended to be deployed in a number of Microsoft products, and has been applied to problems in computer vision. Like IDFs it is
embedded in a pre-existing programming language, rather than embodying its own new "little language," and its users have commented positively about this facet. Unlike IDFs it is designed for
message-passing inference, and must unroll the graphical model before inference, creating factors
to represent all possible worlds, which makes it unsuitable for our applications. The very recent
language Figaro [29] is also implemented as a library. Like FACTORIE it is implemented in Scala,
and provides an object-oriented framework for models; unlike FACTORIE it tightly intertwines data
representation and scoring, and it is not designed for changing model structure during inference; it
also does not yet support learning.
BLOG [5] and some of its derivatives can also scale to substantial data sets, and, like IDFs, are
designed for graphical models that cannot be fully unrolled. Unlike IDFs, BLOG, as well as IBAL
[6] and Church [7], are designed for generative models, though Church can also represent conditional, undirected models. We are most interested in supporting advanced discriminative models of
the type that have been successful for natural language processing, computer vision, bioinformatics,
and elsewhere. Note that FACTORIE also supports generative models; for example latent Dirichlet
allocation can be coded in about 15 lines.
Two systems focusing on discriminatively trained relational models are relational Markov networks
(RMNs) [8], and Markov logic networks (MLNs, with Alchemy as its most popular implementation).
To define repeated relational structure and parameter tying, both use declarative languages: RMNs
use SQL and MLNs use first-order logic. By contrast, as discussed above, IDFs are in essence an
experiment in taking an imperative approach.
There has, however, been both historical and recently growing interest in using imperative programming languages for defining learning systems and probabilistic models. For example, work on theory refinement [30] viewed domain theories as "statements in a procedural programming language,
rather than the common view of a domain theory being a collection of declarative Prolog statements." More recently, IBAL [6] and Church [7] are both fundamentally programs that describe the
generative storyline for the data. IDFs, of course, share the combination of imperative programming
with probabilistic modeling, but IDFs have their semantics defined by undirected factor graphs, and
are typically discriminatively trained.
6 Conclusion
In this paper we have described imperatively defined factor graphs (IDFs), a framework to support
efficient learning and inference in large factor graphs of changing structure. We preserve the traditional, declarative, statistical semantics of factor graphs while allowing imperative definitions of the
model structure and operation. This allows model authors to combine both declarative and procedural domain knowledge, while also obtaining significantly more efficient inference and learning than
declarative approaches. We have shown state-of-the-art results in citation matching that highlight
the advantages afforded by IDFs for both accuracy and speed.
Acknowledgments
This work was supported in part by NSF medium IIS-0803847; the Central Intelligence Agency,
the National Security Agency and National Science Foundation under NSF grant IIS-0326249;
SRI International subcontract #27-001338 and ARFL prime contract #FA8750-09-C-0181; Army
prime contract number W911NF-07-1-0216 and University of Pennsylvania subaward number 103548106. Any opinions, findings and conclusions or recommendations expressed in this material are
the authors' and do not necessarily reflect those of the sponsor.
References
[1] John D. Lafferty, Andrew McCallum, and Fernando Pereira. Conditional random fields: Probabilistic
models for segmenting and labeling sequence data. In Int Conf on Machine Learning (ICML), 2001.
[2] Charles Sutton and Andrew McCallum. An introduction to conditional random fields for relational learning. In Introduction to Statistical Relational Learning. 2007.
[3] A. Bernal, K. Crammer, A. Hatzigeorgiou, and F. Pereira. Global discriminative learning for higher-accuracy computational gene prediction. PLoS Computational Biology, 2007.
[4] A. Quottoni, M. Collins, and T. Darrell. Conditional random fields for object recognition. In NIPS, 2004.
[5] Brian Milch. Probabilistic Models with Unknown Objects. PhD thesis, University of California, Berkeley,
2006.
[6] Avi Pfeffer. IBAL: A probabilistic rational programming language. In IJCAI, pages 733-740, 2001.
[7] Noah D. Goodman, Vikash K. Mansinghka, Daniel Roy, Keith Bonawitz, and Joshua B. Tenenbaum.
Church: a language for generative models. In Uncertainty in Artificial Intelligence (UAI), 2008.
[8] Ben Taskar, Abbeel Pieter, and Daphne Koller. Discriminative probabilistic models for relational data. In
Uncertainty in Artificial Intelligence (UAI), 2002.
[9] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1-2), 2006.
[10] David Poole. Probabilistic horn abduction and bayesian networks. Artificial Intelligence, 64, 1993.
[11] Stephen Muggleton and Luc DeRaedt. Inductive logic programming theory and methods. In Journal of
Logic Programming, 1994.
[12] Taisuke Sato and Yoshitaka Kameya. PRISM: a language for symbolic-statistical modeling. In International Joint Conference on Artificial Intelligence (IJCAI), 1997.
[13] Luc De Raedt and Kristian Kersting. Probabilistic logic learning. SIGKDD Explorations: MultiRelational Data Mining, 2003.
[14] Martin Odersky. An Overview of the Scala Programming Language (second edition). Technical Report
IC/2006/001, EPFL Lausanne, Switzerland, 2006.
[15] Aron Culotta and Andrew McCallum. Tractable learning and inference with high-order representations.
In ICML WS on Open Problems in Statistical Relational Learning, 2006.
[16] Keith A. Bonawitz. Composable Probabilistic Inference with Blaise. PhD thesis, MIT, 2008.
[17] Aron Culotta. Learning and inference in weighted logic with application to natural language processing.
PhD thesis, University of Massachusetts, 2008.
[18] Charles Sutton and Andrew McCallum. Collective segmentation and labeling of distant entities in information extraction. Technical Report TR#04-49, University of Massachusetts, July 2004.
[19] Aron Culotta, Michael Wick, and Andrew McCallum. First-order probabilistic models for coreference
resolution. In NAACL: Human Language Technologies (NAACL/HLT), 2007.
[20] Khashayar Rohanimanesh, Michael Wick, and Andrew McCallum. Inference and learning in large factor
graphs with a rank based objective. Technical Report UM-CS-2009-08, University of Massachusetts,
Amherst, 2009.
[21] Hoifung Poon and Pedro Domingos. Joint inference in information extraction. In AAAI, 2007.
[22] Vasin Punyakanok, Dan Roth, and Wen-tau Yih. The necessity of syntactic parsing for semantic role
labeling. In International Joint Conf on Artificial Intelligence (IJCAI), pages 1117-1123, 2005.
[23] Ben Wellner, Andrew McCallum, Fuchun Peng, and Michael Hay. An integrated, conditional model of
information extraction and coreference with application to citation matching. In AUAI, 2004.
[24] Sameer Singh, Karl Schultz, and Andrew McCallum. Bi-directional joint inference for entity resolution
and segmentation using imperatively-defined factor graphs. In ECML PKDD, pages 414-429, 2009.
[25] Andrew McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. A machine learning approach to
building domain-specific search engines. In Int Joint Conf on Artificial Intelligence (IJCAI), 1999.
[26] Hoifung Poon, Pedro Domingos, and Marc Sumner. A general method for reducing the complexity of
relational inference and its application to MCMC. In AAAI, 2008.
[27] Kevin Murphy and Matt Dunham. PMTK: Probabilistic modeling toolkit. In Neural Information Processing Systems (NIPS) Workshop on Probabilistic Programming, 2008.
[28] John Winn and Tom Minka. Infer.NET/CSoft, 2008. http://research.microsoft.com/mlp/ml/Infer/Csoft.htm.
[29] Avi Pfeffer. Figaro: An Object-Oriented Probabilistic Programming Language. Technical report, Charles
River Analytics, 2009.
[30] Richard Maclin and Jude W. Shavlik. Creating advice-taking reinforcement learners. Machine Learning,
22, 1996.
9
| 3654 |@word sri:1 polynomial:1 open:1 pieter:1 mention:36 tr:1 yih:1 reduction:3 necessity:1 configuration:3 contains:3 uma:2 score:15 initial:1 daniel:1 prefix:1 fa8750:1 existing:2 current:2 comparing:1 com:1 yet:3 written:1 must:4 parsing:2 john:2 distant:1 partition:1 enables:2 remove:3 designed:4 update:1 hash:1 generative:5 selected:2 intelligence:7 mln:3 mccallum:11 infrastructure:1 provides:6 iterates:1 node:5 successive:1 daphne:1 direct:1 become:2 ik:1 consists:3 advocate:1 prev:3 combine:1 dan:1 pairwise:2 peng:1 themselves:1 examine:2 growing:3 pkdd:1 relying:1 alchemy:3 automatically:5 little:2 str:4 cardinality:1 increasing:2 unrolling:2 provided:2 considering:1 underlying:1 medium:1 null:1 tying:2 string:14 finding:3 unobserved:1 berkeley:1 every:3 firstorder:1 auai:1 prohibitively:1 um:1 grant:1 producing:1 segmenting:2 before:1 referenced:1 timing:1 api:1 sutton:2 toolkits:2 might:1 plus:1 initialization:2 studied:1 dynamically:1 specifying:4 lausanne:1 co:1 ease:2 programmed:1 analytics:1 bi:1 statistically:1 directed:1 unique:1 acknowledgment:1 arguing:1 horn:1 hoifung:2 recursive:1 block:1 implement:2 practice:1 figaro:2 procedure:1 empirical:1 java:1 significantly:1 matching:3 viceversa:1 pre:2 integrating:1 word:9 symbolic:1 cannot:2 context:2 applying:1 writing:1 influence:1 milch:1 map:1 deterministic:2 roth:1 primitive:1 sumner:1 resolution:3 blaise:2 fik:1 rule:4 examines:1 cascading:1 m2:1 coordinate:1 traditionally:2 updated:1 construction:2 target:1 parser:1 user:5 punyakanok:1 programming:20 us:3 hypothesis:1 domingo:3 element:1 roy:1 satisfying:1 jk:3 expensive:3 rec:1 recognition:1 labeled:1 pfeffer:2 observed:3 bottom:1 taskar:1 fly:2 role:1 solved:1 parameterize:1 region:1 ensures:1 connected:1 culotta:3 plo:1 trade:1 removed:1 substantial:3 intuition:1 projectivity:1 agency:2 complexity:1 sugar:1 trained:4 singh:2 coreference:24 upon:1 bipartite:1 efficiency:5 learner:1 easily:3 joint:23 htm:1 various:2 represented:1 distinct:1 describe:2 monte:1 artificial:6 labeling:3 avi:2 outcome:1 kevin:1 whose:4 larger:1 valued:3 widely:1 rennie:1 otherwise:1 ability:2 statistic:21 richardson:1 syntactic:2 jointly:1 itself:2 final:1 online:1 interplay:1 advantage:3 sequence:3 net:3 propose:1 product:2 neighboring:3 relevant:1 combining:2 loop:2 date:2 poon:2 flexibility:3 achieve:2 description:1 moved:1 intuitive:1 beneficially:1 convoluted:1 vasin:1 parent:1 cluster:6 darrell:1 ijcai:4 bernal:1 ben:2 object:17 derive:1 andrew:10 develop:1 stat:1 augmenting:1 friend:1 measured:1 mansinghka:1 keith:2 throw:1 implemented:4 c:4 reachability:1 treewidth:1 skip:2 involves:2 predicted:2 switzerland:1 functionality:1 exploration:1 human:1 programmer:1 opinion:1 material:1 implementing:2 require:1 polymorphism:1 f1:5 abbeel:1 brian:1 enumerated:1 adjusted:1 fellegi:1 hold:3 ic:1 exp:1 great:1 mapping:6 matthew:1 m0:1 mlns:4 interoperability:1 estimation:2 injecting:1 label:29 currently:1 coordination:10 title:4 create:2 successfully:2 tool:1 weighted:1 cora:2 mit:1 super:2 rather:6 avoid:3 kersting:1 focus:1 improvement:1 rank:4 indicates:2 likelihood:1 abduction:1 greatly:1 contrast:1 sigkdd:1 inference:38 repulsion:5 epfl:1 typically:3 entire:2 integrated:1 maclin:1 w:1 relation:3 koller:1 selects:2 semantics:5 interested:1 compatibility:2 issue:2 among:1 flexible:2 overall:1 proposes:2 art:4 summed:1 marginal:1 field:22 construct:7 never:1 once:1 equal:1 sampling:3 extraction:5 washington:1 placing:1 represents:2 look:1 icml:2 biology:1 kamal:1 others:1 report:4 
fundamentally:1 inherent:2 employ:1 wen:1 richard:1 oriented:5 randomly:2 preserve:2 tightly:1 national:2 murphy:1 familiar:1 replaced:1 intended:1 microsoft:2 interest:6 mlp:1 highly:1 mining:1 evaluation:2 adjust:1 certainly:1 punctuation:1 yielding:1 tj:5 chain:5 accurate:1 edge:2 necessary:1 unless:1 tree:2 indexed:2 loosely:1 re:3 isolated:7 minimal:1 instance:3 modeling:5 boolean:2 earlier:3 facet:1 contiguous:1 extensible:1 w911nf:1 measuring:1 raedt:1 assignment:5 introducing:1 addressing:1 imperative:24 subset:1 examining:1 successful:1 reported:1 encapsulation:2 dependency:4 varies:1 person:2 venue:4 international:3 amherst:3 river:1 probabilistic:20 off:1 contract:2 decoding:1 michael:3 connecting:1 quickly:1 connectivity:3 thesis:3 central:2 reflect:1 aaai:2 opposed:1 containing:3 messaging:1 conf:3 book:1 creating:2 derivative:1 style:2 return:4 prolog:1 singleton:1 de:1 redoing:1 int:2 verbose:1 coordinated:2 notable:1 ranking:3 piece:1 performed:1 view:1 aron:3 jason:1 linked:1 observing:1 doing:1 bayes:1 maintains:1 capability:1 multirelational:1 contribution:3 appended:1 minimize:1 coreferent:4 accuracy:1 largely:1 efficiently:4 yield:1 correspond:1 preprocess:1 directional:1 bayesian:1 none:1 carlo:1 substring:2 intertwines:1 ibal:4 whenever:1 hlt:1 definition:10 typed:3 minka:1 associated:1 attributed:1 gain:1 newly:1 dataset:2 rational:1 massachusetts:4 popular:1 logical:1 recall:3 knowledge:6 segmentation:19 factorie:19 specify:3 methodology:1 response:4 scala:6 awkward:1 evaluated:2 rmns:3 strongly:2 shrink:1 furthermore:1 though:1 stage:7 parse:2 expressive:2 smoke:3 defines:1 sunter:1 grows:1 usage:2 building:2 naacl:2 matt:1 true:1 unroll:7 hence:1 assigned:4 inductive:1 semantic:1 deal:1 during:5 transitivity:5 self:1 essence:1 subcontract:1 evident:1 complete:1 crf:3 workhorse:1 csoft:3 interface:1 reasoning:1 novel:1 recently:2 charles:3 common:2 functional:3 tracked:1 overview:1 exponentially:2 belong:2 discussed:1 surpassing:1 trait:2 marginals:1 significant:4 composition:1 refer:1 gibbs:1 grid:1 similarly:3 teaching:1 arfl:1 language:28 had:1 dot:1 bool:2 access:3 sql:3 specification:3 similarity:1 toolkit:2 etc:3 add:5 seymore:1 closest:1 own:2 recent:2 touching:1 prime:2 store:2 hay:1 blog:6 success:1 binary:1 prism:1 yi:5 accomplished:1 scoring:2 joshua:1 preserving:1 captured:2 greater:1 somewhat:2 care:1 employed:2 undoing:1 determine:1 fernando:1 focussing:1 july:1 preservation:2 ii:2 full:4 mix:2 sameer:3 infer:3 stem:1 multiple:5 desirable:1 reduces:1 faster:2 match:1 calculation:4 offer:1 cross:1 muggleton:1 technical:4 coded:1 sponsor:1 prediction:5 involving:1 basic:1 vision:2 jude:1 represent:5 sometimes:1 robotics:1 proposal:12 addition:2 whereas:1 separately:1 winn:1 else:1 crucial:1 goodman:1 eliminates:1 unlike:5 file:1 undo:1 undirected:5 member:5 leveraging:1 lafferty:1 call:3 structural:1 presence:1 ideal:1 variety:1 finish:1 architecture:2 pennsylvania:1 regarding:1 vikash:1 whether:1 ultimate:1 wellner:1 effort:3 redo:1 returned:2 passing:1 matlab:1 dramatically:1 useful:1 detailed:1 amount:1 tenenbaum:1 succesive:1 embodying:1 http:3 generate:1 outperform:1 canonical:8 nsf:2 designer:3 coordinating:1 correctly:2 wick:2 affected:1 express:2 group:1 key:4 four:2 procedural:6 demonstrating:1 commented:1 achieving:1 changing:2 graph:29 merely:1 fraction:1 year:2 shouldn:1 enforced:2 run:4 turing:1 powerful:2 uncertainty:3 named:3 extends:4 place:1 separation:2 decision:1 proposer:1 capturing:1 ki:2 def:12 entirely:1 fold:2 sato:1 noah:1 
constraint:5 idf:20 afforded:1 software:1 calling:1 declared:3 aspect:3 speed:1 argument:3 extremely:3 performing:1 relatively:1 martin:1 department:1 alternate:1 combination:2 instantiates:1 across:1 remain:1 n4:1 explained:1 restricted:1 heart:1 pipeline:2 computationally:1 resource:1 needed:1 ordinal:1 mind:1 know:1 unconventional:1 tractable:2 available:2 operation:3 enforce:2 disagreement:1 prec:1 altogether:1 clustering:1 include:3 running:2 dirichlet:1 graphical:7 calculating:2 unsuitable:1 build:3 move:1 objective:1 already:2 added:3 traditional:4 affinity:6 gradient:1 conceivable:1 distance:2 separate:2 entity:26 khashayar:1 trivial:1 declarative:10 code:3 length:2 index:1 relationship:2 providing:2 unrolled:6 optionally:2 difficult:1 susceptible:1 statement:2 classed:1 design:1 implementation:3 proper:1 collective:1 unknown:1 contend:1 perform:3 allowing:2 markov:11 finite:1 tom:1 ecml:1 supporting:2 defining:4 relational:16 extended:1 arbitrary:6 verb:1 community:1 expressiveness:1 david:1 pair:6 namely:1 specified:1 extensive:1 toolbox:1 security:1 california:1 engine:1 testbed:1 nm2:1 boost:1 nip:2 able:1 poole:1 below:3 pattern:2 usually:2 stephen:1 program:1 including:2 memory:1 tau:1 power:3 natural:9 rely:1 advanced:1 representing:2 technology:1 library:7 imply:1 realvalued:1 created:4 church:5 categorical:1 fjk:3 kj:1 prior:2 inheritance:1 checking:1 val:8 disagrees:1 embedded:2 fully:2 stumble:1 discriminatively:4 highlight:1 mixed:1 allocation:1 var:2 composable:1 validation:1 foundation:1 degree:1 sufficient:13 storing:1 share:3 translation:1 karl:2 prone:1 course:1 changed:8 token:26 surprisingly:1 last:1 keeping:1 cancer:1 copy:1 elsewhere:1 supported:1 formal:1 allow:4 perceptron:2 shavlik:1 wide:2 template:39 neighbor:15 taking:2 embodiment:1 default:2 world:6 depth:1 calculated:3 computes:1 avoids:1 author:14 made:2 jump:3 preprocessing:2 refinement:1 schultz:2 historical:1 collection:1 reinforcement:1 citation:8 compact:1 implicitly:1 logic:24 gene:1 ml:1 global:2 active:2 instantiation:4 uai:2 tuples:3 xi:5 discriminative:3 alternatively:2 search:2 latent:1 table:4 storyline:1 bonawitz:2 scratch:1 rohanimanesh:1 obtaining:2 nigam:1 complex:7 necessarily:1 domain:7 marc:1 did:1 fuchun:1 scored:3 edition:1 succinct:1 repeated:5 positively:1 advice:1 representative:1 deployed:1 aid:2 precision:1 sub:1 pereira:2 explicit:1 exponential:1 tied:2 third:1 learns:1 minute:3 specific:1 list:2 imperatively:7 concern:1 intractable:3 consist:2 workshop:1 adding:1 phd:3 conditioned:2 simply:3 army:1 expressed:1 scalar:1 recommendation:1 kristian:1 pedro:3 truth:1 ma:1 declaration:1 conditional:7 superclass:1 identity:1 goal:1 sized:1 viewed:1 luc:2 considerable:1 change:12 feasible:1 determined:1 reducing:2 called:1 total:3 experimental:3 support:8 inability:1 fulfills:1 brevity:2 bioinformatics:2 crammer:1 collins:1 mcmc:9 avoiding:1 subaward:1 |
A Bayesian Model for Simultaneous Image
Clustering, Annotation and Object Segmentation
Lan Du, Lu Ren, David B. Dunson^1 and Lawrence Carin
Department of Electrical and Computer Engineering
^1 Statistics Department
Duke University
Durham, NC 27708-0291, USA
{ld53, lr, lcarin}@ee.duke.edu, [email protected]
Abstract
A non-parametric Bayesian model is proposed for processing multiple images.
The analysis employs image features and, when present, the words associated
with accompanying annotations. The model clusters the images into classes, and
each image is segmented into a set of objects, also allowing the opportunity to
assign a word to each object (localized labeling). Each object is assumed to be
represented as a heterogeneous mix of components, with this realized via mixture
models linking image features to object types. The number of image classes, number of object types, and the characteristics of the object-feature mixture models
are inferred nonparametrically. To constitute spatially contiguous objects, a new
logistic stick-breaking process is developed. Inference is performed efficiently
via variational Bayesian analysis, with example results presented on two image
databases.
1 Introduction
There has recently been much interest in developing statistical models for analyzing and organizing images, based on image features and, when available, auxiliary information, such as words
(e.g., annotations). Three important aspects of this problem are: (i) sorting multiple images
into scene-level classes, (ii) image annotation, and (iii) segmenting and labeling localized objects
within images. Probabilistic topic models, originally developed for text analysis [8, 12], have been
adapted and extended successfully for many image-understanding problems [3, 6, 9-11, 16, 23, 24].
Moreover, recent work has also used the Dirichlet process (DP) [5] or similar non-parametric priors to enhance the topic-model structure [2, 20, 26]. Using such statistical models, researchers
[2, 3, 6, 10, 16, 20, 23, 24, 26] have addressed two or all three of the objectives simultaneously
within a single setting. Such unified formalisms have realized marked improvements in overall algorithm performance. A relatively complete summary of the literature may be found in [16, 23],
where the advantages of the approaches in [16, 23] are described relative to previous related approaches [3, 6, 10, 11, 18, 24, 27]. The work in [16, 23] is based on the correspondence LDA
(Corr-LDA) model [6]. The approach in [23] integrates the Corr-LDA model and the supervised
LDA (sLDA) model [7] into a single framework. Although good classification performance was
achieved using this approach, the model is employed in a supervised manner, utilizing scene-labeled
images for scene classification. A class label variable is introduced in [16] to cluster all images in
an unsupervised manner, and a switching variable to address noisy annotations. Nevertheless, to
improve performance, in [16] some images are required for supervised learning, based on the segmented and labeled objects obtained via the method proposed in [10], with these used to initialize
the algorithm.
The research reported here seeks to build upon and extend recent research on unified image-analysis
models. Specifically, motivated by [16, 23], we develop a novel non-parametric Bayesian model
that simultaneously addresses all three objectives discussed above. The four main contributions of
this paper are:
- Each object in an image is represented as a mixture of image-feature model parameters, accounting for the heterogeneous character of individual objects. This framework captures the idea that a particular object may be composed as an aggregation of distinct parts. By contrast, each object is only associated with one image-feature component/atom in the Corr-LDA-like models [6, 16, 23].
- Multiple images are processed jointly; all, none or a subset of the images may be annotated. The model infers the linkage between image-feature parameters and object types, with this linkage used to yield localized labeling of objects within all images. The unsupervised framework is executed without the need for a human to constitute training data.
- A novel logistic stick-breaking process (LSBP) is proposed, imposing the belief that proximate portions of an image are more likely to reside within the same segment (object). This spatially constrained prior yields contiguous objects with sharp boundaries, and via the aforementioned mixture models the segmented objects may be composed of heterogeneous building blocks.
- The proposed model is nonparametric, based on use of stick-breaking constructions [13], which can be easily implemented by fast variational Bayesian (VB) inference [14]. The number of image classes, number of object types, number of image-feature mixture components per object, and the linkage between words and image model parameters are inferred nonparametrically.
2 The Hierarchical Generative Model
2.1 Bag of image features
We jointly process data from M images, and each image is assumed to come from an associated
class type (e.g., city scene, beach scene, office scene, etc.). The class type associated with image m
is denoted by z_m \in \{1, . . . , I\}, and it is drawn from the mixture model
z_m \sim \sum_{i=1}^{I} u_i \delta_i , \qquad u \sim \mathrm{Stick}_I(\alpha_u) \qquad (1)
where \mathrm{Stick}_I(\alpha_u) is a stick-breaking process [13] that is truncated to I sticks, with hyper-parameter \alpha_u > 0. The symbol \delta_i represents a unit measure at the integer i, and the parameter u_i denotes the probability that image type i will be observed across the M images.
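For concreteness, a minimal sketch of a truncated stick-breaking draw and the scene-class assignment in (1) is given below. This is an illustrative reading of the model, not the authors' implementation; the variable names and the use of NumPy are assumptions.

```python
import numpy as np

def stick_breaking(alpha, num_sticks, rng):
    # V_i ~ Beta(1, alpha); u_i = V_i * prod_{j<i} (1 - V_j).
    v = rng.beta(1.0, alpha, size=num_sticks)
    v[-1] = 1.0  # truncation: the last stick absorbs the remaining mass
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

rng = np.random.default_rng(0)
u = stick_breaking(alpha=1.0, num_sticks=20, rng=rng)  # Stick_I(alpha_u) with I = 20
z_m = rng.choice(len(u), p=u)                          # z_m ~ sum_i u_i delta_i, Eq. (1)
```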
The observed data are image feature vectors, each tied to a local region in the image (for example, associated with an over-segmented portion of the image). The L_m observed image feature vectors associated with image m are \{x_{ml}\}_{l=1}^{L_m}, and the l-th feature vector is assumed drawn x_{ml} \sim F(\theta_{ml}). The expression F(\cdot) represents the feature model, and \theta_{ml} represents the model parameters.
Each image is assumed to be composed of a set of latent objects. An indicator variable \zeta_{ml} defines which object type the l-th feature vector from image m is associated with, and it is drawn

\zeta_{ml} \sim \sum_{k=1}^{K} w_{z_m k} \delta_k , \qquad w_i \sim \mathrm{Stick}_K(\alpha_w) \qquad (2)

where index k corresponds to the k-th type of object that may reside within an image. The vector w_i defines the probability that each of the K object types will occur, conditioned on the image type i \in \{1, . . . , I\}; the k-th component of w_{z_m}, w_{z_m k}, denotes the probability of observing object type k in image m, when image m was drawn from class z_m \in \{1, . . . , I\}.
The image class z_m and corresponding objects \{\zeta_{ml}\}_{l=1}^{L_m} associated with image m are latent variables. The generative process for the observed data, \{x_{ml}\}_{l=1}^{L_m}, is manifested via mixture models with respect to model parameter \theta. Specifically, a separate such mixture model is manifested for each of the K object types, motivated by the idea that each object will in general be composed of a different set of image-feature building blocks. The mixture model for object type k \in \{1, . . . , K\} is represented as

G_k = \sum_{j=1}^{J} h_{kj} \delta_{\theta^*_j} , \qquad h_k \sim \mathrm{Stick}_J(\alpha_h) , \qquad \theta^*_j \sim H \qquad (3)

where H is a base measure, usually selected to be conjugate to F(\cdot).
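To make (3) concrete, the hedged sketch below draws the atoms \theta^*_j once (they are shared across all K object mixtures) together with per-object weights h_k, and then samples from one G_k. Using a symmetric Dirichlet for H is an illustrative assumption, as are the names and the hyper-parameter values.

```python
import numpy as np

def stick_weights(alpha, num_sticks, rng):
    v = rng.beta(1.0, alpha, size=num_sticks)
    v[-1] = 1.0  # truncation
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

rng = np.random.default_rng(2)
J, K, vocab_size = 50, 40, 64
atoms = rng.dirichlet(np.ones(vocab_size), size=J)            # shared atoms theta*_j ~ H
h = np.stack([stick_weights(1.0, J, rng) for _ in range(K)])  # h_k ~ Stick_J(alpha_h)

# A draw from G_k picks a shared atom with probability h[k, j]:
k = 3
theta = atoms[rng.choice(J, p=h[k])]
```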
2.2 Bag of clustered image features
While the model described above is straightforward to understand, it has been found to be ineffective. This is because each of the \zeta_{ml} is drawn i.i.d. from \sum_{k=1}^{K} w_{z_m k} \delta_k, and therefore there is nothing in the model that encourages the image features, x_{ml} and x_{ml'}, which are associated with the same image-feature atom \theta^*_j, to be assigned to the same object k.
To address this limitation, we add a clustering step within each of the images; this is similar to
the structure of the hierarchical Dirichlet process (HDP) [21]. Specifically, consider the following
augmented model:
x_{ml} \sim F(\theta_{ml}) , \quad \theta_{ml} \sim G_{c_{ml}} , \quad c_{ml} \sim \sum_{t=1}^{T} v_{mt} \delta_{\zeta_{mt}} , \quad \zeta_{mt} \sim \sum_{k=1}^{K} w_{z_m k} \delta_k , \quad z_m \sim \sum_{i=1}^{I} u_i \delta_i \qquad (4)
where v_m \sim \mathrm{Stick}_T(\alpha_v), and G_k is as defined in (3). We make truncation level T < K, to
encourage a relatively small number of objects in a given image.
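A hedged end-to-end sketch of the augmented generative process (4) for a single image follows. It reuses the truncated stick-breaking construction shown earlier; the multinomial feature model F, the base measure, and all dimensions and names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def stick_weights(alpha, num_sticks, rng):
    v = rng.beta(1.0, alpha, size=num_sticks)
    v[-1] = 1.0
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))

rng = np.random.default_rng(3)
I, K, T, J, vocab, L_m = 20, 40, 10, 50, 64, 800

u = stick_weights(1.0, I, rng)                                # scene-class weights
w = np.stack([stick_weights(1.0, K, rng) for _ in range(I)])  # w_i over object types
atoms = rng.dirichlet(np.ones(vocab), size=J)                 # shared atoms theta*_j
h = np.stack([stick_weights(1.0, J, rng) for _ in range(K)])  # per-object mixtures G_k

z_m = rng.choice(I, p=u)                      # image class
v_m = stick_weights(1.0, T, rng)              # within-image cluster weights (T < K)
zeta_m = rng.choice(K, p=w[z_m], size=T)      # object type of each of the T clusters

features = []
for l in range(L_m):                          # one feature vector per local region
    c_ml = rng.choice(T, p=v_m)               # cluster indicator c_ml
    k = zeta_m[c_ml]                          # object type zeta_{m, c_ml}
    theta_ml = atoms[rng.choice(J, p=h[k])]   # theta_ml ~ G_{c_ml}
    features.append(rng.multinomial(1, theta_ml))  # x_ml ~ F(theta_ml)
```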
2.3 Linking words with images
In the above discussion it was assumed that the only observed data are the image feature vectors
\{x_{ml}\}_{l=1}^{L_m}. However, there are situations for which annotations (words) may be available for at
least a subset of the M images. In this setting we assume that we have a K-dimensional dictionary
of words associated with objects in images, and a word is assigned to each of the objects k \in
{1, . . . , K}. Of the collection of M images, some may be annotated and some not, and all will
be processed simultaneously by the joint model; in so doing, annotations will be inferred for the
originally non-annotated images.
For an image for which no annotation is given, the image is assumed generated via (4). When an annotation is available, the words associated with image m are represented as a vector y_m = [y_{m1}, . . . , y_{mK}]^T, where y_{mk} denotes the number of times word k is present in the annotation to image m (typically y_{mk} will either be one or zero), and y_m is assumed drawn from a multinomial distribution associated with a parameter \pi_m: y_m \sim \mathrm{Mult}(\pi_m). If image m is in class z_m, then we simply set

y_m \sim \mathrm{Mult}(w_{z_m}) , \qquad w_i \sim \mathrm{Stick}_K(\alpha_w) \qquad (5)

Namely, \pi_m = w_{z_m}, recalling that w_i defines the probability of observing each object type for image class i. When a dictionary of K words is available, we generally use w_i \sim \mathrm{Dir}(\alpha_w/K, . . . , \alpha_w/K), consistent with LDA [8].
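Continuing the sketch, under (5) the annotation counts for an annotated image reduce to a single multinomial draw. The number of annotation words per image below is an arbitrary illustrative choice; K = 14 matches the MSRC word list described in Section 5.

```python
import numpy as np

rng = np.random.default_rng(4)
K = 14                                          # dictionary size (e.g., the MSRC words)
alpha_w = 1.0
w_zm = rng.dirichlet(np.full(K, alpha_w / K))   # w_i ~ Dir(alpha_w/K, ..., alpha_w/K)
y_m = rng.multinomial(3, w_zm)                  # y_m ~ Mult(w_{z_m}); 3 words assumed
```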
3 Encouraging Spatially Contiguous Objects
3.1 Logistic stick-breaking process (LSBP)
In (5), note that once the image class z_m is drawn for image m, the order/location of the x_{ml} within the image may be interchanged, and nothing in the generative process will change. This is because the indicator variable c_{ml}, which defines the object class associated with feature vector l in image m, is drawn i.i.d. c_{ml} \sim \sum_{t=1}^{T} v_{mt} \delta_{\zeta_{mt}}. It is therefore desirable to impose that if two feature vectors
are proximate within the image, they are likely to be associated with the same object.
With each feature vector x_{ml} there is an associated spatial location, which we denote s_{ml} (this is a
two-dimensional vector). We wish to draw
c_{ml} \sim \sum_{t=1}^{T} v_{mt}(s_{ml}) \delta_{\zeta_{mt}} , \qquad \zeta_{mt} \sim \sum_{k=1}^{K} w_{z_m k} \delta_k \qquad (6)

where the cluster probabilities v_{mt}(s_{ml}) are now a function of position s_{ml} (the \zeta_{mt} \in \{1, . . . , K\}
correspond to object types). The challenge, therefore, becomes development of a means of constructing v_{mt}(s) to encourage nearby feature vectors to come from the same object type. Toward this goal, let \sigma[g_{mt}(s)] represent a logistic link function, which is a function of s. For t = 1, . . . , T - 1 we impose

v_{mt}(s) = \sigma[g_{mt}(s)] \prod_{\tau=1}^{t-1} \{1 - \sigma[g_{m\tau}(s)]\} \qquad (7)

where v_{mT}(s) = 1 - \sum_{t=1}^{T-1} v_{mt}(s). We define g_{mt}(s) = \sum_{l=1}^{L_m} W_{tl}^{(m)} K(s, s_{ml}) + W_{t0}^{(m)}, where K(s, s_{ml}) is a kernel, and here we utilize the radial basis function kernel K(s, s_{ml}) = \exp[-\|s - s_{ml}\|^2 / \psi_{mt}]. The parameter kernel width \psi_{mt} plays an important role in dictating the size of segments associated with stick t, and therefore these parameters should be learned by the data in the analysis. In practice we define a library of discrete kernel widths \tilde{\psi} = \{\tilde{\psi}_d\}_{d=1}^{D}, and infer each \psi_{mt}, placing a uniform prior on the elements of \tilde{\psi}.
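The following is a minimal numerical sketch of the spatially varying stick weights in (7), under the RBF kernel just defined. The weight values, kernel widths, and sparsity pattern are illustrative assumptions; in the model they are inferred, with the Student-t prior described below.

```python
import numpy as np

def lsbp_weights(s, centers, W, W0, psi):
    # s: (2,) query location; centers: (L, 2) superpixel coordinates s_ml.
    # W: (T-1, L) kernel weights W_tl; W0: (T-1,) biases W_t0; psi: (T-1,) widths.
    sq_dist = np.sum((centers - s) ** 2, axis=1)        # ||s - s_ml||^2
    kern = np.exp(-sq_dist[None, :] / psi[:, None])     # RBF kernel, one row per layer
    g = (W * kern).sum(axis=1) + W0                     # g_t(s)
    sigma = 1.0 / (1.0 + np.exp(-g))                    # logistic link
    v = sigma * np.concatenate(([1.0], np.cumprod(1.0 - sigma[:-1])))
    return np.concatenate((v, [1.0 - v.sum()]))         # v_T(s) absorbs the remainder

rng = np.random.default_rng(5)
centers = rng.uniform(0.0, 320.0, size=(40, 2))
W = np.zeros((3, 40))
W[0, 7], W[1, 20] = 8.0, 8.0     # sparse, large weights -> two dominant local layers
v = lsbp_weights(np.array([100.0, 100.0]), centers, W,
                 W0=np.full(3, -4.0), psi=np.full(3, 60.0))
```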
We desire that a given stick v_{mt}(s) has importance (at most) over a localized region, and therefore we impose sparseness priors on parameters \{W_{tl}^{(m)}\}_{l=0}^{L_m}. Specifically, W_{tl}^{(m)} \sim N(0, (\lambda_{tl}^{(m)})^{-1}), and \lambda_{tl}^{(m)} is drawn from a gamma prior, with hyper-parameters set to encourage most \lambda_{tl}^{(m)} \to \infty. Such a Student-t prior is also applied in [4]. The model described above is termed a logistic stick-breaking process (LSBP). For notational convenience, c_{ml} \sim \sum_{t=1}^{T} v_{mt}(s_{ml}) \delta_{\zeta_{mt}} and \zeta_{mt} \sim \sum_{k=1}^{K} w_{z_m k} \delta_k constructed as above is represented as a draw from \mathrm{LSBP}_T(w_{z_m}). Figure 1 depicts the detailed generative process of the proposed model with LSBP.
Figure 1: Depiction of the generative process. (i) A scene-class indicator z_m \in \{1, . . . , I\} is drawn to define the image class; (ii) conditioned on z_m, and using the LSBP, contiguous segmented blocks are constituted, with associated words defined by object indicator c_{ml} \in \{1, . . . , K\}, where w_i defines the probability of observing each object type for image class i; (iii) conditioned on c_{ml}, image-feature atoms are drawn from appropriate mixture models G_{c_{ml}}, linked to over-segmented regions within each of the object clusters; (iv) the image-feature model parameters are responsible for generating the image features, via the model F(\cdot), where \theta is the image-feature parameter.
3.2 Discussion of LSBP properties and comparison with KSBP
There are two key components of the LSBP construction: (i) sparseness promotion on the W_{tl}^{(m)}, and (ii) the use of a logistic link function to define spatial stick weights. A particular non-zero W_{tl}^{(m)} is (via the kernel) associated with the l-th local spatial region, with spatial extent defined by \psi_{mt}. If W_{tl}^{(m)} is sufficiently large, the "clipping" property of the logistic link yields a spatially contiguous and extended region over which the t-th LSBP layer will dominate. Specifically, c_{ml} will likely be the same for data samples located near (defined by \psi_{mt}) where a large W_{tl}^{(m)} resides, since in this region \sigma[g_{mt}(s)] \approx 1. All locations s for which (roughly) g_{mt}(s) \geq 4 will have, via the "clipping" manifested via the logistic, nearly the same high probability of being associated with model layer t. Sharp segment boundaries are also encouraged by the steep slope of the logistic function.
A related use of spatial information is constituted via the kernel stick-breaking process (KSBP) [2].
With the KSBP, rather than assuming exchangeable data, the v_{mt}(s) in (6) is defined as:
v_{mt}(s) = V_{mt} K(s, \Gamma_{mt}) \prod_{\tau=1}^{t-1} [1 - V_{m\tau} K(s, \Gamma_{m\tau})] , \qquad V_{mt} \sim \mathrm{Beta}(1, \alpha_0) \qquad (8)
where K(s, \Gamma_{mt}) represents a kernel distance between the feature-vector spatial coordinate s and a local basis location \Gamma_{mt} associated with the t-th stick. Although such a model also establishes spatial
dependence within local regions, the form of the prior has not been found explicit enough to impose
smooth segments with sharp boundaries, as demonstrated in [2].
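For side-by-side intuition, a corresponding sketch of the KSBP weights in (8) is given below. The RBF form of the kernel, the basis locations, and the Beta draws are illustrative assumptions not fixed by the text.

```python
import numpy as np

def ksbp_weights(s, basis, V, width):
    # basis: (T, 2) locations Gamma_mt; V: (T,) sticks V_mt ~ Beta(1, alpha_0).
    k = np.exp(-np.sum((basis - s) ** 2, axis=1) / width)  # kernel K(s, Gamma_mt)
    vk = V * k
    # Unlike Eq. (7), these truncated weights need not exhaust all of the mass.
    return vk * np.concatenate(([1.0], np.cumprod(1.0 - vk[:-1])))

rng = np.random.default_rng(6)
basis = rng.uniform(0.0, 320.0, size=(10, 2))
V = rng.beta(1.0, 1.0, size=10)
v = ksbp_weights(np.array([100.0, 100.0]), basis, V, width=60.0)
```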
4 Using the Proposed Model
4.1 Inference
Bayesian inference seeks to estimate the posterior distribution of the latent variables \Theta, given the observed data D and hyper-parameters \Psi. We employ variational Bayesian (VB) [14] inference as a compromise between accuracy and efficiency. This method approximates an intractable joint posterior p(\Theta|D) of all the hidden variables by a product of marginal distributions q(\Theta) = \prod_f q_f(\Theta_f), each over only a single hidden variable \Theta_f. The optimal parameterization of q_f(\Theta_f) for each variable is obtained by minimizing the Kullback-Leibler divergence between the variational approximation q(\Theta) and the true joint posterior p(\Theta).
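For reference, the generic mean-field solution that such a scheme iterates is a standard result (see, e.g., [14]), not a model-specific derivation:

\ln q_f^*(\Theta_f) = \mathbb{E}_{q(\Theta_{\setminus f})}\left[\ln p(\Theta, D)\right] + \mathrm{const}

Each factor is updated in turn with the others held fixed, which monotonically reduces the KL divergence above.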
4.2 Processing images with no words given
If one is given M images, all non-annotated, then the model may be employed on the data \{x_{ml}\}_{l=1}^{L_m}, for m = 1, . . . , M, from which a posterior distribution is inferred on the image model parameters \{\theta^*_j\}_{j=1}^{J}, and on \{G_k\}_{k=1}^{K}. Note that properties of the image classes and of the objects within images are inferred by processing all M images jointly. By placing all images within the context of
each other, the model is able to infer which building blocks (classes and objects) are responsible for
all of the data. In this sense the simultaneous processing of multiple images is critical: the learning
of properties of objects in one image is aided by the properties being learned for objects in all other
images, through the inference of inter-relationships and commonalities.
After the M images are analyzed in the absence of annotations, one may observe example portions
of the M images, to infer the link between actual object characteristics within imagery and the
associated latent object indicator to which it was assigned. With this linkage made, one may assign
words to all or a subset of the K object types. After words are assigned to previously latent object
types, the results of the analysis (with no additional processing) may be used to automatically label
regions (objects) in all of the images. This is manifested because each of the cluster indicators cml
is associated with a latent localized object type (to which a word may now be assigned).
4.3 Joint processing of images and annotations
We may consider problems for which a subset of the images are provided with annotations (but not
the explicit location and segmented-out objects); the words are assumed to reside in a prescribed
dictionary of object types. The generation of the annotations (and images) is constituted via the
model in (5), with the LSBP employed as discussed. We do not require that all images are annotated
(the non-annotated images help learn the properties of the image features, and are therefore useful
even if they do not provide information about the words). It is desirable that the same word be
annotated for multiple images. The presence of the same word within the annotations of multiple
images encourages the model to infer what objects (represented in terms of image features) are
common to the associated images, aiding the learning. Hence, the presence of annotations serves as
a learning aid (encourages looking for commonalities between particular images, if words are shared
in the associated annotations). Further, the annotations associated with images may disambiguate
objects that appear similar in image-feature space (because they will have different annotations).
From the above discussion, the model performance will improve as more images are annotated
with each word, but presumably this annotation is much easier for the human than requiring one to
segment out and localize words within a scene.
5 Experimental Results
Experiments are performed on two real-world data sets: subsets of Microsoft Research (MSRC)
data ( http://research.microsoft.com/en-us/projects/objectclassrecognition/ ) and UIUC-Sport data from
[15, 16], the latter images originally obtained from the Flickr website and available online (http://vision.cs.princeton.edu/lijiali/event_dataset/).
For the MSRC dataset, 10 categories of images with manual annotations are selected: "tree", "building", "cow", "face", "car", "sheep", "flower", "sign", "book" and "chair". The number of images in the "cow" class is 45, and in the "sheep" class there are 35; there are 30 images in all other classes. From each category, we randomly choose 10 images, and remove the annotations, treating these as non-annotated images within the analysis (to allow quantification of inferred-annotation quality). Each image is of size 213 × 320 or 320 × 213. In addition, we remove all words that occur less than 8 times (approximately 1% of all words). There are 14 unique words: "void", "building", "grass", "tree", "cow", "sheep", "sky", "face", "car", "flower", "sign", "book", "chair" and "road". We assume that each word corresponds to a visual object in the image. Regarding the case
in which multiple words may refer to the same object, one may use the method mentioned in [16] to
group synonyms in the preprocessing phase (not necessary here). The following analysis, in which
annotated and non-annotated images are processed jointly, is executed as discussed in Section 4.3.
The UIUC-Sport dataset [15, 16] contains 8 types of sports: "badminton", "bocce", "croquet", "polo", "rock climbing", "rowing", "sailing" and "snowboarding". Here we randomly choose 25 images for each category, and each image is resized to a dimension of 240 × 320 or 320 × 240.
Since the annotations are not available at the cited website, the analysis is initially performed with
no words, as discussed in Section 4.2. After performing this analysis, and upon examining the
properties of segmented data associated with each (latent) object class on a small subset of the data,
we can infer words associated with some important Gk , and then label portions (objects) within each
image via the inferred words. This process is different than in [6, 16, 23], in which annotations were
employed.
When investigating algorithm performance, we make comparisons to Corr-LDA [6]. Our objectives
are related to those in [16, 23], but to the authors' knowledge the associated software is not currently available. The Corr-LDA model [6] is relatively simple, and has been coded by ourselves. We also examine our model with the proposed LSBP replaced with KSBP.
5.1 Image preprocessing
Each image is first segmented into 800 "superpixels", which are local, coherent and preserve most of the structure necessary for segmentation at the scale of interest [19]. The software used for over-segmentation is discussed in [17] and is available online (http://www.cs.sfu.ca/~mori/research/superpixels/). Each superpixel is represented by both color and texture descriptors, based on the local RGB, hue [25] feature vectors and also the output of maximum response (MR) filter banks [22] (http://www.robots.ox.ac.uk/~vgg/research/texclass/filters.html).
We discretize these features using a codebook of size 64 (other codebook sizes gave similar performance), and then calculate the distribution [1] for each feature within each superpixel as visual
words [3, 6, 10, 11, 20, 23, 24].
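A hedged sketch of this per-superpixel visual-word step follows: each raw descriptor is quantized against the 64-entry codebook, and the resulting counts are normalized into a distribution. How the codebook itself is learned is not specified here, and all names are assumptions.

```python
import numpy as np

def visual_word_histogram(descriptors, codebook):
    # descriptors: (P, d) raw features (e.g., RGB/hue or MR filter responses)
    # for the P pixels of one superpixel; codebook: (64, d) learned centers.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                       # nearest codeword per pixel
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()                        # distribution over 64 visual words

rng = np.random.default_rng(7)
codebook = rng.normal(size=(64, 8))
superpixel_descriptors = rng.normal(size=(120, 8))
p_words = visual_word_histogram(superpixel_descriptors, codebook)
```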
Since each superpixel is represented by three visual words, the mixture atoms \theta^*_j are three multinomial distributions \{\mathrm{Mult}(\theta^*_{1j}) \otimes \mathrm{Mult}(\theta^*_{2j}) \otimes \mathrm{Mult}(\theta^*_{3j})\} for j = 1, . . . , J. Accordingly, the variational distribution in the VB [14] analysis is q(\theta^*_j) = \mathrm{Dir}(\theta^*_{1j}|\tilde{\theta}_{1j}) \otimes \mathrm{Dir}(\theta^*_{2j}|\tilde{\theta}_{2j}) \otimes \mathrm{Dir}(\theta^*_{3j}|\tilde{\theta}_{3j}).
The center of each superpixel is recorded as the location coordinate s_{ml}. The set of discrete kernel widths \tilde{\psi} are defined by 30, 35, . . . , 160, and a uniform multinomial prior is placed on these
parameters (the size of each kernel is inferred, for each of the T LSBP layers, and separately in
each of the M images). To save computational resources, rather than centering a kernel at each of
the Lm points associated with the superpixels, the kernel spatial centers are placed once every 20
superpixels.
We set truncation levels I = 20, J = 50 and T = 10 (similar results were found for larger truncations). For analysis on the UIUC-Sport dataset, K = 40. All gamma priors for precision parameters \alpha_w, \alpha_v or \{\lambda_{tl}^{(m)}\}_{t=1,l=0,m=1}^{T,L_m,M}, \alpha_u and \alpha_h are set as (10^{-6}, 10^{-6}). All these hyper-parameters and truncation levels have not been optimized or tuned. In the following comparisons, the number of topics is set to be the same as the atom number, J = 50, and the Dirichlet hyperparameters are set as (1/J, . . . , 1/J)^T for the Corr-LDA model; a gamma prior is also used for the KSBP precision parameter, \alpha_0 in (8), also set as (10^{-6}, 10^{-6}).
5.2 Scene clustering
The proposed model automatically learns a posterior distribution on mixture-weights u and in so
doing infers an estimate of the proper number of scene classes. As shown in Figure 2, although we
initialized the truncation level to I = 20, for the MSRC dataset only the first 10 clusters are selected
as being important (the mixture weights for other clusters are very small); recall that "truth" indicated that there were 10 classes. In addition, based on the learned posterior word distribution w_i for each image class i, we can further infer which words/objects are probable for each scene class. In Figure 2, we show two example w_i for the MSRC "building" and "cow" classes. Although not
shown here for brevity, the analysis on UIUC features correctly inferred the 8 image classes associated with that data (without using annotations). By examining the words and segmented objects
extracted with high probability as represented by w_i, we may also assign names to each of the 18
image classes across both the MSRC and UIUC data, consistent with the associated class labels
provided with the data.
For each image m \in \{1, . . . , M\} we also have a posterior distribution on the associated class indicator z_m. We approximate the membership for each image by assigning it to the mixture with largest probability. This "hard" decision is employed to provide a scene-level label for each image (the Bayesian analysis can also yield a "soft" decision in terms of a full posterior distribution). Figure 3
presents the confusion matrices for the proposed model with and without LSBP, on both the MSRC
and UIUC datasets. Both forms of the model yield relatively good results, but the average accuracy
indicates that the model with LSBP performs better than that without LSBP for both datasets. Note
that the results in Figure 3 for the UIUC-Sport data cannot be directly compared with those in [6, 16],
since our experiments were performed on non-annotated images.
Using the concepts discussed in Section 4.2, and employing results from the processed non-annotated UIUC-Sport data, we examined the properties of segmented data associated with each (latent) object type. We inferred the presence of 12 unique objects, and these objects were assigned the following words: "human", "horse", "grass", "sky", "tree", "ground", "water", "rock", "court", "boat", "sailboat" and "snow". Using these words, we annotated each image and re-trained our model in the presence of annotations. After doing so, the average accuracies of scene-level clustering are improved to 72.0% and 69.0% with and without LSBP, respectively. The improvement in performance, relative to processing the images without annotations, is attributed to the ability of words to disambiguate distinct objects that have similar properties in image-feature space (e.g., the distinct use of "boat" and "sailboat", which helps distinguish rowing and sailing).
[Figure 2 plots: left panel, mixture weight vs. cluster index for the MSRC data; middle and right panels, object probability vs. object index for the "building" and "cow" classes.]
Figure 2: Example inferred latent properties associated with the MSRC dataset. Left: Posterior distribution on the mixture-weights u, quantifying the probability of scene classes (10 classes are inferred). Middle and Right: Example probability of objects for a given class, w_i (probability of object/words); here we only give the top 5 words for each class.
[Figure 3 graphic: four confusion matrices (MSRC without/with LSBP over the 10 object classes; UIUC-Sport without/with LSBP over the 8 sport classes); see the caption below for the average accuracies.]
Figure 3: Comparisons using confusion matrices for all images in each dataset (all of the annotated and nonannotated images in MSRC; all the non-annotated images in UIUC-Sport). The left two results are for MSRC,
and the right two for UIUC-Sport. In each pair, the result is without LSBP, and the right is with LSBP. Average
performance, left to right: 82.90%, 86.80%, 60.50% and 63.50%.
5.3 Image annotation
The proposed model infers a posterior distribution for the indicator variables c_{ml} (defining the object/word for super-pixel l in image m). Similar to the "hard" image-class assignment discussed above, a "hard" segmentation is employed here to provide object labels for each super-pixel. For the
MSRC images for which annotations were held out, we evaluate whether the words associated with
objects in a given image were given in the associated annotation (thus, our annotation is defined by
the words we have assigned to objects in an image).
Table 1: Comparison of precision and recall values for annotation and segmentation with Corr-LDA [6], our
model without LSBP (Simp. Model) and the extended models with KSBP (Ext. with KSBP) and LSBP (Ext.
with LSBP) on MSRC datasets. To evaluate annotation performance, the results are just calculated based on
non-annotated images; while for segmentation, the results are based on all images.
Object |            Annotation              |                   Segmentation
       | Corr-LDA    Simp. Model  Ext. LSBP | Corr-LDA    Simp. Model  Ext. KSBP   Ext. LSBP
       | Prec Rec F  Prec Rec F   Prec Rec F| Prec Rec F  Prec Rec F   Prec Rec F  Prec Rec F
car    | .18 .60 .28  .70 .70 .70  .70 .70 .70 | .13 .08 .10  .49 .38 .43  .56 .50 .53  .61 .58 .60
tree   | .30 .50 .38  .50 .60 .55  .55 .60 .57 | .06 .03 .04  .43 .38 .40  .48 .44 .46  .51 .48 .50
sheep  | .17 .60 .27  .70 .70 .70  .70 .70 .70 | .02 .02 .02  .53 .63 .58  .57 .63 .60  .60 .62 .61
sky    | .38 .65 .48  .66 .60 .63  .68 .60 .64 | .39 .29 .33  .40 .51 .45  .49 .54 .51  .55 .55 .55
chair  | .14 .60 .22  .70 .70 .70  .70 .70 .70 | .13 .16 .15  .57 .55 .56  .58 .55 .57  .59 .55 .57
Mean   | .23 .63 .32  .65 .63 .64  .67 .65 .65 | .17 .18 .16  .49 .51 .50  .53 .53 .53  .56 .54 .54
We use precision-recall and F-measures [16, 23] to quantitatively evaluate the annotation performance. The left part of Table 1 lists detailed annotation results for five objects, as well as the overall scores from all object classes for the MSRC data. Our annotation results consistently and significantly outperform Corr-LDA, especially for the precision values.
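For clarity, the F-measure referenced above is the usual harmonic mean of precision and recall, F = 2 · Prec · Rec / (Prec + Rec); for example, for "car" under Corr-LDA annotation, 2(.18)(.60)/(.18 + .60) ≈ .28, matching the table.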
5.4 Object segmentation
Figure 4 shows some detailed object-segmentation results of Corr-LDA and the proposed model
(with and without LSBP). We observe that our models generally yield visibly better segmentation
relative to Corr-LDA. For example, for complicated objects the Corr-LDA segmentation results are
very sensitive to the feature variance, and an object is generally segmented into many small, detailed
parts. By contrast, due to the imposed mixture structure on each object, our models cluster small
parts into one aggregate object. Furthermore, LSBP encourages local contiguous regions to be
grouped in the same segment, and therefore it is less sensitive to localized variability. In addition,
compared with results shown in [2], which also used the MSRC dataset, one may observe KSBP
cannot do as well as LSBP in maintaining spatial contiguity, as discussed in Section 3.2. Due to
space limitations, detailed example comparison between LSBP and KSBP will be shown elsewhere
in a longer report; the quantitative comparison in Table 1 further demonstrates the advantages of
LSBP over KSBP.
[Figure 4 graphic: example images with segmentation and object-label overlays (e.g., tree, building, grass, sky, road, car, sign, human, horse, water, rock, court, boat, sailboat).]
Figure 4: Example segmentation and labeling results. First row: original images; second row: Corr-LDA [6];
third row: proposed model without LSBP; fourth row: proposed model with LSBP. Columns 1-3 from MSRC
dataset; Columns 4-6 from UIUC-Sport dataset. The names of the original images are inferred by scene-level
classification via our model. The UIUC-Sport results are based on the words inferred by our model.
The MSRC database provides manually defined segmentations, to which we quantitatively compare.
The right part of Table 1 compares results of the proposed model with Corr-LDA. As indicated in
Table 1, the proposed model (with and without LSBP) significantly outperforms Corr-LDA for all
objects. Moreover, due to imposed spatial contiguity, the models with KSBP and LSBP are better
than without.
The experiments have been performed in non-optimized software written in Matlab, on a Pentium
PC with 1.73 GHz CPU and 4G RAM. One VB run of our model with LSBP, for 70 VB iterations,
required nearly 7 hours for 320 images from MSRC dataset. Typically 50 VB iterations are required
to achieve convergence. The UIUC-Sport data required comparable CPU time. It typically took less
than half the CPU time for our model without LSBP on a same dataset. All results are based on a
single VB run, with random initialization.
6 Conclusions
A nonparametric Bayesian model has been developed for clustering M images into classes; the images are represented as an aggregation of distinct localized objects, to which words may be assigned.
To infer the relationships between image objects and words (labels), we only need to make the association between inferred model parameters and words. This may be done as a post-processing step if
no words are provided, and it may done in situ if all or a subset of the M images are annotated. Spatially contiguous objects are realized via a new logistic stick-breaking process. Quantitative model
performance is highly competitive relative to competing approaches, with relatively fast inference
realized via variational Bayesian analysis. The authors acknowledge partial support from ARO,
AFOSR, DOE, NGA and ONR.
References
[1] T. Ahonen and M. Pietikäinen. Image description using joint distribution of filter bank responses. Pattern Recognition Letters, 30:368-376, 2009.
[2] Q. An, C. Wang, I. Shterev, E. Wang, L. Carin, and D. B. Dunson. Hierarchical kernel stick-breaking process for multi-task image analysis. In ICML, 2008.
[3] K. Barnard, P. Duygulu, N. de Freitas, D. Forsyth, D. M. Blei, and M. I. Jordan. Matching words and pictures. JMLR, 3:1107-1135, 2003.
[4] C. M. Bishop and M. E. Tipping. Variational relevance vector machines. In UAI, 2000.
[5] D. Blackwell and J. B. MacQueen. Ferguson distributions via Polya urn schemes. Ann. Statist., 1(2):353-355, 1973.
[6] D. M. Blei and M. Jordan. Modeling annotated data. In SIGIR, 2003.
[7] D. M. Blei and J. D. McAuliffe. Supervised topic model. In NIPS, 2007.
[8] D. M. Blei, A. Ng, and M. I. Jordan. Latent Dirichlet allocation. JMLR, 3:993-1022, 2003.
[9] A. Bosch, A. Zisserman, and X. Munoz. Scene classification via pLSA. In ECCV, 2006.
[10] L. Cao and L. Fei-Fei. Spatially coherent latent topic model for concurrent segmentation and classification of objects and scenes. In ICCV, 2007.
[11] L. Fei-Fei and P. Perona. A Bayesian hierarchical model for learning natural scene categories. In CVPR, 2005.
[12] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Mach. Learn., 42(1-2):177-196, 2001.
[13] H. Ishwaran and L. F. James. Gibbs sampling methods for stick-breaking priors. JASA, 96(453):161-173, 2001.
[14] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. Saul. An introduction to variational methods for graphical models. Mach. Learn., 37(2):183-233, 1999.
[15] J. Li and L. Fei-Fei. What, where and who? Classifying events by scene and object recognition. In ICCV, 2007.
[16] J. Li, R. Socher, and L. Fei-Fei. Towards total scene understanding: classification, annotation and segmentation in an automatic framework. In CVPR, 2009.
[17] G. Mori. Guiding model search using segmentation. In ICCV, 2005.
[18] A. Rabinovich, A. Vedaldi, C. Galleguillos, and E. Wiewiora. Objects in context. In ICCV, 2007.
[19] X. Ren and J. Malik. Learning a classification model for segmentation. In ICCV, 2003.
[20] E. B. Sudderth and M. I. Jordan. Shared segmentation of natural scenes using dependent Pitman-Yor processes. In NIPS, 2008.
[21] Y. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. JASA, 101:1566-1582, 2005.
[22] M. Varma and A. Zisserman. Classifying images of materials: Achieving viewpoint and illumination independence. In ECCV, 2002.
[23] C. Wang, D. M. Blei, and L. Fei-Fei. Simultaneous image classification and annotation. In CVPR, 2009.
[24] X. Wang and E. Grimson. Spatial latent Dirichlet allocation. In NIPS, 2007.
[25] J. V. D. Weijer and C. Schmid. Coloring local feature extraction. In ECCV, 2006.
[26] O. Yakhnenko and V. Honavar. Multi-modal hierarchical Dirichlet process model for predicting image annotation and image-object label correspondence. In SIAM SDM, 2009.
[27] Z.-H. Zhou and M.-L. Zhang. Multi-instance multi-label learning with application to scene classification. In NIPS, 2006.
Functional network reorganization in motor cortex
can be explained by reward-modulated Hebbian
learning
Robert Legenstein^1*, Steven M. Chase^{2,3,4}, Andrew B. Schwartz^{2,3}, Wolfgang Maass^1
^1 Institute for Theoretical Computer Science, Graz University of Technology, Austria
^2 Department of Neurobiology, University of Pittsburgh
^3 Center for the Neural Basis of Cognition
^4 Department of Statistics, Carnegie Mellon University
Abstract
The control of neuroprosthetic devices from the activity of motor cortex neurons
benefits from learning effects where the function of these neurons is adapted to
the control task. It was recently shown that tuning properties of neurons in monkey motor cortex are adapted selectively in order to compensate for an erroneous
interpretation of their activity. In particular, it was shown that the tuning curves of
those neurons whose preferred directions had been misinterpreted changed more
than those of other neurons. In this article, we show that the experimentally observed self-tuning properties of the system can be explained on the basis of a
simple learning rule. This learning rule utilizes neuronal noise for exploration and
performs Hebbian weight updates that are modulated by a global reward signal.
In contrast to most previously proposed reward-modulated Hebbian learning rules,
this rule does not require extraneous knowledge about what is noise and what is
signal. The learning rule is able to optimize the performance of the model system
within biologically realistic periods of time and under high noise levels. When the
neuronal noise is fitted to experimental data, the model produces learning effects
similar to those found in monkey experiments.
1 Introduction
It is a commonly accepted hypothesis that adaptation of behavior results from changes in synaptic efficacies in the nervous system. However, there exists little knowledge about how changes in
synaptic efficacies change behavior and about the learning principles that underlie such changes. Recently, one important hint has been provided in the experimental study [1] of a monkey controlling
a neuroprosthetic device. The monkey's intended movement velocity vector can be extracted from
the firing rates of a group of recorded units by the population vector algorithm, i.e., by computing
the weighted sum of their PDs, where each weight is the unit's normalized firing rate [2].1 In [1],
monkey was to move the cursor from the center of an imaginary cube to a target appearing at one of
its corners. It is well known that performance increases with practice when monkeys are trained to
move to targets in similar experimental setups, i.e., the function of recorded neurons is adapted such
that control over the new artificial 'limb' is improved [3]. In [1], it was systematically studied how
such reorganization changes the tuning properties of recorded neurons. The authors manipulated
the interpretation of recorded firing rates by the readout system (i.e., the system that converts firing
* To whom correspondence should be addressed: [email protected]
1 In general, a unit is not necessarily equal to a neuron in the experiments. Since the spikes of a unit are determined by a spike sorting algorithm, a unit may represent the mixed activity of several neurons.
rates of recorded neurons into cursor movements). When the interpretation was altered for a subset
of neurons, the tuning properties of the neurons in this subset changed significantly more strongly than
those of neurons for which the interpretation of the readout system was not changed. Hence, the experiment showed that motor cortical neurons can change their activity specifically and selectively to
compensate for an altered interpretation of their activity within some task. Such an adjustment strategy is quite surprising, since it is not clear how the cortical adaptation mechanism is able to determine for which subset of neurons the interpretation was altered. We refer to this learning effect as the 'credit assignment' effect.
In this article, we propose a simple synaptic learning rule and apply it to a model neural network.
This learning rule is capable of optimizing performance in a 3D reaching task and it can explain
the learning effects described in [1]. It is biologically realistic since weight changes are based
exclusively on local variables and a global scalar reward signal R(t). The learning rule is rewardmodulated Hebbian in the following sense: Weight changes at synapses are driven by the correlation
between a global reward signal, the presynaptic activity, and the difference of the postsynaptic potential from its recent mean (see [4] for a similar approach). Several reward-modulated Hebbian
learning rules have been studied for quite some time both in the context of rate-based [5, 6, 7, 8, 4]
and spiking models [9, 10, 11, 12, 13, 14, 15, 16]. They turn out to be viable learning mechanisms in
many contexts and constitute a biologically plausible alternative [17, 18] to backpropagation-based
mechanisms preferentially used in artificial neural networks. One important feature of the learning
rule proposed in this article is that noisy neuronal output is used for exploration to improve performance. It was often hypothesized that neuronal variability can optimize motor performance. For
example, in songbirds, syllable variability results in part from variations in the motor command, i.e.,
the variability of neuronal activity [19]. Furthermore, there exists evidence for the songbird system
that motor variability reflects meaningful motor exploration that can support continuous learning
[20]. We show that relatively high amounts of noise are beneficial for the adaptation process but
not problematic for the readout system. We find that under realistic noise conditions, the learning
rule produces effects surprisingly similar to those found in the experiments of [1]. Furthermore,
the version of the reward-modulated Hebbian learning rule that we propose does not require extraneous information about what is noise and what is signal. Thus, we show in this study that
reward-modulated learning is a possible explanation for experimental results about neuronal tuning
changes in monkey pre-motor cortex. This suggests that reward-modulated learning is an important
plasticity mechanism for the acquisition of goal-directed behavior.
2 Learning effects in monkey motor cortex
In this section, we briefly describe the experimental results of [1] as well as the network that we used
to model learning in motor cortex. Neurons in motor and premotor cortex of primates are broadly
tuned to intended arm movement direction [21, 3].2 This sets the basis for the ability to extract
intended arm movement from recorded neuronal activity in these areas. The tuning curve of a direction-tuned neuron is given by its firing rate as a function of movement direction. This curve can be fitted reasonably well by a cosine function. The preferred direction (PD) $p_i \in \mathbb{R}^3$ of a neuron $i$ is
defined as the direction in which the cosine fit to its firing rate is maximal, and the modulation depth
is defined as the difference in firing rate between the maximum of the cosine fit and the baseline
(mean). The experiments in [1] consisted of a sequence of four brain control sessions: Calibration,
Control, Perturbation, and Washout. The tuning functions of an average of 40 recorded neurons
were obtained in the Calibration session where the monkey moved its hand in a center out reaching
task. Those PDs (or manipulated versions of them) were later used for decoding neural trajectories.
We refer to PDs used for decoding as 'decoding PDs' (dPDs) in order to distinguish them from
measured PDs. In Control, Perturbation, and Washout sessions the monkey had to perform a cursor
control task in a 3D virtual reality environment (see Figure 1B). The cursor was initially positioned
in the center of an imaginary cube, a target position on one of the corners of the cube was randomly
selected and made visible. When the monkey managed to hit the target position with the cursor
or a 3s time period expired, the cursor position was reset to the origin and a new target position
was randomly selected from the eight corners of the imaginary cube. In the Control session, the
measured PDs were used as dPDs for cursor control. In the Perturbation session, the dPDs of a
randomly selected subset of neurons (25% or 50% of the recorded neurons) were altered. This was
2 Arm movement refers to movement of the endpoint of the arm.
Figure 1: Description of the 3D cursor control task and network model for cursor control. A) Schematic of the network model. A set of m input neurons (input to motor cortex, x(t)) project through plastic weights w_ij to n_total noisy neurons in motor cortex (activities s(t)). The monkey arm movement was modeled by a fixed linear mapping from the activities of the modeled motor cortex neurons to the 3D velocity vector of the monkey arm. A subset of n neurons in the simulated motor cortex was recorded for cursor control; the cursor velocity y(t) was given by the population vector via the dPDs. B) The task was to move the cursor from the center of an imaginary cube to one of its eight corners; the target direction y*(t) points from the cursor position to the target position.
achieved by rotating the measured PDs by 90 degrees around the x, y, or z axes (all PDs were rotated
around a single common axis in each experiment). We term these neurons rotated neurons. Other
dPDs remained the same as in the Control session (non-rotated neurons). The measured PDs were
used for cursor control in the subsequent Washout session. In the Perturbation session, neurons
adapted their firing behavior to compensate for the altered dPDs. The authors observed differential
effects of learning for the two groups of non-rotated neurons and rotated neurons. Rotated neurons
tended to shift their PDs in the direction of dPD rotation, thus compensating for the perturbation.
For non-rotated neurons, the change of the preferred directions was weaker and significantly less
strongly biased towards the rotation direction. We refer to this differential behavior of rotated and
non-rotated neurons as the 'credit assignment effect'.
Network and neuron model: Our aim in this article is to explain the described effects in the
simplest possible model. The model consisted of two populations of neurons, see Figure 1A. The
input population modeled those neurons which provide input to the neurons in motor cortex. It
consisted of $m = 100$ neurons with activities $x_1(t), \ldots, x_m(t) \in \mathbb{R}$. Another population modeled neurons in motor cortex which receive inputs from the input population. It consisted of $n_{total} = 340$ neurons with activities $s_1(t), \ldots, s_{n_{total}}(t)$.3 All modeled motor cortex neurons were used to determine the monkey arm movement in our model. A small number of them ($n = 40$) modeled recorded neurons used for cursor control. We denote the activities of this subset as $s_1(t), \ldots, s_n(t)$.
The total synaptic input $a_i(t)$ for neuron $i$ at time $t$ was modeled as a noisy weighted sum of its inputs:
$$a_i(t) = \sum_{j=1}^{m} w_{ij}\, x_j(t) + \xi_i(t), \qquad \xi_i(t) \text{ drawn from distribution } D(\nu), \qquad (1)$$
where $w_{ij}$ is the synaptic efficacy from input neuron $j$ to neuron $i$. These weights were set randomly from a uniform distribution in the interval $[-0.5, 0.5]$ at the beginning of each simulation. $\xi_i(t)$ models some exploratory signal needed to explore possibly better network behaviors. In cortical neurons, this exploratory signal could for example result from neuronal or synaptic noise, or it could be spontaneous activity of the neuron. An independent sample from the zero-mean distribution $D(\nu)$ was drawn as the exploratory signal $\xi_i(t)$ at each time step. The parameter $\nu$ (exploration level)
3 The distinction between these two layers is purely functional. Input neurons may be situated in extracortical areas, in other cortical areas, or even in motor cortex itself. The functional feature of these two populations in our model is that learning takes place solely in synapses of projections between these populations, since the aim of this article is to explain the learning effects in the simplest model. But in principle the same learning is applicable to multilayer networks.
determines the variance of the distribution and hence the amount of noise in the neuron. A nonlinear function was applied to the total synaptic input, $s_i(t) = \phi(a_i(t))$, to obtain the activity $s_i(t)$ of neuron $i$ at time $t$. We used the piecewise linear activation function $\phi: \mathbb{R} \to \mathbb{R}$, $\phi(x) = \max\{x, 0\}$, in order to guarantee non-negative firing rates.
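For concreteness, this neuron model can be sketched in a few lines of Python; the uniform choice for $D(\nu)$ and the symbol names are assumptions of the sketch, not code from the original study.

```python
import numpy as np

def neuron_activities(W, x, nu, rng):
    """One evaluation of Eq. (1) plus the rectifying nonlinearity.

    W  : (n_total, m) weight matrix of synaptic efficacies w_ij
    x  : (m,) input activities x_j(t)
    nu : exploration level; xi_i(t) ~ Uniform(-nu, nu) is one choice for D(nu)
    """
    xi = rng.uniform(-nu, nu, size=W.shape[0])  # exploratory signal xi_i(t)
    a = W @ x + xi                              # total synaptic input a_i(t)
    s = np.maximum(a, 0.0)                      # phi(a) = max(a, 0)
    return s, a
```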
Task model: We modeled the cursor control task as shown in Figure 1B. Eight possible cursor target
positions were located at the corners of a unit cube in 3D space which had its center at the origin
of the coordinate system. At each time step $t$ the desired direction of cursor movement $y^*(t)$ was computed from the current cursor and target position. By convention, the desired direction $y^*(t)$ had unit Euclidean norm. From the desired movement direction $y^*(t)$, the activities $x_1(t), \ldots, x_m(t)$ of the neurons that provide input to the motor cortex neurons were computed, and the activities $s_1(t), \ldots, s_n(t)$ of the recorded neurons were used to determine the cursor velocity via their population activity vector (see below).
In order to model the cursor control experiment, we had to determine the PDs of recorded neurons.
Obviously, to determine PDs, one needs a model for monkey arm movement. In monkeys, the transformation from motor cortical activity to arm movements involves a complicated system of several
synaptic stages. In our model, we treated this transformation as a black box. Experimental findings
suggest that monkey arm movements can be predicted quite well by a linear model based on the
activities of a small number of motor cortex neurons [3]. We therefore assumed that the direction
of the monkey arm movement $y_{arm}(t)$ at time $t$ can be modeled in a linear way, using the activities of the total population of the $n_{total}$ cortical neurons $s_1(t), \ldots, s_{n_{total}}(t)$ in our simple model and a fixed randomly chosen $3 \times n_{total}$ linear mapping $Q$ (see [23]). With the transformation from motor cortex neurons to monkey arm movements being defined, the input to the network for a given desired direction $y^*$ should be chosen such that motor cortex neurons produce a monkey arm movement close to the desired movement direction. We therefore calculated from the desired movement direction the input activities $x(t) = c_{rate}\, (W^{total})^{+} Q^{+} y^*(t)$, where $Q^{+}$ denotes the pseudo-inverse of $Q$, $W^{total}$ denotes the matrix of weights $w_{ij}$ before learning, and $c_{rate}$ scales the input activity such that the activities of the neurons in the simulated motor cortex could directly be interpreted as rates in Hz [23]. This transformation from desired directions to input neuron activities was defined initially and held fixed during each simulation, because learning took place in a single synaptic stage of our model, from neurons of the input population to neurons in the motor cortex population; therefore the coding of desired directions did not change in the input population.
As described above, a subset of the motor cortex population was chosen to model recorded neurons
that were used for cursor control. For each modeled recorded neuron $i \in \{1, \ldots, n\}$, we determined the preferred direction $p_i \in \mathbb{R}^3$ as well as the baseline activity and the modulation depth by fitting a cosine tuning on the basis of simulated monkey arm movements [1, 23]. In the simulation of a Perturbation session, the dPDs $\tilde{p}_i$ of rotated neurons were rotated versions of the measured PDs $p_i$ (as in [1], one of the x, y, or z axes was chosen and the PDs were rotated by 90 degrees around this axis), whereas the dPDs of non-rotated neurons were identical to their measured PDs. The dPDs
were then used to determine the movement velocity y(t) of the cursor by the population vector
algorithm [1, 2, 23]. This decoding strategy is consistent with an interpretation of the neural activity
which codes for the velocity of the movement.
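A minimal sketch of this population vector readout follows; the normalization of rates by the cosine-fit baseline and modulation depth reflects the description of [2] above, and the exact scaling is an assumption.

```python
import numpy as np

def population_vector(rates, dpds, baselines, mod_depths):
    """Cursor velocity y(t) from recorded activity via the population vector.

    rates      : (n,) firing rates s_i(t) of the recorded neurons
    dpds       : (n, 3) decoding preferred directions, unit vectors
    baselines  : (n,) cosine-fit baselines
    mod_depths : (n,) cosine-fit modulation depths
    """
    w = (rates - baselines) / mod_depths  # normalized firing rates (weights)
    return w @ dpds                       # weighted sum of the dPDs
```

In the perturbation experiments, `dpds` would hold the (possibly rotated) decoding PDs.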
3 Adaptation with an online learning rule
Adaptation of synaptic efficacies $w_{ij}$ from input neurons to neurons in motor cortex is necessary if the actual decoding PDs $\tilde{p}_i$ do not produce optimal cursor trajectories. Assume that suboptimal dPDs $\tilde{p}_1, \ldots, \tilde{p}_n$ are used for decoding. Then for some input $x(t)$, the movement of the cursor is not in the desired direction $y^*(t)$. The weights $w_{ij}$ should therefore be adapted such that at every time step $t$ the direction of movement $y(t)$ is close to the desired direction $y^*(t)$. We can quantify the angular match $R_{ang}(t)$ at time $t$ by the cosine of the angle between movement direction $y(t)$ and desired direction $y^*(t)$:
$$R_{ang}(t) = \frac{y(t)^T y^*(t)}{\|y(t)\| \cdot \|y^*(t)\|}.$$
This measure has a value of 1 if the cursor moves exactly in the desired direction, it is 0 if the cursor moves perpendicular to the desired direction, and it is -1 if the cursor movement is in the opposite direction.
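This angular match can be computed directly; a sketch (the small eps guard against zero-length vectors is our addition):

```python
import numpy as np

def angular_match(y, y_star, eps=1e-12):
    """R_ang(t): cosine of the angle between y(t) and y*(t)."""
    return float(y @ y_star / (np.linalg.norm(y) * np.linalg.norm(y_star) + eps))
```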
We assume in our model that all synapses receive information about a global reward R(t). The
general idea that a neuromodulatory signal gates local synaptic plasticity was studied in [4]. In that
4
study, the idea was implemented by learning rules where the weight changes are proportional to the
covariance between the reward signal R and some measure of neuronal activity N at the synapse.
Here, N could correspond to the presynaptic activity, the postsynaptic activity, or the product of
both. The authors showed that such learning rules can explain a phenomenon called Herrnstein's matching law. Interestingly, for the analysis in [4] the specific implementation of this correlation-based adaptation mechanism is not important. We investigate in this article a learning rule of this
type:
EH rule: $$\Delta w_{ij}(t) = \eta\, x_j(t)\, \big[a_i(t) - \bar{a}_i(t)\big]\, \big[R(t) - \bar{R}(t)\big], \qquad (2)$$
where $\bar{a}_i(t)$ and $\bar{R}(t)$ denote the low-pass filtered versions of $a_i(t)$ and $R(t)$ with an exponential kernel4. We refer to this rule as the exploratory Hebb rule (EH rule) in this article. The important
feature of this learning rule is that apart from variables which are locally available for each neuron ($x_j(t)$, $a_i(t)$, $\bar{a}_i(t)$), only a single scalar signal, $R(t)$, is needed to evaluate performance.5 The
reward signal R(t) is provided by some neural circuit which evaluates performance of the system.
In our simulations, we simply used the angular match Rang (t) as this reward signal. Weight updates
of the rule are based on correlations between deviations of the reward signal R(t) and the activation
ai (t) from their means. It adjusts weights such that rewards above mean are reinforced. The EH
rule (2) approximates gradient ascent on the reward signal by exploring alternatives to the actual
behavior with the help of some exploratory signal $\xi(t)$. The deviation of the activation from the recent mean, $a_i(t) - \bar{a}_i(t)$, is an estimate of the exploratory term $\xi_i(t)$ at time $t$ if the mean $\bar{a}_i(t)$ is based on neuron activations $\sum_j w_{ij} x_j(t')$ which are similar to the activation $\sum_j w_{ij} x_j(t)$ at time $t$. Here we make use of (1) the fact that weights are changing very slowly and (2) the continuity of the task (inputs $x$ at successive time points are similar). Then, (2) can be seen as an approximation of
$$\Delta w_{ij}(t) = \eta\, x_j(t)\, \xi_i(t)\, \big[R(t) - \bar{R}(t)\big]. \qquad (3)$$
This rule is a typical node-perturbation learning rule [6, 7, 22, 10] which can be shown to approximate gradient ascent, see e.g. [10]. A simple derivation that shows the link between the EH rule (2)
and gradient ascent is given in [23].
The EH learning rule differs from other node-perturbation rules in an important aspect. In many
node-perturbation learning rules, the noise needs to be accessible to the learning mechanism separately from the output signal. For example, in [6] and [7] binary neurons were used. The weight
updates there depend on the probability of the neuron to output 1. In [10] the noise term is directly
incorporated in the learning rule. The EH rule does not directly need the noise signal. Instead a
temporally filtered version of the neurons activation is used to estimate the noise signal. Obviously,
this estimate is only sufficiently accurate if the input to the neuron is temporally stable on small time
scales.
4 Comparison with experimentally observed learning effects
In this section, we explore the EH rule (2) in a cursor control task that was modeled to closely match
the experimental setup in [1]. Each simulated session consisted of a sequence of movements from
the center to a target position at one of the corners of the imaginary cube, with online weight updates
during the movements. In monkey experiments, perturbation of decoding PDs lead to retuning of
PDs with the above described credit assignment effect [1]. In order to obtain biologically plausible
values for the noise distribution in our neuron model, the noise in our model was fitted to data
from experiments (see [23]). Analysis of the neuronal responses in the experiments showed that the
variance of the response for a given desired direction scaled roughly linearly with the mean firing
rate of that neuron for this direction. We obtained this behavior with our neuron model with noise
that is a mixture of an activation-independent noise source and a noise source where the variance
scales linearly with the activation of the neuron. In particular, the noise term $\xi_i(t)$ of neuron $i$ was drawn from the uniform distribution in $[-\nu_i(x(t)), \nu_i(x(t))]$ with an exploration level $\nu_i$ given by
$$\nu_i(x(t)) = 10 + 2.8 \cdot \sqrt{\sum_{j=1}^{m} w_{ij}\, x_j(t)}.$$
The constants were chosen to fit neuron behavior in the data. We note that in all simulations with the EH rule, the input activities $x_j(t)$ were scaled in such a
way that the output of the neuron at time t could be interpreted directly as the firing rate of the neuron
4 We used $\bar{a}_i(t) = 0.8\,\bar{a}_i(t-1) + 0.2\,a_i(t)$ and $\bar{R}(t) = 0.8\,\bar{R}(t-1) + 0.2\,R(t)$.
5 We also tested a rule where the activation $a_i$ is replaced by the output $s_i$ and obtained very similar results.
Figure 2: One example simulation of the 50% perturbation experiment with the EH rule and data-derived network parameters. A) Angular match $R_{ang}$ as a function of learning time $t$ [sec]. Every 100th time point is plotted. B) PD shifts drawn on the unit sphere (arbitrary units) for non-rotated (black traces) and rotated (light cyan traces) neurons from their initial values (light) to their values after training (dark; these PDs are connected by the shortest path on the unit sphere). The straight line indicates the rotation axis. C) Same as B, but the view was altered such that the rotation axis is directed towards the reader. The PDs of rotated neurons are consistently rotated in order to compensate for the perturbation.
Figure 3: PD shifts in simulated Perturbation sessions are in good agreement with experimental results (compare to Figure 3A,B in [1]). Shown is the shift in the PDs measured after simulated perturbation sessions relative to initial PDs, plotted as shift along versus shift perpendicular to the perturbation direction (degrees), for all units in 20 simulated experiments where 25% (A) or 50% (B) of the units were rotated. Dots represent individual data points and black circled dots represent the means of rotated (light gray) and non-rotated (dark gray) units.
at time t. With such scaling, we obtained output values of the neurons without the exploratory signal
in the range of 0 to 120Hz with a roughly exponential distribution. Having estimated the variability
of neuronal response, the learning rate $\eta$ remained the last free parameter of the model. To constrain this parameter, $\eta$ was chosen such that the performance in the 25% perturbation task approximately
matched the monkey performance.
We simulated the two types of perturbation experiments reported in [1] in our model network with
40 recorded neurons. In the first set of simulations, a random set of 25% of recorded neurons were
rotated neurons in Perturbation sessions. In the second set of simulations, we chose 50% of the
recorded neurons to be rotated. In each simulation, 320 targets were presented to the model, which
is similar to the number of target presentations in [1]. Results for one example run are shown in
Figure 2. The shifts in PDs of recorded neurons induced by training in 20 independent trials were
compiled and analyzed separately for rotated neurons and non-rotated neurons. The results are
in good agreement with the experimental data, see Figure 3. In the simulated 25% perturbation
experiment, the mean shift of the PD for rotated neurons was 8.2 ± 4.8 degrees, whereas for non-rotated neurons, it was 5.5 ± 1.6 degrees. This relatively small effect is similar to the effect observed in [1], where the PD shift of rotated (non-rotated) units was 9.9 (5.2) degrees. The effect is more pronounced in the 50% perturbation experiment (see below). We also compared the deviation of the movement trajectory from the ideal straight line in rotation direction halfway to the target6 from early trials to the deviation of late trials, where we scaled the results to a cube of 11 cm side length in order to be able to compare the results directly to the results in [1]. In early trials, the trajectory deviation was 9.2 ± 8.8 mm, which was reduced by learning to 2.4 ± 4.9 mm. In the simulated 50% perturbation experiment, the mean shift of the PD for rotated neurons was 18.1 ± 4.2 degrees, whereas for non-rotated neurons, it was 12.1 ± 2.6 degrees (in monkey experiments [1] this was 21.7 and 16.1 degrees respectively). The trajectory deviation was 23.1 ± 7.5 mm in early trials, and 4.8 ± 5.1 mm in late trials. Here, the early deviation was stronger than in the monkey experiment, while the late deviation was smaller.
The EH rule (2) falls into the general class of correlation-based learning rules described in [4].
In these rules the weight change is proportional to the covariance of the reward signal and some
measure of neuronal activity. We performed the same experiment with slightly different correlation-based rules
$$\Delta w_{ij}(t) = \eta\, x_j(t)\, a_i(t)\, \big[R(t) - \bar{R}(t)\big], \qquad (4)$$
$$\Delta w_{ij}(t) = \eta\, x_j(t)\, \big[a_i(t) - \bar{a}_i(t)\big]\, R(t), \qquad (5)$$
(compare to (2)). The performance improvements were similar to those obtained with the EH rule. However, no credit assignment effect was observed with these rules. In the simulated 50% perturbation experiment, the mean shift of the PD of rotated neurons (non-rotated neurons) was 12.8 ± 3.6 (12.0 ± 2.4) degrees for rule (4) and 25.5 ± 4 (26.8 ± 2.8) degrees for rule (5).
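The three updates differ only in which mean is subtracted; a sketch of this ablation (function names are ours):

```python
import numpy as np

def dw_eh(eta, x, a, a_bar, r, r_bar):   # Eq. (2): both means subtracted
    return eta * np.outer((a - a_bar) * (r - r_bar), x)

def dw_rule4(eta, x, a, r, r_bar):       # Eq. (4): raw activation a_i(t)
    return eta * np.outer(a * (r - r_bar), x)

def dw_rule5(eta, x, a, a_bar, r):       # Eq. (5): raw reward R(t)
    return eta * np.outer((a - a_bar) * r, x)
```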
In the monkey experiment, training in the Perturbation session also induced a decrease of the modulation depth of rotated neurons. This resulted in a decreased contribution of these neurons to the cursor movement. We observed a qualitatively similar result in our simulations. In the 25% perturbation simulation, modulation depths decreased on average by 2.7 ± 4.3 Hz for rotated neurons. Modulation depths for non-rotated neurons increased on average by 2.2 ± 3.9 Hz (average over 20 independent simulations). In the 50% perturbation simulation, the changes in modulation depths were −3.6 ± 5.5 Hz for rotated neurons and 5.4 ± 6 Hz for non-rotated neurons.7 Thus, the relative contribution of rotated neurons to the cursor movement decreased.
Comparing the results obtained by our simulations to those of monkey experiments (compare Figure
3 to Figure 3 in [1]), it is interesting that quantitatively similar effects were obtained when noise
level and learning rate were constrained by the experimental data. One should note here that tuning
changes due to learning depend on the noise level. For small exploration levels, PDs changed only
slightly and the difference in PD change between rotated and non-rotated neurons was small, while
for large noise levels, PD change differences can be quite drastic. Also, the learning rate $\eta$ influences
the amount of PD shift differences with higher learning rates leading to stronger credit assignment
effects, see [23] for details.
The performance of the system before and after learning is shown in Figure 4. The neurons in the
network after training are subject to the same amount of noise as the neurons in the network before training, but the angular match after training shows much less fluctuation than before training.
Hence, the network automatically suppresses jitter on the trajectory in the presence of high exploration levels $\nu$. We quantified this observation by computing the standard deviation of the angle
between the cursor velocity vector and the desired movement direction for 100 randomly drawn
noise samples.8 The mean standard deviation for 50 randomly drawn target directions was always
decreased by learning. In the mean over the 20 simulations, the mean STD over 50 target directions
was 7.9 degrees before learning and 6.3 degrees after learning. Hence, the network not only adapted
its response to the input, it also found a way to optimize its sensitivity to the exploratory signal.
6 These deviations were computed as described in [1].
7 When comparing these results to experimental results, one has to take into account that the modulation depths in monkey experiments were around 10 Hz, whereas in the simulations they were around 25 Hz.
8 This effect is not caused by a larger norm of the weight vectors. The comparison was done with weight vectors after training normalized to their L2 norm before training.
Figure 4: Network performance before and after learning for 50% perturbation. Angular match $R_{ang}(t)$ of the cursor movements in one reaching trial before (gray) and after (black) learning, as a function of the time since the target was first made visible (t [sec]). The black curve ends prematurely because the target was reached faster. After learning, temporal jitter of the performance was reduced, indicating reduced sensitivity to noise.
5 Discussion
Jarosiewicz et al. [1] discussed three strategies that could potentially be used by the monkey to compensate for the errors caused by perturbations: re-aiming, re-weighting, and re-mapping. Using the
re-aiming strategy, the monkey compensates for perturbations by aiming for a virtual target located
in the direction that offsets the visuomotor rotation. The authors identified a global change in the
activity level of all neurons. This indicates a re-aiming strategy of the monkey. Re-weighting would
suppress the use of rotated units, leading to a reduction of their modulation depths. A reduction of
modulation depths of rotated neurons was also identified in the experimentals. A re-mapping strategy would selectively change the directional tunings of rotated units. Rotated neurons shifted their
PDs more than the non-rotated population in the experiments. Hence, the authors found elements of
all three strategies in their data. These three elements of neuronal adaptation were also identified in
our model: a global change in activity of neurons (all neurons changed their tuning properties; reaiming), a reduction of modulation depths for rotated neurons (re-weighting), and a selective change
of the directional tunings of rotated units (re-mapping). This modeling study therefore suggests that
all three elements can be explained by a single synaptic adaptation strategy that relies on noisy neuronal activity and visual feedback that is made accessible to all synapses in the network by a global
reward signal. It is noteworthy that the credit assignment phenomenon is an emergent feature of the
learning rule rather than implemented in some direct way. Intuitively, this behavior can be explained
in the following way. The output of non-rotated neurons is consistent with the interpretation of the
readout system. So if this output is strongly altered, performance will likely drop. On the other hand,
if the output of a rotated neuron is radically different, this will often improve performance. Hence,
the relatively high noise levels measured in experiments are probably important for the credit assignment phenomenon. Under such realistic noise conditions, our model produced effects surprisingly
similar to those found in the monkey experiments. Thus, this study shows that reward-modulated
learning can explain detailed experimental results about neuronal adaptation in motor cortex and
therefore suggests that reward-modulated learning is an essential plasticity mechanism in cortex.
The results of this modeling paper also support the hypotheses introduced in [24]. The authors presented data which suggests that neural representations change randomly (background changes) even
without obvious learning, while systematic task-correlated representational changes occur within a
learning task.
Reward-modulated Hebbian learning rules are currently the most promising candidate for a learning
mechanism that can support goal-directed behavior by local synaptic changes in combination with
a global performance signal. The EH rule (2) is one particularly simple instance of such rules that
exploits temporal continuity of inputs and an exploration signal - a signal which would show up as
'noise' in neuronal recordings. We showed that large exploration levels are beneficial for learning
while they do not interfere with the performance of the system because of pooling effects of readout
elements. This study therefore provides a hypothesis about the role of 'noise' or ongoing activity in
cortical circuits as a source for exploration utilized by local learning rules.
Acknowledgments
This work was supported by the Austrian Science Fund FWF [S9102-N13, to R.L. and W.M.]; the
European Union [FP6-015879 (FACETS), FP7-216593 (SECO), FP7-506778 (PASCAL2), FP7231267 (ORGANIC) to R.L. and W.M.]; and by the National Institutes of Health [R01-NS050256,
EB005847, to A.B.S.].
References
[1] B. Jarosiewicz, S. M. Chase, G. W. Fraser, M. Velliste, R. E. Kass, and A. B. Schwartz. Functional network reorganization during learning in a brain-computer interface paradigm. Proc. Nat. Acad. Sci. USA, 105(49):19486-91, 2008.
[2] A. P. Georgopoulos, R. E. Kettner, and A. B. Schwartz. Primate motor cortex and free arm movements to visual targets in three-dimensional space. II. Coding of the direction of movement by a neuronal population. J. Neurosci., 8:2928-2937, 1988.
[3] A. B. Schwartz. Useful signals from motor cortex. J. Physiology, 579:581-601, 2007.
[4] Y. Loewenstein and H. S. Seung. Operant matching is a generic outcome of synaptic plasticity based on the covariance between reward and neural activity. Proc. Nat. Acad. Sci. USA, 103(41):15224-15229, 2006.
[5] A. G. Barto, R. S. Sutton, and C. W. Anderson. Neuronlike adaptive elements that can solve difficult learning control problems. IEEE Trans. Syst. Man Cybern., SMC-13(5):834-846, 1983.
[6] P. Mazzoni, R. A. Andersen, and M. I. Jordan. A more biologically plausible learning rule for neural networks. Proc. Nat. Acad. Sci. USA, 88(10):4433-4437, 1991.
[7] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.
[8] J. Baxter and P. L. Bartlett. Direct gradient-based reinforcement learning: I. Gradient estimation algorithms. Technical report, Research School of Information Sciences and Engineering, Australian National University, 1999.
[9] X. Xie and H. S. Seung. Learning in neural networks by reinforcement of irregular spiking. Phys. Rev. E, 69(041909), 2004.
[10] I. R. Fiete and H. S. Seung. Gradient learning in spiking neural networks by dynamic perturbation of conductances. Phys. Rev. Lett., 97(4):048104-1 to 048104-4, 2006.
[11] J.-P. Pfister, T. Toyoizumi, D. Barber, and W. Gerstner. Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18(6):1318-1348, 2006.
[12] E. M. Izhikevich. Solving the distal reward problem through linkage of STDP and dopamine signaling. Cerebral Cortex, 17:2443-2452, 2007.
[13] D. Baras and R. Meir. Reinforcement learning, spike-time-dependent plasticity, and the BCM rule. Neural Computation, 19(8):2245-2279, 2007.
[14] R. V. Florian. Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Computation, 19(6):1468-1502, 2007.
[15] M. A. Farries and A. L. Fairhall. Reinforcement learning with modulated spike timing-dependent synaptic plasticity. J. Neurophys., 98:3648-3665, 2007.
[16] R. Legenstein, D. Pecevski, and W. Maass. A learning theory for reward-modulated spike-timing-dependent plasticity with application to biofeedback. PLoS Computational Biology, 4(10):1-27, 2008.
[17] C. H. Bailey, M. Giustetto, Y.-Y. Huang, R. D. Hawkins, and E. R. Kandel. Is heterosynaptic modulation essential for stabilizing Hebbian plasticity and memory? Nat. Rev. Neurosci., 1:11-20, 2000.
[18] Q. Gu. Neuromodulatory transmitter systems in the cortex and their role in cortical plasticity. Neuroscience, 111(4):815-835, 2002.
[19] S. J. Sober, M. J. Wohlgemuth, and M. S. Brainard. Central contributions to acoustic variation in birdsong. J. Neurosci., 28(41):10370-9, 2008.
[20] E. C. Tumer and M. S. Brainard. Performance variability enables adaptive plasticity of 'crystallized' adult birdsong. Nature, 450(7173):1240-1244, 2007.
[21] A. P. Georgopoulos, A. B. Schwartz, and R. E. Kettner. Neuronal population coding of movement direction. Science, 233:1416-1419, 1986.
[22] J. Baxter and P. L. Bartlett. Infinite-horizon policy-gradient estimation. J. Artif. Intell. Res., 15:319-350, 2001.
[23] R. Legenstein, S. M. Chase, A. B. Schwartz, and W. Maass. A reward-modulated Hebbian learning rule can explain experimentally observed network reorganization in a brain control task. Submitted for publication, 2009.
[24] U. Rokni, A. G. Richardson, E. Bizzi, and H. S. Seung. Motor learning with unstable neural representations. Neuron, 54:653-666, 2007.
2,931 | 3,657 | Dirichlet-Bernoulli Alignment: A Generative Model
for Multi-Class Multi-Label Multi-Instance Corpora
Shuang-Hong Yang
College of Computing
Georgia Tech
[email protected]
Hongyuan Zha
College of Computing
Georgia Tech
[email protected]
Bao-Gang Hu
NLPR & LIAMA
Chinese Academy of Sciences
[email protected]
Abstract
We propose Dirichlet-Bernoulli Alignment (DBA), a generative model for corpora in which each pattern (e.g., a document) contains a set of instances (e.g.,
paragraphs in the document) and belongs to multiple classes. By casting predefined classes as latent Dirichlet variables (i.e., instance level labels), and modeling
the multi-label of each pattern as Bernoulli variables conditioned on the weighted
empirical average of topic assignments, DBA automatically aligns the latent topics discovered from data to human-defined classes. DBA is useful for both pattern
classification and instance disambiguation, which are tested on text classification
and named entity disambiguation in web search queries respectively.
1 Introduction
We consider multi-class, multi-label and multi-instance classification (M3C), a task of learning decision rules from corpora in which each pattern consists of multiple instances1 and is associated with multiple classes. M3C finds its application in many fields: for example, in web page classification, a web page (pattern) typically comprises different entities (instances) (e.g., texts, pictures
and videos) and is usually associated with several different topics (e.g., finance, sports and politics). In such tasks, a pattern usually consists of a set of instances, and the possible instances may
be too diverse in nature (e.g., of different structures or types, described by different features) to be
represented in a universal space. What makes the problem more complicated and challenging is
that the pattern is usually ambiguous, i.e., it can belong to several different classes simultaneously.
Traditional classification algorithms are typically incapable of handling such complications.
Even for corpora consisting of relatively homogeneous data, treating the tasks as M3C might still
be advantageous since it enables us to explore the inner structures and the ambiguity of the data
simultaneously. For example, in text classification, a document usually comprises several separate
semantic parts (e.g., paragraphs), and several different topics are evolving along these parts. Since
the class-labels are often only locally tied to the document (e.g., paragraphs are often far more topic-focused than the whole document), basing the classification on the whole document would incur too much noise and in turn harm the performance. In addition, treating the task as M3C also offers a
natural way to track the topic evolution along paragraphs, a task that is otherwise difficult to handle.
M3C also arises naturally when the acquisition of labeled data is expensive. For example, in scene
classification, a picture usually contains several objects (e.g., cat, desk, man) belonging to several
different classes (e.g., animal, furniture, human). Ideal annotation requires a skilled expert to specify
both the exact location and class label of each object in the image, which, though not completely
impossible, involves too much human effort, especially for large image repositories. The annotation
burden would be greatly relieved if each image is labeled as a whole (e.g., a caption indicating what
is in the image), which, however, requires the learning system to be capable of handling M3C tasks.
1 A 'pattern' or 'example' is a typical sample in a data collection and an 'instance' is a part of a 'pattern'.
Recently, the Latent Dirichlet Allocation (LDA, [4]) model has been established for automatic extraction of topical structures from large repositories of documents. LDA is a highly-modularized
probabilistic model with various variations and extensions (e.g., [2, 3]). By modeling a document
as a mixture over topics, LDA allows each document to be associated with multiple topics with
different proportions, and thus provides a promising way to capture the heterogeneity/ambiguity in
the data. However, the topics discovered by LDA are implicit (i.e., each topic is expressed as a distribution over words, comprehensible interpretation of which requires human expertise), and cannot
be easily aligned to the topics of human interests. In addition, the standard LDA does not model the
multi-instance structure of a pattern. Hence, LDA and its like cannot be directly applied to M3C.
In this paper, by taking advantage of the LDA building blocks, we present a new probabilistic generative model for multi-class, multi-label and multi-instance corpora, referred to as Dirichlet-Bernoulli
Alignment (DBA). DBA assumes a tree structure for the data, i.e., each multi-labeled pattern is a
bag of single-labeled instances. In DBA, each pattern is modeled as a mixture over the set of predefined classes, an instance is then generated independently conditioned on a sampled class-label,
and the label of a pattern is generated from a Bernoulli distribution conditioned on all the sampled
labels used for generating its instances. DBA is essentially a topic model similar to LDA except that
(1) an instance rather than a single feature is generated conditioned on each sampled topic; and (2)
instead of using implicit topics for dimensionality reduction as in LDA, DBA casts each class as an
explicit topic to gain discriminative power from the data. Through likelihood maximization, DBA
automatically aligns the topics discovered from the data to the predefined classes of our interests.
DBA can be naturally tailored to M3C tasks for both pattern classification and instance disambiguation. In this paper, we apply the DBA model to text classification tasks and an interesting real-world
problem, i.e., named entity disambiguation for web search queries. The experiments confirm the
usefulness of the proposed DBA model.
The rest of this paper is organized as follows. Section 2 briefly reviews some related topics and Section 3 presents the formal description of the corpora used in M3C and the basic assumptions
of our model. Section 4 introduces the detailed DBA model. In Section 5, we establish algorithms
for inference and parameter estimation for DBA. And in Section 6, we apply the DBA model to text
classification and query disambiguation tasks. Finally, Section 7 presents concluding remarks.
2 Related Works
Traditional classification largely focuses on a single-label single-instance framework (i.e., i.i.d. patterns associated with exclusive/disjoint classes). However, the real world is more like a web of (sub-)patterns connected with a web of classes that they belong to. Clearly, M3C reflects more of
the reality. Recently, two partial solutions, i.e., multi-instance classification (MIC) [7, 11, 1] and
multi-label classification (MLC) [10, 8, 5] were investigated. MIC assumes that each pattern consists of multiple instances but belongs to a single class, whereas MLC studies single-instance pattern
associated with multiple classes. Although both MLC and MIC have drawn increasing attention in the literature, neither of them can handle the cases where multi-instance and multi-label are simultaneously present. Perhaps the first work investigating M3C is [13], in which the authors proposed an indirect solution, i.e., to convert an M3C task into several MIC or MLC sub-tasks, each of which is then divided into single-label and single-instance classification problems and solved by discriminative algorithms such as AdaBoost or SVM. A practical challenge of this approach is its complexity, i.e., the number of sub-tasks can be huge, making the training data extremely sparse for each sub-classifier and the computation cost unacceptably high in both training and testing. Recently, Cour et al. proposed a discriminative framework [6] based on convex surrogate loss minimization for classifying ambiguously labeled images; and Xu et al. established a hybrid generative/discriminative approach (i.e., a heuristically regularized LDA classifier) [12] to mine named entities from web
search click-through data. In this paper, we present a generative approach for M3C.
Our proposed DBA model can be viewed as a supervised version of topic models. A widely used
topic model for categorical data is the LDA model [4]. By modeling a pattern as a random mixture
over latent topics and a topic as a Multinomial distribution over features in a dictionary, LDA is
effective in discovering implicit topics from a corpus. The supervised LDA (sLDA) model [2], by
linking the empirical topics to the label of each pattern, is able to learn classifiers using Generalized
Linear Models. However, both LDA and sLDA are in essence dimensionality reduction techniques,
and cannot be employed directly for M3C tasks.
Figure 1: (a) Tree structure of a multi-class multi-label multi-instance corpus (pattern X, classes c, instances x, features f). (b) A graphical representation of the DBA model with the multinomial bag-of-feature instance model (variables a, θ, z, B, y; plates of sizes L, M, N).
3 Problem Formalization
Intuitively, we can think of a pattern as a document, an instance as a paragraph, and a feature as a
word. In M3C, we are interested in inferring class labels for both the document and its paragraphs.
Formally, let $\mathcal{X} \subseteq \mathbb{R}^D$ denote the instance space (e.g., a vector space), $\mathcal{Y} = \{1, 2, \ldots, C\}$ ($C > 2$) the set of class labels, and $\mathcal{F} = \{f_1, f_2, \ldots, f_D\}$ the dictionary of features. A multi-class, multi-label multi-instance corpus $\mathcal{D}$ consists of a set of input patterns $\{X_n\}_{n=1,2,\ldots,N}$ along with the corresponding labels $\{Y_n\}_{n=1,2,\ldots,N}$, where each pattern $X_n = \{x_{mn}\}_{m=1,2,\ldots,M_n}$ contains a set of instances $x_{mn} \in \mathcal{X}$, and $Y_n \subseteq \mathcal{Y}$ consists of a set of class labels. The goal of M3C is to find a decision rule $Y = h(X): 2^{\mathcal{X}} \to 2^{\mathcal{Y}}$, where $2^A$ denotes the power set of a set $A$. For simplicity, we make the following assumptions.
Assumption 1 [Exchangeability]: A corpus is a bag of patterns, and each pattern is a bag of instances.
Assumption 2 [Distinguishability]: Each pattern can belong to several classes, but each instance
belongs to a single class.
These assumptions are equivalent to assuming a tree structure for the corpus (Figure 1(a)).
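These assumptions translate directly into a tree-shaped data layout; a minimal sketch in Python (type names are ours, not from the paper):

```python
from dataclasses import dataclass
from typing import Dict, List, Set

@dataclass
class Pattern:
    """One pattern X_n: an exchangeable bag of single-class instances."""
    instances: List[Dict[int, int]]  # each instance: feature counts {d: x_d}
    labels: Set[int]                 # Y_n, a subset of {1, ..., C}

Corpus = List[Pattern]               # D: an exchangeable bag of patterns
```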
4 Dirichlet-Bernoulli Alignment
In this section, we present Dirichlet-Bernoulli Alignment (DBA), a probabilistic generative model
for the multi-class, multi-label and multi-instance corpus described in Section 3. In DBA, each
pattern X in a corpus D is assumed to be generated by the following process:
1. Sample $\theta \sim \mathrm{Dir}(a)$.
2. For each of the $M$ instances in $X$:
   - Choose a class $z \sim \mathrm{Mult}(\theta)$;
   - Generate an instance $x \sim p(x|z, B)$;
3. Generate the label $y \sim p(y|z_{1:M}, \eta)$.
We assume the total number of predefined classes, $C$, is known and fixed. In DBA, $a = [a_1, \ldots, a_C]^T$ with $a_c > 0$, $c = 1, \ldots, C$, is a $C$-vector prior parameter for a Dirichlet distribution $\mathrm{Dir}(a)$, which is defined on the $(C-1)$-simplex: $\theta_c > 0$, $\sum_{c=1}^{C} \theta_c = 1$. $z$ is a class indicator, i.e., a binary $C$-vector with the 1-of-$C$ code: $z_c = 1$ if the $c$-th class is chosen, and $z_i = 0$ for all $i \neq c$. $y = [y_1, \ldots, y_C]^T$ is also a binary $C$-vector with $y_c = 1$ if the pattern $X$ belongs to the $c$-th class and $y_c = 0$ otherwise.
In this paper, we assume the label of a pattern is generated by a cost-sensitive voting process according to the labels of the instances in it, which is intuitively reasonable. As a result, $y_c$ ($c = 1, \ldots, C$) is generated from a Bernoulli distribution, i.e., $p(y_c \mid \mu_c) = (\mu_c)^{y_c} (1-\mu_c)^{(1-y_c)}$, where $\mu$ is a probability vector based on a weighted empirical average $\bar{z} = [\bar{z}_1, \ldots, \bar{z}_C]^\top$ of the Dirichlet realizations, $\bar{z}_c = \frac{1}{M}\sum_{m=1}^{M} z_{mc}$ being the average of $z_1, \ldots, z_M$. For example, $\mu$ can follow a Dirichlet distribution $\mu \sim \mathrm{Dir}(\eta_1 \bar{z}_1, \ldots, \eta_C \bar{z}_C)$. In this paper, we use a logistic model:
$$p(y_c = 1 \mid \bar{z}, \eta) = \frac{\exp(\eta_c \bar{z}_c)}{1 + \exp(\eta_c \bar{z}_c)}. \qquad (1)$$
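To make the generative story concrete, the following is a minimal sketch of the process in NumPy. The symbols (a, B, eta, M, L) follow the paper's notation; all concrete sizes and parameter values below are illustrative assumptions, not settings from the paper.

import numpy as np

rng = np.random.default_rng(0)
C, D, M, L = 3, 5, 4, 20                     # classes, dictionary size, instances, features per instance
a = np.ones(C)                               # Dirichlet prior parameter
B = rng.dirichlet(np.ones(D), size=C)        # B[c] = multinomial over features for class c
eta = rng.normal(size=C)                     # logistic weights

theta = rng.dirichlet(a)                     # 1. theta ~ Dir(a)
z = rng.choice(C, size=M, p=theta)           # 2a. z_m ~ Mult(theta)
X = np.stack([rng.multinomial(L, B[zm]) for zm in z])  # 2b. x_m ~ p(x | z_m, B)

z_bar = np.bincount(z, minlength=C) / M      # empirical class frequencies
p_y = 1.0 / (1.0 + np.exp(-eta * z_bar))     # Eq. (1)
y = (rng.uniform(size=C) < p_y).astype(int)  # 3. y_c ~ Bernoulli(p_y[c])
print(theta, z, y)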
In practice, the set of possible instances can be quite diverse, such as pictures, texts, music and videos on a web page. Without loss of generality, we follow the convention of topic models and assume that each instance $x$ is a bag of discrete features $\{f_1, f_2, \ldots, f_L\}$, modeled with a multinomial distribution²:
$$p(x \mid z, B) = p(\{f_1, \ldots, f_L\} \mid z, B) \propto b_{c1}^{x_1} b_{c2}^{x_2} \cdots b_{cD}^{x_D} \big|_{z_c = 1},$$
where $L$ is the total number of feature occurrences in $x$ (e.g., the length of a paragraph), $B = [b_1, \ldots, b_D]$ is a $C \times D$ matrix with the $(c,d)$-th entry $b_{cd} = p(f_d = 1 \mid z_c = 1)$, and $x_d$ is the frequency of $f_d$ in $x$. The joint probability is then given by:
$$p(X, y, Z, \theta \mid \mathbf{a}, B, \eta) = p(\theta \mid \mathbf{a}) \prod_{m=1}^{M} p(z_m \mid \theta) \left( \prod_{l=1}^{L} p(f_{ml} \mid B, z_m) \right) p(y \mid \bar{z}, \eta). \qquad (2)$$
The graphical model for DBA is depicted in Figure 1(b). We can see that DBA has a diagram very similar to that of sLDA (Figure 1 in [2]). The key differences are: (1) instead of using implicit topics for dimensionality reduction as in sLDA, DBA casts the predefined classes as explicit topics to discover the discriminative properties of the data; (2) a bag-of-feature instance, rather than a single feature, is generated conditioned on each sampled topic (class); (3) DBA models a multi-class, multi-label, multi-instance corpus and can be applied directly to M3C, i.e., the classification of each pattern as well as the instances within it.
5 Parameter Estimation and Inference
Both parameter estimation and inference in DBA involve intractable computation of marginal probabilities. We use variational methods to approximate those distributions.
5.1 Variational Approximations
We use the following fully factorized variational distribution to approximate the posterior distribution of the latent variables:
$$q(Z, \theta \mid \gamma, \phi) = q(\theta \mid \gamma) \prod_{m=1}^{M} q(z_m \mid \phi_m) = \frac{\Gamma(\sum_{c=1}^{C} \gamma_c)}{\prod_{c=1}^{C} \Gamma(\gamma_c)} \prod_{c=1}^{C} \theta_c^{\gamma_c - 1} \prod_{m=1}^{M} \prod_{c=1}^{C} \phi_{mc}^{z_{mc}}, \qquad (3)$$
where $\gamma$ and $\phi = [\phi_1, \ldots, \phi_M]$ are variational parameters for a pattern $X$. We have:
$$\log p(X, y \mid \mathbf{a}, B, \eta) = \log \int_\theta \sum_Z p(X, y, Z, \theta \mid \mathbf{a}, B, \eta) \, d\theta \qquad (4)$$
$$= \mathcal{L}(\gamma, \phi) + \mathrm{KL}\big(q(Z, \theta \mid \gamma, \phi) \,\|\, p(Z, \theta \mid \mathbf{a}, B, \eta)\big) \geq \max_{\gamma, \phi} \mathcal{L}(\gamma, \phi),$$
where $\mathrm{KL}(q(x) \| p(x)) = \int_x q(x) \log \frac{q(x)}{p(x)} \, dx$ is the Kullback-Leibler (KL) divergence between two distributions $p$ and $q$, and $\mathcal{L}(\cdot)$ is the variational lower bound for the log-likelihood:
$$\mathcal{L}(\gamma, \phi) = \int_\theta \sum_Z q(Z, \theta \mid \gamma, \phi) \log \frac{p(X, y, Z, \theta \mid \mathbf{a}, B, \eta)}{q(Z, \theta \mid \gamma, \phi)} \, d\theta = \mathbb{E}_q[\log p(\theta \mid \mathbf{a})] + \sum_{m=1}^{M} \mathbb{E}_q[\log p(z_m \mid \theta)] + \sum_{m=1}^{M} \mathbb{E}_q[\log p(x_m \mid B, z_m)] + \mathbb{E}_q[\log p(y \mid \bar{z}, \eta)] + H_q. \qquad (5)$$
²This is only a simple special case of an instance model for DBA. It is straightforward to substitute other instance models such as Gaussian, Poisson, or more complicated models like Gaussian mixtures.
The first two terms and the fifth term (the entropy of the variational distribution) on the right-hand side of Eq. (5) are identical to the corresponding terms in sLDA [2]. The third term, i.e., the variational expectation of the log-likelihood of the instance observations, is:
$$\sum_{m=1}^{M} \mathbb{E}_q[\log p(x_m \mid B, z_m)] = \sum_{m=1}^{M} \sum_{c=1}^{C} \sum_{d=1}^{D} \phi_{mc} \, x_{md} \log b_{cd}. \qquad (6)$$
The fourth term on the right-hand side of Eq. (5) corresponds to the expected log-likelihood of observing the labels given the topic assignments:
$$\mathbb{E}_q[\log p(y \mid \bar{z}, \eta)] = \frac{1}{M} \sum_{m=1}^{M} \sum_{c=1}^{C} \left( y_c - \frac{1}{2} \right) \eta_c \phi_{mc} - \sum_{c=1}^{C} \mathbb{E}_q\left[ \log\left( \exp \frac{\eta_c \bar{z}_c}{2} + \exp \frac{-\eta_c \bar{z}_c}{2} \right) \right]. \qquad (7)$$
We bound the second term above by using the lower bound for the logistic function [9]:
$$-\log\left( \exp \frac{\eta_c \bar{z}_c}{2} + \exp \frac{-\eta_c \bar{z}_c}{2} \right) \geq -\log(1 + \exp(-\xi_c)) - \frac{\xi_c}{2} + \lambda_c (\eta_c^2 \bar{z}_c^2 - \xi_c^2)$$
$$\approx -\log(1 + \exp(-\xi_c)) - \frac{\xi_c}{2} + 2\lambda_c (\eta_c \bar{z}_c \xi_c - \xi_c^2), \qquad (8)$$
where $\xi = [\xi_1, \ldots, \xi_C]^\top$ are variational parameters, $\lambda_c = \frac{1}{4\xi_c} \tanh(\frac{\xi_c}{2})$, and the second-order residue term is omitted since the lower bound is exact when $\xi_c = \eta_c \bar{z}_c$.
Obtaining an approximate posterior distribution for the latent variables then reduces to optimizing the objective $\max \mathcal{L}(q)$ or $\min \mathrm{KL}(q \| p)$ with respect to the variational parameters. Using Lagrange multipliers, we can easily derive the optimality conditions, which are attained by iteratively updating the variational parameters according to the following formulas:
$$\phi_{mc} \propto \left( \prod_{d=1}^{D} (b_{cd})^{x_{md}} \right) \exp\left( \Psi(\gamma_c) + \frac{\eta_c}{2M} \left[ 2y_c - 1 + \tanh\left(\frac{\xi_c}{2}\right) \right] \right),$$
$$\gamma_c = a_c + \sum_{m=1}^{M} \phi_{mc}, \qquad \xi_c = \eta_c \, \frac{1}{M} \sum_{m=1}^{M} \phi_{mc}, \qquad (9)$$
where $\Psi(\cdot)$ is the digamma function. Note that instead of only one feature contributing to $\phi_{mc}$ as in LDA, all the features appearing in an instance now contribute. This property tends to make DBA more robust to data sparsity. DBA also makes use of the supervision information through the term $\sum_{c=1}^{C} \eta_c \bar{z}_c (2y_c - 1)$ in the variational likelihood bound $\mathcal{L}$. As $\mathcal{L}$ is optimized, this term is equivalent to maximizing the likelihood of sampling the classes to which the pattern belongs ($\max \eta_c \sum_{m=1}^{M} z_{mc}$ if $y_c = 1$) and simultaneously minimizing the likelihood of sampling the classes to which the pattern does not belong ($\min \eta_c \sum_{m=1}^{M} z_{mc}$ if $y_c = 0$). Here $\eta_c$ ($-\eta_c$) acts like a utility (cost) of assigning $X$ to the $c$-th class. As a result, the optimization tends to align the Dirichlet topics discovered from the data to the class labels (Bernoulli observations) $y$. This is why we coin the name Dirichlet-Bernoulli Alignment.
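For concreteness, here is a compact sketch of the fixed-point updates in Eq. (9) for a single pattern, assuming the bag-of-feature counts x (an M x D matrix) and a binary label vector y. The function name, the `supervised` flag (which anticipates the inference section, where the supervision term is dropped for unseen patterns), and the iteration count are our own conveniences, not part of the paper.

import numpy as np
from scipy.special import digamma

def variational_e_step(x, y, a, B, eta, n_iter=50, supervised=True):
    """x: (M, D) count matrix; y: binary C-vector (ignored if supervised=False)."""
    M, _ = x.shape
    C = len(a)
    log_B = np.log(B + 1e-12)
    gamma = a + float(M) / C                 # initialization
    xi = np.ones(C)
    phi = np.full((M, C), 1.0 / C)
    for _ in range(n_iter):
        s = x @ log_B.T + digamma(gamma)     # Eq. (9), phi update, in the log domain
        if supervised:
            s = s + eta / (2.0 * M) * (2.0 * np.asarray(y) - 1.0 + np.tanh(xi / 2.0))
        s = s - s.max(axis=1, keepdims=True) # guard against overflow before exp
        phi = np.exp(s)
        phi /= phi.sum(axis=1, keepdims=True)
        gamma = a + phi.sum(axis=0)          # Eq. (9), gamma update
        xi = eta * phi.mean(axis=0)          # Eq. (9), xi update
    return phi, gamma, xi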
5.2 Parameter Estimation
The maximum likelihood parameter estimation of DBA relies on the variational approximation procedure. Given a corpus $\mathcal{D} = \{(X_n, y_n)\}_{n=1,\ldots,N}$, the MLE can be formulated as:
$$\mathbf{a}^*, B^*, \eta^* = \arg\max_{\mathbf{a}, B, \eta} \log p(\mathcal{D} \mid \mathbf{a}, B, \eta) = \arg\max_{\mathbf{a}, B, \eta} \sum_{n=1}^{N} \max_{\gamma_n, \phi_n} \mathcal{L}(\gamma_n, \phi_n \mid \mathbf{a}, B, \eta). \qquad (10)$$
Table 1: Characteristics of the data sets.

| Data Set | #Train | #Test | D    | C   | |Y|avg | #(|Y| > 1)  | Mavg | Mmin | Mmax |
| Text     | 1200   | 679   | 500  | 10  | 1.4    | 721 (38.4%) | 8.2  | 1    | 36   |
| Query    | 300    | 100   | 2000 | 101 | 1.4    | 99 (24.8%)  | 65   | 3    | 731  |

Figure 2: Accuracies (%) of DBA, MNB, MIMLSVM, and MIMLBoost for text classification on the ten Reuters classes (acq, corn, crude, earn, grain, interest, money, ship, trade, wheat) and overall.
The two-layer optimization in Eq. (10) involves two groups of parameters, corresponding to the DBA model and its variational approximation, respectively. Optimizing alternately between these two groups leads to a Variational Expectation Maximization (VEM) algorithm similar to the one used in LDA, where the E-step corresponds to the variational approximation for each pattern in the corpus, and the M-step in turn maximizes the objective in Eq. (6) with respect to the model parameters. These two steps are repeated alternately until convergence.
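The following schematic VEM loop builds on the `variational_e_step` sketch from Section 5.1. Only the closed-form M-step for B (normalized expected feature counts per class) is shown; the updates for a and eta, which require numerical optimization of the bound, are elided, and all names are ours.

import numpy as np

def vem(patterns, labels, a, B, eta, n_epochs=20):
    """patterns: list of (M_n x D) count matrices; labels: list of binary C-vectors."""
    for _ in range(n_epochs):
        expected_counts = np.zeros_like(B)
        for x, y in zip(patterns, labels):   # E-step: per-pattern variational fit
            phi, gamma, xi = variational_e_step(x, y, a, B, eta)
            expected_counts += phi.T @ x     # accumulate E[z_mc] * x_md
        B = expected_counts / expected_counts.sum(axis=1, keepdims=True)  # M-step for B
    return B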
5.3 Inference
DBA involves three types of inferential tasks. The first task is to infer the latent variables for a given pattern, which is straightforward after the variational approximation. The second task, pattern classification, addresses the prediction of labels for a new pattern $X$: $p(y_c = 1 \mid X; \mathbf{a}, B, \eta) \approx \exp(\eta_c \bar{\phi}_c)/(1 + \exp(\eta_c \bar{\phi}_c))$, where $\bar{\phi}_c = \frac{1}{M} \sum_{m=1}^{M} \phi_{mc}$ and the term $\frac{\eta_c}{2M}[2y_c - 1 + \tanh(\frac{\xi_c}{2})]$ is removed when updating $\phi$ in Eq. (9). The third task, instance disambiguation, finds labels for each instance within a pattern: $p(z_m \mid X, y) = \int_\theta p(z_m, \theta \mid X, y) \, d\theta \approx q(z_m \mid \phi_m)$, that is, $p(z_{mc} = 1 \mid X, y) = \phi_{mc}$.
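The two prediction tasks then reduce to thin wrappers around the E-step sketch above; `supervised=False` mirrors the paper's removal of the supervision term when updating phi for an unlabeled pattern. The helper names are ours.

import numpy as np

def classify_pattern(x, a, B, eta):
    # New pattern: run the E-step with the supervision term removed, then apply Eq. (1)
    phi, _, _ = variational_e_step(x, None, a, B, eta, supervised=False)
    phi_bar = phi.mean(axis=0)
    return 1.0 / (1.0 + np.exp(-eta * phi_bar))   # approx. p(y_c = 1 | X)

def disambiguate_instances(x, y, a, B, eta):
    # Labeled pattern: p(z_mc = 1 | X, y) is read off the variational posterior
    phi, _, _ = variational_e_step(x, y, a, B, eta)
    return phi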
6 Experiments
In this section, we conduct extensive experiments to test the DBA model as applied to pattern classification and instance disambiguation, respectively. We first apply DBA to text classification and compare its performance with state-of-the-art M3C algorithms. Then the instance disambiguation performance of DBA is tested on a novel real-world task, namely named entity disambiguation for web search queries. Table 1 summarizes the data sets used in our experiments.
6.1 Text Classification
This experiment is conducted on the ModApte split of the Reuters-21578 text collection, which contains 10788 documents belonging to the 10 most popular classes. We use the top 500 words with the highest document frequency as features, and represent each document as a pattern with each of its paragraphs being an instance, in order to exploit the semantic structure of documents explicitly. After eliminating the documents that have an empty label set or fewer than 20 features, we obtain a subset of 1879 documents, among which 721 documents (about 38.4%) have multiple labels. The average number of labels per document is 1.4±0.6 and the average number of instances (paragraphs) per pattern (document) is 8.2±4.8. The data set is further randomly partitioned into a subset of 1200 documents for training and the rest for testing.
For comparison, we also test two state-of-the-art M3C algorithms, MIMLSVM and MIMLBoost [13], and use the Multinomial Naïve Bayes (MNB) classifier trained on the vector space model of the whole documents as the baseline. For a fair comparison, a linear kernel is used in both MIMLSVM and MIMLBoost, and all the hyper-parameters are tuned by 5-fold cross-validation prior to training. We use the Hamming accuracy [13] to evaluate the results. For DBA and MNB, the label is estimated by $y = \mathbb{I}(p(y = 1 \mid X) > t)$, where the cut-off probability threshold $t$ is also selected based on 5-fold cross-validation. Each experiment is repeated for 5 random runs and the average results are reported by a bar chart as depicted in Figure 2. We can see that: (1) for most classes, the three
Table 2: Accuracy@N (N = 1, 2, 3) and micro-averaged and macro-averaged F-measures of DBA, MNB and SVM based disambiguation methods. Gain denotes DBA's relative improvement over each baseline.

| Method     | A@1    | Gain  | A@2    | Gain  | A@3    | Gain  | Fmicro | Gain  | Fmacro | Gain  |
| MNB-TF     | 0.4154 | 30.4% | 0.4913 | 25.7% | 0.5168 | 25.4% | 0.4154 | 30.4% | 0.3144 | 47.0% |
| MNB-TF-IDF | 0.4177 | 29.6% | 0.4918 | 25.6% | 0.5176 | 25.2% | 0.4177 | 29.6% | 0.2988 | 54.7% |
| SVM-TF     | 0.4927 | 9.9%  | NA     | -     | NA     | -     | 0.4927 | 9.9%  | 0.3720 | 24.2% |
| SVM-TF-IDF | 0.4912 | 10.2% | NA     | -     | NA     | -     | 0.4912 | 10.2% | 0.3670 | 25.0% |
| DBA        | 0.5415 | -     | 0.6175 | -     | 0.6482 | -     | 0.5415 | -     | 0.4622 | -     |

Figure 3: Precision and Recall scores for each of 101 classes by using DBA, MNB and SVM based methods.
M3C algorithms outperform the MNB baseline; (2) the performance of DBA is at least comparable with MIMLBoost and MIMLSVM. For most classes, and overall, DBA performs the best, whereas for some classes, MIMLBoost and MIMLSVM perform even slightly worse than MNB. A possible reason is that if the documents are very short, splitting them may introduce severe data sparseness and in turn harm the performance. We also observe that DBA is much more efficient than MIMLBoost and MIMLSVM: for training, DBA takes 42 minutes on average, in contrast to 557 minutes (MIMLSVM) and 806 minutes (MIMLBoost).
6.2 Named Entity Disambiguation
Query ambiguity is a fundamental obstacle for search engines trying to capture users' search intentions. In this section, we employ DBA to disambiguate the named entities in web search queries. This is a very challenging problem because queries are usually very short (2 to 3 words on average), noisy (e.g., misspellings, abbreviations, little grammatical structure) and topic-distracted. A single named-entity query Q can be viewed as a combination of a single named entity e and a set of context words w (the remaining text in Q). By differentiating the possible meanings of the named entity in a query and identifying the most probable one, entity disambiguation can help search engines capture the precise information need of the user, and in turn improve search by responding with the truly most relevant documents. For example, when a user inputs "When are the casting calls for Harry Potter in USA?", the system should be able to identify that the ambiguous named entity "Harry Potter" (which can be a movie, a book or a game) really refers to a movie in this specific query.
We treat the ambiguity of e as a hidden class z over e and make use of the query log as a data source for mining the relationship among e, w and z. In particular, the query log can be viewed as a multi-class, multi-label, multi-instance corpus $\{(X_n, Y_n)\}_{n=1,\ldots,N}$, in which each pattern X corresponds to a named entity e and is characterized by a set of instances $\{x_m\}_{m=1,\ldots,M}$ corresponding to all the contexts $\{w_m\}_{m=1,\ldots,M}$ that co-occur with e in queries, and the label Y contains all the ambiguities of e.
Our data was based on a snapshot of answers.yahoo.com crawled in early 2008, containing 216,563 queries from 101 classes. We manually collect 400 named entities and label them according to the labels of their co-occurring queries in Yahoo! CQA. A randomly chosen subset of 300 entities is used as training data and the other 100 are used for testing. We compare our DBA-based method with baselines including the Multinomial Naïve Bayes classifier using TF (MNB-TF) or TF-IDF (MNB-TF-IDF) as word attributes, and the SVM classifier using TF (SVM-TF) or TF-IDF (SVM-TF-IDF). For SVM, a scheme similar to MIMLSVM is used for learning M3C classifiers.
Table 2 reports the Accuracy@N (N = 1, 2, 3) as well as the micro-averaged and macro-averaged F-measure scores of each disambiguation approach³. All the results are obtained through 5-fold cross-validation. From the table, we observe that DBA achieves significantly better performance than all the other methods. In particular, for Accuracy@1 scores, DBA achieves a gain of about 30% relative to the two MNB methods, and about 10% relative to the two SVM methods; for macro-averaged F-measures, DBA achieves a gain of about 50% over the MNB methods, and about 25% over the SVM methods. As a reference, Figure 3 illustrates the sorted precision and recall scores for each of the 101 classes. We can see that DBA slightly outperforms the baselines in terms of precision, and performs significantly better in terms of recall. In particular, for recall, DBA achieves a gain of more than 50% relative to the MNB and SVM baselines.
³Since SVM only outputs hard class assignments, there is no Accuracy@2,3 for SVM-based methods.
7 Concluding Remarks
Multi-class, multi-label and multi-instance classification (M3C) is encountered in many applications. Even for a task that is not explicitly an M3C problem, it might still be advantageous to treat it as M3C so as to better explore its inner structure and effectively handle the ambiguities. M3C also arises naturally from the difficulty of acquiring finely-labeled data. In this paper, we have proposed a probabilistic generative model for M3C corpora. The proposed DBA model is useful for both pattern classification and instance disambiguation, as has been tested respectively on text classification and named-entity disambiguation tasks.
An interesting observation in practice is that, although there might be a large number of classes/topics, a pattern is usually associated with only a very limited number of them. In our experiments, we found that substantial improvement could be achieved by simply enforcing label sparsity, e.g., by using LASSO-style regularization. In the future, we will investigate such "label parsimoniousness" in a principled way. Another meaningful investigation would be to explicitly capture or explore the class correlations by using, for example, the logistic normal distribution [3] rather than the Dirichlet.
Acknowledgments
Hongyuan Zha is supported by NSF #DMS-0736328 and a grant from Microsoft. Bao-Gang Hu is supported by NSFC #60275025 and the MOST of China grant #2007DFC10740.
References
[1] Andrews S. and Hofmann T. (2003) Multiple Instance Learning via Disjunctive Programming Boosting. In Advances in Neural Information Processing Systems 17 (NIPS'03), MIT Press.
[2] Blei D. and McAuliffe J. (2007) Supervised topic models. In Advances in Neural Information Processing Systems 21 (NIPS'07), MIT Press.
[3] Blei D. and Lafferty J. (2007) A correlated topic model of Science. Annals of Applied Statistics, Vol. 1, No. 1, pp. 17-35, 2007.
[4] Blei D., Ng A. and Jordan M. (2003) Latent Dirichlet Allocation. Journal of Machine Learning Research, Vol. 3, pp. 993-1022, Jan. 2003, MIT Press.
[5] Boutell M. R., Luo J., Shen X. and Brown C. M. (2004) Learning Multi-Label Scene Classification. Pattern Recognition, 37(9), pp. 1757-1771, 2004.
[6] Cour T., Sapp B., Jordan C. and Taskar B. (2009) Learning from Ambiguously Labeled Images. In the 23rd IEEE Conference on Computer Vision and Pattern Recognition (CVPR'09).
[7] Dietterich T. G., Lathrop R. H. and Lozano-Perez T. (1997) Solving the Multiple-Instance Problem with Axis-Parallel Rectangles. Artificial Intelligence Journal, Vol. 89, pp. 31-71, Jan. 1997.
[8] Ghamrawi N. and McCallum A. (2005) Collective Multi-Label Classification. In ACM International Conference on Information and Knowledge Management (CIKM'05), pp. 195-200.
[9] Jaakkola T. and Jordan M. I. (2000) Bayesian parameter estimation via variational methods. Statistics and Computing, Vol. 10, Issue 1, pp. 25-37.
[10] Ueda N. and Saito K. (2002) Parametric Mixture Models for Multi-Labeled Text. In Advances in Neural Information Processing Systems 15 (NIPS'02).
[11] Viola P., Platt J. and Zhang C. (2006) Multiple Instance Boosting for Object Detection. In Advances in Neural Information Processing Systems 20 (NIPS'06), pp. 1419-1426, MIT Press.
[12] Xu G., Yang S.-H. and Li H. (2009) Named Entity Mining from Click-Through Data Using Weakly Supervised LDA. In ACM Knowledge Discovery and Data Mining (KDD'09).
[13] Zhou Z.-H. and Zhang M.-L. (2006) Multi-Instance Multi-Label Learning with Application to Scene Classification. In Advances in Neural Information Processing Systems 20 (NIPS'06).
Positive Semidefinite Metric Learning with Boosting
Chunhua Shen†,‡, Junae Kim†,‡, Lei Wang‡, Anton van den Hengel§
†NICTA Canberra Research Lab, Canberra, ACT 2601, Australia∗
‡Australian National University, Canberra, ACT 0200, Australia
§The University of Adelaide, Adelaide, SA 5005, Australia
Abstract
The learning of appropriate distance metrics is a critical problem in image classification and retrieval. In this work, we propose a boosting-based technique, termed BoostMetric, for learning a Mahalanobis distance metric. One of the primary difficulties in learning such a metric is to ensure that the Mahalanobis matrix remains positive semidefinite. Semidefinite programming is sometimes used to enforce this constraint, but does not scale well. BoostMetric is instead based on a key observation that any positive semidefinite matrix can be decomposed into a linear positive combination of trace-one rank-one matrices. BoostMetric thus uses rank-one positive semidefinite matrices as weak learners within an efficient and scalable boosting-based learning process. The resulting method is easy to implement, does not require tuning, and can accommodate various types of constraints. Experiments on various datasets show that the proposed algorithm compares favorably to the state-of-the-art methods in terms of classification accuracy and running time.
Introduction
It has been an extensively sought-after goal to learn an appropriate distance metric in image classification and retrieval problems using simple and efficient algorithms [1?5]. Such distance metrics
are essential to the effectiveness of many critical algorithms such as k-nearest neighbor (kNN), kmeans clustering, and kernel regression, for example. We show in this work how a Mahalanobis
metric is learned from proximity comparisons among triples of training data. Mahalanobis distance, a.k.a. Gaussian quadratic distance, is parameterized by a positive semidefinite (p.s.d.) matrix.
Therefore, typically methods for learning a Mahalanobis distance result in constrained semidefinite
programs. We discuss the problem setting as well as the difficulties for learning such a p.s.d. matrix. If we let ai , i = 1, 2 ? ? ? , represent a set of points in RD , the training data consist of a set of
constraints upon the relative distances between these points, S
S = {(ai , aj , ak )|distij < distik },
where distij measures the distance between ai and aj . We are interested in the case that dist
computes the Mahalanobis
distance. The Mahalanobis distance between two vectors takes the form:
p
kai ? aj kX = (ai ? aj )? X(ai ? aj ), with X < 0, a p.s.d. matrix. It is equivalent to learn a projection matrix L and X = LL? . Constraints such as those above often arise when it is known that ai
and aj belong to the same class of data points while ai , ak belong to different classes. In some cases,
these comparison constraints are much easier to obtain than either the class labels or distances between data elements. For example, in video content retrieval, faces extracted from successive frames
at close locations can be safely assumed to belong to the same person, without requiring the individual to be identified. In web search, the results returned by a search engine are ranked according
to the relevance, an ordering which allows a natural conversion into a set of constraints.
?
NICTA is funded through the Australian Government?s Backing Australia?s Ability initiative, in part
through the Australian Research Council.
The requirement that X be p.s.d. has led to the development of a number of methods for learning a Mahalanobis distance which rely upon constrained semidefinite programming. This approach has a number of limitations, however, which we now discuss with reference to the problem of learning a p.s.d. matrix from a set of constraints upon pairwise-distance comparisons. Relevant work on this topic includes [3-8], amongst others.
Xing et al. [4] first proposed to learn a Mahalanobis metric for clustering using convex optimization. The inputs are two sets: a similarity set and a dissimilarity set. The algorithm maximizes the distance between points in the dissimilarity set under the constraint that the distance between points in the similarity set is upper-bounded. Neighborhood component analysis (NCA) [6] and large margin nearest neighbor (LMNN) [7] learn a metric by maintaining consistency in the data's neighborhood and keeping a large margin at the boundaries of different classes. It has been shown in [7] that LMNN delivers state-of-the-art performance among most distance metric learning algorithms. The work on LMNN [7] and PSDBoost [9] has directly inspired our work. Instead of the hinge loss used in LMNN and PSDBoost, we use the exponential loss function in order to derive an AdaBoost-like optimization procedure. Hence, despite similar purposes, our algorithm differs essentially in the optimization. While the formulation of LMNN looks more similar to support vector machines (SVMs) and PSDBoost to LPBoost, our algorithm, termed BoostMetric, largely draws upon AdaBoost [10].
In many cases, it is difficult to find a global optimum in the projection matrix L [6]. Reformulation-linearization is a typical technique in convex optimization to relax and convexify such problems [11]. In metric learning, much existing work instead learns $X = LL^\top$ to seek a global optimum, e.g., [4, 7, 12, 8]. The price is heavy computation and poor scalability: it is not trivial to preserve the semidefiniteness of X during the course of learning. Standard approaches like interior-point Newton methods require the Hessian, which usually demands $O(D^4)$ resources (where D is the input dimension); this can be prohibitive for many real-world problems. Alternatively, projected (sub-)gradient methods are adopted in [7, 4, 8]. The disadvantages of this approach are: (1) it is not easy to implement; (2) many parameters are involved; (3) convergence is slow. PSDBoost [9] converts the particular semidefinite program in metric learning into a sequence of linear programs (LPs). At each iteration of PSDBoost, an LP needs to be solved as in LPBoost, which scales around $O(J^{3.5})$ with J the number of iterations (and therefore variables). As J increases, the scale of the LP becomes larger. Another problem is that PSDBoost needs to store all the weak learners (the rank-one matrices) during the optimization. When the input dimension D is large, the memory required is proportional to $JD^2$, which can be prohibitively large at a late iteration J. Our proposed algorithm solves both of these problems.
Based on the observation from [9] that any positive semidefinite matrix can be decomposed into a linear positive combination of trace-one rank-one matrices, we propose BoostMetric for learning a p.s.d. matrix. The weak learner of BoostMetric is a rank-one p.s.d. matrix, as in PSDBoost. The proposed BoostMetric algorithm has the following desirable properties: (1) BoostMetric is efficient and scalable. Unlike most existing methods, no semidefinite programming is required; at each iteration, only the largest eigenvalue and its corresponding eigenvector are needed. (2) BoostMetric can accommodate various types of constraints. We demonstrate learning a Mahalanobis metric from proximity comparison constraints. (3) Like AdaBoost, BoostMetric does not have any parameter to tune; the user only needs to know when to stop. In contrast, both LMNN and PSDBoost have parameters to cross-validate. Also like AdaBoost, it is easy to implement: no sophisticated optimization techniques such as LP solvers are involved. Unlike PSDBoost, we do not need to store all the weak learners. The efficacy and efficiency of the proposed BoostMetric are demonstrated on various datasets.
Throughout this paper, a matrix is denoted by a bold upper-case letter (X) and a column vector by a bold lower-case letter (x). The $i$-th row of X is denoted by $X_{i:}$ and the $i$-th column by $X_{:i}$. $\mathrm{Tr}(\cdot)$ is the trace of a symmetric matrix, and $\langle X, Z \rangle = \mathrm{Tr}(X Z^\top) = \sum_{ij} X_{ij} Z_{ij}$ is the inner product of two matrices. An element-wise inequality between two vectors, such as $u \leq v$, means $u_i \leq v_i$ for all $i$. We use $X \succeq 0$ to indicate that the matrix X is positive semidefinite.
2 Algorithms
2.1 Distance Metric Learning
As discussed, using the Mahalanobis metric is equivalent to linearly transforming the data by a projection matrix $L \in \mathbb{R}^{D \times d}$ (usually $D \geq d$) before calculating the standard Euclidean distance:
$$\mathrm{dist}_{ij}^2 = \|L^\top a_i - L^\top a_j\|_2^2 = (a_i - a_j)^\top L L^\top (a_i - a_j) = (a_i - a_j)^\top X (a_i - a_j). \qquad (1)$$
Although one could learn L directly, as many conventional approaches do, this setting involves non-convex constraints, which make the problem difficult to solve. As we will show, in order to convexify these conditions, a new variable $X = L L^\top$ is introduced instead. This technique has been used widely in convex optimization and machine learning, e.g., [12]. If $X = I$, the metric reduces to the Euclidean distance. If X is diagonal, the problem corresponds to learning a metric in which the different features are given different weights, a.k.a. feature weighting.
In the framework of large-margin learning, we want to maximize the margin between $\mathrm{dist}_{ij}$ and $\mathrm{dist}_{ik}$. That is, we wish to make $\mathrm{dist}_{ik}^2 - \mathrm{dist}_{ij}^2 = (a_i - a_k)^\top X (a_i - a_k) - (a_i - a_j)^\top X (a_i - a_j)$ as large as possible under some regularization. To simplify notation, we write this difference as $\mathrm{dist}_{ik}^2 - \mathrm{dist}_{ij}^2 = \langle A_r, X \rangle$, with
$$A_r = (a_i - a_k)(a_i - a_k)^\top - (a_i - a_j)(a_i - a_j)^\top, \qquad (2)$$
for $r = 1, \ldots, |\mathcal{S}|$, where $|\mathcal{S}|$ is the size of the set $\mathcal{S}$.
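The following sketch illustrates Eqs. (1)-(2) numerically: the identity $\langle A_r, X \rangle = \mathrm{dist}_{ik}^2 - \mathrm{dist}_{ij}^2$ can be checked directly on random stand-in data (the data and sizes are assumptions for illustration only).

import numpy as np

rng = np.random.default_rng(0)
D = 4
L_proj = rng.normal(size=(D, D))
X = L_proj @ L_proj.T                        # X is p.s.d. by construction

def mahalanobis2(u, v, X):
    d = u - v
    return d @ X @ d                         # squared Mahalanobis distance, Eq. (1)

def triplet_matrix(ai, aj, ak):
    dik, dij = ai - ak, ai - aj
    return np.outer(dik, dik) - np.outer(dij, dij)   # A_r of Eq. (2)

ai, aj, ak = rng.normal(size=(3, D))
Ar = triplet_matrix(ai, aj, ak)
margin = np.trace(Ar @ X)                    # <A_r, X>
assert np.isclose(margin, mahalanobis2(ai, ak, X) - mahalanobis2(ai, aj, X))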
2.2 Learning with Exponential Loss
We derive a general algorithm for p.s.d. matrix learning with exponential loss. Assume that we want to find a p.s.d. matrix $X \succeq 0$ such that a set of constraints
$$\langle A_r, X \rangle > 0, \quad r = 1, 2, \ldots,$$
is satisfied as well as possible. These constraints need not all be strictly satisfied. We can define the margin $\rho_r = \langle A_r, X \rangle$ for all $r$. Employing the exponential loss, we want to optimize
$$\min \; \log \sum_{r=1}^{|\mathcal{S}|} \exp(-\rho_r) + v \, \mathrm{Tr}(X) \quad \text{s.t.} \quad \rho_r = \langle A_r, X \rangle, \; r = 1, \ldots, |\mathcal{S}|, \; X \succeq 0. \qquad \text{(P0)}$$
Note that: (1) We work with the logarithmic version of the sum-of-exponentials loss. This transform does not change the original optimization problem because the logarithmic function is strictly monotonically increasing. (2) A regularization term $\mathrm{Tr}(X)$ is applied. Without this regularization, one could always multiply X by an arbitrarily large factor to drive the exponential loss toward zero when all constraints are satisfied. This trace-norm regularization may also lead to low-rank solutions. (3) An auxiliary variable $\rho_r$, $r = 1, \ldots$, must be introduced to derive a meaningful dual problem, as we show later.
We can decompose X as $X = \sum_{j=1}^{J} w_j Z_j$, with $w_j \geq 0$, $\mathrm{rank}(Z_j) = 1$ and $\mathrm{Tr}(Z_j) = 1$ for all $j$. So
$$\rho_r = \langle A_r, X \rangle = \Big\langle A_r, \sum_{j=1}^{J} w_j Z_j \Big\rangle = \sum_{j=1}^{J} w_j \langle A_r, Z_j \rangle = \sum_{j=1}^{J} w_j H_{rj} = H_{r:} w, \quad \forall r. \qquad (3)$$
Here $H_{rj}$ is shorthand for $H_{rj} = \langle A_r, Z_j \rangle$. Clearly $\mathrm{Tr}(X) = \sum_{j=1}^{J} w_j \mathrm{Tr}(Z_j) = \mathbf{1}^\top w$.
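Because each basis is rank-one and trace-one, i.e., $Z_j = v_j v_j^\top$ with $\|v_j\|_2 = 1$, the inner product $H_{rj} = \langle A_r, Z_j \rangle = v_j^\top A_r v_j$ is a quadratic form, so H can be built without materializing the $D \times D$ basis matrices. A hypothetical helper (names ours), useful later when the algorithm generates bases one at a time:

import numpy as np

def build_H(A_list, V):
    """A_list: list of D x D triplet matrices A_r; V: J x D array of unit-norm rows v_j."""
    return np.array([[v @ A @ v for v in V] for A in A_list])  # |S| x J matrix H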
2.3 The Lagrange Dual Problem
We now derive the Lagrange dual of the problem we are interested in. The original problem (P0) now becomes
$$\min \; \log \sum_{r=1}^{|\mathcal{S}|} \exp(-\rho_r) + v \mathbf{1}^\top w \quad \text{s.t.} \quad \rho_r = H_{r:} w, \; r = 1, \ldots, |\mathcal{S}|, \; w \geq 0. \qquad \text{(P1)}$$
In order to derive its dual, we write its Lagrangian
$$L(w, \rho, u, p) = \log \sum_{r=1}^{|\mathcal{S}|} \exp(-\rho_r) + v \mathbf{1}^\top w + \sum_{r=1}^{|\mathcal{S}|} u_r (\rho_r - H_{r:} w) - p^\top w, \qquad (4)$$
with $p \geq 0$. Here u and p are Lagrange multipliers. The dual problem is obtained by finding the saddle point of L, i.e., $\sup_{u,p} \inf_{w,\rho} L$:
$$\inf_{w,\rho} L = \inf_\rho \underbrace{\Big( \log \sum_{r=1}^{|\mathcal{S}|} \exp(-\rho_r) + \sum_{r=1}^{|\mathcal{S}|} u_r \rho_r \Big)}_{L_1} + \inf_w \underbrace{\Big( v\mathbf{1}^\top - \sum_{r=1}^{|\mathcal{S}|} u_r H_{r:} - p^\top \Big) w}_{L_2} = -\sum_{r=1}^{|\mathcal{S}|} u_r \log u_r.$$
The infimum of $L_1$ is found by setting its first derivative to zero, and we have:
$$\inf_\rho L_1 = \begin{cases} -\sum_r u_r \log u_r & \text{if } u \geq 0,\ \mathbf{1}^\top u = 1, \\ -\infty & \text{otherwise.} \end{cases}$$
This infimum is the Shannon entropy. $L_2$ is linear in $w$, hence $L_2$ must be 0, which leads to
$$\sum_{r=1}^{|\mathcal{S}|} u_r H_{r:} \leq v \mathbf{1}^\top. \qquad (5)$$
The Lagrange dual problem of (P1) is an entropy maximization problem, which writes
$$\max_u \; -\sum_{r=1}^{|\mathcal{S}|} u_r \log u_r, \quad \text{s.t.} \quad u \geq 0, \; \mathbf{1}^\top u = 1, \; \text{and (5)}. \qquad \text{(D1)}$$
Weak and strong duality hold under mild conditions [11]. That means one can usually solve one problem from the other. The KKT conditions link the optima of these two problems. In our case,
$$u_r^* = \frac{\exp(-\rho_r^*)}{\sum_{k=1}^{|\mathcal{S}|} \exp(-\rho_k^*)}, \quad \forall r. \qquad (6)$$
While it is possible to devise a totally-corrective column generation based optimization procedure for solving our problem, as in the case of LPBoost [13], we are more interested in one-at-a-time coordinate-wise descent algorithms, as in the case of AdaBoost [10], which have two advantages: (1) they are computationally efficient, and (2) they are parameter free. Let us start from some basic knowledge of column generation, because our coordinate descent strategy is inspired by it.
If we knew all the bases $Z_j$ ($j = 1, \ldots, J$) and hence the entire matrix H, then either the primal (P1) or the dual (D1) could be trivially solved (at least in theory), because both are convex optimization problems that can be solved in polynomial time. In particular, the primal problem is convex minimization with simple nonnegativity constraints; off-the-shelf software like L-BFGS-B [14] can be used for this purpose. Unfortunately, in practice we do not have access to all the bases: the number of possible Z's is infinite. In convex optimization, column generation is a technique designed to address this difficulty.
Instead of directly solving the primal problem (P1), we iteratively find the most violated constraint in the dual (D1) for the current solution and add this constraint to the optimization problem. For this purpose, we need to solve
$$\hat{Z} = \arg\max_Z \; \sum_{r=1}^{|\mathcal{S}|} u_r \langle A_r, Z \rangle, \quad \text{s.t.} \quad Z \in \Omega_1. \qquad (7)$$
Here $\Omega_1$ is the set of trace-one rank-one matrices. We discuss how to solve (7) efficiently later. Now we move on to derive a coordinate descent optimization procedure.
2.4 Coordinate Descent Optimization
We show how an AdaBoost-like optimization procedure can be derived for our metric learning problem. As in AdaBoost, we need to solve for the primal variable $w_j$ given all the weak learners up to iteration $j$.
Optimizing for $w_j$. Since we are interested in one-at-a-time coordinate-wise optimization, we keep $w_1, w_2, \ldots, w_{j-1}$ fixed when solving for $w_j$. The cost function of the primal problem is (in the following derivation, we drop terms irrelevant to the variable $w_j$)
$$C_p(w_j) = \log \sum_{r=1}^{|\mathcal{S}|} \exp(-\rho_r^{j-1}) \exp(-H_{rj} w_j) + v w_j.$$
Clearly, $C_p$ is convex in $w_j$, and hence there is only one minimum, which is also globally optimal. The first derivative of $C_p$ with respect to $w_j$ vanishes at optimality, which gives
$$\sum_{r=1}^{|\mathcal{S}|} (H_{rj} - v) \, u_r^{j-1} \exp(-w_j H_{rj}) = 0. \qquad (8)$$
Algorithm 1 Bisection search for $w_j$.
Input: An interval $[w_l, w_u]$ known to contain the optimal value of $w_j$, and a convergence tolerance $\varepsilon > 0$.
repeat
  1. $w_j = 0.5 (w_l + w_u)$;
  2. if l.h.s. of (8) > 0 then $w_l = w_j$; else $w_u = w_j$.
until $w_u - w_l < \varepsilon$;
Output: $w_j$.
If $H_{rj}$ were discrete, such as $\{+1, -1\}$ in standard AdaBoost, we could obtain a closed-form solution similar to AdaBoost's. Unfortunately, in our case $H_{rj}$ can be any real value, so we instead use bisection to search for the optimal $w_j$. The bisection method is one of the root-finding algorithms: it repeatedly halves an interval and then selects the subinterval in which a root must lie. Bisection is simple and robust, although it is not the fastest algorithm for root-finding; Newton-type algorithms are also applicable here. Algorithm 1 gives the bisection procedure. We have used the fact that the l.h.s. of (8) must be positive at $w_l$, otherwise no solution can be found; when $w_j = 0$, the l.h.s. of (8) is clearly positive.
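A sketch of Algorithm 1 follows; `H_col` holds $H_{rj}$ for the candidate base over all triplets and `u` is the current dual distribution. Growing the initial upper bracket until the l.h.s. turns negative is our own convenience, not part of the paper's procedure.

import numpy as np

def lhs(w, H_col, u, v):
    # Left-hand side of the optimality condition (8)
    return np.sum((H_col - v) * u * np.exp(-w * H_col))

def bisect_w(H_col, u, v, w_u=10.0, eps=1e-7):
    w_l = 0.0
    while lhs(w_u, H_col, u, v) > 0:     # enlarge the bracket if needed
        w_u *= 2.0
        if w_u > 1e8:
            raise RuntimeError("no root found for condition (8)")
    while w_u - w_l > eps:
        w = 0.5 * (w_l + w_u)
        if lhs(w, H_col, u, v) > 0:
            w_l = w
        else:
            w_u = w
    return 0.5 * (w_l + w_u)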
Updating u. The rule for updating the dual variable u can be easily obtained from (6). At iteration $j$, we have
$$u_r^j \propto \exp(-\rho_r^j) \propto u_r^{j-1} \exp(-H_{rj} w_j),$$
derived from (6). So once $w_j$ is calculated, we can update u as
$$u_r^j = \frac{u_r^{j-1} \exp(-H_{rj} w_j)}{z}, \quad r = 1, \ldots, |\mathcal{S}|, \qquad (9)$$
where $z$ is a normalization factor so that $\sum_{r=1}^{|\mathcal{S}|} u_r^j = 1$. This is exactly the same as in AdaBoost.
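In code, the update of Eq. (9) is one line of AdaBoost-style reweighting (function name ours):

import numpy as np

def update_u(u, H_col, w):
    u_new = u * np.exp(-w * H_col)   # Eq. (9), unnormalized
    return u_new / u_new.sum()       # divide by the normalizer z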
2.5 Base Learning Algorithm
In this section, we show that the optimization problem (7) can be solved exactly and efficiently using eigenvalue decomposition (EVD). From $Z \succeq 0$ and $\mathrm{rank}(Z) = 1$, we know that Z has the form $Z = \xi \xi^\top$, $\xi \in \mathbb{R}^D$; and $\mathrm{Tr}(Z) = 1$ means $\|\xi\|_2 = 1$. We have
$$\sum_{r=1}^{|\mathcal{S}|} u_r \langle A_r, Z \rangle = \Big\langle \sum_{r=1}^{|\mathcal{S}|} u_r A_r, Z \Big\rangle = \xi^\top \Big( \sum_{r=1}^{|\mathcal{S}|} u_r A_r \Big) \xi.$$
By denoting
$$\hat{A} = \sum_{r=1}^{|\mathcal{S}|} u_r A_r, \qquad (10)$$
the base learning optimization equals $\max_\xi \; \xi^\top \hat{A} \xi$, s.t. $\|\xi\|_2 = 1$. It is clear that the largest eigenvalue of $\hat{A}$, $\lambda_{\max}(\hat{A})$, and its corresponding eigenvector $\xi_1$ give the solution to the above problem. Note that $\hat{A}$ is symmetric. Also see [9] for details.
$\lambda_{\max}(\hat{A})$ is also used as one of the stopping criteria of the algorithm. From condition (5), $\lambda_{\max}(\hat{A}) < v$ means that we cannot find a new base matrix $\hat{Z}$ that violates (5); the algorithm has converged. We summarize our main algorithmic results in Algorithm 2.
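The base learner therefore reduces to one (partial) eigen-decomposition. In the sketch below, `numpy.linalg.eigh` returns eigenvalues in ascending order, so the last eigenpair is the one wanted; for large D, a Lanczos solver such as `scipy.sparse.linalg.eigsh` with k=1 would be the more economical choice.

import numpy as np

def best_base(A_list, u):
    A_hat = sum(ur * Ar for ur, Ar in zip(u, A_list))   # A_hat of Eq. (10)
    evals, evecs = np.linalg.eigh(A_hat)                # ascending eigenvalues
    return evals[-1], evecs[:, -1]                      # lambda_max and xi_1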
3 Experiments
3.1 Classification on Benchmark Datasets
We evaluate BoostMetric on 15 datasets of different sizes. Some of the datasets have very high-dimensional inputs; we use PCA to decrease the dimensionality before training on these datasets (datasets 2-6). PCA pre-processing helps to eliminate noise and speeds up computation.
Algorithm 2 Positive semidefinite matrix learning with boosting.
Input:
  - Training set triplets $(a_i, a_j, a_k) \in \mathcal{S}$; compute $A_r$, $r = 1, 2, \ldots$, using (2).
  - J: maximum number of iterations;
  - (optional) regularization parameter v; we may simply set v to a very small value, e.g., $10^{-7}$.
1. Initialize: $u_r^0 = \frac{1}{|\mathcal{S}|}$, $r = 1, \ldots, |\mathcal{S}|$;
2. for $j = 1, 2, \ldots, J$ do
3.   Find a new base $Z_j$ by computing the largest eigenvalue $\lambda_{\max}(\hat{A})$ and its eigenvector of $\hat{A}$ in (10);
4.   if $\lambda_{\max}(\hat{A}) < v$ then break (converged);
5.   Compute $w_j$ using Algorithm 1;
6.   Update u to obtain $u_r^j$, $r = 1, \ldots, |\mathcal{S}|$, using (9);
7. Output: the final p.s.d. matrix $X \in \mathbb{R}^{D \times D}$, $X = \sum_{j=1}^{J} w_j Z_j$.
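Assembling the helper sketches from Sections 2.4-2.5 gives a compact rendering of Algorithm 2. Note that X is accumulated on the fly, so the rank-one bases never need to be stored; all function names are ours.

import numpy as np

def boost_metric(A_list, J=500, v=1e-7):
    n = len(A_list)
    D = A_list[0].shape[0]
    u = np.full(n, 1.0 / n)                  # step 1: uniform dual distribution
    X = np.zeros((D, D))
    for _ in range(J):
        lam, vec = best_base(A_list, u)      # step 3: largest eigenpair of A_hat
        if lam < v:                          # step 4: no violated constraint remains
            break
        H_col = np.array([vec @ Ar @ vec for Ar in A_list])  # H_rj for the new base
        w = bisect_w(H_col, u, v)            # step 5: Algorithm 1
        X += w * np.outer(vec, vec)          # accumulate X = sum_j w_j Z_j
        u = update_u(u, H_col, w)            # step 6: Eq. (9)
    return X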
We have used the USPS and MNIST handwritten digits, the ORL face recognition datasets, the Columbia University Image Library (COIL-20)¹, and UCI machine learning datasets² (datasets 7-13), as well as Twin Peaks and Helix. The last two are artificial datasets³.
Experimental results are obtained by averaging over 10 runs (except USPS-1). We randomly split the datasets for each run. We have used the same mechanism to generate training triplets as described in [7]. Briefly, for each training point $a_i$, the k nearest neighbors that have the same label as $y_i$ (targets), as well as the k nearest neighbors that have different labels from $y_i$ (imposters), are found; we then construct triplets from $a_i$ and its corresponding targets and imposters (a sketch of this step appears after this paragraph). For all the datasets, we have set k = 3, except that k = 1 for datasets USPS-1, ORLFace-1 and ORLFace-2 due to their large size. We have compared our method against a few methods: Xing et al. [4], RCA [5], NCA [6] and LMNN [7]. LMNN is one of the state-of-the-art methods according to recent studies such as [15]. Also, in Table 1, "Euclidean" is the baseline algorithm that uses the standard Euclidean distance. The codes for the compared algorithms are downloaded from the corresponding authors' websites. We have released our code for BoostMetric at [16]. The experiment setting for LMNN follows [7]. For BoostMetric, we have set $v = 10^{-7}$ and the maximum number of iterations J = 500. As we can see from Table 1, we can conclude: (1) BoostMetric consistently improves kNN classification using the Euclidean distance on most datasets, so learning a Mahalanobis metric based upon the large-margin concept does lead to improvements in kNN classification. (2) BoostMetric outperforms the other algorithms in most cases (on 11 out of 15 datasets). LMNN is statistically the second best algorithm on these 15 datasets; LMNN's results are consistent with those given in [7]. (3) Xing et al. [4] and NCA can only handle a few small datasets, and in general they do not perform very well. A good initialization is important for NCA because NCA's cost function is non-convex and can only find a local optimum.
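A sketch of the triplet-generation mechanism of [7] as described above (targets and imposters found by Euclidean nearest neighbors; the function name and the brute-force distance computation are our own simplifications):

import numpy as np

def make_triplets(A, labels, k=3):
    """A: (n, D) data matrix; labels: (n,) integer label array."""
    n = len(A)
    d2 = ((A[:, None, :] - A[None, :, :]) ** 2).sum(-1)   # all pairwise squared distances
    np.fill_diagonal(d2, np.inf)                          # exclude self-matches
    triplets = []
    for i in range(n):
        same = np.where(labels == labels[i])[0]
        diff = np.where(labels != labels[i])[0]
        targets = same[np.argsort(d2[i, same])][:k]       # k same-label neighbors
        imposters = diff[np.argsort(d2[i, diff])][:k]     # k different-label neighbors
        triplets += [(i, j, l) for j in targets for l in imposters]
    return triplets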
Influence of v. Previously, we claimed that our algorithm is parameter-free like AdaBoost. We do, however, have a parameter v in BoostMetric (AdaBoost simply sets v = 0). The coordinate-wise gradient descent optimization strategy of AdaBoost leads to an $\ell_1$-norm regularized maximum margin classifier [17]: it is shown that AdaBoost minimizes its loss criterion with an $\ell_1$ constraint on the coefficient vector. Given the similarity of BoostMetric's optimization to AdaBoost's, we conjecture that BoostMetric has the same property. Here we empirically show that as long as v is sufficiently small, the final performance is not affected by its value. We set v from $10^{-8}$ to $10^{-4}$ and ran BoostMetric on 3 UCI datasets. Table 2 reports the final 3NN classification error for different v; the results are nearly identical.
Computational time. As we discussed, one major issue in learning a Mahalanobis distance is the heavy computational cost caused by the semidefiniteness constraint.
¹http://www1.cs.columbia.edu/CAVE/software/softlib/coil-20.php
²http://archive.ics.uci.edu/ml/
³http://boosting.googlecode.com/files/dataset1.tar.bz2
Table 1: Test classification error rates (%) of a 3-nearest neighbor classifier on benchmark datasets. Results of NCA and Xing et al. [4] on large datasets are not available either because the algorithm does not converge or due to out-of-memory problems.

| #  | dataset       | Euclidean    | Xing et al. [4] | RCA          | NCA          | LMNN         | BoostMetric  |
| 1  | USPS-1        | 5.18         | -               | 32.71        | -            | 7.51         | 2.96         |
| 2  | USPS-2        | 3.56 (0.28)  | -               | 5.57 (0.33)  | -            | 2.18 (0.27)  | 1.99 (0.24)  |
| 3  | ORLFace-1     | 3.33 (1.47)  | -               | 5.75 (2.85)  | 3.92 (2.01)  | 6.67 (2.94)  | 2.00 (1.05)  |
| 4  | ORLFace-2     | 5.33 (2.70)  | -               | 4.42 (2.08)  | 3.75 (1.63)  | 2.83 (1.77)  | 3.00 (1.31)  |
| 5  | MNIST         | 4.11 (0.43)  | -               | 4.31 (0.42)  | -            | 4.19 (0.49)  | 4.09 (0.31)  |
| 6  | COIL20        | 0.19 (0.21)  | -               | 0.32 (0.29)  | -            | 2.41 (1.80)  | 0.02 (0.07)  |
| 7  | Letters       | 5.74 (0.24)  | -               | 5.06 (0.26)  | -            | 4.34 (0.36)  | 3.54 (0.18)  |
| 8  | Wine          | 26.23 (5.52) | 10.38 (4.81)    | 2.26 (1.95)  | 27.36 (6.31) | 5.47 (3.01)  | 2.64 (1.59)  |
| 9  | Bal           | 18.13 (1.79) | 11.12 (2.12)    | 19.47 (2.39) | 4.81 (1.80)  | 11.87 (2.14) | 8.93 (2.28)  |
| 10 | Iris          | 2.22 (2.10)  | 2.22 (2.10)     | 3.11 (2.15)  | 2.89 (2.58)  | 2.89 (2.58)  | 2.89 (2.78)  |
| 11 | Vehicle       | 30.47 (2.41) | 28.66 (2.49)    | 21.42 (2.46) | 22.61 (3.26) | 22.57 (2.16) | 19.17 (2.10) |
| 12 | Breast-Cancer | 3.28 (1.06)  | 3.63 (0.93)     | 3.82 (1.15)  | 4.31 (1.10)  | 3.19 (1.43)  | 2.45 (0.95)  |
| 13 | Diabetes      | 27.43 (2.93) | 27.87 (2.71)    | 26.48 (1.61) | 27.61 (1.55) | 26.78 (2.42) | 25.04 (2.25) |
| 14 | Twin Peaks    | 1.13 (0.09)  | -               | 1.02 (0.09)  | -            | 0.98 (0.11)  | 0.14 (0.08)  |
| 15 | Helix         | 0.60 (0.12)  | -               | 0.61 (0.11)  | -            | 0.61 (0.13)  | 0.58 (0.12)  |
Table 2: Test error (%) of a 3-nearest neighbor classifier with different values of the parameter v. Each experiment is run 10 times; we report the mean and variance. As expected, as long as v is sufficiently small, over a wide range it has almost no effect on the final classification performance.

| v        | 10^-8       | 10^-7       | 10^-6       | 10^-5       | 10^-4       |
| Bal      | 8.98 (2.59) | 8.88 (2.52) | 8.88 (2.52) | 8.88 (2.52) | 8.93 (2.52) |
| B-Cancer | 2.11 (0.69) | 2.11 (0.69) | 2.11 (0.69) | 2.11 (0.69) | 2.11 (0.69) |
| Diabetes | 26.0 (1.33) | 26.0 (1.33) | 26.0 (1.33) | 26.0 (1.34) | 26.0 (1.46) |
Our algorithm is generally fast. It involves matrix operations and an EVD for finding the largest eigenvalue and its corresponding eigenvector; the time complexity of this partial EVD is $O(D^2)$, with D the input dimension. We compare our algorithm's running time with LMNN in Fig. 1 on an artificial dataset (concentric circles). We vary the input dimension from 50 to 1000 and keep the number of triplets fixed at 250. Instead of using standard interior-point SDP solvers that do not scale well, LMNN heuristically combines sub-gradient descent in both the matrices L and X. At each iteration, X is projected back onto the p.s.d. cone using EVD, so a full EVD with time complexity $O(D^3)$ is needed. Note that LMNN is much faster than SDP solvers like CSDP [18]. As seen from Fig. 1, when the input dimension is low, BoostMetric is comparable to LMNN. As expected, when the input dimension becomes high, BoostMetric is significantly faster than LMNN. Note that our implementation is in Matlab; improvements are expected from a C/C++ implementation.
3.2 Visual Object Categorization and Detection
The proposed BoostMetric and LMNN are further compared on four classes of the Caltech-101 object recognition database [19]: Motorbikes (798 images), Airplanes (800), Faces (435), and Background-Google (520). For each image, a number of interest regions are identified by the Harris-affine detector [20], and the visual content of each region is characterized by the SIFT descriptor [21]. The total number of local descriptors extracted from the images of the four classes
Figure 1: Computation time (CPU time per run, in seconds) of the proposed BoostMetric and the LMNN method versus the input dimension (50 to 1000) on an artificial dataset. BoostMetric is faster than LMNN at large input dimensions because at each iteration BoostMetric only needs to calculate the largest eigenvector, while LMNN needs a full eigen-decomposition.
Figure 2: Test error (3-nearest neighbor) of BoostMetric on the Motorbikes vs. Airplanes dataset. The left panel compares the Euclidean distance, LMNN and BoostMetric at input dimensions 100D and 200D; the right panel shows the test error against the number of training triplets (1000 to 9000) with a 100-word codebook. The test error of LMNN is 4.7% ± 0.5% with 8631 triplets for training, which is worse than BoostMetric. For the Euclidean distance, the error is much larger: 15% ± 1%.
are about 134,000, 84,000, 57,000, and 293,000, respectively. This experiment includes both object categorization (Motorbikes vs. Airplanes) and object detection (Faces vs. Background-Google) problems. To accumulate statistics, the images of the two involved object classes are randomly split into 10 pairs of training/test subsets. Restricted to the images in a training subset (those in a test subset are only used for testing), the local descriptors are clustered into visual words using k-means clustering. Each image is then represented by a histogram containing the number of occurrences of each visual word.
Motorbikes vs. Airplanes. This experiment discriminates images of motorbikes from images of airplanes. In each of the 10 pairs of training/test subsets, there are 959 training images and 639 test images. Two visual codebooks of size 100 and 200 are used, respectively. With the resulting histograms, the proposed BoostMetric and LMNN are learned on a training subset and evaluated on the corresponding test subset. Their averaged classification error rates are compared in Fig. 2 (left). For both visual codebooks, the proposed BoostMetric achieves lower error rates than LMNN and the Euclidean distance, demonstrating its superior performance. We also apply a linear SVM classifier with its regularization parameter carefully tuned by 5-fold cross-validation. Its error rates are 3.87% ± 0.69% and 3.00% ± 0.72% on the two visual codebooks, respectively. In contrast, a 3NN with BoostMetric has error rates of 3.63% ± 0.68% and 2.96% ± 0.59%. Hence, the performance of the proposed BoostMetric is comparable to, or even slightly better than, the SVM classifier. Fig. 2 (right) plots the test error of BoostMetric against the number of triplets used for training; the general trend is that more triplets lead to smaller errors.
Faces vs. Background-Google. This experiment uses the two object classes to form a retrieval problem. The target of retrieval is the face images; the images in the Background-Google class are randomly collected from the Internet and represent the non-target class. BoostMetric is first learned from a training subset, and retrieval is conducted on the corresponding test subset. In each of the 10 training/test splits, there are 573 training images and 382 test images. Again, two visual codebooks of size 100 and 200 are used. Each face image in a test subset is used as a query, and its distances from the other test images are calculated by BoostMetric, LMNN and the Euclidean distance. For each metric, the precision of the retrieved top 5, 10, 15 and 20 images is computed. The retrieval precision for each query is averaged over the test subset and then over the 10 test subsets. BoostMetric consistently attains the highest values, which again verifies its advantage over LMNN and the Euclidean distance. Very similar results are obtained with a codebook of size 200; see [16] for the full experimental results.
Conclusion
We have presented a new algorithm, B OOST M ETRIC, to learn a positive semidefinite metric using
boosting techniques. We have generalized AdaBoost in the sense that the weak learner of B OOSTM ETRIC is a matrix, rather than a classifier. Our algorithm is simple and efficient. Experiments
show its better performance over a few state-of-the-art existing metric learning methods. We are
currently combining the idea of on-line learning into B OOST M ETRIC to make it handle even larger
datasets.
References
[1] T. Hastie and R. Tibshirani. Discriminant adaptive nearest neighbor classification. IEEE Trans. Pattern Anal. Mach. Intell., 18(6):607-616, 1996.
[2] J. Yu, J. Amores, N. Sebe, P. Radeva, and Q. Tian. Distance learning for similarity estimation. IEEE Trans. Pattern Anal. Mach. Intell., 30(3):451-462, 2008.
[3] B. Jian and B. C. Vemuri. Metric learning using Iwasawa decomposition. In Proc. IEEE Int. Conf. Comp. Vis., pages 1-6, Rio de Janeiro, Brazil, 2007. IEEE.
[4] E. Xing, A. Ng, M. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In Proc. Adv. Neural Inf. Process. Syst. MIT Press, 2002.
[5] A. Bar-Hillel, T. Hertz, N. Shental, and D. Weinshall. Learning a Mahalanobis metric from equivalence constraints. J. Mach. Learn. Res., 6:937-965, 2005.
[6] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood component analysis. In Proc. Adv. Neural Inf. Process. Syst. MIT Press, 2004.
[7] K. Q. Weinberger, J. Blitzer, and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. In Proc. Adv. Neural Inf. Process. Syst., pages 1473-1480, 2005.
[8] A. Globerson and S. Roweis. Metric learning by collapsing classes. In Proc. Adv. Neural Inf. Process. Syst., 2005.
[9] C. Shen, A. Welsh, and L. Wang. PSDBoost: Matrix-generation linear programming for positive semidefinite matrices learning. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Proc. Adv. Neural Inf. Process. Syst., pages 1473-1480, Vancouver, Canada, 2008.
[10] R. E. Schapire. Theoretical views of boosting and applications. In Proc. Int. Conf. Algorithmic Learn. Theory, pages 13-25, London, UK, 1999. Springer-Verlag.
[11] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[12] K. Q. Weinberger and L. K. Saul. Unsupervised learning of image manifolds by semidefinite programming. Int. J. Comp. Vis., 70(1):77-90, 2006.
[13] A. Demiriz, K. P. Bennett, and J. Shawe-Taylor. Linear programming boosting via column generation. Mach. Learn., 46(1-3):225-254, 2002.
[14] C. Zhu, R. H. Byrd, and J. Nocedal. Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization. ACM Trans. Math. Softw., 23(4):550-560, 1997.
[15] L. Yang, R. Jin, L. Mummert, R. Sukthankar, A. Goode, B. Zheng, S. Hoi, and M. Satyanarayanan. A boosting framework for visuality-preserving distance metric learning and its application to medical image retrieval. IEEE Trans. Pattern Anal. Mach. Intell., November 2008, http://doi.ieeecomputersociety.org/10.1109/TPAMI.2008.273.
[16] http://code.google.com/p/boosting/.
[17] S. Rosset, J. Zhu, and T. Hastie. Boosting as a regularized path to a maximum margin classifier. J. Mach. Learn. Res., 5:941-973, 2004.
[18] B. Borchers. CSDP, a C library for semidefinite programming. Optim. Methods and Softw., 11(1):613-623, 1999.
[19] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. IEEE Trans. Pattern Anal. Mach. Intell., 28(4):594-611, April 2006.
[20] K. Mikolajczyk and C. Schmid. Scale & affine invariant interest point detectors. Int. J. Comp. Vis., 60(1):63-86, 2004.
[21] D. G. Lowe. Distinctive image features from scale-invariant keypoints. Int. J. Comp. Vis., 60(2):91-110, 2004.
2,933 | 3,659 | Abstraction and relational learning
Charles Kemp & Alan Jern
Department of Psychology
Carnegie Mellon University
{ckemp,ajern}@cmu.edu
Abstract
Most models of categorization learn categories defined by characteristic features
but some categories are described more naturally in terms of relations. We present
a generative model that helps to explain how relational categories are learned and
used. Our model learns abstract schemata that specify the relational similarities
shared by instances of a category, and our emphasis on abstraction departs from
previous theoretical proposals that focus instead on comparison of concrete instances. Our first experiment suggests that abstraction can help to explain some
of the findings that have previously been used to support comparison-based approaches. Our second experiment focuses on one-shot schema learning, a problem
that raises challenges for comparison-based approaches but is handled naturally by
our abstraction-based account.
Categories such as family, sonnet, above, betray, and imitate differ in many respects but all of them
depend critically on relational information. Members of a family are typically related by blood or
marriage, and the lines that make up a sonnet must rhyme with each other according to a certain
pattern. A pair of objects will demonstrate "aboveness" only if a certain spatial relationship is
present, and an event will qualify as an instance of betrayal or imitation only if its participants relate
to each other in certain ways. All of the cases just described are examples of relational categories.
This paper develops a computational approach that helps to explain how simple relational categories
are acquired.
Our approach highlights the role of abstraction in relational learning. Given several instances of
a relational category, it is often possible to infer an abstract representation that captures what the
instances have in common. We refer to these abstract representations as schemata, although others
may prefer to call them rules or theories. For example, a sonnet schema might specify the number
of lines that a sonnet should include and the rhyming pattern that the lines should follow. Once a
schema has been acquired it can support several kinds of inferences. A schema can be used to make
predictions about hidden aspects of the examples already observed: if the final word in a sonnet is
illegible, the rhyming pattern can help to predict the identity of this word. A schema can be used
to decide whether new examples (e.g. new poems) qualify as members of the category. Finally, a
schema can be used to generate novel examples of a category (e.g. novel sonnets).
Most researchers would agree that abstraction plays some role in relational learning, but Gentner [1]
and other psychologists have emphasized the role of comparison instead [2, 3]. Given one example
of a sonnet and the task of deciding whether a second poem is also a sonnet, a comparison-based
approach might attempt to establish an alignment or mapping between the two. Approaches that rely
on comparison or mapping are especially prominent in the literature on analogical reasoning [4, 5],
and many of these approaches can be viewed as accounts of relational categorization [6]. For example, the problem of deciding whether two systems are analogous can be formalized as the problem
of deciding whether these systems are instances of the same relational category. Despite some notable exceptions [6, 7], most accounts of analogy focus on comparison rather than abstraction, and
suggest that "analogy passes from one instance of a generalization to another without pausing for
explicit induction of the generalization" (p 95) [8].
[Figure 1 contents: Schema s: ∀Q ∀x ∀y (Q(x) < Q(y) ↔ D1(x) < D1(y)); Group g; Observation o]
Figure 1: A hierarchical generative model for learning and using relational categories. The schema
s at the top level is a logical sentence that specifies which groups are valid instances of the category. The group g at the second level is randomly sampled from the set of valid instances, and the
observation o is a partially observed version of group g.
Researchers that focus on comparison sometimes discuss abstraction, but typically suggest that
abstractions emerge as a consequence of comparing two or more concrete instances of a category [3, 5, 9, 10]. This view, however, will not account for one-shot inferences, or inferences
based on a single instance of a relational category. Consider a learner who is shown one instance of
a sonnet then asked to create a second instance. Since only one instance is provided, it is hard to
see how comparisons between instances could account for success on the task. A single instance,
however, will sometimes provide enough information for a schema to be learned, and this schema
should allow subsequent instances to be generated [11]. Here we develop a formal framework for
exploring relational learning in general and one-shot schema learning in particular.
Our framework relies on the hierarchical Bayesian approach, which provides a natural way to combine abstraction and probabilistic inference [12]. The hierarchical Bayesian approach supports representations at multiple levels of abstraction, and helps to explain how abstract representations (e.g.
a sonnet schema) can be acquired given observations of concrete instances (e.g. individual sonnets).
The schemata we consider are represented as sentences in a logical language, and our approach
therefore builds on previous probabilistic methods for learning and using logical theories [13, 14].
Following previous authors, we propose that logical representations can help to capture the content
of human knowledge, and that Bayesian inference helps to explain how these representations are
acquired and how they support inductive inference.
The following sections introduce our framework and then evaluate it using two behavioral experiments.
Our first experiment uses a standard classification task where participants are shown one example
of a category then asked to decide which of two alternatives is more likely to belong to the same
category. Tasks of this kind have previously been used to argue for the importance of comparison,
but we suggest that these tasks can be handled by accounts that focus on abstraction. Our second
experiment uses a less standard generation task [15, 16] where participants are shown a single example of a category then asked to generate additional examples. As predicted by our abstraction-based
account, we find that people are able to learn relational categories on the basis of a single example.
1 A generative approach to relational learning
Our examples so far have used real-world relational categories such as family and sonnet but we now
turn to a very simple domain where relational categorization can be studied. Each element in the
domain is a group of components that vary along a number of dimensions: in Figure 1, the components are figures that vary along the dimensions of size, color, and circle position. The groups can
be organized into categories; one such category includes groups where every component is black.
Although our domain is rather basic it allows some simple relational regularities to be explored. We
can consider categories, for example, where all components in a group must be the same along some
dimension, and categories where all components must be different along some dimension. We can
also consider categories defined by relationships between dimensions; for example, the category
that includes all groups where the size and color dimensions are correlated.
Each category is associated with a schema, or an abstract representation that specifies which groups
are valid instances of the category. Here we consider schemata that correspond to rules formulated
1  {∀x, ∃x} Di(x) {=, ≠, <, >} vk
2  {∀x∀y x ≠ y →, ∃x∃y x ≠ y ∧} Di(x) {=, ≠, <, >} Di(y)
3  {∀x, ∃x} Di(x) {=, ≠} vk {∧, ∨, ↔} Dj(x) {=, ≠} vl
4  {∀x∀y x ≠ y →, ∃x∃y x ≠ y ∧} ( Di(x) {=, ≠, <, >} Di(y) {∧, ∨, ↔} Dj(x) {=, ≠, <, >} Dj(y) )
5  {∀Q, ∃Q} {∀x∀y x ≠ y →, ∃x∃y x ≠ y ∧} Q(x) {=, ≠, <, >} Q(y)
6  {∀Q Q ≠ Di →, ∃Q Q ≠ Di ∧} {∀x∀y x ≠ y →, ∃x∃y x ≠ y ∧} ( Q(x) {=, ≠, <, >} Q(y) {∧, ∨, ↔} Di(x) {=, ≠, <, >} Di(y) )
7  {∀Q∀R Q ≠ R →, ∃Q∃R Q ≠ R ∧} {∀x∀y x ≠ y →, ∃x∃y x ≠ y ∧} ( Q(x) {=, ≠, <, >} Q(y) {∧, ∨, ↔} R(x) {=, ≠, <, >} R(y) )
Table 1: Templates used to construct a hypothesis space of logical schemata. An instance of a given
template can be created by choosing an element from each set enclosed in braces (some sets are laid
out horizontally to save space), replacing each occurrence of Di or Dj with a dimension (e.g. D1 )
and replacing each occurrence of vk or vl with a value (e.g. 1).
in a logical language. The language includes three binary connectives: and (∧), or (∨), and if
and only if (↔). Four binary relations (=, ≠, <, and >) are available for comparing values along
dimensions. Universal quantification (∀x) and existential quantification (∃x) are both permitted,
and the language includes quantification over objects (∀x) and over dimensions (∀Q). For example, the
schema in Figure 1 states that all dimensions are aligned. More precisely, if D1 is the dimension
of size, the schema states that for all dimensions Q, a component x is smaller than a component y
along dimension Q if and only if x is smaller in size than y. It follows that all three dimensions must
increase or decrease together.
To explain how rules in this logical language are learned we work with the hierarchical generative
model in Figure 1. The representation at the top level is a schema s, and we assume that one or
more groups g are generated from a distribution P (g|s). Following a standard approach to category
learning [17, 18], we assume that g is uniformly sampled from all groups consistent with s:
        P(g|s) ∝ { 1  if g is consistent with s;  0  otherwise }        (1)
For all applications in this paper, we assume that the number of components in a group is known
and fixed in advance.
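To make Equation 1 concrete, the sampling step can be sketched in a few lines of Python (the paper publishes no code, so every name below is our own): enumerate all groups of the fixed size, keep those the schema licenses, and draw one uniformly at random.

import itertools
import random

# Toy-scale sketch: a group of n components over d dimensions, each value
# in {1..v}, is a tuple of card tuples, e.g. ((3,1,2), (3,2,1), (3,3,3)).
def all_groups(n_components=3, n_dims=3, n_values=3):
    cards = list(itertools.product(range(1, n_values + 1), repeat=n_dims))
    return list(itertools.product(cards, repeat=n_components))

def sample_group(schema, groups):
    """Draw g ~ P(g|s): uniform over the groups consistent with schema s."""
    consistent = [g for g in groups if schema(g)]
    return random.choice(consistent)

# Example: the Figure 1 schema, 'for all Q, Q(x) < Q(y) iff D1(x) < D1(y)'.
def all_aligned(group):
    return all((x[0] < y[0]) == (x[q] < y[q])
               for x, y in itertools.permutations(group, 2)
               for q in range(len(x)))

groups = all_groups()
g = sample_group(all_aligned, groups)   # e.g. ((1,1,1), (2,2,2), (3,3,3))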
The bottom level of the hierarchy specifies observations o that are generated from a distribution
P (o|g). In most cases we assume that g can be directly observed, and that P (o|g) = 1 if o = g and
0 otherwise. We also consider the setting shown in Figure 1 where o is generated by concealing a
component of g chosen uniformly at random. Note that the observation o in Figure 1 includes only
four of the components in group g, and is roughly analogous to our earlier example of a sonnet with
an illegible final word.
To convert Figure 1 into a fully-specified probabilistic model it remains to define a prior distribution
P(s) over schemata. An appealing approach is to consider all of the infinitely many sentences in
the logical language already mentioned, and to define a prior favoring schemata which correspond
to simple (i.e. short) sentences. We approximate this approach by considering a large but finite
space of sentences that includes all instances of the templates in Table 1 and all conjunctions of
these instances. When instantiating one of these templates, each occurrence of Di or Dj should be
replaced by one of the dimensions in the domain. For example, the schema in Figure 1 is a simplified
instance of template 6 where Di is replaced by D1 . Similarly, each instance of vk or vl should be
replaced by a value along one of the dimensions. Our first experiment considers a problem where
there are three dimensions and three possible values along each dimension (i.e. vk = 1, 2, or
3). As a result there are 1568 distinct instances of the templates in Table 1 and roughly one million
conjunctions of these instances. Our second experiment uses three dimensions with five values along
each dimension, which leads to 2768 template instances and roughly three million conjunctions of
these instances.
The templates in Table 1 capture most of the simple regularities that can be formulated in our logical
language. Template 1 generates all rules that include quantification over a single object variable and
no binary connectives. Template 3 is similar but includes a single binary connective. Templates
2 and 4 are similar to 1 and 3 respectively, but include two object variables (x and y) rather than
one. Templates 5, 6 and 7 add quantification over dimensions to Templates 2 and 4. Although the
templates in Table 1 capture a large class of regularities, several kinds of templates are not included.
Since we do not assume that the dimensions are commensurable, values along different dimensions
cannot be directly compared (∀x D1(x) = D2(x) is not permitted). For the same reason, comparisons to a dimension value must involve a concrete dimension (∀x D1(x) = 1 is permitted) rather
than a dimension variable (∀Q ∀x Q(x) = 1 is not permitted). Finally, we exclude all schemata
where quantification over objects precedes quantification over dimensions, and as a result there are
some simple schemata that our implementation cannot learn (e.g. ∀x∀y∃Q Q(x) = Q(y)).
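To make the template construction concrete, here is a hedged sketch (identifiers are ours) of how the instances of Template 1 can be enumerated; the other templates, and the conjunctions of instances, follow from the same product construction.

import itertools

DIMS = [0, 1, 2]                 # D1, D2, D3
VALUES = [1, 2, 3]
RELS = {'=': lambda a, b: a == b, '!=': lambda a, b: a != b,
        '<': lambda a, b: a < b, '>': lambda a, b: a > b}

def template1_instances():
    """Yield (name, predicate) pairs for {forall x, exists x} Di(x) R vk."""
    for quant, di, (rname, rel), vk in itertools.product(
            ('forall', 'exists'), DIMS, RELS.items(), VALUES):
        def pred(group, di=di, rel=rel, vk=vk, quant=quant):
            tests = [rel(card[di], vk) for card in group]
            return all(tests) if quant == 'forall' else any(tests)
        yield f"{quant} x: D{di + 1}(x) {rname} {vk}", pred

schemas = list(template1_instances())
# 2 quantifiers * 3 dimensions * 4 relations * 3 values = 72 instances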
The extension of each schema is a set of groups, and schemata with the same extension can be
assigned to the same equivalence class. For example, ∀x D1(x) = v1 (an instance of template 1)
and ∀x D1(x) = v1 ∧ D1(x) = v1 (an instance of template 3) end up in the same equivalence class.
Each equivalence class can be represented by the shortest sentence that it contains, and we define
our prior P(s) over a set that includes a single representative for each equivalence class. The prior
probability P(s) of each sentence decreases with its length: P(s) ∝ λ^|s|, where |s| is
the length of schema s and λ is a constant between 0 and 1. For all applications in this paper we set
λ = 0.8.
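A minimal sketch of the equivalence-class and prior construction, reusing the (name, predicate) pairs and groups from the sketches above; the printed length of a schema stands in for the length of the logical sentence, which is only an approximation of the paper's definition.

def build_prior(schemas, groups, lam=0.8):
    """Keep one shortest representative per equivalence class (schemata
    with equal extensions) and set P(s) proportional to lam ** |s|,
    normalized.  Hedged sketch; representation choices are ours."""
    best = {}
    for name, pred in schemas:
        ext = frozenset(g for g in groups if pred(g))
        if ext not in best or len(name) < len(best[ext][0]):
            best[ext] = (name, pred)
    reps = list(best.values())
    weights = [lam ** len(name) for name, _ in reps]
    z = sum(weights)
    return [(name, pred, w / z) for (name, pred), w in zip(reps, weights)]

prior = build_prior(schemas, groups)     # list of (name, pred, P(s))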
The generative model in Figure 1 can be used for several purposes, including schema learning (inferring a schema s given one or more instances generated from the schema), classification (deciding
whether group gnew belongs to a category given one or more instances of the category) and generation (generating a group gnew that belongs to the same category as one or more instances). Our first
experiment explores all three of these problems.
2 Experiment 1: Relational classification
Our first experiment is organized around a triad task where participants are shown one example of a
category then asked to decide which of two choice examples is more likely to belong to the category.
Triad tasks are regularly used by studies of relational categorization, and have been used to argue
for the importance of comparison [1]. A comparison-based approach to this task, for instance, might
compare the example object to each of the choice objects in order to decide which is the better
match. Our first experiment is intended in part to explore whether a schema-learning approach can
also account for inferences about triad tasks.
Materials and Method. 18 adults participated for course credit and interacted with a custom-built
computer interface. The stimuli were groups of figures that varied along three dimensions (color,
size, and ball position, as in Figure 1). Each shape was displayed on a single card, and all groups in
Experiment 1 included exactly three cards. The cards in Figure 1 show five different values along
each dimension, but Experiment 1 used only three values along each dimension.
The experiment included inferences about 10 triads. Participants were told that aliens from a certain
planet "enjoy organizing cards into groups," and that "any group of cards will probably be liked
by some aliens and disliked by others." The ten triad tasks were framed as questions about the
preferences of 10 aliens. Participants were shown a group that Mr X likes (different names were
used for the ten triads), then shown two choice groups and told that "Mr X likes one of these groups
but not the other." Participants were asked to select one of the choice groups, then asked to generate
another 3-card group that Mr X would probably like. Cards could be added to the screen using an
"Add Card" button, and there were three pairs of buttons that allowed each card to be increased or
decreased along the three dimensions. Finally, participants were asked to explain in writing "what
kind of groups Mr X likes."
The ten triads used are shown in Figure 2. Each group is represented as a 3 by 3 matrix where
rows represent cards and columns show values along the three dimensions. Triad 1, for example,
[Figure 2 panels: (a) D1 value always 3; (b) D2 uniform; (c) D2 and D3 aligned; (d) D1 and D3 anti-aligned; (e) Two dimensions aligned; (f) Two dimensions anti-aligned; (g) All dimensions uniform; (h) Some dimension uniform; (i) All dimensions have no repeats; (j) Some dimension has no repeats]
Figure 2: Human responses and model predictions for the ten triads in Experiment 1. The plot at the
left of each panel shows model predictions (white bars) and human preferences (black bars) for the
two choice groups in each triad. The plots at the right of each panel summarize the groups created
during the generation phase. The 23 elements along the x-axis correspond to the regularities listed
in Table 2.
1 All dimensions aligned                    13 One dimension has no repeats
2 Two dimensions aligned                    14 D1 has no repeats
3 D1 and D2 aligned                         15 D2 has no repeats
4 D1 and D3 aligned                         16 D3 has no repeats
5 D2 and D3 aligned                         17 All dimensions uniform
6 All dimensions aligned or anti-aligned    18 Two dimensions uniform
7 Two dimensions anti-aligned               19 One dimension uniform
8 D1 and D2 anti-aligned                    20 D1 uniform
9 D1 and D3 anti-aligned                    21 D2 uniform
10 D2 and D3 anti-aligned                   22 D3 uniform
11 All dimensions have no repeats           23 D1 value is always 3
12 Two dimensions have no repeats
Table 2: Regularities used to code responses to the generation tasks in Experiments 1 and 2
has an example group including three cards that each take value 3 along D1. The first choice group
is consistent with this regularity but the second choice group is not. The cards in each group were
arrayed vertically on screen, and were initially sorted as shown in Figure 2 (i.e. first by D3, then by
D2 and then by D1 ). The cards could be dragged around on screen, and participants were invited
to move them around in order to help them understand each group. The mapping between the three
dimensions in each matrix and the three dimensions in the experiment (color, position, and size) was
randomized across participants, and the order in which triads were presented was also randomized.
Model predictions and results. Let ge be the example group presented in the triad task and g1
and g2 be the two choice groups. We use our model to compute the relative probability of two
hypotheses: h1 which states that ge and g1 are generated from the same schema and that g2 is sampled randomly from all possible groups, and h2 which states that ge and g2 are generated from the
same schema. We set P (h1 ) = P (h2 ) = 0.5, and compute posterior probabilities P (h1 |ge , g1 , g2 )
and P (h2 |ge , g1 , g2 ) by integrating over all schemata in the hypothesis space already described.
Our model assumes that two groups are considered similar to the extent that they appear to have
been generated by the same underlying schema, and is consistent with the generative approach to
similarity described by Kemp et al. [19].
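A hedged sketch of this computation, reusing the prior representation from the sketches above: marginalize over schemata to score each hypothesis, with the "randomly sampled" group under each hypothesis treated as uniform over all groups.

def likelihood_same_schema(observed, prior, groups):
    """sum_s P(s) * prod_i P(g_i | s), with P(g|s) uniform over the
    extension of s (Equation 1).  Toy-scale sketch; names are ours."""
    total = 0.0
    for name, pred, p_s in prior:
        if not all(pred(g) for g in observed):
            continue
        ext_size = sum(1 for g in groups if pred(g))
        total += p_s * (1.0 / ext_size) ** len(observed)
    return total

def triad_posterior(ge, g1, g2, prior, groups):
    """Posterior over h1 (ge and g1 share a schema; g2 random) vs h2."""
    n = len(groups)
    p_h1 = 0.5 * likelihood_same_schema([ge, g1], prior, groups) / n
    p_h2 = 0.5 * likelihood_same_schema([ge, g2], prior, groups) / n
    z = p_h1 + p_h2
    return (0.5, 0.5) if z == 0.0 else (p_h1 / z, p_h2 / z)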
Model predictions for the ten triads are shown in Figure 2. In each case, the choice probabilities
plotted (white bars) are the posterior probabilities of hypotheses h1 and h2 . In nine out of ten cases
the best choice according to the model is the most common human response. Responses to triads 2c
and 2d support the idea that people are sensitive to relationships between dimensions (i.e. alignment
and anti-alignment). Triads 2e and 2f are similar to triads studied by Kotovsky and Gentner [1], and
we replicate their finding that people are sensitive to relationships between dimensions even when
the dimensions involved vary from group to group. The one case where human responses diverge
from model predictions is shown in Figure 2h. Note that the schema for this triad involves existential
quantification over dimensions (some dimension is uniform), and according to our prior P(s) this
kind of quantification is no more complex than other kinds of quantification. Future applications of
our approach can explore the idea that existential quantification over dimensions (∃Q) is psychologically more complex than universal quantification over dimensions (∀Q) or existential quantification
over cards (∃x), and can consider logical languages that incorporate this inductive bias.
To model the generation phase of the experiment we computed the posterior distribution
        P(gnew | ge, g1, g2) = Σ_{s,h} P(gnew | s) P(s | h, ge, g1, g2) P(h | ge, g1, g2)
where P (h|ge , g1 , g2 ) is the distribution used to model selections in the triad task. Since the space
of possible groups is large, we visualize this distribution using a profile that shows the posterior
probability assigned to groups consistent with the 23 regularities shown in Table 2. The white bar
plots in Figure 2 show profiles predicted by the model, and the black plots immediately above show
profiles computed over the groups generated by our 18 participants.
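One way to compute such a profile, sketched under the simplifying assumption that we condition on a single set of observed groups rather than on the full mixture over h (all names here are ours):

def generation_posterior(observed, prior, groups):
    """P(g_new | observed) = sum_s P(g_new | s) P(s | observed)."""
    weighted = []
    for name, pred, p_s in prior:
        if not all(pred(g) for g in observed):
            continue
        ext = [g for g in groups if pred(g)]
        weighted.append((ext, p_s * (1.0 / len(ext)) ** len(observed)))
    z = sum(w for _, w in weighted)
    predictive = {}
    for ext, w in weighted:
        for g in ext:
            predictive[g] = predictive.get(g, 0.0) + (w / z) / len(ext)
    return predictive

# A profile entry is then the total predictive mass assigned to the groups
# satisfying one of the 23 regularities in Table 2.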
In many of the 10 cases the model accurately predicts regularities in the groups generated by people.
In case 2c, for example, the model correctly predicts that generated groups will tend to have no
repeats along dimensions D2 and D3 (regularities 15 and 16) and that these two dimensions will be
aligned (regularities 2 and 5). There are, however, some departures from the model's predictions,
and a notable example occurs in case 2d. Here the model detects the regularity that dimensions D1
and D3 are anti-aligned (regularity 9). Some groups generated by participants are consistent with
[Figure 3 panels: (a) All dimensions aligned; (b) D2 and D3 aligned; (c) D1 has no repeats, D2 and D3 uniform; (d) D2 uniform; (e) All dimensions uniform; (f) All dimensions have no repeats]
Figure 3: Human responses and model predictions for the six cases in Experiment 2. In (a) and (b),
the 4 cards used for the completion and generation phases are shown on either side of the dashed line
(completion cards on the left). In the remaining cases, the same 4 cards were used for both phases.
The plots at the right of each panel show model predictions (white bars) and human responses (black
bars) for the generation task. In each case, the 23 elements along each x-axis correspond to the
regularities listed in Table 2. The remaining plots show responses to the completion task. There are
125 possible responses, and the four responses shown always include the top two human responses
and the top two model predictions.
this regularity, but people also regularly generate groups where two dimensions are aligned rather
than anti-aligned (regularity 2). This result may indicate that some participants are sensitive to
relationships between dimensions but do not consider the difference between a positive relationship
(alignment) and an inverse relationship (anti-alignment) especially important.
Kotovsky and Gentner [1] suggest that comparison can explain how people respond to triad tasks,
although they do not provide a computational model that can be compared with our approach. It is
less clear how comparison might account for our generation data, and our next experiment considers
a one-shot generation task that raises even greater challenges for a comparison-based approach.
3 Experiment 2: One-shot schema learning
As described already, comparison involves constructing mappings between pairs of category instances. In some settings, however, learners make confident inferences given a single instance of a
category [15, 20], and it is difficult to see how comparison could play a major role when only one
instance is available. Models that rely on abstraction, however, can naturally account for one-shot
relational learning, and we designed a second experiment to evaluate this aspect of our approach.
Several previous studies have explored one-shot relational learning. Holyoak and Thagard [21]
developed a study of analogical reasoning using stories as stimuli and found little evidence of one-shot schema learning. Ahn et al. [11] demonstrated, however, that one-shot learning can be achieved
with complex materials such as stories, and modeled this result using explanation-based learning.
Here we use much simpler stimuli and explore a probabilistic approach to one-shot learning.
Materials and Method. 18 adults participated for course credit. The same individuals completed
Experiments 1 and 2, and Experiment 2 was always run before Experiment 1. The same computer
interface was used in both experiments, and the only important difference was that the ?gures in
Experiment 2 could now take ?ve values along each dimension rather than three.
The experiment included two phases. During the generation phase, participants saw a 4-card group
that Mr X liked and were asked to generate two 5-card groups that Mr X would probably like.
During the completion phase, participants were shown four members of a 5-card group and were
asked to generate the missing card. The stimuli used in each phase are shown in Figure 3. In the
?rst two cases, slightly different stimuli were used in the generation and completion phases, and in
all remaining cases the same set of four cards was used in both cases. All participants responded to
the six generation questions before answering the six completion questions.
Model predictions and results. The generation phase is modeled as in Experiment 1, but now the
posterior distribution P (gnew |ge ) is computed after observing a single instance of a category. The
human responses in Figure 3 (white bars) are consistent with the model in all cases, and confirm that
a single example can provide sufficient evidence for learners to acquire a relational category. For
example, the most common response in case 3a was the 5-card group shown in Figure 1, a group
with all three dimensions aligned.
To model the completion phase, let oe represent a partial observation of group ge . Our model
infers which card is missing from ge by computing the posterior distribution P(ge | oe) ∝ P(oe | ge) Σ_s P(ge | s) P(s), where P(oe | ge) captures the idea that oe is generated by randomly concealing one component of ge. The white bars in Figure 3 show model predictions, and in five out of
six cases the best response according to the model is the same as the most common human response.
In the remaining case (Figure 3d) the model generates a diffuse distribution over all cards with value
3 on dimension 2, and all human responses satisfy this regularity.
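A minimal sketch of the completion computation for a toy domain small enough to enumerate (names are ours; the paper does not describe its implementation, and the experiment's full 5-value space would need a cleverer treatment than brute force):

def completion_posterior(o_visible, prior, all_cards, groups):
    """P(g_e | o_e) ∝ P(o_e | g_e) * sum_s P(g_e | s) P(s), where o_e is
    g_e with one card concealed uniformly at random (hedged sketch)."""
    ext_size = [sum(1 for g in groups if pred(g)) for _, pred, _ in prior]

    def p_obs(o, g):
        # fraction of concealments of g that leave exactly the cards in o
        n = len(g)
        return sum(sorted(g[:i] + g[i + 1:]) == sorted(o)
                   for i in range(n)) / n

    post = {}
    for card in all_cards:                       # candidate identities of
        g = tuple(sorted(o_visible + (card,)))   # the concealed card
        lik = p_obs(o_visible, g)
        p_g = sum(p_s / ext_size[i]
                  for i, (name, pred, p_s) in enumerate(prior)
                  if ext_size[i] and pred(g))
        if lik > 0.0 and p_g > 0.0:
            post[card] = lik * p_g
    z = sum(post.values())
    return {card: w / z for card, w in post.items()}

# all_cards can be built as in the earlier sketch, e.g.
# list(itertools.product(range(1, 4), repeat=3)) for the 3-value domain.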
4 Conclusion
We presented a generative model that helps to explain how relational categories are learned and
used. Our approach captures relational regularities using a logical language, and helps to explain
how schemata formulated in this language can be learned from observed data. Our approach differs
in several respects from previous accounts of relational categorization [1, 5, 10, 22]. First, we focus
on abstraction rather than comparison. Second, we consider tasks where participants must generate
examples of categories [16] rather than simply classify existing examples. Finally, we provide a
formal account that helps to explain how relational categories can be learned from a single instance.
Our approach can be developed and extended in several ways. For simplicity, we implemented our
model by working with a finite space of several million schemata, but future work can consider
hypothesis spaces that assign non-zero probability to all regularities that can be formulated in the
language we described. The specific logical language used here is only a starting point, and future
work can aim to develop languages that provide a more faithful account of human inductive biases.
Finally, we worked with a domain that provides one of the simplest ways to address core questions
such as one-shot learning. Future applications of our general approach can consider domains that
include more than three dimensions and a richer space of relational regularities.
Relational learning and analogical reasoning are tightly linked, and hierarchical generative models
provide a promising approach to both problems. We focused here on relational categorization, but
future studies can explore whether probabilistic accounts of schema learning can help to explain
the inductive inferences typically considered by studies of analogical reasoning. Although there are
many models of analogical reasoning, there are few that pursue a principled probabilistic approach,
and the hierarchical Bayesian approach may help to fill this gap in the literature.
Acknowledgments We thank Maureen Satyshur for running the experiments. This work was supported in part
by NSF grant CDI-0835797.
References
[1] L. Kotovsky and D. Gentner. Comparison and categorization in the development of relational similarity. Child Development, 67:2797–2822, 1996.
[2] D. Gentner and A. B. Markman. Structure mapping in analogy and similarity. American Psychologist, 52:45–56, 1997.
[3] D. Gentner and J. Medina. Similarity and the development of rules. Cognition, 65:263–297, 1998.
[4] B. Falkenhainer, K. D. Forbus, and D. Gentner. The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41:1–63, 1989.
[5] J. E. Hummel and K. J. Holyoak. A symbolic-connectionist theory of relational inference and generalization. Psychological Review, 110:220–264, 2003.
[6] M. Mitchell. Analogy-making as perception: a computer model. MIT Press, Cambridge, MA, 1993.
[7] D. R. Hofstadter and the Fluid Analogies Research Group. Fluid concepts and creative analogies: computer models of the fundamental mechanisms of thought. 1995.
[8] W. V. O. Quine and J. Ullian. The Web of Belief. Random House, New York, 1978.
[9] J. Skorstad, D. Gentner, and D. Medin. Abstraction processes during concept learning: a structural view. In Proceedings of the 10th Annual Conference of the Cognitive Science Society, pages 419–425. 1988.
[10] D. Gentner and J. Loewenstein. Relational language and relational thought. In E. Amsel and J. P. Byrnes, editors, Language, literacy and cognitive development: the development and consequences of symbolic communication, pages 87–120. 2002.
[11] W. Ahn, W. F. Brewer, and R. J. Mooney. Schema acquisition from a single example. Journal of Experimental Psychology: Learning, Memory and Cognition, 18(2):391–412, 1992.
[12] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian data analysis. Chapman & Hall, New York, 2nd edition, 2003.
[13] C. Kemp, N. D. Goodman, and J. B. Tenenbaum. Learning and using relational theories. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 753–760. MIT Press, Cambridge, MA, 2008.
[14] S. Kok and P. Domingos. Learning the structure of Markov logic networks. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[15] J. Feldman. The structure of perceptual categories. Journal of Mathematical Psychology, 41:145–170, 1997.
[16] A. Jern and C. Kemp. Category generation. In Proceedings of the 31st Annual Conference of the Cognitive Science Society, pages 130–135. Cognitive Science Society, Austin, TX, 2009.
[17] D. Conklin and I. H. Witten. Complexity-based induction. Machine Learning, 16(3):203–225, 1994.
[18] J. B. Tenenbaum and T. L. Griffiths. Generalization, similarity, and Bayesian inference. Behavioral and Brain Sciences, 24:629–641, 2001.
[19] C. Kemp, A. Bernstein, and J. B. Tenenbaum. A generative theory of similarity. In B. G. Bara, L. Barsalou, and M. Bucciarelli, editors, Proceedings of the 27th Annual Conference of the Cognitive Science Society, pages 1132–1137. Lawrence Erlbaum Associates, 2005.
[20] C. Kemp, N. D. Goodman, and J. B. Tenenbaum. Theory acquisition and the language of thought. In Proceedings of the 30th Annual Conference of the Cognitive Science Society, pages 1606–1611. Cognitive Science Society, Austin, TX, 2008.
[21] K. J. Holyoak and P. Thagard. Analogical mapping by constraint satisfaction. Cognitive Science, 13(3):295–355, 1989.
[22] L. A. A. Doumas, J. E. Hummel, and C. M. Sandhofer. A theory of the discovery and predication of relational concepts. Psychological Review, 115(1):1–43, 2008.
[23] M. L. Gick and K. J. Holyoak. Schema induction and analogical transfer. Cognitive Psychology, 15:1–38, 1983.
2,934 | 366 | Learning to See Rotation and
Dilation with a Hebb Rule
Martin I. Sereno and Margaret E. Sereno
Cognitive Science D-015
University of California, San Diego
La Jolla, CA 92093-0115
Abstract
Previous work (M.I. Sereno, 1989; cf. M.E. Sereno, 1987) showed that a
feedforward network with area VI-like input-layer units and a Hebb rule
can develop area MT-like second layer units that solve the aperture
problem for pattern motion. The present study extends this earlier work
to more complex motions. Saito et al. (1986) showed that neurons with
large receptive fields in macaque visual area MST are sensitive to
different senses of rotation and dilation, irrespective of the receptive field
location of the movement singularity. A network with an MT-like
second layer was trained and tested on combinations of rotating, dilating,
and translating patterns. Third-layer units learn to detect specific senses
of rotation or dilation in a position-independent fashion, despite having
position-dependent direction selectivity within their receptive fields.
1
INTRODUCTION
The visual systems of mammals and especially primates are capable of prodigious feats of
movement. object. and scene recognition under noisy conditions--feats we would like to
copy with artificial networks. We are just beginning to understand how biological
networks are wired up during development and during learning in the adult. Even at this
stage. however. it is clear that explicit error signals and the apparatus for propagating them
backwards across layers are probably not involved. On the other hand. there is a growing
body of evidence for connections whose strength can be modified (via NMDA channels)
as functions of the correlation between pre- and post-synaptic activity. The present project
was to try to learn to detect pattern rotation and dilation by example. using a simple Hebb
320
Learning to See Rotation and Dilation with a Hebb Rule
rule. By building up complex filters in stages using a simple, realistic learning rule, we
reduce the complexity of what must be learned with more explicit supervision at higher
levels.
1.1
ORIENT ATION SELECTIVITY
Some of the connections responsible for the selectivity of cortical neurons to local stimulus
features develop in the absence of patterned visual experience. For example, primary
visual cortex (VI or area 17) contains orientation-selective neurons at birth in several
animals. Linsker (1986a,b) has shown that feedforward networks with gaussian
topographic interlayer connections, linear summation, and simple hebb rules, develop
orientation selective units in higher layers when trained on noise. In his linear system,
weight updates for a layer can be written as a function of the two-point correlation
characterizing the previous layer. Noise applied to the input layer causes the emergence of
connections that generate gaussian correlations at the second layer. This in tum drives the
development of more complex correlation functions in the third layer (e.g., difference-ofgaussians). Rotational symmetry is broken in higher layers with the emergence of Gaborfunction-like connection patterns reminiscent of simple cells in the cortex.
1.2 PATTERN MOTION SELECTIVITY
The ability to see coherent motion fields develops late in primates. Human babies, for
example, fail to see the transition from unstructured to structured motion--e.g., the
transition between randomly moving dots and circular 2-D motion--for several months.
The transition from horizontally moving dots with random y-axis velocities to dots with
sinusoidal y-axis velocities (which gives the percept of a rotating 3-D cylinder) is seen even
later (Spitz, Stiles-Davis, & Siegel, 1988). This suggests that the cortex requires many
experiences of moving displays in order to learn how to recognize the various types of
coherent texture motions.
However, orientation gradients, shape from shading, and pattern translation, dilation, and
rotation cannot be detected with the kinds of filters that can be generated solely by noise.
The correlations present in visual scenes are required in order for these higher level filters
to arise.
1.3
NEUROPHYSIOLOGICAL MOTIVATION
Moving stimuli are processed in successive stages in primate visual cortical areas. The first
cortical stage is layer 4Ca of VI, which receives its main ascending input from the
magnocellular layers of the lateral geniculate nucleus. Layer 4Ca projects to layer 4B,
which contains many tightly-tuned direction-selective neurons. These neurons, however,
respond to moving contours as if these contours were moving perpendicular to their local
orientation (Movshon et at, 1985).
Layer 4B neurons project directly and indirectly to area MT, where a subset of neurons
show a relatively narrow peak in the direction tuning curve for a plaid that is lined up with
the peak for a single grating. These neurons therefore solve the aperture problem for
pattern translation presented to them by the local motion detectors in layer 4 B of VI. MT
neurons, however, appear to be largely blind to the sense of pattern rotation or dilation
(Saito et al., 1986). Thus, there is a higher order 'aperture problem' that is solved by the
neurons in the parts of areas MST and 7a that distinguish senses of pattern rotation and
321
322
Sereno and Sereno
dilation. The present model provides a rationale for how these stages might naturally arise
in development.
2
RESULTS
In previous work (M.1. Sereno, 1989; cf. M.E. Sereno, 1987) a simple 2-layer feedforward
architecture sufficed for an MT-like solution to the aperture problem for local translational
motion. Units in the fIrst layer were granted tuning curves like those in VI, layer 4B. Each
first-layer unit responded to a particular range of directions and speeds of the component
of movement perpendicular to a local contour. Second layer units developed MT-like
receptive fields that solved the aperture problem for local pattern translation when trained
on locally jiggled gratings rigidly moving in randomly chosen pattern directions.
2.1
NETWORK ARCHITECTURE
A similar architecture was used for second-to-third layer connections (see Fig. l--a sample
network with 5 directions and 3 speeds). As with Linsker, a new input layer was
constructed from a canonical unit, suitably transformed. Thus, second-layer units were
granted tuning curves resembling those found in MT (as well as those generated by firstto-second layer leaming)--that is, they responded to the local pattern translation but were
blind to particular senses of local rotation, dilation, and shear. There were 12 different local
Third
Layer
(=MST)
~jJ
...
~.f.. probability
'-~:-----"":":..:..::.J
of
........
connection
r:
I.J . 'llxAy
Second Layer
(=MT)
First Layer
(=Vl, Layer 4B)
Figure 1: Network Architecture
pattern directions and 4 different local pattern speeds at each x -y location (48 different units
at each of 100 x-y points). Second-layer excitatory tuning curves were piecewise linear
with half-height overlap for both direction and speed. Direction tuning was set to be 2-3
times as important as speed tuning in determining the activation of input units. Input units
Learning to See Rotation and Dilation with a Hebb Rule
generated untuned feedforward inhibition for off-directions and off-speeds. Total
inhibition was adjusted to balance total excitation. The probability that a unit in the first
layer connected to a unit in the second layer fell off as a gaussian centered on the
retinotopically equivalent point in the second layer. Since receptive fields in areas MST
and 7a are large. the interlayer divergence was increased relative to the divergence in the
first-to-second layer connections. Third layer units received several thousand connections.
The network is similar to that of Linsker except that there is no activity-independent decay
(k j ) for synaptic weights and no offset (k2) for the correlation term. The activation. outj ?
for each unit is a linear weighted sum of its inputs. ini scaled by a, and clipped to maximum
and minimum values:
out}.
={
a.~)niWeightij
i
outmax, mill
.
Weights are also clipped to maximum and minimum values. The change in each weight,
tJ.weightij' is a simple fraction, 8, of the product of the pre- and post-synaptic values:
/}weight I}.. = f,in I
.out.
}
The learning rate, 8, was set so that about 1.000 patterns could be presented before most
weights saturated. The stable second-layer weight patterns seen by Linsker (1986a) are
reproduced by this model when it is trained on noise input. However, since it lacks k2' it
cannot generate center-surround weight structures given only gaussian correlations as
input.
2.2 TRAINING PATTERNS
Second-to-third layer connections were trained with full or partial field rotations, dilations,
and translations. Each stimulus consisted of a set of local pattern motions at each x-y point
that were: 1) rotating clockwise or counterclockwise around, 2) dilating or contracting
toward, or 3) translating through a randomly chosen location. The singularity was always
within the input array. Both full and partial field rotations and dilations were effective
training stimuli for generating rotation and dilation selectivity.
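A sketch of how such flow-field stimuli could be generated (grid size, sampling details and function names are our assumptions; binning into the 12 directions and 4 speeds is left out):

```python
import numpy as np

def make_flow(kind, shape=(10, 10), rng=None):
    """One training stimulus: a local motion vector at each x-y point,
    rotating around, dilating toward, or translating through a randomly
    chosen singularity inside the array."""
    rng = rng or np.random.default_rng()
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    cy, cx = rng.uniform(0, shape[0]), rng.uniform(0, shape[1])
    dy, dx = ys - cy, xs - cx
    sign = rng.choice([-1.0, 1.0])        # clockwise/counterclockwise, dilate/contract
    if kind == 'rotate':
        vx, vy = -sign * dy, sign * dx    # tangential flow field
    elif kind == 'dilate':
        vx, vy = sign * dx, sign * dy     # radial flow field
    else:                                 # 'translate': same vector everywhere
        ang = rng.uniform(0, 2 * np.pi)
        vx = np.full(shape, np.cos(ang))
        vy = np.full(shape, np.sin(ang))
    return vx, vy                         # to be binned into direction/speed units
```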
2.3 POSITION-INDEPENDENT TUNING CURVES
Post-training rotation and dilation tuning curves for different receptive-field locations were
generated for many third-layer units using paradigms similar to those used on real neurons.
The location of the motion singularity of the test stimulus was varied across layer two.
Third-layer units often responded selectively to a particular sense of rotation or dilation at
each visual field test location. A sizeable fraction of units (10-60%) responded in a
position-independent way after unsupervised learning on rotating and dilating fields.
Similar responses were found using both partial- and full-field test stimuli.
These units thus resemble the neurons in primate visual area MSTd (10-40% of the total
there) recorded by Saito et al. (1986), Duffy and Wurtz (1990), and Andersen et al. (1990)
that showed position-independent responses to rotations and dilations. Other third-layer
units had position-dependent tuning--that is, they changed their selectivity for stimuli
centered at different visual field locations, as, in fact, do a majority of actual MSTd
neurons.
2.4 POSITION-DEPENDENT WEIGHT STRUCTURES
Given the position- independence of the selective response to rotations and/or dilations in
some of the third-layer units, it was surprising to find that most such units had weight
structures indicating that local direction sensitivity varied systematically across a unit's
receptive field. Regions of maximum weights in direction-speed subspace tended to vary
smoothly across x-y space such that opposite ends of the receptive field were sensitive to
opposite directions. This picture obtained with full and medium-sized partial field training
examples, breaking down only when the rotating and dilating training patterns were
substantially smaller than the receptive fields of third-layer units. In the last case, smooth
changes in direction selectivity across space were interrupted at intervals by discontinuities.
An essentially position-independent tuning curve is achieved because any off-center
clockwise rotation that has its center within the receptive field of a unit selective for
clockwise rotation will activate a much larger number of input units connected with large
positive weights than will any off-center counterclockwise rotation (see Fig. 2).
[Figure 2: Position-dependent weights and position-independent responses. Panels show the local direction selectivity across the receptive field of a trained unit sensitive to clockwise rotation, and the test for position invariance by rotating an off-center stimulus within the receptive field: most local directions match the clockwise stimulus and clash with the opposite rotation.]
Saito et al. (1986), Duffy & Wurtz (1990), and Andersen et al. (1990) have all suggested
that true translationally-invariant detection of rotation and dilation sense must involve
several hierarchical processing stages and a complex connection pattern. The present
results show that position-independent responses are exhibited by units with position-dependent local direction selectivity, as originally exhibited with small stimuli in area 7a
by Motter and Mountcastle (1981).
2.5 WHY WEIGHTS ARE PERIODIC IN DIRECTION-SPEED SUBSPACE
For all training sets, the receptive fields of all units contained regions of all-max weights
and all-min weights within the direction-speed subspace at each x-y point. For comparison,
if the model is trained on uncorrelated direction noise (a different random local direction at
each x-y point), third-layer input weight structures still exhibit regions of all-max and all-min weights in the direction-speed subspace at each x-y point in the second layer. In
contrast to weight structures generated by rigid motion, however, the locations of these
regions for a unit are not correlated across x-y space. These regions emerge at each x-y
location because the overlap in the input unit tuning curves generates local two-point
correlations in direction-speed subspace that are amplified by a Hebb rule (Linsker, 1986a).
This mechanism prevents more complex weight structures (like those envisaged by the
neurophysiologists and those generated by backpropagation) from emerging. The two-point correlations across x-y space generated by jiggled gratings, or by the rotation and
dilation training sets, serve to align the all-max or all-min regions in the case of translation
sensitivity, or generate smooth gradients in the case of sensitivity to rotation and dilation.
2.6 WHY MT DOES NOT LEARN TO DETECT ROTATION AND DILATION
Saito et al. (1986) demonstrated that MT neurons are not selective for particular senses of
pattern rotation and dilation, but only for particular pattern translations (MT neurons will
of course respond to a part of a large rotation or dilation that locally approximates the unit's
translational directional tuning). MT neurons in the present model do not develop this
selectivity even when trained on rotating and dilating stimuli because of the smaller
divergence in the first layer (V1) to second layer (MT) connection. The local views of
rotations and dilations seen by MT are apparently noise-like enough that any second order
selectivity is averaged out. A larger (unrealistic) divergence allows a few units to solve the
aperture problem and detect rotation and dilation in one step.
Training sets that contain many pure-translation stimuli along with the rotating and dilating
stimuli fail to bring about the emergence of selectivity to senses of rotation and dilation
(most units reliably detect only particular translations in this case). Satisfactory
performance is achieved only if the translating stimuli are on average smaller than the
rotating and dilating stimuli. This may point to a regularity in the poorly characterized
stimulus set that the real visual system experiences, and perhaps in this case, has come to
depend on for normal development.
DISCUSSION
This exercise found a particularly simple solution to our problem that in retrospect should
have been obvious from first principles. The present results suggest that this simple
solution is also easily learned with a simple Hebb rule. Two points warrant discussion.
First, this model achieves a reasonable degree of translational invariance in the detection of
several simple kinds of pattern motion despite having weight structures that approximate a
simple centered template. Such a solution to approximately translationally invariant
pattern detection may be applicable, and more importantly, practically learnable, for other
more complex patterns, as long as the local features of interest vary reasonably smoothly
and the pattern is not presented too far off-center. These constraints may characterize many
foveated objects.
Second, given that the tuning curves for particular stimulus features often change in a
continuous fashion as one moves across the cortex (e.g., orientation tuning, direction
tuning), there is likely to be a pervasive tendency in the cortex for receptive fields in higher
areas to be constructed from subunits that receive strong connections from nearby cells in
the lower area.
Acknowledgements
We thank Udo Wehmeier, Nigel Goddard, and David Zipser for help and discussions.
Networks and displays were constructed on the Rochester Connectionist Simulator.
References
Andersen, R., M. Graziano, and R. Snowden (1990) Translational invariance and
attentional modulation of MST cells. Soc. Neurosci., Abstr. 16:7.
Duffy, C.J. and RH. Wurtz (1990) Organization of optic flow sensitive receptive fields in
cortical area MST. Soc. Neurosci., Abstr. 16:6.
Linsker, R. (1986a) From basic network principles to neural architecture: emergence of
spatial-opponent cells. Proc. Nat. Acad. Sci. 83, 7508-7512.
Linsker, R (1986b) From basic network principles to neural architecture: emergence of
orientation-selective cells. Proc. Nat. Acad. Sci. 83, 8390-8394.
Motter, B.C. and V.B. Mountcastle (1981) The functional properties of the light-sensitive
neurons of the posterior parietal cortex studied in waking monkeys: foveal sparing and
opponent vector organization. Jour. Neurosci. 1:3-26.
Movshon, J.A., E.H. Adelson, M.S. Gizzi, and W.T. Newsome (1985) Analysis of moving
visual patterns. In C. Chagas, R. Gattass, and C. Gross (eds.), Pattern Recognition
Mechanisms. Springer-Verlag, pp. 117-151.
Saito, H., M. Yukie, K. Tanaka, K. Hikosaka, Y. Fukada and E. Iwai (1986) Integration of
direction signals of image motion in the superior temporal sulcus of the macaque
monkey. Jour. Neurosci. 6:145-157.
Sereno, M.E. (1987) Modeling stages of motion processing in neural networks.
Proceedings of the 9th Annual Cognitive Science Conference, pp. 405-416.
Sereno, M.I. (1988) The visual system. In I.W.v. Seelen, U.M. Leinhos, & G. Shaw (eds.),
Organization of Neural Networks. VCH, pp.176-184.
Sereno, M.I. (1989) Learning the solution to the aperture problem for pattern motion with
a hebb rule. In D.S. Touretzky (ed.), Advances in Neural Information Processing
Systems I. Morgan Kaufmann Publishers, pp. 468-476.
Spitz, R.V., J. Stiles-Davis and R.M. Siegel (1988) Infant perception of rotation from rigid
structure-from-motion displays. Soc. Neurosci., Abstr. 14:1244.
Discriminative Network Models of Schizophrenia
Guillermo A. Cecchi, Irina Rish
IBM T. J. Watson Research Center
Yorktown Heights, NY, USA
Marion Plaze
INSERM - CEA - Univ. Paris Sud
Research Unit U.797
Neuroimaging & Psychiatry
SHFJ & Neurospin, Orsay, France
Catherine Martelli
Departement de Psychiatrie
et d'Addictologie
Centre Hospitalier Paul Brousse
Villejuif, France
Benjamin Thyreau
Neurospin
CEA, Saclay, France
Bertrand Thirion
INRIA
Saclay, France
Marie-Laure Paillere-Martinot
AP-HP, Adolescent Psychopathology
and Medicine Dept., Maison de Solenn,
Cochin Hospital, University Paris Descartes
F-75014 Paris, France
Jean-Luc Martinot
INSERM - CEA - Univ. Paris Sud
Research Unit U.797
Neuroimaging & Psychiatry
SHFJ & Neurospin, Orsay, France
Jean-Baptiste Poline
Neurospin
CEA, Saclay, France
Abstract
Schizophrenia is a complex psychiatric disorder that has eluded a characterization
in terms of local abnormalities of brain activity, and is hypothesized to affect the
collective, 'emergent' working of the brain. We propose a novel data-driven approach to capture emergent features using functional brain networks [4] extracted
from fMRI data, and demonstrate its advantage over traditional region-of-interest
(ROI) and local, task-specific linear activation analyses. Our results suggest that
schizophrenia is indeed associated with disruption of global brain properties related to its functioning as a network, which cannot be explained by alteration of
local activation patterns. Moreover, further exploitation of interactions by sparse
Markov Random Field classifiers shows clear gain over linear methods, such as
Gaussian Naive Bayes and SVM, allowing to reach 86% accuracy (over 50% baseline - random guess), which is quite remarkable given that it is based on a single
fMRI experiment using a simple auditory task.
1 Introduction
It has been long recognized that extracting an informative set of application-specific features from
the raw data is essential in practical applications of machine learning, and often contributes even
more to the success of learning than the choice of a particular classifier. In biological applications,
such as brain image analysis, proper feature extraction is particularly important since the primary
objective of such studies is to gain a scientific insight rather than to learn a ?black-box? predictor;
thus, the focus shifts towards the discovery of predictive patterns, or ?biomarkers?, forming a basis
for interpretable predictive models. Conversely, biological knowledge can drive the definition of
features and lead to more powerful classification.
The objective of this work is to identify biomarkers predictive of schizophrenia based on fMRI
data collected for both schizophrenic and non-schizophrenic subjects performing a simple auditory
task in the scanner [14]. Unlike some other brain disorders (e.g., stroke or Parkinson's disease),
schizophrenia appears to be 'delocalized', i.e. difficult to attribute to a dysfunction of some particular brain areas1. The failure to identify specific areas, as well as the controversy over which
localized mechanisms are responsible for the symptoms associated with schizophrenia, have led us,
amongst others [7, 1, 10], to hypothesize that this disease may be better understood as a disruption of
the emergent, collective properties of normal brain states, which can be better captured by functional
networks [4], based on inter-voxel correlation strength, as opposed (or limited) to activation failures
localized to specific, task-dependent areas.
To test this hypothesis, we measured diverse topological features of the functional networks and
compared them across the normal subjects and schizophrenic patients groups. Specifically, we
decided to ask the following questions: (1) What specific effects does schizophrenia have on the
functional connectivity of brain networks? (2) Does schizophrenia affect functional connectivity
in ways that are congruent with the effect it has on area-specific, task-dependent activations? (3)
Is it possible to use functional connectivity to improve the classification accuracy of schizophrenic
patients?
In answer to these questions, we will show that degree maps, which assign to each voxel the number
of its neighbors in a network, identify spatially clustered groups of voxels with statistically significant group (i.e. normal vs. schizophrenic) differences; moreover, these highly significant voxel
subsets are quite stable over different data subsets. In contrast, standard linear activation maps commonly used in fMRI analysis show much weaker group differences as well as stability. Moreover,
degree maps yield very informative features, allowing for up to 86% classification accuracy (with
50% baseline), as opposed to standard local voxel activations. The best accuracy is achieved by further exploiting non-local interactions with probabilistic graphical models such as Markov Random
Fields, as opposed to linear classifiers.
Finally, we demonstrate that traditional approaches based on a direct comparison of the correlation
at the level of relevant regions of interest (ROIs) or using a functional parcellation technique [17],
do not reveal any statistically significant differences between the groups. Indeed, a more data-driven
approach that exploits properties of voxel-level networks appears to be necessary in order to achieve
high discriminative power.
2 Background and Related Work
In Functional Magnetic Resonance Imaging (fMRI), an MR scanner non-invasively records a subject's blood-oxygenation-level dependent (BOLD) signal, known to be correlated with neural activity, as a subject performs a task of interest (e.g., viewing a picture or reading a sentence). Such scans
produce a sequence of 3D images, where each image typically has on the order of 10,000-100,000
subvolumes, or voxels, and the sequence typically contains a few hundred time points, or TRs
(time repetitions). Standard fMRI analysis approaches, such as the General Linear Model (GLM)
[9], examine mass-univariate relationships between each voxel and the stimulus in order to build
so-called statistical parametric maps that associate each voxel with some statistics that reflect its
relationship to the stimulus. Commonly used activation maps depict the 'activity' level of each
voxel determined by the linear correlation of its time course with the stimulus (see Supplemental
Material for details).
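As a rough illustration of such a mass-univariate analysis, here is a sketch of a per-voxel GLM t-statistic; the regressor construction and the exact statistics used in the paper are described in its supplemental material, and the names here are ours:

```python
import numpy as np

def glm_tmaps(Y, x):
    """Y: (T, V) matrix of voxel time series; x: (T,) task regressor
    (e.g., the stimulus time course convolved with a hemodynamic response).
    Returns one t-statistic per voxel for the regressor's coefficient."""
    T = len(x)
    X = np.column_stack([x, np.ones(T)])       # slope + intercept
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    sigma2 = (resid ** 2).sum(axis=0) / (T - X.shape[1])
    c = np.linalg.inv(X.T @ X)[0, 0]           # variance factor for the slope
    return beta[0] / np.sqrt(sigma2 * c)
```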
Clearly, such univariate analysis can miss important information contained in the interactions among
voxels. Indeed, as it was shown in [8], highly predictive models of mental states can be built from
voxels with sub-maximal activation. Recently, applying multivariate predictive methods to fMRI
became an active area of research, focused on predicting 'mental states' from fMRI data [11, 13, 2].
However, our focus herein is not just predictive modeling, but rather discovery of interpretable
features with high discriminative power. Also, our problem is much more high-dimensional, since
each sample (e.g., schizophrenic vs. non-schizophrenic) corresponds to a sequence of 3D images
over about 400 time points, rather than to a single 3D image as in [11, 13, 2].
While the importance of modeling brain connectivity and interactions became widely recognized in
the current fMRI-analysis literature [6, 19, 16], practical applications of the proposed approaches
such as dynamic causal modeling [6], dynamic Bayes nets [19], or structural equations [16] were
1 This is often referred to as the disconnection hypothesis [5, 15], and can be traced back to the early research
on schizophrenia: in 1906, Wernicke [18] was the first one to postulate that anatomical disruption of association
fiber tracts is at the roots of psychosis; in fact, the term schizophrenia was introduced by Bleuler [3] in 1911,
and was meant to describe the separation (splitting) of different mental functions.
 #   ROI name                 (x,y,z) position    Anatomical position
 1   Temporal mid L           -44,-48,4           Left temporal
 2   Temporal mid et sup L    -56,-36,0           Middle and superior left temporal
 3   Frontal inf L            -40,28,0            Left inferior frontal
 4   Cuneus L                 -12,-72,24          Left cuneus
 5   Temporal sup et mid L    -52,-16,-8          Middle and superior left temporal
 6   Angular L                -44,-48,32          Left angular gyrus
 7   Temporal sup R           40,-64,24           Right superior temporal
 8   Angular R                40,-64,24           Right angular gyrus
 9   Cingulum post R          4,-32,24            Right posterior cingulum
10   ACC                      0,20,30             Anterior cingulate cortex

Figure 1: Regions of Interest and their location on standard brain.
usually limited to interactions analysis among just a few (e.g., less than 15) known brain regions
believed to be relevant to the task or phenomenon of interest. In this paper, we demonstrate that such
model-based region-of-interest (ROI) analysis may fail to reveal informative interactions which,
nevertheless, become visible at the finer-grain voxel level when using a purely data-driven, networkbased approach [4]. Moreover, while recent publications have already indicated that functional
networks in the schizophrenic brain display disrupted topological properties, we demonstrate, for
the first time, that (1) specific topological properties (e.g. voxel degrees) of functional networks can
help to construct highly-predictive schizophrenia classifiers that generalize well and (2) functional
network differences cannot be attributed to alteration of local activation patterns, a hypothesis that
was not ruled out by the results of [1, 10] and similar work.
3 Experimental Setup
The present study is a reanalysis of image datasets previously acquired according to the methodology described in [14]. Two groups of 12 subjects each were submitted to the same experimental
paradigm involving language: schizophrenic patients and age-matched normal controls (same experiment was performed with a third group of alcoholic patients, yielding similar results - see Suppl.
Materials for details). The studies had been performed after approval of the local ethics committee
and all subjects were studied after they gave written informed consent. The task is based on auditory stimuli; subjects listen to emotionally neutral sentences either in native (French) or foreign
language. Average length (3.5 sec mean) or pitch of both kinds of sentences is normalized. In order
to catch the subjects' attention, each trial begins with a short (200 ms) auditory tone, followed by
the actual sentence. The subject's attention is assessed through a simple validation task: after each
played sentence, a short pause of 750 ms is followed by a 500 ms two-syllable auditory cue, which
either belongs to the previous sentence or not; whenever the sentence was in his own language, the
subject must answer yes (the cue is part of the previous sentence) or no with push-buttons.
For each subject, two fMRI acquisition runs were acquired, each consisting of 420 scans
(from which the first 4 are discarded to eliminate T1 effect). A full fMRI run contains 96 trials, with
32 sentences in French (native), 32 sentences in foreign languages, and 32 silence interval controls.
Data were spatially realigned and warped into the MNI template and smoothed (FWHM of 5mm)
using SPM5 (www.fil.ucl.ac.uk); also, standard SPM5 motion correction was performed. Several
subjects were excluded from consideration due to excessive head motion in the scanner, leaving us with 11 schizophrenic and 11 healthy subjects, i.e. a total of 44 samples (there were two
samples per subject, corresponding to the two runs of the experiment). Each sample is associated with
roughly 53,000 voxels (after removing out-of-brain voxels from the original 53 × 63 × 46 image),
over 420 time points (TRs), i.e. with more than 22,000,000 voxels/variables. Thus, some kind of
dimensionality reduction and/or feature extraction is necessary prior to learning a predictive model.
4 Methods
We explored two different data analysis approaches aimed at discovery of discriminative patterns:
(1) model-driven approaches based on prior knowledge about the regions of interest (ROI) that are
believed to be relevant to schizophrenia, or model-based functional clustering, and (2) data-driven
approaches based on various features extracted from the fMRI data, such as standard activation maps
and a set of topological features derived from functional networks.
4.1 Model-Driven Approach using ROI
First, we decided to test whether the interactions between several known regions of interest (ROIs)
would contain enough discriminative information about schizophrenic versus normal subjects. Ten
regions of interest (ROIs) were defined using previous literature on schizophrenia and language
studies, including inferior, middle and superior left temporal cortex, left inferior temporal cortex,
left cuneus, left angular gyrus, right superior temporal, right angular gyrus, right posterior cingulum, and anterior cingulate cortex (Figure 1). Each region was defined as a sphere of 12 mm diameter
centered on the x,y,z coordinates of the corresponding ROI. Because predefined regions of interest may be based on too much a priori knowledge and miss important areas, we also ran a more
exploratory analysis. A second set of 600 ROIs was defined automatically using a parcellation algorithm [17] that estimates, for each subject, a collection of regions based on task-based functional
signal similarity and position in the MNI space.
Time series were extracted as the spatial mean over each ROI, leading to 10 time series per subject
for the predefined ROIs and 600 for the parcellation technique. The connectivity measures were
of two kinds. First, the correlation coefficient was computed along time between ROIs blindly with
respect to the experimental paradigm. Additionally, we computed a psycho-physiological interaction
(PPI), by contrasting the correlation coefficient weighted by experimental conditions (i.e. correlation
weighted by the ?Language French? condition versus correlation weighted by ?Control? condition
after convolution with a standard hemodynamic response function). Those connectivity measures
were then tested for significance using standards non parametric tests between groups (Wilcoxon
signed-rank test) with corrected p-values for multiple comparisons.
4.2 Data-driven Approach: Feature Extraction
Topological Features and Degree Maps. In order to continue investigating possible disruptions
of global brain functioning associated with schizophrenia, we decided to explore lower-level (as
compared to ROI-level) functional brain networks [4] constructed at the voxel level: (1) pair-wise
Pearson correlation coefficients are computed among all pairs of time series (v_i(t), v_j(t)), where
v_i(t) corresponds to the BOLD signal of the i-th voxel; (2) an edge between a pair of voxels (i, j) is
included in the network if the correlation between vi and vj exceeds a specified threshold (herein,
we used the same threshold of c(Pearson)=0.7 for all voxel pairs).
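A minimal sketch of this construction (for clarity only; with roughly 53,000 voxels the full V x V correlation matrix would not fit in memory and would have to be computed blockwise):

```python
import numpy as np

def functional_network(ts, thresh=0.7):
    """ts: (T, V) array of voxel time series for one subject/run.
    Returns a boolean (V, V) adjacency matrix with an edge wherever
    the pairwise Pearson correlation exceeds the threshold."""
    R = np.corrcoef(ts.T)        # (V, V) pairwise Pearson correlations
    A = R > thresh
    np.fill_diagonal(A, False)   # no self-loops
    return A
```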
For each subject, and each run, a separate functional network was constructed. Next, we measured
a number of its topological features, including the degree distribution, mean degree, the size of the
largest connected subgraph (giant component), and so on (see the supplemental material for the full
list). Besides global topological features, we also computed a series of degree maps based on the
individual voxel degree in the functional network: (1) full degree maps, where the value assigned to
each voxel is the total number of links in the corresponding network node, (2) long-distance degree
maps, where the value is the number of links making non-local connections (5 voxels apart or more),
and (3) inter-hemispheric degree maps, where only links reaching across the brain hemispheres are
considered when computing each voxel's degree.
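Given the adjacency matrix, the three degree-map variants reduce to masked row sums; a sketch, assuming MNI-like coordinates with the mid-sagittal plane at x = 0 and Euclidean voxel distance for the 5-voxel cut-off:

```python
import numpy as np

def degree_maps(A, coords):
    """A: (V, V) boolean adjacency; coords: (V, 3) voxel (x, y, z) coordinates.
    Returns full, long-distance and inter-hemispheric degree per voxel."""
    full = A.sum(axis=1)
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    long_distance = (A & (dist >= 5)).sum(axis=1)     # links 5+ voxels apart
    left = coords[:, 0] < 0                           # hemisphere of each voxel
    crossing = left[:, None] != left[None, :]
    inter_hemispheric = (A & crossing).sum(axis=1)
    return full, long_distance, inter_hemispheric
```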
Activation maps. To find out whether local task-dependent linear activations alone could possibly
explain the differences between the schizophrenic and normal brains, we used as a baseline set of
features based on the standard voxel activation maps. For each subject, and for each run, activation
maps, as well as their differences, or activation contrast maps, were obtained using several regressors
based on the language task, as described in the supplemental material (for simplicity, we will refer
to all such maps as activation maps). The activation values of each voxel were subsequently used
as features in the classification task. Similarly to degree maps, we also computed a global feature,
mean-activation (mean-t-val), by taking the mean absolute value of the voxel's t-statistics. Both
activation and degree maps for each sample were also normalized, i.e. divided by their maximal
value for the given sample.
4.3 Classification Approaches
First, off-the-shelf methods such Gaussian Naive Bayes (GNB) and Support Vector Machines (SVM)
were used in order to compare the discriminative power of different sets of features described above.
Moreover, we decided to further investigate our hypothesis that interactions among voxels contain
highly discriminative information, and compare those linear classifiers against probabilistic graphical models that explicitly model such interactions. Specifically, we learn a classifier based on a
sparse Gaussian Markov Random Field (MRF) model [12], which leads to a convex problem with
unique optimal solution, and can be solved efficiently; herein, we used the COVSEL procedure [12].
The weight on the l1 -regularization penalty serves as a tuning parameter of the classifier, allowing
to control the sparsity of the model, as described below.
Sparse Gaussian MRF classifier. Let X = {X1 , ..., Xp } be a set of p random variables (e.g.,
voxels), and let G = (V, E) be an undirected graphical model (Markov Network, or MRF) representing conditional independence structure of the joint distribution P (X). The set of vertices
V = {1, ..., p} is in the one-to-one correspondence with the set X. There is no edge between Xi
and Xj if and only if the two variables are conditionally independent given all remaining variables.
Let $x = (x_1, ..., x_p)$ denote a random assignment to X. We will assume a multivariate Gaussian probability density

$$p(x) = (2\pi)^{-p/2} \det(C)^{1/2}\, e^{-\frac{1}{2} x^T C x},$$

where $C = \Sigma^{-1}$ is the inverse covariance matrix, and the variables are normalized to have zero mean. Let $x^1, ..., x^n$ be a set of $n$ i.i.d. samples from this distribution, and let $S = \frac{1}{n} \sum_{i=1}^{n} (x^i)^T x^i$ denote the empirical covariance matrix.
Missing edges in the above graphical model correspond to zero entries in the inverse covariance matrix C, and thus the problem of learning the structure for the above probabilistic graphical model is
equivalent to the problem of learning the zero-pattern of the inverse-covariance matrix2. A popular
approach is to use l1 -norm regularization that is known to promote sparse solutions, while still allowing (unlike non-convex lq -norm regularization with 0 < q < 1) for efficient optimization. From
the Bayesian point of view, this is equivalent to assuming that the parameters of the inverse covariance matrix $C = \Sigma^{-1}$ are independent random variables $C_{ij}$ following the Laplace distributions $p(C_{ij}) = \frac{\lambda_{ij}}{2} e^{-\lambda_{ij} |C_{ij} - \mu_{ij}|}$ with zero location parameters (means) $\mu_{ij}$ and equal scale parameters $\lambda_{ij} = \lambda$. Then

$$p(C) = \prod_{i=1}^{p} \prod_{j=1}^{p} p(C_{ij}) = (\lambda/2)^{p^2} e^{-\lambda \|C\|_1},$$

where $\|C\|_1 = \sum_{ij} |C_{ij}|$ is the (vector) $l_1$-norm of $C$. Assuming a fixed parameter $\lambda$, our objective is to find $\arg\max_{C \succ 0} p(C|X)$, where $X$ is the $n \times p$ data matrix, or equivalently, since $p(C|X) = P(X, C)/p(X)$ and $p(X)$ does not include $C$, to find $\arg\max_{C \succ 0} P(X, C)$ over positive definite matrices $C$. This yields the following optimization problem considered, for example, in [12]:

$$\max_{C \succ 0}\ \ln \det(C) - \mathrm{tr}(SC) - \lambda \|C\|_1,$$
where det(A) and tr(A) denote the determinant and the trace (the sum of the diagonal elements) of
a matrix A, respectively. For the classification task, we estimate on the training data the Gaussian conditional density p(x|y) (i.e. the (inverse) covariance matrix parameter) for each class
Y = {0, 1} (schizophrenic vs non-schizophrenic), and then choose the most-likely class label
$\arg\max_c p(x|c)P(c)$ for each unlabeled test sample x.
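A sketch of such a classifier; scikit-learn's GraphicalLasso is used here as a stand-in for the COVSEL solver used in the paper (it optimizes the same l1-penalized log-likelihood), and the class and parameter names are ours:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

class SparseGaussianMRF:
    """Per-class sparse inverse-covariance Gaussian model; predicts
    arg max_c log p(x|c) + log P(c)."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha              # weight of the l1 penalty (lambda)

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_, self.P_, self.logprior_ = {}, {}, {}
        for c in self.classes_:
            Xc = X[y == c]
            self.mu_[c] = Xc.mean(axis=0)
            gl = GraphicalLasso(alpha=self.alpha).fit(Xc)
            self.P_[c] = gl.precision_  # estimated C = Sigma^-1
            self.logprior_[c] = np.log(len(Xc) / len(X))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            d = X - self.mu_[c]
            _, logdet = np.linalg.slogdet(self.P_[c])
            # Gaussian log-likelihood up to a constant shared by both classes
            ll = 0.5 * logdet - 0.5 * np.einsum('ij,jk,ik->i', d, self.P_[c], d)
            scores.append(ll + self.logprior_[c])
        return self.classes_[np.argmax(scores, axis=0)]
```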
Variable Selection: We used variable selection as a preprocessing step before applying a particular classifier, in order to (1) reduce the computational complexity of classification (especially for
sparse MRF, which, unlike GNB and SVM, could not be directly applied to over 50,000 variables),
(2) reduce noise and (3) identify relatively small predictive subsets of voxels. We applied a simple filter-based approach, selecting a subset of top-ranked voxels, where the ranking criterion used
p-values resulting from the paired t-test, with the null-hypothesis being that the voxel values corresponding to schizophrenic and non-schizophrenic subjects came from distributions with equal
means. The variables were ranked in the ascending order of their p-values (lower p = higher confidence in between-group differences), and classification results on top k voxels will be presented for
a range of k values.
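A sketch of this filter, shown with an unpaired two-sample t-test from SciPy; the exact pairing of samples used in the paper is not reproduced here:

```python
import numpy as np
from scipy.stats import ttest_ind

def top_k_voxels(X, y, k):
    """Rank features by the t-test p-value between the two groups on the
    training data and return the indices of the k most significant."""
    _, p = ttest_ind(X[y == 1], X[y == 0], axis=0)
    return np.argsort(p)[:k]
```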
Evaluation via Cross-validation. We used leave-one-subject-out rather than leave-one-sample-out
cross-validation, since the two runs (two samples) for each subject are clearly not i.i.d. and must be
handled together to avoid biases towards overly-optimistic results.
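A sketch of this protocol; note that the voxel ranking must be refit inside each fold, on training subjects only, to avoid leakage (LeaveOneGroupOut from scikit-learn handles the grouping):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

def loso_error(clf, X, y, subject_ids, k, select):
    """Leave-one-subject-out CV: both runs of a subject are held out together.
    `select(X, y, k)` returns the indices of the k top-ranked voxels."""
    errs = []
    for tr, te in LeaveOneGroupOut().split(X, y, groups=subject_ids):
        idx = select(X[tr], y[tr], k)   # rank voxels on training data only
        clf.fit(X[tr][:, idx], y[tr])
        errs.append(np.mean(clf.predict(X[te][:, idx]) != y[te]))
    return float(np.mean(errs))
```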
5 Results
Model-driven ROI analysis. First, we observed that correlations (blind to experimental paradigm)
between regions and within subjects were very strong and significant (p-value of 0.05, corrected
for the number of comparisons) when tested against 0 for all subjects (mean correlation > 0.8 for
every group). However, these inter-region correlations do not seem to differ significantly between
the groups. The parcellation technique led to some smaller p-values, but also to a stricter correction
for multiple comparisons, and no correlation was close to the corrected threshold. Concerning the
psycho-physiological interaction, results were closer to significance, but did not survive multiple
comparisons. In conclusion, we could not detect significant differences between the schizophrenic
patient data and normal subjects in either the BOLD signal correlation or the interaction between
the signal and the main experimental contrast (native language versus silence).
2 Note that the inverse of the empirical covariance matrix, even if it exists, does not typically contain exact
zeros. Therefore, an explicit sparsity constraint is usually added to the estimation process.
[Figure 2, panel (b) plot: voxel p-values sorted in ascending order, plotted against k/N on log-log axes; legend: 0.05*k/N, activation 1 (FrenchNative - Silence), activation 6 (FrenchNative), degree (full), degree (long-distance), degree (inter-hemispheric).]
Figure 2: (a) FDR-corrected 2-sample t-test results for (normalized) degree maps, where the null hypothesis at each voxel assumes no difference between the schizophrenic and normal groups. Red/yellow denotes the areas of low p-values passing FDR correction at the α = 0.05 level (i.e., 5% false-positive rate). Note that the mean (normalized) degree at those voxels was always (significantly) higher for normals than for schizophrenics. (b) Direct comparison of voxel p-values and the FDR threshold: p-values are sorted in ascending order, and the FDR test selects voxels with p < α·k/N (α: false-positive rate; k: the index of a p-value in the sorted sequence; N: the total number of voxels). Degree maps yield a large number (1033, 924 and 508 voxels in the full, long-distance and inter-hemispheric degree maps, respectively) of highly significant (very low) p-values, staying far below the FDR cut-off line, while only a few voxels survive FDR in the case of activation maps: 7 and 2 voxels in activation maps 1 (contrast 'FrenchNative - Silence') and 6 ('FrenchNative'), respectively (the rest of the activation maps do not survive the FDR correction at all).
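The selection rule in panel (b) is the standard Benjamini-Hochberg procedure; a sketch:

```python
import numpy as np

def fdr_select(pvals, alpha=0.05):
    """Benjamini-Hochberg: keep all voxels up to the largest k (1-based)
    whose sorted p-value satisfies p_(k) <= alpha * k / N."""
    N = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, N + 1) / N
    if not below.any():
        return np.array([], dtype=int)
    k = np.nonzero(below)[0].max()      # largest index passing the cut-off line
    return order[:k + 1]
```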
Data-driven analysis: topological vs activation features. Empirical results are consistent with our
hypothesis that schizophrenia disrupts the normal structure of functional networks in a way that is
not derived from alterations in the activation; moreover, they demonstrate that topological properties
are highly predictive, consistently outperforming predictions based on activations.
1. Voxel-wise statistical analysis. Degree maps show much stronger statistical differences between the schizophrenic vs. non-schizophrenic groups than the activation maps. Figure 2 shows
the 2-sample t-test results for the full degree map and the activation maps, after False-Discovery
Rate (FDR) correction for multiple comparisons (standard in fMRI analysis), at the α = 0.05 level
(i.e., 5% false-positive rate). While the degree map (Figure 2a) shows statistically significant differences bilaterally in auditory areas (specifically, normal group has higher degrees than schizophrenic
group), the activation maps show almost no significant differences at all: practically no voxels there
survived the FDR correction (Figure 2b). This suggests that (a) the differences in the collective behavior cannot be explained by differences in the linear task-related response, and that (b) the topology
of voxel-interaction networks is more informative than task-related activations, suggesting an abnormal degree distribution for schizophrenic patients that appear to lack hubs in auditory cortex,
i.e., have significantly lower (normalized) voxel degrees in that area than the normal group (possibly
due to a more even spread of degrees in schizophrenic vs. normal networks). Moreover, degree
maps demonstrate much higher stability than activation maps with respect to selecting a subset of
top ranked voxels over different subsets of data. Figure 3a shows that degree maps have up to
almost 70% of top-ranked voxels in common over different training data sets when using the leave-one-subject-out cross-validation, while activation maps have below 50% of voxels in common between
different selected subsets. This property of degree vs activation features is particularly important for
interpretability of predictive modeling.
2. Inter-hemispheric degree distributions. A closer look at the degree distributions reveals that a
large percentage of the differential connectivity appears to be due to long-distance, inter-hemispheric
links. Figure 3b compares (normalized) histograms, for schizophrenic (red) versus normal (blue)
groups, of the fraction of inter-hemispheric connections over the total number of connections, computed for each subject within the group. The schizophrenic group shows a significant bias towards
low relative inter-hemispheric connectivity. A t-test analysis of the distributions indicates that the differences are statistically significant (p = 2.5×10⁻²). Moreover, it is evident that a major contributor to
the high degree difference discussed before is the presence of a large number of inter-hemispheric
connections in the normal group, which is lacking in the schizophrenic group. Furthermore, we selected
bilateral regions of interest (ROIs) corresponding to left and right Brodmann Area 22 (roughly, the
clusters in Figure 2a), such that the linear activation for these ROIs was not significantly different
between the groups, even in the uncorrected case. For each subject, the link between the left and
[Figure 3 plots: (a) percentage of voxels in common vs. number of top-ranked voxels selected (0-5000), with curves for degree (full), degree (long distance), degree (inter-hemispheric) and activation maps 1-8; (b), (c) normalized histograms over samples vs. relative link density.]
Figure 3: (a) Stability of feature subset selection over CV folds, i.e. the percent of voxels in common among the subsets of k top variables selected at all CV folds. (b) Disruption of global inter-hemispheric connectivity. For each subject, we compute the fraction of inter-hemispheric connections over the total number of connections, and plot a normalized histogram over all subjects in a particular group (normal: blue, schizophrenic: red). (c) Disruption of task-dependent inter-hemispheric connectivity between specific ROIs (Brodmann Area 22 selected bilaterally). The ROIs were defined by a 9 mm radius ball centered at [x=-42, y=-24, z=3] and [x=42, y=-24, z=3].
(a)

Feature                  GNB      SVM      MRF(0.01)
degree (D)               27.5%    27.5%    27.5%
clustering coeff. (C)    30.0%    42.5%    45.0%
geodesic dist. (G)       67.5%    45.0%    45.0%
mean activation (A)      40.0%    45.0%    72.5%
D + A                    27.5%    27.5%    32.5%
C + A                    27.5%    45.0%    55.0%
G + A                    45.0%    45.0%    72.5%
G + D + C                37.5%    27.5%    27.5%
G + D + C + A            30.0%    27.5%    32.5%

(b)

Feature                  Error    False Pos    False Neg
degree (full)            16%      27%          5%
degree (long-distance)   21%      32%          9%
degree (inter-hemis)     32%      46%          18%
activation 1 (and 3)     54%      29%          82%
activation 2 (and 4)     50%      55%          45%
activation 5             43%      18%          68%
activation 6             36%      27%          46%
activation 7             32%      18%          46%
activation 8             30%      23%          37%

Table 1: Classification errors using (a) global features and (b) activation and degree maps (using SVM on the complete set of voxels, i.e., without voxel subset selection).
right ROIs was computed as the fraction of ROI-to-ROI connections over all connections; Figure
3c shows the normalized histograms. Clearly, the normal group displays a high density of inter-hemispheric connections, which are significantly disrupted in the schizophrenic group (p = 3.7×10⁻⁷). This provides a strong indication that the group differences in connectivity cannot be explained
by differences in local activation.
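The link-density statistics of Figure 3b-c can be computed as sketched below; the mid-sagittal plane at x = 0 and the symmetric double counting of edges (which cancels in the ratio) are our simplifications:

```python
import numpy as np

def interhemispheric_fraction(A, coords, roi_mask=None):
    """Fraction of a network's links that cross the hemispheres. With
    roi_mask, restrict to links between ROI voxels, as in the bilateral
    Brodmann Area 22 analysis."""
    if roi_mask is not None:
        A = A[np.ix_(roi_mask, roi_mask)]
        coords = coords[roi_mask]
    left = coords[:, 0] < 0
    crossing = left[:, None] != left[None, :]
    total = A.sum()
    return (A & crossing).sum() / total if total else 0.0
```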
3. Global features. For each global feature (full list in Suppl. Mat.) we computed its mean for
each group and p-value produced by the t-test, as well as the classification accuracies using our
classifiers. While more details are presented in the supplemental material, we outline here the main
observations: while mean activation (we used map 8, the best performer for SVM on the full set of
voxels - see Table 1b) had a relatively low p-value of 5.5×10⁻⁴, as compared to a less significant
p = 5.3×10⁻² for mean-degree, the predictive power of the latter, alone or in combination with
some other features, was the best among global features reaching 27.5% in schizophrenic vs normal
classification (Table 1a), while mean activation yielded more than 40% error with all classifiers.
4. Classification results using degree vs. activation maps. While mean-degree indicates the
presence of discriminative information in voxel degrees, its generalization ability, though the best
among global features and their combinations, is relatively poor. However, voxel-level degree maps
turned out to be excellent predictive features, often outperforming activation features by far. Table
1b compares prediction made by SVM on complete maps (without voxel subset selection): both
full and long-distance degree maps greatly outperform all activation maps, achieving 16% error
vs. above 30% for even the best-performing activation map 8. Next, in Figure 4, we compare the
predictive power of different maps when using all three classifiers: Support Vector Machines (SVM),
Gaussian Naive Bayes (GNB) and sparse Gaussian Markov Random Field (MRF), on the subsets
of k top-ranked voxels, for a variety of k values. We used the best-performing activation map 8
from the Table above, as well as maps 1 and 6 (that survived FDR); map 6 was also outperforming
other activation maps in low-voxel regime. To avoid clutter, we only plot the two best-performing
degree maps out of three (i.e., full and long-distance ones). For sparse MRF, we experimented with
a variety of λ values, ranging from 0.0001 to 10, and present the best results. We can see that: (a)
Degree maps frequently outperform activation maps, for all classifiers we used; the differences are
[Figure 4 plots: classification error vs. K top voxels (t-test), with curves for activation maps 1 (FrenchNative - Silence), 6 (FrenchNative), 8 (Silence) and degree maps (full, long-distance); panels (a) Gaussian Naive Bayes, (b) Support Vector Machine, (c) sparse Markov Random Field, (d) MRF(0.1) vs. GNB vs. SVM on long-distance degree maps.]
Figure 4: Classification results comparing (a) GNB, (b) SVM and (c) sparse MRF on degree versus activation
contrast maps; (d) all three classifiers compared on long-distance degree maps (best-performing for MRF).
particularly noticeable when the number of selected voxels is relatively low. The most significant
differences are observed for SVM in low-voxel (approx. < 500) and full-map regimes, as well as
for MRF classifiers: it is remarkable that degree maps can achieve an impressively low error of
14% with only 100 most significant voxels, while even the best activation map 6 requires more than
200-300 to get just below 30% error; the other activation maps perform much worse, often above
30-40% error, or even just at the chance level. (b) Full and long-distance degree maps perform quite
similarly, with the long-distance map achieving the best result (14% error) using MRFs. (c) Among the
activation maps only, while map 8 ('Silence') outperforms others on the full set of voxels using
SVM, its behavior in the low-voxel regime is quite poor (always above 30-35% error); instead, map
6 ('FrenchNative') achieves the best performance among activation maps in this regime3. (d) MRF
classifiers clearly outperform SVM and GNB, possibly due to their ability to capture inter-voxel
relationships that are highly discriminative between the two classes (see Figure 4d).
6 Summary
The contributions of this paper are two-fold. From a machine-learning and fMRI analysis perspective, we (a) introduced a novel feature-construction approach based on topological properties of
functional networks, which is generally applicable to any multivariate time-series classification problem, and can outperform standard linear activation approaches in the fMRI analysis field, (b) demonstrated advantages of this data-driven approach over prior-knowledge-based (ROI) approaches, and
(c) demonstrated advantages of network-based classifiers (Markov Random Fields) over linear models (SVM, Naive Bayes) on fMRI data, suggesting that voxel interactions should be exploited in fMRI analyses
(i.e., treating the brain as a network). From a neuroscience perspective, we provided strong support for the
hypothesis that schizophrenia is associated with the disruption of global, emergent brain properties
which cannot be explained just by alteration of local activation patterns. Moreover, while prior art
is mainly focused on exploring the differences between the functional and anatomical networks of
schizophrenic patients versus healthy subjects [10, 1], this work, to our knowledge, is the first attempt to explore the generalization ability of predictive models of schizophrenia built on network
features.
Finally, a word of caution. Note that the schizophrenia patients studied here have been selected for
their prominent, persistent, and pharmaco-resistant auditory hallucinations [14], which might have
increased their clinical homogeneity. However, the patient group is not representative of the full
spectrum of the disease, and thus our conclusions may not necessarily apply to all schizophrenia
patients, due to the clinical characteristics and size of the studied samples.
Acknowledgements
We would like to thank Rahul Garg for his help with the data preprocessing and many stimulating
discussions that contributed to the ideas of this paper, and Drs. André Galinowski, Thierry Gallarda,
and Frank Bellivier who recruited and clinically rated the patients. We also would like to thank
INSERM as promoter of the MR data acquired (project RBM 01-26).
3 We also observed that performing normalization really helped activation maps, since otherwise their performance could get much worse, especially with MRFs; we provide those results in the supplemental material.
References
[1] D.S. Bassett, E.T. Bullmore, B.A. Verchinski, V.S. Mattay, D.R. Weinberger, and A. Meyer-Lindenberg. Hierarchical organization of human cortical networks in health and schizophrenia. J Neuroscience, 28(37):9239–9248, 2008.
[2] A. Battle, G. Chechik, and D. Koller. Temporal and cross-subject probabilistic models for fMRI prediction tasks. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 121–128. MIT Press, Cambridge, MA, 2007.
[3] E. Bleuler. Dementia Praecox or the Group of Schizophrenias. International Universities Press, New York, NY, 1911.
[4] V.M. Eguiluz, D.R. Chialvo, G.A. Cecchi, M. Baliki, and A.V. Apkarian. Scale-free functional brain networks. Physical Review Letters, 94(018102), 2005.
[5] K.J. Friston and C.D. Frith. Schizophrenia: A Disconnection Syndrome? Clinical Neuroscience, (3):89–97, 1995.
[6] K.J. Friston, L. Harrison, and W.D. Penny. Dynamic Causal Modelling. Neuroimage, 19(4):1273–1302, Aug 2003.
[7] A.G. Garrity, G.D. Pearlson, K. McKiernan, D. Lloyd, K.A. Kiehl, and V.D. Calhoun. Aberrant "Default Mode" Functional Connectivity in Schizophrenia. Am J Psychiatry, 164:450–457, March 2007.
[8] J.V. Haxby, M.I. Gobbini, M.L. Furey, A. Ishai, J.L. Schouten, and P. Pietrini. Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex. Science, 293(5539):2425–2430, 2001.
[9] K.J. Friston et al. Statistical parametric maps in functional imaging - a general linear approach. Human Brain Mapping, 2:189–210, 1995.
[10] Y. Liu, M. Liang, Y. Zhou, Y. He, Y. Hao, M. Song, C. Yu, H. Liu, Z. Liu, and T. Jiang. Disrupted Small-World Networks in Schizophrenia. Brain, 131:945–961, February 2008.
[11] T.M. Mitchell, R. Hutchinson, R.S. Niculescu, F. Pereira, X. Wang, M. Just, and S. Newman. Learning to Decode Cognitive States from Brain Images. Machine Learning, 57:145–175, 2004.
[12] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate gaussian or binary data. Journal of Machine Learning Research, 9:485–516, March 2008.
[13] F. Pereira and G. Gordon. The Support Vector Decomposition Machine. In ICML 2006, pages 689–696, 2006.
[14] M. Plaze, D. Bartrés-Faz, J.L. Martinot, D. Januel, F. Bellivier, R. De Beaurepaire, S. Chanraud, J. Andoh, J.P. Lefaucheur, E. Artiges, C. Pallier, and M.L. Paillere-Martinot. Left superior temporal gyrus activation during sentence perception negatively correlates with auditory hallucination severity in schizophrenia patients. Schizophrenia Research, 87(1-3):109–115, 2006.
[15] K.E. Stephan, K.J. Friston, and C.D. Frith. Dysconnection in Schizophrenia: From Abnormal Synaptic Plasticity to Failures of Self-monitoring. Schizophrenia Bulletin, 35(3):509–527, 2009.
[16] A.J. Storkey, E. Simonotto, H. Whalley, S. Lawrie, L. Murray, and D. McGonigle. Learning structural equation models for fMRI. In Advances in Neural Information Processing Systems 19, pages 1329–1336. 2007.
[17] B. Thirion, G. Flandin, P. Pinel, A. Roche, P. Ciuciu, and J.-B. Poline. Dealing with the shortcomings of spatial normalization: Multi-subject parcellation of fMRI datasets. Human Brain Mapping, 27(8):678–693, 2006.
[18] C. Wernicke. Grundrisse der Psychiatrie. Thieme, 1906.
[19] L. Zhang, D. Samaras, N. Alia-Klein, N. Volkow, and R. Goldstein. Modeling neuronal interactivity using dynamic bayesian networks. In Advances in Neural Information Processing Systems 18, pages 1593–1600. 2006.
n1:1 attempt:1 organization:1 interest:11 highly:7 investigate:1 evaluation:1 hallucination:2 yielding:1 asserted:1 predefined:2 edge:3 closer:2 necessary:2 ruled:1 causal:2 covsel:1 subvolumes:1 increased:1 modeling:5 assignment:1 vertex:1 subset:13 marion:1 neutral:1 predictor:1 hundred:1 entry:1 too:1 ishai:1 answer:2 hutchinson:1 disrupted:3 density:5 international:1 probabilistic:4 off:2 together:1 roche:1 connectivity:12 promotor:1 postulate:1 opposed:3 choose:1 possibly:3 worse:2 cognitive:1 warped:1 leading:1 suggesting:2 de:3 alteration:4 bold:3 sec:1 lloyd:1 coefficient:3 explicitly:1 ranking:1 vi:3 blind:1 performed:3 root:1 view:1 bilateral:1 optimistic:1 helped:1 sup:3 red:3 bayes:5 ttest:4 contribution:1 accuracy:5 became:2 characteristic:1 efficiently:1 who:1 yield:3 identify:4 correspond:1 yes:1 yellow:1 generalize:1 raw:1 bayesian:2 produced:1 monitoring:1 drive:1 finer:1 stroke:1 acc:1 submitted:1 explain:1 reach:1 maxc:3 synaptic:1 definition:1 failure:3 against:2 acquisition:1 associated:5 attributed:1 rbm:1 gain:2 auditory:9 popular:1 ask:1 mitchell:1 knowledge:5 listen:1 dimensionality:1 ethic:1 goldstein:1 back:1 appears:3 higher:4 brodmann:2 methodology:1 response:2 rahul:1 box:1 symptom:1 though:1 furthermore:1 just:6 angular:6 bilaterally:2 psychopathology:1 correlation:14 working:1 banerjee:1 lack:1 overlapping:1 french:3 mode:1 reveal:2 indicated:1 scientific:1 usa:1 effect:3 hypothesized:1 name:1 normalized:9 functioning:2 consisted:1 contain:3 regularization:3 assigned:1 spatially:2 excluded:1 conditionally:1 during:1 dysfunction:1 self:1 inferior:3 yorktown:1 m:3 hemispheric:12 criterion:1 prominent:1 evident:1 complete:2 demonstrate:6 outline:1 performs:1 motion:2 l1:3 percent:1 disruption:7 image:8 wise:2 consideration:1 novel:2 recently:1 ranging:1 superior:6 common:4 functional:23 qp:2 physical:1 jp:1 jl:1 association:1 discussed:1 he:1 significant:13 refer:1 cambridge:1 cv:2 tuning:1 approx:1 hp:1 similarly:2 centre:1 bleuler:2 language:8 had:2 stable:1 resistant:1 cortex:6 similarity:1 wilcoxon:1 multivariate:4 posterior:2 recent:1 own:1 perspective:2 inf:1 driven:10 belongs:1 apart:1 catherine:1 hemisphere:1 binary:1 outperforming:3 watson:1 success:1 continue:1 came:1 der:1 neg:1 captured:1 analyzes:2 mr:2 performer:1 syndrome:1 recognized:2 gnb:8 paradigm:3 signal:5 full:19 multiple:4 x10:1 exceeds:1 believed:2 long:18 sphere:1 cross:4 divided:1 concerning:1 post:1 baptiste:1 schizophrenia:28 clinical:3 paired:1 pitch:1 descartes:1 involving:1 mrf:13 prediction:3 patient:12 blindly:1 histogram:5 normalization:2 suppl:2 achieved:1 background:1 interval:1 harrison:1 leaving:1 sch:1 rest:1 unlike:3 simonotto:1 subject:33 recruited:1 undirected:1 seem:1 extracting:1 orsay:2 structural:2 presence:2 abnormality:1 enough:1 stephan:1 variety:2 affect:2 independence:1 gave:1 xj:1 topology:1 reduce:2 idea:1 det:3 shift:1 biomarkers:2 whether:2 handled:1 cecchi:2 penalty:1 song:1 passing:1 york:1 generally:1 clear:1 aimed:1 clutter:1 mid:3 ten:1 diameter:1 gyrus:5 outperform:4 percentage:1 andr:1 neuroscience:3 overly:1 per:2 x107:1 klein:1 anatomical:3 diverse:1 blue:2 mat:1 group:29 delocalized:1 nevertheless:1 blood:1 traced:1 threshold:4 achieving:2 marie:1 imaging:2 button:1 fraction:3 sum:1 run:6 inverse:6 letter:1 powerful:1 almost:2 separation:1 flandin:1 coeff:1 abnormal:2 followed:2 played:1 display:2 syllable:1 correspondence:1 topological:10 fold:3 trs:2 activity:3 mni:2 strength:1 yielded:1 constraint:1 alia:1 performing:6 relatively:4 according:1 
neurospin:4 ball:1 combination:2 poor:2 clinically:1 battle:1 across:2 smaller:1 march:2 departement:1 making:1 explained:4 ghaoui:1 glm:1 ln:1 equation:2 spm5:2 previously:1 thirion:2 mechanism:1 fail:1 committee:1 ascending:2 drs:1 serf:1 apply:1 schizophrenic:35 hierarchical:1 magnetic:1 weinberger:1 pietrini:1 original:1 top:12 clustering:2 remaining:1 include:1 assumes:1 graphical:5 denotes:1 ppi:1 medicine:1 exploit:2 parcellation:5 build:1 especially:2 february:1 murray:1 objective:3 question:2 already:1 added:1 gobbini:1 parametric:3 primary:1 traditional:2 diagonal:1 amongst:1 distance:17 separate:1 link:7 thank:2 collected:1 assuming:1 length:1 besides:1 index:1 relationship:3 psychosis:1 equivalently:1 difficult:1 neuroimaging:2 setup:1 cij:5 liang:1 frank:1 hao:1 trace:1 ciuciu:1 collective:3 proper:1 fdr:11 perform:2 allowing:4 contributed:1 convolution:1 observation:1 markov:7 datasets:2 discarded:1 timeseries:1 cingulum:3 severity:1 head:1 smoothed:1 introduced:2 pair:4 paris:4 specified:1 sentence:11 connection:9 eluded:1 herein:3 usually:2 pattern:6 below:4 perception:1 regime:3 reading:1 sparsity:2 saclay:3 built:2 including:2 max:1 interpretability:1 power:5 ranked:7 friston:4 predicting:1 pause:1 cea:4 representing:1 improve:1 rated:1 picture:1 aspremont:1 catch:1 naive:5 health:1 prior:4 voxels:37 discovery:4 literature:2 val:1 acknowledgement:1 review:1 relative:3 lacking:1 volkow:1 impressively:1 interactivity:1 versus:6 remarkable:2 localized:2 age:1 validation:4 degree:63 xp:2 consistent:1 editor:1 reanalysis:1 ibm:1 poline:2 guillermo:1 course:1 summary:1 free:1 silence:11 bias:2 weaker:1 disconnection:2 schouten:1 neighbor:1 template:1 taking:1 martelli:1 face:1 absolute:1 sparse:9 leaveone:1 penny:1 distributed:1 bulletin:1 default:1 xn:1 cortical:1 world:1 ticular:1 commonly:2 collection:1 regressors:1 preprocessing:2 made:1 voxel:35 far:2 correlate:1 inserm:3 ml:1 global:11 active:1 investigating:1 reveals:1 dealing:1 discriminative:9 xi:2 spectrum:1 bay:1 table:4 additionally:1 learn:2 frith:2 contributes:1 excellent:1 complex:1 necessarily:1 vj:2 did:1 significance:2 main:2 spread:1 noise:1 paul:1 bassett:1 x1:3 neuronal:1 referred:1 representative:1 ny:2 sub:1 position:3 neuroimage:1 explicit:1 pereira:2 lq:1 third:1 removing:1 specific:8 invasively:1 hub:1 thyreau:1 dementia:1 explored:1 list:2 svm:15 physiological:2 experimented:1 essential:1 exists:1 false:6 importance:1 push:1 cx:1 led:2 univariate:2 explore:2 forming:1 likely:1 contained:1 corresponds:2 chance:1 extracted:3 ma:1 stimulating:1 conditional:2 sorted:2 towards:3 luc:1 emotionally:1 included:1 specifically:3 determined:1 corrected:4 miss:2 called:1 hospital:1 total:5 experimental:6 faz:1 select:1 support:5 latter:1 scan:2 meant:1 frontal:2 dept:1 hemodynamic:1 tested:2 phenomenon:1 correlated:1 |
2,936 | 3,661 | Analysis of SVM with Indefinite Kernels
Yiming Ying†, Colin Campbell† and Mark Girolami‡
†Department of Engineering Mathematics, University of Bristol,
Bristol BS8 1TR, United Kingdom
‡Department of Computer Science, University of Glasgow,
S.A.W. Building, G12 8QQ, United Kingdom
Abstract
The recent introduction of indefinite SVM by Luss and d'Aspremont [15] has effectively demonstrated SVM classification with a non-positive semi-definite kernel (indefinite kernel). This paper studies the properties of the objective function
introduced there. In particular, we show that the objective function is continuously
differentiable and its gradient can be explicitly computed. Indeed, we further show
that its gradient is Lipschitz continuous. The main idea behind our analysis is that
the objective function is smoothed by the penalty term, in its saddle (min-max)
representation, measuring the distance between the indefinite kernel matrix and
the proxy positive semi-definite one. Our elementary result greatly facilitates the
application of gradient-based algorithms. Based on our analysis, we further develop Nesterov's smooth optimization approach [17, 18] for indefinite SVM, which
has an optimal convergence rate for smooth problems. Experiments on various
benchmark datasets validate our analysis and demonstrate the efficiency of our
proposed algorithms.
1 Introduction
Kernel methods [5, 24] such as Support Vector Machines (SVM) have recently attracted much attention due to their good generalization performance and appealing optimization approaches. The basic
idea of kernel methods is to map the data into a high dimensional (even infinite-dimensional) feature
space through a kernel function. The kernel function over samples forms a similarity kernel matrix
which is usually required to be positive semi-definite (PSD). The PSD property of the similarity
matrix ensures that the SVM can be efficiently solved by convex quadratic programming.
However, many potential kernel matrices could be non-positive semi-definite. Such cases are quite
common in applications such as the sigmoid kernel [14] for various values of the hyper-parameters,
hyperbolic tangent kernels [25], and protein sequence similarity measures derived from Smith-Waterman and BLAST scores [23]. The problem of learning with a non-PSD similarity matrix (indefinite kernel) has recently attracted considerable attention [4, 8, 9, 14, 20, 21, 26]. One widely used method is to convert the indefinite kernel matrix into a PSD one by a spectral transformation. The denoise method neglects the negative eigenvalues [8, 21], flip [8] takes the absolute value of all eigenvalues, shift [22] shifts eigenvalues to be positive by adding a positive constant, and the diffusion method [11] takes the exponentials of the eigenvalues. One can also see [26] for detailed coverage. However, useful information in the data could be lost in the above spectral transformations
since they are separated from the process of training classifiers. In [9], the classification problem
with indefinite kernels is regarded as the minimization of the distance between convex hulls in the
pseudo-Euclidean space. In [20], general Reproducing Kernel Kreĭn spaces (RKKS) with indefinite kernels are introduced, which allow a general representer theorem and regularization formulations.
Luss and d'Aspremont [15] recently proposed a regularized formulation for SVM classification with indefinite kernels. Training an SVM with an indefinite kernel was viewed as a kernel matrix learning problem [13], i.e., learning a proxy PSD kernel matrix to approximate the indefinite one.
Without realizing that the objective function is differentiable, the authors quadratically smoothed
the objective function, and then formulated two approximate algorithms including the projected
gradient method and the analytic center cutting plane method.
In this paper we follow the formulation of SVM with indefinite kernels proposed in [15]. We mainly
establish the differentiability of the objective function (see its precise definition in equation (3)) and
prove that it is, indeed, differentiable with Lipschitz continuous gradient. This elementary result
suggests there is no need to smooth the objective function, which greatly facilitates the application of gradient-based algorithms. The main idea behind our analysis comes from the saddle (min-max) representation, which involves a penalty term in the form of the Frobenius norm of matrices, measuring
the distance between the indefinite kernel matrix and the proxy PSD one. This penalty term can be
regarded as a Moreau-Yosida regularization term [12] to smooth out the objective function.
The paper is organized as follows. In Section 2, we review the formulation of indefinite SVM
classification presented in [15]. Our main contribution is outlined in Section 3. There, we first show
that the objective function of interest is continuously differentiable and its gradient function can
be explicitly computed. Indeed, we further show that its gradient is Lipschitz continuous. Based
on our analysis, in Section 4 we propose a simplified formulation of the projected gradient method
presented in [15] and show that it has a convergence rate of O(1/k) where k is the iteration number.
We further develop Nesterov's smooth optimization approach [17, 18] for indefinite SVM, which has an optimal convergence rate of O(1/k²) for smooth problems. In Section 5, our analysis and
proposed optimization approaches are validated by experiments on various benchmark data sets.
2 Indefinite SVM Classification
In this section we review the regularized formulation of indefinite SVM presented in [15]. To this end, we introduce some notation. Let N_n = {1, 2, . . . , n} for any n ∈ N, and let S^n be the space of all n × n symmetric matrices. If A ∈ S^n is positive semi-definite, we write A ⪰ 0. The cone of PSD matrices is denoted by S^n_+. For any A, B ∈ R^{n×n}, ⟨A, B⟩_F := Tr(A^T B), where Tr(·) denotes the trace of a matrix. Finally, the Frobenius norm over the vector space S^n is denoted, for any A ∈ S^n, by ‖A‖_F := (Tr(A^T A))^{1/2}. The standard Euclidean norm and inner product are denoted by ‖·‖ and ⟨·, ·⟩, respectively.
Let a set of training samples be given by inputs x = {x_i ∈ R^d : i ∈ N_n} and outputs y = {y_i ∈ {±1} : i ∈ N_n}. Suppose that K is a positive semi-definite kernel matrix (proxy kernel matrix) on the inputs x. Let the matrix Y = diag(y), let e be the n-dimensional vector of all ones, and let C be a positive trade-off parameter. Then the dual formulation of the 1-norm soft margin SVM [5, 24] is given by

    max_α  α^T e − (1/2) α^T Y K Y α
    s.t.   α^T y = 0,  0 ≤ α ≤ C.
Since we assume that K is positive semi-definite, the above problem is a standard convex quadratic program [2] and a global solution can be efficiently obtained by, e.g., a primal-dual interior method. Suppose now we are only given an indefinite kernel matrix K0 ∈ S^n. Luss and d'Aspremont [15] proposed the following max-min approach to simultaneously learn a proxy PSD kernel matrix K for the indefinite matrix K0 and the SVM classifier:

    min_K max_α  α^T e − (1/2) α^T Y K Y α + ρ‖K − K0‖²_F        (1)
    s.t.   α^T y = 0,  0 ≤ α ≤ C,  K ⪰ 0.

Let Q1 = {α ∈ R^n : α^T y = 0, 0 ≤ α ≤ C} and L(α, K) = α^T e − (1/2) α^T Y K Y α + ρ‖K − K0‖²_F. By the min-max theorem [2], problem (1) is equivalent to

    max_{α∈Q1} min_{K∈S^n_+} L(α, K).        (2)
For simplicity, we refer to the following function, defined by

    f(α) = min_{K∈S^n_+} L(α, K),        (3)

as the objective function. It is obviously concave, since f is the minimum of a family of concave functions. We also call the associated function L(α, K) the saddle representation of the objective function f.
For fixed α ∈ Q1, the optimization K(α) = arg min_{K⪰0} L(α, K) is equivalent to a projection onto the semi-definite cone S^n_+. Indeed, it was shown in [15] that the optimal solution is given by

    K(α) = (K0 + Y α α^T Y/(4ρ))_+,        (4)

where, for any matrix A ∈ S^n, the notation A_+ denotes the positive part of A, obtained by simply setting its negative eigenvalues to zero. The optimal solution (α*, K*) ∈ Q1 × S^n_+ of the above min-max problem is a saddle point of L(α, K) (see e.g. [2]), i.e., for any α ∈ Q1 and K ∈ S^n_+ there holds L(α, K*) ≤ L(α*, K*) ≤ L(α*, K). For a matrix A ∈ S^n, denote its maximum eigenvalue by λ_max(A). The next lemma tells us that the optimal solution K* belongs to a bounded domain in S^n_+.
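As a concrete illustration (this sketch and its function names are ours, not from the paper), the positive part in Equation (4) reduces to a single eigendecomposition:

```python
import numpy as np

def psd_part(A):
    # Positive part A_+: keep the eigenvectors, clip negative eigenvalues to zero.
    lam, U = np.linalg.eigh(A)
    return (U * np.maximum(lam, 0.0)) @ U.T

def K_alpha(K0, y, alpha, rho):
    # Optimal proxy kernel K(alpha) = (K0 + Y alpha alpha^T Y / (4 rho))_+, Eq. (4).
    v = y * alpha  # Y alpha, with Y = diag(y)
    return psd_part(K0 + np.outer(v, v) / (4.0 * rho))
```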
Lemma 1. Problem (2) is equivalent to the formulation max_{α∈Q1} min_{K∈Q2} L(α, K), and the objective function can be defined by

    f(α) = min_{K∈Q2} L(α, K),        (5)

where Q2 := {K ∈ S^n_+ : λ_max(K) ≤ λ_max(K0) + nC²/(4ρ)}.
Proof. By the saddle point theorem [2], we have L(α*, K*) = min_{K∈S^n_+} L(α*, K). Combining this with equation (4) yields K* = K(α*) = (K0 + Y α* (α*)^T Y/(4ρ))_+. We can then easily see that λ_max(K*) ≤ λ_max(K0 + Y α* (α*)^T Y/(4ρ)) ≤ λ_max(K0) + λ_max(Y α* (α*)^T Y/(4ρ)) ≤ λ_max(K0) + ‖α*‖²/(4ρ), where the second-to-last inequality uses the subadditivity of the maximum eigenvalue (e.g. [10, Page 201]), i.e., λ_max(A + B) ≤ λ_max(A) + λ_max(B) for any A, B ∈ S^n. Since 0 ≤ α* ≤ C, we have ‖α*‖² ≤ nC². Combining this with the above inequality yields the desired lemma.
It is worth mentioning that [18, Theorem 1] shows that a function g has a Lipschitz continuous gradient if it has the special structure g(α) = min{⟨Aα, K⟩ + ρ d(K) : K ∈ Q}, where Q is a closed convex subset of a vector space, d(·) is a strongly convex function and, most importantly, A is a linear operator. Since the variable α appears in a quadratic form, i.e. α^T Y K Y α, in the objective function defined by (5), it cannot be written in the above special form, and hence that theorem cannot be applied to our case.
3 Differentiability of the Objective Function
The following lemma gives a very useful characterization of differentiability properties of the optimal value function [3, Theorem 4.1], essentially due to Danskin [7].

Lemma 2. Let X be a metric space and U a normed space. Suppose that for all x ∈ X the function L(·, x) is differentiable, that L(α, x) and ∇_α L(α, x) are continuous on X × U, and let Q be a compact subset of X. Define the optimal value function as f(α) = inf_{x∈Q} L(α, x). The optimal value function is directionally differentiable. Furthermore, if for α ∈ U, L(α, ·) has a unique minimizer x(α) over Q, then f is differentiable at α and the gradient of f is given by ∇f(α) = ∇_α L(α, x(α)).
Applying the above lemma to the objective function f defined by equation (5), we have:

Theorem 1. The objective function f defined by (3) (equivalently by (5)) is differentiable and its gradient is given by

    ∇f(α) = e − Y (K0 + Y α α^T Y/(4ρ))_+ Y α.        (6)
Proof. We apply Lemma 2 with X = S^n, Q = Q2 ⊆ S^n, U = Q1 and x = K. To this end, we first prove the uniqueness of K(α). Suppose there are two minimizers K1, K2 of the problem arg min_{K∈S^n_+} L(α, K). By the first-order optimality condition for the minimizer K1, we have ⟨∇_K L(α, K1), K2 − K1⟩_F ≥ 0. Considering the minimizer K2, we also have ⟨∇_K L(α, K2), K1 − K2⟩_F ≥ 0. Noting that ∇_K L(α, K) = −(1/2) Y α α^T Y + 2ρ(K − K0) and adding the above two first-order optimality inequalities together, we get 2ρ‖K2 − K1‖²_F ≤ 0, which means that K1 = K2 and hence completes the proof of the uniqueness of K(α). Now the desired result follows directly from Lemma 2 by noting that the derivative of L with respect to the first argument is ∇_α L(α, K) = e − Y K Y α.
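A minimal NumPy sketch of the gradient formula (6), with the positive part computed inline so the snippet is self-contained (the helper name is ours):

```python
import numpy as np

def grad_f(alpha, K0, y, rho):
    # Gradient of the objective (Eq. 6): e - Y (K0 + Y a a^T Y/(4 rho))_+ Y a.
    v = y * alpha                                   # Y alpha
    lam, U = np.linalg.eigh(K0 + np.outer(v, v) / (4.0 * rho))
    Kv = (U * np.maximum(lam, 0.0)) @ (U.T @ v)     # K(alpha) applied to Y alpha
    return 1.0 - y * Kv                             # e - Y K(alpha) Y alpha
```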
Indeed, we can go further and establish the Lipschitz continuity of ∇f based on the strong convexity of L(α, ·). To this end, we first establish a useful lemma.

Lemma 3. For any α1, α2 ∈ Q1, there holds ‖(K0 + Y α1 α1^T Y/(4ρ))_+ − (K0 + Y α2 α2^T Y/(4ρ))_+‖_F ≤ (‖α1‖ + ‖α2‖)‖α1 − α2‖/(4ρ).
Proof. Let ∇_K L(α, ·) denote the gradient with respect to K, and consider the minimization problem arg min_{K∈Q2} L(α, K). By the first-order optimality conditions, for any K ∈ Q2 there holds ⟨∇_K L(α, K(α)), K − K(α)⟩_F ≥ 0. Applying the above inequality twice implies that ⟨∇_K L(α1, K(α1)), K(α2) − K(α1)⟩_F ≥ 0 and ⟨∇_K L(α2, K(α2)), K(α1) − K(α2)⟩_F ≥ 0. Consequently, ⟨∇_K L(α1, K(α1)) − ∇_K L(α2, K(α2)), K(α2) − K(α1)⟩_F ≥ 0. Substituting the fact that ∇_K L(α, K) = −(1/2) Y α α^T Y + 2ρ(K − K0) into the above, we have 4ρ‖K(α1) − K(α2)‖²_F ≤ ⟨Y(α2 α2^T − α1 α1^T)Y, K(α2) − K(α1)⟩_F ≤ ‖Y(α2 α2^T − α1 α1^T)Y‖_F ‖K(α2) − K(α1)‖_F. Consequently,

    ‖K(α1) − K(α2)‖_F ≤ ‖Y(α2 α2^T − α1 α1^T)Y‖_F/(4ρ) ≤ ‖α2 α2^T − α1 α1^T‖_F/(4ρ),        (7)

where the last inequality follows from the fact that Y is an orthonormal matrix, since y_i ∈ {±1} and Y = diag(y1, . . . , yn). Note that ‖α2 α2^T − α1 α1^T‖_F = ‖(α2 − α1)α2^T − α1(α1 − α2)^T‖_F ≤ (‖α1‖ + ‖α2‖)‖α1 − α2‖. Putting this back into inequality (7) completes the proof of the lemma.
It is interesting to point out that the above lemma can alternatively be established by delicate techniques in matrix analysis. To see this, recall that a spectral function G : S^n → S^n is defined by applying a real-valued function g to the eigenvalues of its argument, i.e., for any K ∈ S^n with eigen-decomposition K = U diag(λ1, . . . , λn) U^T, G(K) := U diag(g(λ1), . . . , g(λn)) U^T. The perturbation inequality in matrix analysis [1, Lemma VII.5.5] shows that if g is Lipschitz continuous with Lipschitz constant ℓ, then ‖G(K1) − G(K2)‖_F ≤ ℓ‖K1 − K2‖_F for all K1, K2 ∈ S^n. Applying this inequality with g(t) = max(0, t), K1 = K0 + Y α1 α1^T Y/(4ρ) and K2 = K0 + Y α2 α2^T Y/(4ρ) implies equation (7), and hence Lemma 3. However, we prefer the original proof presented for Lemma 3, since it explains more clearly how the strong convexity of the regularization term ‖K − K0‖²_F plays a critical role in the analysis.
From the above lemma, we can establish the Lipschitz continuity of the gradient of the objective function.

Theorem 2. The gradient of the objective function given by (6) is Lipschitz continuous with Lipschitz constant L = λ_max(K0) + nC²/ρ, i.e., for any α1, α2 ∈ Q1 the following inequality holds: ‖∇f(α1) − ∇f(α2)‖ ≤ (λ_max(K0) + nC²/ρ)‖α1 − α2‖.
Proof. For any α1, α2 ∈ Q1, from the representation of ∇f in Theorem 1, the term ‖∇f(α1) − ∇f(α2)‖ can be bounded by

    ‖Y{(K0 + Y α1 α1^T Y/(4ρ))_+ − (K0 + Y α2 α2^T Y/(4ρ))_+} Y α1‖ + ‖Y (K0 + Y α2 α2^T Y/(4ρ))_+ Y (α2 − α1)‖.        (8)

Now it remains to estimate the two terms on the right-hand side of inequality (8). We begin with the first one by applying Lemma 3:

    ‖Y{(K0 + Y α1 α1^T Y/(4ρ))_+ − (K0 + Y α2 α2^T Y/(4ρ))_+} Y α1‖
      ≤ ‖(K0 + Y α1 α1^T Y/(4ρ))_+ − (K0 + Y α2 α2^T Y/(4ρ))_+‖_F ‖α1‖
      ≤ ‖α1‖(‖α1‖ + ‖α2‖)‖α1 − α2‖/(4ρ) ≤ (nC²/(2ρ))‖α1 − α2‖,        (9)

where the first inequality follows from the fact that Y is an orthonormal matrix. For the second term on the right-hand side of inequality (8), we apply the fact, established in the proof of Lemma 1, that K(α) ∈ Q2 for any α ∈ Q1. Indeed, ‖Y (K0 + Y α2 α2^T Y/(4ρ))_+ Y (α2 − α1)‖ ≤ λ_max(Y (K0 + Y α2 α2^T Y/(4ρ))_+ Y)‖α2 − α1‖ ≤ λ_max((K0 + Y α2 α2^T Y/(4ρ))_+)‖α2 − α1‖ ≤ [λ_max(K0) + nC²/(4ρ)]‖α1 − α2‖. Putting this and (9) back into (8) completes the proof of Theorem 2.
Simplified Projected Gradient Method (SPGM)
1. Choose η ≥ λ_max(K0) + nC²/ρ. Let ε > 0, α_0 ∈ Q1 be given and set k = 0.
2. Compute ∇f(α_k) = e − Y (K0 + Y α_k α_k^T Y/(4ρ))_+ Y α_k.
3. Set α_{k+1} = P_{Q1}(α_k + ∇f(α_k)/η).
4. Set k ← k + 1. Go to step 2 until the stopping criterion is less than ε.

Table 1: Pseudo-code of the projected gradient method
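A minimal NumPy sketch of the loop in Table 1 (ours, not the authors' code; it assumes a routine project_Q1 implementing the projection of Equation (10), sketched after that equation below, and uses the step norm as a stand-in stopping criterion):

```python
import numpy as np

def spgm(K0, y, C, rho, project_Q1, eps=1e-6, max_iter=1000):
    # Simplified projected gradient method (Table 1).
    n = len(y)
    eta = np.linalg.eigvalsh(K0).max() + n * C**2 / rho  # eta >= lambda_max(K0) + nC^2/rho
    alpha = project_Q1(np.full(n, C / 2.0), y, C)        # any feasible starting point
    for _ in range(max_iter):
        v = y * alpha
        lam, U = np.linalg.eigh(K0 + np.outer(v, v) / (4.0 * rho))
        grad = 1.0 - y * ((U * np.maximum(lam, 0.0)) @ (U.T @ v))
        alpha_next = project_Q1(alpha + grad / eta, y, C)
        if np.linalg.norm(alpha_next - alpha) < eps:     # stand-in stopping rule
            return alpha_next
        alpha = alpha_next
    return alpha
```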
4 Smooth Optimization Algorithms
This section is based on the theoretical analysis above, mainly Theorem 2. We first outline a simplified version of the projected gradient method proposed in [15] and show it has a convergence rate
of O(1/k) where k is the iteration number. We can further develop a smooth optimization approach
[17, 18] for indefinite SVM (5). This scheme has an optimal convergence rate O(1/k²) for smooth problems and has been applied to various problems, e.g. [6].
4.1 Simplified Projected Gradient Method
In [15], the objective function was smoothed by adding a quadratic term (see details in Section 3 there), and a projected gradient algorithm was then proposed to solve this approximate problem. Using the explicit gradient representation in Theorem 1, we formulate its simplified version in Table 1, where the projection P_{Q1} : R^n → Q1 is defined, for any α ∈ R^n, by

    P_{Q1}(α) = arg min_{β∈Q1} ‖α − β‖².        (10)
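The projection (10) can be computed exactly in O(n log n) by a sorting-based routine, as noted in [15]. As a simple alternative, the KKT conditions give α(ν) = clip(β + νy, 0, C) for a scalar multiplier ν, and y^T α(ν) is nondecreasing in ν, so ν can be found by bisection. A sketch (ours; it assumes both classes are present in y):

```python
import numpy as np

def project_Q1(beta, y, C, tol=1e-10):
    # Euclidean projection onto Q1 = {a : a^T y = 0, 0 <= a <= C} (Eq. 10).
    g = lambda nu: y @ np.clip(beta + nu * y, 0.0, C)  # nondecreasing in nu
    lo, hi = -1.0, 1.0
    while g(lo) > 0.0:
        lo *= 2.0
    while g(hi) < 0.0:
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0.0 else (lo, mid)
    return np.clip(beta + 0.5 * (lo + hi) * y, 0.0, C)
```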
Indeed, from Theorem 2 we can further obtain the following result by developing the techniques in Sections 2.1.5, 2.2.3 and 2.2.4 of [18].

Lemma 4. Let η ≥ λ_max(K0) + nC²/ρ and let {α_k : k ∈ N} be given by the simplified projected gradient method in Table 1. For any α ∈ Q1, the following inequality holds: f(α_{k+1}) ≥ f(α) + η⟨α_k − α_{k+1}, α − α_k⟩ + (η/2)‖α_k − α_{k+1}‖².
Proof. We know from Theorem 2 that ∇f is Lipschitz continuous with Lipschitz constant L = λ_max(K0) + nC²/ρ. Then we have f(α) − f(α_k) − ⟨∇f(α_k), α − α_k⟩ = ∫_0^1 ⟨∇f(τα + (1 − τ)α_k) − ∇f(α_k), α − α_k⟩ dτ ≥ −L ∫_0^1 τ‖α − α_k‖² dτ ≥ −(η/2)‖α − α_k‖². Applying this inequality with α = α_{k+1} implies that

    −f(α_k) − ⟨∇f(α_k), α_{k+1} − α_k⟩ ≥ −f(α_{k+1}) − (η/2)‖α_{k+1} − α_k‖².        (11)

Let φ(α) = −f(α_k) − ⟨∇f(α_k), α − α_k⟩ + (η/2)‖α − α_k‖², so that α_{k+1} = arg min_{α∈Q1} φ(α). Then, by the first-order optimality condition at α_{k+1}, for any α ∈ Q1 there holds ⟨∇φ(α_{k+1}), α − α_{k+1}⟩ ≥ 0, i.e., −⟨∇f(α_k), α − α_{k+1}⟩ ≥ η⟨α_{k+1} − α_k, α_{k+1} − α⟩. Adding this and (11) together yields −f(α_k) − ⟨∇f(α_k), α − α_k⟩ ≥ −f(α_{k+1}) + η⟨α_k − α_{k+1}, α − α_k⟩ + (η/2)‖α_k − α_{k+1}‖². Also, since −f is convex, −f(α) ≥ −f(α_k) − ⟨∇f(α_k), α − α_k⟩. Combining this with the above inequality finishes the proof of the lemma.
Theorem 3. Let η ≥ λ_max(K0) + nC²/ρ and let the iteration sequence {α_k : k ∈ N} be given by the simplified projected gradient method in Table 1. Then we have

    f(α_{k+1}) ≥ f(α_k) + (η/2)‖α_{k+1} − α_k‖².        (12)

Moreover,

    max_{α∈Q1} f(α) − f(α_k) ≤ (η/(2k))‖α_0 − α*‖²,        (13)

where α* is an optimal solution of the problem max_{α∈Q1} f(α).
Nesterov's Smooth Optimization Method (SMM)
1. Let ε > 0, k = 0, initialize α_0 ∈ Q1 and let L = λ_max(K0) + nC²/ρ.
2. Compute ∇f(α_k) = e − Y (K0 + Y α_k α_k^T Y/(4ρ))_+ Y α_k.
3. Compute y_k = P_{Q1}(α_k + ∇f(α_k)/L).
4. Compute z_k = P_{Q1}(α_0 + Σ_{i=0}^k (i + 1)∇f(α_i)/(2L)).
5. Set α_{k+1} = (2/(k + 3)) z_k + ((k + 1)/(k + 3)) y_k.
6. Set k ← k + 1. Go to step 2 until the stopping criterion is less than ε.

Table 2: Pseudo-code of the first-order Nesterov smooth optimization method
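A self-contained NumPy sketch of the loop in Table 2 (ours; project_Q1 as sketched after Equation (10), with the step norm again standing in for the stopping criterion):

```python
import numpy as np

def smm(K0, y, C, rho, project_Q1, eps=1e-6, max_iter=1000):
    # Nesterov's smooth optimization method (Table 2); returns the last y_k,
    # since the convergence guarantee is stated for f(y_k).
    n = len(y)
    L = np.linalg.eigvalsh(K0).max() + n * C**2 / rho
    alpha0 = project_Q1(np.zeros(n), y, C)
    alpha, grad_sum, yk = alpha0.copy(), np.zeros(n), alpha0.copy()
    for k in range(max_iter):
        v = y * alpha
        lam, U = np.linalg.eigh(K0 + np.outer(v, v) / (4.0 * rho))
        g = 1.0 - y * ((U * np.maximum(lam, 0.0)) @ (U.T @ v))      # step 2
        yk = project_Q1(alpha + g / L, y, C)                        # step 3
        grad_sum += (k + 1) * g / (2.0 * L)
        zk = project_Q1(alpha0 + grad_sum, y, C)                    # step 4
        alpha_next = 2.0 / (k + 3) * zk + (k + 1.0) / (k + 3) * yk  # step 5
        if np.linalg.norm(alpha_next - alpha) < eps:
            break
        alpha = alpha_next
    return yk
```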
Proof. Applying Lemma 4 with α = α_k yields inequality (12). To prove inequality (13), we first apply Lemma 4 with α = α* to get, for any i, max_{α∈Q1} f(α) − f(α_{i+1}) ≤ −η⟨α_i − α_{i+1}, α* − α_i⟩ − (η/2)‖α_i − α_{i+1}‖² = (η/2)‖α* − α_i‖² − (η/2)‖α* − α_{i+1}‖². Summing over i from 0 to k − 1 and noting from (12) that {max_{α∈Q1} f(α) − f(α_k) : k ∈ N} is decreasing, we have k(max_{α∈Q1} f(α) − f(α_k)) ≤ Σ_{i=0}^{k−1} (max_{α∈Q1} f(α) − f(α_{i+1})) ≤ (η/2)‖α* − α_0‖². This completes the proof of the theorem.

From the above theorem, the sequence {f(α_k) : k ∈ N} is monotonically increasing and the iteration complexity of SPGM is O(L/ε) for finding an ε-optimal solution.
4.2 Nesterov's Smooth Optimization Method
In [18, 17], Nesterov proposed an efficient smooth optimization method for solving convex programming problems of the form

    min_{x∈U} g(x)

where g is a convex function with Lipschitz continuous gradient and U is a closed convex set in R^n. Specifically, suppose there exists L > 0 such that ‖∇g(x) − ∇g(x′)‖ ≤ L‖x − x′‖ for all x, x′ ∈ U. The smooth optimization approach needs to introduce a proxy-function d(x) associated with the set U. It is assumed to be continuous and strongly convex on U with convexity parameter σ > 0. Let x_0 = arg min_{x∈U} d(x). Without loss of generality, assume that d(x_0) = 0. Thus, strong convexity of d means that, for any x ∈ U, d(x) ≥ (σ/2)‖x − x_0‖². Then a specific first-order smooth optimization scheme detailed in [18] can be applied to the function g, with iteration complexity O(√(L/ε)) for an ε-optimal solution. The first-order method needs a proxy-function associated with Q1. Here, we define the proxy-function by d(α) = (1/2)‖α − α_0‖² with α_0 ∈ Q1. The Lipschitz constant of ∇f was established in Theorem 2 as L = λ_max(K0) + nC²/ρ. Translating the first-order Nesterov scheme [18, Section 3] to our problem (5), we obtain the smooth optimization algorithm for indefinite SVM; see its pseudo-code in Table 2. One can see [17] for variants with general step sizes.
The effectiveness of the first-order Nesterov algorithm largely depends on Steps 2, 3 and 4 outlined in Table 2. By Theorem 1, the computation of ∇f(α_k) in Step 2 needs an eigen-decomposition. Steps 3 and 4 are the projection problem (10), with α replaced respectively by α_k + ∇f(α_k)/L and α_0 + Σ_{i=0}^k (i + 1)∇f(α_i)/(2L). The convergence of this optimal method was shown in [18]: max_{α∈Q1} f(α) − f(y_k) ≤ 4L‖α_0 − α*‖²/((k + 1)(k + 2)), where α* is one of the optimal solutions. It is worth pointing out that neither {f(α_k) : k ∈ N} nor {f(y_k) : k ∈ N} need monotonically increase; however, the method can be made monotone by a simple modification of the algorithm [18]. In addition, the above estimate of the Lipschitz constant L could be loose in practice, and one could further accelerate the algorithm by using a line search scheme [16].
4.3 Related Work and Complexity Discussion
We list the theoretical time complexity of algorithms for running Indefinite SVM. It is worth noting that reaching a target precision of ε means that −f(α_k) − min_{α∈Q1}(−f(α)) = max_{α∈Q1} f(α) − f(α_k) ≤ ε. However, this does not mean the dual gap as used in [15] is less than ε. In [15], the objective function is smoothed by adding a quadratic term, and a projected gradient algorithm and an analytic center cutting plane method (ACCPM)¹ are then proposed. As proved in Theorem 3, the number of iterations of the projected gradient method is usually O(L/ε). In each iteration, the main cost, O(n³), comes from the eigen-decomposition. Hence, the overall complexity of SPGM is O(n³L/ε). As discussed in [15], ACCPM has an overall complexity of O(n⁴ log(1/ε)²) for finding an ε-optimal solution. However, this method needs interior point methods at each iteration, which would be slow for large-scale datasets.
Chen and Ye [4] reformulated indefinite SVM as an appealing semi-infinite quadratically constrained linear program (SIQCLP) without applying extra smoothing techniques. There, the algorithm iteratively solves a linear program with a finite number of quadratic constraints. The iteration complexity of semi-infinite linear programming is usually O(1/ε³). In each iteration, one needs to find maximum-violation constraints, which involves an eigen-decomposition of complexity O(n³). Hence, the overall complexity is O(n³/ε³). The main limitation of this approach is that one needs to store a growing subset of quadratic constraints indexed by n × n matrices and iteratively solve a quadratically constrained linear program (QCLP). The QCLP sub-problem can be solved by general software packages, e.g. Mosek (http://www.mosek.com/), which is generally slow in our experience. This tends to make the algorithm inefficient during the iteration process, although pruning techniques were proposed to avoid retaining too many quadratic constraints.
Based on our theoretical results (Theorem 2), Nesterov's smooth optimization method can be applied. The complexity of this smooth optimization method (SMM) mainly relies on the eigenvalue decomposition in Step 2 of Table 2, which costs O(n³). Steps 3 and 4 are projections onto the convex region Q1, which cost O(n log n) as pointed out in [15]. The first-order smooth optimization approach [17, 18] has iteration complexity O(√(L/ε)) for finding an ε-optimal solution. Consequently, the overall complexity is O(n³√(L/ε)). Hence, in this theoretical comparison the complexity of smooth optimization is better than that of the simplified projected gradient method (SPGM) and SIQCLP. Compared with ACCPM, SMM has better dependence on the sample number n but worse dependence on the precision ε.
5 Experimental Validation
We run our proposed smooth optimization approach and simplified projected gradient method on
various datasets to validate our analysis. The experiments are done on several benchmark data sets
from the UCI repository [19] including Sonar, Ionosphere, Heart, Pima Indians Diabetes, Breast
Cancer, and USPS with digits 3 and 5. For the USPS dataset, we randomly select 600 samples for each digit. All results reported are based on 10 random training/test partitions with ratio 4/1. In each data split, as in [4], we first generate a Gaussian kernel matrix K with the hyper-parameter determined by cross-validation on the training data using LIBSVM, and then construct indefinite matrices by subtracting a small noisy matrix, i.e., K0 := K − 0.1Ê. Here, the noisy matrix Ê = (E + E^T)/2, where E is randomly generated with zero mean and identity covariance. For all methods, the parameters C and ρ for Indefinite SVM are tuned by cross-validation, and we terminate the algorithm if the relative change of the objective value is less than 10⁻⁶.
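For reference, the indefinite-kernel construction used in this setup can be sketched as follows (the helper name and seed handling are ours):

```python
import numpy as np

def make_indefinite(K, scale=0.1, seed=0):
    # K0 = K - 0.1 * E_hat, with E_hat = (E + E^T)/2 symmetrizing a matrix E
    # whose entries are standard normal (zero mean, identity covariance).
    rng = np.random.default_rng(seed)
    E = rng.standard_normal(K.shape)
    return K - scale * (E + E.T) / 2.0
```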
In Table 3, we report the average test set accuracy (%) and CPU time (in seconds) of the different algorithms: the smooth optimization method (SMM), the simplified projected gradient method (SPGM), the analytic center cutting plane method (ACCPM), and semi-infinite quadratically constrained linear programming (SIQCLP). For the QCLP sub-problem in the SIQCLP method, we use the Mosek software package (http://www.mosek.com/). We can see that test accuracies are statistically the same across the different algorithms, which validates our analysis of the objective function. In particular, we observe that SMM is consistently more efficient than the other methods, especially for a large number of training samples. SIQCLP needs much more time since, in each iteration, it needs to solve a quadratically constrained linear program. In Figure 1, we plot the objective values versus iteration on Sonar and Diabetes for SMM, SPGM, and ACCPM. The SIQCLP approach is not included here since its objective value is not based on iterations over the variable α, and hence does not directly yield an increasing sequence of objective values, in contrast to the other three algorithms. From Figure 1, we can see that SMM converges faster than SPGM, which is consistent with the complexity analysis. The convergence of ACCPM is quite similar to that of SMM, especially for small-sized datasets, which coincides with the complexity analysis in Section 4.3 since ACCPM generally attains a high precision. However, ACCPM needs more time in each iteration than SMM, and this observation becomes more apparent for the relatively large datasets shown in the time comparison of Table 3.

¹MATLAB code is available at http://www.princeton.edu/~rluss/IndefiniteSVM.htm
Data           Size   λ_min    λ_max   | SMM               | SPGM              | ACCPM              | SIQCLP
Sonar           208   −1.38    21.47   | 76.34%    0.74s   | 76.34%    5.12s   | 75.12%     3.20s   | 76.09%   244.55s
Ionosphere      351   −2.08   101.34   | 93.14%    5.47s   | 93.43%   28.93s   | 93.54%    22.73s   | 93.54%   455.81s
Heart           270   −1.98   178.03   | 79.81%    3.54s   | 79.44%   12.05s   | 79.25%    11.96s   | 79.25%   689.17s
Diabetes        768   −3.44   539.12   | 70.00%   39.93s   | 69.86%  345.48s   | 70.52%   678.85s   | 69.73%  3134.31s
Breast-cancer   683   −2.87   290.41   | 95.93%    5.71s   | 96.02%   50.13s   | 96.02%   212.96s   | 95.40%  4610.82s
USPS-35        1200   −3.72   112.65   | 96.33%   23.22s   | 96.33%  236.00s   | 96.04%  3713.05s   | 95.54%  5199.17s

Table 3: Average test set accuracy (%) and CPU time in seconds (s) of the different algorithms, where λ_max (λ_min) denotes the average maximum (minimum) eigenvalue of the indefinite kernel matrix over training samples.
[Figure: two line plots of the objective value against the iteration number, one per dataset.]

Figure 1: Objective value versus iteration: Sonar (left) and Diabetes (right). Curves: SMM (blue), SPGM (red) and ACCPM (black).
6 Conclusion

In this paper we analyzed the regularization formulation for training SVM with indefinite kernels proposed by Luss and d'Aspremont [15]. We showed that the objective function of interest is continuously differentiable with Lipschitz continuous gradient. Our elementary analysis greatly facilitates the application of gradient-based methods. We formulated a simplified version of the projected gradient method presented in [15] and showed that it has a convergence rate of O(1/k). We further developed Nesterov's smooth optimization method [17, 18] for Indefinite SVM, which has an optimal convergence rate of O(1/k²) for smooth problems. Experiments on various datasets validate our analysis and the efficiency of our proposed optimization approach. In the future, we plan to further accelerate the algorithm by using a line search scheme [16]. We are also applying this method to real biological datasets, such as protein sequence analysis using sequence alignment measures.

Acknowledgements
This work is supported by EPSRC grant EP/E027296/1.
References
[1] R. Bhatia. Matrix Analysis. Graduate Texts in Mathematics. Springer, 1997.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[3] J. F. Bonnans and A. Shapiro. Optimization problems with perturbation: A guided tour. SIAM Review, 40: 202-227, 1998.
[4] J. Chen and J. Ye. Training SVM with indefinite kernels. ICML, 2008.
[5] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods. Cambridge University Press, 2000.
[6] A. d'Aspremont, O. Banerjee and L. El Ghaoui. First-order methods for sparse covariance selection. SIAM Journal on Matrix Analysis and its Applications, 30: 56-66, 2007.
[7] J. M. Danskin. The Theory of Max-Min and its Applications to Weapons Allocation Problems. Springer-Verlag, New York, 1967.
[8] T. Graepel, R. Herbrich, P. Bollmann-Sdorra, and K. Obermayer. Classification on pairwise proximity data. NIPS, 1998.
[9] B. Haasdonk. Feature space interpretation of SVMs with indefinite kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27: 482-492, 2005.
[10] R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, 1991.
[11] R. I. Kondor and J. Lafferty. Diffusion kernels on graphs and other discrete input spaces. ICML, 2002.
[12] C. Lemaréchal and C. Sagastizábal. Practical aspects of the Moreau-Yosida regularization: theoretical preliminaries. SIAM Journal on Optimization, 7: 367-385, 1997.
[13] G. R. G. Lanckriet, N. Cristianini, P. L. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5: 27-72, 2004.
[14] H.-T. Lin and C.-J. Lin. A study on sigmoid kernels for SVM and the training of non-PSD kernels by SMO-type methods. Technical Report, National Taiwan University, 2003.
[15] R. Luss and A. d'Aspremont. Support vector machine classification with indefinite kernels. NIPS, 2007.
[16] A. Nemirovski. Efficient methods in convex programming. Lecture Notes, 1994.
[17] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2003.
[18] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103: 127-152, 2005.
[19] D. Newman, S. Hettich, C. Blake, and C. Merz. UCI repository of machine learning datasets. 1998.
[20] C. S. Ong, X. Mary, S. Canu, and A. J. Smola. Learning with non-positive kernels. ICML, 2004.
[21] E. Pekalska, P. Paclik, and R. P. W. Duin. A generalized kernel approach to dissimilarity-based classification. Journal of Machine Learning Research, 2: 175-211, 2002.
[22] V. Roth, J. Laub, M. Kawanabe, and J. M. Buhmann. Optimal cluster preserving embedding of nonmetric proximity data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25: 1540-1551, 2003.
[23] H. Saigo, J.-P. Vert, N. Ueda, and T. Akutsu. Protein homology detection using string alignment kernels. Bioinformatics, 20: 1682-1689, 2004.
[24] B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. The MIT Press, 2001.
[25] A. J. Smola, Z. L. Óvári, and R. C. Williamson. Regularization with dot-product kernels. NIPS, 2000.
[26] G. Wu, Z. Zhang, and E. Y. Chang. An analysis of transformation on non-positive semidefinite similarity matrix for kernel machines. Technical Report, UCSB, 2005.
2,937 | 3,662 | Variational Inference for the
Nested Chinese Restaurant Process
Chong Wang
Computer Science Department
Princeton University
David M. Blei
Computer Science Department
Princeton University
[email protected]
[email protected]
Abstract
The nested Chinese restaurant process (nCRP) is a powerful nonparametric
Bayesian model for learning tree-based hierarchies from data. Since its posterior distribution is intractable, current inference methods have all relied on MCMC
sampling. In this paper, we develop an alternative inference technique based
on variational methods. To employ variational methods, we derive a tree-based
stick-breaking construction of the nCRP mixture model, and a novel variational
algorithm that efficiently explores a posterior over a large set of combinatorial
structures. We demonstrate the use of this approach for modeling text and handwritten digits, where we show we can adapt the nCRP to continuous data as well.
1 Introduction
For many application areas, such as text analysis and image analysis, learning a tree-based hierarchy
is an appealing approach to illuminate the internal structure of the data. In such settings, however,
the combinatorial space of tree structures makes model selection unusually daunting. Traditional
techniques, such as cross-validation, require us to enumerate all possible model structures; this kind
of methodology quickly becomes infeasible in the face of the set of all trees.
The nested Chinese restaurant process (nCRP) [1] addresses this problem by specifying a generative
probabilistic model for tree structures. This model can then be used to discover structure from data
using Bayesian posterior computation. The nCRP has been applied to several problems, such as
fitting hierarchical topic models [1] and discovering taxonomies of images [2, 3].
The nCRP is based on the Chinese restaurant process (CRP) [4], which is closely linked to the
Dirichlet process in its application to mixture models [5]. As a complicated Bayesian nonparametric
model, posterior inference in an nCRP-based model is intractable, and previous approaches all rely on Gibbs sampling [1, 2, 3]. While powerful and flexible, Gibbs sampling can be slow to converge and
it is difficult to assess the convergence [6, 7]. Here, we develop an alternative for posterior inference
for nCRP-based models.
Our solution is to use the optimization-based variational methods [8]. The idea behind variational
methods is to posit a simple distribution over the latent variables, and then to fit this distribution to
be close to the posterior of interest. Variational methods have been successfully applied to several
Bayesian nonparametric models, such as Dirichlet process (DP) mixtures [9, 10, 11], hierarchical
Dirichlet processes (HDP) [12], Pitman-Yor processes [13] and Indian buffet processes (IBP) [14].
The work presented here is unique in that our optimization of the variational distribution searches the
combinatorial space of trees. Similar to Gibbs sampling, our method includes an exploration of a
latent structure associated with the free parameters in addition to their values. First, we describe the
tree-based stick-breaking construction of nCRP, which is needed for variational inference. Second,
we develop our variational inference algorithm, which explores the infinite tree space associated with
the nCRP. Finally, we study the performance of our algorithm on discrete and continuous data sets.
2 Nested Chinese restaurant process mixtures
The nested Chinese restaurant process (nCRP) is a distribution over hierarchical partitions [1]. It
generalizes the Chinese restaurant process (CRP), which is a distribution over partitions. The CRP
can be described by the following metaphor. Imagine a restaurant with an infinite number of tables,
and imagine customers entering the restaurant in sequence. The dth customer sits at a table according to the following distribution,

    p(c_d = k | c_{1:(d−1)}) ∝ { m_k  if k is a previously occupied table;  γ  if k is a new table },        (1)

where m_k is the number of previous customers sitting at table k and γ is a positive scalar. After D customers have sat down, their seating plan describes a partition of D items.
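As a concrete illustration of Equation 1 (this sketch is ours, not from the paper):

```python
import numpy as np

def crp_next_table(counts, gamma, rng=np.random.default_rng()):
    # Sample a table index under Eq. 1; counts holds m_k for occupied tables,
    # and the returned index len(counts) means "a new table".
    probs = np.append(np.asarray(counts, dtype=float), gamma)
    return rng.choice(len(probs), p=probs / probs.sum())
```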
In the nested CRP, imagine now that tables are organized in a hierarchy: there is one table at the first
level; it is associated with an infinite number of tables at the second level; each second-level table
is associated with an infinite number of tables at the third level; and so on until the Lth level. Each
customer enters at the first level and comes out at the Lth level, generating a path with L tables as
she sits in each restaurant. Moving from a table at level ℓ to one of its subtables at level ℓ + 1, the
customer draws following the CRP using Equation 1. (This description is slightly different from the
metaphor in [1], but leads to the same distribution.)
The nCRP mixture model can be derived by analogy to the CRP mixture model [15]. (From now on, we will use the term "nodes" instead of "tables.") Each node is associated with a parameter w, where w ∼ G0 and G0 is called the base distribution. Each data point is drawn by first choosing a path in the tree according to the nCRP, and then choosing its value from a distribution that depends on the parameters in that path. An additional hidden variable x represents other latent quantities that can be used in this distribution. This is a generalization of the model described in [1]. For data D = {t_n}_{n=1}^N, the nCRP mixture assumes that the nth data point t_n is drawn as follows:

1. Draw a path c_n | c_{1:(n−1)} ∼ nCRP(γ, c_{1:(n−1)}), which contains L nodes from the tree (a sketch of this step follows below).
2. Draw a latent variable x_n ∼ p(x_n | λ).
3. Draw an observation t_n ∼ p(t_n | W_{c_n}, x_n, τ).

The parameters λ and τ are associated with the latent variable x and the data generating distribution, respectively. Note that W_{c_n} contains the w_i's selected by the path c_n. Specific applications of the nCRP mixture depend on the particular forms of p(w), p(x) and p(t | W_c, x).
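A minimal sketch of the path draw in step 1 above (ours; it represents a node as the tuple of child indices from the root, and assumes child_counts maps a node to the customer counts of its subtables):

```python
import numpy as np

def ncrp_draw_path(child_counts, gamma, L, rng=np.random.default_rng()):
    # Walk L levels, running the CRP of Eq. 1 among each node's children.
    path = (1,)  # root
    for _ in range(L - 1):
        counts = np.asarray(child_counts.get(path, []), dtype=float)
        probs = np.append(counts, gamma)
        k = rng.choice(len(probs), p=probs / probs.sum())
        path = path + (k + 1,)  # children indexed from 1; k == len(counts) is a new table
    return path
```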
The corresponding posterior of the latent variables decomposes the data into a collection of paths, and
provides distributions of the parameters attached to each node in those paths. Even though the nCRP
assumes an "infinite" tree, the paths associated with the data will only populate a portion of that tree.
Through this posterior, the nCRP mixture can be used as a flexible tree-based mixture model that
does not assume a particular tree structure in advance of the data.
Hierarchical topic models. The nCRP mixture described above includes the hierarchical topic
model of [1] as a special case. In that model, observed data are documents, i.e., a list of N words
from a fixed vocabulary. The nodes of the tree are associated with distributions over words ("topics"),
and each document is associated with both a path in the tree and with a vector of proportions over its
levels. Given a path, a document is generated by repeatedly generating level assignments from the
proportions and then words from the corresponding topics. In the notation above, p(w) is a Dirichlet
distribution over the vocabulary simplex, p(x) is a joint distribution of level proportions (from a
Dirichlet) and level assignments (N draws from the proportions), and p(t|Wc , x) are the N draws
from the topics (for each word) associated with x.
Tree-based hierarchical component analysis. For continuous data, if p(w), p(x) and p(t | W_c, x) are appropriate Gaussian distributions, we obtain hierarchical component analysis, a generalization of probabilistic principal component analysis (PPCA) [16, 17]. In this model, w is the component parameter for the node it belongs to. Each path c can be thought of as a PPCA model with factor loading W_c specified by that path. Each data point then chooses a path (and thus a PPCA model specified by that path) and draws the factors x. This model can also be thought of as an infinite mixture of PPCA models, where each PPCA model can share components. In addition, we can incorporate general exponential family PCA [18, 19] into the nCRP framework.¹

Figure 1: Left. A possible tree structure in a 3-level nCRP. Right. The tree-based stick-breaking construction of a 3-level nCRP.
2.1 Tree-based stick-breaking construction
CRP mixtures can be equivalently formulated using the Dirichlet process (DP) as a distribution over the distribution of each data point's random parameter [21, 4]. An advantage of expressing the CRP mixture with a DP is that the draw from the DP can be explicitly represented using the stick-breaking construction [22]. The DP bundles the scaling parameter γ and base distribution G0. A draw from a DP(γ, G0) is described as

    v_i ∼ Beta(1, γ),  π_i = v_i ∏_{j=1}^{i−1} (1 − v_j),  w_i ∼ G0,  i ∈ {1, 2, . . .},  G = Σ_{i=1}^∞ π_i δ_{w_i},

where the π_i are the stick lengths, and Σ_{i=1}^∞ π_i = 1 almost surely. This representation also illuminates the discreteness of a distribution drawn from a DP.
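In practice a draw from G is approximated by truncating the stick; a minimal sketch (ours):

```python
import numpy as np

def stick_breaking(gamma, truncation, rng=np.random.default_rng()):
    # Truncated weights pi_i = v_i * prod_{j<i}(1 - v_j), with v_i ~ Beta(1, gamma).
    v = rng.beta(1.0, gamma, size=truncation)
    v[-1] = 1.0  # close the stick so the weights sum to one
    return v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
```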
For the nCRP, we develop a similar stick-breaking construction. At the first level, the root node's stick length is π_1 = v_1 ≡ 1. For all the nodes at the second level, their stick lengths are constructed as for the DP, i.e., π_{1i} = π_1 v_{1i} ∏_{j=1}^{i−1} (1 − v_{1j}) for i ∈ {1, 2, . . .}, with Σ_{i=1}^∞ π_{1i} = π_1 = 1. The stick-breaking construction is then applied to each of these stick segments at the second level. For example, the π_{11} portion of the stick is divided up into an infinite number of pieces according to the stick-breaking process. For the segment π_{1k}, the stick lengths of its children are π_{1ki} = π_{1k} v_{1ki} ∏_{j=1}^{i−1} (1 − v_{1kj}) for i ∈ {1, 2, . . .}, with Σ_{i=1}^∞ π_{1ki} = π_{1k}. The whole process continues for L levels. This construction is best understood from Figure 1 (Right).

Although this stick represents an infinite tree, the nodes are countable and each node is uniquely identified by a sequence of L numbers. We will denote all Beta draws by V, each of which is an independent draw from Beta(1, γ) (except for the root v_1, which is equal to one).
The tree-based stick-breaking construction lets us calculate the conditional probability of a path given V. Let the path be c = [1, c_2, . . . , c_L]; then

    p(c | V) = ∏_{ℓ=1}^L π_{1,c_2,...,c_ℓ} = ∏_{ℓ=1}^L v_{1,c_2,...,c_ℓ} ∏_{j=1}^{c_ℓ−1} (1 − v_{1,c_2,...,j}).        (2)
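Equation 2 is straightforward to evaluate given the Beta draws along a path; a sketch (ours; it assumes v maps a node tuple to its draw, with the root's draw implicitly equal to one):

```python
import math

def log_path_prob(c, v):
    # log p(c | V) for a path c = (1, c_2, ..., c_L), following Eq. 2.
    lp = 0.0
    for ell in range(1, len(c)):
        prefix = c[:ell]
        lp += math.log(v[prefix + (c[ell],)])  # v_{1, c_2, ..., c_ell}
        lp += sum(math.log(1.0 - v[prefix + (j,)]) for j in range(1, c[ell]))
    return lp
```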
By integrating out V in Equation 2, we recover the nCRP. Given Equation 2, the joint probability of a data set under the nCRP mixture is

    p(t_{1:N}, x_{1:N}, c_{1:N}, V, W) = p(V) p(W) ∏_{n=1}^N p(c_n | V) p(x_n) p(t_n | W_{c_n}, x_n).        (3)

This representation is the basis for variational inference.
3 Variational inference for the nCRP mixture
The central computational problem in Bayesian modeling is posterior inference: Given data, what is
the conditional distribution of the latent variables in the model? In the nCRP mixture, these latent
variables provide the tree structure and node parameters.
¹We note that Bach and Jordan [20] studied tree-dependent component analysis, a generalization of independent component analysis where the components are organized in a tree. This model expresses a different philosophy: their tree reflects the actual conditional dependencies among the components. Data are not generated by choosing a path first, but by a linear transformation of all components in the tree.
Posterior inference in an nCRP mixture has previously relied on Gibbs sampling, in which we sample
from a Markov chain whose stationary distribution is the posterior [1, 2, 3]. Variational inference
provides an alternative methodology: Posit a simple (e.g., factorized) family of distributions over
the latent variables indexed by free parameters (called "variational parameters"). Then fit those
parameters to be close in KL divergence to the true posterior of interest [8, 23].
Variational inference for Bayesian nonparametric models uses a truncated stick-breaking representation in the variational distribution [9]: free variational parameters are allowed only up to the
truncation level. If the truncation is too large, the variational algorithm will still isolate only a subset
of components; if the truncation is too small, methods have been developed to expand the truncated
stick as part of the variational algorithm [10]. In the nCRP mixture, however, the challenge is that the
tree structure is too large even to effectively truncate. We will address this by defining search criteria
for adaptively adjusting the structure of the variational distribution, searching over the set of trees to
best accommodate the data.
3.1 Variational inference based on the tree-based stick-breaking construction
We first address the problem of variational inference with a truncated tree of fixed structure. Suppose
that we have a truncated tree T and let MT be the set of all nodes in T . Our family of variational
distributions is defined as follows,
$$q(W, V, x_{1:N}, c_{1:N}) = \prod_{i \notin M_T} p(w_i)\, p(v_i) \prod_{i \in M_T} q(w_i)\, q(v_i) \prod_{n=1}^{N} q(c_n)\, q(x_n), \quad (4)$$
where: (1) Distributions $p(w_i)$ and $p(v_i)$ for $i \notin M_T$ are the prior distributions, containing
no variational parameters; (2) Distributions $q(w_i)$ and $q(v_i)$ for $i \in M_T$ contain the variational
parameters that we want to optimize for the truncated tree $T$; (3) Distribution $q(c_n)$ is the variational
multinomial distribution over all the possible paths, not just those in the truncated tree $T$. Note that
there is an infinite number of paths; we will address this issue below; (4) Distribution $q(x_n)$ is the
variational distribution for the latent variable $x_n$, and it is in the same family of distributions as $p(x_n)$.
In summary, this family of distributions retains the infinite tree structure. Moreover, this family is
nested [10, 11]: If a truncated tree $T_1$ is a subtree of a truncated tree $T_2$, then variational distributions
defined over $T_1$ are a special case of those defined over $T_2$. Theoretically, the solution found using
T2 is at least as good as the one found using T1 . This allows us to use greedy search to find a better
tree structure.
With the variational distributions (Equation 4) and the joint distribution (Equation 3), we turn to the
details of posterior inference. Equivalent to minimizing the KL divergence is tightening the bound on the likelihood
of the observations $D = \{t_n\}_{n=1}^{N}$ given by Jensen's inequality [8],
$$\log p(t_{1:N}) \geq E_q[\log p(t_{1:N}, V, W, x_{1:N}, c_{1:N})] - E_q[\log q(V, W, x_{1:N}, c_{1:N})]$$
$$= \sum_{i \in M_T} E_q\left[\log \frac{p(w_i)\,p(v_i)}{q(w_i)\,q(v_i)}\right] + \sum_{n=1}^{N} E_q\left[\log \frac{p(x_n)}{q(x_n)}\right] + \sum_{n=1}^{N} E_q\left[\log \frac{p(t_n \mid x_n, W_{c_n})\,p(c_n \mid V)}{q(c_n)}\right] \triangleq \mathcal{L}(q). \quad (5)$$
We optimize $\mathcal{L}(q)$ using coordinate ascent. First we isolate the terms that only contain $q(c_n)$,
$$\mathcal{L}(q(c_n)) = E_q[\log p(t_n \mid x_n, W_{c_n})\, p(c_n \mid V)] - E_q[\log q(c_n)]. \quad (6)$$
Then we find the optimal solution for $q(c_n)$ by setting the gradient to zero:
$$q(c_n = c) \propto S_{n,c} \triangleq \exp\left\{E_q[\log p(c_n = c \mid V)] + E_q[\log p(t_n \mid x_n, W_c)]\right\}. \quad (7)$$
Since $q(c_n = c)$ takes infinitely many values, operating coordinate ascent over $q(c_n = c)$ is difficult. We
plug the optimal $q(c_n)$ (Equation 7) into Equation 6 to obtain the lower bound
$$\mathcal{L}(q(c_n)) = \log \sum_c S_{n,c}. \quad (8)$$
Two issues arise: 1) the variational distribution $q(c_n)$ has an infinite number of values, and we need
to find an efficient way to manipulate it; 2) the lower bound $\log \sum_c S_{n,c}$ (Equation 8) contains an
infinite sum, which poses a problem in evaluation. In the appendix, we show that all the operations
can be done only via the truncated tree $T$. We summarize the results as follows. Let $\bar{c}$ be a path in
$T$, either an inner path (a path ending at an inner node) or a full path (a path ending at a leaf node).
Note that the inner path is only defined for the truncated tree $T$. The number of such $\bar{c}$ is finite. In the
nCRP tree, denote $\mathrm{child}(\bar{c})$ as the set of all full paths that are not in $T$ but include $\bar{c}$ as a sub path.
As a special case, if $\bar{c}$ is a full path, $\mathrm{child}(\bar{c})$ just contains itself. As shown in the appendix, we can
compute these quantities efficiently:
$$q(c_n = \bar{c}) \triangleq \sum_{c \in \mathrm{child}(\bar{c})} q(c_n = c) \quad \text{and} \quad S_{n,\bar{c}} \triangleq \sum_{c \in \mathrm{child}(\bar{c})} S_{n,c}. \quad (9)$$
Consequently, iterating over the truncated tree $T$ using $\bar{c}$ is the same as iterating over all the full paths in
the nCRP tree, and these are all we need for doing variational inference.
Next, we move to optimize $q(v_i \mid a_i, b_i)$ for $i \in M_T$, where $a_i$ and $b_i$ are variational parameters for the
Beta distribution $q(v_i)$. Let the path containing $v_i$ be $[1, c_2, \dots, c_{\ell'}]$, where $\ell' \leq L$. We isolate the
term that only contains $v_i$ from the lower bound (Equation 5),
$$\mathcal{L}(q(v_i)) = E_q[\log p(v_i) - \log q(v_i)] + \sum_{n=1}^{N} \sum_c q(c_n = c) \log p(c_n = c \mid V). \quad (10)$$
After plugging Equation 2 into Equation 10 and setting the gradient to zero, we obtain the optimal $q(v_i)$,
$$q(v_i) \propto v_i^{a_i^* - 1} (1 - v_i)^{b_i^* - 1},$$
$$a_i^* = 1 + \sum_{n=1}^{N} \sum_{c_{\ell'+1}, \dots, c_L} q(c_n = [1, c_2, \dots, c_{\ell'}, c_{\ell'+1}, \dots, c_L]),$$
$$b_i^* = \alpha + \sum_{n=1}^{N} \sum_{j, c_{\ell'+1}, \dots, c_L :\, j > c_{\ell'}} q(c_n = [1, c_2, \dots, c_{\ell'-1}, j, c_{\ell'+1}, \dots, c_L]), \quad (11)$$
where the infinite sums involved can be computed using Equation 9.
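In code, the update of Equation 11 reduces to accumulating two responsibility masses from $q(c_n)$, both finite once computed on the truncated tree via Equation 9. A minimal sketch with illustrative argument names:

```python
def update_q_v(alpha, mass_through, mass_after):
    """Coordinate update q(v_i) = Beta(a_i*, b_i*) from Equation 11.

    mass_through: total q-mass, over documents, of paths passing through node i;
    mass_after:   total q-mass of paths passing through a later sibling (j > c_l')."""
    a_star = 1.0 + mass_through
    b_star = alpha + mass_after
    return a_star, b_star
```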
The variational update functions for $W$ and $x$ depend on the actual distributions we use, and deriving
them is straightforward. If they include an infinite sum, then we apply similar techniques as we did
for $q(v_i)$.
3.2 Refining the tree structure during variational inference
Since our variational distribution is nested, a larger truncated tree will always (in theory) achieve
a lower bound at least as tight as that of a smaller truncated tree. This allows us to search the infinite tree
space until a certain criterion is satisfied (e.g., the relative change of the lower bound). To this end,
we present several heuristics to guide the search. All these operations are performed on the truncated
tree $T$.
Grow. This operation is similar to what Gibbs sampling does in searching the tree space. We
implement two heuristics: 1) randomly choose several data points, and for each of them sample
a path $\bar{c}$ according to $q(c_n = \bar{c})$; if it is an inner path, expand it to a full path; 2) for every inner
path in $T$, first compute the quantity $g(\bar{c}) = \sum_{n=1}^{N} q(c_n = \bar{c})$. Then sample an inner path (say $\bar{c}^*$)
according to $g(\bar{c})$, and expand it to a full path.
Prune. If a certain path gets very little probability assignment from all data points, we eliminate
this path: for path $c$, the criterion is $\sum_{n=1}^{N} q(c_n = c) < \gamma$, where $\gamma$ is a small number (we use
$\gamma = 10^{-6}$). This mimics Gibbs sampling in the sense that for the nCRP (or CRP), if a certain path (table)
gets no assignments in the sampling process, it will never get any assignment any more, according to
Equation 1.
Merge. If paths $i$ and $j$ give almost equal posterior distributions, we merge these two paths [24]. The
measure is $J(i, j) = P_i^T P_j / (|P_i|\,|P_j|)$, where $P_i = [q(c_1 = i), \dots, q(c_N = i)]^T$. We use 0.95 as
the threshold in our experiments.
In theory, Prune and Merge may decrease the lower bound. Empirically, we found that even when
they do, the effect is negligible (while the size of the tree is reduced). For continuous data settings, we
additionally implement the Split method used in [24].
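A minimal sketch of the Prune and Merge criteria, assuming the per-document path posteriors have been gathered into an N x P array `q` with one column per path of $T$ (that array layout is an assumption made for illustration):

```python
import numpy as np

def prune_candidates(q, gamma=1e-6):
    """Indices of paths whose total mass sum_n q(c_n = c) falls below gamma."""
    return np.flatnonzero(q.sum(axis=0) < gamma)

def merge_score(q, i, j):
    """J(i, j) = P_i^T P_j / (|P_i| |P_j|); paths are merged when J > ~0.95."""
    p_i, p_j = q[:, i], q[:, j]
    return p_i @ p_j / (np.linalg.norm(p_i) * np.linalg.norm(p_j))
```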
4 Experiments
In this section, we demonstrate variational inference for the nCRP. We analyze both discrete and
continuous data using the two applications discussed in Section 2.
                       Per-word test set likelihood
Method             | JACM              | Psy. Review       | PNAS
Gibbs sampling     | -5.3922 ± 0.0052  | -5.7834 ± 0.0149  | -6.4961 ± 0.0068
Var. inference     | -5.4331 ± 0.0100  | -5.8430 ± 0.0153  | -6.5736 ± 0.0050
Var. inference (G) | -5.4495 ± 0.0118  | -5.8593 ± 0.0157  | -6.5996 ± 0.0153
Table 1: Test set likelihood comparison on three datasets. Var. inference (G): variational inference
initialized from the initialization of Gibbs sampling. Variational inference can give competitive
performance on test set likelihood.
4.1 Hierarchical topic modeling
For discrete data, we compare variational inference with Gibbs sampling for hierarchical
topic modeling. Three corpora are used in the experiments: (1) JACM: a collection of 536 abstracts
from the Journal of the ACM from years 1987 to 2004 with a vocabulary size of 1,539 and around
68K words; (2) Psy. Review: a collection of 1,272 psychology abstracts from Psychological Review
from years 1967 to 2003, with a vocabulary size of 1,971 and around 137K words; (3) PNAS: a
collection of 5,000 abstracts from the Proceedings of the National Academy of Sciences from years
1991 to 2001, with a vocabulary size of 7,762 and around 895K words. Those terms occurring in
fewer than 5 documents were removed.
Local maxima can be a problem for both Gibbs sampling and variational inference. To avoid them in
Gibbs sampling, we randomly restart the sampler 200 times and take the trajectory with the highest
average posterior likelihood. We run the Gibbs sampling for 10000 iterations and collect the results
for post analysis. For variational inference, we use two types of initializations: 1) similar to Gibbs
sampling, we gradually add data points during the variational inference as well, adding a new path for
each document in the initialization; 2) we initialize the variational inference from the initialization
for Gibbs sampling, using the MAP estimate from one Gibbs sample. We set $L = 3$ for all the
experiments and use the same hyperparameters in both algorithms. Specifically, the stick-breaking
prior parameter $\alpha$ is set to 1.0; the symmetric Dirichlet prior parameter for the topics is set to 1.0; the
prior for level proportions is skewed to favor high levels (50, 20, 10), as suggested in [1]. We
run the variational inference until the relative change of log-likelihood is less than 0.001.
Per-word test set likelihood. We use test set likelihood as a measure of performance. The procedure is to divide the corpus into a training set $D_{\mathrm{train}}$ and a test set $D_{\mathrm{test}}$, and approximate the likelihood
of $D_{\mathrm{test}}$ given $D_{\mathrm{train}}$. We use the same method as in Teh et al. [12] to approximate it. Specifically, we
use posterior means $\hat{\theta}$ and $\hat{\beta}$ to represent the estimated topic mixture proportions over the $L$ levels and the
topic multinomial parameters. For the variational method, we use
$$p(\{t_1, \dots, t_N\}_{\mathrm{test}}) = \prod_{n=1}^{N} \sum_{c} q(c_n = c) \prod_{j} \sum_{\ell} \hat{\theta}_{n,\ell}\, \hat{\beta}_{c_\ell, t_{nj}},$$
where $\hat{\theta}$ and $\hat{\beta}$ are estimated using mean values from the variational distributions. For Gibbs sampling,
we use $S$ samples and compute
$$p(\{t_1, \dots, t_N\}_{\mathrm{test}}) = \prod_{n=1}^{N} \frac{1}{S} \sum_{s=1}^{S} \sum_{c} \delta_{c_n^s}(c) \prod_{j} \sum_{\ell} \hat{\theta}^s_{n,\ell}\, \hat{\beta}^s_{c_\ell, t_{nj}},$$
where $\hat{\theta}^s$ and $\hat{\beta}^s$ are estimated using sample $s$ [25, 12]. We use 30 samples collected at a lag of
10 after a 200-sample burn-in for a document in the test set. Note that $\frac{1}{S}\sum_{s=1}^{S}\sum_{c}\delta_{c_n^s}(c)$ gives the
empirical estimate of $p(c_n)$, which in variational inference we approximate using $q(c_n)$. Table 1
shows the test likelihood comparison using five-fold cross validation. This shows that our model can give
competitive performance in terms of test set likelihood. The discrepancy is similar to that in [12],
where variational inference is compared with collapsed Gibbs sampling for the HDP.
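For concreteness, the variational estimate above can be computed per held-out document as sketched below. The container types (a list of paths, a level-proportion vector, a per-node topic table) are illustrative, and for long documents the product over words should be carried in log space to avoid underflow.

```python
import numpy as np

def doc_log_lik(q_c, theta_hat, beta_hat, words, paths):
    """log p(t_n) for one document: sum over paths c of q(c_n = c) times
    prod_j sum_l theta_hat[l] * beta_hat[c_l][t_nj]."""
    total = 0.0
    for q, c in zip(q_c, paths):
        # p(word | path c): mix the L topics sitting on the nodes of the path
        per_word = sum(theta_hat[l] * beta_hat[c[:l + 1]][words]
                       for l in range(len(c)))
        total += q * np.prod(per_word)
    return np.log(total)
```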
Topic visualizations. Figures 2 and 3 show the tree-based topic visualizations from JACM and
PNAS datasets. These are quite similar to those obtained by Gibbs sampling (see [1]).
4.2 Modeling handwritten digits using hierarchical component analysis
For continuous data, we use hierarchical component analysis for modeling handwritten digits
(http://archive.ics.uci.edu/ml). This dataset contains 3823 handwritten digits as a training set and
Figure 2: A sub network discovered on JACM dataset, each topic represented by top 5 terms. The
whole tree has 30 nodes, with an average branching factor 2.64.
Figure 3: A sub network discovered on PNAS dataset, each topic represented by top 5 terms. The
whole tree has 45 nodes, with an average branching factor 2.93.
1797 as a testing set. Each digit contains 64 integer attributes, ranging from 0 to 16. As described in
Section 2, we use PPCA [16] as the basic model for each path. We use a global mean parameter $\mu$ for
all paths, although a model with an individual mean parameter for each path can be similarly derived.
We put broad priors over the parameters, similar to those in variational Bayesian PCA [17]. The
stick-breaking prior parameter $\alpha$ is set to 1.0; for each node, $w \sim \mathcal{N}(0, 10^3)$ and $\mu \sim \mathcal{N}(0, 10^3)$;
the inverse of the variance for the noise model in PPCA is $\tau$, with $\tau \sim \mathrm{Gamma}(10^{-3}, 10^{-3})$. Again,
we run the variational inference until the relative change of log-likelihood is less than 0.001.
We compare the reconstruction error with PCA. To compute the reconstruction error for our model,
we first select the path for each data point using its MAP estimate, $\hat{c}_n = \arg\max_c q(c_n = c)$.
Then we use an approach similar to [26, 24] to reconstruct $t_n$,
$$\hat{t}_n = W_{\hat{c}_n}\left(W_{\hat{c}_n}^T W_{\hat{c}_n}\right)^{-1} W_{\hat{c}_n}^T (t_n - \hat{\mu}) + \hat{\mu}.$$
We test our model using depth L = 2, 3, 4, 5. All of our models run within 2 minutes. The
reconstruction errors for both the training and testing set are shown in Table 2. Our model gives lower
reconstruction errors than PCA.
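The reconstruction above is just a projection onto the column space of the MAP path's loading matrix, followed by adding back the mean; a minimal sketch:

```python
import numpy as np

def reconstruct(t, W, mu):
    """t_hat = W (W^T W)^{-1} W^T (t - mu) + mu for the MAP path's loadings W."""
    coeffs = np.linalg.solve(W.T @ W, W.T @ (t - mu))
    return W @ coeffs + mu
```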
5 Conclusions
In this paper, we presented the variational inference algorithm for the nested Chinese restaurant
process based on its tree-based stick-breaking construction. Our result indicates that the variational
            Reconstruction error on handwritten digits
#Depth | HCA (tr) | PCA (tr) | HCA (te) | PCA (te)
2(9)   | 631.6    | 863.0    | 699.4    | 878.5
3(14)  | 559.8    | 722.3    | 585.6    | 727.7
4(18)  | 463.4    | 621.0    | 506.1    | 633.0
5(22)  | 384.8    | 553.0    | 461.8    | 564.2
Table 2: Reconstruction error comparison (tr: train; te: test). HCA stands for hierarchical component
analysis. PCA uses the L largest components. In the first column, 2(9) means L = 2 with 9 nodes
inferred using our model. Others are similarly defined. HCA gives lower reconstruction errors.
inference is a powerful alternative to the widely used Gibbs sampling. We also adapt the
nCRP to model continuous data, e.g., in hierarchical component analysis.
Acknowledgements. We thank anonymous reviewers for insightful suggestions. David M. Blei is
supported by ONR 175-6343, NSF CAREER 0745520, and grants from Google and Microsoft.
Appendix: efficiently manipulating $S_{n,c}$ and $q(c_n = c)$
Case 1: All nodes of the path are in $T$, $c \in M_T$. Let $Z_0 \triangleq E_q[\log p(t_n \mid x_n, W_c)]$. We have
$$S_{n,c} = \exp\left\{ E_q\left[ \sum_{\ell=1}^{L} \Big( \log(v_{1, c_2, \dots, c_\ell}) + \sum_{j=1}^{c_\ell - 1} \log(1 - v_{1, c_2, \dots, j}) \Big) \right] + Z_0 \right\}. \quad (12)$$
Case 2: At least one node is not in $T$, $c \notin M_T$. Although $c \notin M_T$, $c$ must have some nodes
in $M_T$. Then $c$ can be written as $c = [\bar{c}, c_{\ell'+1}, \dots, c_L]$, where $\bar{c} \triangleq [1, c_2, \dots, c_{\ell'}] \in M_T$ and
$[\bar{c}, c_{\ell'+1}, \dots, c_\ell] \notin M_T$ for any $\ell > \ell'$. In the truncated tree $T$, let $j_0$ be the maximum index
of the child nodes whose parent path is $\bar{c}$; then we know that if $c_{\ell'+1} > j_0$, $[\bar{c}, c_{\ell'+1}, \dots, c_L] \notin M_T$.
Now we fix the sub path $\bar{c}$ and let $[c_{\ell'+1}, \dots, c_L]$ vary (satisfying $c_{\ell'+1} > j_0$). All these possible
paths constitute a set: $\mathrm{child}(\bar{c}) \triangleq \{[\bar{c}, c_{\ell'+1}, \dots, c_L] : c_{\ell'+1} > j_0\}$. According to Equation 4, for
any $c \in \mathrm{child}(\bar{c})$, $Z_0 \triangleq E_q[\log p(t_n \mid x_n, W_c)]$ is constant, since the variational distribution for $w$
outside the truncated tree is the same prior distribution. We have
$$\sum_{c \in \mathrm{child}(\bar{c})} S_{n,c} = \sum_{c \in \mathrm{child}(\bar{c})} \exp\left\{ Z_0 + E_q\left[ \sum_{\ell=1}^{L} \Big( \log(v_{1, \dots, c_\ell}) + \sum_{j=1}^{c_\ell - 1} \log(1 - v_{1, c_2, \dots, j}) \Big) \right] \right\}$$
$$= \frac{\exp\left(Z_0 + (L - \ell')\, E_p[\log(v)]\right)}{\left(1 - \exp\left(E_p[\log(1 - v)]\right)\right)^{L - \ell'}} \exp\left\{ E_q\left[ \sum_{\ell=1}^{\ell'} \Big( \log(v_{1, c_2, \dots, c_\ell}) + \sum_{j=1}^{c_\ell - 1} \log(1 - v_{1, c_2, \dots, j}) \Big) \right] \right\}$$
$$\times \exp\left\{ E_q\left[ \sum_{j=1}^{j_0} \log(1 - v_{1, c_2, \dots, c_{\ell'}, j}) \right] \right\}, \quad (13)$$
where $v \sim \mathrm{Beta}(1, \alpha)$. Such cases cover all inner nodes in the truncated tree $T$. Note that Case 1
is a special case of Case 2, obtained by setting $\ell' = L$. Given all these, $\sum_c S_{n,c}$ can be computed
efficiently.
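The only non-obvious ingredient of Equation 13 is the geometric-series factor that collapses the infinitely many below-truncation levels; a sketch, with the Beta(1, $\alpha$) prior expectations supplied through the digamma function:

```python
import numpy as np
from scipy.special import digamma

def below_truncation_factor(L, l0, e_log_v, e_log_1mv):
    """exp((L - l0) E_p[log v]) / (1 - exp(E_p[log(1 - v)]))**(L - l0):
    the sum over infinitely many siblings at one level, applied once per
    level for the L - l0 levels below the truncated node (Equation 13)."""
    return np.exp((L - l0) * e_log_v) / (1.0 - np.exp(e_log_1mv)) ** (L - l0)

alpha = 1.0
e_log_v = digamma(1.0) - digamma(1.0 + alpha)      # E_p[log v], v ~ Beta(1, alpha)
e_log_1mv = digamma(alpha) - digamma(1.0 + alpha)  # E_p[log(1 - v)]
print(below_truncation_factor(L=3, l0=1, e_log_v=e_log_v, e_log_1mv=e_log_1mv))
```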
Furthermore, given Equation 13 and Equation 7, we define
$$q(c_n = \bar{c}) \triangleq \sum_{c \in \mathrm{child}(\bar{c})} q(c_n = c) \propto \sum_{c \in \mathrm{child}(\bar{c})} S_{n,c}, \quad (14)$$
which corresponds to the sum of probabilities from all paths in $\mathrm{child}(\bar{c})$. We note that this organization
only depends on the truncated tree $T$ and is sufficient for variational inference.
References
[1] Blei, D. M., T. L. Griffiths, M. I. Jordan, et al. Hierarchical topic models and the nested Chinese restaurant process. In NIPS. 2003.
[2] Bart, E., I. Porteous, P. Perona, et al. Unsupervised learning of visual taxonomies. In CVPR. 2008.
[3] Sivic, J., B. C. Russell, A. Zisserman, et al. Unsupervised discovery of visual object class hierarchies. In CVPR. 2008.
[4] Aldous, D. Exchangeability and related topics. In École d'Été de Probabilités de Saint-Flour XIII, 1983, pages 1–198. Springer, 1985.
[5] Ferguson, T. S. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1(2):209–230, 1973.
[6] Neal, R. Probabilistic inference using Markov chain Monte Carlo methods. Tech. Rep. CRG-TR-93-1, Department of Computer Science, University of Toronto, 1993.
[7] Robert, C., G. Casella. Monte Carlo Statistical Methods. Springer-Verlag, New York, NY, 2004.
[8] Jordan, M. I., Z. Ghahramani, T. S. Jaakkola, et al. An introduction to variational methods for graphical models. Learning in Graphical Models, 1999.
[9] Blei, D. M., M. I. Jordan. Variational methods for the Dirichlet process. In ICML. 2004.
[10] Kurihara, K., M. Welling, N. A. Vlassis. Accelerated variational Dirichlet process mixtures. In NIPS. 2006.
[11] Kurihara, K., M. Welling, Y. W. Teh. Collapsed variational Dirichlet process mixture models. In IJCAI. 2007.
[12] Teh, Y. W., K. Kurihara, M. Welling. Collapsed variational inference for HDP. In NIPS. 2008.
[13] Sudderth, E. B., M. I. Jordan. Shared segmentation of natural scenes using dependent Pitman-Yor processes. In NIPS. 2008.
[14] Doshi, F., K. T. Miller, J. Van Gael, et al. Variational inference for the Indian buffet process. In AISTATS, vol. 12. 2009.
[15] Escobar, M. D., M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90:577–588, 1995.
[16] Tipping, M. E., C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 61:611–622, 1999.
[17] Bishop, C. M. Variational principal components. In ICANN. 1999.
[18] Collins, M., S. Dasgupta, R. E. Schapire. A generalization of principal components analysis to the exponential family. In NIPS. 2001.
[19] Mohamed, S., K. A. Heller, Z. Ghahramani. Bayesian exponential family PCA. In NIPS. 2008.
[20] Bach, F. R., M. I. Jordan. Beyond independent components: Trees and clusters. JMLR, 4:1205–1233, 2003.
[21] Antoniak, C. E. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, 2(6):1152–1174, 1974.
[22] Sethuraman, J. A constructive definition of Dirichlet priors. Statistica Sinica, 4:639–650, 1994.
[23] Wainwright, M., M. Jordan. Variational inference in graphical models: The view from the marginal polytope. In Allerton Conference on Control, Communication and Computation. 2003.
[24] Ueda, N., R. Nakano, Z. Ghahramani, et al. SMEM algorithm for mixture models. Neural Computation, 12(9):2109–2128, 2000.
[25] Griffiths, T. L., M. Steyvers. Finding scientific topics. Proc Natl Acad Sci USA, 101 Suppl 1:5228–5235, 2004.
[26] Tipping, M. E., C. M. Bishop. Mixtures of probabilistic principal component analysers. Neural Computation, 11(2):443–482, 1999.
| 3662 |@word cox:1 polynomial:1 compression:2 proportion:6 loading:1 decomposition:1 tr:4 accommodate:1 contains:7 wcn:5 series:1 genetic:1 document:6 ecole:1 csn:2 current:2 written:2 must:1 partition:3 update:1 bart:1 stationary:1 generative:1 discovering:1 selected:1 item:1 greedy:2 leaf:1 fewer:1 blei:5 provides:2 node:23 toronto:1 sits:2 allerton:1 five:1 constructed:1 c2:12 beta:5 replication:1 fitting:1 theoretically:2 ra:1 actual:2 metaphor:2 little:1 becomes:1 discover:1 notation:1 moreover:1 factorized:1 what:2 kind:1 developed:1 finding:1 transformation:1 every:1 unusually:1 stick:22 control:2 grant:1 positive:1 t1:8 understood:1 negligible:1 local:1 io:2 acad:1 path:52 merge:2 burn:1 plus:1 initialization:4 studied:1 specifying:1 collect:1 pit:1 bi:3 unique:1 testing:2 atomic:1 chongw:1 implement:2 digit:6 procedure:1 spot:1 j0:5 area:1 empirical:1 thought:2 word:9 integrating:1 griffith:2 get:3 close:2 selection:1 put:1 collapsed:3 optimize:3 equivalent:1 map:2 customer:6 reviewer:1 straightforward:1 resolution:1 qc:1 rule:1 tnf:1 deriving:1 steyvers:1 population:2 searching:2 coordinate:2 annals:2 hierarchy:4 construction:11 imagine:3 suppose:1 us:2 satisfying:1 continues:1 observed:1 ep:2 wang:1 enters:1 solved:1 calculate:1 decrease:1 removed:1 highest:1 russell:1 complexity:1 depend:2 tight:1 segment:2 upon:1 efficiency:1 basis:1 joint:3 represented:3 train:1 describe:1 monte:2 query:2 analyser:1 choosing:3 outside:1 whose:2 heuristic:2 larger:1 lag:1 quite:1 say:1 widely:1 reconstruct:1 cvpr:2 favor:1 statistic:2 itself:1 sequence:2 advantage:1 reconstruction:7 uci:1 achieve:2 academy:1 description:1 convergence:1 parent:1 p:2 ijcai:1 cluster:1 generating:3 escobar:1 object:1 derive:1 develop:4 coupling:1 pose:1 ibp:1 eq:13 c:2 come:1 posit:2 closely:1 attribute:1 exploration:1 routing:1 require:1 fix:1 generalization:4 anonymous:1 crg:1 around:3 ic:1 exp:7 vary:1 estimation:3 sometime:1 proc:1 combinatorial:2 concurrent:1 largest:1 v1i:1 successfully:1 reflects:1 gaussian:1 always:1 rna:1 occupied:1 pn:5 avoid:1 exchangeability:1 jaakkola:1 derived:2 refining:1 she:1 likelihood:12 indicates:1 tech:1 sense:1 psy:2 inference:41 dependent:2 ferguson:1 eliminate:1 hidden:1 perona:1 manipulating:1 expand:3 issue:2 among:1 flexible:2 arg:1 plan:1 special:4 initialize:1 marginal:1 equal:2 never:1 sampling:20 represents:2 broad:1 unsupervised:2 icml:1 mimic:1 simplex:1 t2:3 discrepancy:1 others:1 hca:4 employ:1 ete:1 xiii:1 randomly:2 gamma:1 divergence:1 national:1 individual:1 microsoft:1 organization:1 interest:2 evaluation:1 flour:1 chong:1 mixture:26 pc:3 behind:1 natl:1 bundle:1 ncrp:34 chain:2 edge:1 tree:62 indexed:1 divide:1 initialized:1 desired:1 mk:2 psychological:1 column:1 modeling:6 combinatoric:1 boolean:1 ar:1 retains:1 assignment:5 subset:1 too:3 dtrain:2 dependency:1 chooses:1 adaptively:1 density:1 explores:2 probabilistic:5 together:1 quickly:1 mouse:2 again:1 central:1 satisfied:1 containing:2 choose:1 dtest:2 american:1 gag:1 de:2 includes:2 explicitly:1 depends:2 vi:20 piece:1 queuing:1 root:2 performed:1 closed:1 view:1 linked:1 doing:1 portion:2 analyze:1 relied:2 recover:1 complicated:1 competitive:2 parallel:1 ass:1 variance:1 efficiently:4 miller:1 sitting:1 bayesian:11 handwritten:4 reversibly:1 carlo:2 trajectory:1 processor:1 maxc:1 casella:1 definition:1 involved:1 mohamed:1 doshi:1 associated:10 ppca:7 dataset:3 adjusting:1 improves:1 organized:2 segmentation:1 actually:1 tipping:2 methodology:2 planar:1 zisserman:1 daunting:1 done:1 though:1 
furthermore:1 just:2 crp:9 until:4 hand:1 google:1 scientific:1 building:1 effect:1 usa:1 contain:3 true:1 evolution:1 entering:1 symmetric:1 neal:1 during:2 skewed:1 uniquely:1 branching:2 criterion:3 illuminates:1 demonstrate:2 tn:15 gh:1 image:2 variational:62 ranging:1 novel:1 multinomial:2 mt:13 empirically:1 tnj:2 haplotype:1 attached:1 discussed:1 association:1 expressing:1 gibbs:19 ai:2 fk:1 similarly:2 hp:4 moving:1 operating:1 base:2 add:2 posterior:16 dye:1 aldous:1 belongs:1 certain:3 verlag:1 inequality:1 onr:1 rep:1 additional:1 employed:1 prune:2 surely:1 converge:1 full:6 pnas:4 adapt:2 plug:1 cross:2 bach:2 divided:1 post:1 manipulate:1 plugging:1 qi:3 basic:1 essentially:1 iteration:1 represent:1 suppl:1 c1:8 addition:2 want:1 grow:1 sudderth:1 archive:1 ascent:2 isolate:3 jordan:7 integer:1 split:1 restaurant:12 fit:2 psychology:1 identified:1 inner:7 idea:1 cn:36 pca:8 york:1 constitute:1 repeatedly:1 enumerate:1 iterating:2 gael:1 nonparametric:6 dna:2 reduced:1 http:1 schapire:1 nsf:1 estimated:3 per:2 discrete:3 dasgupta:1 vol:1 express:1 threshold:1 drawn:3 discreteness:1 pj:2 v1:7 graph:1 sum:4 year:4 run:4 inverse:1 powerful:3 family:8 almost:2 ueda:1 draw:11 appendix:3 scaling:1 ki:2 bound:7 fold:1 sleep:1 scene:1 wc:11 department:3 according:7 truncate:1 describes:1 slightly:1 smaller:1 cardiac:2 wi:8 appealing:1 s1:1 gradually:1 heart:1 equation:15 visualization:2 previously:1 turn:1 needed:1 know:1 generalizes:1 operation:3 apply:1 hierarchical:14 appropriate:1 alternative:4 buffet:2 assumes:2 dirichlet:12 include:2 top:2 porteous:1 saint:1 graphical:3 nakano:1 recombination:2 ghahramani:3 chinese:9 society:1 move:1 g0:5 quantity:3 fa:1 traditional:1 illuminate:1 gradient:2 dp:8 thank:1 sci:1 restart:1 topic:18 seating:1 polytope:1 collected:1 spanning:1 hdp:3 length:4 index:1 minimizing:1 equivalently:1 difficult:2 ql:2 sinica:1 robert:1 taxonomy:2 tightening:1 countable:1 teh:3 observation:2 markov:2 datasets:2 finite:1 truncated:18 defining:1 vlassis:1 communication:2 incorporate:1 discovered:2 inferred:1 david:2 specified:2 kl:2 rad:1 sivic:1 nip:6 address:4 dth:1 suggested:1 beyond:1 below:1 challenge:1 summarize:1 program:2 royal:1 wainwright:1 hot:1 natural:1 rely:1 nth:1 smem:1 sethuraman:1 sn:10 text:2 prior:8 review:3 acknowledgement:1 discovery:1 heller:1 relative:3 suggestion:1 analogy:1 var:3 age:1 validation:2 sufficient:1 subtables:1 share:1 cd:1 pi:2 summary:1 supported:1 free:3 truncation:3 infeasible:1 populate:1 guide:1 face:1 pitman:2 yor:2 distributed:2 van:1 depth:2 xn:16 vocabulary:5 ending:2 stand:1 qn:4 collection:4 welling:3 approximate:3 logic:1 ml:1 global:1 sat:1 corpus:2 continuous:7 latent:10 search:4 decomposes:1 table:17 additionally:1 channel:1 career:1 cl:9 vj:1 did:1 aistats:1 icann:1 statistica:1 whole:3 noise:1 arise:1 hyperparameters:1 child:13 allowed:1 x1:4 west:1 slow:1 ny:1 sub:4 exponential:3 breaking:14 jmlr:1 third:1 down:1 formula:1 minute:1 z0:4 specific:1 bishop:3 insightful:1 jensen:1 er:1 list:1 intractable:2 merging:1 effectively:1 te:3 subtree:1 occurring:1 sorting:1 gap:1 logarithmic:1 antoniak:1 jacm:4 visual:3 strand:1 scalar:1 springer:2 nested:10 corresponds:1 worstcase:1 acm:1 conditional:3 lth:2 formulated:1 consequently:1 v1j:1 shared:2 change:3 infinite:16 except:1 specifically:2 sampler:1 kurihara:3 principal:5 called:2 specie:2 select:1 internal:1 collins:1 indian:2 philosophy:1 accelerated:1 constructive:1 mcmc:1 princeton:4 |
2,938 | 3,663 | Structural inference affects depth perception in the
context of potential occlusion
Ian H. Stevenson and Konrad P. Körding
Department of Physical Medicine and Rehabilitation
Northwestern University
Chicago, IL 60611
[email protected]
Abstract
In many domains, humans appear to combine perceptual cues in a near-optimal,
probabilistic fashion: two noisy pieces of information tend to be combined linearly with weights proportional to the precision of each cue. Here we present
a case where structural information plays an important role. The presence of a
background cue gives rise to the possibility of occlusion, and places a soft constraint on the location of a target, in effect propelling it forward. We present
an ideal observer model of depth estimation for this situation where structural
or ordinal information is important and then fit the model to human data from a
stereo-matching task. To test whether subjects are truly using ordinal cues in a
probabilistic manner we then vary the uncertainty of the task. We find that the
model accurately predicts shifts in subjects' behavior. Our results indicate that the
nervous system estimates depth ordering in a probabilistic fashion and estimates
the structure of the visual scene during depth perception.
1 Introduction
Understanding how the nervous system makes sense of uncertain visual stimuli is one of the central
goals of perception research. One strategy to reduce uncertainty is to combine cues from several
sources into a good joint estimate. If the cues are Gaussian, for instance, an ideal observer should
combine them linearly with weights proportional to the precision of each cue. In the past few
decades, a number of studies have demonstrated that humans combine cues during visual perception
to reduce uncertainty and often do so in near-optimal, probabilistic ways [1, 2, 3, 4].
In most situations, each cue gives noisy information about the variable of interest that can be modeled as a Gaussian likelihood function about the variable. Recently, [5] suggested that subjects
may combine a metric cue (binocular disparity) with ordinal cues (convexity or familiarity of faces)
during depth perception. In these studies ordinal cues were modeled as simple biases. We argue that
the effect of such ordinal cues stems from a structural inference process where an observer estimates
the structure of the visual scene along with depth cues.
The importance of structural inference and occlusion constraints, particularly of hard constraints,
has been noted previously [6, 7, 8]. For instance, it was found that points presented to one eye but
not the other have a perceived depth that is constrained by the position of objects presented to both
eyes. Although these unpaired image points do not contain depth cues in the usual sense, subjects
were able to estimate their depth. This indicates that human subjects indeed use the inferred structure
of a visual scene for the estimation of depth.
Here we formalize the constraints presented by occlusion using a probabilistic framework. We first
present the model and illustrate its ability to describe data from [7]. Then we present results from
a new stereo-vision experiment in which subjects were asked to match the depth of an occluding
or occluded circle. The model accurately predicts human behavior in this task and describes the
changes that occur when we increase depth uncertainty. These results cannot be explained by traditional cue combination or even more recent relevance (causal inference) models [9, 10, 11, 12]. Our
constraint-based approach may thus be useful in understanding how subjects make sense of cluttered
scenes and the impact of structural inference on perception.
2 Theory
2.1 An Ordinal Cue Combination Model
We assume that observers receive noisy information about the depth of objects in the world. For
concreteness, we assume that there is a central object c and a surrounding object s. The exact shapes
and relative positions of these two objects are not important, but naming them will simplify the
notation that follows. We assume that each of these objects has a true, hidden depth (xc and xs ) and
observers receive noisy observations of these depths (yc and ys ).
In a scene with potential occlusion there may be two (or more) possible interpretations of an image
(Fig 1A). When there is no occlusion (structure S1 ) the depth observations of the two objects are
independent. That is, we assume that the depth of the surrounding object in the scene s has no influence on our estimate of the depth of c. The distribution of observations is assumed to be Gaussian
and is physically determined by disparity, shading, texture, or other depth cues and their associated
uncertainties. In this case the joint distribution of the observations given the hidden positions is
$$p(y_c, y_s \mid x_c, x_s, S_1) = p(y_c \mid x_c, S_1)\, p(y_s \mid x_s, S_1) = \mathcal{N}_{y_c}(x_c, \sigma_c)\, \mathcal{N}_{y_s}(x_s, \sigma_s). \quad (1)$$
When occlusion does occur, however, the position of the central object c is bounded by the depth of
the surrounding, occluded object (structure S2 )
$$p(y_c, y_s \mid x_c, x_s, S_2) \propto \begin{cases} \mathcal{N}_{y_c}(x_c, \sigma_c)\, \mathcal{N}_{y_s}(x_s, \sigma_s) & \text{if } x_c > x_s, \\ 0 & \text{if } x_c \leq x_s. \end{cases} \quad (2)$$
An ideal observer can then make use of this ordinal information in estimating the depth of the
occluding object. The (marginal) posterior distribution over the hidden depth of the central object
xc can be found by marginalizing over the depth of the surrounding object xs and possible structures
(S1 and S2 ).
$$p(x_c \mid y_c, y_s) = p(x_c \mid y_c, y_s, S_1)\, p(S_1) + p(x_c \mid y_c, y_s, S_2)\, p(S_2). \quad (3)$$
Figure 1: An occlusion model with soft constraints. (A) Two possible structures leading to the
same observation: one without occlusion ($S_1$) and one with occlusion ($S_2$). (B) Examples of biases in
the posterior estimate of $x_c$ for complete (left), moderate (center), and no relevance (right). In the
cases shown, the observed depth of the central stimulus $y_c$ is the same as the observed depth of the
surrounding stimulus $y_s$. Note that when $y_c \gg y_s$ the constraint will not bias estimates of $x_c$.
Using the assumption of conditional independence and assuming flat priors over the hidden depths
xc and xs , the first term in this expression is
$$p(x_c \mid y_c, y_s, S_1) = \int p(x_c \mid y_c, y_s, x_s, S_1)\, p(x_s \mid y_c, y_s, S_1)\, dx_s$$
$$= \int p(x_c \mid y_c, S_1)\, p(x_s \mid y_s, S_1)\, dx_s = \int \mathcal{N}_{x_c}(y_c, \sigma_c)\, \mathcal{N}_{x_s}(y_s, \sigma_s)\, dx_s = \mathcal{N}_{x_c}(y_c, \sigma_c). \quad (4)$$
The second term is then
$$p(x_c \mid y_c, y_s, S_2) = \int p(x_c \mid y_c, y_s, x_s, S_2)\, p(x_s \mid y_c, y_s, S_2)\, dx_s = \frac{1}{Z} \int p(y_c, y_s \mid x_c, x_s, S_2)\, dx_s$$
$$= \frac{1}{Z} \int_{-\infty}^{x_c} \mathcal{N}_{x_c}(y_c, \sigma_c)\, \mathcal{N}_{x_s}(y_s, \sigma_s)\, dx_s = \frac{1}{Z} \left[ \mathrm{erf}(\lambda_s(x_c - y_s))/2 + 1/2 \right] \mathcal{N}_{x_c}(y_c, \sigma_c), \quad (5)$$
where step 2 uses Bayes' rule and the assumption of flat priors, $\lambda_s = 1/(\sqrt{2}\, \sigma_s)$, and $Z$ is a
normalizing factor. Combining these two terms gives the marginal posterior
$$p(x_c \mid y_c, y_s) = \frac{1}{Z} \left[ (1 - p(S_1)) \left( \mathrm{erf}(\lambda_s(x_c - y_s))/2 + 1/2 \right) + p(S_1) \right] \mathcal{N}_{x_c}(y_c, \sigma_c), \quad (6)$$
which describes the best estimate of the depth of the central object. Intuitively, the term in square
brackets constrains the possible depths of the central object c (Fig 1B). The p(S1 ) term allows for the
possibility that the constraint should not apply. Similar to models of causal inference [11, 12, 9, 10],
the surrounding stimulus may be irrelevant, in which case we should simply rely on the observation
of the target.
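To see how the soft constraint reshapes the estimate, Equation 6 can be evaluated numerically on a grid. The sketch below does this with $\lambda_s = 1/(\sqrt{2}\,\sigma_s)$ as in the reconstruction above; the grid and parameter values are illustrative.

```python
import numpy as np
from scipy.special import erf

def posterior_xc(x, yc, ys, sigma_c, sigma_s, p_s1):
    """Normalized marginal posterior p(x_c | y_c, y_s) of Equation 6 on a grid x."""
    lam = 1.0 / (np.sqrt(2.0) * sigma_s)
    soft_step = erf(lam * (x - ys)) / 2.0 + 0.5           # occlusion constraint
    gauss = np.exp(-0.5 * ((x - yc) / sigma_c) ** 2)      # N_{x_c}(y_c, sigma_c)
    post = ((1.0 - p_s1) * soft_step + p_s1) * gauss
    return post / (post.sum() * (x[1] - x[0]))

x = np.linspace(-6.0, 10.0, 4001)
p = posterior_xc(x, yc=0.0, ys=0.0, sigma_c=1.0, sigma_s=0.05, p_s1=0.0)
print((x * p).sum() * (x[1] - x[0]))   # posterior mean ~0.8, matching Equation 8
```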
Here we have described two specific structures in the world that result in the same observation. Real
world stimuli may result from a much larger set of possible structures. Generally, we can simply
split structures into those with occlusion $O$ and those without occlusion $\neg O$. Above, $S_1$ corresponds
to the set of possible structures without occlusion, $\neg O$, and $S_2$ corresponds to the set of possible
structures with occlusion, $O$. It is not necessary to actually enumerate the possible structures.
Similar to traditional cue combination models, where there is an analytic form for the expected value
of the target (linear combination weighted by the precision of each cue), we can write down analytic
expressions for $E[x_c]$ for at least one case. For $p(S_1) = 0$, $\sigma_s \to 0$, the mean of the marginal
posterior is the expected value of a truncated Gaussian,
$$E(x_c \mid y_s < x_c) = y_c + \sigma_c\, \lambda\!\left( \frac{y_s - y_c}{\sigma_c} \right), \quad (7)$$
where $\lambda(\cdot) = \phi(\cdot)/[1 - \Phi(\cdot)]$, $\phi(\cdot)$ is the PDF of the standard normal distribution and $\Phi(\cdot)$ is the CDF.
For $y_c = y_s$, for instance,
$$E(x_c \mid y_s < x_c) = y_c + 0.8\, \sigma_c. \quad (8)$$
It is important to note that, similar to classical cue combination models, estimation of the target is improved by combining depth information with the occlusion constraint. The variance of p(xc |yc , ys )
is smaller than that of p(xc | yc , ys , S1 ).
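The constant in Equation 8 is the mean of a standard normal truncated at zero, $\lambda(0) = \phi(0)/(1 - \Phi(0)) \approx 0.80$, which a quick simulation confirms:

```python
import numpy as np

# E[x | x > 0] for x ~ N(0, 1) equals sqrt(2 / pi) ~ 0.798, i.e., the
# 0.8 * sigma_c shift of Equation 8 when y_c = y_s.
x = np.random.default_rng(0).normal(size=1_000_000)
print(x[x > 0].mean())
```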
2.2 Modeling Data from Nakayama and Shimojo (1990)
To illustrate the utility of this model, we fit data from [7]. In this experiment subjects were presented
with a rectangle in each eye. Horizontal disparity between the two rectangles gave the impression of
depth. To test how subjects perceive occluded objects, a small vertical bar was presented to one eye,
giving the impression that the large rectangle was occluding the bar and leading to unpaired image
points (Fig 2A). Subjects were then asked to match the depth of this vertical bar by changing the disparity of another image in which the bar was presented in stereo. Despite the absence of direct depth
cues, subjects assigned a depth to the vertical bar. Moreover, for a range of horizontal distances, the
assigned depth was consistent with the constraint provided by the stereo-rectangle (Fig 2B). These
results systematically characterize the effect of structural estimation on depth estimates. Without
ordinal information, the horizontal distance between the rectangle and the vertical bar should have
no effect on the perceived depth of the bar.
In our model $y_c$ and $y_s$ are simply observations on the depth of two objects: in this case, the unpaired
vertical bar and the large rectangle. Since there isn't direct disparity for the vertical bar, we assume
that horizontal distance from the large rectangle serves as the depth cue. In reality an infinity of
depths are compatible with a given horizontal distance (Fig 2A, dotted lines). However, the size and
shape of the vertical bar serve as indirect cues, which we assume generate a Gaussian likelihood
(as in Eq. 1). We fit our model to this data with three free parameters: $\sigma_s$, $\sigma_c$, and a relevance
term $p(O)$. The event $O$ corresponds to occlusion (case $S_2$), while $\neg O$ corresponds to the set of
possible structures leading to the same observation without occlusion. For the valid stimuli, where
occlusion can account for the vertical bar being seen in only one eye, $\sigma_s = 4.45$ arcmin, $\sigma_c = 12.94$
arcmin, and $p(\neg O) = 0.013$ minimized the squared error between the data and model fit (Fig 2C).
For invalid stimuli we assume that $p(\neg O) = 1$, which matches subjects' responses.
Figure 2: Experiment and data from [7]. A) Occlusion puts hard constraints on the possible depth of
unpaired image points (top). This leads to "valid" and "invalid" stimuli (bottom). B) When subjects
were asked to judge the depth of unpaired image points they followed these hard constraints (dotted
lines) for a range of distances between the large rectangle and vertical bar (top). The two figures
show a single subject's response when the vertical bar was positioned to the left or right of a large
rectangle. The ordinal cue combination model can describe this behavior as well as deviations from
the constraints for large distances (bottom).
3 Experimental Methods
To test this model in a more general setting, where depth is driven by both paired and unpaired
image points, we constructed a simple depth-matching experiment. Subjects (N=7) were seated
60 cm in front of a CRT wearing shutter glasses (StereoGraphics CrystalEyes, 100 Hz refresh rate)
and asked to maintain their head position on a chin-rest. The experiment consisted of two tasks: a
two-alternative forced choice task (2AFC) to measure subjects' depth acuity and a stereo-matching
task to measure their perception of depth when a surrounding object was present. The target (central)
objects were drawn on-screen as circles (13.0 degrees diameter) composed of random dots on a
background pedestal of random dots (Fig 3).
In the 2AFC task, subjects were presented with two target objects with slightly different horizontal
disparities and asked to indicate using the keyboard which object was closer. The reference object
had a horizontal disparity of 0.57 degrees and was positioned randomly each trial on either the left
or right side. The pedestal had a horizontal disparity of -0.28 degrees. Subjects performed 100 trials
in which the disparity of the test object was chosen using optimal experimental design methods [13].
After the first 10 trials the next sample was chosen to maximize the conditional mutual information
between the responses and the parameter for the just-noticeable depth difference (JND) given the
sample position. This allowed us to efficiently estimate the JND for each subject.
In the stereo-matching task subjects were presented with two target objects and a larger surrounding
circle (25.2 degrees diameter) paired with one of the targets. Subjects were asked to match the depth
of the unpaired target with that of the paired target using the keyboard (100 trials). The depth of
the paired target was held fixed across trials at 0.57 degrees horizontal disparity while the position
of the surrounding circle was varied between 0.14 and 1.00 degrees horizontal disparity. The depth of
the unpaired target was selected randomly at the beginning of each trial to minimize any effects
of the starting position. All objects were presented in gray-scale and the target was presented off-center from the surrounding object to avoid confounding shape cues. The side on which the paired
target and surrounding object appeared (left or right side of the screen) was also randomly chosen
from trial to trial, and all objects were within the fusional limits for this task. When asked, subjects
reported that diplopia occurred only when they drove the unpaired target too far in one direction or
the other.
Each of these tasks (the 2AFC task and the stereo-matching task) was performed for two uncertainty conditions: a low and high uncertainty condition. We varied the uncertainty by changing the
distribution of disparities for the individual dots which composed the target objects and the larger
occluding/occluded circle. In the low uncertainty condition the disparity for each dot was drawn
from a Gaussian distribution with a variance of 2.2 arc minutes. In the high uncertainty condition
Figure 3: Experimental design. Each trial consists of a matching task in which subjects control the
depth of an unpaired circle (A, left). Subjects attempt to match the depth of this unpaired circle to
the depth of a target circle which is surrounded by a larger object (A, right). Divergent fusers can
fuse (B) to see the full stimulus. The contrast has been reversed for visibility. To measure depth
acuity, subjects also complete a two-alternative forced choice task (2AFC) using the same stimulus
without the surrounding object.
the disparities were drawn with a variance of 6.5 arc minutes. All subjects had normal or corrected
to normal vision and normal stereo vision (as assessed by a depth acuity < 5 arcmin in the low
uncertainty 2AFC task). All experimental protocols were approved by the IRB and in accordance with
Northwestern University's policy statement on the use of humans in experiments. Informed consent
was obtained from all participants.
4 Results
All subjects showed increased just-noticeable depth differences between the low and high uncertainty conditions. The JNDs were significantly different across conditions (one-sided paired t-test,
p = 0.0072), suggesting that our manipulation of uncertainty was effective (Fig 4A). In the matching
task, subjects were, on average, biased by the presence of the surrounding object. As the disparity
of the surrounding object was increased and disparity cues suggested that s was closer than c, this
bias increased. Consistent with our model, this bias was higher in the high uncertainty condition
(Fig 4B and C). However, the difference between uncertainty conditions was only significant for
two surround depths (0.6 and 1.0 degrees, one-sided paired t-test p = 0.004, p = 0.0281) and not significant as a main effect (two-way ANOVA p = 0.3419). To model the bias, we used the JNDs estimated
from the 2AFC task and fit two free parameters, $\sigma_s$ and $p(\neg O)$, by minimizing the squared error
between model predictions and subjects' responses. The model provided an accurate fit for both
individual subjects and the across-subject data (Fig 4B and C). For the across-subject data, we found
$\sigma_s = 0.085$ arcmin for the low uncertainty condition and $\sigma_s = 0.050$ arcmin for the high uncertainty
Figure 4: Experimental results. (A) Just noticeable depth differences for the two uncertainty conditions averaged across subjects. (B) and (C) show the difference between the perceived depth of the
unpaired target and the paired target (the bias) as a function of the depth of the surrounding circle.
Results for a typical subject (B) and the across subject average (C). Dots and error-bars denote subject responses, solid lines denote model fits, and dotted lines denote the depth of the paired target,
which was fixed. Error bars denote SEM (N=7).
condition. In these cases, $p(\neg O)$ was not significantly different from zero, and the simplified model
in which $p(\neg O) = 0$ was preferred (cross-validated likelihood ratio test). Over the range of depths
we tested, this relevance term does not seem to play a role. However, we predict that for larger
discrepancies this relevance term would come into play as subjects begin to ignore the surrounding
object (as in Fig 2).
Note that if the presence of a surrounding object had no effect subjects would be unbiased across
depths of the occluded object. Two subjects (out of 7) did not show bias; however, both subjects
had normal stereo vision and this behavior did not appear to be correlated with low or high depth
acuity. Since subjects were allowed to free-view the stimulus, it is possible that some subjects were
able to ignore the surrounding object completely. As with the invalid stimuli in [7], a model where
$p(\neg O) = 1$ accurately fit data from these subjects. The rest of the subjects demonstrated bias (see
Fig 4B for an example), but more data may be needed to conclusively show differences between the
two uncertainty conditions and causal inference effects.
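A sketch of the fitting procedure described above: $\sigma_c$ is fixed from the 2AFC JND, $p(S_1)$ is set to 0 as in the preferred simplified model, and $\sigma_s$ is found by least squares against the observed biases. The grid-based posterior mean reuses Equation 6, and all names are illustrative.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import minimize_scalar

def predicted_bias(ys, yc, sigma_c, sigma_s, x):
    """E[x_c] - y_c under Equation 6 with p(S1) = 0, on an integration grid x."""
    w = (erf((x - ys) / (np.sqrt(2.0) * sigma_s)) / 2.0 + 0.5) \
        * np.exp(-0.5 * ((x - yc) / sigma_c) ** 2)
    return (x * w).sum() / w.sum() - yc

def fit_sigma_s(surround_depths, observed_bias, yc, sigma_c):
    """Least-squares fit of sigma_s to the mean biases at each surround depth."""
    x = np.linspace(yc - 8.0 * sigma_c, yc + 8.0 * sigma_c, 4001)
    sse = lambda s: sum((predicted_bias(ys, yc, sigma_c, s, x) - b) ** 2
                        for ys, b in zip(surround_depths, observed_bias))
    return minimize_scalar(sse, bounds=(1e-4, 10.0 * sigma_c), method="bounded").x
```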
5 Discussion
The results presented above illustrate the importance of structural inference in depth perception.
We have shown that potential occlusion can bias perceived depth, and a probabilistic model of the
constraints accurately accounts for subjects' perception during occlusion tasks with unpaired image
points [7] as well as a novel task designed to probe the effects of structural inference.
[Figure 5 appears here; its panel (D) table is reproduced below.]
Model                   | Relation               | References
Cue Combination         | x1 = x2                | e.g., Alais and Burr (2004), Ernst and Banks (2002)
Causal Inference        | probabilistic x1 = x2  | e.g., Knill (2007), Körding et al. (2007)
Ordinal Cue Combination | probabilistic x1 > x2  | model presented here
Figure 5: Models of cue combination. (A) Given the observations (y1 and y2 ) from two sources, how
should we estimate the hidden sources x1 and x2 ? (B) Classical cue combination models assume
x1 = x2 . This results in a linear weighting of the cues. Non-linear cue combination can be explained
by causal inference models where x1 and x2 are probabilistically equal. (C) In the model presented
here, ordinal information introduces an asymmetry into cue combination. x1 and x2 are related here
by a probabilistic inequality. (D) A summary of the relation between x1 and x2 for each model class.
A number of studies have proposed probabilistic accounts of depth perception [1, 4, 12, 14], and
a variety of cues, such as disparity, shading, and texture, can all be combined to estimate depth
[4, 12]. However, accounting for structure in the visual scene and use of occlusion constraints is
typically qualitative or limited to hard constraints where certain depth arrangements are strictly ruled
out [6, 14]. The model presented here accounts for a range of depth perception effects including
perception of both paired and unpaired image points. Importantly, this model of perception explains
the effects of ordinal cues in a cohesive structural inference framework.
More generally, ordinal information introduces asymmetry into cue combination. Classically, cue
combination models assume a generative model in which two observations arise from the same hidden source. That is, the hidden source for observation 1 is equal to the hidden source for observation
2 (Fig 5A). More recently, causal inference or cue conflict models have been developed that allow
for the possibility of probabilistic equality [9, 11, 12]. That is, there is some probability that the
two sources are equal and some probability that they are unequal. This addition explains a number
of nonlinear perceptual effects [9, 10] (Fig 5B). The model presented here extends these previous
models by introducing ordinal information and allowing the relationship between the two sources
to be an inequality, where the value from one source is greater than or less than the other. As with
causal inference models, relevance terms allow the model to capture probabilistic inequality, and
this type of mixture model allows descriptions of asymmetric and nonlinear behavior (Fig 5C). The
ordinal cue combination model thus increases the class of behaviors that can be modeled by cue
combination and causal inference and should have applications for other modalities where ordinal
and structural information is important.
References
[1] M. O. Ernst and M. S. Banks. Humans integrate visual and haptic information in a statistically optimal
fashion. Nature, 415(6870):429–33, 2002.
[2] D. Kersten and A. Yuille. Bayesian models of object perception. Current Opinion in Neurobiology,
13(2):150–158, 2003.
[3] D. C. Knill and W. Richards. Perception as Bayesian Inference. Cambridge University Press, 1996.
[4] M. S. Landy, L. T. Maloney, E. B. Johnston, and M. Young. Measurement and modeling of depth cue
combination: In defense of weak fusion. Vision Research, 35(3):389–412, 1995.
[5] J. Burge, M. A. Peterson, and S. E. Palmer. Ordinal configural cues combine with metric disparity in
depth perception. Journal of Vision, 5(6):5, 2005.
[6] D. Geiger, B. Ladendorf, and A. Yuille. Occlusions and binocular stereo. International Journal of Computer Vision, 14(3):211–226, 1995.
[7] K. Nakayama and S. Shimojo. Da Vinci stereopsis: Depth and subjective occluding contours from unpaired
image points. Vision Research, 30(11):1811, 1990.
[8] J. J. Tsai and J. D. Victor. Neither occlusion constraint nor binocular disparity accounts for the perceived
depth in the sieve effect. Vision Research, 40(17):2265–2275, 2000.
[9] K. P. Körding, U. Beierholm, W. J. Ma, S. Quartz, J. B. Tenenbaum, and L. Shams. Causal inference in
multisensory perception. PLoS ONE, 2(9), 2007.
[10] K. Wei and K. Körding. Relevance of error: what drives motor adaptation? Journal of Neurophysiology,
101(2):655, 2009.
[11] M. O. Ernst and H. H. Bülthoff. Merging the senses into a robust percept. Trends in Cognitive Sciences,
8(4):162–169, 2004.
[12] D. C. Knill. Robust cue integration: A Bayesian model and evidence from cue-conflict studies with
stereoscopic and figure cues to slant. Journal of Vision, 7(7):5, 2007.
[13] L. Paninski. Asymptotic theory of information-theoretic experimental design. Neural Computation,
17(7):1480–1507, 2005.
[14] K. Nakayama and S. Shimojo. Experiencing and perceiving visual surfaces. Science, 257(5075):1357–1363, Sep 1992.
| 3663 |@word neurophysiology:1 trial:9 approved:1 accounting:1 irb:1 solid:1 shading:2 disparity:19 ording:3 past:1 subjective:1 current:1 nt:1 refresh:1 chicago:1 shape:3 analytic:2 motor:1 visibility:1 designed:1 cue:48 selected:1 generative:1 nervous:2 beginning:1 location:1 ladendorf:1 along:1 constructed:1 direct:2 diplopia:1 qualitative:1 consists:1 combine:6 burr:1 manner:1 expected:2 indeed:1 behavior:6 nor:1 provided:2 estimating:1 notation:1 bounded:1 moreover:1 begin:1 what:1 cm:1 developed:1 informed:1 configural:1 control:1 appear:2 accordance:1 limit:1 despite:1 limited:1 palmer:1 range:4 statistically:1 averaged:1 ulthoff:1 significantly:2 matching:7 cannot:1 put:1 context:1 influence:1 kersten:1 demonstrated:2 center:1 starting:1 cluttered:1 perceive:1 rule:1 importantly:1 target:20 play:3 drove:1 experiencing:1 exact:1 beierholm:1 us:1 trend:1 particularly:1 asymmetric:1 richards:1 predicts:2 observed:2 role:2 bottom:2 capture:1 ordering:1 plo:1 convexity:1 constrains:1 asked:7 occluded:5 serve:1 yuille:2 completely:1 sep:1 joint:2 indirect:1 surrounding:20 forced:2 describe:2 effective:1 larger:5 ability:1 erf:2 noisy:4 adaptation:1 combining:2 consent:1 ernst:3 description:1 arcmin:7 asymmetry:2 object:40 illustrate:3 noticeable:4 eq:1 indicate:2 judge:1 come:1 direction:1 human:7 crt:1 opinion:1 explains:2 strictly:1 normal:5 predict:1 vary:1 perceived:6 estimation:4 weighted:1 gaussian:6 avoid:1 poi:1 probabilistically:1 validated:1 acuity:4 likelihood:3 indicates:1 contrast:1 sense:3 glass:1 inference:18 typically:1 hidden:8 relation:2 alais:1 constrained:1 integration:1 mutual:1 marginal:4 equal:3 afc:6 discrepancy:1 minimized:1 stimulus:14 simplify:1 few:1 randomly:3 composed:2 individual:2 ima:1 occlusion:24 maintain:1 attempt:1 interest:1 possibility:3 introduces:2 truly:1 bracket:1 mixture:1 sens:1 held:1 accurate:1 closer:2 necessary:1 iv:1 circle:9 ruled:1 causal:10 uncertain:1 instance:3 increased:3 soft:2 modeling:2 introducing:1 deviation:1 front:1 too:1 characterize:1 reported:1 combined:2 international:1 probabilistic:12 squared:2 central:8 classically:1 cognitive:1 leading:3 account:5 stevenson:2 potential:3 pedestal:2 suggesting:1 piece:1 performed:2 view:1 observer:6 bayes:1 participant:1 minimize:1 il:1 square:1 variance:3 efficiently:1 percept:1 weak:1 bayesian:3 accurately:4 drive:1 maloney:1 ty:2 associated:1 formalize:1 positioned:2 actually:1 higher:1 response:5 improved:1 wei:1 just:4 binocular:3 horizontal:10 nonlinear:2 gray:1 effect:13 contain:1 true:1 consisted:1 unbiased:1 y2:2 equality:1 assigned:2 sieve:1 cohesive:1 konrad:1 during:4 noted:1 pdf:1 chin:1 impression:2 complete:2 theoretic:1 gh:1 image:10 novel:1 recently:2 physical:1 interpretation:1 occurred:1 significant:2 measurement:1 surround:1 cambridge:1 slant:1 nyc:2 dxs:6 had:5 dot:5 surface:1 posterior:5 recent:1 confounding:1 showed:1 moderate:1 irrelevant:1 driven:1 manipulation:1 keyboard:2 certain:1 inequality:3 victor:1 seen:1 greater:1 maximize:1 full:1 sham:1 stem:1 match:5 cross:1 naming:1 y:34 paired:10 impact:1 prediction:1 vision:10 metric:2 physically:1 receive:2 background:2 addition:1 johnston:1 source:9 modality:1 biased:1 rest:2 haptic:1 subject:47 tend:1 hz:1 seem:1 structural:11 near:2 presence:3 ideal:3 split:1 variety:1 affect:1 fit:8 independence:1 gave:1 reduce:2 shift:1 whether:1 expression:2 shutter:1 defense:1 utility:1 tain:1 stereo:10 enumerate:1 useful:1 generally:2 tenenbaum:1 unpaired:15 diameter:2 generate:1 dotted:3 stereoscopic:1 estimated:1 
write:1 drawn:3 changing:2 neither:1 anova:1 ce:2 rectangle:9 fuse:1 concreteness:1 uncertainty:19 place:1 extends:1 geiger:1 hi:1 followed:1 occur:2 constraint:18 infinity:1 scene:7 flat:2 x2:12 min:1 department:1 combination:19 describes:2 smaller:1 slightly:1 across:8 rehabilitation:1 s1:24 explained:2 intuitively:1 sided:2 previously:1 ordinal:18 ge:1 serf:1 apply:1 probe:1 alternative:2 top:2 landy:1 xc:35 medicine:1 giving:1 classical:2 arrangement:1 strategy:1 usual:1 traditional:2 distance:7 reversed:1 argue:1 assuming:1 modeled:3 relationship:1 ratio:1 minimizing:1 statement:1 rise:1 design:3 policy:1 allowing:1 vertical:10 observation:14 arc:3 truncated:1 situation:2 neurobiology:1 head:1 y1:4 varied:2 inferred:1 offcenter:1 conflict:2 unequal:1 able:2 suggested:2 bar:15 perception:17 yc:37 appeared:1 including:1 event:1 rely:1 eye:5 vinci:1 isn:1 prior:2 understanding:2 marginalizing:1 relative:1 asymptotic:1 northwestern:3 proportional:2 integrate:1 degree:9 consistent:2 bank:2 systematically:1 seated:1 surrounded:1 lo:1 compatible:1 summary:1 free:3 bias:10 side:3 allow:2 peterson:1 face:1 depth:77 world:3 valid:4 contour:1 forward:1 simplified:1 far:1 kording:1 ignore:2 preferred:1 conclusively:1 nxs:2 assumed:1 shimojo:3 stereopsis:1 un:3 decade:1 reality:1 nature:1 robust:2 nakayama:3 sem:1 domain:1 protocol:1 da:1 did:2 main:1 linearly:2 s2:13 arise:1 knill:3 allowed:2 x1:9 fig:15 screen:2 fashion:3 ny:2 precision:3 position:8 burge:1 perceptual:2 weighting:1 young:1 ian:1 down:1 minute:2 familiarity:1 specific:1 quartz:1 x:16 divergent:1 normalizing:1 fusion:1 evidence:1 merging:1 importance:2 texture:2 simply:3 paninski:1 visual:8 corresponds:4 cdf:1 ma:1 conditional:2 goal:1 invalid:5 absence:1 hard:4 change:1 determined:1 typical:1 corrected:1 perceiving:1 experimental:6 multisensory:1 occluding:5 assessed:1 relevance:7 tsai:1 wearing:1 tested:1 correlated:1 |
2,939 | 3,664 | Efficient Recovery of Jointly Sparse Vectors
Liang Sun, Jun Liu, Jianhui Chen, Jieping Ye
School of Computing, Informatics, and Decision Systems Engineering
Arizona State University
Tempe, AZ 85287
{sun.liang,j.liu,jianhui.chen,jieping.ye}@asu.edu
Abstract
We consider the reconstruction of sparse signals in the multiple measurement vector (MMV) model, in which the signal, represented as a matrix, consists of a set
of jointly sparse vectors. MMV is an extension of the single measurement vector
(SMV) model employed in standard compressive sensing (CS). Recent theoretical studies focus on the convex relaxation of the MMV problem based on the
(2, 1)-norm minimization, which is an extension of the well-known $\ell_1$-norm minimization employed in SMV. However, the resulting convex optimization problem
in MMV is significantly more difficult to solve than the one in SMV. Existing algorithms reformulate it as a second-order cone programming (SOCP) or
semidefinite programming (SDP) problem, which is computationally expensive
to solve for problems of moderate size. In this paper, we propose a new (dual)
reformulation of the convex optimization problem in MMV and develop an efficient algorithm based on the prox-method. Interestingly, our theoretical analysis
reveals the close connection between the proposed reformulation and multiple kernel learning. Our simulation studies demonstrate the scalability of the proposed
algorithm.
1 Introduction
Compressive sensing (CS), also known as compressive sampling, has recently received increasing
attention in many areas of science and engineering [3]. In CS, an unknown sparse signal is reconstructed from a single measurement vector. Recent theoretical studies show that one can recover
certain sparse signals from far fewer samples or measurements than traditional methods [4, 8]. In
this paper, we consider the problem of reconstructing sparse signals in the multiple measurement
vector (MMV) model, in which the signal, represented as a matrix, consists of a set of jointly sparse
vectors. MMV is an extension of the single measurement vector (SMV) model employed in standard
compressive sensing.
The MMV model was motivated by the need to solve the neuromagnetic inverse problem that arises
in Magnetoencephalography (MEG), which is a modality for imaging the brain [7]. It arises from
a variety of applications, such as DNA microarrays [11], equalization of sparse communication
channels [6], echo cancellation [9], magenetoencephalography [12], computing sparse solutions to
linear inverse problems [7], and source localization in sensor networks [17]. Unlike SMV, the signal
in the MMV model is represented as a set of jointly sparse vectors sharing their common nonzeros
occurring in a set of locations [5, 7]. It has been shown that the additional block-sparse structure can
lead to improved performance in signal recovery [5, 10, 16, 21].
Several recovery algorithms have been proposed for the MMV model in the past [5, 7, 18, 24, 25].
Since the sparse representation problem is a combinatorial optimization problem and is in general
NP-hard [5], the algorithms in [18, 25] employ the greedy strategy to recover the signal using an
iterative scheme. One alternative is to relax it into a convex optimization problem, from which the
1
global optimal solution can be obtained. The most widely studied approach is the one based on the
(2, 1)-norm minimization [5, 7, 10]. A similar relaxation technique (via the $\ell_1$-norm minimization)
is employed in the SMV model. Recent studies have shown that most of the theoretical results on the
convex relaxation of the SMV model can be extended to the MMV model [5], although further theoretical investigation is needed [26]. Unlike the SMV model, where the $\ell_1$-norm minimization can
be solved efficiently, the resulting convex optimization problem in MMV is much more difficult to
solve. Existing algorithms formulate it as a second-order cone programming (SOCP) or semidefinite
programming (SDP) [16] problem, which can be solved using standard software packages such as
SeDuMi [23]. However, for problems of moderate size, solving either SOCP or SDP is computationally expensive, which limits their use in practice.
In this paper, we derive a dual reformulation of the (2, 1)-norm minimization problem in MMV.
More specifically, we show that the (2, 1)-norm minimization problem can be reformulated as a
min-max problem, which can be solved efficiently via the prox-method with a nearly dimension-independent convergence rate [19]. Compared with existing algorithms, our algorithm can scale to
larger problems while achieving high accuracy. Interestingly, our theoretical analysis reveals the
close relationship between the resulting min-max problem and multiple kernel learning [14]. We
have performed simulation studies and our results demonstrate the scalability of the proposed algorithm in comparison with existing algorithms.
Notations: All matrices are boldface uppercase. Vectors are boldface lowercase. Sets and spaces are denoted with calligraphic letters. The $\ell_p$-norm of the vector $v = (v_1, \dots, v_d)^T \in \mathbb{R}^d$ is defined as $\|v\|_p := \left( \sum_{i=1}^d |v_i|^p \right)^{1/p}$. The inner product on $\mathbb{R}^{m \times d}$ is defined as $\langle X, Y \rangle = \mathrm{tr}(X^T Y)$. For a matrix $A \in \mathbb{R}^{m \times d}$, we denote by $a^i$ and $a_i$ the $i$th row and the $i$th column of $A$, respectively. The $(r, p)$-norm of $A$ is defined as:
$$\|A\|_{r,p} := \left( \sum_{i=1}^m \|a^i\|_r^p \right)^{1/p}. \qquad (1)$$
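As a concrete illustration of Eq. (1), the (r, p)-norm takes only a few lines to compute. The following NumPy sketch is our own addition (not from the paper), and the example matrix is arbitrary:

```python
import numpy as np

def mixed_norm(A, r=2, p=1):
    """(r, p)-norm of Eq. (1): the l_p norm of the vector of l_r row norms."""
    row_norms = np.linalg.norm(A, ord=r, axis=1)  # ||a^i||_r for each row i
    return np.linalg.norm(row_norms, ord=p)

A = np.array([[3.0, 4.0], [0.0, 0.0], [1.0, 0.0]])
print(mixed_norm(A, 2, 1))       # 5 + 0 + 1 = 6.0, the (2,1)-norm
print(mixed_norm(A, 2, np.inf))  # max row norm = 5.0, the (2,inf)-norm
```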
2 The Multiple Measurement Vector Model
In the SMV model, one aims to recover the sparse signal w from a measurement vector b = Aw
for a given matrix A [3]. The SMV model can be extended to the multiple measurement vector
(MMV) model, in which the signal is represented as a set of jointly sparse vectors sharing a common
set of nonzeros [5, 7]. The MMV model aims to recover the sparse representations for SMVs
simultaneously. It has been shown that the MMV model provably improves the standard CS recovery
by exploiting the block-sparse structure [10, 21].
Specifically, in the MMV model we consider the reconstruction of the signal represented by a matrix
$W \in \mathbb{R}^{d \times n}$, which is given by a dictionary (or measurement matrix) $A \in \mathbb{R}^{m \times d}$ and a multiple measurement vector matrix $B \in \mathbb{R}^{m \times n}$ such that
$$B = AW. \qquad (2)$$
Each column of $A$ is associated with an atom, and a set of atoms is called a dictionary. A sparse representation means that the matrix $W$ has a small number of rows containing nonzero entries. Usually, we have $m \ll d$ and $d > n$.
Similar to SMV, we can use $\|W\|_{p,0}$ to measure the number of rows in $W$ that contain nonzero entries. Thus, the problem of finding the sparsest representation of the signal $W$ in MMV is equivalent to solving the following problem, a.k.a. the sparse representation problem:
$$(P0):\quad \min_W \ \|W\|_{p,0}, \quad \text{s.t.} \quad AW = B. \qquad (3)$$
Some typical choices of $p$ include $p = \infty$ and $p = 2$ [25]. However, solving (P0) requires enumerating all subsets of the set $\{1, 2, \dots, d\}$, which is essentially a combinatorial optimization problem and is in general NP-hard [5]. Similar to the use of the $\ell_1$-norm minimization in the SMV model, one natural alternative is to use $\|W\|_{p,1}$ instead of $\|W\|_{p,0}$, resulting in the following convex optimization problem (P1):
$$(P1):\quad \min_W \ \|W\|_{p,1}, \quad \text{s.t.} \quad AW = B. \qquad (4)$$
The relationship between (P0) and (P1) for the MMV model has been studied in [5].
For $p = 2$, the optimal $W$ is given by solving the following convex optimization problem:
$$\min_W \ \frac{1}{2}\|W\|_{2,1}^2 \quad \text{s.t.} \quad AW = B. \qquad (5)$$
Existing algorithms formulate Eq. (5) as a second-order cone programming (SOCP) problem or a semidefinite programming (SDP) problem [16]. Recall that the optimization problem in Eq. (5) is equivalent (by removing the square in the objective) to the following problem:
$$\min_W \ \frac{1}{2}\|W\|_{2,1} \quad \text{s.t.} \quad AW = B.$$
By introducing auxiliary variables $t_i$ ($i = 1, \dots, d$), this problem can be reformulated in the standard second-order cone programming (SOCP) formulation:
$$\min_{W, t_1, \dots, t_d} \ \frac{1}{2}\sum_{i=1}^d t_i \quad \text{s.t.} \quad \|W^i\|_2 \le t_i,\ t_i \ge 0,\ i = 1, \dots, d, \quad AW = B. \qquad (6)$$
Based on this SOCP formulation, it can also be transformed into the standard semidefinite programming (SDP) formulation:
$$\min_{W, t_1, \dots, t_d} \ \frac{1}{2}\sum_{i=1}^d t_i \quad \text{s.t.} \quad \begin{bmatrix} t_i I & W^{i\,T} \\ W^i & t_i \end{bmatrix} \succeq 0,\ t_i \ge 0,\ i = 1, \dots, d, \quad AW = B. \qquad (7)$$
The interior point method [20] and the bundle method [13] can be applied to solve SOCP and SDP.
However, they do not scale to problems of moderate size, which limits their use in practice.
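For readers who want a baseline to compare against, Eq. (5) can also be handed to a generic convex solver. The sketch below uses CVXPY (a tool not used in the paper, which relied on SeDuMi); it minimizes the unsquared (2,1)-norm, which shares its minimizer with Eq. (5) as noted above, and the problem sizes are placeholders:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, d, n, k = 20, 40, 5, 3
A = rng.standard_normal((m, d))
W_true = np.zeros((d, n))
W_true[:k] = rng.standard_normal((k, n))   # k jointly sparse rows
B = A @ W_true

W = cp.Variable((d, n))
row_norms = cp.norm(W, 2, axis=1)          # ||W^i||_2 for each row of W
prob = cp.Problem(cp.Minimize(cp.sum(row_norms)), [A @ W == B])
prob.solve()
print(prob.status, np.linalg.norm(W.value - W_true))
```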
3 The Proposed Dual Formulation
In this section we present a dual reformulation of the optimization problem in Eq. (5). First, some
preliminary results are summarized in Lemmas 1 and 2:
Lemma 1. Let $A$ and $X$ be $m$-by-$d$ matrices. Then the following holds:
$$\langle A, X \rangle \le \frac{1}{2}\left( \|X\|_{2,1}^2 + \|A\|_{2,\infty}^2 \right). \qquad (8)$$
When the equality holds, we have $\|X\|_{2,1} = \|A\|_{2,\infty}$.
Proof. It follows from the definition of the $(r, p)$-norm in Eq. (1) that $\|X\|_{2,1} = \sum_{i=1}^m \|x^i\|_2$ and $\|A\|_{2,\infty} = \max_{1 \le i \le m} \|a^i\|_2$. Without loss of generality, we assume that $\|a^k\|_2 = \max_{1 \le i \le m} \|a^i\|_2$ for some $1 \le k \le m$. Thus $\|A\|_{2,\infty} = \|a^k\|_2$, and we have
$$\langle A, X \rangle = \sum_{i=1}^m a^{i\,T} x^i \le \sum_{i=1}^m \|a^i\|_2 \|x^i\|_2 \le \|a^k\|_2 \sum_{i=1}^m \|x^i\|_2 \le \frac{1}{2}\left( \|a^k\|_2^2 + \Big( \sum_{i=1}^m \|x^i\|_2 \Big)^2 \right) = \frac{1}{2}\left( \|A\|_{2,\infty}^2 + \|X\|_{2,1}^2 \right).$$
Clearly, the last inequality becomes an equality when $\|X\|_{2,1} = \|A\|_{2,\infty}$.
Lemma 2. Let $A$ and $X$ be defined as in Lemma 1. Then the following holds:
$$\max_X \left\{ \langle A, X \rangle - \frac{1}{2}\|X\|_{2,1}^2 \right\} = \frac{1}{2}\|A\|_{2,\infty}^2.$$
Proof. Denote the set $Q = \{k : 1 \le k \le m,\ \|a^k\|_2 = \max_{1 \le i \le m} \|a^i\|_2\}$. Let $\{\lambda_k\}_{k=1}^m$ be such that $\lambda_k = 0$ for $k \notin Q$, $\lambda_k \ge 0$ for $k \in Q$, and $\sum_{k=1}^m \lambda_k = 1$. Clearly, all inequalities in the proof of Lemma 1 become equalities if and only if we construct the matrix $X$ as follows:
$$x^k = \begin{cases} \lambda_k a^k, & \text{if } k \in Q \\ 0, & \text{otherwise.} \end{cases} \qquad (9)$$
Thus, the maximum of $\langle A, X \rangle - \frac{1}{2}\|X\|_{2,1}^2$ is $\frac{1}{2}\|A\|_{2,\infty}^2$, which is achieved when $X$ is constructed as in Eq. (9).
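A quick numerical check of Lemmas 1 and 2 (our own sanity test, not part of the paper): building X as in Eq. (9) with a single lambda_k = 1 attains the stated maximum.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))

row_norms = np.linalg.norm(A, axis=1)
k = int(np.argmax(row_norms))          # a row achieving ||A||_{2,inf}

# Maximizer of <A, X> - 0.5*||X||_{2,1}^2, per Eq. (9) with lambda_k = 1:
X = np.zeros_like(A)
X[k] = A[k]

lhs = np.sum(A * X) - 0.5 * np.sum(np.linalg.norm(X, axis=1)) ** 2
rhs = 0.5 * row_norms[k] ** 2          # 0.5 * ||A||_{2,inf}^2
assert np.isclose(lhs, rhs)
```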
Based on the results established in Lemmas 1 and 2, we can derive the dual formulation of the optimization problem in Eq. (5) as follows. First we construct the Lagrangian $L$:
$$L(W, U) = \frac{1}{2}\|W\|_{2,1}^2 - \langle U, AW - B \rangle = \frac{1}{2}\|W\|_{2,1}^2 - \langle U, AW \rangle + \langle U, B \rangle.$$
The dual problem can be formulated as follows:
$$\max_U \min_W \ \frac{1}{2}\|W\|_{2,1}^2 - \langle U, AW \rangle + \langle U, B \rangle. \qquad (10)$$
It follows from Lemma 2 that
$$\min_W \left\{ \frac{1}{2}\|W\|_{2,1}^2 - \langle U, AW \rangle \right\} = \min_W \left\{ \frac{1}{2}\|W\|_{2,1}^2 - \langle A^T U, W \rangle \right\} = -\frac{1}{2}\|A^T U\|_{2,\infty}^2.$$
Note that from Lemma 2, the equality holds if and only if the optimal $W^*$ can be represented as
$$W^* = \mathrm{diag}(\theta)\, A^T U, \qquad (11)$$
where $\theta = [\theta_1, \dots, \theta_d]^T \in \mathbb{R}^d$, $\theta_i \ge 0$ if $\|(A^T U)^i\|_2 = \|A^T U\|_{2,\infty}$, $\theta_i = 0$ if $\|(A^T U)^i\|_2 < \|A^T U\|_{2,\infty}$, and $\sum_{i=1}^d \theta_i = 1$. Thus, the dual problem can be simplified into the following form:
$$\max_U \ -\frac{1}{2}\|A^T U\|_{2,\infty}^2 + \langle U, B \rangle. \qquad (12)$$
Following the definition of the $(2, \infty)$-norm, we can reformulate the dual problem in Eq. (12) as a min-max problem, as summarized in the following theorem:
Theorem 1. The optimization problem in Eq. (5) can be formulated equivalently as:
$$\min_{\sum_{i=1}^d \theta_i = 1,\ \theta_i \ge 0} \ \max_{u_1, \dots, u_n} \ \sum_{j=1}^n \left\{ u_j^T b_j - \frac{1}{2}\sum_{i=1}^d \theta_i\, u_j^T G_i u_j \right\}, \qquad (13)$$
where the matrix $G_i$ is defined as $G_i = a_i a_i^T$ ($1 \le i \le d$), and $a_i$ is the $i$th column of $A$.
Proof. Note that $\|A^T U\|_{2,\infty}^2$ can be reformulated as follows:
$$\|A^T U\|_{2,\infty}^2 = \max_{1 \le i \le d} \|a_i^T U\|_2^2 = \max_{1 \le i \le d} \{\mathrm{tr}(U^T a_i a_i^T U)\} = \max_{1 \le i \le d} \{\mathrm{tr}(U^T G_i U)\} = \max_{\theta_i \ge 0,\ \sum_{i=1}^d \theta_i = 1} \ \sum_{i=1}^d \theta_i\, \mathrm{tr}(U^T G_i U). \qquad (14)$$
Substituting Eq. (14) into Eq. (12), we obtain the following problem:
$$\max_U \ -\frac{1}{2}\|A^T U\|_{2,\infty}^2 + \langle U, B \rangle \ \Longleftrightarrow \ \max_U \ \min_{\sum_{i=1}^d \theta_i = 1,\ \theta_i \ge 0} \ \langle U, B \rangle - \frac{1}{2}\sum_{i=1}^d \theta_i\, \mathrm{tr}(U^T G_i U). \qquad (15)$$
Since Slater's condition [2] is satisfied, the minimization and maximization in Eq. (15) can be exchanged, resulting in the min-max problem in Eq. (13).
Corollary 1. Let $(\theta^*, U^*)$ be the optimal solution to Eq. (13), where $\theta^* = (\theta_1^*, \dots, \theta_d^*)^T$. If $\theta_i^* > 0$, then $\|(A^T U^*)^i\|_2 = \|A^T U^*\|_{2,\infty}$.
Based on the solution to the dual problem in Eq. (13), we can construct the optimal solution to the primal problem in Eq. (5) as follows. Let $W^*$ be the optimal solution of Eq. (5). It follows from Lemma 2 that we can construct $W^*$ based on $A^T U^*$ as in Eq. (11). Recall that $W^*$ must satisfy the equality constraint $AW^* = B$. The main result is summarized in the following theorem:
Theorem 2. Given $W^* = \mathrm{diag}(\theta)\, A^T U^*$, where $\theta = [\theta_1, \dots, \theta_d] \in \mathbb{R}^d$, $\theta_i \ge 0$, $\theta_i > 0$ only if $\|(A^T U^*)^i\|_2 = \|A^T U^*\|_{2,\infty}$, and $\sum_{i=1}^d \theta_i = 1$. Then $AW^* = B$ if and only if $(\theta, U^*)$ is the optimal solution to the problem in Eq. (13).
Proof. First we assume that $(\theta, U^*)$ is the optimal solution to the problem in Eq. (13). It follows that the partial derivative of the objective function with respect to $U^*$ in Eq. (13) is 0, that is,
$$B - A\,\mathrm{diag}(\theta)\, A^T U^* = 0 \ \Rightarrow \ AW^* = B.$$
Next we prove the reverse direction by assuming $AW^* = B$. Since $W^* = \mathrm{diag}(\theta)\, A^T U^*$, we have
$$0 = B - AW^* = B - A\,\mathrm{diag}(\theta_1, \dots, \theta_d)\, A^T U^*. \qquad (16)$$
Define the function $\phi(\theta_1, \dots, \theta_d, U)$ as
$$\phi(\theta_1, \dots, \theta_d, U) = \langle U, B \rangle - \frac{1}{2}\sum_{i=1}^d \theta_i\, \mathrm{tr}(U^T G_i U) = \sum_{j=1}^n \left\{ u_j^T b_j - \frac{1}{2}\sum_{i=1}^d \theta_i\, u_j^T G_i u_j \right\}.$$
We consider the function $\phi(\theta_1, \dots, \theta_d, U)$ with fixed $\theta_i$ ($1 \le i \le d$). Note that this function is concave with respect to $U$; thus its maximum is achieved when its partial derivative with respect to $U$ is zero. It follows from Eq. (16) that $\frac{\partial \phi}{\partial U}$ is zero when $U = U^*$. Thus, we have
$$\forall U, \quad \phi(\theta_1, \dots, \theta_d, U) \le \phi(\theta_1, \dots, \theta_d, U^*).$$
With a fixed $U = U^*$, $\phi(\theta_1, \dots, \theta_d, U^*)$ is a linear combination of the $\theta_i$ ($1 \le i \le d$):
$$\phi(\theta_1, \dots, \theta_d, U^*) = \langle U^*, B \rangle - \frac{1}{2}\sum_{i=1}^d \theta_i \|(A^T U^*)^i\|_2^2.$$
By the assumption, we have $\|(A^T U^*)^i\|_2 = \|A^T U^*\|_{2,\infty}$ whenever $\theta_i > 0$. Thus, we have
$$\phi(\theta_1, \dots, \theta_d, U^*) \le \phi(\hat\theta_1, \dots, \hat\theta_d, U^*) \quad \forall\, \hat\theta_1, \dots, \hat\theta_d \ \text{satisfying} \ \sum_{i=1}^d \hat\theta_i = 1,\ \hat\theta_i \ge 0.$$
Therefore, for any $U$ and any $\hat\theta_1, \dots, \hat\theta_d$ such that $\sum_{i=1}^d \hat\theta_i = 1$, $\hat\theta_i \ge 0$, we have
$$\phi(\theta_1, \dots, \theta_d, U) \le \phi(\theta_1, \dots, \theta_d, U^*) \le \phi(\hat\theta_1, \dots, \hat\theta_d, U^*), \qquad (17)$$
which implies that $(\theta_1, \dots, \theta_d, U^*)$ is a saddle point of the min-max problem in Eq. (13). Thus, $(\theta, U^*)$ is the optimal solution to the problem in Eq. (13).
Theorem 2 shows that we can reconstruct the solution to the primal problem from the solution to the dual problem in Eq. (13). This paves the way for an efficient implementation based on the min-max formulation in Eq. (13). In this paper, the prox-method [19], which is discussed in detail in the next section, is employed to solve the dual problem in Eq. (13).
An interesting observation is that the resulting min-max problem in Eq. (13) is closely related to the optimization problem in multiple kernel learning (MKL) [14]. The min-max problem in Eq. (13) can be reformulated as
$$\min_{\sum_{i=1}^d \theta_i = 1,\ \theta_i \ge 0} \ \max_{u_1, \dots, u_n} \ \sum_{j=1}^n \left\{ u_j^T b_j - \frac{1}{2} u_j^T G u_j \right\}, \qquad (18)$$
where the positive semidefinite (kernel) matrix $G$ is constrained to be a linear combination of the set of base kernels $\{G_i = a_i a_i^T\}_{i=1}^d$, i.e., $G = \sum_{i=1}^d \theta_i G_i$.
The formulation in Eq. (18) connects the MMV problem to MKL. Many efficient algorithms [14, 22, 27] have been developed in the past for MKL, which can be applied to solve (13). In [27], an extended level set method was proposed to solve MKL, which was shown to outperform the one based on the semi-infinite linear programming formulation [22]. However, the extended level set method involves solving a linear program in each iteration, and its theoretical convergence rate of $O(1/\sqrt{N})$ ($N$ denotes the number of iterations) is slower than that of the proposed algorithm presented in the next section.
4 The Main Algorithm
We propose to employ the prox-method [19] to solve the min-max formulation in Eq. (13), which has a differentiable, convex-concave objective function. The resulting algorithm is called "MMVprox". The prox-method is a first-order method [1, 19] specialized for solving saddle point problems, and it has a nearly dimension-independent convergence rate of $O(1/N)$ ($N$ denotes the number of iterations). We show that each iteration of MMVprox has a low computational cost, and thus it scales to large-size problems.
The key idea is to convert the min-max problem to the associated variational inequality (v.i.) problem, which is then iteratively solved by a series of v.i. problems. Let $z = (\theta, U)$. The problem in Eq. (13) is equivalent to the following associated v.i. problem [19]:
$$\text{Find } z^* = (\theta^*, U^*) \in S: \ \langle F(z^*), z - z^* \rangle \ge 0, \ \forall z \in S, \quad S = X \times Y, \qquad (19)$$
where
$$F(z) = \left( \frac{\partial \phi}{\partial \theta}(\theta, U),\ -\frac{\partial \phi}{\partial U}(\theta, U) \right) \qquad (20)$$
is an operator constituted by the gradients of $\phi(\cdot, \cdot)$, $X = \{\theta \in \mathbb{R}^d : \|\theta\|_1 = 1,\ \theta_i \ge 0\}$, and $Y = \mathbb{R}^{m \times n}$.
In solving the v.i. problem in Eq. (19), one key building block is the following projection problem:
$$P_z(\xi) = \arg\min_{\hat z \in S} \ \frac{1}{2}\|\hat z - z\|_2^2 + \langle \xi, \hat z - z \rangle, \qquad (21)$$
where $z = (\theta, U)$ and $\xi = (\xi_\theta, \xi_U)$. Denote $(\theta^+, U^+) = P_z(\xi)$. It is easy to verify that
$$\theta^+ = \arg\min_{\hat\theta \in X} \ \frac{1}{2}\|\hat\theta - (\theta - \xi_\theta)\|_2^2, \qquad (22)$$
and
$$U^+ = U - \xi_U. \qquad (23)$$
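The projection in Eq. (22) is the Euclidean projection onto the probability simplex. The paper cites a linear-time routine [1]; the sketch below (ours) uses the simpler O(d log d) sort-based variant, which computes the same projection:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {theta : theta >= 0, sum(theta) = 1}."""
    u = np.sort(v)[::-1]                 # sort in decreasing order
    css = np.cumsum(u) - 1.0
    ind = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / ind > 0)[0][-1]
    tau = css[rho] / (rho + 1.0)         # threshold to subtract
    return np.maximum(v - tau, 0.0)
```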
Following [19], we present the pseudocode of the proposed MMVprox algorithm in Algorithm 1. In each iteration, we compute the projection (21) until $w_{t,s}$ is sufficiently close to $w_{t,s-1}$ (controlled by the parameter $\epsilon$). It has been shown in [19] that, when $\gamma \le \frac{1}{\sqrt{2}L}$ ($L$ denotes the Lipschitz constant of the operator $F(\cdot)$), the inner iteration converges within two iterations, i.e., $w_{t,2} = w_{t,1}$ always holds. Moreover, Algorithm 1 has a global dimension-independent convergence rate of $O(1/N)$.
Algorithm 1 The MMVprox Algorithm
Input: $A$, $B$, $\gamma$, $z_0 = (\theta_0, U_0)$, and $\epsilon$
Output: $\theta$, $U$ and $W$.
Step $t$ ($t \ge 1$): Set $w_{t,0} = z_{t-1}$ and find the smallest $s = 1, 2, \dots$ such that
  $w_{t,s} = P_{z_{t-1}}(\gamma F(w_{t,s-1}))$, $\|w_{t,s} - w_{t,s-1}\|_2 \le \epsilon$.
Set $z_t = w_{t,s}$.
Final Step: Set $\theta = \frac{1}{t}\sum_{i=1}^t \theta_i$, $U = \frac{1}{t}\sum_{i=1}^t U_i$, $W = \mathrm{diag}(\theta)\, A^T U$.
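To make the structure of Algorithm 1 concrete, here is a compact NumPy sketch (ours, not the authors' Matlab implementation). It uses a plain extragradient variant of the prox step with a fixed, hand-picked step size gamma instead of the inner loop, reuses `project_simplex` from the sketch after Eq. (22), and derives the gradients from the expression for phi in the proof of Theorem 2:

```python
import numpy as np

def F_operator(theta, U, A, B):
    """The v.i. operator of Eq. (20)."""
    ATU = A.T @ U                              # d x n
    g_theta = -0.5 * np.sum(ATU ** 2, axis=1)  # d(phi)/d(theta_i) = -0.5 ||(A^T U)^i||^2
    g_U = B - A @ (theta[:, None] * ATU)       # d(phi)/dU = B - A diag(theta) A^T U
    return g_theta, -g_U                       # (grad_theta phi, -grad_U phi)

def mmvprox(A, B, gamma=1e-2, n_iter=2000):
    d = A.shape[1]
    m, n = B.shape
    theta, U = np.full(d, 1.0 / d), np.zeros((m, n))
    theta_sum, U_sum = np.zeros(d), np.zeros((m, n))
    for _ in range(n_iter):
        gt, gU = F_operator(theta, U, A, B)          # predictor step
        th_w = project_simplex(theta - gamma * gt)
        U_w = U - gamma * gU
        gt, gU = F_operator(th_w, U_w, A, B)         # corrector step
        theta = project_simplex(theta - gamma * gt)
        U = U - gamma * gU
        theta_sum += theta
        U_sum += U
    theta_avg, U_avg = theta_sum / n_iter, U_sum / n_iter
    W = theta_avg[:, None] * (A.T @ U_avg)           # final step: W = diag(theta) A^T U
    return theta_avg, U_avg, W
```

Calling `mmvprox(A, B)` on the synthetic data from the earlier CVXPY sketch should return a W close to W_true, though a first-order method of this kind needs many iterations to match a second-order solver's accuracy.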
Time Complexity: It costs $O(dmn)$ to evaluate the operator $F(\cdot)$ at a given point. $\theta^+$ in Eq. (22) involves the Euclidean projection onto the simplex [1], which can be solved in linear time, i.e., in $O(d)$; and $U^+$ in Eq. (23) can be computed analytically in $O(mn)$ time. Recall that at each iteration $t$, the inner iteration runs at most twice. Thus, the time complexity of each outer iteration is $O(dmn)$. Our analysis shows that MMVprox scales to large-size problems.
In comparison, second-order methods such as SOCP have a much higher complexity per iteration. According to [15], the SOCP in Eq. (6) costs $O(d^3(n+1)^3)$ per iteration. In MMV, $d$ is typically larger than $m$. In this case, the proposed MMVprox algorithm has a much smaller cost per iteration than SOCP. This explains why MMVprox scales better than SOCP, as shown in our experiments in the next section.
Table 1: The averaged recovery results over 10 experiments (d = 100, m = 50, and n = 80).

Data set    sqrt(||W - Wp||_F^2 / (dn))    sqrt(||A Wp - B||_F^2 / (mn))
1           3.2723e-6                      1.4467e-5
2           3.4576e-6                      1.8234e-5
3           2.6971e-6                      1.4464e-5
4           2.4099e-6                      1.4460e-5
5           2.9611e-6                      1.4463e-5
6           2.5701e-6                      1.4459e-5
7           2.0884e-6                      1.4469e-5
8           2.3454e-6                      1.4475e-5
9           2.6807e-6                      1.4461e-5
10          2.7172e-6                      1.4481e-5
Mean        2.7200e-6                      1.4843e-5
Std         4.1728e-7                      1.1914e-6

5 Experiments
In this section, we conduct simulations to evaluate the proposed MMVprox algorithm in terms of recovery quality and scalability.
Experiment Setup: We generated a set of synthetic data sets (by varying the values of $m$, $n$, and $d$) for our experiments: the entries in $A \in \mathbb{R}^{m \times d}$ were independently generated from the standard normal distribution $N(0, 1)$; $W \in \mathbb{R}^{d \times n}$ (the ground truth of the recovery problems) was generated in two steps: (1) randomly select $k$ rows to have nonzero entries; (2) randomly generate the entries of those $k$ rows from $N(0, 1)$. We denote by $W_p$ the solution obtained from the proposed MMVprox algorithm. Ideally, $W_p$ should be close to $W$. Our experiments were performed on a PC with an Intel Core 2 Duo T9500 2.6GHz CPU and 4GB RAM. We employed the optimization package SeDuMi [23] for solving the SOCP formulation. All code was implemented in Matlab. In all experiments, we terminate MMVprox when the change between consecutive approximate solutions is less than 1e-6.
Recovery Quality: In this experiment, we evaluate the recovery quality of the proposed MMVprox algorithm. We applied MMVprox to data sets of size $d = 100$, $m = 50$, $n = 80$, and report the averaged experimental results over 10 random repetitions. We measured the recovery quality in terms of the root mean squared error $\sqrt{\|W - W_p\|_F^2/(dn)}$. We also report $\sqrt{\|AW_p - B\|_F^2/(mn)}$, which measures the violation of the constraint in Eq. (5). The experimental results are presented in Table 1. We can observe from the table that MMVprox recovers the sparse signal successfully in all cases.
Next, we study how the recovery error changes as the sparsity of $W$ varies. Specifically, we applied MMVprox to data sets of size $d = 100$, $m = 400$, and $n = 10$, with $k$ (the number of nonzero rows of $W$) varying from $0.05d$ to $0.7d$, and used $\sqrt{\|W - W_p\|_F^2/(dn)}$ as the recovery quality measure. The averaged experimental results over 20 random repetitions are presented in Figure 1. We can observe from the figure that MMVprox works well in all cases, and a larger $k$ (less sparse $W$) tends to result in a larger recovery error.
[Figure 1: The increase of the recovery error as the sparsity level decreases. The x-axis is k/d (from 0.05 to 0.7); the y-axis is sqrt(||W - Wp||_F^2/(dn)), on a scale of 10^-6.]
Scalability: In this experiment, we study the scalability of the proposed MMVprox algorithm. We generated a collection of data sets by varying $m$ from 10 to 200 with a step size of 10, setting $n = 2m$ and $d = 4m$ accordingly. We applied SOCP and MMVprox to the data sets and recorded their computation time. The experimental results are presented in Figure 2(a), where the x-axis corresponds to the value of $m$, and the y-axis corresponds to $\log(t)$, where $t$ denotes the computation time (in seconds). We can observe from the figure that the computation time of both algorithms increases with $m$, and that SOCP is faster than MMVprox on small problems ($m \le 40$); when $m > 40$, MMVprox outperforms SOCP; when the value of $m$ is large ($m > 80$), the SOCP formulation cannot be solved by SeDuMi, while MMVprox can still be applied. This experimental result demonstrates the good scalability of the proposed MMVprox algorithm in comparison with the SOCP formulation.
[Figure 2: Scalability comparison of MMVprox and SOCP: (a) the computation time for both algorithms as the problem size varies; and (b) the average computation time of each iteration for both algorithms as the problem size varies. The x-axis denotes the value of m, and the y-axis denotes the computation time in seconds (in log scale).]
To further examine the scalability of both algorithms, we compare the execution time of each iteration for both SOCP and the proposed algorithm. We use the same setting as in the last experiment,
i.e., n = 2m, d = 4m, and m ranges from 10 to 200 with a step size of 10. The time comparison
of SOCP and MMVprox is presented in Figure 2 (b). We observe that MMVprox has a significantly
lower cost than SOCP in each iteration (note that SOCP is not applicable for m > 80). This is
consistent with our complexity analysis in Section 4.
We can observe from Figure 2 that when m is small, the computation time of SOCP and MMVprox
is comparable, although MMVprox is much faster in each iteration. This is because MMVprox is a
first-order method, which has a slower convergence rate than the second-order method SOCP. Thus,
there is a tradeoff between scalability and convergence rate. Our experiments show the advantage of
MMVprox for large-size problems.
6 Conclusions
In this paper, we consider the (2, 1)-norm minimization for the reconstruction of sparse signals in
the multiple measurement vector (MMV) model, in which the signal consists of a set of jointly
sparse vectors. Existing algorithms formulate it as second-order cone programming or semidefinite
programming, which is computationally expensive to solve for problems of moderate size. In this
paper, we propose an equivalent dual formulation for the (2, 1)-norm minimization in the MMV
model, and develop the MMVprox algorithm for solving the dual formulation based on the prox-method. In addition, our theoretical analysis reveals the close connection between the proposed
dual formulation and multiple kernel learning. Our simulation studies demonstrate the effectiveness
of the proposed algorithm in terms of recovery quality and scalability. In the future, we plan to
compare existing solvers for multiple kernel learning [14, 22, 27] with the proposed MMVprox
algorithm. In addition, we plan to examine the efficiency of the prox-method for solving various
MKL formulations.
Acknowledgements
This work was supported by NSF IIS-0612069, IIS-0812551, CCF-0811790, NIH R01-HG002516,
NGA HM1582-08-1-0016, and NSFC 60905035.
References
[1] A. Ben-Tal and A. Nemirovski. Non-Euclidean restricted memory level method for large-scale convex optimization. Mathematical Programming, 102(3):407–456, 2005.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.
[3] E. Candès. Compressive sampling. In International Congress of Mathematics, number 3, pages 1433–1452, Madrid, Spain, 2006.
[4] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
[5] J. Chen and X. Huo. Theoretical results on sparse representations of multiple-measurement vectors. IEEE Transactions on Signal Processing, 54(12):4634–4643, 2006.
[6] S.F. Cotter and B.D. Rao. Sparse channel estimation via matching pursuit with application to equalization. IEEE Transactions on Communications, 50(3):374–377, 2002.
[7] S.F. Cotter, B.D. Rao, K. Engan, and K. Kreutz-Delgado. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Transactions on Signal Processing, 53(7):2477–2488, 2005.
[8] D.L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
[9] D.L. Duttweiler. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Transactions on Speech and Audio Processing, 8(5):508–518, 2000.
[10] Y.C. Eldar and M. Mishali. Robust recovery of signals from a structured union of subspaces. To appear in IEEE Transactions on Information Theory, 2009.
[11] S. Erickson and C. Sabatti. Empirical Bayes estimation of a sparse vector of gene expression changes. Statistical Applications in Genetics and Molecular Biology, 4(1):22, 2008.
[12] I.F. Gorodnitsky, J.S. George, and B.D. Rao. Neuromagnetic source imaging with FOCUSS: a recursive weighted minimum norm algorithm. Electroencephalography and Clinical Neurophysiology, 95(4):231–251, 1995.
[13] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms I: Fundamentals (Grundlehren der Mathematischen Wissenschaften). Springer, Berlin, 1993.
[14] G.R.G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M.I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004.
[15] M. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second-order cone programming. Linear Algebra and its Applications, 284(1-3):193–228, 1998.
[16] M. Stojnic, F. Parvaresh, and B. Hassibi. On the reconstruction of block-sparse signals with an optimal number of measurements. CoRR, 2008.
[17] D. Malioutov, M. Cetin, and A. Willsky. Source localization by enforcing sparsity through a Laplacian. In IEEE Workshop on Statistical Signal Processing, pages 553–556, 2003.
[18] M. Mishali and Y.C. Eldar. Reduce and boost: Recovering arbitrary sets of jointly sparse vectors. IEEE Transactions on Signal Processing, 56(10):4692–4702, 2008.
[19] A. Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229–251, 2005.
[20] Y.E. Nesterov and A.S. Nemirovskii. Interior-Point Polynomial Algorithms in Convex Programming. SIAM Publications, Philadelphia, PA, 1994.
[21] R.G. Baraniuk, V. Cevher, M. Duarte, and C. Hegde. Model-based compressive sensing. Submitted to IEEE Transactions on Information Theory, 2008.
[22] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. Journal of Machine Learning Research, 7:1531–1565, 2006.
[23] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11(12):625–653, 1999.
[24] J.A. Tropp. Algorithms for simultaneous sparse approximation. Part II: Convex relaxation. Signal Processing, 86(3):589–602, 2006.
[25] J.A. Tropp, A.C. Gilbert, and M.J. Strauss. Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit. Signal Processing, 86(3):572–588, 2006.
[26] E. van den Berg and M. P. Friedlander. Joint-sparse recovery from multiple measurements. Technical report, Department of Computer Science, University of British Columbia, 2009.
[27] Z. Xu, R. Jin, I. King, and M.R. Lyu. An extended level method for efficient multiple kernel learning. In Advances in Neural Information Processing Systems, pages 1825–1832, 2008.
2,940 | 3,665 | A Neural Implementation of the Kalman Filter
Leif H. Finkel
Department of Bioengineering
University of Pennsylvania
Philadelphia, PA 19103
Robert C. Wilson
Department of Psychology
Princeton University
Princeton, NJ 08540
[email protected]
Abstract
Recent experimental evidence suggests that the brain is capable of approximating
Bayesian inference in the face of noisy input stimuli. Despite this progress, the
neural underpinnings of this computation are still poorly understood. In this paper we focus on the Bayesian filtering of stochastic time series and introduce a
novel neural network, derived from a line attractor architecture, whose dynamics
map directly onto those of the Kalman filter in the limit of small prediction error.
When the prediction error is large we show that the network responds robustly to
changepoints in a way that is qualitatively compatible with the optimal Bayesian
model. The model suggests ways in which probability distributions are encoded
in the brain and makes a number of testable experimental predictions.
1 Introduction
There is a growing body of experimental evidence consistent with the idea that animals are somehow able to represent, manipulate and, ultimately, make decisions based on probability distributions. While still unproven, this idea has obvious appeal to theorists as a principled way in which to
understand neural computation. A key question is how such Bayesian computations could be performed by neural networks. Several authors have proposed models addressing aspects of this issue
[15, 10, 9, 19, 2, 3, 16, 4, 11, 18, 17, 7, 6, 8], but as yet, there is no conclusive experimental evidence
in favour of any one and the question remains open.
Here we focus on the problem of tracking a randomly moving, one-dimensional stimulus in a noisy
environment. We develop a neural network whose dynamics can be shown to approximate those of a
one-dimensional Kalman filter, the Bayesian model when all the distributions are Gaussian. Where
the approximation breaks down, for large prediction errors, the network performs something akin to
outlier or change detection, and this "failure" suggests ways in which the network can be extended to
deal with more complex, non-Gaussian distributions over multiple dimensions.
Our approach rests on the modification of the line attractor network of Zhang [26]. In particular, we
make three changes to Zhang's network, modifying the activation rule, the weights and the inputs in such a way that the network's dynamics map exactly onto those of the Kalman filter when the
prediction error is small. Crucially, these modifications result in a network that is no longer a line
attractor and thus no longer suffers from many of the limitations of these networks.
2 Review of the one-dimensional Kalman filter
For clarity of exposition and to define notation, we briefly review the equations behind the one-dimensional Kalman filter. In particular, we focus on tracking the true location of an object, $x(t)$, over time, $t$, based on noisy observations of its position, $z(t) = x(t) + n_z(t)$, where $n_z(t)$ is zero-mean Gaussian random noise with standard deviation $\sigma_z(t)$, and a model of its dynamics, $x(t+1) = x(t) + v(t) + n_v(t)$, where $v(t)$ is the velocity signal and $n_v(t)$ is a Gaussian noise term with zero mean and standard deviation $\sigma_v(t)$. Assuming that $\sigma_z(t)$, $\sigma_v(t)$ and $v(t)$ are all known, the Kalman filter's estimate of the position, $\hat{x}(t)$, can be computed via the following three equations:
$$\bar{x}(t+1) = \hat{x}(t) + v(t) \qquad (1)$$
$$\frac{1}{\hat{\sigma}_x(t+1)^2} = \frac{1}{\hat{\sigma}_x(t)^2 + \sigma_v(t)^2} + \frac{1}{\sigma_z(t+1)^2} \qquad (2)$$
$$\hat{x}(t+1) = \bar{x}(t+1) + \frac{\hat{\sigma}_x(t+1)^2}{\sigma_z(t+1)^2}\left[ z(t+1) - \bar{x}(t+1) \right] \qquad (3)$$
In equation 1 the model computes a prediction, $\bar{x}(t+1)$, for the position at time $t+1$; equation 2 updates the model's uncertainty, $\hat{\sigma}_x(t+1)$, in its estimate; and equation 3 updates the model's estimate of the position, $\hat{x}(t+1)$, based on this uncertainty and the prediction error $[z(t+1) - \bar{x}(t+1)]$.
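Equations 1-3 translate line-for-line into code. The sketch below is our own illustration (not from the paper), with an arbitrary diffuse initial variance:

```python
import numpy as np

def kalman_1d(z, v, sigma_z, sigma_v, x0=0.0, var0=1e6):
    """Track x(t) from observations z(t) and velocities v(t) via Eqs. (1)-(3)."""
    x_hat, var_hat = x0, var0                 # hat{x}(t), hat{sigma}_x(t)^2
    xs, variances = [], []
    for t in range(len(z)):                   # z and v assumed same length
        x_bar = x_hat + v[t]                  # Eq. (1): prediction
        var_hat = 1.0 / (1.0 / (var_hat + sigma_v**2)   # Eq. (2): uncertainty
                         + 1.0 / sigma_z**2)
        x_hat = x_bar + (var_hat / sigma_z**2) * (z[t] - x_bar)  # Eq. (3)
        xs.append(x_hat)
        variances.append(var_hat)
    return np.array(xs), np.array(variances)
```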
3 The neural network
The network is a modification of Zhang's line attractor model of head direction cells [26]. We use rate neurons and describe the state of the network at time $t$ with the membrane potential vector, $u(t)$, where each component of $u(t)$ denotes the membrane potential of a single neuron. In discrete time, the update equation is then
$$u(t+1) = wJf[u(t)] + I(t+1) \qquad (4)$$
where $w$ scales the strength of the weights, $J$ is the connectivity matrix, $f[\cdot]$ is the activation rule that maps membrane potential onto firing rate, and $I(t+1)$ is the input at time $t+1$. As in [26], we set $J = J^{sym} + \gamma(t)J^{asym}$, such that the connections are made up of a mixture of symmetric, $J^{sym}$, and asymmetric components, $J^{asym}$ (defined as the spatial derivative of $J^{sym}$), with mixing strength $\gamma(t)$ that can vary over time. Although the results presented here do not depend strongly on the exact forms of $J^{sym}$ and $J^{asym}$, for concreteness we use the following expressions
$$J_{ij}^{sym} = K_w \exp\left( \frac{\cos\frac{2\pi(i-j)}{N} - 1}{\sigma_w^2} \right) - c; \qquad J_{ij}^{asym} = -\frac{2\pi}{N\sigma_w^2}\sin\left( \frac{2\pi(i-j)}{N} \right) J_{ij}^{sym} \qquad (5)$$
where $N$ is the number of neurons in the network and $\sigma_w$, $K_w$ and $c$ are constants that determine the width and the excitatory and inhibitory connection strengths, respectively.
To approximate the Kalman filter, the activation function must implement divisive inhibition [14, 13]:
$$f[u] = \frac{[u]_+}{S + \mu \sum_i [u_i]_+} \qquad (6)$$
where $[u]_+$ denotes rectification of $u$; $\mu$ determines the strength of the divisive feedback and $S$ determines the gain when there is no previous activity in the network.
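Equations 4-6 are similarly direct to implement. This sketch is ours, and the default parameter values are placeholders rather than the values used in the paper's simulations:

```python
import numpy as np

def make_weights(N=100, K_w=1.0, sigma_w=0.2, c=0.05):
    """Build the symmetric and asymmetric weight matrices of Eq. (5)."""
    i = np.arange(N)
    dtheta = 2 * np.pi * (i[:, None] - i[None, :]) / N
    J_sym = K_w * np.exp((np.cos(dtheta) - 1.0) / sigma_w**2) - c
    J_asym = -(2 * np.pi / (N * sigma_w**2)) * np.sin(dtheta) * J_sym
    return J_sym, J_asym

def f(u, S=1.0, mu=1.0):
    """Divisive-inhibition activation of Eq. (6)."""
    r = np.maximum(u, 0.0)
    return r / (S + mu * r.sum())

def step(u, I_ext, J_sym, J_asym, gamma, w=1.0, S=1.0, mu=1.0):
    """One discrete-time update, Eq. (4), with J = J_sym + gamma * J_asym."""
    J = J_sym + gamma * J_asym
    return w * (J @ f(u, S, mu)) + I_ext
```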
When $w = 1$, $\gamma(t) = 0$ and $I(t) = 0$, the network is a line attractor over a wide range of $K_w$, $\sigma_w$, $c$, $S$ and $\mu$, having a continuum of fixed points (as $N \to \infty$). Each fixed point has the same shape, taking the form of a smooth membrane potential profile, $U(x) = J^{sym} f[U(x)]$, centered at location, $x$, in the network.
When $\gamma(t) \ne 0$, the bump of activity can be made to move over time (without losing its shape) [26] and hence, so long as $\gamma(t) = v(t)$, implement the prediction step of the Kalman filter (equation 1). That is, if the bump at time $t$ is centered at $\hat{x}(t)$, i.e. $u(t) = U(\hat{x}(t))$, then at time $t+1$ it is centered at $\bar{x}(t+1) = \hat{x}(t) + \gamma(t)$, i.e. $u(t+1) = U(\hat{x}(t) + \gamma(t)) = U(\bar{x}(t+1))$. Thus, in this configuration, the network can already implement the first step of the Kalman filter through its recurrent connectivity. The next two steps, equations 2 and 3, however, remain inaccessible, as the network has no way of encoding uncertainty and it is unclear how it will deal with external inputs.
4 Relation to Kalman filter - small prediction error case
In this section we outline how the neural network dynamics can be mapped onto those of a Kalman filter. In the interests of space we focus only on the main points of the derivation, leaving the full working to the supplementary material.
Our approach is to analyze the network in terms of $U$, which, for clarity, we define here to be the fixed point membrane potential profile of the network when $w = 1$, $\gamma(t) = 0$, $I(t) = 0$, $S = S_0$ and $\mu = \mu_0$. Thus, the results described here are independent of the exact form of $U$ so long as it is a smooth, non-uniform profile over the network.
We begin by making the assumption that both the input, $I(t)$, and the network membrane potential, $u(t)$, take the form of scaled versions of $U$, with the former encoding the noisy observations, $z(t)$, and the latter encoding the network's estimate of position, $\hat{x}(t)$, i.e.,
$$I(t) = A(t)U(z(t)) \quad \text{and} \quad u(t) = \alpha(t)U(\hat{x}(t)) \qquad (7)$$
Substituting this ansatz for the membrane potential into the left hand side of equation 4 gives
$$LHS = \alpha(t+1)U(\hat{x}(t+1)) \qquad (8)$$
and into the right hand side of equation 4 gives
$$RHS = \underbrace{wJf[\alpha(t)U(\hat{x}(t))]}_{\text{recurrent input}} + \underbrace{A(t+1)U(z(t+1))}_{\text{external input}} \qquad (9)$$
For the ansatz to be self-consistent we require that RHS can be written in the same form as LHS. We now show that this is the case.
As in the previous section, the recurrent input implements the prediction step of the Kalman filter, which, after a little algebra (see supplementary material), allows us to write
$$RHS \approx \underbrace{C\, U(\bar{x}(t+1))}_{\text{prediction}} + \underbrace{A(t+1)U(z(t+1))}_{\text{external input}} \qquad (10)$$
with the variable $C$ defined as
$$C = \frac{1}{\dfrac{S}{w(S_0 + \mu_0 I)}\dfrac{1}{\alpha(t)} + \dfrac{\mu I}{w(S_0 + \mu_0 I)}} \qquad (11)$$
where $I = \sum_i [U_i(\bar{x}(t))]_+$.
If we now suppose that the prediction error $[z(t+1) - \bar{x}(t+1)]$ is small, then we can linearize around the prediction, $\bar{x}(t+1)$, to get (see supplementary material)
$$RHS \approx [C + A(t+1)]\, U\!\left( \bar{x}(t+1) + \frac{A(t+1)}{A(t+1) + C}\left[ z(t+1) - \bar{x}(t+1) \right] \right) \qquad (12)$$
which is of the same form as equation 8, and thus the ansatz holds. More specifically, equating terms in equations 8 and 12, we can write down expressions for $\alpha(t+1)$ and $\hat{x}(t+1)$:
$$\alpha(t+1) \approx C + A(t+1) = \frac{1}{\dfrac{S}{w(S_0 + \mu_0 I)}\dfrac{1}{\alpha(t)} + \dfrac{\mu I}{w(S_0 + \mu_0 I)}} + A(t+1) \qquad (13)$$
$$\hat{x}(t+1) \approx \bar{x}(t+1) + \frac{A(t+1)}{\alpha(t+1)}\left[ z(t+1) - \bar{x}(t+1) \right] \qquad (14)$$
which, if we define $w$ such that
$$\frac{S}{w(S_0 + \mu_0 I)} = 1, \quad \text{i.e.} \quad w = \frac{S}{S_0 + \mu_0 I}, \qquad (15)$$
are identical to equations 2 and 3 so long as
$$(a)\ \ \alpha(t) \propto \frac{1}{\hat{\sigma}_x(t)^2} \qquad (b)\ \ A(t) \propto \frac{1}{\sigma_z(t)^2} \qquad (c)\ \ \frac{\mu I}{S} \propto \sigma_v(t)^2 \qquad (16)$$
Thus the network dynamics, when the prediction error is small, map directly onto the Kalman filter equations. This is our main result.
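Under the calibration of equations 15-16, the scalar recursions 13-14 can be iterated on their own. The sketch below is ours; it exposes the effective Kalman gain A(t)/alpha(t) and treats the ratio mu*I/S as a single constant, per Eq. (16c):

```python
import numpy as np

def network_filter(z, v, A_in, mu_I_over_S, alpha0=0.0):
    """Iterate Eqs. (13)-(14) with w chosen as in Eq. (15)."""
    alpha, x_hat = alpha0, z[0]
    xs = []
    for t in range(len(z)):
        x_bar = x_hat + v[t]                       # prediction, as in Eq. (1)
        # Eq. (13): with Eq. (15), C = 1 / (1/alpha + mu*I/S)
        C = 1.0 / (1.0 / alpha + mu_I_over_S) if alpha > 0 else 0.0
        alpha = C + A_in[t]
        # Eq. (14): effective Kalman gain is A(t)/alpha(t)
        x_hat = x_bar + (A_in[t] / alpha) * (z[t] - x_bar)
        xs.append(x_hat)
    return np.array(xs), alpha
```

With A(t) set to 1/sigma_z(t)^2 and mu*I/S set to sigma_v^2, as in Eq. (16), this recursion reproduces the precision updates of equations 2-3.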
[Figure 1: Comparison of noiseless network dynamics with dynamics of the Kalman filter for small prediction errors. Panels A, B: input and network activity (neuron # vs. time step); panel C: position vs. time step; panel D: uncertainty vs. time step.]
4.1 Implications
Reciprocal code for uncertainty in input and estimate: Equation 16a provides a link between the strength of activity in the network and the overall uncertainty in the estimate of the Kalman filter, $\hat{\sigma}_x(t)$, with uncertainty decreasing as the activity increases. A similar relation is also implied for the uncertainty in the observations, $\sigma_z(t)$, where equation 16b suggests that this should be reflected in the magnitude of the input, $A(t)$. Interestingly, such a scaling, without a corresponding narrowing of tuning curves, is seen in the brain [20, 5, 2].
Code for velocity signal: As with Zhang's line attractor network [26], the mean of the velocity signal, $v(t)$, is encoded into the recurrent connections of the network, with the degree of asymmetry in the weights, $\gamma(t)$, proportional to the speed. Such hard coding of the velocity signal represents a limitation of the model, as we would like to be able to deal with arbitrary, time-varying speeds. However, this kind of change could be implemented by pre-synaptic inhibition [24] or by using a "double-ring" network similar to [25].
Equation 16c implies that the variance of the velocity signal, $\sigma_v(t)$, is encoded in the strength of the divisive feedback, $\mu$ (assuming constant $S$). This is very different from Zhang's model, which has no concept of uncertainty, and also very different from the traditional view of divisive inhibition, which sees it as a mechanism for gain control [14, 13].
The network is no longer a line attractor: This can be seen by considering the fixed point values of the scale factor, $\alpha(t)$, when the input current $I(t) = 0$. Requiring $\alpha(t+1) = \alpha(t) = \alpha^*$ in equation 13 gives these fixed points as
$$\alpha^* = 0 \quad \text{and} \quad \alpha^* = \frac{w(S_0 + \mu_0 I) - S}{\mu I}. \qquad (17)$$
This second solution is exactly zero when $w$ satisfies equation 15; hence the network has only one fixed point, corresponding to the all-zero state, and is not a line attractor. This is a key result, as it removes all of the constraints required for line attractor dynamics, such as infinite precision in the weights and lack of noise in the network, and thus the network is much more biologically plausible.
4.2 An example
[Figure 2: Response of the network when presented with a noisy moving bump input. Panel A: noisy input current (neuron # vs. time step); panel B: network output activity; panel C: position vs. time step; panel D: uncertainty vs. time step.]
In figure 1 we demonstrate the ability of the network to approximate the dynamics of a one-dimensional Kalman filter. The input, shown in figure 1A, is a noiseless bump of current centered
at the position of the observation, $z(t)$. The observation noise has standard deviation $\sigma_z(t) = 5$, the speed $v(t) = 0.5$ for $1 \le t < 50$ and $v(t) = -0.5$ for $50 \le t < 100$, and the standard deviation of the random walk dynamics is $\sigma_v(t) = 0.2$. In accordance with equation 16b, the height of each bump is scaled by $1/\sigma_z(t)^2$.
In figure 1B we plot the output activity of the network over time. Darker shades correspond to higher firing rates. We assume that the network gets the correct velocity signal, i.e. $\gamma(t) = v(t)$, and $\mu$ is set such that equation 16c holds. The other parameters are set to $K_w = 1$, $\sigma_w = 0.2$, $c = 0.05$, $S = S_0 = 1$ and $\mu_0 = 1$, which gives $I = 5.47$. As can be seen from the plot, the amount of activity in the network steadily grows from zero over time to an asymptotic value, corresponding to the network's increasing certainty in its predictions. The position of the bump of activity in the network is also much less jittery than the input bump, reflecting a certain amount of smoothing.
In figure 1C we compare the positions of the input bumps (gray dots) with the position of the network bump (black line) and the output of the equivalent Kalman filter (red line). The network clearly tracks the Kalman filter estimate extremely well. The same is true for the network's estimate of the uncertainty, computed as $1/\sqrt{\alpha(t)}$ and shown as the black line in figure 1D, which tracks the Kalman filter uncertainty (red line) almost exactly.
5 Effect of input noise
We now consider the effect of noise on the ability of the network to implement a Kalman filter. In particular we consider noise in the input signal, which for this simple one-layer network is equivalent to having noise in the update equation. For brevity, we only present the main results along with the results of simulations, leaving more detailed analysis to the supplementary material.
Specifically, we consider input signals where the only source of noise is in the input current, i.e. there is no additional jitter in the position of the bump as there was in the noiseless case; thus we write
$$I(t) = A(t)U(x(t)) + \epsilon(t) \qquad (18)$$
where $\epsilon(t)$ is some noise vector. The main effect of the noise is that it perturbs the effective position of the input bump. This can be modeled by extracting the maximum likelihood estimate of the input position given the noisy input and then using this position as the input to the equivalent Kalman filter. Because of the noise, this extracted position is not, in general, the same as the noiseless input position, and for zero-mean Gaussian noise with covariance $\Sigma$, the variance of the perturbation, $\sigma_z(t)$, is approximately given by
$$\sigma_z(t) \approx \frac{1}{A(t)}\sqrt{\frac{2}{U_0^T \Sigma^{-1} U_0}} \qquad (19)$$
[Figure 3: Effect of noise magnitude on performance of network. Panel A: steady-state $\alpha$ as a function of $\sigma_{noise}$; panel B: RMS error relative to the Kalman filter as a function of $\sigma_{noise}$.]
Now, for the network to approximate a Kalman filter, equation 16b must hold, which means that we require the magnitude of the covariance matrix to scale in proportion to the strength of the input signal, $A(t)$, i.e. $\Sigma \propto A(t)$. Interestingly, this relation is true for Poisson noise, the type of noise that is found all over the brain.
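For isotropic input noise, Sigma = sigma_noise^2 * I, Eq. (19) reduces to a one-liner. The function below is our own illustration of that special case (the interpretation of the profile term U_0 follows the reconstruction of Eq. 19 above):

```python
import numpy as np

def effective_sigma_z(U0, A_t, sigma_noise):
    """Eq. (19) specialized to Sigma = sigma_noise^2 * I."""
    Sigma_inv = np.eye(len(U0)) / sigma_noise**2
    return (1.0 / A_t) * np.sqrt(2.0 / (U0 @ Sigma_inv @ U0))
```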
In figure 2 we demonstrate the ability of the network to approximate a Kalman filter. In panel A we show the input current, which is a moving bump of activity corrupted by independent Gaussian noise of standard deviation $\sigma_{noise} = 0.23$, or about two thirds of the maximum height of the fixed point bump, $U$. This is a high noise setting and it is hard to see the bump location by eye. The network dramatically cleans up this input signal (figure 2B), and the output activity, although still noisy, reflects the position of the underlying stimulus much more faithfully than the input. (Note that the colour scales in A and B are different.)
In panel C we compare the position of the output bump in the network (black line) with that of the equivalent Kalman filter. To do this we first fit the noisy input bump at each time step to obtain input positions $z(t)$, shown as gray dots. Then, using $\sigma_z = 2.23$ computed via equation 19, we can compute the estimates of the equivalent Kalman filter (thick red line), which closely match those of the network (black line). Similarly, there is good agreement between the two estimates of the uncertainty, $\hat{\sigma}_x(t)$, in panel D (black line - network, red line - Kalman filter).
5.1 Performance of the network as a function of noise magnitude
The noise not only affects the position of the input bump but also, in a slightly more subtle manner, causes a gradual decline in the ability of the network to emulate a Kalman filter. The reason for this (outlined in more detail in the supplementary material) is that the output bump scale factor, $\alpha$, decreases as a function of the noise level, $\sigma_{noise}$. This effect is illustrated in figure 3A, where we plot the steady state value of $\alpha$ (for constant input strength, $A(t)$) as a function of $\sigma_{noise}$. The average results of simulations on 100 neurons are shown as the red dots, while the black line represents the results of the theory in the supplementary material.
The reason for the decline in $\alpha$ as $\sigma_{noise}$ goes up is that, because of the rectifying non-linearity in the activation rule, increasing $\sigma_{noise}$ increases the amount of noisy activity in the network. Because of inhibition (both divisive and subtractive) in the network, this "noisy activity" competes with the bump activity and decreases it - thus reducing $\alpha$.
This decrease in $\alpha$ results in a change in the Kalman gain of the network, by equation 14, making it different from that of the equivalent Kalman filter and thus degrading the network's performance. We quantify this difference in figure 3B, where we plot the root mean squared error (in units of neural position) between the network and the equivalent Kalman filter as a function of $\sigma_{noise}$. As before, the results of simulations are shown as red dots and the theory (outlined in the supplementary material) is the black line. To give some sense of the scale on this plot, the horizontal blue line corresponds to the maximum height of the (noise free) input bump. Thus we may conclude that the performance of the network and the theory are robust up to fairly large values of $\sigma_{noise}$.
[Figure 4: Response of the network to changepoints. Panel A: input with a changepoint at t = 50 (neuron # vs. time step); panel B: network activity; panel C: position vs. time step; panel D: the scale factor of each bump vs. time step.]
6 Response to changepoints (and outliers) - large prediction error case
We now consider the dynamics of the network when the prediction error is large. By large we mean
that the prediction error is greater than the width of the bump of activity in the network. Such a big
discrepancy could be caused by an outlier or a changepoint, i.e. a sustained large and abrupt change
in the input position at a random time. In the interests of space we focus only on the latter case and
such an input, with a changepoint at t = 50, is shown in figure 4A.
In figure 4B we show the network?s response to this stimulus. As before, prior to the change, there
is a single bump of activity whose position approximates that of a Kalman filter. However, after the
changepoint, the network maintains two bumps of activity for several time steps. One at the original
position, that shrinks over time and essentially predicts where the input would be if the change had
not occurred, and a second, that grows over time, at the location of the input after the changepoint.
Thus in the period immediately after the changepoint, the network can be thought of as encoding
two separate and competing hypotheses about the position of the stimulus, one corresponding to the
case where no change has occurred, and the other, the case where a change occurred at t = 50.
In figure 4C we compare the position of the bump(s) in the network (black dots whose size reflects
the size of each bump) to the output from the Kalman filter (red line). Before the changepoint, the
two agree well, but after the change, the Kalman filter becomes suboptimal, taking a long time to
move to the new position. The network, however, by maintaining two hypotheses reacts much better.
Finally, in figure 4D we plot the scale factor, $\alpha_i(t)$, of each bump as computed from the simulations
(black dots) and from the approximate analytic solution described in the supplementary material
(red line for bump at 30, blue line for bump at 80). As can be seen, there is good agreement between
theory and simulation, with the largest discrepancy occurring for small values of the scale factor.
Thus, when confronted with a changepoint, the network no longer approximates a Kalman filter
and instead maintains two competing hypotheses in a way that is qualitatively similar to that of the
run-length distribution in [1]. This is an extremely interesting result and hints at ways in which more
complex distributions may be encoded in these types of networks.
7 Discussion
7.1 Relation to previous work
Of the papers mentioned in the introduction, two are of particular relevance to the current work. In the first, [8], the authors considered a neural implementation of the Kalman filter using line
attractors. Although this work, at first glance, seems similar to what is presented here, there are
several major differences, the main one being that our network is not a line attractor at all, while the
results in [8] rely on this property. Also, in [8], the Kalman gain is changed manually, whereas in
our case it adjusts automatically (equations 13 and 14), and the form of non-linearity is different.
Probabilistic population coding [16, 4] is more closely related to the model presented here. Combined
with divisive normalization, these networks can implement a Kalman filter exactly, while the model
presented here can 'only' approximate one. While this may seem like a limitation of our network,
we see it as an advantage as the breakdown of the approximation leads to a more robust response to
outliers and changepoints than a pure Kalman filter.
7.2 Extension beyond one-dimensional Gaussians
A major limitation of the current model is that it only applies to one-dimensional Gaussian tracking
- clearly an unreasonable restriction for the brain. One possible way around this limitation is hinted
at by the response of the network in the changepoint case where we saw two, largely independent
bumps of activity in the network. This ability to encode multiple 'particles' in the network may
allow networks of this kind to implement something like the dynamics of a particle filter [12] that
can approximate the inference process for non-linear and non-Gaussian systems. Such a possibility
is an intriguing idea for future work.
7.3 Experimental predictions
The model makes at least two easily testable predictions about the response of head direction cells
[21, 22, 23] in rats. The first comes by considering the response of the neurons in the 'dark'.
Assuming that all bearing cues can indeed be eliminated, by setting A(t) = 0 in equation 13, we
expect the activity of the neurons to fall off as 1/t and that the shape of the tuning curves will
remain approximately constant. Note that this prediction is vastly different from the behaviour of a
line attractor, where we would not expect the level of activity to fall off at all in the dark.
Another, slightly more ambitious experiment would involve perturbing the reliability of one of the
landmark cues. In particular, one could imagine a training phase, where the position of one landmark
is jittered over time, such that each time the rat encounters it it is at a slightly different heading. In
the test case, all other, reliable, landmark cues would be removed and the response of head direction
cells measured in response to presentation of the unreliable cue alone. The prediction of the model
is that this would reduce the strength of the input, A, which in turn reduces the level of activity in the head direction cells, λ. In particular, if σ_z is the jitter of the unreliable landmark, then we expect λ to scale as 1/σ_z². This prediction is very different from that of a line attractor, which would predict
a constant level of activity regardless of the reliability of the landmark cues.
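As a back-of-the-envelope check of the first prediction, the sketch below assumes a scale-factor recursion λ ← λ/(1 + λ), whose closed-form solution λ(t) = λ(0)/(1 + λ(0)t) falls off as 1/t; since equation 13 is not reproduced in this section, this recursion is an illustrative stand-in rather than the paper's exact update.

    # Hypothetical 1/t falloff in the dark (A(t) = 0): assume lam <- lam / (1 + lam).
    lam = 1.0
    for t in range(1, 6):
        lam = lam / (1.0 + lam)
        print(t, round(lam, 4), round(1.0 / (1.0 + t), 4))  # recursion vs. 1/(1 + t)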
8 Conclusions
In this paper we have introduced a novel neural network model whose dynamics map directly onto
those of a one-dimensional Kalman filter when the prediction error is small. This property is robust
to noise and when the prediction error is large, such as for changepoints, the output of the network
diverges from that of the Kalman filter, but in a way that is both interesting and useful. Finally, the
model makes two easily testable experimental predictions about head direction cells.
Acknowledgements
We would like to thank the anonymous reviewers for their very helpful comments on this work.
References
[1] R.P. Adams and D.J.C. MacKay. Bayesian online changepoint detection. Technical report, University of Cambridge, Cambridge, UK, 2007.
[2] J. S. Anderson, I. Lampl, D. C. Gillespie, and D. Ferster. The contribution of noise to contrast invariance of orientation tuning in cat visual cortex. Science, 290:1968-1972, 2000.
[3] M. J. Barber, J. W. Clark, and C. H. Anderson. Neural representation of probabilistic information. Neural Computation, 15:1843-1864, 2003.
[4] J. Beck, W. J. Ma, P. E. Latham, and A. Pouget. Probabilistic population codes and the exponential family of distributions. Progress in Brain Research, 165:509-519, 2007.
[5] K. H. Britten, M. N. Shadlen, W. T. Newsome, and J. A. Movshon. Response of neurons in macaque MT to stochastic motion signals. Visual Neuroscience, 10:1157-1169, 1993.
[6] S. Deneve. Bayesian spiking neurons I: Inference. Neural Computation, 20:91-117, 2008.
[7] S. Deneve. Bayesian spiking neurons II: Learning. Neural Computation, 20:118-145, 2008.
[8] S. Deneve, J.-R. Duhamel, and A. Pouget. Optimal sensorimotor integration in recurrent cortical networks: a neural implementation of Kalman filters. Journal of Neuroscience, 27(21):5744-5756, 2007.
[9] S. Deneve, P. E. Latham, and A. Pouget. Reading population codes: a neural implementation of ideal observers. Nature Neuroscience, 2(8):740-745, 1999.
[10] S. Deneve, P. E. Latham, and A. Pouget. Efficient computation and cue integration with noisy population codes. Nature Neuroscience, 4(8):826-831, 2001.
[11] J. I. Gold and M. N. Shadlen. Representation of a perceptual decision in developing oculomotor commands. Nature, 404:390-394, 2000.
[12] N. J. Gordon, D. J. Salmond, and A. F. M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F, 140:107-113, 1993.
[13] D. J. Heeger. Modeling simple cell direction selectivity with normalized half-squared, linear operators. Journal of Neurophysiology, 70:1885-1897, 1993.
[14] D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9:181-198, 1993.
[15] P. E. Latham, S. Deneve, and A. Pouget. Optimal computation with attractor networks. Journal of Physiology Paris, 97:683-694, 2003.
[16] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience, 9(11):1432-1438, 2006.
[17] R. P. N. Rao. Bayesian computation in recurrent neural circuits. Neural Computation, 16:1-38, 2004.
[18] R. P. N. Rao. Hierarchical Bayesian inference in networks of spiking neurons. In Advances in Neural Information Processing Systems, volume 17, 2005.
[19] M. Sahani and P. Dayan. Doubly distributional population codes: simultaneous representation of uncertainty and multiplicity. Neural Computation, 15:2255-2279, 2003.
[20] G. Sclar and R. D. Freeman. Orientation selectivity in the cat's striate cortex is invariant with stimulus contrast. Experimental Brain Research, 46:457-461, 1982.
[21] J. S. Taube, R. U. Muller, and J. B. Ranck. Head-direction cells recorded from postsubiculum in freely moving rats. I. Description and quantitative analysis. Journal of Neuroscience, 10(2):420-435, 1990.
[22] J. S. Taube, R. U. Muller, and J. B. Ranck. Head-direction cells recorded from postsubiculum in freely moving rats. II. Effects of environmental manipulations. Journal of Neuroscience, 10(2):436-447, 1990.
[23] S. I. Wiener and J. S. Taube. Head direction cells and the neural mechanisms of spatial orientation. MIT Press, 2005.
[24] L.-G. Wu and P. Saggau. Presynaptic inhibition of elicited neurotransmitter release. Trends in Neuroscience, 20:204-212, 1997.
[25] X. Xie, R. H. Hahnloser, and H. S. Seung. Double-ring network model of the head-direction system. Physical Review E, 66:041902, 2002.
[26] K. Zhang. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. Journal of Neuroscience, 16(6):2112-2126, 1996.
2,941 | 3,666 | Distribution Matching for Transduction
Novi Quadrianto
RSISE, ANU & SML, NICTA
Canberra, ACT, Australia
[email protected]
James Petterson
RSISE, ANU & SML, NICTA
Canberra, ACT, Australia
[email protected]
Alex J. Smola
Yahoo! Research
Santa Clara, CA, USA
[email protected]
Abstract
Many transductive inference algorithms assume that distributions over training
and test estimates should be related, e.g. by providing a large margin of separation
on both sets. We use this idea to design a transduction algorithm which can be
used without modification for classification, regression, and structured estimation.
At its heart we exploit the fact that for a good learner the distributions over the
outputs on training and test sets should match. This is a classical two-sample
problem which can be solved efficiently in its most general form by using distance
measures in Hilbert Space. It turns out that a number of existing heuristics can be
viewed as special cases of our approach.
1 Introduction
Transduction relies on the fundamental assumption that training and test data should exhibit similar
behavior. For instance, in large margin classification a popular concept is to assume that both training
and test data should be separable with a large margin [4]. A similar matching assumption is made
by [8, 15] in requiring that class means are balanced between training and test set. Corresponding
distributional assumptions are made for classification by [5], for regression by [10], and in the
context of sufficient statistics on the marginal polytope by [3, 6].
Such matching assumptions are well founded: after all, we assume that both training data X = {x_1, ..., x_m} ⊆ 𝒳 and test data X′ := {x′_1, ..., x′_{m′}} ⊆ 𝒳 are drawn independently and identically distributed from the same distribution p(x) on a domain 𝒳. It therefore follows that for any function (or set of functions) f : 𝒳 → ℝ the distribution of f(x) where x ∼ p(x) should also behave in the same way on both training and test set. Note that this is not automatically true if we get to choose f after seeing X and X′.
Rather than indirectly incorporating distributional similarity, e.g. by a large margin heuristic, we
cast this goal as a two-sample problem which will allow us to draw on a rich body of literature for
comparing distributions. One advantage of our setting is its full generality. That is, it is applicable
without much need for customization to all estimation problems, whether structured or not. Furthermore, our approach is scalable and can be used easily with online optimization algorithms requiring
no additional storage and only an additional O(1) computation per observation. This allows us to
perform a multi-category classification on a dataset with 3.2 × 10^6 observations. At its heart it uses the
following: rather than minimizing only the empirical risk, regularized risk, log-posterior, or related
quantities obtained only on the training set, let us add a divergence term characterizing the mismatch
in distributions between training and test set. We show that the Maximum-Mean-Discrepancy [7] is
a suitable quantity for this purpose. Moreover, we show that for certain choices of kernels we are
able to recover a number of existing transduction constraints as a special case.
Note that our setting is entirely complementary to the notion of modifying the function space due
to the availability of additional data. The latter stream of research led to the use of graph kernels
and similar density-related algorithms [1]. It is often referred to as the cluster assumption in semi-supervised learning. In other words, both methods can be combined as needed. That said, while
distribution matching always holds, thus making our method always applicable, it is not entirely
clear whether the cluster assumption is always satisfied (e.g. assume a noisy classification problem).
Distribution matching, however, comes with a nontrivial price: the objective of the optimization
problem ceases to be convex except for rather special cases (which correspond to algorithms that
have been proposed as previous work). While this is a downside, it is a property inherent in most
transduction algorithms: after all, we are dealing with algorithms to obtain self-consistent labelings, predictions, or regressions on the data, and there may exist more than one potential solution.
2 The Model
Supervised Learning  Denote by 𝒳 and 𝒴 the domains of data and labels and let Pr(x, y) be a distribution on 𝒳 × 𝒴 from which we are drawing observations. Moreover, denote by X, Y the sets of data and labels of the training set and by X′, Y′ the test data and labels respectively. In general, when
designing an estimator one attempts to minimize some regularized risk functional

R_reg[f, X, Y] := (1/m) Σ_{i=1}^m l(x_i, y_i, f) + λΩ[f]   (1)

or alternatively (in a Bayesian setting) one deals with a log-posterior probability

log p(f | X, Y) = Σ_{i=1}^m log p(y_i | x_i, f) + log p(f) + const.   (2)
Here p(f) is the prior of the parameter choice f and p(y_i | x_i, f) denotes the likelihood. f typically is a mapping 𝒳 → ℝ (for scalar problems such as regression or classification) or 𝒳 → ℝ^d (for
multivariate problems such as named entity tagging, image annotation, matching, ranking, or more
generally the clique potentials of graphical models). Note that we are free to choose f from one
of many function classes such as decision trees, neural networks, or (nonparametric) linear models.
The specific choice boils down to the ability to control the complexity of f efficiently, to one's prior
knowledge of what constitutes a simple function, to runtime constraints, and to the availability of
scalable algorithms. In general, we will denote the training-data dependent term by
R_train[f, X, Y]   (3)
and we assume that finding some f for which R_train[f, X, Y] is small is desirable. An analogous reasoning applies to sampling-based algorithms; however, we skip them for the sake of conciseness.
Distribution Matching  Denote by f(X) := {f(x_1), ..., f(x_m)} and by f(X′) := {f(x′_1), ..., f(x′_{m′})} the applications of our estimator (and any related quantities) to training and test set respectively. For f chosen a priori, the distributions from which f(X) and f(X′) are drawn
coincide. Clearly, this should also hold whenever f is chosen by an estimation process. After all,
we want the empirical risks on the training and test sets to match. While this cannot be checked
directly, we can at least check closeness between the distributions of f (x). This reasoning leads us
to the following additional term for the objective function of a transduction problem:
D(f(X), f(X′))   (4)

Here D(f(X), f(X′)) denotes the distance between the two distributions f(X) and f(X′). This leads to an overall objective for learning

R_train[f, X, Y] + γ D(f(X), f(X′)) for some γ > 0   (5)
when performing transductive inference. For instance, we could use the Kolmogorov-Smirnov statistic between both sets as our criterion, that is, we could use
D(f(X), f(X′)) = ‖F(f(X)) − F(f(X′))‖_∞   (6)

the L_∞ norm between the cumulative distribution functions F associated with the empirical distributions f(X) and f(X′) to quantify the differences between both distributions. The problem with the above choice of distance is that it is not easily computable: we first need to evaluate f on both X and X′, then sort the arguments, and finally compute the largest deviation between both sets before
we can even attempt computing gradients or using a similar optimization procedure. Such a choice
is clearly computationally undesirable.
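For concreteness, a direct implementation of (6) looks as follows; it needs the full samples of scores and a sort, which is precisely what makes it awkward inside an optimizer.

    import numpy as np

    def ks_distance(a, b):
        # Kolmogorov-Smirnov statistic (equation 6): sup-norm distance between
        # the empirical CDFs of two 1-d score samples a = f(X) and b = f(X').
        a, b = np.sort(a), np.sort(b)
        grid = np.concatenate([a, b])
        Fa = np.searchsorted(a, grid, side="right") / len(a)
        Fb = np.searchsorted(b, grid, side="right") / len(b)
        return np.abs(Fa - Fb).max()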
Instead, we propose the following: denote by H a Reproducing Kernel Hilbert Space with kernel k
defined on X. In this case one can show [7] that whenever k is characteristic (or universal), the map
μ : p ↦ μ[p] := E_{x∼p(x)}[k(x, ·)] with associated distance D(p, p′) := ‖μ[p] − μ[p′]‖²   (7)

characterizes a distribution uniquely. Examples of characteristic kernels are the Gaussian RBF, the Laplacian, and B_{2n+1}-splines. It is possible to design online estimates of the distance quantity which can be used for fast two-sample tests between μ[X] and μ[X′]. Details on how this can be achieved are deferred to Section 4.
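For scalar outputs, a batch version of (7) takes only a few lines; the Gaussian RBF width below is an arbitrary illustrative choice, and the streaming variant actually used by the algorithm appears in Section 4.

    import numpy as np

    def mmd2(fx, fx_prime, gamma=1.0):
        # Squared RKHS distance (equation 7) between the samples f(X) and f(X')
        # under the Gaussian RBF kernel k(u, v) = exp(-gamma (u - v)^2).
        # Biased (V-statistic) form, for brevity.
        k = lambda a, b: np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)
        return k(fx, fx).mean() - 2 * k(fx, fx_prime).mean() + k(fx_prime, fx_prime).mean()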
3 Special Cases
Before discussing a specific algorithm let us consider a number of special cases to show that this
basic idea is rather common in the literature (albeit not as explicit as in the present paper).
Mean Matching for Classification  Joachims [8] uses the following balancing constraint in the objective function of a binary classifier where ŷ(x) = sgn(f(x)) for f(x) = ⟨w, x⟩. In order to balance the outputs between training and test set, [8] imposes the linear constraint

(1/m) Σ_{i=1}^m f(x_i) = (1/m′) Σ_{i=1}^{m′} f(x′_i).   (8)
Assuming a linear kernel k on ℝ this constraint is equivalent to requiring that

μ[f(X)] = (1/m) Σ_{i=1}^m ⟨f(x_i), ·⟩ = (1/m′) Σ_{i=1}^{m′} ⟨f(x′_i), ·⟩ = μ[f(X′)].   (9)
Note that [8] uses the margin distribution as an additional criterion which will be discussed later. This setting can be extended to multiclass categorization and estimation with structured random variables in a straightforward fashion [15] simply by requiring a constraint corresponding to (9) to be satisfied for all possible values of y via

(1/m) Σ_{i=1}^m ⟨f(x_i, y), ·⟩ = (1/m′) Σ_{i=1}^{m′} ⟨f(x′_i, y), ·⟩ for all y ∈ 𝒴.   (10)

This is equivalent to a linear kernel on ℝ^𝒴 and the requirement that the distributions of the values f(x, y) match for all y.
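In code, these balancing constraints amount to nothing more than comparing per-label mean outputs. In the sketch below, f(x, y) stands for the user's compatibility function and is an assumed interface, not something defined in this paper.

    import numpy as np

    def mean_matching_gaps(f, X_train, X_test, labels):
        # Violation of the structured balancing constraint (10): for every
        # label y, the gap between mean outputs on training and test inputs.
        # All gaps equal to zero recovers (8)/(9) label by label.
        return {y: np.mean([f(x, y) for x in X_train])
                  - np.mean([f(x, y) for x in X_test])
                for y in labels}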
Distribution Matching for Classification  Gärtner et al. [5] propose to perform transduction by
requiring that the conditional class probabilities on training and test set match. That is, for classifiers
generating a distribution of the form y′_i ∼ p(y′_i | x′_i, w), they require that the marginal class probability on the test set matches the empirical class probability on the training set. Again, this can be cast in terms of distribution matching via

μ[g ∘ f(X)] = (1/m) Σ_{i=1}^m ⟨g ∘ f(x_i), ·⟩ = (1/m′) Σ_{i=1}^{m′} ⟨g ∘ f(x′_i), ·⟩ = μ[g ∘ f(X′)]
Here g(θ) = 1/(1 + e^{−θ}) denotes the likelihood of y = 1 in logistic regression for the model p(y|θ) = 1/(1 + e^{−yθ}). Note that instead of choosing the logistic transform g we could have picked a large number of other transformations. Indeed, we may strengthen the requirement above to hold for all g in some given function class G as follows:

D(f(X), f(X′)) := sup_{g∈G} | (1/m) Σ_{i=1}^m g ∘ f(x_i) − (1/m′) Σ_{i=1}^{m′} g ∘ f(x′_i) |   (11)
If we restrict ourselves to g having bounded norm in a Reproducing Kernel Hilbert Space we obtain
exactly the criterion (7). Gretton et al. [7] show by duality that this is equivalent to the distance
proposed in (11). In other words, generalizing distribution matching to apply to transforms other
than the logistic leads us directly to our new transduction criterion.
Figure 1: Score distribution of f(x) = ⟨w, x⟩ + b on the 'iris' toy dataset. From left to right: induction scores on the training set; test set; transduction scores on the training set; test set. Note
that while the margin distributions on training and test set are very different for induction, the ones
for transduction match rather well. It results in a 10% reduction of the misclassification error.
Distribution Matching for Regression A similar idea for transduction was proposed by [10] in
the context of regression: requiring that both means and predictive variances of the estimate agree
between training and test set. For a heteroscedastic regression estimate this constraint between training and test set is met simply by ensuring that the distributions over first and second order moments
of a Gaussian exponential family distribution match. The same goal can be achieved by using a
polynomial kernel of second degree on the estimates, which shows that regression transduction can
be viewed as a special case.
Large Margin Hypothesis A key assumption in transduction is that a good hypothesis is characterized by a large margin of separation on both training and test set. Typically, the latter is enforced
by some nonconvex function, e.g. of the form max(0, 1 − |f(x)|), thus leading to a nonconvex optimization problem. Generalizing this approach to multiclass and structured estimation settings is not entirely trivial and requires a number of heuristic choices (e.g. how to define the equivalent of the hat function max(0, 1 − |θ|) that is commonly used in binary transduction).
Instead, if we require that the distribution of values f(x, ·) on X′ match those on X, we automatically obtain a loss function which enforces the large margin hypothesis whenever it is actually achievable on the training set. After all, assume that f(X) exhibits a large margin of separation whereas f(X′) does not. In this case, D(f(X), f(X′)) is large and we obtain better risk minimizers by minimizing the discrepancy of the distributions. The key point is that by using a two-sample
criterion it is possible to obtain such criteria automatically without the need for heuristic choices.
See Figure 1 for illustrations of this idea.
4 Algorithm
Streaming Approximation  In general, minimizing D(f(X), f(X′)) is computationally infeasible since the estimation of the distributional distance requires access to f(X) and f(X′) rather than
evaluations on a small sample. However, for Hilbert-Space based distance measures it is possible to
find an online estimate of D as follows [7]:
D(p, p′) := ‖μ[p] − μ[p′]‖² = ‖E_{x∼p(x)}[k(x, ·)] − E_{x′∼p′(x′)}[k(x′, ·)]‖²   (12)
 = E_{x,x̃∼p} E_{x′,x̃′∼p′}[k(x, x̃) − k(x, x̃′) − k(x̃, x′) + k(x′, x̃′)]   (13)

The symbol (˜·) denotes a second set of observations drawn from the same distribution. Note that (13) decomposes into a sum over 4 kernel functions, each of which takes as arguments a pair of instances drawn from p and p′ respectively. Hence we can find an unbiased estimate via

D̂ := (1/m) Σ_{i=1}^m D_i where

D_i := k(f(x_i), f(x_{i+1})) − k(f(x_i), f(x′_{i+1})) − k(f(x_{i+1}), f(x′_i)) + k(f(x′_i), f(x′_{i+1}))   (14)

under the assumption that X and X′ contain iid data. Note that the assumption automatically fails if there is sequential dependence within the sets X or X′ (e.g. we see all positive labels before we see the negative ones). In this case it is necessary to randomize X and X′.
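A minimal sketch of the estimator (14) follows; it walks over disjoint consecutive pairs so each term touches O(1) data and nothing is stored. Equal, even sample sizes and a Gaussian RBF kernel are simplifying assumptions, and the paper's estimator averages over all consecutive pairs rather than disjoint ones.

    import numpy as np

    def streaming_mmd(fx, fx_prime, gamma=1.0):
        # Streaming estimate of D via the pairwise terms D_i of equation (14).
        k = lambda u, v: np.exp(-gamma * (u - v) ** 2)
        total, n_pairs = 0.0, len(fx) // 2
        for i in range(0, 2 * n_pairs - 1, 2):
            a, b = fx[i], fx[i + 1]              # consecutive training scores
            c, d = fx_prime[i], fx_prime[i + 1]  # consecutive test scores
            total += k(a, b) - k(a, d) - k(b, c) + k(c, d)
        return total / n_pairs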
Stochastic Gradient Descent  The fact that the estimator of the distance D̂ decomposes into an average over a function of pairs from the training and test set respectively means that we can use D_i as a stochastic approximation. Applying the same reasoning to the loss function in the regularized risk (1) we obtain the following loss

l̃(x_i, x_{i+1}, y_i, y_{i+1}, x′_i, x′_{i+1}, f)   (15)
 := l(x_i, y_i, f) + l(x_{i+1}, y_{i+1}, f) + 2λΩ[f] + γ[k(f(x_i), f(x_{i+1})) − k(f(x_i), f(x′_{i+1})) − k(f(x_{i+1}), f(x′_i)) + k(f(x′_i), f(x′_{i+1}))]
as a stochastic estimate of the objective function defined in (5). This suggests Algorithm 1, which is
a nonconvex variant of [12]. Note that at no time do we need to store past data, even for computing the
distance between both distributions.
Algorithm 1 Stochastic Gradient Descent
Input: convex set A, objective function l̃
Initialize w = 0
for t = 1 to N do
  Sample (x_i, y_i), (x_{i+1}, y_{i+1}) ∼ p(x, y) and x′_i, x′_{i+1} ∼ p(x)
  Update w ← w − η_t ∂_w l̃(x_i, x_{i+1}, y_i, y_{i+1}, x′_i, x′_{i+1}, f) where f(x) = ⟨φ(x), w⟩
  Project w onto A via w ← argmin_{w̄ ∈ A} ‖w̄ − w‖
end for
Remark: The streaming formulation does not impose any in-principle limitation regarding matching
sample sizes. The only difference is that in the unmatched case we want to give samples from
both distributions different weights (1/m and 1/m′ respectively), e.g. by modifying the sampling
procedure (see Table 3, Section 5).
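The following sketch instantiates Algorithm 1 for a linear model f(x) = ⟨w, x⟩ with hinge loss, omitting the projection step; the parameter names lam (regularizer λ), gam (transduction weight γ), and the RBF width gamma_k are our own labels for illustration. Training and test indices are sampled independently, which also covers the unequal-sample-size case of the remark above.

    import numpy as np

    def sgd_transduction(X, Y, X_prime, n_steps=10000, lam=1e-3, gam=1e-2,
                         eta=0.1, gamma_k=1.0, seed=0):
        # Sketch of Algorithm 1: hinge loss plus streaming distribution matching.
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        for t in range(n_steps):
            i, j = rng.integers(len(X), size=2)        # random training pair
            p, q = rng.integers(len(X_prime), size=2)  # random test pair
            f = lambda x: x @ w
            grad = 2 * lam * w                         # gradient of lam * ||w||^2
            for idx in (i, j):                         # hinge-loss subgradients
                if Y[idx] * f(X[idx]) < 1:
                    grad -= Y[idx] * X[idx]
            # gradient of k(f(a), f(b)) = exp(-gamma_k (f(a) - f(b))^2) w.r.t. w
            def kgrad(a, b, sign):
                d = f(a) - f(b)
                return sign * np.exp(-gamma_k * d * d) * (-2 * gamma_k) * d * (a - b)
            grad += gam * (kgrad(X[i], X[j], 1.0)
                           + kgrad(X_prime[p], X_prime[q], 1.0)
                           + kgrad(X[i], X_prime[q], -1.0)
                           + kgrad(X[j], X_prime[p], -1.0))
            w -= eta / np.sqrt(t + 1.0) * grad
        return w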
DC Programming Alternatively, the Concave Convex Procedure, best known as DC programming in optimization [2], can be used to find an approximate solution of the problem in (5) by
solving a succession of convex programs. DC programming has been used extensively in most other transductive algorithms to deal with the non-convexity of the objective function. It works as follows: for a given function F(x) that can be written as a difference of two convex functions G and H via F(x) = G(x) − H(x), the inequality

F(x) ≤ F̄(x, x_0) := G(x) − H(x_0) − ⟨x − x_0, ∂_x H(x_0)⟩   (16)

holds for all x_0, with equality for x = x_0, due to the convexity of H(x). This implies an iterative algorithm for finding a local minimum of F by minimizing the upper bound F̄(x, x_0) and subsequently updating x_0 ← argmin_x F̄(x, x_0) to the minimizer of the upper bound.
In order to minimize an additively decomposable objective function as in our transductive estimation, we could use stochastic gradient descent on the convex upper bound. Note that here the convex
upper bound is given by a sum over the convex upper bounds for all terms. This strategy, however, is deficient in a significant aspect: the convex upper bounds on each of the loss terms become
increasingly loose as we move f away from the current point of approximation. It would be considerably better if we updated the upper bound after every stochastic gradient descent step. This
variant, however, is identical to stochastic gradient descent on the original objective function due to
the following:
∂_x F(x)|_{x=x_0} = ∂_x F̄(x, x_0)|_{x=x_0} = ∂_x G(x)|_{x=x_0} − ∂_x H(x)|_{x=x_0} for all x_0.   (17)
In other words, in order to compute the gradient of the upper bound we need not compute the upper
bound itself. Instead we may use the nonconvex objective directly; hence we did not pursue the DC programming approach, and Algorithm 1 applies.
5 Experiments
To demonstrate the applicability of our approach, we apply transduction to binary and multiclass
classification both on toy datasets from the UCI repository [16] and the LibSVM site [17], plus
a larger scale multi-category classification dataset with 3.2 × 10^6 observations. We also perform
experiments on a structured estimation problem, i.e. Japanese named entity recognition task and
CoNLL-2000 base NP chunking task.
Algorithms Since we are not aware of other transductive algorithms which can be applied easily
to all the problems we consider, we choose problem-specific transduction algorithms as competitors.
Multi Switch Transductive SVM (MultiSwitch) is used for binary classification [14]. This method
is a variant of the transductive SVM algorithm [8] tailored for linear semi-supervised binary classification on large and sparse datasets and involves switching of more than a single pair of labels at a
time. For multiclass categorization we pick a Gaussian-process-based transductive algorithm with
distribution matching term (GPDistMatch) [5].
We use stochastic gradient descent for optimization in both inductive and transductive settings for
binary and multiclass losses. More specifically, for transduction we use the Gaussian RBF kernel to
compare distributions in (14). Note that, in the multiclass case, the additional distribution matching
term measures the distance between multivariate functions.
Small Scale Experiments We used the following datasets: binary (breastcancer, derm, optdigits,
wdbc, ionosphere, iris, specft, pageblock, tae, heart, splice, adult, australian, bupa, cmc, german,
pima, tic, yeast, sonar, cleveland, svmguide3 and musk) from the UCI repository and multiclass
(usps, satimage, segment, svmguide2, vehicle). The data was preprocessed to have zero mean and
unit variance.
Since we anticipate the relevant length scale in the margin distribution to be in the order of 1 (after
all, we use a loss function, i.e. a hinge loss, which uses a margin of 1) we pick a Gaussian RBF
kernel width of 0.2 for binary classification. Moreover, to take scaling in the number of classes into account we choose a kernel width of 0.1√c for multicategory classification. Here c denotes the
number of classes. We could indeed vary this width but we note in our experiments that the proposed
method is not sensitive to this kernel width.
We split data equally into training and test sets, performing model selection on the training set and
assessing performance on the test set. In these small scale experiments, we tune hyperparameters via
5-fold cross validation on the entire training set. The whole procedure was then repeated 5 times to
obtain confidence bounds. More specifically, in the model selection stage, for transduction we adjust
the regularization λ and the transductive weight term γ (obviously, for inductive inference we only need to adjust λ). For MultiSwitch Transduction the positive class fraction of unlabeled data was
estimated using the training set [14]. Likewise, the two associated regularization parameters were
tuned on the training set. For GP transduction both the regularization and divergence parameters
were adjusted.
Results The experimental results are summarized in Figure 2 for a binary setting and in Table
1 for a multiclass problem. In 23 binary datasets, transduction outperforms the inductive setup in
20 of them. Arguably, our proposed transductive method performs on a par with state-of-the-art
transductive approach for each learning problem. In the binary estimation, out of 23 datasets, our
method performs significantly worse than the MultiSwitch transduction algorithm on 4 datasets (adult,
bupa, pima, and svmguide3) and significantly better on 2 datasets (ionosphere and pageblock), using
a one-sided paired t-test with 95% confidence. Overall, both algorithms are very comparable. The
advantage of our approach is that it is 'plug and play', i.e. for different problems we only need
to use the appropriate supervised loss function. The distribution matching penalty itself remains
unchanged. Further, by casting the transductive solution as an online optimization method, our
approach scales well.
Larger Scale Experiments Since one of the key points of our approach is that it can be applied
to large problems, we performed transduction on the DMOZ ontology [20] of topics. We selected
the top 2 levels of the topic tree (575) and removed all but the 100 most frequent ones, since a
large number of topics occurs only very rarely. This left us with 89.2% of the initial webpages.
As feature vectors we used the standard bag of words representation of the web page descriptions
with TF-IDF weighting. The dictionary size (and therefore the dimensionality of our features) is
Figure 2: Error rate on 23 binary estimation problems. Left panel, DistMatch against Induction;
Right panel, DistMatch against MultiSwitch. DistMatch: distribution matching (ours) and
MultiSwitch: Multi switch transductive SVM, [14]. Height of the box encodes standard error of DistMatch and width of the box encodes standard error of Induction / MultiSwitch.
Table 1: Error rate ± standard deviation on a multi-category estimation problem. DistMatch: distribution matching (ours) and GPDistMatch: Gaussian Process transduction, [5].
dataset    m    classes  Induction    DistMatch    GPDistMatch
usps       730  10       0.143±0.021  0.125±0.019  0.140±0.034
satimage   620  6        0.190±0.052  0.186±0.037  0.212±0.034
segment    693  7        0.279±0.090  0.206±0.047  0.181±0.020
svmguide2  391  3        0.280±0.028  0.256±0.020  0.231±0.018
vehicle    423  4        0.385±0.070  0.333±0.048  0.336±0.060
Table 2: Error rate on the DMOZ ontology for increasing training / test set sizes.
training / test set size  50,000  100,000  200,000  400,000  800,000  1,600,000
induction                 0.365   0.362    0.337    0.299    0.300    0.268
transduction              0.344   0.326    0.330    0.288    0.263    0.250
Table 3: Error rate on the DMOZ ontology for a fixed training set size of 100,000 samples.
test set size  100,000  200,000  400,000  800,000  1,600,000
induction      0.358    0.358    0.357    0.357    0.357
transduction   0.326    0.316    0.306    0.322    0.329
Table 4: Accuracy, precision, recall and F_{β=1} score on the Japanese named entity task.
              Accuracy  Precision  Recall  F1 Score
induction     96.82     84.15      72.49   77.89
transduction  97.13     84.46      75.30   79.62
Table 5: Accuracy, precision, recall and F_{β=1} score on the CoNLL-2000 base NP chunking task.
              Accuracy  Precision  Recall  F1 Score
induction     95.72     90.99      90.72   90.85
transduction  96.05     91.73      91.97   91.85
1,319,489. For these larger scale experiments, we use a dataset of up to 3.2 × 10^6 observations. To
our knowledge, our proposed transduction method is the only one that scales very well due to the
stochastic approximation.
For each experiment, we split data into training and test sets. Model selection is performed on the
training set by putting aside part of the training data as a validation set which is then used exclusively
for tuning the hyperparameters. In large scale transduction two issues matter: firstly, the algorithm
needs to be scalable with respect to the training set size. Secondly, we need to be able to scale the
algorithm with respect to the test set. Both results can be seen in Tables 2 and 3. Note that Table 2
uses an equal split between training and test sets, while Table 3 uses an unequal split where the test
set has many more observations. We see that the algorithm improves with increasing data size, both for training and test sets. In the latter case, only up to some point: for the larger test sets (800,000 and 1,600,000) it decreases (although it still stays better than induction). We suspect that a location-dependent transduction score would be useful in this context; i.e., instead of only minimizing the discrepancy between decision function values on training and test set, D(f(X), f(X′)), we could also introduce local features, D((X, f(X)), (X′, f(X′))).
Japanese Named Entity Recognition Experiments  A key advantage of our transduction algorithm is that it can be applied to structured estimation without modification. We used the Japanese
named-entity recognition dataset provided with the CRF++ toolkit [18]. The data contains 716
Japanese sentences with 17 annotated named entities. The task is to detect and classify proper nouns
and numerical information in a document into categories such as names of persons, organizations,
locations, times and quantities. Conditional random fields (CRFs) [9] are considered to be the state-of-the-art framework for this sequential labeling problem [11].
As the basis of our implementation we used Leon Bottou's CRF code [19]. We use simple 1D chain
CRFs with first order Markov dependency between name tags. That is, we have clique potentials
joining adjacent labels (yi , yi+1 ), but which are independent of the text itself, and clique potentials
joining words and labels (xi , yi ). Since the former do not depend on the test data there is no need
to enforce distribution matching. For the latter, though, we want to enforce that clique potentials
are distributed in the same way between training and test set. The stationarity assumption in the
potentials implies that this needs to hold uniformly over all such cliques.
Since the number of tokens per sentence is variable, i.e. the chain length itself is a random variable,
we perform distribution matching on a per-token basis: we oversample each token 10 times in our
experiments. This strikes a balance between statistical accuracy and computational efficiency. The
additional distribution matching term is then measuring the distance between these over-sampled
clique potentials. As before, we split data equally into training and test sets and put aside part of
the training data as a validation set which is used exclusively for tuning the hyperparameters. We
relied on the feature template provided in CRF++ for this task. We report results in Table 4: precision (fraction of predicted name tags which match the reference tags), recall (fraction of reference tags returned), and their harmonic mean, F_{β=1}. Transduction outperforms induction in all
metrics.
CoNLL-2000 Base NP Chunking Experiments Our second structured estimation experiment is
the CoNLL-2000 base NP chunking dataset [13] as provided in the CRF++ toolkit. The task is to
divide text into syntactically correlated parts. The dataset has 900 sentences and the goal is to label
each word with a label indicating whether the word is outside a chunk, starts a chunk, or continues
a chunk.
Similarly to the Japanese named entity recognition task, 1D chain CRFs with only first order Markov
dependency between chunk tags are modeled. We considered binary-valued features which depend
on the words, part-of-speech tags, and labels in the neighborhood of a given word as encoded in
the CRF++ feature template. The same experimental setup as in named entity experiments is used.
The results in terms of accuracy, precision, recall and F1 score are summarized in Table 5. Again,
transduction outperforms the inductive setup.
6 Summary and Discussion
We proposed a transductive estimation algorithm which is a) simple, b) general, c) scalable, and d)
works well when compared to state-of-the-art algorithms applied to each specific problem. Not
only is it useful for classical binary and multiclass categorization problems but it also applies to
ontologies and structured estimation problems. It is not surprising that it performs very comparably
to existing algorithms, since they can, in many cases, be seen as special instances of the general
purpose distribution matching setting.
Extensions of distribution matching beyond simply modeling f(X) to modeling (X, f(X)) (that is, the introduction of local features), obtaining good theoretical bounds on the shrinkage of the function class via the distribution matching constraint, and applications to other function classes (e.g. balancing decision trees) are the subject of future research.
References
[1] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[2] T. Pham Dinh and L. Hoai An. A D.C. optimization algorithm for solving the trust-region subproblem. SIAM Journal on Optimization, 8(2):476-505, 1998.
[3] G. Druck, G.S. Mann, and A. McCallum. Learning from labeled features using generalized expectation criteria. In S.-H. Myaeng, D.W. Oard, F. Sebastiani, T.-S. Chua, and M.-K. Leong, editors, SIGIR, pages 595-602. ACM, 2008.
[4] A. Gammerman, V. Vovk, and V. Vapnik. Learning by transduction. In Proceedings of Uncertainty in AI, pages 148-155, Madison, Wisconsin, 1998.
[5] T. Gärtner, Q.V. Le, S. Burton, A. J. Smola, and S. V. N. Vishwanathan. Large-scale multiclass transduction. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 411-418, Cambridge, MA, 2006. MIT Press.
[6] J. Graça, K. Ganchev, and B. Taskar. Expectation maximization and posterior constraints. In J. C. Platt, D. Koller, Y. Singer, and S. T. Roweis, editors, NIPS. MIT Press, 2007.
[7] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two sample problem. Technical Report 157, MPI for Biological Cybernetics, 2008.
[8] T. Joachims. Transductive inference for text classification using support vector machines. In I. Bratko and S. Dzeroski, editors, Proc. Intl. Conf. Machine Learning, pages 200-209, San Francisco, 1999. Morgan Kaufmann Publishers.
[9] J. D. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic modeling for segmenting and labeling sequence data. In Proc. Intl. Conf. Machine Learning, volume 18, pages 282-289, San Francisco, CA, 2001. Morgan Kaufmann.
[10] Q.V. Le, A.J. Smola, T. Gärtner, and Y. Altun. Transductive Gaussian process regression with automatic model selection. In J. Fürnkranz, T. Scheffer, and M. Spiliopoulou, editors, European Conference on Machine Learning, volume 4212 of LNAI, pages 306-317, 2006.
[11] A. McCallum and W. Li. Early results for named entity recognition with conditional random fields, feature induction and web enhanced lexicons. In CoNLL, 2003.
[12] Y. Nesterov and J.-P. Vial. Confidence level solutions for stochastic programming. Technical Report 2000/13, Université Catholique de Louvain - Center for Operations Research and Economics, 2000.
[13] E.F. Tjong Kim Sang and S. Buchholz. Introduction to the CoNLL-2000 shared task: Chunking. In Proc. Conf. Computational Natural Language Learning, pages 127-132, Lisbon, Portugal, 2000.
[14] V. Sindhwani and S.S. Keerthi. Large scale semi-supervised linear SVMs. In SIGIR '06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 477-484, New York, NY, USA, 2006. ACM Press.
[15] A. Zien, U. Brefeld, and T. Scheffer. Transductive support vector machines for structured variables. In ICML, pages 1183-1190, 2007.
[16] UCI repository, http://archive.ics.uci.edu/ml/
[17] LibSVM, http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/
[18] CRF++, http://chasen.org/~taku/software/CRF++
[19] Stochastic Gradient Descent code, http://leon.bottou.org/projects/sgd
[20] DMOZ ontology, http://www.dmoz.org
2,942 | 3,667 | Learning to Hash with Binary Reconstructive Embeddings
Brian Kulis and Trevor Darrell
UC Berkeley EECS and ICSI
Berkeley, CA
{kulis,trevor}@eecs.berkeley.edu
Abstract
Fast retrieval methods are increasingly critical for many large-scale analysis tasks,
and there have been several recent methods that attempt to learn hash functions for
fast and accurate nearest neighbor searches. In this paper, we develop an algorithm
for learning hash functions based on explicitly minimizing the reconstruction error
between the original distances and the Hamming distances of the corresponding
binary embeddings. We develop a scalable coordinate-descent algorithm for our
proposed hashing objective that is able to efficiently learn hash functions in a variety of settings. Unlike existing methods such as semantic hashing and spectral
hashing, our method is easily kernelized and does not require restrictive assumptions about the underlying distribution of the data. We present results over several domains to demonstrate that our method outperforms existing state-of-the-art
techniques.
1 Introduction
Algorithms for fast indexing and search have become important for a variety of problems, particularly in the domains of computer vision, text mining, and web databases. In cases where the amount
of data is huge (large image repositories, video sequences, and others), having fast techniques for finding nearest neighbors to a query is essential. At an abstract level, we may view hashing methods for similarity search as mapping input data (which may be arbitrarily high-dimensional) to a low-dimensional binary (Hamming) space. Unlike standard dimensionality-reduction techniques from machine learning, the fact that the embeddings are binary is critical to ensure fast retrieval times:
one can perform efficient linear scans of the binary data to find the exact nearest neighbors in the
Hamming space, or one can use data structures for finding approximate nearest neighbors in the
Hamming space which have running times that are sublinear in the number of total objects [1, 2].
Since the Hamming distance between two objects can be computed via an xor operation and a bit
count, even a linear scan in the Hamming space for a nearest neighbor to a query in a database of
100 million objects can currently be performed within a few seconds on a typical workstation. If the
input dimensionality is very high, hashing methods lead to enormous computational savings.
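To make the xor-and-popcount scan concrete, here is a minimal NumPy sketch; the byte-packing layout and helper names are our own illustration, not part of the paper or any library API.

import numpy as np

def pack_bits(codes):
    # codes: (n, b) array of 0/1 hash bits -> one row of packed bytes per item
    return np.packbits(codes.astype(np.uint8), axis=1)

def hamming_scan(packed_db, packed_query):
    # XOR the query bytes against every database row, then count set bits
    # with a 256-entry popcount lookup table.
    table = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)
    return table[np.bitwise_xor(packed_db, packed_query)].sum(axis=1)

For example, hamming_scan(pack_bits(H_db), pack_bits(h_query[None, :])) returns one distance per database item; each comparison touches only b/8 bytes, which is what makes scans over millions of items practical.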
In order to be successful, hashing techniques must appropriately preserve distances when mapping to
the Hamming space. One of the basic but most widely-employed methods, locality-sensitive hashing
(LSH) [1, 2], generates embeddings via random projections and has been used for many large-scale
search tasks. An advantage to this technique is that the random projections provably maintain the
input distances in the limit as the number of hash bits increases; at the same time, it has been
observed that the number of hash bits required may be large in some cases to faithfully maintain
the distances. On the other hand, several recent techniques, most notably semantic hashing [3]
and spectral hashing [4], attempt to overcome this problem by designing hashing techniques that
leverage machine learning to find appropriate hash functions to optimize an underlying hashing
objective. Both methods have shown advantages over LSH in terms of the number of bits required
to find good approximate nearest neighbors. However, these methods cannot be directly applied in
kernel space and have assumptions about the underlying distributions of the data. In particular, as
noted by the authors, spectral hashing assumes a uniform distribution over the data, a potentially
restrictive assumption in some cases.
In this paper, we introduce and analyze a simple objective for learning hash functions, develop an efficient coordinate-descent algorithm, and demonstrate that the proposed approach leads to improved
results as compared to existing hashing techniques. The main idea is to construct hash functions
that explicitly preserve the input distances when mapping to the Hamming space. To achieve this,
we minimize a squared loss over the error between the input distances and the reconstructed Hamming distances. By analyzing the reconstruction objective, we show how to efficiently and exactly
minimize the objective function with respect to a single variable. If there are n training points, k
nearest neighbors per point in the training data, and b bits in our desired hash table, our method ends
up costing O(nb(k + log n)) time per iteration to update all hash functions, and provably reaches a
local optimum of the reconstruction objective. In experiments, we compare against relevant existing
hashing techniques on a variety of important vision data sets, and show that our method is able to
compete with or outperform state-of-the-art hashing algorithms on these data sets. We also apply
our method on the very large Tiny Image data set of 80 million images [5], to qualitatively show
some example retrieval results obtained by our proposed method.
1.1
Related Work
Methods for fast nearest neighbor retrieval are generally broken down into two families. One group
partitions the data space recursively, and includes algorithms such as k-d trees [6], M-trees [7],
cover trees [8], metric trees [9], and other related techniques. These methods attempt to speed up
nearest neighbor computation, but can degenerate to a linear scan in the worst case. Our focus in
this paper is on hashing-based methods, which map the data to a low-dimensional Hamming space.
Locality-sensitive hashing [1, 2] is the most popular method, and extensions have been explored for
accommodating distances such as $\ell_p$ norms [10], learned metrics [11], and image kernels [12]. Algorithms based on LSH typically come with guarantees that the approximate nearest neighbors (neighbors within $(1 + \epsilon)$ times the true nearest neighbor distance) may be found in time that is sublinear
in the total number of database objects (but as a function of $\epsilon$). Unlike standard dimensionality-reduction techniques, the binary embeddings allow for extremely fast similarity search operations.
Several recent methods have explored ways to improve upon the random projection techniques used
in LSH. These include semantic hashing [3], spectral hashing [4], parameter-sensitive hashing [13],
and boosting-based hashing methods [14].
2
Hashing Formulation
In the following section, we describe our proposed method, starting with the choice of parameterization for the hash functions and the objective function to minimize. We then develop a coordinate-descent algorithm used to minimize the objective function, and discuss extensions of the proposed
approach.
2.1
Setup
Let our data set be represented by a set of n vectors, given by X = [x1 x2 ... xn ]. We will assume
that these vectors are normalized to have unit $\ell_2$ norm; this will make it easier to maintain the
proper scale for comparing distances in the input space to distances in the Hamming space.1 Let a
kernel function over the data be denoted as $\kappa(x_i, x_j)$. We use a kernel function as opposed to the
standard inner product to emphasize that the algorithm can be expressed purely in kernel form.
We would like to project each data point to a low-dimensional binary space to take advantage of fast
nearest neighbor routines. Suppose that the desired number of dimensions of the binary space is b;
we will compute the b-dimensional binary embedding by projecting our data using a set of b hash
functions h1 , ..., hb . Each hash function hi is a binary-valued function, and our low-dimensional
binary reconstruction can be represented as $\tilde{x}_i = [h_1(x_i); h_2(x_i); \ldots; h_b(x_i)]$. Finally, denote
$$d(x_i, x_j) = \tfrac{1}{2}\|x_i - x_j\|^2 \quad \text{and} \quad \tilde{d}(x_i, x_j) = \tfrac{1}{b}\|\tilde{x}_i - \tilde{x}_j\|^2.$$
Notice that $d$ and $\tilde{d}$ are always between 0 and 1.
[Footnote 1] Alternatively, we may scale the data appropriately by a constant so that the squared Euclidean distances $\tfrac{1}{2}\|x_i - x_j\|^2$ are in $[0, 1]$.
2.2
Parameterization and Objective
In standard random hyperplane locality-sensitive hashing (e.g. [1]), each hash function hp is generated independently by selecting a random vector rp from a multivariate Gaussian with zero-mean
and identity covariance. Then the hash function is given as $h_p(x) = \mathrm{sign}(r_p^T x)$. In contrast, we
propose to generate a sequence of hash functions that are dependent on one another, in the same
spirit as in spectral hashing (though with a different parameterization). We introduce a matrix W of
size b ? n, and we parameterize the hash functions h1 , ..., hp , ..., hb as follows:
$$h_p(x) = \mathrm{sign}\Big( \sum_{q=1}^{s} W_{pq}\,\kappa(x_{pq}, x) \Big).$$
Note that the data points xpq for each hash function need not be the same for each hq (that is, each
hash function may utilize different sets of points). Similarly, the number of points s used for each
hash function may change, though for simplicity we will present the case when s is the same for each
function (and so we can represent all weights via the b ? s matrix W ). Though we are not aware
of any existing methods that parameterize the hash functions in this way, this parameterization is
natural for several reasons. It does not explicitly assume anything about the distribution of the
data. It is expressed in kernelized form, meaning we can easily work over a variety of input data.
Furthermore, the form of each hash function (the sign of a linear combination of kernel function
values) is the same as in several kernel-based learning algorithms such as support vector machines.
Rather than simply choosing the matrix W based on random hyperplanes, we will specifically construct this matrix to achieve good reconstructions. In particular, we will look at the squared error
between the original distances (using $d$) and the reconstructed distances (using $\tilde{d}$). We minimize the following objective with respect to the weight matrix $W$:
$$O(\{x_i\}_{i=1}^{n}, W) = \sum_{(i,j)\in\mathcal{N}} \big( d(x_i, x_j) - \tilde{d}(x_i, x_j) \big)^2. \qquad (1)$$
The set N is a selection of pairs of points, and can be chosen based on the application. Typically,
we will choose this to be a set of pairs which includes both the nearest neighbors as well as other
pairs from the database (see Section 3 for details). If we choose k pairs for each point, then the total
size of N will be nk.
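To make the parameterization and objective (1) concrete, the following sketch computes the hash bits and the reconstruction error for a given weight matrix. The helper names are ours, and we adopt the {0, 1} bit convention that the proof of Lemma 1 below relies on.

import numpy as np

def bre_codes(W, K):
    # W: (b, s) weights; K: (s, n) kernel matrix, K[q, i] = kappa(x_pq, x_i).
    # Row p of the result holds h_p over all points, with bits in {0, 1}.
    return (W.dot(K) > 0).astype(float)

def bre_objective(W, K, D, pairs, b):
    # Objective (1); D[k] = d(x_i, x_j) for the k-th pair (i, j) in `pairs`.
    H = bre_codes(W, K)
    return sum((D[k] - np.sum((H[:, i] - H[:, j]) ** 2) / b) ** 2
               for k, (i, j) in enumerate(pairs))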
2.3
Coordinate-Descent Algorithm
The objective O given in (1) is highly non-convex in W , making optimization the main challenge in
using the proposed objective for hashing. One of the most difficult issues is due to the fact that the
reconstructions are binary; the objective is not continuous or differentiable, so it is not immediately
clear how an effective algorithm would proceed. One approach is to replace the sign function by the
sigmoid function, as is done with neural networks and logistic regression.2 Then the objective O
and gradient $\nabla O$ can both be computed in O(nkb) time. However, our experience with minimizing
O with such an approach using a quasi-Newton L-BFGS algorithm typically resulted in poor local
optima; we need an alternative method.
Instead of the continuous relaxation, we will consider fixing all but one weight Wpq , and optimize
the original objective O with respect to Wpq . Surprisingly, we will show below that an exact, optimal
update to this weight can be achieved in time O(n log n+nk). Such an approach will update a single
hash function hp ; then, by choosing a single weight to update for each hash function, we can update
all hash functions in O(nb(k + log n)) time. In particular, if $k = \Omega(\log n)$, then we can update
all hash functions on the order of the time it takes to compute the objective function itself, making
the updates particularly efficient. We will also show that this method provably converges to a local
optimum of the objective function O.
[Footnote 2] The sigmoid function is defined as $s(x) = 1/(1 + e^{-x})$, and its derivative is $s'(x) = s(x)(1 - s(x))$.
We sketch out the details of our coordinate-descent scheme below. We begin with a simple lemma
characterizing how the objective function changes when we update a single hash function.
Lemma 1. Let $\bar{D}_{ij} = d(x_i, x_j) - \tilde{d}(x_i, x_j)$. Consider updating some hash function $h_{old}$ to $h_{new}$
(where $\tilde{d}$ uses $h_{old}$), and let $h_o$ and $h_n$ be the $n \times 1$ vectors obtained by applying the old and new
hash functions to each data point, respectively. Then the objective function $O$ from (1) after updating
the hash function can be expressed as
$$O = \sum_{(i,j)\in\mathcal{N}} \Big( \bar{D}_{ij} + \tfrac{1}{b}(h_o(i) - h_o(j))^2 - \tfrac{1}{b}(h_n(i) - h_n(j))^2 \Big)^2.$$
Proof. For notational convenience in this proof, let $\tilde{D}_{old}$ and $\tilde{D}_{new}$ be the matrices of reconstructed
distances using $h_{old}$ and $h_{new}$, respectively, and let $H_{old}$ and $H_{new}$ be the $n \times b$ matrices of old
and new hash bits, respectively. Also, let $e_t$ be the $t$-th standard basis vector and $e$ be a vector of
all ones. Note that $H_{new} = H_{old} + (h_n - h_o)e_t^T$, where $t$ is the index of the hash function being
updated. We can express $\tilde{D}_{old}$ as
$$\tilde{D}_{old} = \tfrac{1}{b}\big( \gamma_{old} e^T + e \gamma_{old}^T - 2 H_{old} H_{old}^T \big),$$
where $\gamma_{old}$ is the vector of squared norms of the rows of $H_{old}$. Note that the corresponding vector
of squared norms of the rows of $H_{new}$ may be expressed as $\gamma_{new} = \gamma_{old} - h_o + h_n$ since the hash
vectors are binary-valued. Therefore we may write
$$\begin{aligned}
\tilde{D}_{new} &= \tfrac{1}{b}\Big( (\gamma_{old} + h_n - h_o)e^T + e(\gamma_{old} + h_n - h_o)^T \\
&\qquad\quad - 2\big(H_{old} + (h_n - h_o)e_t^T\big)\big(H_{old} + (h_n - h_o)e_t^T\big)^T \Big) \\
&= \tilde{D}_{old} + \tfrac{1}{b}\big( (h_n - h_o)e^T + e(h_n - h_o)^T - 2(h_n h_n^T - h_o h_o^T) \big) \\
&= \tilde{D}_{old} - \tfrac{1}{b}\big( (h_o e^T + e h_o^T - 2 h_o h_o^T) - (h_n e^T + e h_n^T - 2 h_n h_n^T) \big),
\end{aligned}$$
where we have used the fact that $H_{old} e_t = h_o$. We can then write the objective using $\tilde{D}_{new}$ to
obtain
$$\begin{aligned}
O &= \sum_{(i,j)\in\mathcal{N}} \Big( \bar{D}_{ij} + \tfrac{1}{b}\big(h_o(i) + h_o(j) - 2h_o(i)h_o(j)\big) - \tfrac{1}{b}\big(h_n(i) + h_n(j) - 2h_n(i)h_n(j)\big) \Big)^2 \\
&= \sum_{(i,j)\in\mathcal{N}} \Big( \bar{D}_{ij} + \tfrac{1}{b}(h_o(i) - h_o(j))^2 - \tfrac{1}{b}(h_n(i) - h_n(j))^2 \Big)^2,
\end{aligned}$$
since $h_o(i)^2 = h_o(i)$ and $h_n(i)^2 = h_n(i)$. This completes the proof.
The lemma above demonstrates that, when updating a hash function, the new objective function can
be computed in O(nk) time, assuming that we have computed and stored the values of $\bar{D}_{ij}$. Next
we show that we can compute an optimal weight update in time O(nk + n log n).
Consider choosing some hash function hp , and choose one weight index q, i.e. fix all entries of
$W$ except $W_{pq}$, which corresponds to the one weight updated during this iteration of coordinate-descent. Modifying the value of $W_{pq}$ results in updating $h_p$ to a new hashing function $h_{new}$. Now,
for every point $x$, there is a hashing threshold: a new value of $W_{pq}$, which we will call $\hat{W}_{pq}$, such
that
$$\sum_{q=1}^{s} \hat{W}_{pq}\,\kappa(x_{pq}, x) = 0.$$
Observe that, if $c_x = \sum_{q=1}^{s} W_{pq}\,\kappa(x_{pq}, x)$, then the threshold $t_x$ is given by
$$t_x = W_{pq} - \frac{c_x}{\kappa(x_{pq}, x)}.$$
We first compute the thresholds for all n data points: once we have the values of cx for all x,
computing tx for all points requires O(n) time. Since we are updating a single Wpq per iteration,
we can update the values of cx in O(n) time after updating Wpq , so the total time to compute all
thresholds tx is O(n).
Next, we sort the thresholds in increasing order, which defines a set of n + 1 intervals (interval 0 is
the interval of values smaller than the first threshold, interval 1 is the interval of points between the
first and the second threshold, and so on). Observe that, for any fixed interval, the new computed
hash function hnew does not change over the entire interval. Furthermore, observe that as we cross
from one threshold to the next, a single bit of the corresponding hash vector flips. As a result, we
need only compute the objective function at each of the n + 1 intervals, and choose the interval
that minimizes the objective function. We choose a value Wpq within that interval (which will be
optimal) and update the hash function using this new choice of weight. The following result shows
that we can choose the appropriate interval in time O(nk). When we add the cost of sorting the
thresholds, the total cost of an update to a single weight Wpq is O(nk + n log n).
Lemma 2. Consider updating a single hash function. Suppose we have a sequence of hash vectors
$h_{t_0}, \ldots, h_{t_n}$ such that $h_{t_{j-1}}$ and $h_{t_j}$ differ by a single bit for $1 \le j \le n$. Then the objective functions
for all $n + 1$ hash functions can be computed in $O(nk)$ time.
Proof. The objective function may be computed in $O(nk)$ time for the hash function $h_{t_0}$ corresponding to the smallest interval. Consider the case when going from $h_o = h_{t_{j-1}}$ to $h_n = h_{t_j}$ for
some $1 \le j \le n$. Let the index of the bit that changes in $h_n$ be $a$. The only terms of the sum in
the objective that change are ones of the form $(a, j) \in \mathcal{N}$ and $(i, a) \in \mathcal{N}$. Let $f_a = 1$ if $h_o(a) = 0, h_n(a) = 1$, and $f_a = -1$ otherwise. Then we can simplify $(h_n(i) - h_n(j))^2 - (h_o(i) - h_o(j))^2$
to $f_a(1 - 2h_n(j))$ when $a = i$ and to $f_a(1 - 2h_n(i))$ when $a = j$ (the expression is zero when
$i = j$ and will not contribute to the objective). Therefore the relevant terms in the objective function
as given in Lemma 1 may be written as:
$$\sum_{(a,j)\in\mathcal{N}} \Big( \bar{D}_{aj} - \tfrac{f_a}{b}(1 - 2h_n(j)) \Big)^2 + \sum_{(i,a)\in\mathcal{N}} \Big( \bar{D}_{ia} - \tfrac{f_a}{b}(1 - 2h_n(i)) \Big)^2.$$
As there are k nearest neighbors, the first sum will have k elements and can be computed in O(k)
time. The second summation may have more or less than k terms, but across all data points there will
be k terms on average. Furthermore, we must update $\bar{D}$ as we progress through the hash functions,
which can also be straightforwardly done in O(k) time on average. Completing this process over all
n + 1 hash functions results in a total of O(nk) time.
Putting everything together, we have shown the following result:
Theorem 3. Fix all but one entry Wpq of the hashing weight matrix W . An optimal update to Wpq
to minimize (1) may be computed in O(nk + n log n) time.
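A sketch of the single-weight update is given below. For clarity it recomputes the objective in full for each interval rather than applying the O(k) incremental delta of Lemma 2, so it is asymptotically slower than the O(nk + n log n) update of Theorem 3; the thresholds, the interval scan, and the exactness along the coordinate follow the description above. Names and array layouts are our assumptions, and K[q, :] is assumed nonzero.

import numpy as np

def update_weight(W, K, H, p, q, D, pairs, b):
    # Exact coordinate update of W[p, q]; H holds the current (b, n) hash
    # bits and K[q, i] = kappa(x_pq, x_i).
    def obj(Hb):
        return sum((D[k] - np.sum((Hb[:, i] - Hb[:, j]) ** 2) / b) ** 2
                   for k, (i, j) in enumerate(pairs))
    w_old = W[p, q]
    c = W[p, :].dot(K)                         # activations of function p
    t = np.sort(w_old - c / K[q, :])           # hashing thresholds
    mids = np.concatenate(([t[0] - 1.0], (t[:-1] + t[1:]) / 2.0, [t[-1] + 1.0]))
    best_w, best_val = w_old, np.inf
    for w in mids:                             # one candidate per interval
        H[p, :] = (c + (w - w_old) * K[q, :] > 0).astype(float)
        val = obj(H)
        if val < best_val:
            best_val, best_w = val, w
    W[p, q] = best_w
    H[p, :] = (c + (best_w - w_old) * K[q, :] > 0).astype(float)
    return best_val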
Our overall strategy successively cycles through each hash function one by one, randomly selects a
weight to update for each hash function, and computes the optimal updates for those weights. It then
repeats this process until reaching local convergence. One full iteration to update all hash functions
requires time O(nb(k + log n)). Note that local convergence is guaranteed in a finite number of
updates since each update will never increase the objective function value, and only a finite number
of possible hash configurations are possible.
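The outer loop is then only a few lines; it relies on update_weight from the previous sketch and mirrors the strategy just described.

import numpy as np

def train_bre(W, K, D, pairs, b, n_sweeps=100, rng=np.random):
    # Cycle over hash functions, updating one randomly chosen weight of each
    # per sweep. No update ever increases the objective, so the loop settles
    # at a local optimum.
    H = (W.dot(K) > 0).astype(float)
    for _ in range(n_sweeps):
        for p in range(b):
            q = rng.randint(W.shape[1])
            update_weight(W, K, H, p, q, D, pairs, b)
    return W, H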
2.4
Extensions
The method described in the previous section may be enhanced in various ways. For instance,
the algorithm we developed is completely unsupervised. One could easily extend the method to
a supervised one, which would be useful for example in large-scale k-NN classification tasks. In
this scenario, one would additionally receive a set of similar and dissimilar pairs of points based on
class labels or other background knowledge. For all similar pairs, one could set the target original
distance to be zero, and for all dissimilar pairs, one could set the target original distance to be large
(say, 1).
One may also consider loss functions other than the quadratic loss considered in this paper. Another
option would be to use an $\ell_1$-type loss, which would not penalize outliers as severely. Additionally,
one may want to introduce regularization, especially for the supervised case. For example, the
addition of an $\ell_1$ regularization over the entries of W could lead to sparse hash functions, and may
be worth additional study.
3
Experiments
We now present results comparing our proposed approach to the relevant existing methods: locality-
sensitive hashing, semantic hashing (RBM), and spectral hashing. We also compared against the
Boosting SSC algorithm [14] but were unable to find parameters to yield competitive performance,
and so we do not present those results here. We implemented our binary reconstructive embedding
method (BRE) and LSH, and used the same code for spectral hashing and RBM that was employed
in [4]. We further present some qualitative results over the Tiny Image data set to show example
retrieval results obtained by our method.
3.1
Data Sets and Methodology
We applied the hashing algorithms to a number of important large-scale data sets from the computer vision community. Our vision data sets include: the Photo Tourism data [15], a collection
of approximately 300,000 image patches, processed using SIFT to form 128-dimensional vectors;
the Caltech-101 [16], a standard benchmark for object recognition in the vision community; and
LabelMe and Peekaboom [17], two image data sets on top of which global Gist descriptors have
been extracted. We also applied our method to MNIST, the standard handwritten digits data set, and
Nursery, one of the larger UCI data sets.
We mean-centered the data and normalized the feature vectors to have unit norm. Following the suggestion in [4], we apply PCA (or kernel PCA in the case of kernelized data) to the input data before
applying spectral hashing or BRE; the results of the RBM method and LSH were better without
applying PCA, so PCA is not applied for these algorithms. For all data sets, we trained the methods
using 1000 randomly selected data points. For training the BRE method, we select nearest neighbors using the top 5th percentile of the training distances and set the target distances to 0; we found
that this ensures that the nearest neighbors in the embedded space will have Hamming distance very
close to 0. We also choose farthest neighbors using the 98th percentile of the training distances and
maintained their original distances as target distances. Having both near and far neighbors improves
performance for BRE, as it prevents a trivial solution where all the database objects are given the
same hash key. The spectral hashing and RBM parameters are set as in [4, 17]. After constructing the hash functions for each method, we randomly generate 3000 hashing queries (except for
Caltech-101, which has fewer than 4000 data points; in this case we choose the remainder of the
data as queries).
We follow the evaluation scheme developed in [4]. We collect training/test pairs such that the unnormalized Hamming distance using the constructed hash functions is less than or equal to three.
We then compute the percentage of these pairs that are nearest neighbors in the original data space,
which are defined as pairs of points from the training set whose distances are in the top 5th percentile.
This percentage is plotted as the number of bits increases. Once the number of bits is sufficiently
high (e.g. 50), one would expect that distances with a Hamming distance less than or equal to three
would correspond to nearest neighbors in the original data embedding.
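A sketch of this evaluation, under our reading that the "top 5th percentile" means the smallest 5% of the original distances; all names are illustrative.

import numpy as np

def good_neighbor_rate(H_train, H_test, D_true, radius=3, pct=5.0):
    # H_*: (b, n) hash bits; D_true: (n_test, n_train) original distances.
    # Among pairs within `radius` Hamming distance, return the fraction
    # whose original distance falls in the smallest `pct` percent.
    ham = np.array([[np.sum(ht != h) for h in H_train.T] for ht in H_test.T])
    close = ham <= radius
    if not close.any():
        return 0.0
    return float(np.mean(D_true[close] <= np.percentile(D_true, pct)))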
3.2
Quantitative Results
In Figure 1, we plot hashing retrieval results over each of the data sets. We can see that the BRE
method performs comparably to or outperforms the other methods on all data sets. Observe that
both RBM and spectral hashing underperform all other methods on at least one data set. On some
[Figure 1 appears here: six panels (Photo Tourism, Caltech-101, LabelMe, Peekaboom, MNIST, Nursery), each plotting the proportion of good neighbors with Hamming distance <= 3 against the number of bits (10 to 50), with curves for BRE, spectral hashing, RBM, and LSH.]
Figure 1: Results over Photo Tourism, Caltech-101, LabelMe, Peekaboom, MNIST, and Nursery.
The plots show how well the nearest neighbors in the Hamming space (pairs of data points with
unnormalized Hamming distance less than or equal to 3) correspond to the nearest neighbors (top
5th percentile of distances) in the original dataset. Overall, our method outperforms, or performs
comparably to, existing methods. See text for further details.
data sets, RBM appears to require significantly more than 1000 training images to achieve good
performance, and in these cases the training time is substantially higher than the other methods.
One surprising outcome of these results is that LSH performs well in comparison to the other existing methods (and outperforms some of them for some data sets); this stands in contrast to the
results of [4], where LSH showed significantly poorer performance (we also evaluated our LSH
implementation using the same training/test split as in [4] and found similar results). The better
performance in our tests may be due to our implementation of LSH; we use Charikar's random
projection method [1] to construct hash tables.
In terms of training time, the BRE method typically converges in 50-100 iterations of updating
all hash functions, and takes 1-5 minutes to train per data set on our machines (depending on the
number of bits requested). Relatively speaking, the time required for training is typically faster than
RBM but slower than spectral hashing and LSH. Search times in the binary space are uniform across
each of the methods and our timing results are similar to those reported previously (see, e.g. [17]).
3.3
Qualitative Results
Finally, we present qualitative results on the large Tiny Image data set [5] to demonstrate our method
applied to a very large database. This data set contains 80 million images, and is one of the largest
readily available data sets for content-based image retrieval. Each image is stored as 32 x 32 pixels,
and we employ the global Gist descriptors that have been extracted for each image.
We ran our reconstructive hashing algorithm on the Gist descriptors for the Tiny Image data set
using 50 bits, with 1000 training images used to construct the hash functions as before. We selected
a random set of queries from the database and compared the results of a linear scan over the Gist
features with the hashing results over the Gist features. When obtaining hashing results, we collected
the nearest neighbors in the Hamming space to the query (the top 0.01% of the Hamming distances),
and then sorted these by their distance in the original Gist space. Some example results are displayed
in Figure 2; we see that, with 50 bits, we can obtain very good results that are qualitatively similar
to the results of the linear scan.
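The retrieval procedure just described amounts to a Hamming shortlist followed by a re-ranking in Gist space; a small sketch, with illustrative names:

import numpy as np

def hash_retrieve(H_db, gist_db, h_query, gist_query, top_frac=1e-4):
    # H_db: (b, n) database hash bits; gist_db: (F, n) Gist descriptors.
    # Shortlist the nearest fraction of the database in Hamming space,
    # then re-rank the shortlist by distance in the original Gist space.
    ham = np.sum(H_db != h_query[:, None], axis=0)
    k = max(1, int(top_frac * ham.size))
    cand = np.argsort(ham)[:k]
    gd = np.sum((gist_db[:, cand] - gist_query[:, None]) ** 2, axis=0)
    return cand[np.argsort(gd)]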
Figure 2: Qualitative results over the 80 million images in the Tiny Image database [5]. For each
group of images, the top left image is the query, the top row corresponds to a linear scan, and the
second row corresponds to the hashing retrieval results using 50 hash bits. The hashing results are
similar to the linear scan results but are significantly faster to obtain.
4
Conclusion and Future Work
In this paper, we presented a method for learning hash functions, developed an efficient coordinatedescent algorithm for finding a local optimum, and demonstrated improved performance on several
benchmark vision data sets as compared to existing state-of-the-art hashing algorithms. One avenue
for future work is to explore alternate methods of optimization; our approach, while simple and fast,
may fall into poor local optima in some cases. Second, we would like to explore the use of our
algorithm in the supervised setting for large-scale k-NN tasks.
Acknowledgments
This work was supported in part by DARPA, Google, and NSF grants IIS-0905647 and IIS-0819984.
We thank Rob Fergus for the spectral hashing and RBM code, and Greg Shakhnarovich for the
Boosting SSC code.
References
[1] M. Charikar. Similarity Estimation Techniques from Rounding Algorithms. In STOC, 2002.
[2] P. Indyk and R. Motwani. Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality. In STOC, 1998.
[3] R. R. Salakhutdinov and G. E. Hinton. Learning a Nonlinear Embedding by Preserving Class Neighbourhood Structure. In AISTATS, 2007.
[4] Y. Weiss, A. Torralba, and R. Fergus. Spectral Hashing. In NIPS, 2008.
[5] A. Torralba, R. Fergus, and W. T. Freeman. 80 Million Tiny Images: A Large Dataset for Non-parametric
Object and Scene Recognition. TPAMI, 30(11):1958-1970, 2008.
[6] J. Freidman, J. Bentley, and A. Finkel. An Algorithm for Finding Best Matches in Logarithmic Expected
Time. ACM Transactions on Mathematical Software, 3(3):209-226, September 1977.
[7] P. Ciaccia, M. Patella, and P. Zezula. M-tree: An Efficient Access Method for Similarity Search in Metric
Spaces. In VLDB, 1997.
[8] A. Beygelzimer, S. Kakade, and J. Langford. Cover Trees for Nearest Neighbor. In ICML, 2006.
[9] J. Uhlmann. Satisfying General Proximity / Similarity Queries with Metric Trees. Information Processing
Letters, 40:175-179, 1991.
[10] M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni. Locality-Sensitive Hashing Scheme Based on p-Stable
Distributions. In SOCG, 2004.
[11] P. Jain, B. Kulis, and K. Grauman. Fast Image Search for Learned Metrics. In CVPR, 2008.
[12] K. Grauman and T. Darrell. Pyramid Match Hashing: Sub-Linear Time Indexing Over Partial Correspondences. In CVPR, 2007.
[13] G. Shakhnarovich, P. Viola, and T. Darrell. Fast Pose Estimation with Parameter-Sensitive Hashing. In
ICCV, 2003.
[14] G. Shakhnarovich. Learning Task-specific Similarity. PhD thesis, MIT, 2006.
[15] N. Snavely, S. Seitz, and R. Szeliski. Photo Tourism: Exploring Photo Collections in 3D. In SIGGRAPH
Conference Proceedings, pages 835-846, New York, NY, USA, 2006. ACM Press.
[16] L. Fei-Fei, R. Fergus, and P. Perona. Learning Generative Visual Models from Few Training Examples:
an Incremental Bayesian Approach Tested on 101 Object Categories. In Workshop on Generative Model
Based Vision, Washington, D.C., June 2004.
[17] A. Torralba, R. Fergus, and Y. Weiss. Small Codes and Large Databases for Recognition. In CVPR, 2008.
2,943 | 3,668 | A Sparse Non-Parametric Approach for Single
Channel Separation of Known Sounds
Paris Smaragdis
Adobe Systems Inc.
[email protected]
Madhusudana Shashanka
Mars Inc.
[email protected]
Bhiksha Raj
Carnegie Mellon University
[email protected]
Abstract
In this paper we present an algorithm for separating mixed sounds from
a monophonic recording. Our approach makes use of training data which
allows us to learn representations of the types of sounds that compose the
mixture. In contrast to popular methods that attempt to extract compact generalizable models for each sound from training data, we employ
the training data itself as a representation of the sources in the mixture.
We show that mixtures of known sounds can be described as sparse combinations of the training data itself, and in doing so produce significantly
better separation results as compared to similar systems based on compact
statistical models.
Keywords: Example-Based Representation, Signal Separation, Sparse Models.
1
Introduction
This paper deals with the problem of single-channel signal separation, that is, separating out signals from individual sources in a mixed recording. A popular recent statistical
approach has been to obtain compact characterizations of individual sources and employ
them to identify and extract their counterpart components from mixture signals. Statistical characterizations may include codebooks [1], Gaussian mixture densities [2], HMMs [3],
independent components [4, 5], sparse dictionaries [6], non-negative decompositions [7?9]
and latent variable models [10, 11]. All of these methods attempt to derive a generalizable
model that captures the salient characteristics of each source. Separation is achieved by
abstracting components from the mixed signal that conform to the statistical characterizations of the individual sources. The key here is the specific statistical model employed: the
more effectively it captures the specific characteristics of the signal sources, the better the
separation that may be achieved.
In this paper we argue that, given any sufficiently large collection of data from a source,
the best possible characterization of any data is, quite simply, the data themselves. This
has been the basis of several example-based characterizations of a data source, such as
nearest-neighbor, K-nearest neighbor, Parzen-window based models of source distributions
etc. Here, we use the same idea to develop a monaural source-separation algorithm that
directly uses samples from the training data to represent the sources in a mixture. Using
this approach we sidestep the need for a model training step, and we can rely on a very
flexible reconstruction process, especially as compared with previously used statistical models. Identifying the proper samples from the training data that best approximate a sample
of the mixture is of course a hard combinatorial problem, which can be computationally
demanding. We therefore formulate this as a sparse approximation problem and proceed
to solve it with an efficient algorithm. We additionally show that this approach results in
source estimates which are guaranteed to lie on the source manifold, as opposed to trained-basis approaches which can produce arbitrary outputs that will not necessarily be plausible
source estimates.
Experimental evaluations show that this approach results in separated signals that exhibit
significantly higher performance metrics as compared to conceptually similar techniques
which are based on various types of combinations of generalizable bases representing the
sources.
2
Proposed Method
In this section we cover the underlying statistical model we will use, introduce some of the
complications that one might encounter when using it and finally we propose an algorithm
that resolves these issues.
2.1
The Basic Model
Given a magnitude spectrogram of a single source, each spectral frame is modeled as a
histogram of repeated draws from a multinomial distribution over the frequency bins. At
a given time frame t, consider a random process characterized by the probability Pt (f ) of
drawing frequency f in a given draw. The distribution Pt (f ) is unknown but what one
can observe instead is the result of multiple draws from the process, that is the observed
spectral vector. The model assumes that Pt (f ) is comprised of bases indexed by a latent
variable z. The latent factors are represented by P (f |z). The probability of picking the
z-th distribution in the t-th time frame can be represented by Pt (z). We use this model to
learn the source-specific bases given by Pt (f |z) as done in [10, 11]. At this point this model
is conceptually very similar to the non-negative factorization models in [8, 9].
Now let the matrix $V_{F \times T}$ of entries $v_{ft}$ represent the magnitude spectrogram of the mixture
sound and vt represent time frame t (the t-th column vector of matrix V). Each mixture
spectral frame is again modeled as a histogram of repeated draws, from the multinomial
distributions corresponding to every source. The model for each mixture frame includes an
additional latent variable s representing each source, and is given by
$$P_t(f) = \sum_s P_t(s) \sum_{z \in \{z_s\}} P_s(f|z)\, P_t(z|s), \qquad (1)$$
where Pt (f ) is the probability of observing frequency f in time frame t in the mixture
spectrogram, Ps (f |z) is the probability of frequency f in the z-th learned basis vector from
source s, Pt (z|s) is the probability of observing the z-th basis vector of source s at time t,
{zs } represents the set of values the latent variable z can take for source s, and Pt (s) is the
probability of observing source s at time t.
We can assume that for each source in the mixture we have an already trained model in the
form of basis vectors Ps (f |z). These bases will represent a dictionary of spectra that best
describe each source. Armed with this knowledge we can decompose a new mixture of these
known sources in terms of the contributions of the dictionaries for each source. To do so we
can use the EM algorithm to estimate Pt (z|s) and Pt (s):
$$P_t(s, z|f) = \frac{P_t(s)\, P_t(z|s)\, P_s(f|z)}{\sum_s P_t(s) \sum_{z \in \{z_s\}} P_s(f|z)\, P_t(z|s)} \qquad (2)$$
$$P_t(z|s) = \frac{\sum_f v_{ft}\, P_t(s, z|f)}{\sum_{f,z} v_{ft}\, P_t(s, z|f)} \qquad (3)$$
$$P_t(s) = \frac{\sum_f v_{ft} \sum_{z \in \{z_s\}} P_t(s, z|f)}{\sum_s \sum_f v_{ft} \sum_{z \in \{z_s\}} P_t(s, z|f)} \qquad (4)$$
The reconstruction of the contribution of source s in the mixture can then be computed as
$$\hat{v}^{(s)}_{ft} = \frac{P_t(s) \sum_{z \in \{z_s\}} P_s(f|z)\, P_t(z|s)}{\sum_s P_t(s) \sum_{z \in \{z_s\}} P_s(f|z)\, P_t(z|s)}\, v_{ft}$$
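For concreteness, here is a NumPy sketch of this fixed-basis EM decomposition, folding the E-step of equation (2) into the M-steps (3) and (4); the variable names and the small flooring constants are our additions.

import numpy as np

def separate(V, bases, n_iters=50, eps=1e-12):
    # V: F x T mixture magnitude spectrogram; bases: list of F x Z_s
    # matrices P_s(f|z) with columns summing to one.
    F, T = V.shape
    S = len(bases)
    Pz = [np.random.rand(B.shape[1], T) for B in bases]   # P_t(z|s)
    Pz = [p / p.sum(axis=0) for p in Pz]
    Ps = np.full((S, T), 1.0 / S)                         # P_t(s)
    for _ in range(n_iters):
        mix = sum(Ps[s] * bases[s].dot(Pz[s]) for s in range(S))
        R = V / np.maximum(mix, eps)
        for s in range(S):
            w = Ps[s] * Pz[s] * bases[s].T.dot(R)         # posterior counts
            Ps[s] = w.sum(axis=0)
            Pz[s] = w / np.maximum(w.sum(axis=0), eps)
        Ps /= np.maximum(Ps.sum(axis=0), eps)
    mix = sum(Ps[s] * bases[s].dot(Pz[s]) for s in range(S))
    return [Ps[s] * bases[s].dot(Pz[s]) / np.maximum(mix, eps) * V
            for s in range(S)]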
[Figure 1 appears here; legend: Source A, Source B, Mixture, Convex Hull A, Convex Hull B, Simplex.]
Figure 1: Illustration of the basic model. The triangles denote the position of basis functions
for two source classes. The square is an instance of a mixture of the two sources. The mixture
point is not within the convex hull which covers either source, but it is within the convex
hull defined by all the bases combined.
These reconstructions will approximate the magnitude spectrogram of each source in the
mixture. Once we obtain these reconstructions we can use them to modulate the original
phase spectrogram of the mixture and obtain the time-series representation of the sources.
Let us now pursue a brief pictorial understanding of this algorithm, which will help us
introduce the concepts in the next section. Each basis vector and the mixture input will lie
in a F ? 1 dimensional simplex (due to the fact that these quantities are normalized to sum
to unity). Each source?s basis set will define a convex hull within which any point can be
approximated using these bases. Assuming that the training data is accurate, all potential
inputs from that source should lie in that area. The union of all the source bases will define
a larger space inside which any mixture input will lie. Any mixture point can then be
approximated as a weighted sum of multiple bases from both sources. For visualization of
these concepts for F = 3, see figure 1.
2.2
Using Training Data Directly as a Dictionary
In this paper, we would like to explain the mixture frame from the training spectral frames
instead of using a smaller set of learned bases. There are two rationales behind this decision.
The first is that the resulting large dictionary provides a better description of the sources,
as opposed to the less expressive learned-basis models. As we show later on, this holds even
for learned-basis models with dictionaries as large as the proposed method?s. The secondary
rationale behind this operation is based on the observation that the points defined by the
convex hull of a source's model do not necessarily all fall on that source's manifold. To
visualize this problem consider the plots in figure 2. In both of these plots the sources
exhibit a clear structure. In the left plot both sources appear in a circular pattern, and
in the right plot in a spiral form. As shown in [12], learning a set of bases that explains
these sources results in defining a convex hull that surrounds the training data. Under this
model, potential source estimates can now lie anywhere inside these hulls. Using trained-basis
estimates which do not lie in the same manifold as the original sources. Although the input
was adequately approximated, there is no guarantee that the extracted sources are indeed
appropriate outcomes for their sound class.
In order to address this problem and to also provide a richer dictionary for the source
reconstructions, we will make direct use of the training data in order to explain the mixture,
and bypass the basis representation as an abstraction. To do so we will use each frame of the
spectrograms of the training sequences as the bases $P_s(f|z)$. More specifically, let $W^{(s)}_{F \times T^{(s)}}$
be the training spectrogram from source $s$ and let $w^{(s)}_t$ represent time frame $t$ from the
spectrogram. In this case, the latent variable $z$ for source $s$ takes $T^{(s)}$ values, and the $z$-th
basis function will be given by the (normalized) $z$-th column vector of $W^{(s)}$.
[Figure 2 appears here; legend: Source A, Source B, Mixture, Convex Hull A, Convex Hull B, Estimate for A, Estimate for B, Approximation of mixture.]
Figure 2: Two examples where the separation process using trained bases provides poor
source estimates. In both plots the training data for the two sources are denoted by distinct
markers, and the mixture sample by a square. The learned bases of each source are the vertices of the two
dashed convex hulls that enclose each class. The source estimates and the approximation
of the mixture are denoted by their own markers. In the left case the two sources lie on two
overlapping circular areas; the source estimates, however, lie outside these areas. On the
right, the two sources form two intertwined spirals. The recovered sources lie very closely
on the competing source's area, thereby providing a highly inappropriate decomposition.
Although the mixture was well approximated in both cases, the estimated sources were
poor representations of their classes.
With the above model we would ideally want to use one dictionary element per source at
any point in time. Doing so will ensure that the outputs would lie on the source manifold,
and also offset any issues of potential overcompleteness. One way to ensure this is to
perform a reconstruction such that we only use one element of each source at any time,
much akin to a nearest-neighbor model, albeit in an additive setting. This kind of search
can be computationally very demanding so we instead treat this as a sparse approximation
problem. The intuition is that at any given point in time, the mixture frame is explained
by very few active elements from the training data. In other words, we need the mixture
weight distributions and the speaker priors to be sparse at every time instant.
We use the concept of the entropic prior introduced in [13] to enforce sparsity. Given a probability distribution $\theta$, the entropic prior is defined as
$$P_e(\theta) = e^{-H(\theta)} \qquad (5)$$
where $H(\theta) = -\sum_i \theta_i \log \theta_i$ is the entropy of the distribution. A sparse representation, by
definition, has few ?active? elements which means that the representation has low entropy.
Hence, imposing this prior during maximum a posteriori estimation is a way to minimize
entropy during estimation which will result in a sparse ? distribution. We would like to
minimize the entropies of both the speaker dependent mixture weight distributions (given
by Pt (z|s)) and the source priors (given by Pt (s)) at every frame. In other words, we want
to minimize H(z|s) and H(s) at every time frame. However, we know from information
theory that
H(z, s) = H(z|s) + H(s).
Thus, reducing the entropy of the joint distribution Pt (z, s) is equivalent to reducing the
conditional entropy of the source dependent mixture weights and the entropy of the source
priors.
Since the dictionary is already known and is given by the normalized spectral frames from
source training spectrograms, the parameter to be estimated is given by Pt (z, s). The model,
written in terms of this parameter, is given by
$$P_t(f) = \sum_s \sum_{z \in \{z_s\}} P_s(f|z)\, P_t(z, s).$$
where we have modified equation (1) by representing Pt (s)Pt (z|s) as Pt (z, s). We use the
Expectation-Maximization algorithm to derive the update equations. Let all parameters
to be estimated be represented by ?. We impose an entropic prior distribution on Pt (z, s)
[Figure 3 appears here; legend: Source A, Source B, Mixture, Estimate for A, Estimate for B, Approximation of mixture.]
Figure 3: Using a sparse reconstruction on the data in figure 2. Note how in contrast to that
figure the source estimates are now identified as training data points, and are thus plausible
solutions. The approximation of the mixture is the point on the line connecting the
two source estimates that is nearest to the actual mixture input. Note that the proper solution is the one
that results in such a line that is as close as possible to the mixture point, and not one that
is defined by two training points close to the mixture.
given by
$$\log P(\Theta) = \beta \sum_t \sum_s \sum_{z \in \{z_s\}} P_t(z, s) \log P_t(z, s),$$
where $\beta$ is a parameter indicating the extent of sparsity desired. The E-step is given by
$$P_t(z, s|f) = \frac{P_t(z, s)\, P_s(f|z)}{\sum_s \sum_{z \in \{z_s\}} P_t(z, s)\, P_s(f|z)}$$
and the M-step by
$$\frac{\omega_t}{P_t(z, s)} + \beta + \beta \log P_t(z, s) + \lambda_t = 0, \qquad (6)$$
where we have let $\omega_t$ represent $\sum_f v_{ft}\, P_t(s, z|f)$ and $\lambda_t$ is the Lagrange multiplier. The
above M-step equation is a system of simultaneous transcendental equations for $P_t(z, s)$.
Brand [13] proposes a method to solve such problems using the Lambert W function [14].
It can be shown that $P_t(z, s)$ can be estimated as
$$\hat{P}_t(z, s) = \frac{-\omega_t/\beta}{\mathcal{W}\!\big(-\omega_t\, e^{1 + \lambda_t/\beta}/\beta\big)}. \qquad (7)$$
Equations (6),(7) form a set of fixed point iterations that typically converge in 2-5 iterations
[13].
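A sketch of the update (7) using SciPy's Lambert W; the branch choice (principal branch) and the outer loop that re-estimates the multiplier to renormalize the distribution are our reading of Brand [13], not the paper's reference implementation.

import numpy as np
from scipy.special import lambertw

def entropic_map_update(omega, beta, lam, branch=0):
    # Equation (7): omega holds the posterior-weighted counts for one frame
    # (over all (z, s) pairs), beta the sparsity parameter, lam the Lagrange
    # multiplier. The branch choice is an assumption of this sketch.
    arg = -(omega / beta) * np.exp(1.0 + lam / beta)
    return np.real(-omega / beta / lambertw(arg, branch))

In practice lam is adjusted in an outer loop so that the returned values sum to one; the paper reports that 2-5 such passes suffice.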
Once $P_t(z, s)$ is estimated, the reconstruction of source $s$ can be computed as
$$\hat{v}^{(s)}_{ft} = \frac{\sum_{z \in \{z_s\}} P_s(f|z)\, P_t(z, s)}{\sum_s \sum_{z \in \{z_s\}} P_s(f|z)\, P_t(z, s)}\, v_{ft}$$
Now let us consider how this problem resolves the issues presented in figure 2. In figure 3
we show the results obtained using this approach on the same data. The sparsity parameter
$\beta$ was set to 0.1. In both plots we see that the source reconstructions lie on a training point,
thereby being a plausible source estimate. The approximation of the mixture is not as exact
as before, since now it has to lie on the line connecting the two active source elements.
This is not however an issue of concern since in practice the approximation is always good
enough, and the guarantee of a plausible source estimate is more valuable than the exact
approximation of the mixture.
Alternative means to strive towards similar results would be to make use of priors such as
in [15, 16]. In these approaches the priors are imposed on the mixture weights and thus
are not as effective for this particular task since they still suffer from the symptoms of
learned-basis models. This was verified through cursory simulations, which also revealed an
additional computational complexity penalty against such models.
[Figure 4 appears here: waveforms of the two input sources (amplitude over time frames) and the corresponding training-sample weights for each source (training sample index over time frames).]
Figure 4: An oracle case where we fit training data from two speakers, on the mixture
of that data. The top plots show the input waveforms, and the bottom plots shows the
estimated weights multiplied with the source priors. As expected the weights exhibit two
diagonal traces which imply that the algorithm we used has fit the data appropriately.
3
Experimental Results
In this section we present the results of experiments done with real speech data. All of these
experiments were performed on data from the TIMIT speech database, on 0 dB male/female
mixtures. The sources were sampled at 16 kHz, we used 64 ms windows for the spectrogram
computation, and an overlap of 32 ms. Before the FFT computation, the input was tapered
using a square-root Hann window. The training data was around 25 sec worth of speech for
each speaker, and the testing mixture was about 3 sec long. We evaluated the separation
performance using the metrics provided in [17]. These metrics include the Signal to Interference Ratio (SIR), the Signal to Distortion Ratio (SDR), and the Signal to Artifacts Ratio
(SAR). The first is a measure of how well we suppress the interfering speaker, whereas the
other two provide us with a sense of how much the extracted source is corrupted due to the
separation process. All of these are measured in dB and the higher they are the better the
performance is deemed to be.
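For reference, a minimal front-end matching this setup (64 ms frames, 32 ms hop, square-root Hann taper before the FFT); the helper name is ours.

import numpy as np

def magnitude_spectrogram(x, sr=16000, win_ms=64, hop_ms=32):
    # Returns an F x T magnitude array from a mono waveform x.
    n = int(sr * win_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    w = np.sqrt(np.hanning(n))
    frames = [x[i:i + n] * w for i in range(0, len(x) - n + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T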
In the following sections we first present some "oracle tests" that validate that indeed
this algorithm is performing as expected, and we then proceed to more realistic testing.
Finally, we show the performance impact of pruning the training data in order to speed up
computation time.
3.1
Oracle tests
In order to verify that this approach works we go through a few oracle experiments. In these
tests we include the actual solutions as training data and we make sure that the answers are
exactly what we would expect to find. The first experiment we perform is on a mixture for
which the training data includes its isolated constituent sentences. In this experiment we
would expect to see two dictionary components active at each point in time, one from each
speaker?s dictionary, and both of these progressing through the component index linearly
through time. As shown in figure 4, we observe exactly that behavior. This test provides a
sanity check which verifies that given an answer this algorithm can properly identify it.
A more comprehensive oracle test is shown in figure 5. In this experiment, the training
data were again the same as the testing data. We averaged the results from 10 runs using
different combinations of speakers, varying sparsity parameters and number of bases. The
sparsity parameter $\beta$ was checked for various values from 0 to 0.8, and we used trained-basis
models with 5, 10, 20, 40, 80, 160 and 320 bases, as well as the proposed scenario where
all the training data is used as a dictionary. The primary observation from this experiment
is that the more bases we use the better the results get. We also see that increasing the
sparsity parameter we see a modest improvement in most cases.
[Figure 5 appears here: three panels (SDR, SIR, SAR, in dB) as functions of the number of bases (5 to 320, plus 'Train') and the sparsity parameter beta (0 to 0.8).]
Figure 5: Average separation performance metrics for oracle cases, as dependent on the
choice of different number of elements in the speaker's dictionary, and different choices of
the entropic prior parameter $\beta$. The left plot shows the SDR, the middle plot the SIR, and
the right plot the SAR, all in dB. The basis row labeled as "Train" is the case where we use
all the training data as a basis set.
[Figure 6 appears here: same layout as Figure 5 (SDR, SIR, SAR, in dB, over the number of bases and the sparsity parameter beta), for the realistic case.]
Figure 6: Average separation performance metrics for real-world cases, as dependent on the
choice of different number of elements in the speaker's dictionary, and different choices of
the entropic prior parameter $\beta$. The left plot shows the SDR, the middle plot the SIR, and
the right plot the SAR, all in dB. Sparsely using all of the training data clearly outperforms
low-rank models by a significant margin on all metrics.
3.2
Results on Realistic Situations
Let us now consider the more realistic case where the mixture data is different from the
training set. In the following simulation we repeat the previous experiment, but in this case
there are no common elements between the training and testing data. The input mixture
has to be reconstructed using approximate samples. The results are now very different in
nature. We do not obtain such high numbers in performance as in the oracle case, but
we also see a stronger trend in favor of sparsity and the use of all the training data as a
dictionary. The results are shown in figure 6. We can clearly see that in all metrics using all
the training data significantly outperforms trained-basis models. More importantly, we see
that this is not because we have a larger dictionary. For trained-bases we see a performance
peak at around 80 bases, but then we observe a deterioration in performance as we use a
larger dictionary. Using the actual training data results in a significant boost though. Due
to the high dimensionality of the data the effect of sparsity is a little more subtle, but we still
see a helpful boost especially for the SIR which is the most important of the performance
measures. We see some decrease in the SAR, which is expected since the reconstructions are
made using elements that look like the remaining data, and are not made to approximate
the actual input mixture. This does not mean that the extracted sources are distorted and
of poor quality, but rather that they don?t match the original inputs exactly. The use of
sparsity ensures that the output is a plausible speech signal devoid of artifacts like distortion
and musical noise. The effects of sparsity alone in the proposed case are shown separately
in figure 7.
Figure 7: A slice of the results in figure 6 in which we only show the case where we use
all the training data as a dictionary. The horizontal axis represents various values of the
sparsity parameter α.
Figure 8: Effect of discarding low-energy training frames. The horizontal axis denotes the
percentage of training frames that have been discarded. These are averaged results using a
sparsity parameter α = 0.1.
The unfortunate side effect of the proposed method is that we need to use a dictionary
which can be substantially larger than otherwise. In order to address this concern, we show
that the size of the training data can easily be pruned down to a size comparable to trained-basis
models and still outperform them. Since sound signals, especially speech, tend to have
a considerable amount of short-term pauses and regions of silence, we can use an energy
threshold to select the loudest frames of the training spectrogram as bases. In
figure 8 we show how the separation performance metrics are influenced as we increasingly
remove bases which lie under various energy percentiles. It is clear that even after discarding
70% of the lowest-energy training frames, the performance remains approximately
the same. Beyond that we see some degradation, since we start discarding significant parts of
the training data. Regardless, this scheme outperforms trained-basis models of equivalent
size. For the 80th-percentile case, a trained-basis model with a dictionary of the same size
achieves roughly half the values on all performance metrics, a very significant handicap for
the same computational and memory requirements.
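To make the pruning step concrete, the following Python sketch keeps only the frames above
a given energy percentile of a magnitude spectrogram; the function and parameter names are
our own illustration, not code from the paper.

    import numpy as np

    def prune_training_frames(S, discard_percentile=70.0):
        # S: magnitude spectrogram (frequencies x frames) of the training data.
        # Returns the columns of S whose total energy exceeds the given
        # percentile, i.e. the loudest frames to keep as dictionary bases.
        energy = S.sum(axis=0)
        threshold = np.percentile(energy, discard_percentile)
        return S[:, energy > threshold]

    # e.g. discard the 70% lowest-energy frames, as in figure 8:
    # dictionary = prune_training_frames(train_spectrogram, 70.0)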
The experiments in this paper were all conducted in MATLAB on an average modern desktop
machine. Overall computation for a single mixture took roughly 4 sec when not using
the sparsity prior, 14 sec when using the sparsity prior (primarily due to the slow computation
of Lambert's function), and dropped down to 5 sec when using the 30% highest-energy
frames from the training data.
4 Conclusion
In this paper we present a new approach to solving the monophonic source separation
problem. The contribution of this paper lies primarily in the choice of using all the training
data as opposed to a trained-basis model. In order to do so, we present a sparse learning
algorithm which can efficiently solve this problem and also guarantees that the returned
source estimates are plausible given the training data. We provide experiments that show
how this approach is influenced by the use of varying sparsity constraints and training data
selection. Finally, we demonstrate how this approach can generate significantly superior
results compared to trained-basis methods.
2,944 | 3,669 | Large Scale Nonparametric Bayesian Inference:
Data Parallelisation in the Indian Buffet Process
Finale Doshi-Velez*
University of Cambridge
Cambridge, CB21PZ, UK
[email protected]
David Knowles*
University of Cambridge
Cambridge, CB21PZ, UK
[email protected]
Shakir Mohamed*
University of Cambridge
Cambridge, CB21PZ, UK
[email protected]
Zoubin Ghahramani
University of Cambridge
Cambridge, CB21PZ, UK
[email protected]
Abstract
Nonparametric Bayesian models provide a framework for flexible probabilistic
modelling of complex datasets. Unfortunately, the high-dimensional averages required for Bayesian methods can be slow, especially with the unbounded representations used by nonparametric models. We address the challenge of scaling
Bayesian inference to the increasingly large datasets found in real-world applications. We focus on parallelisation of inference in the Indian Buffet Process
(IBP), which allows data points to have an unbounded number of sparse latent
features. Our novel MCMC sampler divides a large data set between multiple
processors and uses message passing to compute the global likelihoods and posteriors. This algorithm, the first parallel inference scheme for IBP-based models,
scales to datasets orders of magnitude larger than have previously been possible.
1 Introduction
From information retrieval to recommender systems, from bioinformatics to financial market analysis, the amount of data available to researchers has exploded in recent years. While large, these
datasets are often still sparse: For example, a biologist may have expression levels from thousands
of genes from only a few people. A ratings database may contain millions of users and thousands
of movies, but each user may have only rated a few movies. In such settings, Bayesian methods
provide a robust approach to drawing inferences and making predictions from sparse information.
At the heart of Bayesian methods is the idea that all unknown quantities should be averaged over
when making predictions. Computing these high-dimensional averages is thus a key challenge in
scaling Bayesian inference to large datasets, especially for nonparametric models.
Advances in multicore and distributed computing provide one answer to this challenge: if each processor can consider only a small part of the data, then inference in these large datasets might become
more tractable. However, such data parallelisation of inference is nontrivial: while simple models
might only require pooling a small number of sufficient statistics [1], inference in more complex
models might require the frequent communication of complex, high-dimensional probability distributions between processors. Building on work on approximate asynchronous multicore inference
for topic models [2], we develop a message passing framework for data-parallel Bayesian inference
applicable to a variety of models, including matrix factorization and the Indian Buffet Process (IBP).
* Authors contributed equally.
Nonparametric models are attractive for large datasets because they automatically adapt to the complexity of the data, relieving the researcher from the need to specify aspects of the model such as the
number of latent factors. Much recent work in nonparametric Bayesian modelling has focused on
the Chinese restaurant process (CRP), which is a discrete distribution that can be used to assign data
points to an unbounded number of clusters. However, many real-world datasets have observations
that may belong to multiple clusters?for example, a gene may have multiple functions; an image
may contain multiple objects. The IBP [3] is a distribution over infinite sparse binary matrices that
allows data points to be represented by an unbounded number of sparse latent features or factors.
While the parallelisation method we present in this paper is applicable to a broad set of models, we
focus on inference for the IBP because of its unique challenges and potential.
Many serial procedures have been developed for inference in the IBP, including variants of Gibbs
sampling [3, 4], which may be augmented with Metropolis split-merge proposals [5], slice sampling [6], particle filtering [7], and variational inference [8]. With the exception of the accelerated
Gibbs sampler of [4], these methods have been applied to datasets with fewer than 1,000 observations.
To achieve efficient parallelisation, we exploit an idea recently introduced in [4], which maintains
a distribution over parameters while sampling. Coupled with a message passing scheme over processors, this idea enables computations for inference to be distributed over many processors with
few losses in accuracy. We demonstrate our approach on a problem with 100,000 observations. The
largest application of IBP inference to date, our work opens the use of the IBP and similar models
to a variety of data-intensive applications.
2 Latent Feature Model
The IBP can be used to define models in which each observation is associated with a set of latent
factors or features. A binary feature-assignment matrix Z represents which observations possess
which hidden features, where Znk = 1 if observation n has feature k and Znk = 0 otherwise.
For example, the observations might be images and the hidden features could be possible objects in
those images. Importantly, the IBP allows the set of such possible hidden features to be unbounded.
To generate a sample from the IBP, we first imagine that the rows of Z (the observations) are customers and the columns of Z (the features) are dishes in an infinite buffet. The first customer takes
the first Poisson(α) dishes. The following customers try previously sampled dishes with probability
m_k/n, where m_k is the number of people who tried dish k before customer n. Each customer also
takes Poisson(α/n) new dishes. The value Z_nk records if customer n tried dish k. This generative
process allows an unbounded set of features but guarantees that a finite dataset will contain a finite
number of features with probability one. The process is also exchangeable in that the order in which
customers visit the buffet has no impact on the distribution of Z. Finally, if the effect of possessing
a feature is independent of the feature index, the model is also exchangeable in the columns of Z.
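As an illustration of this generative process, here is a minimal Python sketch that draws a
feature-assignment matrix Z from the IBP; the names and structure are ours, and the code is
only a direct transcription of the culinary metaphor above.

    import numpy as np

    def sample_ibp(n_customers, alpha, seed=0):
        # Simulate the Indian Buffet Process: customer n tries dish k with
        # probability m_k / n and then takes Poisson(alpha / n) new dishes.
        rng = np.random.default_rng(seed)
        counts = []                       # m_k for every dish sampled so far
        rows = []
        for n in range(1, n_customers + 1):
            row = [rng.random() < m / n for m in counts]
            for k, taken in enumerate(row):
                counts[k] += taken
            new = rng.poisson(alpha / n)  # Poisson(alpha) for the first customer
            counts.extend([1] * new)
            row.extend([True] * new)
            rows.append(row)
        Z = np.zeros((n_customers, len(counts)), dtype=int)
        for n, row in enumerate(rows):
            Z[n, :len(row)] = row
        return Z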
We associate with the feature-assignment matrix Z a feature matrix A whose rows parameterise
the effect that possessing each feature has on the data. Given these matrices, we write the probability
of the data as P (X|Z, A). Our work requires that P (A|X, Z) can be computed or approximated
efficiently by an exponential family distribution. Specifically, we apply our techniques to both a
fully-conjugate linear-Gaussian model and non-conjugate Bernoulli model.
Linear Gaussian Model.
We model an N×D real-valued data matrix X as a product:

    X = ZA + ε,    (1)

where Z is the binary feature-assignment matrix and A is a K×D real-valued matrix with an
independent Gaussian prior N(0, σ_a^2) on each element (see cartoon in Figure 1(a)). Each element
of the N×D noise matrix ε is independent with a N(0, σ_n^2) distribution. Given Z and X, the
posterior on the features A is Gaussian, given by mean and covariance

    μ_A = (Z^T Z + (σ_n^2/σ_a^2) I)^{-1} Z^T X,    Σ_A = σ_n^2 (Z^T Z + (σ_n^2/σ_a^2) I)^{-1}.    (2)
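For concreteness, a minimal Python sketch of the posterior computation in Eq. (2); the
function and variable names are our own.

    import numpy as np

    def feature_posterior(Z, X, sigma_n, sigma_a):
        # Posterior mean (K x D) and covariance (K x K) of the features A
        # for the linear-Gaussian model, following Eq. (2).
        K = Z.shape[1]
        M_inv = np.linalg.inv(Z.T @ Z + (sigma_n**2 / sigma_a**2) * np.eye(K))
        mu_A = M_inv @ Z.T @ X
        Sigma_A = sigma_n**2 * M_inv
        return mu_A, Sigma_A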
Bernoulli Model.
We use a leaky, noisy-or likelihood for each element of an N×D matrix X:

    P(X_nd = 1 | Z, A) = 1 - ε λ^(Σ_k Z_nk A_kd).    (3)
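A small sketch of this likelihood, assuming the reconstruction of Eq. (3) above; the symbol
names eps (leak) and lam (noise) are our reading of the garbled original.

    import numpy as np

    def noisy_or_likelihood(X, Z, A, eps=0.95, lam=0.2):
        # Elementwise leaky noisy-OR: P(X_nd = 1) = 1 - eps * lam ** (Z A)_nd.
        p_on = 1.0 - eps * lam ** (Z @ A)
        return np.where(X == 1, p_on, 1.0 - p_on)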
Figure 1: Diagrammatic representation of the model structure and the message passing process.
(a) Representation of the linear-Gaussian model: the data X is generated from the product of the
feature-assignment matrix Z and the feature matrix A; in the Bernoulli model, the product ZA
adjusts the probability of X = 1. (b) Message passing process: processors send sufficient statistics
of the likelihood up to the root, which calculates and sends the (exact) posterior back to the
processors.
Each element of the A matrix is binary with independent Bernoulli(p_A) priors. The parameters ε
and λ determine how "leaky" and how "noisy" the or-function is, respectively. Typical hyperparameter
values are ε = 0.95 and λ = 0.2. The posterior P(A|X, Z) cannot be computed in closed
form; however, a mean-field variational posterior, in which we approximate P(A|X, Z) as a product
Π_{k,d} q_kd(A_kd) of independent Bernoulli variables, can be readily derived.
3 Parallel Inference
We describe both synchronous and asynchronous procedures for approximate, parallel inference in
the IBP that combine MCMC with message passing. We first partition the data among the processors,
using X^p to denote the subset of observations X assigned to processor p. We use Z^p to denote
the latent features associated with the data on processor p. In [4], the distribution P(A|X_{-n}, Z_{-n})
was used to derive an accelerated sampler for sampling Z_n, where n indexes the nth observation and
-n denotes the set of all observations except n. In our parallel inference approach, each processor p
maintains a distribution P^p(A|X_{-n}, Z_{-n}), a local approximation to P(A|X_{-n}, Z_{-n}). The
distributions P^p are updated via message passing between the processors.
The inference alternates between three steps:
• Message passing: processors communicate to compute the exact P(A|X, Z).
• Gibbs sampling: processors sample a new set of Z^p's in parallel.
• Hyperparameter sampling: a root processor resamples global hyperparameters.
The sampler is approximate because, during Gibbs sampling, all processors resample elements of Z
at the same time; their posteriors P^p(A|X, Z) are no longer the true P(A|X, Z).
Message Passing    We use Bayes' rule to factorise the posterior over features P(A|Z, X):

    P(A|Z, X) ∝ P(A) Π_p P(X^p | Z^p, A).    (4)
If the prior P(A) and the likelihoods P(X^p|Z^p, A) are conjugate exponential family models, then
the sufficient statistics of P(A|Z, X) are the sum of the sufficient statistics of each term on the right
side of equation (4). For example, the sufficient statistics in the linear-Gaussian model are means
and covariances; in the Bernoulli model, they are counts of how often each element A_kd equals one.
The linear-Gaussian messages have size O(K^2 + KD), and the Bernoulli messages O(KD), where
K is the number of features. For nonparametric models such as the IBP, the number of features K
grows as O(log N). This slow growth means that messages remain small, even for large datasets.
The most straightforward way to compute the full posterior is to arrange processors in a tree architecture, as belief propagation is then exact. The message s from processor p to processor q is:
    s_{p→q} = l_p + Σ_{r ∈ N(p)\q} s_{r→p}

where N(p)\q are the processors attached to p besides q and l_p are the sufficient statistics from
processor p. A dummy neighbour containing the statistics of the prior is connected to (an arbitrarily
designated) root processor. Also passed are the feature counts m_k^p = Σ_{n ∈ X^p} Z_nk^p, the
popularity of feature k within processor p. (See figure 1(b) for a cartoon.)
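A sketch of the upward pass implied by this message equation, assuming sufficient statistics that
add componentwise and a tree stored as an adjacency map; all names are ours.

    def message(p, q, local_stats, neighbours):
        # s_{p->q}: processor p's own likelihood statistics l_p plus the
        # messages from all of p's neighbours other than q (exact on a tree).
        s = dict(local_stats[p])
        for r in neighbours[p]:
            if r != q:
                for key, value in message(r, p, local_stats, neighbours).items():
                    s[key] = s.get(key, 0) + value
        return s

    # At the root, summing all incoming messages with the prior's statistics
    # yields the sufficient statistics of the exact posterior P(A | X, Z).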
Gibbs Sampling
In general, Z_nk can be Gibbs-sampled using Bayes' rule:

    P(Z_nk | Z_{-nk}, X) ∝ P(Z_nk | Z_{-nk}) P(X|Z).

The probability P(Z_nk | Z_{-nk}) depends on the size of the dataset N and the number of
observations m_k using feature k. At the beginning of the Gibbs sampling stage, each processor has
the correct values of m_k. We compute m_k^{-p} = m_k - m_k^p, and, as the processor's internal
feature counts m_k^p are updated, approximate m_k ≈ m_k^{-p} + m_k^p. This approximation
assumes m_k^{-p} stays fixed during the current stage (a good approximation for popular features).
The collapsed likelihood P(X|Z), integrating out the feature values A, is given by:

    P(X|Z) ∝ ∫_A P(X_n | Z_n, A) P(A | Z_{-n}, X_{-n}) dA,

where the partial posterior P(A | Z_{-n}, X_{-n}) ∝ P(A|Z, X) / P(X_n | Z_n, A). In conjugate
models, P(A | Z_{-n}, X_{-n}) can be efficiently computed by subtracting observation n's contribution
to the sufficient statistics.¹ For non-conjugate models, we can use an exponential family distribution
Q(A) to approximate P(A|X, Z) during message passing. A draw A ~ Q^{-p}(A) is then used to
initialise an uncollapsed Gibbs sampler. The outputted samples of A are used to compute sufficient
statistics for the likelihood P(X|Z). In both cases, new features are added as described in [3].
Hyperparameter Resampling    The IBP concentration parameter α and hyperparameters of the
likelihood can also be sampled during inference. Resampling α depends only on the total number of
active features; thus it can easily be resampled at the root and propagated to the other processors. In
the linear-Gaussian model, the posteriors on the noise and feature variances (starting from gamma
priors) depend on various squared-errors, which can also be computed in a distributed fashion.
For more general, non-conjugate models, resampling the hyperparameters requires two steps. In
the first step, a hyperparameter value is proposed by the root and propagated to the processors.
The processors each compute the likelihood of the current and proposed hyperparameter values and
propagate this value back to root. The root evaluates a Metropolis step for the hyperparameters
and propagates the decision back to the leaves. The two-step approach introduces a latency in the
resampling but does not require any additional message passing rounds.
Asynchronous Operation So far we have discussed message passing, Gibbs sampling, and hyperparameter resampling as if they occur in separate phases. In practice, these phases may occur
asynchronously: between its Gibbs sweeps, each processor updates its feature posterior based on
the most current messages it has received and sends likelihood messages to its parent. Likewise,
the root continuously resamples hyperparameters and propagates the values down through the tree.
While another layer of approximation, this asynchronous form of message passing allows faster processors to share information and perform more inference on their data instead of waiting for slower
processors.
Implementation Note When performing parallel inference in the IBP, a few factors need to be
considered with care. Other parallel inference schemes for nonparametric models, such as the HDP [2],
simply matched features by their index, that is, they assumed that the ith feature on processor p was also
the ith feature on processor q. In the IBP, we find that this indiscriminate feature merging is often
disastrous when adding or deleting features: if none of the observations in a particular processor are
using a feature, we cannot simply delete that column of Z and shift the other features over, as doing
so destroys the alignment of features across processors.
¹ In the IBP, only the linear-Gaussian model exhibits this conjugate structure. However, many other matrix
factorization models (such as PCA) often have this conjugate form.
4 Comparison to Exact Metropolis
Because all Z^p's are sampled at once, the posteriors P^p(A|X, Z) used by each processor in section 3
are no longer exact. Below we show how Metropolis-Hastings (MH) steps can make the parallel
sampler exact, but introduce significant computational overheads both in computing the transition
probabilities and in the message passing. We argue that trying to do exact inference is a poor
use of computational resources (especially as any finite chain will not be exact); empirically, the
approximate sampler behaves similarly to the MH sampler while finding higher likelihood regions
in the data.
Exact Parallel Metropolis Sampler. Ideally, we would simply add an MH accept/reject step after
each stage of the approximate inference to make the sampler exact. Unfortunately, the approximate
sampler makes several non-independent random choices in each stage of the inference, making the
reverse proposal inconvenient to compute. We circumvent this issue by fixing the random seed, making the initial stage of the approximate sampler a deterministic function, and then add independent
random noise to create a proposal distribution. This approach makes both the forward and reverse
transition probabilities simple to compute.
Formally, let Ẑ^p be the matrix output after a set of Gibbs sweeps on Z^p. We use all the Ẑ^p's to
propose a new Z* matrix. The acceptance probability of the proposal is

    min(1, [P(X|Z*) P(Z*) Q(Z* → Z)] / [P(X|Z) P(Z) Q(Z → Z*)]),    (5)
where the likelihood terms P(X|Z) and P(Z) are readily computed in a distributed fashion. For
the transition distribution Q, we note that if we set the random seed r, then the matrix Ẑ^p from the
Gibbs sweeps in the processor is some deterministic function of the input matrix Z^p. The proposal
Z^{p*} is a (stochastic) noisy representation of Ẑ^p in which, for example,

    P(Z^{p*}_nk = 1) = 0.99 if Ẑ^p_nk = 1,    P(Z^{p*}_nk = 1) = 0.01 if Ẑ^p_nk = 0,    (6)

where K should be at least the number of features in Ẑ^p. We set Z^{p*}_nk = 0 for k > K. (See
cartoon in figure 2.)
To compute the backward probability, we take Z^{p*} and apply the same number of Gibbs sampling
sweeps with the same random seed r. The resulting Ẑ^{p*} is a deterministic function of Z^{p*}.
The backward probability Q(Z^{p*} → Z^p) is the probability of going from Z^{p*} to Z^p using (6).
While the transition probabilities can be computed in a distributed, asynchronous fashion, all of the
processors must synchronise when deciding whether to accept the proposal.
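The proposal mechanism can be sketched in a few lines of Python; gibbs_sweeps stands for the
(seed-deterministic) Gibbs map, and all names and signatures are our assumptions rather than the
authors' code.

    import numpy as np

    def propose(Zp, gibbs_sweeps, seed, noise_rng, p_hi=0.99, p_lo=0.01):
        # Fixed-seed deterministic Gibbs map, then independent binary noise
        # as in Eq. (6); returns the proposal and its log forward density.
        Z_hat = gibbs_sweeps(Zp, np.random.default_rng(seed))
        p_one = np.where(Z_hat == 1, p_hi, p_lo)
        Z_star = (noise_rng.random(Z_hat.shape) < p_one).astype(int)
        log_q = np.sum(np.where(Z_star == 1, np.log(p_one), np.log(1.0 - p_one)))
        return Z_star, log_q

    # The reverse density is computed the same way: run gibbs_sweeps on the
    # proposal with the same seed and evaluate the noise terms back to Zp.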
Experimental Comparison To compare the exact Metropolis and approximate inference techniques, we ran each inference type on 1000 block images of [3] on 5 simulated processors. Each
test was repeated 25 times. For each of the 25 tests, we created a held-out dataset by setting elements
of the last 100 images as missing values. For the first 50 test images, we set all even-numbered
dimensions as the missing elements, and every odd-numbered dimension as the missing values for
the last 50 images. Each sampler was run for 10,000 iterations with 5 Gibbs sweeps per iteration;
statistics were collected from the second half of the chain. To keep the probability of an acceptance
reasonable, we allowed each processor to change only small parts of its Z^p: the feature assignments
Z_n for 1, 5, or 10 data points during each sweep.
In table 1, we see that the approximate sampler runs about five times faster than the exact samplers
while achieving comparable (or better) predictive likelihoods and reconstruction errors on held-out
data. Both the acceptance rates and the predictive likelihoods fall as the exact sampler tries
to take larger steps, suggesting that the difference between the approximate and exact samplers'
performance on predictive likelihood is due to poor mixing by the exact sampler. Figure 3 shows
empirical CDFs for the number of features k, the IBP concentration parameter α, the noise variance
σ_n^2, and the feature variance σ_a^2. The approximate sampler (black) produces similar CDFs to the various
exact Metropolis samplers (gray) for the variances; the concentration parameter is smaller, but the
feature counts are similar to the single-processor case.
Figure 2: Cartoon of the MH proposal: Gibbs sweeps with a fixed seed map Z^p deterministically
to Ẑ^p, and independent random noise then gives the proposal Z^{p*}.

Table 1: Evaluation of exact and approximate methods.

    Method         Time (s)   Test L2 Error   Test Log Likelihood   MH Accept Proportion
    MH, n = 1      717        0.0468          0.1098                0.1106
    MH, n = 5      1075       0.0488          0.0893                0.0121
    MH, n = 10     1486       0.0555          0.0196                0.0062
    Approximate    179        0.0487          0.1292                -
Figure 3: Empirical CDFs of (a) the active feature count k, (b) the IBP concentration α, (c) the
noise variance σ_n^2, and (d) the feature variance σ_a^2. The solid black line is the approximate
sampler; the three solid gray lines are the MH samplers with n equal to 1, 5, and 10 (lighter shades
indicate larger n). The approximate sampler and the MH samplers for smaller n have similar CDFs;
the n = 10 MH sampler's differing CDF indicates it did not mix in 7500 iterations (reasonable since
its acceptance rate was 0.0062).
5 Analysis of Mixing Properties
We ran a series of experiments on 10,000 36-dimensional block images of [3] to study the effects
of various sampler configurations on running time, performance, and mixing time properties of
the sampler. 5000 elements of the data matrix were held-out as test data. Figure 4 shows test
log-likelihoods using 1, 7, 31 and 127 parallel processors simulated in software, using 1000 outer
iterations with 5 Gibbs inner iterations each. The parallel samplers have similar test likelihoods as
the serial algorithm with significant savings in running time. The characteristic shape of the test
likelihood, similar across all testing regimes, indicates how the features are learned. Initially, a large
number of features are added, which provides improvements in the test likelihood. A refinement
phase, in which excess features are pruned, provides further improvements.
Figure 4 shows hairiness-index plots for each of the test cases after thinning and burn-in. The
hairiness index, based on the CUSUM method for monitoring MCMC convergence [9, 10], monitors
how often the derivatives of sampler statistics (in our case, the number of features, the test likelihood,
and α) change in sign; infrequent changes in sign indicate that the sampler may not be mixed.
The outer bounds on the plots are the 95% confidence bounds. The index stays within the bounds,
suggesting that the chains are mixing.
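One plausible implementation of such a hairiness index is sketched below (our reading of the
CUSUM-based diagnostic of [9, 10]; the windowing and the binomial 95% bounds are assumptions):

    import numpy as np

    def hairiness_index(trace, window=20):
        # Running fraction of sign changes in the successive differences of a
        # sampler statistic; values near 0.5 within the binomial 95% bounds
        # are consistent with a well-mixed chain.
        d = np.diff(np.asarray(trace, dtype=float))
        changes = (np.sign(d[1:]) != np.sign(d[:-1])).astype(float)
        index = np.convolve(changes, np.ones(window) / window, mode="valid")
        half_width = 1.96 * np.sqrt(0.25 / window)
        return index, 0.5 - half_width, 0.5 + half_width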
Finally, we considered the trade-off between mixing and running time as the number of outer
iterations and inner Gibbs iterations are varied. Each combination of inner and outer iterations was
set so that the total number of Gibbs sweeps through the data was 5000.
Figure 4: Change in test log-likelihood for 1, 7, 31 and 127 processors over the simulation time
(inner = 5, outer = 1000 iterations). The corresponding hairiness index plots are shown on the left.
Figure 5: Effects of changing the number of inner iterations on (a) the effective number of samples
per outer iteration and (b) the total running time (Gibbs and message passing), shown for 1, 7, 31
and 127 processors and for inner/outer splits ranging from (i = 1, o = 5000) to (i = 50, o = 100).
Table 2: Test log-likelihoods on real-world datasets for the serial, synchronous and asynchronous
inference types.

    Dataset          N        D      Description                                      Serial p=1   Synch p=16   Async p=16
    AR Faces [11]    2600     1598   faces with lighting, accessories (real-valued)   -4.74        -4.77        -4.84
    Piano [12]       57931    161    STDFT of a piano recording (real-valued)         -1.435       -1.182       -1.228
    Flickr [13]      100000   1000   indicators of image tags (binary-valued)         -            -            -0.0584
Mixing efficiency was measured via the effective number of samples per sample [10], which
evaluates what fraction of the samples are independent (ideally, we would want all samples to be
independent, but MCMC produces dependent chains). Running time for Gibbs sampling was taken
to be the time required by the slowest processor (since all processors must synchronize before
message passing); the total time
reflected the Gibbs time and the message-passing time. As seen in figure 5, completing fewer inner
Gibbs iterations per outer iteration results in faster mixing, which is sensible as the processors are
communicating about their data more often. However, having fewer inner iterations requires more
frequent message passing; as the number of processors becomes large, the cost of message passing
becomes a limiting factor.²
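The effective-sample measure can be sketched as follows (a standard autocorrelation-based
estimator; the truncation rule is our assumption, not necessarily the exact estimator of [10]):

    import numpy as np

    def effective_samples_per_sample(trace):
        # 1 / (1 + 2 * sum of positive-lag autocorrelations): the fraction of
        # nominally independent samples in a correlated MCMC trace.
        x = np.asarray(trace, dtype=float)
        x = x - x.mean()
        acf = np.correlate(x, x, mode="full")[len(x) - 1:]
        acf = acf / acf[0]
        tau = 1.0
        for rho in acf[1:]:
            if rho < 0.05:      # stop once correlation is negligible
                break
            tau += 2.0 * rho
        return 1.0 / tau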
6 Real-world Experiments
We tested our parallel scheme on three real-world datasets on a 16-node cluster using the Matlab
Distributed Computing Engine, using 3 inner Gibbs iterations per outer iteration. The first dataset
was a set of 2,600 frontal face images with 1,598 dimensions [11]. While not extremely large,
the high-dimensionality of the dataset makes it challenging for other inference approaches. The
piano dataset [12] consisted of 57,931 samples from a 161-dimensional short-time discrete Fourier
transform of a piano piece. Finally, the binary-valued Flickr dataset [13] indicated whether each
of 1000 popular keywords occurred in the tags of 100,000 images from Flickr. Performance was
measured using test likelihoods and running time. Test likelihoods look only at held-out data and
thus they allow us to "honestly" evaluate the model's fit. Table 2 summarises the data and shows that
all approaches had similar test-likelihood performance.
In the faces and music datasets, the Gibbs time per iteration improved almost linearly as the number
of processors increased (figure 6). For example, we observed a 14x-speedup for p = 16 in the music
dataset. Meanwhile, the message passing time remained small even with 16 processors: 7% of the
Gibbs time for the faces data and 0.1% of the Gibbs time for the music data. However, waiting for
synchronisation became a significant factor in the synchronous sampler. Figure 6(c) compares the
times for running inference serially, synchronously and asynchronously with 16 processors.
² We believe part of the timing results may be an artifact, as the simulation overestimates the message
passing time. In the actual parallel system (section 6), the cost of message passing was negligible.
Figure 6: Bar charts comparing sampling time and waiting times for synchronous parallel inference.
(a) Timing analysis for the faces dataset and (b) for the music dataset: mean time per outer iteration,
split into sampling and waiting, versus the number of processors. (c) Timing comparison for the
serial P=1, synchronous P=16 and asynchronous P=16 approaches: log joint probability versus
time in seconds.
The asynchronous inference is 1.64 times faster than the synchronous case, reducing the
computational time from 11.8s per iteration to 7.2s.
7 Discussion and Conclusion
As datasets grow, parallelisation is an increasingly attractive and important feature for doing inference. Not only does it allow multiple processors and multicore technologies to be leveraged for large-scale analyses, but it also reduces the amount of data and associated structures that each processor
needs to keep in memory. Existing work has focused both on general techniques to efficiently split
variables across processors in undirected graphical models [14] and factor graphs [15] and specific
models such as LDA [16, 17]. Our work falls in between: we leverage properties of a specific kind
of parallelisation (data parallelisation) for a fairly broad class of models.
Specifically, we describe a parallel inference procedure that allows nonparametric Bayesian models
based on the Indian Buffet Process to be applied to large datasets. The IBP poses specific challenges
to data parallelisation in that the dimensionality of the representation changes during inference and
may be unbounded. Our contribution is an algorithm for data-parallelisation that leverages a compact representation of the feature posterior that approximately decorrelates the data stored on each
processor, thus limiting the communication bandwidth between processors. While we focused on
the IBP, the ideas presented here are applicable to more general problems in unsupervised learning,
including bilinear models such as PCA, NMF, and ICA.
Our sampler is approximate, and we show that in conjugate models it behaves similarly to an exact
sampler, but with much less computational overhead. However, as seen in the Bernoulli case,
variational message passing for non-conjugate data doesn't always produce good results if the
approximating distribution is a poor match for the true feature posterior. Determining when variational
message passing is successful is an interesting question for future work. Other interesting directions
include approaches for dynamically optimising the network topology (for example, slower processors could be moved lower in the tree). Finally, we note that a middle ground between synchronous
and asynchronous operations as we presented them might be a system that gives each processor a
certain amount of time, instead of a certain number of iterations, to do Gibbs sweeps. Further study
along these avenues should lead to even more efficient data-parallel Bayesian inference techniques.
References
[1] C. Chu, S. Kim, Y. Lin, Y. Yu, G. Bradski, A. Ng, and K. Olukotun, "Map-reduce for machine
learning on multicore," in Advances in Neural Information Processing Systems, p. 281, MIT
Press, 2007.
[2] A. Asuncion, P. Smyth, and M. Welling, "Asynchronous distributed learning of topic models,"
in Advances in Neural Information Processing Systems 21, 2008.
[3] T. Griffiths and Z. Ghahramani, "Infinite latent feature models and the Indian buffet process,"
in Advances in Neural Information Processing Systems, vol. 16, NIPS, 2006.
[4] F. Doshi-Velez and Z. Ghahramani, "Accelerated inference for the Indian buffet process," in
International Conference on Machine Learning, 2009.
[5] E. Meeds, Z. Ghahramani, R. Neal, and S. Roweis, "Modeling dyadic data with binary latent
factors," in Advances in Neural Information Processing Systems, vol. 19, pp. 977-984, 2007.
[6] Y. W. Teh, D. Görür, and Z. Ghahramani, "Stick-breaking construction for the Indian buffet
process," in Proceedings of the Intl. Conf. on Artificial Intelligence and Statistics, vol. 11,
pp. 556-563, 2007.
[7] F. Wood and T. L. Griffiths, "Particle filtering for nonparametric Bayesian matrix factorization," in Advances in Neural Information Processing Systems, vol. 19, pp. 1513-1520, 2007.
[8] F. Doshi-Velez, K. T. Miller, J. Van Gael, and Y. W. Teh, "Variational inference for the Indian buffet process," in Proceedings of the Intl. Conf. on Artificial Intelligence and Statistics,
vol. 12, pp. 137-144, 2009.
[9] S. P. Brooks and G. O. Roberts, "Convergence assessment techniques for Markov Chain Monte
Carlo," Statistics and Computing, vol. 8, pp. 319-335, 1998.
[10] C. R. Robert and G. Casella, Monte Carlo Statistical Methods. Springer, second ed., 2004.
[11] A. M. Martínez and A. C. Kak, "PCA versus LDA," IEEE Trans. Pattern Anal. Mach. Intelligence, vol. 23, pp. 228-233, 2001.
[12] G. E. Poliner and D. P. W. Ellis, "A discriminative model for polyphonic piano transcription,"
EURASIP J. Appl. Signal Process., vol. 2007, no. 1, pp. 154-154, 2007.
[13] T. Kollar and N. Roy, "Utilizing object-object and object-scene context when planning to find
things," in International Conference on Robotics and Automation, 2009.
[14] J. Gonzalez, Y. Low, and C. Guestrin, "Residual splash for optimally parallelizing belief propagation," in Proceedings of the Twelfth International Conference on Artificial Intelligence and
Statistics (D. van Dyk and M. Welling, eds.), vol. 5, pp. 177-184, JMLR, 2009.
[15] D. Stern, R. Herbrich, and T. Graepel, "Matchbox: Large scale online Bayesian recommendations," in 18th International World Wide Web Conference (WWW2009), April 2009.
[16] R. Nallapati, W. Cohen, and J. Lafferty, "Parallelized variational EM for Latent Dirichlet Allocation: An experimental evaluation of speed and scalability," in ICDMW '07: Proceedings
of the Seventh IEEE International Conference on Data Mining Workshops, (Washington, DC,
USA), pp. 349-354, IEEE Computer Society, 2007.
[17] D. Newman, A. Asuncion, P. Smyth, and M. Welling, "Distributed inference for Latent Dirichlet Allocation," in Advances in Neural Information Processing Systems 20 (J. Platt, D. Koller,
Y. Singer, and S. Roweis, eds.), pp. 1081-1088, Cambridge, MA: MIT Press, 2008.
2,945 | 367 | Relaxation Networks for Large Supervised Learning Problems
Joshua Alspector Robert B. Allen Anthony Jayakumar
Torsten Zeppenfeld and Ronny Meir
Bellcore
Morristown, NJ 07962-1910
Abstract
Feedback connections are required so that the teacher signal on the output
neurons can modify weights during supervised learning. Relaxation methods
are needed for learning static patterns with full-time feedback connections.
Feedback network learning techniques have not achieved wide popularity
because of the still greater computational efficiency of back-propagation. We
show by simulation that relaxation networks of the kind we are implementing in
VLSI are capable of learning large problems just like back-propagation
networks. A microchip incorporates deterministic mean-field theory learning as
well as stochastic Boltzmann learning. A multiple-chip electronic system
implementing these networks will make high-speed parallel learning in them
feasible in the future.
1. INTRODUCTION
For supervised learning in neural networks, feedback connections are required so that the
teacher signal on the output neurons can affect the learning in the network interior. Even
though back-propagation [1] networks are feedforward in processing, they have implicit
feedback paths during learning for error propagation. Networks with explicit, full-time
feedback paths can perform pattern completion [2] and can have interesting temporal and
dynamical properties in contrast to the single forward pass processing of multilayer
perceptrons trained with back-propagation or other means. Because of the potential for
complex dynamics, feedback networks require a reliable method of relaxation for
learning and retrieval of static patterns. The Boltzmann machine[3] uses stochastic
settling while the mean-field theory (MFT) version[4] [5] uses a more computationally
efficient deterministic technique.
Neither of these feedback network learning techniques has achieved wide popularity
because of the greater computational efficiency of back-propagation. However, this is
likely to change in the near future because the feedback networks will be implemented in
VLSI [6], making them available for learning experiments on high-speed parallel hardware.
In this paper, we therefore raise the following questions: whether these types of learning
networks have the same representational and learning power as the more thoroughly
studied back-propagation methods, how learning in such networks scales with problem
size, and whether they can solve usefully large problems. Such questions are difficult to
answer with computer simulations because of the large amount of computer time
required compared to back-propagation, but, as we show, the indications are promising.
2. SIMULATIONS
2.1 Procedure
In this section, we compare back-propagation, Boltzmann machine, and MFT networks
on a variety of test problems. The back-propagation technique performs gradient descent
in weight space by differentiation of an objective function, usually the error,

    E = Σ_{outputs k} (s_k^+ - s_k^-)^2

where s_k^+ is the target output and s_k^- is the actual output. We choose to use the function

    G = Σ_{outputs k} [ s_k^+ log(s_k^+ / s_k^-) + (1 - s_k^+) log((1 - s_k^+) / (1 - s_k^-)) ]    (1)
for a more direct comparison to the Boltzmann machine [7], which has

    G = Σ_{global states g} p_g^+ log(p_g^+ / p_g^-)    (2)

where p_g is the probability of a global state.
Individual neurons in the Boltzmann machine have a probabilistic decision rule such that
neuron k is in state s_k = 1 with probability

    p_k = 1 / (1 + e^{-net_k / T})    (3)

where net_i = Σ_j w_ij s_j is the net input to each neuron and T is a parameter that acts like
temperature in a physical system and is represented by the noise term in Eq. (4), which
follows. In the relaxation models, each neuron performs the activation computation
Si
= f (gain* (neti +noisei ?
(4)
where f is a monotonic non-linear function such as tanh. In simulations of the
Boltzmann machine, this is a step function corresponding to a high value of gain. The
noise is chosen from a zero mean gaussian distribution whose width is proportional to the
temperature. This closely approximates the distribution in Eq. (3) and matches our
hardware implementation, which supplies uncorrelated noise to each neuron. The noise
is slowly reduced as annealing proceeds. For MFT learning, the noise is zero but the
gain term has a finite value proportional to 1/T taken from the annealing schedule. Thus
the non-linearity sharpens as 'annealing' proceeds.
The network is annealed in two phases, + and -, corresponding to clamping the outputs
in the desired state and allowing them to run free at each pattern presentation. The
learning rule which adjusts the weights Wij from neuron j to neuron i is
Δw_{ij} = sgn[ (s_i s_j)^+ − (s_i s_j)^- ]   (5)
Note that this measures the instantaneous correlations after annealing. For both phases
each synapse memorizes the correlations measured at the end of the annealing cycle and
weight adjustment is then made, (i.e., online). The sgn matches our hardware
implementation which changes weights by one each time.
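As a rough illustration of Eqs. (3)-(5), the Python sketch below (ours, not the paper's code; the gain value and annealing schedule are placeholders) relaxes the free neurons under a decaying noise schedule and then applies the sign-based weight change:

    import numpy as np

    def relax(W, s, clamp_mask, T_schedule, gain=10.0, rng=None):
        # Noisy mean-field relaxation, Eq. (4): s_i = f(gain * (net_i + noise_i)).
        rng = np.random.default_rng(0) if rng is None else rng
        for T in T_schedule:                    # annealing: noise width ~ temperature
            for i in np.where(~clamp_mask)[0]:  # only unclamped neurons update
                net = W[i] @ s
                s[i] = np.tanh(gain * (net + rng.normal(0.0, T)))
        return s

    def learn_step(W, s_plus, s_minus):
        # Eq. (5): change each weight by one count in the direction of the
        # correlation difference between the clamped (+) and free (-) phases.
        return W + np.sign(np.outer(s_plus, s_plus) - np.outer(s_minus, s_minus))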
2.2 Scaling
To study learning time as a function of problem size, we chose as benchmarks the parity
and replication (identity) problems. The parity problem is the generalization of
exclusive-OR for arbitrary input size, n. It is difficult because the classification regions
are disjoint with every change of input bit, but it has only one output. The goal of the
replication problem is for the output to duplicate the bit pattern found on the input after
being transformed by the hidden layer. There are as many output neurons as input. For
the replication problem, we chose the hidden layer to have the same number of neurons
as the input layer, while for parity we chose the hidden layer to have twice the number as
the input layer.
For back-propagation simulations, we used a learning rate of 0.3 and zero momentum.
For MFT simulations, we started at a high temperature of T_hi = K(1.4)^10 √(fanin), where
K = 1-10. We annealed in 20 steps, dividing the temperature by 1.4 each time. The
fanin parameter is the number of inputs from other neurons to a neuron in the hidden
layer. We did 3 neuron update cycles at each temperature. For Boltzmann, we increased
this to 11 updates because of the longer equilibration time. We used high gain rather
than strictly binary units because of the possibility that the binary Boltzmann units would
have exactly zero net input making annealing fruitless.
[Figure omitted: two log-log panels, "Parity Comparison" (1a) and "Replication Comparison" (1b), plotting training cycles (roughly 10^2 to 10^5) against input bits for the BP, BZ, and MFT networks.]
Figure 1. Scaling of Parity (1 a) and Replication (1 b) Problem with Input Size
Fig. la plots the results of an average of 10 runs and shows that the number of patterns
required to learn to 90% correct for parity scales as an exponential in n for all three
networks. This is not surprising since the training set size is exponential and no
constraints were imposed to help the network generalize from a small amount of data.
An activation range of -1 to 1 was used on both this problem and the replication problem.
There is no appreciable difference in learning as a function of patterns presented. Actual
computer time is larger by an additional factor of n 2 to account for the increase in the
number of connections. Direct parallel implementation will reduce this additional factor
to less than n. Computer time for MFT learning was an additional factor of 10 slower
than back-propagation and stochastic Boltzmann learning was yet another factor of 10
slower. The hardware implementation will make these techniques roughly equal in speed
and far faster than any simulation of back-propagation. Fig. 1b shows analogous results
for the replication problem.
2.3 NETtalk
As an example of a large problem, we chose the NETtalk[8] corpus with 20,000 words.
Fig. 2 shows the learning curves for back-propagation, Boltzmann, and MFT learning.
An activation range of 0 to 1 gave the best results on this problem, possibly due to the
sparse coding of text and phonemes. We can see that back-propagation does better on
this problem which we believe may be due to the ambiguity in mapping letters to
multiple phonemic outputs.
[Figure omitted: learning curves of fraction correct (0 to about 0.8) against training cycles (up to 1.0 x 10^5) for BP, BZ, and MFT(inc).]
Figure 2. Learning Curves for NETtalk
2.4 Dynamic Range Manipulation
For all problems, we checked to see if reducing the dynamic range of the weights to 5
bits, equivalent to our VLSI implementation, would hinder learning. In most cases, there
was no effect. Dynamic range was a limitation for the two largest replication problems
with MFT. By adding an occasional global decay which decremented the absolute value
of the weights, we were able to achieve good learning. Our implementation is capable of
doing this. There was also a degradation of performance on the back-propagation
version of the parity problem which took about a factor of three longer to learn with a 5
bit weight range.
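A minimal sketch of this 5-bit constraint (the decay period is our assumption; the paper only says the decay was occasional):

    import numpy as np

    def clip_and_decay(W, step, decay_every=1000):
        W = np.clip(W, -15, 15)          # 4 bits plus sign, as stored by each synapse
        if step % decay_every == 0:      # occasional global decay
            W = W - np.sign(W)           # decrement the absolute value of every weight
        return W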
3. VLSI IMPLEMENTATION
The previous section shows that relaxation networks are as capable as back-propagation
networks of learning large problems even though they are slower in computer
simulations. We are, however, implementing these feedback networks in VLSI which
will speed up learning by many orders of magnitude. Our choice of learning technique
for implementation is due mainly to the local learning rule which makes it much easier to
cast these networks into electronics than back-propagation.
Figure 3. Photo of 32-Neuron Bellcore Learning Chip
Fig. 3 shows a microchip which has been fabricated. It contains 32 neurons and 992
connections (496 bidirectional synapses). On the extreme right is a noise generator
which supplies 32 uncorrelated pseudo-random noise sources [9] to the neurons to their
left. These noise sources are summed along with the weighted post-synaptic signals from
other neurons at the input to each neuron in order to implement the simulated annealing
process of the stochastic Boltzmann machine. The neuron amplifiers implement a nonlinear activation function which has variable gain to provide for the gain-sharpening
function of the MFT technique. The range of neuron gain can also be adjusted to allow
for scaling in summing currents due to adjustable network size.
Most of the area is occupied by the synapse array. Each synapse digitally stores a weight
ranging from -15 to +15 as 4 bits plus a sign. It multiples the voltage input from the
presynaptic neuron by this weight to output a current. One conductance direction can be
disconnected so that we can experiment with asymmetric networks in accordance with
our recent findings[10]. Although the synapses can have their weights set externally, they
are designed to be adaptive. They store correlations using the local learning rule of Eq.
(5) and adjust their weights accordingly.
Although the chip is still being tested, some measurements can be reported. Fig. 4a
shows a family of transfer functions of a neuron, showing how the gain is continually
adjustable by varying a control voltage. Fig. 4b shows the transfer function of a synapse
as different weights are loaded. The input linear range is about 2 volts.
[Figure omitted: (a) measured neuron transfer functions for a family of gain settings, output against input current (about -200 to 300 pA); (b) measured synapse transfer function, output current against input voltage (roughly 0.5 to 3.5 V), as different weights are loaded.]
Figure 4. Transfer Functions of Electronic Neuron and Synapse
Fig. 5 shows two different neuron outputs with a decreasing noise signal added in. The
upper trace shows a neuron driven by a function generator while the center trace shows
an undriven neuron. The lower trace is the noise control voltage common to all neurons.
The chip is designed to be cascaded with other similar chips in a board-level system
which can be accessed by a computer. The nodes which sum current from synapses for
net input into a neuron are available externally for connection to other chips and for
external clamping of neurons or other external input. We expect to be able to present
roughly 100,000 patterns per second to the chip for learning as was determined from a
previous prototype system[6] that was not cascadable. This speed will not be strongly
affected by the increased network size of a multiple-chip system because of the inherent
parallelism whereby each neuron and synapse updates its own state.
4. CONCLUSION
We have shown by simulation that relaxation networks of the kind we are implementing
are as capable of learning large problems as back-propagation networks. A multiple-Chip
electronic system implementing these networks will make high-speed parallel learning in
them feasible in the future.
Figure 5. Neuron Signals in the Presence of Noise Generator Input
REFERENCES
1. D.E. Rumelhart, G.E. Hinton, & R.J. Williams, "Learning Internal Representations by Error Propagation", in Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol. 1: Foundations, D.E. Rumelhart & J.L. McClelland (eds.), MIT Press, Cambridge, MA (1986), p. 318.
2. J.J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities", Proc. Natl. Acad. Sci. USA, 79, 2554-2558 (1982).
3. D.H. Ackley, G.E. Hinton, & T.J. Sejnowski, "A Learning Algorithm for Boltzmann Machines", Cognitive Science 9 (1985), pp. 147-169.
4. C. Peterson & J.R. Anderson, "A Mean Field Theory Learning Algorithm for Neural Networks", Complex Systems, 1:5, 995-1019 (1987).
5. G. Hinton, "Deterministic Boltzmann Learning Performs Steepest Descent in Weight-Space", Neural Computation, 1, 143-150 (1989).
6. J. Alspector, B. Gupta, & R.B. Allen, "Performance of a Stochastic Learning Microchip", in Advances in Neural Information Processing Systems, edited by D. Touretzky (Morgan Kaufmann, Palo Alto), pp. 748-760 (1989).
7. J.J. Hopfield, "Learning Algorithms and Probability Distributions in Feed-Forward and Feed-Back Networks", Proc. Natl. Acad. Sci. USA, 84, 8429-8433 (1987).
8. T.J. Sejnowski & C.R. Rosenberg, "Parallel Networks that Learn to Pronounce English Text", Complex Systems, 1, 145-168 (1987).
9. J. Alspector, J.W. Gannett, S. Haber, M.B. Parker, & R. Chu, "A VLSI-Efficient Technique for Generating Multiple Uncorrelated Noise Sources and Its Application to Stochastic Neural Networks", IEEE Trans. Circuits & Systems, 38, 109 (Jan. 1991).
10. R.B. Allen & J. Alspector, "Learning of Stable States in Stochastic Asymmetric Networks", IEEE Trans. Neural Networks, 1, 233-238 (1990).
2,946 | 3,670 | Multi-step Linear Dyna-style Planning
Hengshuai Yao
Department of Computing Science
University of Alberta
Edmonton, AB, Canada T6G2E8
Shalabh Bhatnagar
Department of Computer Science
and Automation
Indian Institute of Science
Bangalore, India 560012
Dongcui Diao
School of Economics and Management
South China Normal University
Guangzhou, China 518055
Abstract
In this paper we introduce a multi-step linear Dyna-style planning algorithm. The
key element of the multi-step linear Dyna is a multi-step linear model that enables multi-step projection of a sampled feature and multi-step planning based on
the simulated multi-step transition experience. We propose two multi-step linear
models. The first iterates the one-step linear model, but is generally computationally complex. The second interpolates between the one-step model and the
infinite-step model (which turns out to be the LSTD solution), and can be learned
efficiently online. Policy evaluation on Boyan Chain shows that multi-step linear
Dyna learns a policy faster than single-step linear Dyna, and generally learns faster
as the number of projection steps increases. Results on Mountain-car show that
multi-step linear Dyna leads to much better online performance than single-step
linear Dyna and model-free algorithms; however, the performance of multi-step
linear Dyna does not always improve as the number of projection steps increases.
Our results also suggest that previous attempts at extending LSTD to online
control were unsuccessful because LSTD looks infinitely many steps into the future
and thus suffers from the model errors in non-stationary (control) environments.
1 Introduction
Linear Dyna-style planning extends Dyna to linear function approximation (Sutton, Szepesvári,
Geramifard & Bowling, 2008), and can be used in large-scale applications. However, existing Dyna
and linear Dyna-style planning algorithms are all single-step, because they only simulate sampled
features one step ahead. This is many times insufficient as one does not exploit in such a case all
possible future results. We extend linear Dyna architecture by using a multi-step linear model of
the world, which gives what we call the multi-step linear Dyna-style planning. Multi-step linear
Dyna-style planning is more advantageous than existing linear Dyna, because a multi-step model of
the world can project a feature multiple steps into the future and give more steps of results from the
feature.
For policy evaluation we introduce two multi-step linear models. The first is generated by iterating
the one-step linear model, but is computationally complex when the number of features is large. The
second, which we call the λ-model, interpolates between the one-step linear model and an infinite-step linear model of the world, and is computationally efficient to compute online. Our multi-step linear Dyna-style planning for policy evaluation, Dyna(k), uses the multi-step linear models to generate k-steps-ahead predictions of the sampled feature, and applies generalized TD (temporal difference, e.g., see (Sutton & Barto, 1998)) learning on the imaginary multi-step transition experience.
When k is equal to 1, we recover the existing linear Dyna-style algorithm; when k goes to infinity,
we actually use the LSTD (Bradtke & Barto, 1996; Boyan, 1999) solution for planning.
For the problem of control, related work include least-squares policy iteration (LSPI) (Lagoudakis &
Parr, 2001; Lagoudakis & Parr, 2003; Li, Littman & Mansley, 2009), and linear Dyna-style planning
for control. LSPI is an offline algorithm, that learns a greedy policy out of a data set of experience,
through a number of iterations, each of which sweeps the data set and alternates between LSTD
and policy improvement. Sutton et al. (2008) explored the use of linear function approximation
with Dyna for control, which does planning using a set of linear action models built from state
to state. In this paper, we first build a one-step model from state-action pair to state-action pair
through tracking the greedy policy. Using this tracking model for planning is in fact another way of
doing single-step linear Dyna-style planning. In a similar manner to policy evaluation, we also have
two multi-step models for control. We build the iterated multi-step model by iterating the one-step
tracking model. Also, we build a ?-model for control by interpolating the one-step tracking model
and the infinite-step model (also built through tracking). As the infinite-step model coincides with
the LSTD solution, we actually propose an online LSTD control algorithm.
Policy evaluation on Boyan Chain shows that multi-step linear Dyna learns a policy faster than
single-step linear Dyna. Results on the Mountain-car experiment show that multi-step linear Dyna
can find the optimal policy faster than single-step linear Dyna; however, the performance of multistep linear Dyna does not always improve as the number of projection steps increases. In fact, LSTD
control and the infinite-step linear Dyna for control are both unstable, and some intermediate value
of k makes the k-step linear Dyna for control perform the best.
2 Background
Given a Markov decision process (MDP) with a state space S = {1, 2, ..., N}, the problem of policy evaluation is to predict the long-term reward of a policy π for every state s ∈ S:

V^π(s) = Σ_{t=0}^{∞} γ^t r_t,   0 < γ < 1,   s_0 = s,

where r_t is the reward received by the agent at time t. Given n (n ≤ N) feature functions φ_j : S → R, j = 1, ..., n, the feature of state i is φ(i) = [φ_1(i), φ_2(i), ..., φ_n(i)]^T. Now V^π can be approximated using V̂^π = Φθ, where θ is the weight vector, and Φ is the feature matrix whose entries are Φ_{i,j} = φ_j(i), i = 1, ..., N; j = 1, ..., n. At time t, linear TD(0) updates the weights as

θ_{t+1} = θ_t + α_t δ_t φ_t,   δ_t = r_t + γθ_t^T φ_{t+1} − θ_t^T φ_t,

where α_t is a positive step-size and φ_t corresponds to φ(s_t).
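In code, this update is one line; a sketch (our naming, not the authors' code):

    import numpy as np

    def td0_update(theta, phi_t, r_t, phi_next, alpha, gamma):
        # delta_t = r_t + gamma * theta^T phi_{t+1} - theta^T phi_t
        delta = r_t + gamma * (theta @ phi_next) - theta @ phi_t
        return theta + alpha * delta * phi_t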
Most of earlier work on Dyna uses a lookup table representation of states (Sutton, 1990; Sutton &
Barto, 1998). Modern Dyna is more advantageous in the use of linear function approximation, which
is called linear Dyna-style planning (Sutton et al., 2008). We denote the state transition probability matrix of policy π by P^π, whose (i, j)th component is P^π_{i,j} = E_π{s_{t+1} = j | s_t = i}, and denote the expected reward vector of policy π by R^π, whose ith component is the expected reward of leaving state i in one step. Linear Dyna tries to estimate a compressed model of policy π:

(F^π)^T = (Φ^T D^π Φ)^{−1} Φ^T D^π P^π Φ;   f^π = (Φ^T D^π Φ)^{−1} Φ^T D^π R^π,

where D^π is the N × N matrix whose diagonal entries correspond to the steady-state distribution of states under policy π. F^π and f^π constitute the world model of linear Dyna for policy evaluation, and are estimated online through gradient descent:

F^π_{t+1} = F^π_t + β_t (φ_{t+1} − F^π_t φ_t) φ_t^T;   f^π_{t+1} = f^π_t + β_t (r_t − φ_t^T f^π_t) φ_t,   (1)

respectively, where the features and reward are all from real-world experience and β_t is the modeling step-size.
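A sketch of the gradient-descent model estimation in Eq. (1) (variable names are ours):

    import numpy as np

    def model_update(F, f, phi_t, r_t, phi_next, beta):
        # F tracks phi_{t+1} ~ F phi_t; f tracks r_t ~ f^T phi_t.
        F = F + beta * np.outer(phi_next - F @ phi_t, phi_t)
        f = f + beta * (r_t - phi_t @ f) * phi_t
        return F, f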
Dyna repeats some steps of planning in each of which it samples a feature, projects it using the world
model, and plans using linear TD(0) based on the imaginary experience. For policy evaluation, the
fixed-point of linear Dyna is the same as that of linear TD(0) under some assumptions (Tsitsiklis &
Van Roy, 1997; Sutton et al., 2008), that satisfies
A^π θ* + b^π = 0,   where   A^π = Φ^T D^π (γP^π − I) Φ;   b^π = Φ^T D^π R^π,

and I_{N×N} is the identity matrix.
3 The Multi-step Linear Model
In the lookup-table representation, (P^π)^T and R^π constitute the one-step world model. The k-step transition model of the world is obtained by iterating (P^π)^T, k times with discount (Sutton, 1995):

P^(k) = (γ(P^π)^T)^k,   ∀k = 1, 2, ...

At the same time we accumulate the rewards generated in the process of this iterating:

R^(k) = Σ_{j=0}^{k−1} (γP^π)^j R^π,   ∀k = 1, 2, ...,

where R^(k) is called the k-step reward model. P^(k) and R^(k) predict a feature k steps into the future. In particular, P^(k) φ is the feature of the expected state after k steps from φ, and (R^(k))^T φ is the expected accumulated reward over k steps from φ. Notice that

V^π = R^(k) + (P^(k))^T V^π,   ∀k = 1, 2, ...,   (2)

which is a generalization of the Bellman equation, V^π = R^π + γP^π V^π.
3.1 The Iterated Multi-step Linear Model
In the linear function approximation, F^π and f^π constitute the one-step linear model. Similar to the lookup-table representation, we can iterate F^π, k times, and accumulate the approximated rewards along the way:

F^(k) = (γF^π)^k;   f^(k) = Σ_{j=0}^{k−1} (γ(F^π)^T)^j f^π.

We call (F^(k), f^(k)) the iterated multi-step linear model. By this definition, we extend (2) to the k-step linear Bellman equation:

V̂^π = Φθ* = Φf^(k) + Φ(F^(k))^T θ*,   ∀k = 1, 2, ...,   (3)

where θ* is the linear TD(0) solution.
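The iterated model can be computed as in the following sketch (ours; this is the expensive route that the next subsection argues against for large n):

    import numpy as np

    def iterated_model(F_pi, f_pi, k, gamma):
        # F^(k) = (gamma F)^k;  f^(k) = sum_{j=0}^{k-1} (gamma F^T)^j f.
        G = gamma * F_pi
        F_k = np.linalg.matrix_power(G, k)
        f_k = np.zeros_like(f_pi)
        acc = f_pi.copy()
        for _ in range(k):
            f_k += acc
            acc = G.T @ acc
        return F_k, f_k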
3.2 The λ-model

The quantities F^(k) and f^(k) require powers of F^π. One can first estimate F^π and f^π, and then estimate F^(k) and f^(k) using powers of the estimated F^π. However, real-life tasks require a lot of features. Generally (F^π)^k requires O((k − 1)n^3) computation, which is too complex when the number of features (n) is large.

Rather than using F^(k) and f^(k), we would like to explore some other multi-step model that is cheap in computation but is still meaningful in some sense. First let us see how F^(k) and f^(k) are used if they can be computed. Given an imaginary feature φ̃_τ, we look k steps ahead to see our future feature by applying F^(k):

φ̃^(k)_τ = F^(k) φ̃_τ.

As k grows, F^(k) diminishes and thus φ̃^(k)_τ converges to 0 (this is because γF^π has a spectral radius smaller than one, cf. Lemma 9.2.2 of (Bertsekas, Borkar & Nedić, 2004)). This means that the more steps we look into the future from a given feature, the more ambiguous is our resulting feature. It suggests that we can use a decayed one-step linear model to approximate the effects of looking multiple steps into the future:

L^(k) = (γλ)^{k−1} γF^π,

parameterized by a factor λ ∈ (0, 1]. To guarantee that the optimality (3) still holds, we define

l^(k) = (I − (L^(k))^T)(I − γ(F^π)^T)^{−1} f^π.

We call (L^(k), l^(k)) the λ-model. When k = 1, we have L^(1) = F^(1) = γF^π and l^(1) = f^(1) = f^π, recovering the one-step model used by existing linear Dyna. Notice that L^(k) diminishes as k grows, which is consistent with the fact that F^(k) also diminishes as k grows. Finally, the infinite-step model reduces to a single vector, l^(∞) = f^(∞) = θ*. The intermediate k interpolates between the single-step model and the infinite-step model.

For intermediate k, computation of L^(k) has the same complexity as the estimation of F^π. Interestingly, all l^(k) can be obtained by shifting from l^(∞) by an amount that shrinks l^(∞) itself (similarly, f^(k) can be obtained by shifting from f^(∞) by an amount that shrinks itself):

l^(k) = (I − (L^(k))^T)(I − γ(F^π)^T)^{−1} f^π = l^(∞) − (L^(k))^T l^(∞).   (4)

The case of k = 1 is interesting. The linear Dyna algorithm (Sutton et al., 2008) takes advantage of the fact that l^(1) = f^π and estimates it through gradient descent. In our Dyna algorithm, on the other hand, we use (4) and estimate all l^(k) from the estimate of l^(∞), which is generally no longer
a gradient-descent estimate.
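By contrast to the iterated model, the λ-model costs only a scalar power and one matrix-vector product; a sketch (our naming):

    def lambda_model(F_pi, f_inf, k, gamma, lam):
        # L^(k) = (gamma*lam)^(k-1) * gamma * F;  l^(k) = f_inf - (L^(k))^T f_inf  (Eq. 4).
        # F_pi and f_inf are assumed to be numpy arrays.
        L_k = (gamma * lam) ** (k - 1) * gamma * F_pi
        l_k = f_inf - L_k.T @ f_inf
        return L_k, l_k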
4 Multi-step Linear Dyna-style Planning for Policy Evaluation
The architecture of multi-step linear Dyna-style planning, Dyna(k), is shown in Algorithm 1. Generally, any valid multi-step model can be used in the architecture. For example, in the algorithm we can take M^(k) = F^(k) and m^(k) = f^(k), giving a linear Dyna architecture that uses the iterated multi-step linear model, which we call Dyna(k)-iterate.

In the following we present the family of Dyna(k) planning algorithms that use the λ-model. We first develop a planning algorithm for the infinite-step model, and based on it we then present Dyna(k) planning using the λ-model for any finite k.

4.1 Dyna(∞): Planning using the Infinite-step Model

The infinite-step model is preferable in computation because F^(∞) diminishes and the model reduces to f^(∞). It turns out that f^(∞) can be further simplified to allow an efficient online estimation:

f^(∞) = (I − γ(F^π)^T)^{−1} f^π = (Φ^T D^π Φ − γΦ^T D^π P^π Φ)^{−1} Φ^T D^π Φ f^π = −(A^π)^{−1} b^π.

We can accumulate A^π and b^π online like LSTD (Bradtke & Barto, 1996; Boyan, 1999; Xu et al., 2002) and solve for f^(∞) by matrix inversion or recursive least-squares methods.

As with traditional Dyna, we initially sample a feature φ̃ from some distribution µ. We then apply the infinite-step model to get the expected future feature and reward:

φ̃^(∞) = F^(∞) φ̃;   r̃^(∞) = (f^(∞))^T φ̃.

Next, a generalized linear TD(0) is applied on this simulated experience:

θ̃ := θ̃ + α(r̃^(∞) + γθ̃^T φ̃^(∞) − θ̃^T φ̃) φ̃.

Because φ̃^(∞) = 0, this simplifies into

θ̃ := θ̃ + α(r̃^(∞) − θ̃^T φ̃) φ̃.

We call this algorithm Dyna(∞); it actually uses the LSTD solution for planning.
Algorithm 1 Dyna(k) algorithm for evaluating policy π (using any valid k-step model).
  Initialize θ_0 and some model
  Select an initial state
  for each time step do
    Take an action a according to π, observing r_t and φ_{t+1}
    θ_{t+1} = θ_t + α_t (r_t + γθ_t^T φ_{t+1} − θ_t^T φ_t) φ_t   /* linear TD(0) */
    Update M^(k) and m^(k)
    Set θ̃_0 = θ_{t+1}
    repeat for τ = 1 to p   /* planning */
      Sample φ̃_τ ∼ µ(·)
      φ̃^(k)_τ = M^(k) φ̃_τ   /* φ̃^(∞)_τ = 0 */
      r̃^(k)_τ = (m^(k))^T φ̃_τ
      θ̃_{τ+1} := θ̃_τ + α̃_τ (r̃^(k)_τ + θ̃_τ^T φ̃^(k)_τ − θ̃_τ^T φ̃_τ) φ̃_τ   /* generalized k-step linear TD(0) */
    Set θ_{t+1} = θ̃_{τ+1}
  end for
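The planning loop of Algorithm 1, written out as a sketch (sample_feature stands for drawing from µ and is our placeholder):

    def dyna_k_planning(theta, M_k, m_k, sample_feature, alpha_p, p):
        for _ in range(p):
            phi = sample_feature()       # imaginary feature from mu(.)
            phi_k = M_k @ phi            # k-steps-ahead feature (0 for k = infinity)
            r_k = m_k @ phi              # predicted k-step accumulated reward
            delta = r_k + theta @ phi_k - theta @ phi
            theta = theta + alpha_p * delta * phi
        return theta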
4.2 Planning using the λ-model

The k-step λ-model is efficient to estimate, and can be derived directly from the single-step and infinite-step models:

L^(k) = (γλ)^{k−1} γF^π_{t+1};   l^(k) = f^(∞) − (L^(k))^T f^(∞),

respectively, where the infinite-step model is estimated by f^(∞) = −(A^π_{t+1})^{−1} b^π_{t+1}. Given an imaginary feature φ̃, we look k steps ahead to see the future feature and reward:

φ̃^(k) = L^(k) φ̃;   r̃^(k) = (l^(k))^T φ̃.

Thus we obtain an imaginary k-step transition experience φ̃ → (φ̃^(k), r̃^(k)), on which we apply a k-step version of linear TD(0):

θ̃_{τ+1} = θ̃_τ + α(r̃^(k) + θ̃_τ^T φ̃^(k) − θ̃_τ^T φ̃) φ̃.

We call this the Dyna(k)-lambda planning algorithm. When k = 1, we obtain another single-step Dyna, Dyna(1). Notice that Dyna(1) uses f^(∞) while linear Dyna uses f^π. When k → ∞, we obtain the Dyna(∞) algorithm.
5 Planning for Control
Planning for control is more difficult than that for policy evaluation because in control the policy
changes from time step to time step. Linear Dyna uses a separate model for each action, and these
action models are from state to state (Sutton et al., 2008). Our model for control is different in that
it is from state-action pair to state-action pair. However, rather than building a model for all stateaction pairs, we build only one state-action model that tracks the sequence of greedy actions. Using
this greedy-tracking model is another way of doing linear Dyna-style planning. In the following we
first build the single-step greedy-tracking model and the infinite-step greedy-tracking model, and
based on these tracking models we build the iterated model and the λ-model.
Our extension of linear Dyna to control contains a TD control step (we use Q-learning), and we
call it the linear Dyna-Q architecture. In the Q-learning step, the next feature is already implicitly
selected. Recall that Q-learning selects the largest next Q-function as the target for TD learning,
which is max_{a'} Q̂_{t+1}(s_{t+1}, a') = max_{a'} φ(s_{t+1}, a')^T θ_t. Alternatively, the greedy next state-action feature

φ̃_{t+1} = argmax_{φ' = φ(s_{t+1}, ·)} φ'^T θ_t

is selected by Q-learning. We build a single-step projection matrix between state-action pairs, F, by moving its projection of the current feature towards the greedy next state-action feature (tracking):

F_{t+1} = F_t + β_t (φ̃_{t+1} − F_t φ_t) φ_t^T.   (5)
Algorithm 2 Dyna-Q(k)-lambda: k-step linear Dyna-Q algorithm for control (using the λ-model).
  Initialize F_0, A_0, b_0 and θ_0
  Select an initial state
  for each time step do
    Take action a at s_t (using ε-greedy), observing r_t and s_{t+1}
    Choose a' that leads to the largest Q̂(s_{t+1}, a')
    Set φ = φ(s_t, a), φ̃ = φ(s_{t+1}, a')
    θ_{t+1} = θ_t + α_t (r_t + γφ̃^T θ_t − φ^T θ_t) φ   /* Q-learning */
    A_{t+1} = A_t + φ(γφ̃ − φ)^T,   b_{t+1} = b_t + φ r_t
    f^(∞) = −(A_{t+1})^{−1} b_{t+1}   /* using matrix inversion or recursive least-squares */
    F_{t+1} = F_t + β_t (φ̃ − F_t φ) φ^T
    L^(k) = (γλ)^{k−1} γF_{t+1}
    l^(k) = f^(∞) − (L^(k))^T f^(∞)
    Set θ̃_0 = θ_{t+1}
    repeat p times   /* planning */
      Sample φ̃_τ ∼ µ
      φ̃^(k)_τ = L^(k) φ̃_τ
      r̃^(k)_τ = (l^(k))^T φ̃_τ
      θ̃_{τ+1} := θ̃_τ + α̃_τ (r̃^(k)_τ + θ̃_τ^T φ̃^(k)_τ − θ̃_τ^T φ̃_τ) φ̃_τ
    Set θ_{t+1} = θ̃_{τ+1}
  end for
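The per-step model updates of Algorithm 2 amount to the following sketch (our naming; np.linalg.solve stands in for the recursive least-squares variant):

    import numpy as np

    def tracking_model_update(F, A, b, phi, phi_greedy, r, beta, gamma):
        # Greedy-tracking one-step model (Eq. 5) and LSTD-style accumulators.
        F = F + beta * np.outer(phi_greedy - F @ phi, phi)
        A = A + np.outer(phi, gamma * phi_greedy - phi)
        b = b + r * phi
        f_inf = -np.linalg.solve(A, b)   # infinite-step model
        return F, A, b, f_inf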
Estimation of the single-step reward model, f , is the same as in policy evaluation.
In a similar manner, in the infinite-step model, matrix A is updated using the greedy next feature,
while vector b is updated in the same way as in LSTD. Given A and b, we can solve for f^(∞). Once the one-step model and the infinite-step model are available, we interpolate them and compute the λ-model in a similar manner to policy evaluation. The complete multi-step Dyna-Q control algorithm using the λ-model is shown in Algorithm 2. We note that f^(∞) can be used directly for control, giving an online LSTD control algorithm.
We can also extend the iterated multi-step model and Dyna(k)-iterate to control. Given the single-step greedy-tracking model, we can iterate it and get the iterated multi-step linear model in a similar
way to policy evaluation. The linear Dyna for control using the iterated greedy-tracking model
(which we call Dyna-Q(k)-iterate) is straightforward and thus not shown.
6 Experimental Results
6.1 Boyan Chain Example
The problem we consider is exactly the same as that considered by Boyan (1999). The root mean
square error (RMSE) of the value function is used as a criterion. Previously it was shown that linear
Dyna can learn a policy faster than model-free TD methods in the beginning episodes (Sutton et al.,
2008). However, after some episodes, their implementation of linear Dyna became poorer than
TD. A possible reason leading to their results may be that the step-sizes of learning, modeling and
planning were set to the same value. Also, their step-size diminishes according to 1/(traj#)^{1.1},
which does not satisfy the standard step-size rule required for stochastic approximation. In our linear
Dyna algorithms, we used different step-sizes for learning, modeling and planning.
(1) Learning step-size. We used the same step-size rule for TD as Boyan (1999), where α = 0.1(1 + 100)/(traj# + 100) was found to be the best in its class of step-sizes; we also used it for TD in the learning sub-procedure of all linear Dyna algorithms. (2) Modeling step-size. For Dyna(k)-lambda, we used β_T = 0.5(1 + 10)/(10 + T) for the estimation of F^π, where T is the number of state visits across episodes. For linear Dyna, the estimation of F^π and f^π also used the same β_T. (3) Planning step-size. In our experiments all linear Dyna algorithms simply used α̃ = 0.1.
[Figure omitted: two log-log plots of RMSE against episodes. Left: Dyna(3)-iterate, Dyna(5)-iterate, Dyna(10)-iterate, linear Dyna, and LSTD with A_0 = −0.1I, −I, −10I. Right: TD, Dyna(1), Dyna(10)-lambda, Dyna(∞), and LSTD with A_0 = −0.1I.]
Figure 1: Results on Boyan Chain. Left: comparison of RMSE of Dyna(k)-iterate with LSTD.
Right: comparison of RMSE of Dyna(k)-lambda with TD and LSTD.
The weights of the various learning algorithms, f^π for linear Dyna, and b^π for Dyna(k), were all initialized to zero. No eligibility trace was used for any algorithm. In the planning step, all Dyna algorithms sampled a unit basis vector whose nonzero component was at a uniformly random location. In the following we report the results of planning only once. All RMSEs were averaged over 30 (identical) sets of trajectories.

Figure 1 (left) shows the performance of Dyna(k)-iterate and LSTD, and Figure 1 (right) shows the performance of Dyna(k)-lambda, LSTD and TD. All linear Dyna algorithms were found to be significantly and consistently faster than TD. Furthermore, multi-step linear Dyna algorithms were much faster than single-step linear Dyna algorithms. Matrix A of LSTD and Dyna(k)-lambda needs perturbation at initialization, which has a great impact on the performance of the two algorithms. For LSTD, we tried initializing A_0 to −10I, −I, and −0.1I, and show their effects in Figure 1 (left), where A_0 = −0.1I was the best for LSTD. Similar to LSTD, Dyna(k)-lambda is also sensitive to A_0. Linear Dyna and Dyna(k)-iterate do not use A and thus do not have to tune A_0. F^π was initialized to 0 for Dyna(k) (k < ∞) and linear Dyna. In Figure 1 (right), LSTD and Dyna(k)-lambda were compared under the same setting (Dyna(k)-lambda also used A_0 = −0.1I). Dyna(k)-lambda used λ = 0.9.
6.2 Mountain-car
We used the same Mountain-car environment and tile coding as in the linear Dyna paper (Sutton
et al., 2008). The state feature has a dimension of 10, 000. The state-action feature is shifted from
the state feature, and has a dimension of 30, 000 because there are three actions of the car. Because
the feature and matrix are really large, we were not able to compute the iterated model, and hence
we only present here the results of Dyna-Q(k)-lambda.
Experimental setting. (1)Step-sizes. The Q-learning step-size was chosen to be 0.1, in both the
independent algorithm and the sub-procedure of Dyna-Q(k)-lambda. The planning step-size was
0.1. The matrix F is much more dense than A and leads to a very slow online performance. To
tackle this problem, we avoided computing F explicitly, and used a least-squares computation of
the projection, given in the supplementary material. In this implementation, there is no modeling
step-size. (2) Initialization. The parameters θ and b were both initialized to 0. A was initialized to −I. (3) Other settings. The λ value for Dyna-Q(k)-lambda was 0.9. We recorded the state-action
pairs online and replayed the feature of a past state-action pair in planning. We also compared the
linear Dyna-style planning for control (with state features) (Sutton et al., 2008), which has three
sets of action models for this problem. In linear Dyna-style planning for control we replayed a state
feature of a past time step, and projected it using the model of the action that was selected at that
time step. No eligibility trace or exploration was used. Results reported below were all averaged
over 30 independent runs, each of which contains 20 episodes.
[Figure omitted: online return (about −350 to −100) against episodes 1-20 for Q-learning, linear Dyna, Dyna-Q(1), Dyna-Q(5)-lambda, Dyna-Q(10)-lambda, Dyna-Q(20)-lambda, and Dyna-Q(∞).]
Figure 2: Results on Mountain-car: comparison of online return of Dyna-Q(k)-lambda, Q-learning
and linear Dyna for control.
Results are shown in Figure 2. Linear Dyna-style planning algorithms were found to be significantly
faster than Q-learning. Multi-step planning algorithms can be still faster than single-step planning
algorithms. The results also show that planning too many steps into the future is harmful: e.g., Dyna-Q(20)-lambda and Dyna-Q(∞) gave poorer performance than Dyna-Q(5)-lambda and Dyna-Q(10)-lambda. This shows that some intermediate values of k trade off model accuracy against the depth of looking ahead, and these performed best. In fact, Dyna-Q(∞) and the LSTD control algorithm were both unstable, and typically failed once or twice in 30 runs. The intuition is that in control the policy changes from time step to time step and the model is highly non-stationary. By solving the model and looking infinitely many steps into the future, LSTD and Dyna-Q(∞) magnify the errors in the model.
7 Conclusion and Future Work
We have taken important steps towards extending linear Dyna-style planning to multi-step planning.
Multi-step linear Dyna-style planning uses multi-step linear models to project a simulated feature
multiple steps into the future. For control, we proposed a different way of doing linear Dyna-style
planning, that builds a model from state-action pair to state-action pair, and tracks the greedy action selection. Experimental results show that multi-step linear Dyna-style planning leads to better
performance than existing single-step linear Dyna-style planning on Boyan chain and Mountaincar problems. Our experimental results show that linear Dyna-style planning can achieve a better
performance by using different step-sizes for learning, modeling, and planning than using a uniform step-size for the three sub-procedures. While it is not clear from previous work, our results
fully demonstrate the advantages of linear Dyna over TD/Q-learning for both policy evaluation and
control.
Our work also sheds light on why previous attempts at developing an independent online LSTD control algorithm were not successful (e.g., forgetting strategies (Sutton et al., 2008)): LSTD and Dyna-Q(∞) can become unstable because they magnify the model errors by looking infinitely many steps into the future.
Current experiments do not include comparisons with any other LSTD control algorithm because
we did not find in the literature an independent LSTD control algorithm. LSPI is usually off-line, and
its extension to online control has to deal with online exploration (Li et al., 2009). Some researchers
have combined LSTD as the critic within the Actor-Critic framework (Xu et al., 2002; Peters & Schaal,
2008); however, LSTD there is still not an independent control algorithm.
Acknowledgements
The authors received much feedback from Dr. Rich Sutton and Dr. Csaba Szepesvári. We gratefully
acknowledge their help in improving the paper in many aspects. We also thank Alborz Geramifard
for sending us Matlab code of tile coding. This research was supported by iCORE, NSERC and the
Alberta Ingenuity Fund.
References
Bertsekas, D. P., Borkar, V., & Nedić, A. (2004). Improved temporal difference methods with linear function approximation. Learning and Approximate Dynamic Programming (pp. 231-255). IEEE Press.
Boyan, J. A. (1999). Least-squares temporal difference learning. ICML-16.
Bradtke, S., & Barto, A. G. (1996). Linear least-squares algorithms for temporal difference learning. Machine Learning, 22, 33-57.
Li, L., Littman, M. L., & Mansley, C. R. (2009). Online exploration in least-squares policy iteration. AAMAS-8.
Peters, J., & Schaal, S. (2008). Natural actor-critic. Neurocomputing, 71, 1180-1190.
Sutton, R. S. (1990). Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. ICML-7.
Sutton, R. S. (1995). TD models: modeling the world at a mixture of time scales. ICML-12.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. MIT Press.
Sutton, R. S., Szepesvári, C., Geramifard, A., & Bowling, M. (2008). Dyna-style planning with linear function approximation and prioritized sweeping. UAI-24.
Tsitsiklis, J. N., & Van Roy, B. (1997). An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42, 674-690.
Xu, X., He, H., & Hu, D. (2002). Efficient reinforcement learning using recursive least-squares methods. Journal of Artificial Intelligence Research, 16, 259-292.
2,947 | 3,671 | Subject independent EEG-based BCI decoding
Siamac Fazli
Cristian Grozea
Márton Danóczy
Florin Popescu
Benjamin Blankertz
Klaus-Robert Müller
Abstract
In the quest to make Brain Computer Interfacing (BCI) more usable, dry electrodes have emerged that get rid of the initial 30 minutes required for placing an
electrode cap. Another time consuming step is the required individualized adaptation to the BCI user, which involves another 30 minutes calibration for assessing
a subject's brain signature. In this paper we aim to also remove this calibration
procedure from BCI setup time by means of machine learning. In particular, we
harvest a large database of EEG BCI motor imagination recordings (83 subjects)
for constructing a library of subject-specific spatio-temporal filters and derive a
subject-independent BCI classifier. Our offline results indicate that BCI-naïve
users could start real-time BCI use with no prior calibration at only a very moderate performance loss.
1 Introduction
The last years in BCI research have seen drastically reduced training and calibration times due
to the use of machine learning and adaptive signal processing techniques (see [9] and references
therein) and novel dry electrodes [18]. Initial BCI systems were based on operant conditioning
and could easily require months of training on the subject side before it was possible to use them
[1, 10]. Second generation BCI systems require to record a brief calibration session during which
a subject assumes a fixed number of brain states, say, movement imagination and after which the
subject-specific spatio-temporal filters (e.g. [6]) are inferred along with individualized classifiers
[9]. Recently, first steps to transfer a BCI user's filters and classifiers between sessions were studied
[14] and a further online-study confirmed that indeed such transfer is possible without significant
performance loss [16]. In the present paper we go one step further in this spirit and propose
a subject-independent zero-training BCI that enables both experienced and novice BCI subjects to
use BCI immediately without calibration.
Our offline study applies a number of state-of-the-art learning methods (e.g. SVM, Lasso, etc.)
in order to optimally construct such one-size-fits-all classifiers from a vast number of redundant
features, here a large filter bank available from 83 BCI users. The use of sparsifying techniques
specifically tells us which aspects of the EEG are predictive for future BCI users. As
expected, we find that a distribution of different alpha band features in combination with a number of
characteristic common spatial patterns (CSPs) is highly predictive for all users. What is found as the
outcome of a machine learning experiment can also be viewed as a compact quantitative description
of the characteristic variability between individuals in the large subject group. Note that it is not
the best subjects that characterize the variance necessary for a subject-independent algorithm; rather, the spread over existing physiology is to be represented concisely. Clearly, our procedure may also be of use apart from BCI in other scientific fields, where complex characteristic features need to
be homogenized into one overall inference model. The paper first provides an overview of the data
used, then the ensemble learning algorithm is outlined, consisting of the procedure for building the
filters, the classifiers and the gating function, where we apply various machine learning methods.
Interestingly, we are able to successfully classify trials of novel subjects with zero training, suffering
only a small loss in performance. Finally we put our results into perspective.
2 Available Data and Experiments
We used 83 BCI datasets (sessions), each consisting of 150 trials from 83 individual subjects. Each
trial consists of one of two predefined movement imaginations, being left and right hand, i.e. data
was chosen such that it relies only on these 2 classes, although originally three classes were cued during the calibration session, being left hand (L), right hand (R) and foot (F). 45 EEG channels, which
are in accordance with the 10-20 system, were identified to be common in all sessions considered.
The data were recorded while subjects were immobile, seated on a comfortable chair with arm rests.
The cues for performing a movement imagination were given by visual stimuli, and occurred every
4.5-6 seconds in random order. Each trial was referenced by a 3 second long time-window starting
at 500 msec after the presentation of the cue. Individual experiments consisted of three different
training paradigms. The first two training paradigms consisted of visual cues in form of a letter or
an arrow, respectively. In the third training paradigm the subject was instructed to follow a moving
target on the screen. Within this target the edges lit up to indicate the type of movement imagination
required. The experimental procedure was designed to closely follow [3]. Electromyogram (EMG)
on both forearms and the foot were recorded as well as electrooculogram (EOG) to ensure there
were no real movements of the arms and that the movements of the eyes were not correlated to the
required mental tasks.
3 Generation of the Ensemble
The ensemble consists of a large redundant set of subject-dependent common spatial pattern filters (CSP cf. [6]) and their matching classifiers (LDA). Each dataset is first preprocessed by 18
predefined temporal filters (i.e. band-pass filters) in parallel (see upper panel of Figure 1). A corresponding spatial filter and linear classifier is obtained for every dataset and temporal filter. Each
resulting CSP-LDA couple can be interpreted as a potential basis function. Finding an appropriate
weighting for the classifier outputs of these basis functions is of paramount importance for the accurate prediction. We employed different forms of regression and classification in order to find an
optimal weighting for predicting the movement imagination data of unseen subjects[2, 4]. This processing was done by leave-one-subject-out cross-validation, i.e. the session of a particular subject
was removed, the algorithm trained on the remaining trials (of the other subjects) and then applied
to this subject's data (see lower panel of Figure 1).
3.1 Temporal Filters
The µ-rhythm (9-14 Hz) and synchronized components in the β-band (16-22 Hz) are macroscopic idle rhythms that prevail over the postcentral somatosensory cortex and precentral motor cortex when a given subject is at rest. Imagination of movements, as well as actual movement, is known to suppress these idle rhythms contralaterally. However, there are not only subject-specific differences in the most discriminative frequency range of the mentioned idle rhythms, but also session-to-session differences.

We identified 18 neurophysiologically relevant temporal filters, of which 12 lie within the µ-band, 3 in the β-band, two in between the µ- and β-bands, and one broadband 7-30 Hz. In all following performance-related tables we report the percentage of misclassified trials, i.e. the 0-1 loss.
3.2 Spatial Filters and Classifiers
CSP is a popular algorithm for calculating spatial filters, used for detecting event-related (de)synchronization (ERD/ERS), and is considered to be the gold-standard of ERD-based BCI systems
[13, 19, 6]. The CSP algorithm maximizes the variance of right-hand trials while simultaneously minimizing the variance of left-hand trials. Given the two covariance matrices Σ_1 and Σ_2, estimated from data of size channels × concatenated timepoints, the CSP algorithm returns the matrices W and D. W is a matrix of projections, where the i-th row has a relative variance of d_i for trials of class 1 and a relative variance of 1 − d_i for trials of class 2. D is a diagonal matrix with entries d_i ∈ [0, 1], of length n, the number of channels:

W Σ_1 W^T = D   and   W Σ_2 W^T = I − D.   (1)

Best discrimination is provided by filters with very high eigenvalues (emphasizing one class) or very low eigenvalues (emphasizing the other class); we therefore chose to include only the projections with the two highest and the two lowest eigenvalues in our analysis. We use Linear Discriminant Analysis (LDA) [5]; each temporally filtered session corresponds to a CSP set and a matched LDA.

Figure 1: Two flowcharts of the ensemble method. The red patches in the top panel illustrate the inactive nodes of the ensemble after sparsification.
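Equation (1) amounts to a generalized eigenvalue problem; a sketch of how the filters could be obtained (our code, not the authors'):

    import numpy as np
    from scipy.linalg import eigh

    def csp_filters(Sigma1, Sigma2, n_keep=2):
        # Solve Sigma1 v = d (Sigma1 + Sigma2) v, giving W Sigma1 W^T = D
        # and W Sigma2 W^T = I - D, as in Eq. (1).
        d, V = eigh(Sigma1, Sigma1 + Sigma2)
        order = np.argsort(d)
        keep = np.r_[order[:n_keep], order[-n_keep:]]  # 2 lowest + 2 highest eigenvalues
        return V[:, keep].T, d[keep]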
3.3 Final gating function
The final gating function combines the outputs of the individual ensemble members to a single one.
This can be realized in many ways. For a number of ensemble methods the mean has proven to be
a surprisingly good choice [17]. As a baseline for our ensemble we simply averaged all outputs of
our individual classifiers. This result is given as mean in Table 2.
Classification We employ various classification methods such as k Nearest Neighbor (kNN), Linear
Discriminant Analysis (LDA), Support Vector Machine (SVM) and a Linear Programming Machine
(LPM) [12].
Quadratic regression with $\ell_1$ regularization.
$$\operatorname*{argmin}_{w_{ij}^{(k)},\, b} \;\; \sum_{x \in X \setminus X_k} \big(h_k(x) - y(x)\big)^2 \;+\; \lambda \sqrt{\sum_{i=1}^{B} \sum_{j \in S \setminus S_k} \sum_{x \in X \setminus X_k} c_{ij}(x)^2} \; \Bigg( \sum_{i=1}^{B} \sum_{j \in S \setminus S_k} \big|w_{ij}^{(k)}\big| + |b| \Bigg) \tag{2}$$
$$h_k(x) = \sum_{i=1}^{B} \sum_{j \in S \setminus S_k} w_{ij}^{(k)}\, c_{ij}(x) - b \tag{3}$$
where $c_{ij}(x) \in (-\infty, \infty)$ is the continuous classifier output, before thresholding, obtained from session j by applying the band-pass filter i, B is the number of frequency bands, and S is the complete set
Figure 2: Feature selection during cross-validation: white dashes mark the features kept after regularization for the prediction of the data of each subject. The numbers on the vertical axis represent the subject index as well as the error rate (%). The red line depicts the baseline error of individual subjects (classical auto-band CSP). Features as well as baseline errors are sorted by the error magnitude of the self-prediction. Note that some of the features are useful in predicting the data of most other subjects, while some are rarely or never used.
of sessions, X the complete data set, $S_k$ the set of sessions of subject k, $X_k$ the data set of subject k, y(x) the class label of trial x, and $w_{ij}^{(k)}$ in equation (3) the weights given to the LDA outputs.
The hyperparameter λ in equation (2) was varied on a logarithmic scale and multiplied by a dataset scaling factor which accounts for fluctuations in the voting population distribution and size for each subject. The dataset scaling factor is computed using $c_{ij}(x)$ for all $x \in X \setminus X_k$. For reasons of computational efficiency the hyperparameter was tuned on a small random subset of the subjects whose labels are to be predicted from the data of the other subjects, such that the resulting test/train error ratio was minimal; this in turn determined which of the 83×18 candidate classifiers were left in or out. The $\ell_1$-regularized regression with this choice of λ was then applied to all subjects, with
results (in terms of feature sparsification) shown in Figure 2. In fact the exemplary CSP patterns shown in the lower part of the figure exhibit neurophysiologically meaningful activation in motor-cortical areas. The most predictive subjects show smooth monopolar patterns, while subjects with a
higher self-prediction loss slowly move from dipolar to rather ragged maps. From the point of view
of approximation even the latter make sense for capturing the overall ensemble variance.
The regressions were implemented using CVX, a package for specifying and solving convex programs [11]. We coupled an $\ell_2$ loss with an $\ell_1$ penalty term on a linear voting scheme for the ensemble.
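For readers without Matlab/CVX, the following CVXPY sketch expresses the same optimization as equations (2) and (3). It is a reconstruction under the objective stated above, with the dataset scaling factor computed from the $c_{ij}(x)$ values; the variable names and data layout are ours.

```python
import numpy as np
import cvxpy as cp

def fit_ensemble_weights(C, y, lam):
    """l1-regularized quadratic regression, a sketch of Eqs. (2)-(3).
    C: (n_trials, n_classifiers) matrix of continuous LDA outputs c_ij(x),
       flattened over temporal filters i and other subjects' sessions j.
    y: (n_trials,) labels in {-1, +1}; lam: regularization strength lambda."""
    w = cp.Variable(C.shape[1])
    b = cp.Variable()
    scale = np.sqrt((C ** 2).sum())          # dataset scaling factor
    h = C @ w - b                            # ensemble output h_k(x), Eq. (3)
    obj = cp.sum_squares(h - y) + lam * scale * (cp.norm1(w) + cp.abs(b))
    cp.Problem(cp.Minimize(obj)).solve()
    return w.value, b.value
```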
Least squares regression is the special case of equation (2) with λ = 0.
3.4 Validation
The subject-specific CSP-based classification methods with automatically tuned, subject-dependent temporal filters (termed reference methods) are validated by an 8-fold cross-validation, splitting the data chronologically. Chronological splitting for cross-validation is common practice in EEG classification, since it preserves the non-stationarity of the data [9].
To validate the quality of the ensemble learning we employed a leave-one-subject-out cross-validation (LOSO-CV) procedure, i.e., for predicting the labels of a particular subject we only use data from other subjects.
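Schematically, the LOSO-CV procedure looks as follows; train_ensemble and evaluate are hypothetical callables standing in for the fitting and scoring steps described above.

```python
def loso_cv(datasets, train_ensemble, evaluate):
    """Leave-one-subject-out cross-validation sketch.
    datasets: dict mapping subject id -> (trials, labels)."""
    errors = {}
    for held_out in datasets:
        train = {s: d for s, d in datasets.items() if s != held_out}
        model = train_ensemble(train)        # fit only on the other subjects
        trials, labels = datasets[held_out]
        errors[held_out] = evaluate(model, trials, labels)  # 0-1 loss
    return errors
```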
4 Results
Overall performance of the reference methods, of other baseline methods, and of the ensemble method is presented in Table 2. Reference method performances of subject-specific CSP-based classification are presented with heuristically tuned frequency bands [6]. Furthermore we considered much simpler (zero-training) methods as controls. Laplacian stands for the power difference between two Laplace-filtered channels (C3 vs. C4), and simple band-power stands for the power difference of the same two
            ------- classification --------    --- regression ---
% of data   kNN     LDA     LPM     SVM        LSR     LSR-ℓ1
10          31.3    45.3    37.3    31.3       46.0    30.7
20          32.0    40.0    38.0    28.7       42.0    31.3
30          32.7    38.7    37.3    33.1       38.0    30.0
40          32.7    36.0    37.9    31.3       36.7    29.3

Table 1: Main results of various machine learning algorithms.
approach    ----------------- machine learning -----------------    - zero training -    classical training
method      mean    kNN     LDA     LPM     SVM     LSR     LSR-ℓ1   Lap     BP           CSP
# <25%      31      30      18      14      29      19      36       24      11           39
25%-tile    17.3    17.3    27.3    26.7    18.7    26.0    16.0     22.0    31.3         11.9
median      30.7    31.3    36.0    37.3    28.7    36.7    29.3     34.7    38.7         25.9
75%-tile    41.3    42.0    43.3    44.0    41.3    44.0    40.7     45.3    45.3         41.4

Table 2: Comparing ML results to various baselines.
channels without any spatial filtering. For the simple zero-training methods we chose a broad-band filter of 7–30 Hz, since it is the least restrictive and scored among the best performances on the subject level. The bias b in equation (3) can be tuned broadly for all sessions or corrected individually per session, and can be implemented for online experiments in multiple ways [16, 20, 15]. In our case we chose to adapt b without label information, operating under the assumption that the class frequencies are balanced: we simply subtracted the mean output over all trials of a given session.
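This unsupervised bias correction is a one-liner; the sketch assumes the ensemble outputs of one session are collected in a NumPy array.

```python
import numpy as np

def adapt_bias(session_outputs):
    """Session-wise bias adaptation without labels: assuming balanced classes,
    the mean ensemble output should be zero, so we subtract it."""
    return session_outputs - np.mean(session_outputs)
```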
Table 1 shows a comparison of the various classification schemes. We evaluate the performance on a given percentage of the training data in order to observe the information gain as a function of the number of datapoints. Clearly
the two best ML techniques are on par with subject-dependent CSP classifiers and outperform the
simple zero-training methods (not shown in Table 1 but in Table 2) by far. While SVM scores the
best median loss over all subjects (see Table 2), $\ell_1$-regularized regression scored better results for the well-performing BCI subjects (Figure 3, column 1, row 3). In Figure 3 and Table 2 we furthermore show the results of the $\ell_1$-regularized regression and the SVM versus the auto-band reference method (zero-training versus subject-dependent training), as well as versus the simple zero-training methods Laplace and band-power. Figure 4 shows all individual temporal filters used to generate the ensemble, where the color codes how frequently each filter was used to predict the labels of previously unseen data. As expected, mostly µ-band related temporal filters were selected. Contrary to what one may expect, the features that generalize well to other subjects' data do not come exclusively from BCI subjects with low self-prediction errors (see the white dashes in Figure 2); in fact, some features of weakly performing subjects are necessary to capture the full variance of the ensemble. However, there is a strong correlation between a subject having a low self-prediction loss and the generalizability of that subject's features for predicting other subjects, as can be seen in the right part of Figure 4.
4.1 Focusing on a particular subject
In order to give an intuition of how the ensemble works in detail we will focus on a particular
subject. We chose the subject with the lowest reference method cross-validation error (10%). Given the non-linearity in the band-power estimation (see Figure 1) it is impossible to picture the resulting ensemble spatial filter exactly. However, by averaging the chosen CSP filters with the weightings obtained by the ensemble and multiplying them by their LDA classifier weights, we get
an approximation:
$$P_{ENS} = \sum_{i=1}^{B} \sum_{j \in S \setminus S_k} w_{ij}\, W_{ij}\, C_{ij} \tag{4}$$
where $w_{ij}$ is the weight matrix resulting from the $\ell_1$-regularized regression given in equations (2) and (3), $W_{ij}$ is the CSP filter corresponding to temporal filter i and subject j, and $C_{ij}$ are the LDA weights (B in Figure 5). For the case of classical auto-band CSP this simply reduces to $P_{CSP} = WC$ (A in Figure 5). Another way to exemplify the ensemble performance is to refer to a transfer function. By
Figure 5). Another way to exemplify the ensemble performance is to refer to a transfer function. By
injecting a sinusoid with a frequency within the corresponding band-pass filter into a given channel
[Figure 3 consists of pairwise scatter plots of classification loss in percent: ℓ1-regularized regression, SVM, and the ensemble mean are plotted against subject-dependent CSP, Laplace C3–C4, simple band-power C3–C4, and against each other.]

Figure 3: Compares the two best-scoring machine learning methods, $\ell_1$-regularized regression and Support Vector Machine, to subject-dependent CSP and other simple zero-training approaches. The axes show the classification loss in percent.
[Figure 4, left panel: the 18 temporal filters (7–30, 8–15, 10–15, 9–14, 8–13, 7–12, 7–14, 12–18, 9–12, 12–15, 18–24, 8–11, 11–14, 16–22, 7–10, 10–13, 14–20, 26–35 Hz) with their color-coded contributions over frequency (Hz). Right panel: number of active features vs. cross-validation loss (%), correlation coefficient −0.78.]

Figure 4: On the left: the temporal filters used and, in color code, their contribution to the final $\ell_1$-regularized regression classification (the scale is normalized from 0 to 1). Clearly, µ-band temporal filters between 10–13 Hz are most predictive. On the right: the number of features used vs. the self-predicted cross-validation loss. A high self-prediction performance can be seen to yield a large number of features that are predictive for the whole ensemble.
and processing it by the four CSP filters, estimating the bandpower of the resulting signal and finally
combining the four outputs by the LDA classifier, we obtain a response for the particular channel,
where the sinusoid was injected. Repeating this procedure for each channel results in a response
matrix. This procedure can be applied to a single CSP/LDA pair; however, we may also repeat it for as many features as were chosen for a given subject by the ensemble, and hence obtain an accurate description of how the ensemble processes the given EEG data. The resulting response matrices are displayed in panel C of Figure 5. While the subject-specific pattern (classical) looks less focused and more diverse, the general pattern matches the one obtained by the ensemble. As a third way of visualizing how the ensemble works, we show the primary projections of the CSP filters that were given the 6 highest weights by the ensemble in the left panel (F), and the distribution of all weights in panel D. The spatial positions of the highest channel weightings differ slightly for each of the CSP filters given; however, the maxima of the projection matrices are clearly positioned around the primary motor cortex.
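The transfer-function probe of this section can be sketched as follows. The log-variance band-power estimate, the sampling rate, the signal length, and the bp_filter callable are our assumptions.

```python
import numpy as np

def transfer_response(W_csp, w_lda, bp_filter, n_channels, freq, fs=100.0, T=1000):
    """Inject a sinusoid into one channel at a time, band-pass filter, project
    through the CSP filters, estimate band-power, and combine with the LDA
    weights; returns one response value per channel."""
    t = np.arange(T) / fs
    response = np.zeros(n_channels)
    for ch in range(n_channels):
        x = np.zeros((T, n_channels))
        x[:, ch] = np.sin(2 * np.pi * freq * t)   # sinusoid in channel ch
        z = W_csp @ bp_filter(x).T                 # the four CSP projections
        power = np.log(np.var(z, axis=1))          # log-variance band-power
        response[ch] = w_lda @ power               # LDA combination
    return response
```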
5 Conclusion
On the path of bringing BCI technology from the lab to a more practical everyday setting, it becomes indispensable to reduce the setup time, which is nowadays more than one hour, towards less than one minute. While dry electrodes provide a first step by avoiding the time needed for placing the cap, the calibration session remained, and it is here that we contribute by dispensing with calibration sessions. Our present study is an offline analysis providing a positive answer to the question whether a subject-independent classifier could become reality for a BCI-naïve user. We have taken great
care in this work to exclude data from a given subject when predicting his/her performance by using
the previously described LOSOCV. In contrast with previous work on ensemble approaches to BCI
classification based on simple majority voting and Adaboost [21, 8], which utilized only limited datasets, we have profited greatly from a large body of high-quality experimental data accumulated over the years. This has enabled us to choose, by means of machine learning technology, a very sparse set of voting classifiers which performed as well as standard, state-of-the-art subject-calibrated methods. $\ell_1$-regularized regression in this case performed better than the other methods (such as majority voting) that we also tested. Note that, interestingly, the chosen features (see Figure 2) do
not exclusively come from the best performing subjects, in fact some average performer was also
selected. However most white dashes are present in the left half, i.e. most subjects with high autoband reference method performance were selected. Interestingly some subjects with very high BCI
performance are not selected at all, while others generalize well in the sense that their models are able to predict other subjects' data. No single frequency band dominated classification accuracy (see Figure 4). Therefore, the regularization must have selected diverse features. Nevertheless, as can be
seen in panel G of Figure 5 there is significant redundancy between classifiers in the ensemble. Our
approach of finding a sparse solution reduces the dimensionality of the chosen features significantly.
For very able subjects our zero-training method exhibits a slight performance decrease, which however will not prevent them from performing successfully in BCI. The sparsification of classifiers,
in this case, also leads to potential insight into neurophysiological processes. It identifies relevant
cortical locations and frequency bands of neuronal population activity which are in agreement with
general neuroscientific knowledge. While this work concentrated on zero training classification and
not brain activity interpretation, a much closer look is warranted. Movement imagination detection
is not only determined by the cortical representation of the limb whose control is being imagined (in
this case the arm) but also by differentially located cortical regions involved in movement planning
(frontal), execution (fronto-parietal) and sensory feedback (occipito-parietal). Patterns relevant to
BCI detection appear in all these areas, and while the dominant discriminant frequencies are in the µ range, higher frequencies appear in our ensemble, albeit in combination with less focused patterns. What we have found with our machine learning algorithm should therefore be interpreted as representing the characteristic neurophysiological variation of a large subject group, which in itself is a highly relevant topic that goes beyond the scope of this technical study. Future online studies will be needed
to add further experimental evidence in support of our findings. We plan to adopt the ensemble
approach in combination with a recently developed EEG cap having dry electrodes [18] and thus to
be able to reduce the required preparation time for setting up a running BCI system to essentially
zero. The generic ensemble classifier derived here is also an excellent starting point for a subsequent
coadaptive learning procedure in the spirit of [7].
Figure 5: A: primary projections for classical auto-band CSP. B: linearly averaged CSPs from the ensemble. C: transfer functions for classical auto-band and ensemble CSPs. D: weightings of the 28 ensemble members; the six highest components are shown in F. E: linear average ensemble temporal
filter (red), heuristic (blue). F: primary projections of the 6 ensemble members that received highest
weights. G: Broad-band version of the ensemble for a single subject. The outputs of all basis
classifiers are applied to each trial of one subject. The top row (broad) gives the label, the second
row (broad) gives the output of the classical auto-band CSP, and each of the following rows (thin)
gives the outputs of the individual classifiers of other subjects. The individual classifier outputs are
sorted by their correlation coefficient with respect to the class labels. The trials (columns) are sorted
by true labels with primary key and by mean ensemble output as a secondary key. The row at the
bottom gives the sign of the average ensemble output.
References
[1] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor. A spelling device for the paralysed. Nature, 398:297–298, 1999.
[2] B. Blankertz, G. Curio, and K.-R. Müller. Classifying single trial EEG: Towards brain computer interfacing. In T. G. Diettrich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Inf. Proc. Systems (NIPS 01), volume 14, pages 157–164, 2002.
[3] B. Blankertz, G. Dornhege, M. Krauledat, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio. The Berlin Brain-Computer Interface: EEG-based communication without subject training. IEEE Trans Neural Syst Rehabil Eng, 14:147–152, 2006.
[4] B. Blankertz, G. Dornhege, S. Lemm, M. Krauledat, G. Curio, and K.-R. Müller. The Berlin Brain-Computer Interface: Machine learning based detection of user specific brain states. Journal of Universal Computer Science, 12:2006, 2006.
[5] B. Blankertz, G. Dornhege, C. Schäfer, R. Krepki, J. Kohlmorgen, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio. Boosting bit rates and error detection for the classification of fast-paced motor commands based on single-trial EEG analysis. IEEE Trans. Neural Sys. Rehab. Eng., 11(2):127–131, 2003.
[6] B. Blankertz, R. Tomioka, S. Lemm, M. Kawanabe, and K.-R. Müller. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Proc Magazine, 25(1):41–56, 2008.
[7] B. Blankertz and C. Vidaurre. Towards a cure for BCI illiteracy: Machine-learning based co-adaptive learning. BMC Neuroscience, 10, 2009.
[8] R. Boostani and M. H. Moradi. A new approach in the BCI research based on fractal dimension as feature and Adaboost as classifier. J. Neural Eng., 1:212–217, 2004.
[9] G. Dornhege, J. del R. Millán, T. Hinterberger, D. McFarland, and K.-R. Müller, editors. Toward Brain-Computer Interfacing. Cambridge, MA: MIT Press, 2007.
[10] T. Elbert, B. Rockstroh, W. Lutzenberger, and N. Birbaumer. Biofeedback of slow cortical potentials. I. Electroencephalogr. Clin. Neurophysiol., 48:293–301, 1980.
[11] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming (web page and software). http://stanford.edu/~boyd/cvx, 2008.
[12] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction (Springer Series in Statistics). Springer New York, 2nd edition, 2001.
[13] Z. J. Koles and A. C. K. Soong. EEG source localization: implementing the spatio-temporal decomposition approach. Electroencephalogr. Clin. Neurophysiol., 107:343–352, 1998.
[14] M. Krauledat, M. Schröder, B. Blankertz, and K.-R. Müller. Reducing calibration time for brain-computer interfaces: A clustering approach. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Inf. Proc. Systems (NIPS 07), volume 19, pages 753–760, 2007.
[15] M. Krauledat, P. Shenoy, B. Blankertz, R. P. N. Rao, and K.-R. Müller. Adaptation in CSP-based BCI systems. In Toward Brain-Computer Interfacing, pages 305–309. MIT Press, 2007.
[16] M. Krauledat, M. Tangermann, B. Blankertz, and K.-R. Müller. Towards zero training for brain-computer interfacing. PLoS ONE, 3:e2967, 2008.
[17] R. Polikar. Ensemble based systems in decision making. IEEE Circuits and Systems Magazine, 6(3):21–45, 2006.
[18] F. Popescu, S. Fazli, Y. Badower, B. Blankertz, and K.-R. Müller. Single trial classification of motor imagination using 6 dry EEG electrodes. PLoS ONE, 2:e637, 2007.
[19] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller. Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Trans. Rehab. Eng., 8(4):441–446, 2000.
[20] P. Shenoy, M. Krauledat, B. Blankertz, R. P. N. Rao, and K.-R. Müller. Towards adaptive classification for BCI. Journal of Neural Engineering, 3(1):R13–R23, 2006.
[21] S. Wang, Z. Lin, and C. Zhang. Network boosting for BCI applications. Lecture Notes in Computer Science, 3735:386–388, 2005.
Replacing supervised classification learning by
Slow Feature Analysis in spiking neural networks
Stefan Klampfl, Wolfgang Maass
Institute for Theoretical Computer Science
Graz University of Technology
A-8010 Graz, Austria
{klampfl,maass}@igi.tugraz.at
Abstract
It is an open question how neurons in the brain are able to learn without supervision to discriminate between spatio-temporal firing patterns of presynaptic neurons. We show that a known unsupervised learning algorithm, Slow Feature Analysis (SFA), is able to acquire the classification capability of Fisher's Linear Discriminant (FLD),
a powerful algorithm for supervised learning, if temporally adjacent samples are
likely to be from the same class. We also demonstrate that it enables linear readout
neurons of cortical microcircuits to learn the detection of repeating firing patterns
within a stream of spike trains with the same firing statistics, as well as discrimination of spoken digits, in an unsupervised manner.
1 Introduction
Since the presence of supervision in biological learning mechanisms is rare, organisms often have
to rely on the ability of these mechanisms to extract statistical regularities from their environment.
Recent neurobiological experiments [1] have suggested that the brain uses some type of slowness
objective to learn the categorization of external objects without a supervisor. Slow Feature Analysis
(SFA) [2] could be a possible mechanism for that. We establish a relationship between the unsupervised SFA learning method and a commonly used method for supervised classification learning:
Fisher's Linear Discriminant (FLD) [3]. More precisely, we show that SFA approximates the classification capability of FLD by replacing the supervisor with the simple heuristic that two temporally
adjacent samples in the input time series are likely to be from the same class. Furthermore, we
demonstrate in simulations of a cortical microcircuit model that SFA could also be an important
ingredient in extracting temporally stable information from trajectories of network states and that it
supports the idea of "anytime" computing, i.e., it provides information about the stimulus identity
not only at the end of a trajectory of network states, but already much earlier.
This paper is structured as follows. We start in section 2 with brief recaps of the definitions of
SFA and FLD. We discuss the relationship between these methods for unsupervised and supervised
learning in section 3, and investigate the application of SFA to trajectories in section 4. In section 5
we report results of computer simulations of several SFA readouts of a cortical microcircuit model.
Section 6 concludes with a discussion.
2 Basic Definitions
2.1 Slow Feature Analysis (SFA)
Slow Feature Analysis (SFA) [2] is an unsupervised learning algorithm that extracts the slowest
components $y_i$ from a multi-dimensional input time series x by minimizing the temporal variation $\Delta(y_i)$ of the output signal $y_i$, which is defined in [2] as the average of its squared temporal derivative. Thus the objective is to minimize
$$\min \Delta(y_i) := \langle \dot{y}_i^2 \rangle_t. \tag{1}$$
The notation $\langle \cdot \rangle_t$ denotes averaging over time, and $\dot{y}$ is the time derivative of y. The additional constraints of zero mean ($\langle y_i \rangle_t = 0$) and unit variance ($\langle y_i^2 \rangle_t = 1$) avoid the trivial constant solution $y_i(t) \equiv 0$. If multiple slow features are extracted, a third constraint ($\langle y_i y_j \rangle_t = 0, \forall j < i$) ensures that they are decorrelated and ordered by decreasing slowness, i.e., $y_1$ is the slowest feature extracted, $y_2$ the second slowest feature, and so on. In other words, SFA finds those functions $g_i$ out of a certain predefined function space that produce the slowest possible outputs $y_i = g_i(x)$ under
these constraints.
This optimization problem is hard to solve in the general case [4], but if we assume that the time series x has zero mean ($\langle x \rangle_t = 0$) and if we only allow linear functions $y = w^T x$, the problem simplifies to the objective
$$\min J_{SFA}(w) := \frac{w^T \langle \dot{x}\dot{x}^T \rangle_t\, w}{w^T \langle xx^T \rangle_t\, w}. \tag{2}$$
The matrix $\langle xx^T \rangle_t$ is the covariance matrix of the input time series and $\langle \dot{x}\dot{x}^T \rangle_t$ denotes the covariance matrix of time derivatives (or time differences, for discrete time) of the input time series. The weight vector w which minimizes (2) is the solution to the generalized eigenvalue problem
$$\langle \dot{x}\dot{x}^T \rangle_t\, w = \lambda \langle xx^T \rangle_t\, w \tag{3}$$
corresponding to the smallest eigenvalue λ. To make use of a larger function space, one typically considers linear combinations $y = w^T z$ of fixed nonlinear expansions $z = h(x)$ and performs the optimization (2) in this high-dimensional space.
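For concreteness, here is a minimal sketch of linear SFA as the generalized eigenvalue problem (3); the discrete-time difference and the use of scipy's eigh are our implementation choices, not prescribed by the text.

```python
import numpy as np
from scipy.linalg import eigh

def linear_sfa(x, n_features=1):
    """Linear SFA sketch: x is a (T, n) time series; returns the n_features
    slowest weight vectors as rows (generalized eigenproblem of Eq. (3))."""
    x = x - x.mean(axis=0)                 # enforce zero mean
    dx = np.diff(x, axis=0)                # discrete-time derivative
    cov = x.T @ x / len(x)                 # <x x^T>_t
    dcov = dx.T @ dx / len(dx)             # <x' x'^T>_t
    _, W = eigh(dcov, cov)                 # eigenvalues in ascending order
    return W[:, :n_features].T             # smallest eigenvalue = slowest
```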
2.2 Fisher's Linear Discriminant (FLD)
Fisher's Linear Discriminant (FLD) [3], on the other hand, is a supervised learning method, since it is applied to labeled training examples $\langle x, c \rangle$, where $c \in \{1, \ldots, C\}$ is the class to which the example x belongs. The goal is to find a weight vector w so that the ability to predict the class of x from the value of $w^T x$ is maximized.
FLD searches for that projection direction w which maximizes the separation between classes while
at the same time minimizing the variance within classes, thereby minimizing the class overlap of the
projected values:
$$\max J_{FLD}(w) := \frac{w^T S_B\, w}{w^T S_W\, w}. \tag{4}$$
For C point sets $S_c$, each with $N_c$ elements and mean $\mu_c$, $S_B$ is the between-class covariance matrix given by the separation of the class means, $S_B = \sum_c N_c (\mu_c - \mu)(\mu_c - \mu)^T$, and $S_W$ is the within-class covariance matrix given by $S_W = \sum_c \sum_{x \in S_c} (x - \mu_c)(x - \mu_c)^T$. Again, the vector w optimizing (4) can be viewed as the solution to a generalized eigenvalue problem,
$$S_B\, w = \lambda S_W\, w, \tag{5}$$
corresponding to the largest eigenvalue λ.
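Analogously, FLD can be sketched via the generalized eigenproblem (5), under the assumption that $S_W$ is invertible.

```python
import numpy as np
from scipy.linalg import eigh

def fld(X, c):
    """FLD sketch: X is an (N, n) array of points, c an (N,) array of class
    labels; returns the direction of the largest eigenvalue of Eq. (5)."""
    mu = X.mean(axis=0)
    n = X.shape[1]
    SB, SW = np.zeros((n, n)), np.zeros((n, n))
    for cls in np.unique(c):
        Xc = X[c == cls]
        mc = Xc.mean(axis=0)
        SB += len(Xc) * np.outer(mc - mu, mc - mu)   # between-class scatter
        SW += (Xc - mc).T @ (Xc - mc)                # within-class scatter
    _, W = eigh(SB, SW)                              # ascending eigenvalues
    return W[:, -1]                                  # largest eigenvalue
```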
3 SFA can acquire the classification capability of FLD
SFA and FLD receive different data types as inputs: unlabeled time series for SFA, in contrast to
labeled single data points for the FLD. Therefore, in order to apply the unsupervised SFA learning
algorithm to the same classification problem as the supervised FLD, we have to convert the labeled
training samples into a time series of unlabeled data points that can serve as an input to the SFA
algorithm.¹ In the following we investigate the relationship between the weight vectors found by
both methods for a particular way of time series generation.
¹A first link between SFA and pattern recognition has been established in [5]. There the optimization is performed over all possible pattern pairs of the same class. However, it might often be implausible to have access to such an artificial time series, e.g., from the perspective of a readout neuron that receives input on-the-fly. We take a different approach and apply the standard SFA algorithm to a time series consisting of randomly selected patterns of the classification problem, where the class at each time step is switched with a certain probability.
We consider a classification problem with C classes, i.e., assume we are given point sets $S_1, S_2, \ldots, S_C \subseteq \mathbb{R}^n$. Let $N_c$ be the number of points in $S_c$ and let $N = \sum_{c=1}^{C} N_c$ be the total number of points. In order to create a time series $x_t$ out of these point sets, we define a Markov model with C states $S = \{1, 2, \ldots, C\}$, one for each class, and choose at each time step $t = 1, \ldots, T$ a random point from the class that corresponds to the current state of the Markov model. We define the transition probability from state $i \in S$ to state $j \in S$ as
$$P_{ij} = \begin{cases} a \cdot \frac{N_j}{N} & \text{if } i \neq j, \\ 1 - \sum_{k \neq j} P_{ik} & \text{if } i = j, \end{cases} \tag{6}$$
with some appropriate constant a > 0. The stationary distribution of this Markov model is $\pi = (N_1/N, N_2/N, \ldots, N_C/N)$. We choose the initial distribution $p_0 = \pi$, i.e., at any time t the probability that point $x_t$ is chosen from class c is $N_c/N$.
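The following sketch makes this construction concrete. The sampling details are our own choices, and a must be small enough that all diagonal entries $P_{ii}$ remain non-negative.

```python
import numpy as np

def make_time_series(point_sets, a, T, seed=0):
    """Generate the time series of Eq. (6): switch from state i to state j
    with probability a*N_j/N, and draw a random point of the current class."""
    rng = np.random.default_rng(seed)
    sizes = np.array([len(S) for S in point_sets], dtype=float)
    N, C = sizes.sum(), len(point_sets)
    P = a * np.tile(sizes / N, (C, 1))        # off-diagonal entries a*N_j/N
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))  # P_ii = 1 - sum_{k != i} P_ik
    state = rng.choice(C, p=sizes / N)        # initial distribution p0 = pi
    xs = []
    for _ in range(T):
        S = point_sets[state]
        xs.append(S[rng.integers(len(S))])
        state = rng.choice(C, p=P[state])
    return np.array(xs)
```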
For this particular way of generating the time series from the original classification problem we can
express the matrices $\langle xx^T \rangle_t$ and $\langle \dot{x}\dot{x}^T \rangle_t$ of the SFA objective (2) in terms of the within-class and between-class scatter matrices of the FLD (4), $S_W$ and $S_B$, in the following way [6]:
$$\langle xx^T \rangle_t = \frac{1}{N} S_W + \frac{1}{N} S_B \tag{7}$$
$$\langle \dot{x}\dot{x}^T \rangle_t = \frac{2}{N} S_W + a \cdot \frac{2}{N} S_B \tag{8}$$
Note that only $\langle \dot{x}\dot{x}^T \rangle_t$ depends on a, whereas $\langle xx^T \rangle_t$ does not.
For small a we can neglect the effect of $S_B$ on $\langle \dot{x}\dot{x}^T \rangle_t$ in (8). In this case the time series consists mainly of transitions within a class, whereas switching between classes is relatively rare. Therefore the covariance of time derivatives is mostly determined by the within-class scatter of the point sets, and both matrices become approximately proportional: $\langle \dot{x}\dot{x}^T \rangle_t \approx \frac{2}{N} S_W$. Moreover, if we assume that $S_W$ (and therefore $\langle \dot{x}\dot{x}^T \rangle_t$) is positive definite, we can rewrite the SFA objective (2) as
$$\min_w J_{SFA}(w) \;\hat{=}\; \max_w \frac{1}{J_{SFA}(w)} = \max_w \frac{w^T \langle xx^T \rangle_t\, w}{w^T \langle \dot{x}\dot{x}^T \rangle_t\, w} \approx \max_w \left( \frac{1}{2} + \frac{1}{2} \cdot \frac{w^T S_B\, w}{w^T S_W\, w} \right) \;\hat{=}\; \max_w J_{FLD}(w). \tag{9}$$
That is, the weight vector that optimizes the SFA objective (2) also optimizes the FLD objective
(4). For C > 2 this equivalence can be seen by recalling the definition of SFA as a generalized
eigenvalue problem (3) and inserting (7) and (8):
$$\langle \dot{x}\dot{x}^T \rangle_t\, W = \langle xx^T \rangle_t\, W \Lambda \;\;\Longrightarrow\;\; S_B W = S_W W \left( 2\Lambda^{-1} - E \right), \tag{10}$$
where $W = (w_1, \ldots, w_n)$ is the matrix of generalized eigenvectors, $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ is the diagonal matrix of generalized eigenvalues, and E denotes the identity matrix. The last line of (10) is just the formulation of FLD as a generalized eigenvalue problem (5). More precisely, the eigenvectors of the SFA problem are also eigenvectors of the FLD problem. Note that the eigenvalues correspond by $\lambda_i^{FLD} = 2/\lambda_i^{SFA} - 1$, which means the order of the eigenvalues is reversed ($\lambda_i^{SFA} > 0$). Thus, the subspace spanned by the slowest features is the same one that optimizes separability in terms of Fisher's Discriminant, and the slowest feature is the weight vector which achieves maximal separation.
Figure 1A demonstrates this relationship on a sample two-class problem in two dimensions for the special case of $N_1 = N_2 = N/2$. In this case, at each time step the class is switched with probability $p = a/2$ or left unchanged with probability $1 - p$. We interpret the weight vectors found by both methods as normal vectors of hyperplanes in the input space, which we place simply onto the mean value µ of all training data points (i.e., the hyperplanes are defined as $w^T x = \theta$ with $\theta = w^T \mu$). One sees that the weight vector found by the application of SFA to the training time series $x_t$ generated with p = 0.2 is approximately equal to the weight vector resulting from FLD on the initial sets of training points. This demonstrates that SFA has extracted the class of the points as the slowest varying feature by finding a direction that separates both classes.
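Combining the three sketches above reproduces the spirit of the experiment in Figure 1; the Gaussian point sets and the random seed are, of course, our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)
S1 = rng.normal([0.0, 0.0], 1.0, size=(250, 2))   # class 1
S2 = rng.normal([4.0, 1.0], 1.0, size=(250, 2))   # class 2
x = make_time_series([S1, S2], a=0.4, T=5000)     # switching prob. p = a/2 = 0.2
w_sfa = linear_sfa(x)[0]
w_fld = fld(np.vstack([S1, S2]), np.repeat([0, 1], 250))
cos = abs(w_sfa @ w_fld) / (np.linalg.norm(w_sfa) * np.linalg.norm(w_fld))
print(np.degrees(np.arccos(cos)))                 # a small angle is expected
```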
Figure 1: Relationship between SFA and FLD for a two-class problem in 2D. (A) Sample point sets with 250 points for each class. The dashed line indicates a hyperplane corresponding to the weight vector $w_{FLD}$ resulting from the application of FLD to the two-class problem. The black solid line shows a hyperplane for the weight vector $w_{SFA}$ resulting from SFA applied to the time series generated from these training points as described in the text (T = 5000, p = 0.2). The dotted line displays an additional SFA hyperplane resulting from a time series generated with p = 0.45. All hyperplanes are placed onto the mean value of all training points. (B) Dependence of the error between the weight vectors found by FLD and SFA on the switching probability p. This error is defined as the average angle between the weight vectors obtained on 100 randomly chosen classification problems. Error bars denote the standard error of the mean.
Figure 1B quantifies the deviation of the weight vector resulting from the application of SFA to the time series from the one found by FLD on the original points. We use the average angle between the two weight vectors as an error measure. It can be seen that if p is low, i.e., transitions between classes are rare compared to transitions within a class, the angle between the vectors is small and SFA approximates FLD very well. The angle increases moderately with increasing p; even for higher values of p (up to 0.45) the approximation is reasonable and a good classification by the slowest feature can be achieved (see the dotted hyperplane in Figure 1A). As soon as p reaches a value of about 0.5, the error grows almost immediately to the maximal value of 90°. For p = 0.5 (a = 1) points are chosen independently of their class, making the matrices $\langle \dot{x}\dot{x}^T \rangle_t$ and $\langle xx^T \rangle_t$ proportional. This means that every possible vector w is a solution to the generalized eigenvalue problem (3), resulting in an average angle of about 45°.
4 Application to trajectories of training examples
In the previous section we have shown that SFA approximates the classification capability of FLD
if the probability is low that two successive points in the input time series to SFA are from different
classes. Apart from this temporal structure induced by the class information, however, these samples
are chosen independently at each time step. In this section we investigate how the SFA objective
changes when the input time series consists of a sequence of trajectories of samples instead of
individual points only.
First, we consider a time series $x_t$ consisting of multiple repetitions of a fixed predefined trajectory $\xi_t$, which is embedded into noise input consisting of a random number of points drawn from the same distribution as the trajectory points, but independently at each time step. It is easy to show [6] that for such a time series the SFA objective (2) reduces to finding the eigenvector of the matrix $\bar{\Sigma}_t$ corresponding to the largest eigenvalue. $\bar{\Sigma}_t$ is the covariance matrix of the trajectory $\xi_t$ with $\xi_t$ delayed by one time step, i.e., it measures the temporal covariance (hence the index t) of $\xi_t$ with time lag 1. Since transitions between two successive points of the trajectory $\xi_t$ occur much more often in the time series $x_t$ than transitions between any other possible pair of points, SFA has to respond as smoothly as possible (i.e., maximize the temporal correlations) during $\xi_t$ in order to produce the
slowest possible output. This means that SFA is able to detect repetitions of $\xi_t$ by responding during such instances with a distinctive shape.
Next, we consider a classification problem given by C sets of trajectories, $T_1, T_2, \ldots, T_C \subseteq (\mathbb{R}^n)^{\bar{T}}$, i.e., the elements of each set $T_c$ are sequences of $\bar{T}$ n-dimensional points. We generate a time series according to the same Markov model as described in the previous section, except that we do not choose individual points at each time step; rather, we generate a sequence of trajectories. For this time series we can express the matrices $\langle xx^T \rangle_t$ and $\langle \dot{x}\dot{x}^T \rangle_t$ in terms of the within-class and between-class scatter of the individual points of the trajectories in the $T_c$, analogously to (7) and (8) [6]. While the expression for $\langle xx^T \rangle_t$ is unchanged, the temporal correlations induced by the use of trajectories have an effect on the covariance of temporal differences $\langle \dot{x}\dot{x}^T \rangle_t$. First, this matrix additionally depends on the temporal covariance $\bar{\Sigma}_t$ with time lag 1 of all available trajectories in all sets $T_c$. Second, the effective switching probability is reduced by a factor of $1/\bar{T}$: whenever a trajectory is selected, $\bar{T}$ points from the same class are presented in succession.
This means that even for a small switching probability² the objective of SFA cannot be reduced solely to the FLD objective; rather, there is a trade-off between the tendency to separate trajectories of different classes (as explained by the relation between $S_B$ and $S_W$) and the tendency to produce smooth responses during individual trajectories (determined by the temporal covariance matrix $\bar{\Sigma}_t$):
$$\min_w J_{SFA}(w) = \frac{w^T \langle \dot{x}\dot{x}^T \rangle_t\, w}{w^T \langle xx^T \rangle_t\, w} \;\propto\; \frac{1}{N} \cdot \frac{w^T S_W\, w}{w^T \langle xx^T \rangle_t\, w} \;-\; \bar{p} \cdot \frac{w^T \bar{\Sigma}_t\, w}{w^T \langle xx^T \rangle_t\, w}\,, \tag{11}$$
where N is here the total number of points in all trajectories and $\bar{p}$ is the fraction of transitions between two successive points of the time series that belong to the same trajectory. The weight vector w which minimizes the first term in (11) is equal to the weight vector found by the application
of FLD to the classification problem of the individual trajectory points (note that $S_B$ enters (11) through $\langle xx^T \rangle_t$; cf. eq. (9)). The weight vector which maximizes the second term is the one which
produces the slowest possible response during individual trajectories. If the separation between the
trajectory classes is large compared to the temporal correlations (i.e., the first term in (11) dominates
for the resulting w) the slowest feature will be similar to the weight vector found by FLD on the
corresponding classification problem. On the other hand, as the temporal correlations of the trajectories increase, i.e., the trajectories themselves become smoother, the slowest feature will tend to
favor exploiting this temporal structure of the trajectories over the separation of different classes (in
this case, (11) is dominated by the second term for the resulting w).
5 Application to linear readouts of a cortical microcircuit model
In the following we discuss several computer simulations of a cortical microcircuit of spiking neurons that demonstrate the theoretical arguments given in the previous section. We trained a number of linear SFA readouts³ on a sequence of trajectories of network states, each of which is defined
by the low-pass filtered spike trains of the neurons in the circuit. Such recurrent circuits typically
provide a temporal integration of the input stream and project it nonlinearly into a high-dimensional
space [7], thereby boosting the expressive power of the subsequent linear SFA readouts. Note, however, that the optimization (2) implicitly performs an additional whitening of the circuit response. As
a model for a cortical microcircuit we use the laminar circuit from [8], consisting of 560 spiking neurons organized into layers 2/3, 4, and 5, with layer-specific connection probabilities obtained
from experimental data [9, 10].
In the first experiment we investigated the ability of SFA to detect a repeating firing pattern within
noise input of the same firing statistics. We recorded circuit trajectories in response to 200 repetitions
of a fixed spike pattern which are embedded into a continuous Poisson input stream of the same rate.
We then trained linear SFA readouts on this sequence of circuit trajectories (we used an exponential
²In fact, for sufficiently long trajectories the SFA objective becomes effectively independent of the switching probability.
³We interpret the linear combination defined by each slow feature as the weight vector of a hypothetical linear readout.
Figure 2: Detecting embedded spike patterns. (A) From top to bottom: sample stimulus sequence, response spike trains of the network, and slowest features. The stimulus consists of 10 channels and is defined by repetitions of a fixed spike pattern (dark gray) which are embedded into random Poisson input of the same rate. The pattern has a length of 250 ms and is made up of Poisson spike trains of rate 20 Hz. The period between two repetitions is drawn uniformly between 100 ms and 500 ms. The response spike trains of the laminar circuit of [8] are shown separated into layers 2/3, 4, and 5. The numbers of neurons in the layers are indicated on the left, but only the response of every 12th neuron is plotted. Shown are the 5 slowest features, $y_1$ to $y_5$, for the network response shown above. The dashed lines indicate values of 0. (B) Phase plots of low-pass filtered versions (leaky integration, τ = 100 ms) of individual slow features in response to a test sequence of 50 embedded patterns, plotted against each other (black: traces during the pattern; gray: during random Poisson input).
filter with τ = 30 ms and a sampling time of 1 ms). The period of Poisson input in between two such patterns was chosen randomly.
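The network states themselves arise from exponential filtering of the spike trains; a straightforward sketch (the binary spike-matrix representation is our assumption) is:

```python
import numpy as np

def exp_filter(spikes, tau=30.0, dt=1.0):
    """Low-pass filter spike trains with an exponential kernel (tau = 30 ms,
    1 ms sampling, as stated above). spikes: (T, n_neurons) binary matrix."""
    decay = np.exp(-dt / tau)
    state = np.zeros(spikes.shape, dtype=float)
    state[0] = spikes[0]
    for t in range(1, len(spikes)):
        state[t] = decay * state[t - 1] + spikes[t]
    return state
```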
At first glance there is no clear difference in Figure 2A between the raw SFA responses during
periods of pattern presentations and during phases of noise input due to the same firing statistics.
However, we found that on average the slow feature responses during noise input are zero, whereas
a characteristic response remains during pattern presentations. This effect is predicted by the theoretical arguments in section 4. It can be seen in phase plots of traces that are obtained by a leaky
integration of the slowest features in response to a test sequence of 50 embedded patterns (see Figure
2B) that the slow features span a subspace where the response during pattern presentations can be
nicely separated from the response during noise input. That is, by simple threshold operations on the
low-pass filtered versions of the slowest features one can in principle detect the presence of patterns
within the continuous input stream. Furthermore, this extracted information is not only available
after a pattern has been presented, but already during the presentation of the pattern, which supports
the idea of "anytime" computing.
In the second experiment we tested whether SFA is able to discriminate two classes of trajectories
as described in section 4. We performed a speech recognition task using the dataset considered originally in [11] and later in the context of biological circuits in [7, 12, 13]. This isolated spoken digits
dataset consists of the audio signals recorded from 5 speakers pronouncing the digits "zero", "one", ..., "nine" in ten different utterances (trials) each. We preprocessed the raw audio files with a model of the cochlea [14] and converted the resulting analog cochleagrams into 20 spike trains (using the algorithm in [15]) that serve as input to our microcircuit model (see Figure 3A). We tried to
Figure 3: SFA applied to digit recognition of a single speaker. (A) From top to bottom: cochleagrams, input spike trains, response spike trains of the network, and traces of different linear readouts. Each cochleagram has 86 channels with analog values between 0 and 1 (white: near 1; black: near 0). Stimulus spike trains are shown for two different utterances of the given digit (black and gray; the black spike times correspond to the cochleagram shown above). The response spike trains of the laminar circuit from [8] are shown separated into layers 2/3, 4, and 5. The number of neurons in each layer is indicated on the left, but only the response of every 12th neuron is plotted. The responses to the two stimulus spike trains in the panel above are shown superimposed with the corresponding color. Each readout trace corresponds to a weighted sum (Σ) of network states for the black responses in the panel above. The trace of the slowest feature ("SF1", see B) is compared to traces of readouts trained by FLD and by an SVM with linear kernel to discriminate at any time between the network states of the two classes. All weight vectors are normalized to length 1. The dotted line denotes the threshold of the respective linear classifier. (B) Response of the 5 slowest features $y_1$ to $y_5$ of the previously learned SFA to trajectories of the three test utterances of each class not used for training (black: digit "one"; gray: digit "two"). The slowness index $\eta = \frac{T}{2\pi}\sqrt{\Delta(y)}$ [2] is calculated from these output signals. The angle α denotes the deviation of the projection direction of the respective feature from the direction found by FLD. The thick curves in the shaded area display the mean SFA responses over all three test trajectories for each class. (C) Phase plots of individual slow features plotted against each other (thin lines: individual responses; thick lines: mean response over all test trajectories).
discriminate between trajectories in response to inputs corresponding to utterances of the digits "one" and "two" of a single speaker. We kept three utterances of each digit for testing and generated from the
remaining training samples a sequence of 100 input samples, recorded for each sample the response
of the circuit, and concatenated the resulting trajectories in time. Note that here we did not switch
the classes of two successive trajectories with a certain probability because, as explained in the previous section, for long trajectories the SFA response is independent of this switching probability.
Rather, we trained linear SFA readouts on a completely random trajectory sequence.
Figure 3B shows the 5 slowest features, $y_1$ to $y_5$, ordered by decreasing slowness, in response to the trajectories corresponding to the three remaining test utterances of each class, digit "one" and digit "two". In this example already the slowest feature $y_1$ extracts the class of the input patterns almost perfectly: it responds with positive values for trajectories in response to utterances of digit "two" and with negative values for utterances of digit "one". This property of the extracted features, to respond differently for different stimulus classes, is called What-information [2]. The second slowest feature $y_2$, on the other hand, responds with shapes whose sign is independent of the pattern identity. One can say that, in principle, $y_2$ simply encodes the presence of, and the location within, a response. This is a typical example of a representation of Where-information [2], i.e., the "pattern location" regardless of the identity of the pattern. The other slow features $y_3$ to $y_5$ do not extract either What- or Where-information explicitly, but rather a mixture of both. As a measure for the
discriminative capability of a specific SFA response, i.e., its quality as a possible classifier, we measured the angle between the projection direction corresponding to this slow feature and the direction
of the FLD. It can be seen in Figure 3B that the slowest feature y1 is closest to the FLD. Hence,
according to (11), this constitutes an example where the separation between classes dominates, but
is already significantly influenced by the temporal correlations of the circuit trajectories.
Figure 3C shows phase plots of these slow features shown in Figure 3B plotted against each other.
In the three plots involving feature y1 it can be seen that the directions of the response vector (i.e.,
the vector composed of the slow feature values at a particular point in time) cluster at class-specific
angles, which is characteristic for What-information. On the other hand, these phase plots tend to
form loops in phase space (instead of just straight lines from the origin), where each point on this
loop corresponds to a position within the trajectory. This is a typical property of Where-information.
Similar responses have been theoretically predicted in [4] and found in simulations of a hierarchical
(nonlinear) SFA network trained with a sequence of one-dimensional trajectories [2].
This experiment demonstrates that SFA extracts information about the spoken digit in an unsupervised manner by projecting the circuit trajectories onto a subspace where they are nicely separable
so that they can easily be classified by later processing stages. Moreover, this information is provided not only at the end of a specific trajectory, but is made available already much earlier. After
sufficient training, the slowest feature $y_1$ in Figure 3B responds with positive or negative values indicating the stimulus class during almost the whole duration of the network trajectory. This again supports the idea of "anytime" computing. It can be seen in the bottom panel of Figure 3A that the
slowest feature, which is obtained in an unsupervised manner, achieves a good separation between
the two test trajectories, comparable to the supervised methods of FLD and Support Vector Machine
(SVM) [16] with linear kernel.
6 Discussion
The results of our paper show that Slow Feature Analysis is in fact a very powerful tool, which is
able to approximate the classification capability that results from supervised classification learning.
Its elegant formulation as a generalized eigenvalue problem has allowed us to establish a relationship to the supervised method of Fisher's Linear Discriminant (FLD). A more detailed discussion of this relationship, including complete derivations, can be found in [6]. If temporally contiguous points in the time series are likely to belong to the same class, SFA is able to extract the class as a slowly varying feature in an unsupervised manner. This ability is of particular interest in the context of biologically realistic neural circuits because it could enable readout neurons to extract from the trajectories of network states information about the stimulus, without any "teacher", whose existence is highly dubious in the brain.
model that linear readouts trained with SFA are able to detect specific spike patterns within a stream
of spike trains with the same firing statistics and to discriminate between different spoken digits.
Moreover, SFA provides in these tasks an ?anytime? classification capability.
Acknowledgments
We would like to thank Henning Sprekeler and Laurenz Wiskott for stimulating discussions. This
paper was written under partial support by the Austrian Science Fund FWF project # S9102-N13
and project # FP6-015879 (FACETS), project # FP7-216593 (SECO) and project # FP7-231267
(ORGANIC) of the European Union.
References
[1] N. Li and J. J. DiCarlo. Unsupervised natural experience rapidly alters invariant object representation in visual cortex. Science, 321:1502-1507, 2008.
[2] L. Wiskott and T. J. Sejnowski. Slow feature analysis: unsupervised learning of invariances. Neural Computation, 14(4):715-770, 2002.
[3] R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:179-188, 1936.
[4] L. Wiskott. Slow feature analysis: A theoretical analysis of optimal free responses. Neural Computation, 15(9):2147-2177, 2003.
[5] P. Berkes. Pattern recognition with slow feature analysis. Cognitive Sciences EPrint Archive (CogPrint) 4104, February 2005. http://cogprints.org/4104/.
[6] S. Klampfl and W. Maass. A theoretical basis for emergent pattern discrimination in neural systems through slow feature extraction. Submitted for publication, 2009.
[7] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002.
[8] S. Häusler and W. Maass. A statistical analysis of information processing properties of lamina-specific cortical microcircuit models. Cerebral Cortex, 17(1):149-162, 2007.
[9] A. Gupta, Y. Wang, and H. Markram. Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science, 287:273-278, 2000.
[10] A. M. Thomson, D. C. West, Y. Wang, and A. P. Bannister. Synaptic connections and small circuits involving excitatory and inhibitory neurons in layers 2-5 of adult rat and cat neocortex: triple intracellular recordings and biocytin labelling in vitro. Cerebral Cortex, 12(9):936-953, 2002.
[11] J. J. Hopfield and C. D. Brody. What is a moment? Transient synchrony as a collective mechanism for spatio-temporal integration. Proc. Nat. Acad. Sci. USA, 98(3):1282-1287, 2001.
[12] D. Verstraeten, B. Schrauwen, D. Stroobandt, and J. Van Campenhout. Isolated word recognition with the liquid state machine: a case study. Inf. Process. Lett., 95(6):521-528, 2005.
[13] R. Legenstein, D. Pecevski, and W. Maass. A learning theory for reward-modulated spike-timing-dependent plasticity with application to biofeedback. PLoS Computational Biology, 4(10):1-27, 2008.
[14] R. F. Lyon. A computational model of filtering, detection, and compression in the cochlea. In Proc. IEEE Int. Conf. Acoustics Speech and Signal Processing, pages 1282-1285, May 1982.
[15] B. Schrauwen and J. V. Campenhout. BSA, a fast and accurate spike train encoding scheme. In Proceedings of the International Joint Conference on Neural Networks, 2003.
[16] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
2,949 | 3,673 | Multi-label Prediction via Sparse Infinite CCA
Piyush Rai and Hal Daumé III
School of Computing, University of Utah
{piyush,hal}@cs.utah.edu
Abstract
Canonical Correlation Analysis (CCA) is a useful technique for modeling dependencies between two (or more) sets of variables. Building upon the recently suggested probabilistic interpretation of CCA, we propose a nonparametric,
fully Bayesian framework that can automatically select the number of correlation components, and effectively capture the sparsity underlying the projections.
In addition, given (partially) labeled data, our algorithm can also be used as a
(semi)supervised dimensionality reduction technique, and can be applied to learn
useful predictive features in the context of learning a set of related tasks. Experimental results demonstrate the efficacy of the proposed approach for both CCA as
a stand-alone problem, and when applied to multi-label prediction.
1 Introduction
Learning with examples having multiple labels is an important problem in machine learning and
data mining. Such problems are encountered in a variety of application domains. For example, in
text classification, a document (e.g., a newswire story) can be associated with multiple categories.
Likewise, in bio-informatics, a gene or protein usually performs several functions. All these settings
suggest a common underlying problem: predicting multivariate responses. When the responses
come from a discrete set, the problem is termed multi-label classification. The aforementioned
setting is a special case of multitask learning [6] when predicting each label is a task and all the tasks
share a common source of input. An important characteristic of these problems is that the labels
are not independent of each other but actually often have significant correlations with each other. A
naïve approach to learn in such settings is to train a separate classifier for each label. However, such
an approach ignores the label correlations and leads to sub-optimal performance [20].
In this paper, we show how Canonical Correlation Analysis (CCA) [11] can be used to exploit label
relatedness, learning multiple prediction problems simultaneously. CCA is a useful technique for
modeling dependencies between two (or more) sets of variables. One important application of CCA
is in supervised dimensionality reduction, albeit in the more general setting where each example has
several labels. In this setting, CCA on input-output pair (X, Y) can be used to project inputs X to
a low-dimensional space directed by label information Y. This makes CCA an ideal candidate for
extracting useful predictive features from data in the context of multi-label prediction problems.
The classical CCA formulation, however, has certain inherent limitations. It is non-probabilistic
which means that it cannot deal with missing data, and precludes a Bayesian treatment which can
be important if the dataset size is small. An even more crucial issue is choosing the number of correlation components, which is traditionally dealt with by using cross-validation or model selection
[21]. Another issue is the potential sparsity [18] of the underlying projections that is ignored by the
standard CCA formulation.
Building upon the recently suggested probabilistic interpretation of CCA [3], we propose a nonparametric, fully Bayesian framework that can deal with each of these issues. In particular, the proposed
model can automatically select the number of correlation components, and effectively capture the
sparsity underlying the projections. Our framework is based on the Indian Buffet Process [9], a
nonparametric Bayesian model to discover latent feature representation of a set of observations. In
addition, our probabilistic model allows dealing with missing data and, in the supervised dimensionality reduction case, can incorporate additional unlabeled data one may have access to, making our
CCA algorithm work in a semi-supervised setting. Thus, apart from being a general, nonparametric, fully Bayesian solution to the CCA problem, our framework can be readily applied for learning
useful predictive features from labeled (or partially labeled) data in the context of learning a set of
related tasks.
This paper is organized as follows. Section 2 introduces the CCA problem and its recently proposed
probabilistic interpretation. In section 3, we describe our general framework for infinite CCA. Section 4 gives a concrete example of an application (multi-label learning) where the proposed approach
can be applied. In particular, we describe a fully supervised setting (when the test data is not available at the time of training), and a semi-supervised setting with partial labels (when we have access
to test data at the time of training). We describe our experiments in section 5, and discuss related
work in section 6, drawing connections between the proposed method and previously proposed ones for this problem.
2 Canonical Correlation Analysis
Canonical correlation analysis (CCA) is a useful technique for modeling the relationships among a
set of variables. CCA computes a low-dimensional shared embedding of a set of variables such that
the correlation among the variables is maximized in the embedded space.
More formally, given a pair of variables x ∈ R^{D_1} and y ∈ R^{D_2}, CCA seeks to find linear projections
ux and uy such that the variables are maximally correlated in the projected space. The correlation
coefficient between the two variables in the embedded space is given by
\rho = \frac{u_x^T x y^T u_y}{\sqrt{(u_x^T x x^T u_x)\,(u_y^T y y^T u_y)}}
Since the correlation is not affected by rescaling of the projections ux and uy , CCA is posed as a
constrained optimization problem.
\max_{u_x, u_y} \; u_x^T x y^T u_y, \quad \text{subject to: } u_x^T x x^T u_x = 1, \; u_y^T y y^T u_y = 1
It can be shown that the above formulation is equivalent to solving the following generalized eigenvalue problem:
\begin{pmatrix} 0 & \Sigma_{xy} \\ \Sigma_{yx} & 0 \end{pmatrix} \begin{pmatrix} u_x \\ u_y \end{pmatrix} = \lambda \begin{pmatrix} \Sigma_{xx} & 0 \\ 0 & \Sigma_{yy} \end{pmatrix} \begin{pmatrix} u_x \\ u_y \end{pmatrix}
where Σ denotes the covariance matrix of size D × D (where D = D_1 + D_2) obtained from the
data samples X = [x1 , . . . , xn ] and Y = [y1 , . . . , yn ].
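As a concrete illustration of the generalized eigenvalue problem above, here is a minimal Python sketch (numpy/scipy assumed) that solves classical CCA from sample covariances; the small ridge term eps is our addition for numerical stability and is not part of the formulation in the text:

```python
import numpy as np
from scipy.linalg import eigh

def cca(X, Y, k=1, eps=1e-6):
    """X: (D1, N), Y: (D2, N), both centered. Returns top-k (Ux, Uy)."""
    (D1, N), D2 = X.shape, Y.shape[0]
    Sxx = X @ X.T / N + eps * np.eye(D1)
    Syy = Y @ Y.T / N + eps * np.eye(D2)
    Sxy = X @ Y.T / N
    # Block matrices of the generalized eigenproblem in the text.
    A = np.block([[np.zeros((D1, D1)), Sxy],
                  [Sxy.T, np.zeros((D2, D2))]])
    B = np.block([[Sxx, np.zeros((D1, D2))],
                  [np.zeros((D2, D1)), Syy]])
    vals, vecs = eigh(A, B)                  # symmetric generalized problem
    top = np.argsort(vals)[::-1][:k]         # largest = strongest correlations
    U = vecs[:, top]                         # each column stacks [ux; uy]
    return U[:D1], U[D1:]

# Toy usage: two views sharing one latent signal.
rng = np.random.default_rng(0)
z = rng.normal(size=200)
X = np.outer([1.0, -0.5, 0.2], z) + 0.1 * rng.normal(size=(3, 200))
Y = np.outer([0.7, 0.3], z) + 0.1 * rng.normal(size=(2, 200))
X -= X.mean(1, keepdims=True); Y -= Y.mean(1, keepdims=True)
Ux, Uy = cca(X, Y, k=1)
```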
2.1 Probabilistic CCA
Bach and Jordan [3] gave a probabilistic interpretation of CCA by posing it as a latent variable
model. To see this, let x and y be two random vectors of size D1 and D2 . Let us now consider the
following latent variable model
z ∼ Nor(0, I_K),  min{D_1, D_2} ≥ K
x ∼ Nor(μ_x + W_x z, Ψ_x),  W_x ∈ R^{D_1×K}, Ψ_x ≻ 0
y ∼ Nor(μ_y + W_y z, Ψ_y),  W_y ∈ R^{D_2×K}, Ψ_y ≻ 0
Equivalently, we can also write the above as
[x; y] ∼ Nor(μ + Wz, Ψ)
where μ = [μ_x; μ_y], W = [W_x; W_y], and Ψ is a block-diagonal matrix with Ψ_x and Ψ_y on its diagonal. [·; ·] denotes row-wise concatenation. The latent variable z is shared between x and y.
Bach and Jordan [3] showed that, given the maximum likelihood solution for the model parameters,
the expectations E(z|x) and E(z|y) of the latent variable z lie in the same subspace that classical
CCA finds, thereby establishing the equivalence between the above probabilistic model and CCA.
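A small generative sketch of this latent variable model may help; the dimensions, noise scale, and variable names below are illustrative assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
D1, D2, K, N = 5, 4, 2, 300
Wx = rng.normal(size=(D1, K))
Wy = rng.normal(size=(D2, K))
Z = rng.normal(size=(K, N))                    # shared latents, z ~ Nor(0, I_K)
X = Wx @ Z + 0.1 * rng.normal(size=(D1, N))    # x ~ Nor(Wx z, Psi_x)
Y = Wy @ Z + 0.1 * rng.normal(size=(D2, N))    # y ~ Nor(Wy z, Psi_y)
# Per the equivalence above, E[z|x] and E[z|y] lie in the subspace that
# classical CCA recovers from (X, Y).
```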
The probabilistic interpretation opens the door to several extensions of the basic setup proposed in [3], which suggested a maximum likelihood approach for parameter estimation. However, it still assumes an a priori fixed number of correlation components. In addition, another important
issue is the sparsity of the underlying projection matrix which is usually ignored.
3 The Infinite Canonical Correlation Analysis Model
Recall that the CCA problem can be defined as [x; y] ∼ Nor(Wz, Ψ) (assuming centered data). A
crucial issue in the CCA model is choosing the number of canonical correlation components which
is set to a fixed value in classical CCA (and even in the probabilistic extensions of CCA). In the
Bayesian formulation of CCA, one can use the Automatic Relevance Determination (ARD) prior
[5] on the projection matrix W that gives a way to select this number. However, it would be more
appropriate to have a principled way to automatically figure out this number based on the data.
We propose a nonparametric Bayesian model that selects the number of canonical correlation components automatically. More specifically, we use the Indian Buffet Process [9] (IBP) as a nonparametric prior on the projection matrix W. The IBP prior allows W to have an unbounded number
of columns which gives a way to automatically determine the dimensionality K of the latent space
associated with Z.
3.1 The Indian Buffet Process
The Indian Buffet Process [9] defines a distribution over infinite binary matrices, originally motivated by the need to model the latent feature structure of a given set of observations. The IBP
has been a model of choice in variety of non-parametric Bayesian approaches, such as for factorial
structure learning, learning causal structures, modeling dyadic data, modeling overlapping clusters,
and several others [9].
In the latent feature model, each observation can be thought of as being explained by a set of latent
features. Given an N × D matrix X of N observations having D features each, we can consider a decomposition of the form X = ZA + E, where Z is an N × K binary feature-assignment matrix describing which features are present in each observation. Z_{n,k} is 1 if feature k is present in observation n, and is otherwise 0. A is a K × D matrix of feature scores, and the matrix E consists of
observation-specific noise. A crucial issue in such models is choosing the number K of latent features. The standard formulation of IBP lets us define a prior over the binary matrix Z such that
it can have an unbounded number of columns and thus can be a suitable prior in problems dealing
with such structures.
The IBP derivation starts by defining a finite model for K many columns of an N × K binary matrix.

P(Z) = \prod_{k=1}^{K} \frac{\frac{\alpha}{K}\,\Gamma(m_k + \frac{\alpha}{K})\,\Gamma(P - m_k + 1)}{\Gamma(P + 1 + \frac{\alpha}{K})}    (1)

Here m_k = \sum_i Z_{ik}. In the limiting case, as K → ∞, it was shown in [9] that the binary matrix Z
generated by IBP is equivalent to one produced by a sequential generative process. This equivalence
can be best understood by a culinary analogy of customers coming to an Indian restaurant and selecting dishes from an infinite array of dishes. In this analogy, customers represent observations and dishes represent latent features. Customer 1 selects Poisson(α) dishes to begin with. Thereafter, each incoming customer n selects an existing dish k with probability m_k/n, where m_k denotes how many previous customers chose that particular dish. Customer n then goes on to select Poisson(α/n) additional new dishes. This process generates a binary matrix Z with rows
representing customers and columns representing dishes.

Figure 1: The graphical model depicts the fully supervised case when all variables X and Y are observed. The semi-supervised case can have X and/or Y consisting of missing values as well; the graphical model structure remains the same.

Many real-world datasets have a sparseness property, which means that each observation depends only on a subset of all the K latent features. This
This means that the binary matrix Z is expected to be reasonably sparse for many datasets. This
makes IBP a suitable choice for also capturing the underlying sparsity in addition to automatically
discovering the number of latent features.
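The sequential (culinary) generative process described above can be sketched in a few lines of Python; the choice of α, the number of customers, and the function name are illustrative assumptions of ours:

```python
import numpy as np

def sample_ibp(n_customers, alpha, seed=0):
    rng = np.random.default_rng(seed)
    dish_counts = []                                  # m_k for each dish k
    rows = []
    for n in range(1, n_customers + 1):
        # An existing dish k is chosen with probability m_k / n.
        row = [rng.random() < m / n for m in dish_counts]
        new = rng.poisson(alpha / n)                  # Poisson(alpha/n) new dishes
        dish_counts = [m + c for m, c in zip(dish_counts, row)] + [1] * new
        rows.append(row + [True] * new)
    K = len(dish_counts)
    Z = np.zeros((n_customers, K), dtype=int)
    for i, r in enumerate(rows):
        Z[i, :len(r)] = r
    return Z

Z = sample_ibp(20, alpha=2.0)   # sparse binary matrix; K chosen by the process
```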
3.2 The Infinite CCA Model
In our proposed framework, the matrix W consisting of canonical correlation vectors is modeled
using an IBP prior. However, since W can be real-valued and the IBP prior is defined only for binary matrices, we represent the (D_1 + D_2) × K matrix W as (B ⊙ V), where B = [B_x; B_y] is a (D_1 + D_2) × K binary matrix, V = [V_x; V_y] is a (D_1 + D_2) × K real-valued matrix, and ⊙ denotes their element-wise (Hadamard) product. We place an IBP prior on B that automatically determines K, and a Gaussian prior on V. Note that B and V have the same number of columns. Under this model, two random vectors x and y can be modeled as x = (B_x ⊙ V_x)z + E_x and y = (B_y ⊙ V_y)z + E_y. Here z is shared between x and y, and E_x and E_y are observation-specific noise.
In the full model, X = [x_1, . . . , x_N] is a D_1 × N matrix consisting of N samples of D_1 dimensions
each, and Y = [y1 , . . . , yN ] is another matrix consisting of N samples of D2 dimensions each. Here
is the generative story for our basic model:
B ∼ IBP(α)
V ∼ Nor(0, σ_v^2 I),  σ_v ∼ IG(a, b)
Z ∼ Nor(0, I)
[X; Y] ∼ Nor((B ⊙ V)Z, Ψ),
where Ψ is a diagonal matrix of size D × D with D = (D_1 + D_2), and each diagonal entry has an inverse-Gamma prior.
Since our model is probabilistic, it can also deal with the case when X or Y have missing entries.
This is particularly important in the case of supervised dimensionality reduction (i.e., X consisting
of inputs and Y associated responses) when the labels for some of the inputs are unknown, making
it a model for semi-supervised dimensionality reduction with partially labeled data. In addition,
placing the IBP prior on the projection matrix W (via the binary matrix B) also helps in capturing
the sparsity in W (see results section for evidence).
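Continuing the IBP sketch from section 3.1 (this snippet reuses sample_ibp defined there), the following illustrates the generative story above; the sizes and the noise level are assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
D1, D2, N = 6, 4, 200
D = D1 + D2
B = sample_ibp(D, alpha=2.0)                    # binary (D x K); K set by the IBP
K = B.shape[1]
V = rng.normal(size=(D, K))                     # V ~ Nor(0, sigma_v^2 I)
Z = rng.normal(size=(K, N))                     # shared latents
W = B * V                                       # element-wise (Hadamard) product
data = W @ Z + 0.1 * rng.normal(size=(D, N))    # [X; Y] ~ Nor((B o V) Z, Psi)
X, Y = data[:D1], data[D1:]
```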
3.3 Inference
We take a fully Bayesian approach by treating everything as latent variables and computing the
posterior distributions over them. We use Gibbs sampling with a few Metropolis-Hastings steps to
do inference in this model.
In what follows, D denotes the data [X; Y], B = [B_x; B_y], and V = [V_x; V_y].
Sampling B: Sampling the binary IBP matrix B consists of sampling existing dishes, proposing new
dishes and accepting or rejecting them based on the acceptance ratio in the associated M-H step. For
sampling existing dishes, an entry in B is set to 1 according to p(B_{ik} = 1 | D, B_{−ik}, V, Z, Ψ) ∝ (m_{−i,k}/D) p(D | B, V, Z, Ψ), whereas it is set to 0 according to p(B_{ik} = 0 | D, B_{−ik}, V, Z, Ψ) ∝ ((D − m_{−i,k})/D) p(D | B, V, Z, Ψ). Here m_{−i,k} = Σ_{j≠i} B_{jk} is the number of other customers (rows) that chose dish k.
For sampling new dishes, we use an M-H step in which we simultaneously propose θ* = (K^new, V^new, Z^new), where K^new ∼ Poisson(α/D). We accept the proposal with an acceptance probability given by a = min{1, p(rest | θ*)/p(rest | θ)}. Here, p(rest | θ) is the probability of the data given parameters θ. We propose V^new from its prior (Gaussian) but, for faster mixing, we propose Z^new from its posterior.
Sampling V: We sample the real-valued matrix V from its posterior p(V_{i,k} | D, B, Z, Ψ) ∝ Nor(V_{i,k} | μ_{i,k}, σ_{i,k}), where σ_{i,k} = (Σ_{n=1}^{N} Z_{k,n}^2 / Ψ_i + 1/σ_v^2)^{−1} and μ_{i,k} = σ_{i,k} (Σ_{n=1}^{N} Z_{k,n} D*_{i,n}) Ψ_i^{−1}. We define D*_{i,n} = D_{i,n} − Σ_{l=1, l≠k}^{K} (B_{i,l} V_{i,l}) Z_{l,n}, the residual excluding component k. The hyperparameter σ_v on V has an inverse-gamma prior, and its posterior also has the same form. Note that the number of columns in V is the same as the number of columns in the IBP matrix B.
Sampling Z: We sample Z from its posterior p(Z | D, B, V, Ψ) ∝ Nor(Z | μ, Σ), where μ = W^T (WW^T + Ψ)^{−1} D and Σ = I − W^T (WW^T + Ψ)^{−1} W, with W = B ⊙ V.
Note that, in our sampling scheme, we considered the matrices Bx and By as simply parts of the big
IBP matrix B, and sampled them together using a single IBP draw. However, one could also sample
them separately as two separate IBP matrices for Bx and By . This would require different IBP draws
for sampling Bx and By with some modification of the existing Gibbs sampler. Different IBP draws
could result in different number of nonzero columns in Bx and By . To deal with this issue, one
could sample Bx (say having Kx nonzero columns) and By (say having Ky nonzero columns) first,
introduce extra dummy columns (|K_x − K_y| in number) in the matrix having the smaller number of nonzero columns, and then set all such columns to zero. The effective K for each iteration of the Gibbs
sampler would be max{Kx , Ky }. A similar scheme could also be followed for the corresponding
real-valued matrices Vx and Vy , sampling them in conjunction with Bx and By respectively.
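As an illustration of the Gibbs update for Z given above, the following sketch draws Z column-wise from its Gaussian posterior; Ψ is assumed diagonal as in the model, and the jitter added before the Cholesky factorization is a numerical convenience of ours:

```python
import numpy as np

def sample_Z(Dmat, W, psi, seed=2):
    """Dmat: (D, N) data; W = B*V: (D, K); psi: (D,) diagonal of Psi."""
    rng = np.random.default_rng(seed)
    K = W.shape[1]
    S_inv = np.linalg.inv(W @ W.T + np.diag(psi))     # (W W^T + Psi)^{-1}
    mu = W.T @ S_inv @ Dmat                           # posterior means, (K, N)
    Sigma = np.eye(K) - W.T @ S_inv @ W               # shared posterior covariance
    L = np.linalg.cholesky(Sigma + 1e-9 * np.eye(K))  # jitter for stability
    return mu + L @ rng.normal(size=mu.shape)

# e.g., with the generative sketch of section 3.2:
# Z_new = sample_Z(data, W, psi=np.full(D, 0.01))
```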
4 Multitask Learning using Infinite CCA
Having set up the framework for infinite CCA, we now describe its applicability for the problem
of multitask learning. In particular, we consider the setting when each example is associated with
multiple labels. Here predicting each individual label becomes a task to be learned. Although one
can individually learn a separate model for each task, doing this would ignore the label correlations. This makes borrowing the information across tasks crucial, making it imperative to share the
statistical strength across all the tasks. With this motivation, we apply our infinite CCA model to
capture the label correlations and to learn better predictive features from the data by projecting it onto
a subspace directed by label information. It has been empirically and theoretically [25] shown that
incorporating label information in dimensionality reduction indeed leads to better projections if the
final goal is prediction.
More concretely, let X = [x_1, . . . , x_N] be a D × N matrix of predictor variables, and Y = [y_1, . . . , y_N] be an M × N matrix of the response variables (i.e., the labels), with each y_i being an M × 1 vector of responses for input x_i.
(for classification) values. The infinite CCA model is applied on the pair X and Y which is akin to
doing supervised dimensionality reduction for the inputs X. Note that the generalized eigenvalue
problem posed in such a supervised setting of CCA consists of the cross-covariance matrix Σ_XY and the label covariance matrix Σ_YY. Therefore the projection takes into account both the input-output
correlations and the label correlations. Such a subspace therefore is expected to consist of much
better predictive features than one obtained by a naïve feature extraction approach such as simple
PCA that completely ignores the label information, or approaches like Linear Discriminant Analysis
(LDA) that do take into account label information but ignore label correlations.
Multitask learning using the infinite CCA model can be done in two settings, supervised and semi-supervised, depending on whether or not the inputs of the test data are involved in learning the shared
subspace Z.
4.1 Fully supervised setting
In the supervised setting, CCA is done on labeled data (X, Y) to give a single shared subspace Z ∈ R^{K×N} that is good across all tasks. A model is then learned in the Z subspace for each task, giving M task parameters {θ_m} ∈ R^{K×1}, where m ∈ {1, . . . , M}. Each of the parameters θ_m is then used to predict the labels for the test data of task m. However, since the test data is still D
dimensional, we need to either separately project it down onto the K dimensional subspace and do
predictions in this subspace, or 'inflate' each task parameter back to D dimensions by applying
the projection matrix Wx and do predictions in the original D dimensional space. The first option
requires using the fact that P(Z | X_te) ∝ P(X_te | Z) P(Z), which is a Gaussian Nor(μ_{Z|X}, Σ_{Z|X}) with μ_{Z|X} = (W_x^T Ψ_x^{−1} W_x + I)^{−1} W_x^T Ψ_x^{−1} X_te and Σ_{Z|X} = (W_x^T Ψ_x^{−1} W_x + I)^{−1}. With the second option, we
can inflate each learned task parameter back to D dimensions by applying the projection matrix Wx .
We choose the second option for the experiments. We call this fully supervised setting model-1.
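For completeness, the first test-time option discussed above (projecting test inputs through the Gaussian posterior) can be sketched as follows; this follows our reconstruction of the posterior expressions with a diagonal Ψ_x, and all names are illustrative:

```python
import numpy as np

def project_test(Xte, Wx, psi_x):
    """Xte: (D1, Nte); psi_x: (D1,) diagonal of Psi_x. Returns mu_{Z|X}, (K, Nte)."""
    WtP = Wx.T / psi_x                        # W_x^T Psi_x^{-1} for diagonal Psi_x
    A = WtP @ Wx + np.eye(Wx.shape[1])        # W_x^T Psi_x^{-1} W_x + I
    return np.linalg.solve(A, WtP @ Xte)      # posterior mean embedding
```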
4.2 A Semi-supervised setting
In the semi-supervised setting, we combine training data and test data (with unknown labels) as
X = [Xtr , Xte ] and Y = [Ytr , Yte ] where the labels Yte are unknown. The infinite CCA model is
then applied to the pair (X, Y), and the parts of Y consisting of Y_te are treated as latent variables
to be imputed. With this model, we get the embeddings also for the test data and thus training and
testing both take place in the K dimensional subspace, unlike model-1 in which training is done in
the K-dimensional subspace and predictions are made in the original D-dimensional space. We call this semi-supervised setting model-2.
5 Experiments
Here we report our experimental results on several synthetic and real world datasets. We first show
our results with the infinite CCA as a stand alone algorithm for CCA by using it on a synthetic
dataset demonstrating its effectiveness in capturing the canonical correlations. We then also report
our experiments on applying the infinite CCA model to the problem of multitask learning on two
real world datasets.
5.1 Infinite CCA results on synthetic data
In the first experiment, we demonstrate the effectiveness of our proposed infinite CCA model in
discovering the correct number of canonical correlation components, and in capturing the sparsity
pattern underlying the projection matrix. For this, we generated two datasets of dimensions 25 and
10 respectively, with each having 100 samples. For this synthetic dataset, we knew the ground truth
(i.e., the number of components, and the underlying sparsity of projection matrix). In particular, the
dataset had 4 correlation components with a 63% sparsity in the true projection matrix. We then
ran both classical CCA and infinite CCA algorithm on this dataset. Looking at all the correlations
discovered by classical CCA, we found that it discovered 8 components having significant correlations, whereas our model correctly discovered exactly 4 components in the first place (we extract
the MAP samples for W and Z output by our Gibbs sampler). Thus on this small dataset, standard
CCA indeed seems to be finding spurious correlations, indicating a case of overfitting (the overfitting problem of classical CCA was also observed in [15] when comparing Bayesian versus classical
CCA). Furthermore, as expected, the projection matrix inferred by the classical CCA had no exact
zero entries and even after thresholding significantly small absolute values to zero, the uncovered
sparsity was only about 25%. On the other hand, the projection matrix inferred by the infinite CCA
model had 57% exact zero entries and 62% zero entries after thresholding very small values, thereby
demonstrating its effectiveness in also capturing the sparsity patterns.
Model    | Yeast                                | Scene
         | Acc     F1-macro  F1-micro  AUC      | Acc     F1-macro  F1-micro  AUC
Full     | 0.5583  0.3132    0.3929    0.5054   | 0.7565  0.3445    0.3527    0.6339
PCA      | 0.5612  0.3144    0.4648    0.5026   | 0.7233  0.2857    0.2734    0.6103
CCA      | 0.5441  0.2888    0.3923    0.5135   | 0.7496  0.3342    0.3406    0.6346
Model-1  | 0.5842  0.3327    0.4402    0.5232   | 0.7533  0.3630    0.3732    0.6517
Model-2  | 0.6156  0.3463    0.4954    0.5386   | 0.7664  0.3742    0.3825    0.6686

Table 1: Results on the multi-label classification task. Bold face indicates the best performance. Model-1 and Model-2 scores are averaged over 10 runs with different initializations.
5.2 Infinite CCA applied to multi-label prediction
In the second experiment, we use the infinite CCA model to learn a set of related tasks in the context of
multi-label prediction. For our experiments, we use two real-world multi-label datasets (Yeast and
Scene) from the UCI repository. The Yeast dataset consists of 1500 training and 917 test examples,
each having 103 features. The number of labels (or tasks) per example is 14. The Scene dataset
consists of 1211 training and 1196 test examples, each having 294 features. The number of labels
per example for this dataset is 6. We compare the following models for our experiments.
- Full: Train separate classifiers (SVM) on the full feature set for each task.
- PCA: Apply PCA on training and test data and then train separate classifiers for each task in the low-dimensional subspace. This baseline ignores the label information while learning the low-dimensional subspace.
- CCA: Apply classical CCA on training data to extract the shared subspace, learn a separate model (i.e., task parameters) for each task in this subspace, project the task parameters back to the original D-dimensional feature space by applying the projection W_x, and do predictions on the test data in this feature space.
- Model-1: Use our supervised infinite CCA model to learn the shared subspace using only the training data (see section 4.1).
- Model-2: Use our semi-supervised infinite CCA model to simultaneously learn the shared subspace for both training and test data (see section 4.2).
The performance metrics used are overall accuracy, F1-Macro, F1-Micro, and AUC (Area Under
ROC Curve). For PCA and CCA, we chose the K that gives the best performance, whereas this parameter was learned automatically for both of our proposed models. The results are shown in Table 1.
As we can see, both the proposed models do better than the other baselines. Of the two proposed
models, we see that model-2 does better in most cases, suggesting that it is useful to incorporate
the test data while learning the projections. This is possible in our probabilistic model since we
could treat the unknown Y's of the test data as latent variables to be imputed while doing the Gibbs
sampling.
We note here that our results are for cases where we only had access to a small number of related tasks (Yeast has 14, Scene has 6). We expect the performance improvements to be even more significant
when the number of (related) tasks is high.
6 Related Work
A number of approaches have been proposed in the recent past for the problem of supervised dimensionality reduction of multi-label data. The few approaches that exist include Partial Least Squares
[2], multi-label informed latent semantic indexing [24], and multi-label dimensionality reduction using dependence maximization (MDDM) [26]. None of these, however, deal with the case when the
data is only partially labeled. Somewhat similar in spirit to our approach is the work on supervised
probabilistic PCA [25] that extends probabilistic PCA to the setting when we also have access to
labels. However, it assumes a fixed number of components and does not take into account sparsity
of the projections.
The CCA based approach to supervised dimensionality reduction is more closely related to the
notion of dimension reduction for regression (DRR), which is formally defined as finding a low-dimensional representation z ∈ R^K of inputs x ∈ R^D (K ≪ D) for predicting multivariate outputs y ∈ R^M. An important notion in DRR is that of sufficient dimensionality reduction (SDR) [10, 8], which states that given z, x and y are conditionally independent, i.e., x ⊥⊥ y | z. As we can see in the
graphical model shown in figure-1, the probabilistic interpretation of CCA yields the same condition
with X and Y being conditionally independent given Z.
Among the DRR based approaches to dimensionality reduction for real-valued multilabel data, Covariance Operator Inverse Regression (COIR) exploits the covariance structures of both the inputs
and outputs [14]. Please see [14] for more details on the connection between COIR and CCA. Besides the DRR based approaches, the problem of extracting useful features from data, particularly
with the goal of making predictions, has also been considered in other settings. The information
bottleneck (IB) method [19] is one such example. Given input-output pairs (X, Y), the information
bottleneck method aims to obtain a compressed representation T of X that can account for Y. IB
achieves this using a single tradeoff parameter to represent the tradeoff between the complexity of
the representation of X, measured by I(X; T), and the accuracy of this representation, measured
by I(T; Y), where I(.; .) denotes the mutual information between two variables. In another recent
work [13], a joint learning framework is proposed which performs dimensionality reduction and
multi-label classification simultaneously.
In the context of CCA as a stand-alone problem, sparsity is another important issue. In particular,
sparsity improves model interpretation and has been gaining lots of attention recently. Existing
works on sparsity in CCA include the double barrelled lasso which is based on a convex least squares
approach [17], and CCA as a sparse solution to the generalized eigenvalue problem [18] which is
based on constraining the cardinality of the solution to the generalized eigenvalue problem to obtain
a sparse solution. Another recent solution is based on a direct greedy approach which bounds the
correlation at each stage [22].
The probabilistic approaches to CCA include the works of [15] and [1], both of which use an automatic relevance determination (ARD) prior [5] to determine the number of relevant components,
which is a rather ad-hoc way of doing this. In contrast, a nonparametric Bayesian alternative proposed here is a more principled to determine the number of components.
We note that the sparse factor analysis model proposed in [16] actually falls out as a special case
of our proposed infinite CCA model if one of the datasets (X or Y) is absent. Besides, the sparse
factor analysis model is limited to factor analysis whereas the proposed model can be seen as an infinite generalization of both an unsupervised problem (sparse CCA), and (semi)supervised problem
(dimensionality reduction using CCA with full or partial label information), with the latter being
especially relevant for multitask learning in the presence of multiple labels.
Finally, multitask learning has been tackled using a variety of different approaches, primarily depending on what notion of task relatedness is assumed. Some of the examples include tasks generated from an IID space [4], and learning multiple tasks using a hierarchical prior over the task space
[23, 7], among others. In this work, we consider multi-label prediction in particular, based on the
premise that that a set of such related tasks share an underlying low-dimensional feature space [12]
that captures the task relatedness.
7 Conclusion
We have presented a nonparametric Bayesian model for the Canonical Correlation Analysis problem
to discover the dependencies between a set of variables. In particular, our model does not assume
a fixed number of correlation components and this number is determined automatically based only
on the data. In addition, our model enjoys sparsity making the model more interpretable. The
probabilistic nature of our model also allows dealing with missing data. Finally, we also demonstrate
the model?s applicability to the problem of multi-label learning where our model, directed by label
information, can be used to automatically extract useful predictive features from the data.
Acknowledgements
We thank the anonymous reviewers for helpful comments. This work was partially supported by
NSF grant IIS-0712764.
References
[1] C. Archambeau and F. Bach. Sparse probabilistic projections. In Neural Information Processing Systems 21, 2008.
[2] J. Arenas-García, K. B. Petersen, and L. K. Hansen. Sparse kernel orthonormalized PLS for feature extraction in large data sets. In Neural Information Processing Systems 19, 2006.
[3] F. R. Bach and M. I. Jordan. A Probabilistic Interpretation of Canonical Correlation Analysis. In Technical Report 688, Dept. of Statistics, University of California, 2005.
[4] J. Baxter. A Model of Inductive Bias Learning. Journal of Artificial Intelligence Research, 12:149-198, 2000.
[5] C. M. Bishop. Bayesian PCA. In Neural Information Processing Systems 11, Cambridge, MA, USA, 1999. MIT Press.
[6] R. Caruana. Multitask Learning. Machine Learning, 28(1):41-75, 1997.
[7] H. Daumé III. Bayesian Multitask Learning with Latent Hierarchies. In Conference on Uncertainty in Artificial Intelligence, Montreal, Canada, 2009.
[8] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. J. Mach. Learn. Res., 5:73-99, 2004.
[9] Z. Ghahramani, T. L. Griffiths, and P. Sollich. Bayesian Nonparametric Latent Feature Models. In Bayesian Statistics 8. Oxford University Press, 2007.
[10] A. Globerson and N. Tishby. Sufficient dimensionality reduction. J. Mach. Learn. Res., 3:1307-1331, 2003.
[11] H. Hotelling. Relations Between Two Sets of Variables. Biometrika, pages 321-377, 1936.
[12] S. Ji, L. Tang, S. Yu, and J. Ye. Extracting Shared Subspace for Multi-label Classification. 2008.
[13] S. Ji and J. Ye. Linear dimensionality reduction for multi-label classification. In Twenty-first International Joint Conference on Artificial Intelligence, 2009.
[14] M. Kim and V. Pavlovic. Covariance operator based dimensionality reduction with extension to semi-supervised settings. In Twelfth International Conference on Artificial Intelligence and Statistics, Florida, USA, 2009.
[15] A. Klami and S. Kaski. Local dependent components. In ICML '07: Proceedings of the 24th International Conference on Machine Learning, 2007.
[16] P. Rai and H. Daumé III. The infinite hierarchical factor regression model. In Neural Information Processing Systems 21, 2008.
[17] D. Hardoon and J. Shawe-Taylor. The Double-Barrelled LASSO (Sparse Canonical Correlation Analysis). In Workshop on Learning from Multiple Sources (NIPS), 2008.
[18] B. Sriperumbudur, D. Torres, and G. Lanckriet. The Sparse Eigenvalue Problem. In arXiv:0901.1504v1, 2009.
[19] N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. In Proc. of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368-377.
[20] N. Ueda and K. Saito. Parametric Mixture Models for Multi-labeled Text. Advances in Neural Information Processing Systems, pages 737-744, 2003.
[21] C. Wang. Variational Bayesian approach to Canonical Correlation Analysis. In IEEE Transactions on Neural Networks, 2007.
[22] A. Wiesel, M. Kliger, and A. Hero. A Greedy Approach to Sparse Canonical Correlation Analysis. In arXiv:0801.2748, 2008.
[23] Y. Xue, X. Liao, L. Carin, and B. Krishnapuram. Multi-task Learning for Classification with Dirichlet Process Priors. The Journal of Machine Learning Research, 8:35-63, 2007.
[24] K. Yu, S. Yu, and V. Tresp. Multi-label Informed Latent Semantic Indexing. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 258-265. ACM, New York, NY, USA, 2005.
[25] S. Yu, K. Yu, V. Tresp, H. Kriegel, and M. Wu. Supervised Probabilistic Principal Component Analysis. In KDD '06: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006.
[26] Y. Zhang and Z. H. Zhou. Multi-Label Dimensionality Reduction via Dependence Maximization. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, AAAI 2008, pages 1503-1505, 2008.
2,950 | 3,674 | Unsupervised feature learning for audio classification
using convolutional deep belief networks
Honglak Lee, Yan Largman, Peter Pham, and Andrew Y. Ng
Computer Science Department, Stanford University, Stanford, CA 94305
Abstract
In recent years, deep learning approaches have gained significant interest as a
way of building hierarchical representations from unlabeled data. However, to
our knowledge, these deep learning approaches have not been extensively studied for auditory data. In this paper, we apply convolutional deep belief networks to audio data and empirically evaluate them on various audio classification
tasks. In the case of speech data, we show that the learned features correspond to
phones/phonemes. In addition, our feature representations learned from unlabeled
audio data show very good performance for multiple audio classification tasks.
We hope that this paper will inspire more research on deep learning approaches
applied to a wide range of audio recognition tasks.
1 Introduction
Understanding how to recognize complex, high-dimensional audio data is one of the greatest challenges of our time. Previous work [1, 2] revealed that learning a sparse representation of auditory
signals leads to filters that closely correspond to those of neurons in early audio processing in mammals. For example, when sparse coding models are applied to natural sounds or speech, the learned
representations (basis vectors) showed a striking resemblance to the cochlear filters in the auditory
cortex. In related work, Grosse et al. [3] proposed an efficient sparse coding algorithm for auditory
signals and demonstrated its usefulness in audio classification tasks.
However, the proposed methods have been applied to learn relatively shallow, one-layer representations. Learning more complex, higher-level representation is still a non-trivial, challenging problem.
Recently, many promising approaches have been proposed to learn the processing steps of the 'second stage and beyond' [4, 5, 6, 7, 8]. These 'deep learning' algorithms try to learn simple features
in the lower layers and more complex features in the higher layers. However, to the best of our
knowledge, these 'deep learning' approaches have not been extensively applied to auditory data.
The deep belief network [4] is a generative probabilistic model composed of one visible (observed)
layer and many hidden layers. Each hidden layer unit learns a statistical relationship between the
units in the lower layer; the higher layer representations tend to become more complex. The deep
belief network can be efficiently trained using greedy layerwise training, in which the hidden layers
are trained one at a time in a bottom-up fashion [4]. Recently, convolutional deep belief networks [9]
have been developed to scale up the algorithm to high-dimensional data. Similar to deep belief
networks, convolutional deep belief networks can be trained in a greedy, bottom-up fashion. By
applying these networks to images, Lee et al. (2009) showed good performance in several visual
recognition tasks [9].
In this paper, we will apply convolutional deep belief networks to unlabeled auditory data (such as
speech and music) and evaluate the learned feature representations on several audio classification
tasks. In the case of speech data, we show that the learned features correspond to phones/phonemes.
In addition, our feature representations outperform other baseline features (spectrogram and MFCC)
for multiple audio classification tasks. In particular, our method compares favorably with other state-of-the-art algorithms for the speaker identification task. For the phone classification task, MFCC
features can be augmented with our features to improve accuracy. We also show for certain tasks
that the second-layer features produce higher accuracy than the first-layer features, which justifies
the use of deep learning approaches for audio classification. Finally, we show that our features give
better performance in comparison to other baseline features for music classification tasks. In our
experiments, the learned features often performed much better than other baseline features when
there was only a small number of labeled training examples. To the best of our knowledge, we are
the first to apply deep learning algorithms to a range of audio classification tasks. We hope that this
paper will inspire more research on deep learning approaches applied to audio recognition tasks.
2 Algorithms
2.1 Convolutional deep belief networks
We first briefly review convolutional restricted Boltzmann machines (CRBMs) [9, 10, 11] as building
blocks for convolutional deep belief networks (CDBNs). We will follow the formulation of [9] and
adapt it to a one dimensional setting. For the purpose of this explanation, we assume that all inputs
to the algorithm are single-channel time-series data with nV frames (an nV dimensional vector);
however, the formulation can be straightforwardly extended to the case of multiple channels.
The CRBM is an extension of the 'regular' RBM [4] to a convolutional setting, in which the weights
between the hidden units and the visible units are shared among all locations in the hidden layer.
The CRBM consists of two layers: an input (visible) layer V and a hidden layer H. The hidden
units are binary-valued, and the visible units are binary-valued or real-valued.
Consider the input layer consisting of an n_V-dimensional array of binary units. To construct the hidden layer, consider K n_W-dimensional filter weights W^k (also referred to as 'bases' throughout this paper). The hidden layer consists of K 'groups' of n_H-dimensional arrays (where n_H ≜ n_V − n_W + 1) with units in group k sharing the weights W^k. There is also a shared bias b_k for each
group and a shared bias c for the visible units. The energy function can then be defined as:
E(v, h) = -\sum_{k=1}^{K} \sum_{j=1}^{n_H} \sum_{r=1}^{n_W} h_j^k W_r^k v_{j+r-1} - \sum_{k=1}^{K} b_k \sum_{j=1}^{n_H} h_j^k - c \sum_{i=1}^{n_V} v_i.   (1)
Similarly, the energy function of a CRBM with real-valued visible units can be defined as:

E(v, h) = \frac{1}{2} \sum_{i=1}^{n_V} v_i^2 - \sum_{k=1}^{K} \sum_{j=1}^{n_H} \sum_{r=1}^{n_W} h_j^k W_r^k v_{j+r-1} - \sum_{k=1}^{K} b_k \sum_{j=1}^{n_H} h_j^k - c \sum_{i=1}^{n_V} v_i.   (2)
The joint and conditional probability distributions are defined as follows:

P(v, h) = \frac{1}{Z} \exp(-E(v, h))   (3)
P(h_j^k = 1 \mid v) = \mathrm{sigmoid}\big((\tilde{W}^k \ast_v v)_j + b_k\big)   (4)
P(v_i = 1 \mid h) = \mathrm{sigmoid}\big(\textstyle\sum_k (W^k \ast_f h^k)_i + c\big)   (for binary visible units)   (5)
P(v_i \mid h) = \mathcal{N}\big(\textstyle\sum_k (W^k \ast_f h^k)_i + c,\; 1\big)   (for real visible units),   (6)

where \ast_v is a "valid" convolution, \ast_f is a "full" convolution,¹ and \tilde{W}^k_j = W^k_{n_W - j + 1}. Since all units
in one layer are conditionally independent given the other layer, inference in the network can be
efficiently performed using block Gibbs sampling. Lee et al. [9] further developed a convolutional
RBM with "probabilistic max-pooling," where the maxima over small neighborhoods of hidden
units are computed in a probabilistically sound way. (See [9] for more details.) In this paper, we use
CRBMs with probabilistic max-pooling as building blocks for convolutional deep belief networks.
¹ Given an m-dimensional vector and an n-dimensional kernel (where m > n), valid convolution gives an (m - n + 1)-dimensional vector, and full convolution gives an (m + n - 1)-dimensional vector.
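To make the two convolution types and the conditionals of Eqs. (4)-(6) concrete, here is a minimal NumPy sketch (our illustration, not the authors' code); the valid cross-correlation of v with a filter W^k is equivalent to the valid convolution with the flipped filter W-tilde^k used in Eq. (4):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hidden_probs(v, W, b):
    """P(h_j^k = 1 | v), Eq. (4): valid cross-correlation of the visible
    vector v (length n_V) with each filter W[k] (length n_W), plus the
    group bias b[k]. Returns an array of shape (K, n_H)."""
    return np.stack([sigmoid(np.correlate(v, W[k], mode='valid') + b[k])
                     for k in range(len(W))])

def visible_activation(h, W, c):
    """Pre-activation of the visible units, Eqs. (5)-(6): full convolution
    of each hidden group h[k] (length n_H) with its filter, summed over
    the K groups; pass through sigmoid for binary units, or use as the
    mean of a unit-variance Gaussian for real-valued units."""
    return sum(np.convolve(h[k], W[k], mode='full') for k in range(len(W))) + c
```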
For training the convolutional RBMs, computing the exact gradient for the log-likelihood term is intractable. However, contrastive divergence [12] can be used to approximate the gradient effectively.
Since a typical CRBM is highly overcomplete, a sparsity penalty term is added to the log-likelihood
objective [8, 9]. More specifically, the training objective can be written as

\min_{W, b, c} \; L_{\text{likelihood}}(W, b, c) + L_{\text{sparsity}}(W, b, c),   (7)
where Llikelihood is a negative log-likelihood that measures how well the CRBM approximates the
input data distribution, and Lsparsity is a penalty term that constrains the hidden units to having
sparse average activations. This sparsity regularization can be viewed as limiting the "capacity"
of the network, and it often results in more easily interpretable feature representations. Once the
parameters for all the layers are trained, we stack the CRBMs to form a convolutional deep belief
network. For inference, we use feed-forward approximation.
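As an illustration of the training objective in Eq. (7), the sketch below performs one CD-1 update for a binary-input CRBM with a simple sparsity penalty that pulls the mean hidden activations toward a small target rho. It reuses sigmoid, hidden_probs, and visible_activation from the sketch above; the learning rate, target, and penalty weight are illustrative placeholders, not the authors' settings.

```python
import numpy as np

def cd1_update(v, W, b, c, lr=0.01, rho=0.02, sparsity_cost=0.1):
    """One contrastive-divergence step for a binary 1-D CRBM (a sketch)."""
    h0 = hidden_probs(v, W, b)                        # positive phase, (K, n_H)
    h_sample = (np.random.rand(*h0.shape) < h0).astype(float)
    v1 = sigmoid(visible_activation(h_sample, W, c))  # one Gibbs step (mean field)
    h1 = hidden_probs(v1, W, b)                       # negative phase
    for k in range(len(W)):
        # <h_j v_{j+r-1}> under the data minus under the reconstruction:
        pos = np.correlate(v, h0[k], mode='valid')    # length n_W
        neg = np.correlate(v1, h1[k], mode='valid')
        W[k] += lr * (pos - neg)
    b += lr * (h0.mean(axis=1) - h1.mean(axis=1))
    c += lr * (v.mean() - v1.mean())
    # L_sparsity of Eq. (7): push average activations toward the target rho.
    b -= lr * sparsity_cost * (h0.mean(axis=1) - rho)
    return W, b, c
```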
2.2 Application to audio data
For the application of CDBNs to audio data, we first convert time-domain signals into spectrograms. However, the dimensionality of the spectrograms is large (e.g., 160 channels). We apply
PCA whitening to the spectrograms and create lower dimensional representations. Thus, the data
we feed into the CDBN consists of nc channels of one-dimensional vectors of length nV , where nc is
the number of PCA components in our representation. Similarly, the first-layer bases are comprised
of nc channels of one-dimensional filters of length nW .
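The preprocessing pipeline just described can be sketched as follows (ours; the 20 ms window, 10 ms hop, and 80 whitening components follow the values reported in Section 3.1, while the Hann window and the eps constant are illustrative choices):

```python
import numpy as np

def spectrogram(signal, sr, win_ms=20, hop_ms=10):
    """Magnitude spectrogram with a 20 ms window and 10 ms hop."""
    win = int(sr * win_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    frames = np.stack([signal[i:i + win] * np.hanning(win)
                       for i in range(0, len(signal) - win, hop)])
    return np.abs(np.fft.rfft(frames, axis=1))        # (n_frames, n_channels)

def pca_whiten(X, n_components=80, eps=1e-8):
    """Project frames onto the top principal components and rescale each
    component to unit variance; rows become the n_c-channel CDBN input."""
    Xc = X - X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(Xc.T @ Xc / len(Xc))
    top = np.argsort(eigval)[::-1][:n_components]
    P = eigvec[:, top] / np.sqrt(eigval[top] + eps)   # whitening projection
    return Xc @ P
```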
3 Unsupervised feature learning
3.1 Training on unlabeled TIMIT data
We trained the first and second-layer CDBN representations using a large, unlabeled speech dataset.
First, we extracted the spectrogram from each utterance of the TIMIT training data [13]. The spectrogram had a 20 ms window size with 10 ms overlaps. The spectrogram was further processed using
PCA whitening (with 80 components) to reduce the dimensionality. We then trained 300 first-layer
bases with a filter length (nW ) of 6 and a max-pooling ratio (local neighborhood size) of 3. We
further trained 300 second-layer bases using the max-pooled first-layer activations as input, again
with a filter length of 6 and a max-pooling ratio of 3.
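Since probabilistic max-pooling is used at both layers here (pooling ratio 3) but only referenced to [9], a small sketch may help. This is our reading of the construction in [9]: within each pooling block at most one hidden unit can be on, and the pooling unit is on exactly when some unit in its block is on.

```python
import numpy as np

def prob_max_pool(I, ratio=3):
    """Probabilistic max-pooling over non-overlapping neighborhoods [9].

    I : (K, n_H) hidden pre-activations (filter responses plus bias).
    Returns (hidden, pool): hidden[k, j] = P(h_j^k = 1 | v) under a softmax
    over each block augmented with an 'all off' state, and pool[k, j'] is
    the probability that the pooling unit over block j' is on.
    """
    K, n_H = I.shape
    n_P = n_H // ratio
    blocks = I[:, :n_P * ratio].reshape(K, n_P, ratio)
    m = blocks.max(axis=2, keepdims=True)             # for numerical stability
    expb = np.exp(blocks - m)
    off = np.exp(-m)                                  # rescaled 'all off' weight
    denom = off + expb.sum(axis=2, keepdims=True)
    hidden = (expb / denom).reshape(K, -1)
    pool = 1.0 - (off / denom)[:, :, 0]
    return hidden, pool
```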
3.2 Visualization
In this section, we illustrate what the network "learns" through visualization. We visualize the first-layer bases by multiplying the inverse of the PCA whitening on each first-layer basis (Figure 1).
Each second-layer basis is visualized as a weighted linear combination of the first-layer bases.
Figure 1: Visualization of randomly selected first-layer CDBN bases trained on the TIMIT data.
Each column represents a "temporal receptive field" of a first-layer basis in the spectrogram space.
The frequency channels are ordered from the lowest frequency (bottom) to the highest frequency
(top). All figures in the paper are best viewed in color.
3.2.1 Phonemes and the CDBN features
In Figure 2, we show how our bases relate to phonemes by comparing visualizations of each
phoneme with the bases that are most activated by that phoneme.
For each phoneme, we show five spectrograms of sound clips of that phoneme (top five columns in
each phoneme group), and the five first-layer bases with the highest average activations on the given
phoneme (bottom five columns in each phoneme group). Many of the first-layer bases closely match
the shapes of phonemes. There are prominent horizontal bands in the lower frequencies of the first-layer bases that respond most to vowels (for example, "ah" and "oy"). The bases that respond most
[Figure 2 image: four panels ("ah", "oy", "el", "s"), each showing five example phone spectrograms (top) and five first-layer bases (bottom)]
Figure 2: Visualization of the four different phonemes and their corresponding first-layer CDBN
bases. For each phoneme: (top) the spectrograms of the five randomly selected phones; (bottom)
five first-layer bases with the highest average activations on the given phoneme.
to fricatives (for example, "s") typically take the form of widely distributed areas of energy in the
high frequencies of the spectrogram. Both of these patterns reflect the structure of the corresponding
phoneme spectrograms.
Closer inspection of the bases provides slight evidence that the first-layer bases also capture more
fine-grained details. For example, the first and third "oy" bases reflect the upward-slanting pattern in the phoneme spectrograms. The top "el" bases mirror the intensity patterns of the corresponding
phoneme spectrograms: a high intensity region appears in the lowest frequencies, and another region
of lesser intensity appears a bit higher up.
3.2.2 Speaker gender information and the CDBN features
In Figure 3, we show an analysis of two-layer CDBN feature representations with respect to the gender classification task (Section 4.2). Note that the network was trained on unlabeled data; therefore,
no information about speaker gender was given during training.
[Figure 3 image: example "ae" phones and the first- and second-layer bases for female (top) and male (bottom) speakers]
Figure 3: (Left) five spectrogram samples of the "ae" phoneme from female (top)/male (bottom) speakers. (Middle) Visualization of the five first-layer bases that most differentially activate for female/male speakers. (Right) Visualization of the five second-layer bases that most differentially
activate for female/male speakers.
For comparison with the CDBN features, randomly selected spectrograms of female (top left five
columns) and male (bottom left five columns) pronunciations of the "ae" phoneme from the TIMIT
dataset are shown. Spectrograms for the female pronunciations are qualitatively distinguishable by a
finer horizontal banding pattern in low frequencies, whereas male pronunciations have more blurred
patterns. This gender difference in the vowel pronunciation patterns is typical across the TIMIT
data.
Only the bases that are most biased to activate on either male or female speech are shown. The bases
that are most active on female speech encode the horizontal band pattern that is prominent in the
spectrograms of female pronunciations. On the other hand, the male-biased bases have more blurred
patterns, which again visually matches the corresponding spectrograms.
4 Application to speech recognition tasks
In this section, we demonstrate that the CDBN feature representations learned from the unlabeled
speech corpus can be useful for multiple speech recognition tasks, such as speaker identification,
gender classification, and phone classification. In most of our experiments, we followed the self-taught learning framework [14]. The motivation for self-taught learning comes from situations
where we are given only a small amount of labeled data and a large amount of unlabeled data;2
therefore, one of our main interests was to evaluate the different feature representations given a small
number of labeled training examples (as often assumed in self-taught learning or semi-supervised
learning settings). More specifically, we trained the CDBN on unlabeled TIMIT data (as described
in Section 3.1); then we used the CDBN features for classification on labeled training/test data3 that
were randomly selected from the TIMIT corpus.4
4.1 Speaker identification
We evaluated the usefulness of the learned CDBN representations for the speaker identification task.
The subset of the TIMIT corpus that we used for speaker identification has 168 speakers and 10
utterances (sentences) per speaker, resulting in a total of 1680 utterances. We performed 168-way
classification on this set. For each number of utterances per speaker, we randomly selected training
utterances and testing utterances and measured the classification accuracy; we report the results
averaged over 10 random trials.5 To construct training and test data for the classification task,
we extracted a spectrogram from each utterance in the TIMIT corpus. We denote this spectrogram
representation as "RAW" features. We computed the first and second-layer CDBN features using the
spectrogram as input. We also computed MFCC features, widely-used standard features for generic
speech recognition tasks. As a result, we obtained spectrogram/MFCC/CDBN representations for
each utterance with multiple (typically, several hundred) frames. In our experiments, we used simple
summary statistics (for each channel) such as average, max, or standard deviation over all the frames.
We evaluated the features using standard supervised classifiers, such as SVM, GDA, and KNN.
The choices of summary statistics and hyperparameters for the classifiers were done using crossvalidation. We report the average classification accuracy (over 10 random trials) with a varying
number of training examples.
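The summary-statistics step can be made concrete with a few lines (a sketch; as stated above, the choice among mean/max/standard deviation per channel, as well as the downstream classifier, was made by cross-validation):

```python
import numpy as np

def utterance_features(frames, stat='mean'):
    """Collapse a (n_frames, n_channels) matrix of per-frame features
    (spectrogram, MFCC, or CDBN activations) into one fixed-length
    vector per utterance: one summary value per channel."""
    stats = {'mean': np.mean, 'max': np.max, 'std': np.std}
    return stats[stat](frames, axis=0)

# Usage sketch: build a design matrix from a list of utterances, then
# train any standard classifier (SVM, GDA, or KNN) on top of it, e.g.:
#   X = np.stack([utterance_features(f, stat='max') for f in utterance_frames])
#   clf = sklearn.svm.SVC().fit(X, y)
```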
Table 1 shows the average classification accuracy for each feature representation. The results
show that the first and second CDBN representations both outperform baseline features (RAW and
MFCC). The numbers compare MFCC and CDBN features with as many of the same factors (such as
preprocessing and classification algorithms) as possible. Further, to make a fair comparison between
CDBN features and MFCC, we used the best performing implementation6 among several standard
implementations for MFCC.
2. In self-taught learning, the labeled data and unlabeled data don't need to share the same labels or the same generative distributions.
3. There are two disjoint TIMIT data sets. We drew unlabeled data from the larger of the two for unsupervised feature learning, and we drew labeled data from the other data set to create our training and test sets for the classification tasks.
4. In the case of phone classification, we followed the standard protocol (e.g., [15]) rather than the self-taught learning framework, to evaluate our algorithm in comparison to other methods.
5. Details: There were some exceptions to this; for the case of eight training utterances, we followed Reynolds (1995) [16]; more specifically, we used eight training utterances (2 sa sentences, 3 si sentences, and the first 3 sx sentences); the two testing utterances were the remaining 2 sx sentences. We used cross validation for selecting hyperparameters for classification, except for the case of 1 utterance per speaker, where we used a randomly selected validation sentence per speaker.
6. We used Dan Ellis's implementation, available at http://labrosa.ee.columbia.edu/matlab/rastamat.
Table 1: Test classification accuracy for speaker identification using summary statistics

#training utterances per speaker   1       2       3       5       8
RAW                                46.7%   43.5%   67.9%   80.6%   90.4%
MFCC                               54.4%   69.9%   76.5%   82.6%   92.0%
CDBN L1                            74.5%   76.7%   91.3%   93.7%   97.9%
CDBN L2                            62.8%   66.2%   84.3%   89.6%   95.2%
CDBN L1+L2                         72.8%   76.7%   91.8%   93.8%   97.0%
Table 2: Test classification accuracy for speaker identification using all frames

#training utterances per speaker   1       2       3       5       8
MFCC ([16]'s method)               40.2%   87.9%   95.9%   99.2%   99.7%
CDBN                               90.0%   97.9%   98.7%   99.2%   99.7%
MFCC ([16]) + CDBN                 90.7%   98.7%   99.2%   99.6%   100.0%
Our results suggest that without special preprocessing or postprocessing (besides the summary statistics, which were needed to reduce the number of features), the CDBN features outperform MFCC features, especially in a setting with a very limited number of labeled examples.
We further experimented to determine if the CDBN features can achieve competitive performance in
comparison to other more sophisticated, state-of-the-art methods. For each feature representation,
we used the classifier that achieved the highest performance. More specifically, for the MFCC features we replicated Reynolds (1995)'s method,7 and for the CDBN features we used an SVM-based
ensemble method.8 As shown in Table 2, the CDBN features consistently outperformed MFCC features when the number of training examples was small. We also combined both methods by taking a
linear combination of the two classifier outputs (before taking the final classification prediction from
each algorithm).9 The resulting combined classifier performed the best, achieving 100% accuracy
for the case of 8 training utterances per speaker.
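Footnotes 8 and 9 describe the frame-level ensemble and the classifier combination in words; the aggregation logic can be sketched as follows (ours, written against a generic scikit-learn-style decision_function that returns one score per speaker and frame):

```python
import numpy as np

def predict_speaker(frame_features, svm):
    """Aggregate per-frame multiclass SVM scores into one utterance-level
    prediction: sum scores over frames, pick the highest-scoring speaker."""
    scores = svm.decision_function(frame_features)    # (n_frames, n_speakers)
    return int(np.argmax(scores.sum(axis=0)))

def combined_prediction(gmm_scores, svm_scores, weight):
    """Linear combination of the two classifiers' per-speaker scores before
    the final argmax; `weight` is fixed across training-set sizes and chosen
    by cross-validation (footnote 9)."""
    return int(np.argmax(weight * gmm_scores + (1.0 - weight) * svm_scores))
```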
4.2 Speaker gender classification
We also evaluated the same CDBN features which were learned for the speaker identification task on
the gender classification task. We report the classification accuracy for various quantities of training
examples (utterances) per gender. For each number of training examples, we randomly sampled
training examples and 200 testing examples; we report the test classification accuracy averaged
over 20 trials. As shown in Table 3, both the first and second CDBN features outperformed the
baseline features, especially when the number of training examples was small. The second-layer
CDBN features consistently performed better than the first-layer CDBN features. This suggests that
the second-layer representation learned more invariant features that are relevant for speaker gender
classification, justifying the use of "deep" architectures.
4.3 Phone classification
Finally, we evaluated our learned representation on phone classification tasks. For this experiment,
we treated each phone segment as an individual example and computed the spectrogram (RAW) and
MFCC features for each phone segment. Similarly, we computed the first-layer CDBN representations. Following the standard protocol [15], we report the 39-way phone classification accuracy on
the test data (TIMIT core test set) for various numbers of training sentences. For each number of
training examples, we report the average classification accuracy over 5 random trials. The summary
7. Details: In [16], MFCC features (with multiple frames) were computed for each utterance; then a Gaussian mixture model was trained for each speaker (treating each individual MFCC frame as an input example to the GMM). For a given test utterance, the prediction was made by determining the GMM model that had the highest test log-likelihood.
8. In detail, we treated each single-frame CDBN feature vector as an individual example. Then, we trained a multiclass linear SVM on these individual frames. For testing, we computed the SVM prediction score for each speaker and then aggregated the predictions from all the frames; overall, the highest-scoring speaker was selected as the prediction.
9. The constant for the linear combination was fixed across all numbers of training utterances, and it was selected using cross validation.
Table 3: Test accuracy for gender classification problem

#training utterances per gender   1       2       3       5       7       10
RAW                               68.4%   76.7%   79.5%   84.4%   89.2%   91.3%
MFCC                              58.5%   78.7%   84.1%   86.9%   89.0%   89.8%
CDBN L1                           78.5%   86.0%   88.9%   93.1%   94.2%   94.7%
CDBN L2                           85.8%   92.5%   94.2%   95.8%   96.6%   96.7%
CDBN L1+L2                        83.6%   92.3%   94.2%   95.6%   96.5%   96.6%
Table 4: Test accuracy for phone classification problem

#training utterances     100     200     500     1000    2000    3696
RAW                      36.9%   37.8%   38.7%   39.0%   39.2%   39.4%
MFCC                     58.3%   61.5%   64.9%   67.2%   69.2%   70.8%
MFCC ([15]'s method)     66.6%   70.3%   74.1%   76.3%   78.4%   79.6%
CDBN L1                  53.7%   56.7%   59.7%   61.6%   63.1%   64.4%
MFCC+CDBN L1 ([15])      67.2%   71.0%   75.1%   77.1%   79.2%   80.3%
results are shown in Table 4. In this experiment, the first-layer CDBN features performed better
than spectrogram features, but they did not outperform the MFCC features. However, by combining
MFCC features and CDBN features, we could achieve about 0.7% accuracy improvement consistently over all the numbers of training utterances. In the realm of phone classification, in which
significant research effort is often needed to achieve even improvements well under a percent, this is a significant improvement [17, 18, 19, 20].
This suggests that the first-layer CDBN features learned somewhat informative features for phone
classification tasks in an unsupervised way. In contrast to the gender classification task, the second-layer CDBN features did not offer much improvement over the first-layer CDBN features. This
result is not unexpected considering the fact that the time-scale of most phonemes roughly corresponds to the time-scale of the first-layer CDBN features.
5 Application to music classification tasks
In this section, we assess the applicability of CDBN features to various music classification tasks.
Table 5: Test accuracy for 5-way music genre classification

Train examples   1       2       3       5
RAW              51.6%   57.0%   59.7%   65.8%
MFCC             54.0%   62.1%   65.3%   68.3%
CDBN L1          66.1%   69.7%   70.0%   73.1%
CDBN L2          62.5%   67.9%   66.7%   69.2%
CDBN L1+L2       64.3%   69.5%   69.5%   72.7%

5.1 Music genre classification
For the task of music genre classification, we trained the first and second-layer CDBN representations on an unlabeled collection of music data.10 First, we computed the spectrogram (20 ms window
size with 10 ms overlaps) representation for individual songs. The spectrogram was PCA-whitened
and then fed into the CDBN as input data. We trained 300 first-layer bases with a filter length of 10
and a max-pooling ratio of 3. In addition, we trained 300 second-layer bases with a filter length of
10 and a max-pooling ratio of 3.
We evaluated the learned CDBN representation for 5-way genre classification tasks. The training
and test songs for the classification tasks were randomly sampled from 5 genres (classical, electric,
jazz, pop, and rock) and did not overlap with the unlabeled data. We randomly sampled 3-second
segments from each song and treated each segment as an individual training or testing example. We
report the classification accuracy for various numbers of training examples. For each number of
training examples, we averaged over 20 random trials. The results are shown in Table 5. In this task,
the first-layer CDBN features performed the best overall.
10. Available from http://ismir2004.ismir.net/ISMIR_Contest.html.
5.2 Music artist classification
Furthermore, we evaluated whether the CDBN features are useful in identifying individual artists.11
Following the same procedure as in Section 5.1, we trained the first and second-layer CDBN representations from an unlabeled collection of classical music data. Some representative bases are
shown in Figure 4. Then we evaluated the learned CDBN representation for 4-way artist identification tasks. The disjoint sets of training and test songs for the classification tasks were randomly
sampled from the songs of four artists. The unlabeled data and the labeled data did not include the
same artists. We randomly sampled 3-second segments from each song and treated each segment as
an individual example. We report the classification accuracy for various quantities of training examples. For each number of training examples, we averaged over 20 random trials. The results are
shown in Table 6. The results show that both the first and second-layer CDBN features performed
better than the baseline features, and that either using the second-layer features only or combining
the first and the second-layer features yielded the best results. This suggests that the second-layer
CDBN representation might have captured somewhat useful, higher-level features than the first-layer
CDBN representation.
Figure 4: Visualization of randomly selected first-layer CDBN bases trained on classical music data.
Table 6: Test accuracy for 4-way artist identification

Train examples   1       2       3       5
RAW              56.0%   69.4%   73.9%   79.4%
MFCC             63.7%   66.1%   67.9%   71.6%
CDBN L1          67.6%   76.1%   78.0%   80.9%
CDBN L2          67.7%   74.2%   75.8%   81.9%
CDBN L1+L2       69.2%   76.3%   78.7%   81.4%

6 Discussion and conclusion
Modern speech datasets are much larger than the TIMIT dataset. While the challenge of larger
datasets often lies in considering harder tasks, our objective in using the TIMIT data was to restrict
the amount of labeled data our algorithm had to learn from. It remains an interesting problem to
apply deep learning to larger datasets and more challenging tasks.
In this paper, we applied convolutional deep belief networks to audio data and evaluated on various
audio classification tasks. By leveraging a large amount of unlabeled data, our learned features
often equaled or surpassed MFCC features, which are hand-tailored to audio data. Furthermore,
even when our features did not outperform MFCC, we could achieve higher classification accuracy
by combining both. Also, our results show that a single CDBN feature representation can achieve
high performance on multiple audio recognition tasks. We hope that our approach will inspire more
research on automatically learning deep feature hierarchies for audio data.
Acknowledgment
We thank Yoshua Bengio, Dan Jurafsky, Yun-Hsuan Sung, Pedro Moreno, and Roger Grosse for helpful
discussions. We also thank anonymous reviewers for their constructive comments. This work was
supported in part by the National Science Foundation under grant EFRI-0835878, and in part by the
Office of Naval Research under MURI N000140710747.
11. In our experiments, we found that the artist identification task was more difficult than the speaker identification task because the local sound patterns can be highly variable even for the same artist.
References
[1] E. C. Smith and M. S. Lewicki. Efficient auditory coding. Nature, 439:978-982, 2006.
[2] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning
a sparse code for natural images. Nature, 381:607-609, 1996.
[3] R. Grosse, R. Raina, H. Kwong, and A.Y. Ng. Shift-invariant sparse coding for audio classification. In UAI, 2007.
[4] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets.
Neural Computation, 18(7):1527-1554, 2006.
[5] M. Ranzato, C. Poultney, S. Chopra, and Y. LeCun. Efficient learning of sparse representations
with an energy-based model. In NIPS, 2006.
[6] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep
networks. In NIPS, 2006.
[7] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of
deep architectures on problems with many factors of variation. In ICML, 2007.
[8] H. Lee, C. Ekanadham, and A. Y. Ng. Sparse deep belief network model for visual area V2. In
NIPS, 2008.
[9] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for
scalable unsupervised learning of hierarchical representations. In ICML, 2009.
[10] G. Desjardins and Y. Bengio. Empirical evaluation of convolutional RBMs for vision. Technical report, 2008.
[11] M. Norouzi, M. Ranjbar, and G. Mori. Stacks of convolutional restricted boltzmann machines
for shift-invariant feature learning. In CVPR, 2009.
[12] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural
Computation, 14:1771-1800, 2002.
[13] W. Fisher, G. Doddington, and K. Goudie-Marshall. The darpa speech recognition research
database: Specifications and status. In DARPA Speech Recognition Workshop, 1986.
[14] R. Raina, A. Battle, H. Lee, B. Packer, and A. Y. Ng. Self-taught learning: Transfer learning
from unlabeled data. In ICML, 2007.
[15] P. Clarkson and P. J. Moreno. On the use of support vector machines for phonetic classification.
In ICASSP99, pages 585-588, 1999.
[16] D. A. Reynolds. Speaker identification and verification using gaussian mixture speaker models.
Speech Commun., 17(1-2):91-108, 1995.
[17] F. Sha and L. K. Saul. Large margin Gaussian mixture modeling for phonetic classification and recognition. In ICASSP'06, 2006.
[18] Y.-H. Sung, C. Boulis, C. Manning, and D. Jurafsky. Regularization, adaptation, and non-independent features improve hidden conditional random fields for phone classification. In IEEE ASRU, 2007.
[19] S. Petrov, A. Pauls, and D. Klein. Learning structured models for phone recognition. In
EMNLP-CoNLL, 2007.
[20] D. Yu, L. Deng, and A. Acero. Hidden conditional random field with distribution constraints
for phone classification. In Interspeech, 2009.
Efficient and Accurate ℓp-Norm
Multiple Kernel Learning
Marius Kloft
University of California
Berkeley, USA
Pavel Laskov
Universität Tübingen
Tübingen, Germany
Ulf Brefeld
Yahoo! Research
Barcelona, Spain
Klaus-Robert Müller
Technische Universität Berlin
Berlin, Germany
Sören Sonnenburg
Technische Universität Berlin
Berlin, Germany
Alexander Zien
LIFE Biosystems GmbH
Heidelberg, Germany
Abstract
Learning linear combinations of multiple kernels is an appealing strategy when
the right choice of features is unknown. Previous approaches to multiple kernel
learning (MKL) promote sparse kernel combinations to support interpretability.
Unfortunately, ℓ1-norm MKL is hardly observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures, we generalize MKL to arbitrary ℓp-norms. We devise new insights on the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary p > 1. Empirically, we demonstrate that the interleaved optimization strategies are much faster than the traditionally used wrapper approaches. Finally, we apply ℓp-norm MKL to real-world problems from
computational biology, showing that non-sparse MKL achieves accuracies that go
beyond the state-of-the-art.
1 Introduction
Sparseness is being regarded as one of the key features in machine learning [15] and biology [16].
Sparse models are appealing since they provide an intuitive interpretation of a task at hand by singling out relevant pieces of information. Such automatic complexity reduction facilitates efficient
training algorithms, and the resulting models are distinguished by small capacity. The interpretability is one of the main reasons for the popularity of sparse methods in complex domains such as
computational biology, and consequently building sparse models from data has received a significant amount of recent attention.
Unfortunately, sparse models do not always perform well in practice [7, 15]. This holds particularly
for learning sparse linear combinations of data sources [15], an abstraction of which is known as
multiple kernel learning (MKL) [10]. The data sources give rise to a set of (possibly correlated) kernel matrices K_1, ..., K_M, and the task is to learn the optimal mixture K = Σ_m β_m K_m for the
problem at hand. Previous MKL research aims at finding sparse mixtures to effectively simplify
the underlying data representation. For instance, [10] study semi-definite matrices K ⪰ 0, inducing sparseness by bounding the trace tr(K) ≤ c; unfortunately, the resulting semi-definite optimization
problems are computationally too expensive for large-scale deployment.
Recent approaches to MKL promote sparse solutions either by Tikhonov regularization over the
mixing coefficients [25] or by incorporating an additional constraint ‖β‖ ≤ 1 [18, 27], requiring
solutions on the standard simplex, known as Ivanov regularization. Based on the one or the other,
efficient optimization strategies have been proposed for solving ℓ1-norm MKL using semi-infinite linear programming [21], second order approaches [6], gradient-based optimization [19], and level-set methods [26]. Other variants of ℓ1-norm MKL have been proposed in subsequent work addressing practical algorithms for multi-class [18, 27] and multi-label [9] problems.
1
Previous approaches to MKL successfully identify sparse kernel mixtures; however, the solutions found frequently suffer from poor generalization performance. Often, trivial baselines using the unweighted-sum kernel K = Σ_m K_m are observed to outperform the sparse mixture [7]. One reason for the collapse of ℓ1-norm MKL is that kernels deployed in real-world tasks are usually highly
sophisticated and effectively capture relevant aspects of the data. In contrast, sparse approaches to
MKL rely on the assumption that some kernels are irrelevant for solving the problem. Enforcing
sparse mixtures in these situations may lead to degenerate models. As a remedy, we propose to
sacrifice sparseness in these situations and deploy non-sparse mixtures instead. After submission of
this paper, we learned about a related approach, in which the sum of an ℓ1- and an ℓ2-regularizer is
used [12]. Although non-sparse solutions are not as easy to interpret, they account for (even small)
contributions of all available kernels to live up to practical applications.
In this paper, we first show the equivalence of the most common approaches to ℓ1-norm MKL [18, 25, 27]. Our theorem allows for a generalized view of recent strands of multiple kernel learning research. Based on the detached view, we extend the MKL framework to arbitrary ℓp-norm MKL with p ≥ 1. Our approach can either be motivated by additionally regularizing over the mixing coefficients ‖β‖_p^p, or equivalently by incorporating the constraint ‖β‖_p^p ≤ 1. We propose two alternative optimization strategies based on Newton descent and cutting planes, respectively. Empirically, we demonstrate the efficiency and accuracy of non-sparse MKL. Large-scale experiments on gene start detection show a significant improvement of predictive accuracy compared to ℓ1- and ℓ∞-norm MKL.
The rest of the paper is structured as follows. We present our main contributions in Section 2: the theoretical analysis of existing approaches to MKL, our ℓp-norm MKL generalization with two highly efficient optimization strategies, and relations to ℓ1-norm MKL. We report on our empirical results in Section 3, and Section 4 concludes.
2 Generalized Multiple Kernel Learning
2.1 Preliminaries
In the standard supervised learning setup, a labeled sample D = {(x_i, y_i)}_{i=1,...,n} is given, where the x_i lie in some input space X and y_i ∈ Y ⊆ ℝ. The goal is to find a hypothesis f ∈ H that generalizes well on new and unseen data. Applying regularized risk minimization returns the minimizer f*,
f* = \operatorname{argmin}_f \; R_{\text{emp}}(f) + \lambda\, \Omega(f),

where R_{\text{emp}}(f) = \frac{1}{n} \sum_{i=1}^{n} V(f(x_i), y_i) is the empirical risk of hypothesis f w.r.t. the loss V : ℝ × Y → ℝ, the regularizer Ω : H → ℝ, and the trade-off parameter λ > 0. In this paper, we focus on Ω(f) = \frac{1}{2} \|\tilde{w}\|_2^2 and on linear models of the form

f_{\tilde{w},b}(x) = \tilde{w}^T \psi(x) + b,   (1)

together with a (possibly non-linear) mapping ψ : X → H to a Hilbert space H [20]. We will later make use of kernel functions K(x, x') = ⟨ψ(x), ψ(x')⟩_H to compute inner products in H.
2.2 Learning with Multiple Kernels
When learning with multiple kernels, we are given M different feature mappings ψ_m : X → H_m, m = 1, ..., M, each giving rise to a reproducing kernel K_m of H_m. Approaches to multiple kernel learning consider linear kernel mixtures K_β = Σ_m β_m K_m, β_m ≥ 0. Compared to Eq. (1),
the primal model for learning with multiple kernels is extended to
f_{\tilde{w},b,\beta}(x) = \tilde{w}^T \psi_\beta(x) + b = \sum_{m=1}^{M} \sqrt{\beta_m}\, \tilde{w}_m^T \psi_m(x) + b,   (2)

where the weight vector \tilde{w} and the composite feature map \psi_\beta have a block structure, namely \tilde{w} = (\tilde{w}_1^T, \ldots, \tilde{w}_M^T)^T and \psi_\beta = \sqrt{\beta_1}\, \psi_1 \times \cdots \times \sqrt{\beta_M}\, \psi_M, respectively.

The idea in learning with multiple kernels is to minimize the loss on the training data w.r.t. the optimal kernel mixture Σ_m β_m K_m, in addition to regularizing β to avoid overfitting. Hence, in terms
of regularized risk minimization, the optimization problem becomes
\inf_{\tilde{w}, b, \beta \geq 0} \;\; \frac{1}{n} \sum_{i=1}^{n} V\big(f_{\tilde{w},b,\beta}(x_i), y_i\big) + \frac{\lambda}{2} \sum_{m=1}^{M} \|\tilde{w}_m\|_2^2 + \mu\, \tilde{\Omega}[\beta].   (3)
Previous approaches to multiple kernel learning employ regularizers of the form Ω̃(β) = ‖β‖_1 to promote sparse kernel mixtures. By contrast, we propose to use smooth convex regularizers of the form Ω̃(β) = ‖β‖_p^p, 1 < p < ∞, allowing for non-sparse solutions. The non-convexity of the resulting optimization problem is not inherent and can be resolved by substituting w_m ← √β_m · w̃_m. Furthermore, the regularization parameter and the sample size can be decoupled by introducing C̃ = 1/(nλ) (and adjusting µ ← µ/λ), which has favorable scaling properties in practice. We obtain the following convex optimization problem [5] that has also been considered by [25] for hinge loss and p = 1,
\inf_{w, b, \beta \geq 0} \;\; \tilde{C} \sum_{i=1}^{n} V\Big( \sum_{m=1}^{M} w_m^T \psi_m(x_i) + b,\; y_i \Big) + \frac{1}{2} \sum_{m=1}^{M} \frac{\|w_m\|_2^2}{\beta_m} + \tilde{\mu}\, \|\beta\|_p^p,   (4)
where we use the convention that t/0 = 0 if t = 0 and ∞ otherwise. An alternative approach has been studied by [18, 27] (again using hinge loss and p = 1). They upper-bound the value of the regularizer ‖β‖_1 ≤ 1 and incorporate the latter as an additional constraint into the optimization problem. For C > 0, they arrive at
\inf_{w, b, \beta \geq 0} \;\; C \sum_{i=1}^{n} V\Big( \sum_{m=1}^{M} w_m^T \psi_m(x_i) + b,\; y_i \Big) + \frac{1}{2} \sum_{m=1}^{M} \frac{\|w_m\|_2^2}{\beta_m} \quad \text{s.t.} \;\; \|\beta\|_p^p \leq 1.   (5)
Our first contribution shows that the Tikhonov regularization in Eq. (4) and the Ivanov regularization in Eq. (5) are equivalent.
Theorem 1. Let p ≥ 1. For each pair (C̃, µ̃) there exists C > 0 such that for each optimal solution (w*, b*, β*) of Eq. (4) using (C̃, µ̃), we have that (w*, b*, ν · β*) is also an optimal solution of Eq. (5) using C, and vice versa, where ν > 0 is some multiplicative constant.
Proof. The proof is shown in the supplementary material for lack of space. Sketch of the proof:
We incorporate the regularizer of (4) into the constraints and show that the resulting upper bound is
tight. A variable substitution completes the proof. □
Zien and Ong [27] showed that the MKL optimization problems by Bach et al. [3], Sonnenburg et
al. [21], and their own formulation are equivalent. As a main implication of Theorem 1 and by using
the result of Zien and Ong it follows that the optimization problem of Varma and Ray [25] and the
ones from [3, 18, 21, 27] all are equivalent.
In addition, our result shows the coupling between the trade-off parameter C and the regularization parameter µ̃ in Eq. (4): tweaking one also changes the other, and vice versa. Moreover, Theorem 1 implies that optimizing C in Eq. (5) implicitly searches the regularization path for the parameter µ̃ of Eq. (4). In the remainder, we will therefore focus on the formulation in Eq. (5), as a single parameter is preferable in terms of model selection. Furthermore, we will focus on binary classification problems with Y = {-1, +1}, equipped with the hinge loss V(f(x), y) = max{0, 1 - y f(x)}. However, note that all our results can easily be transferred to regression and multi-class settings using appropriate convex loss functions and joint kernel extensions.
2.3 Non-Sparse Multiple Kernel Learning
We now extend the existing MKL framework to allow for non-sparse kernel mixtures β; see also [13]. Let us begin by rewriting Eq. (5), expanding the hinge loss into slack variables as follows:
\min_{\beta, w, b, \xi} \;\; \frac{1}{2} \sum_{m=1}^{M} \frac{\|w_m\|_2^2}{\beta_m} + C \|\xi\|_1   (6)

\text{s.t.} \;\; \forall i: \; y_i \Big( \sum_{m=1}^{M} w_m^T \psi_m(x_i) + b \Big) \geq 1 - \xi_i; \quad \xi \geq 0; \quad \beta \geq 0; \quad \|\beta\|_p^p \leq 1.
Applying Lagrange's theorem incorporates the constraints into the objective by introducing non-negative Lagrangian multipliers α, µ ∈ ℝ^n, γ ∈ ℝ^M, δ ∈ ℝ (including a pre-factor of 1/p for the δ-term). Resubstituting the optimality conditions w.r.t. w, b, ξ, and β removes the dependency of the Lagrangian on the primal variables. After some additional algebra (e.g., the terms associated with γ cancel), the Lagrangian can be written as
L = 1^T \alpha - \frac{1}{p}\, \delta - \frac{p-1}{p}\, \delta^{-\frac{1}{p-1}} \sum_{m=1}^{M} \left( \frac{1}{2}\, \alpha^T Q_m \alpha \right)^{\frac{p}{p-1}},   (7)
where Q_m = diag(y) K_m diag(y). Eq. (7) now has to be maximized w.r.t. the dual variables α, δ, subject to α^T y = 0, 0 ≤ α_i ≤ C for 1 ≤ i ≤ n, and δ ≥ 0. Let us ignore for a moment the non-negativity δ ≥ 0 and solve ∂L/∂δ = 0 for the unbounded δ. Setting the partial derivative to zero yields
\delta = \left( \sum_{m=1}^{M} \left( \frac{1}{2}\, \alpha^T Q_m \alpha \right)^{\frac{p}{p-1}} \right)^{\frac{p-1}{p}}.   (8)
Interestingly, at optimality we always have δ ≥ 0, because the quadratic term in α is non-negative. Plugging the optimal δ into Eq. (7), we arrive at the following optimization problem, which solely depends on α:
\max_{\alpha} \;\; 1^T \alpha - \frac{1}{2} \left( \sum_{m=1}^{M} \big( \alpha^T Q_m \alpha \big)^{\frac{p}{p-1}} \right)^{\frac{p-1}{p}} \quad \text{s.t.} \;\; 0 \leq \alpha \leq C \mathbf{1}; \;\; \alpha^T y = 0.   (9)
In the limit p → ∞, the above problem reduces to the SVM dual (with Q = Σ_m Q_m), while p → 1 gives rise to a QCQP ℓ1-MKL variant. However, optimizing the dual efficiently is difficult and will cause numerical problems in the limits p → 1 and p → ∞.
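For concreteness, the following sketch (ours) evaluates the dual objective of Eq. (9) for a given α and recovers the corresponding kernel weights; the weight formula β_m ∝ (α^T Q_m α)^{1/(p-1)}, normalized so that ‖β‖_p = 1, is our derivation from the optimality conditions and is consistent with Eq. (8).

```python
import numpy as np

def lp_mkl_dual_objective(alpha, Qs, p):
    """Objective of Eq. (9): 1'a - 0.5 * (sum_m (a'Q_m a)^{p/(p-1)})^{(p-1)/p}."""
    q = p / (p - 1.0)                                  # dual exponent p/(p-1)
    quad = np.array([alpha @ Q @ alpha for Q in Qs])
    return alpha.sum() - 0.5 * (quad ** q).sum() ** (1.0 / q)

def kernel_weights(alpha, Qs, p):
    """Kernel mixture implied by alpha, normalized to ||beta||_p = 1."""
    quad = np.array([alpha @ Q @ alpha for Q in Qs])
    beta = quad ** (1.0 / (p - 1.0))
    return beta / np.linalg.norm(beta, ord=p)
```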
2.4 Two Efficient Second-Order Optimization Strategies
Many recent MKL solvers (e.g., [19, 24, 26]) are based on wrapping linear programs around SVMs.
From an optimization standpoint, our work is most closely related to the SILP approach [21] and
the simpleMKL method [19, 24]. Both of these methods also aim at efficient large-scale MKL
algorithms. The two alternative approaches for ℓp-norm MKL proposed in this paper are
largely inspired by these methods and extend them in two aspects: customization to arbitrary norms
and a tight coupling with minor iterations of an SVM solver, respectively.
Our first strategy interleaves maximizing the Lagrangian of (6) w.r.t. α with minor precision and Newton descent on β. For the second strategy, we devise a semi-infinite convex program, which we solve by column generation with nested sequential quadratically constrained linear programming (SQCLP). In both cases, the maximization step w.r.t. α is performed by chunking optimization with minor iterations. The Newton approach can be applied without a common-purpose QCQP solver; however, convergence can only be guaranteed for the SQCLP [8].
2.4.1 Newton Descent
For a Newton descent on the mixing coefficients, we first compute the partial derivatives

\frac{\partial L}{\partial \beta_m} = -\frac{1}{2}\, \frac{w_m^T w_m}{\beta_m^2} + \delta\, \beta_m^{p-1} =: \nabla_m \qquad \text{and} \qquad \frac{\partial^2 L}{\partial \beta_m^2} = \frac{w_m^T w_m}{\beta_m^3} + (p-1)\, \delta\, \beta_m^{p-2} =: h_m
of the original Lagrangian. Fortunately, the Hessian H is diagonal, i.e., given by H = diag(h). The m-th element s_m of the corresponding Newton step, defined as s := -H^{-1} \nabla, is thus computed by

s_m = \frac{\frac{1}{2}\, \beta_m \|w_m\|^2 - \delta\, \beta_m^{p+2}}{\|w_m\|^2 + (p-1)\, \delta\, \beta_m^{p+1}},
where δ is defined in Eq. (8). However, a Newton step β^{t+1} = β^t + s might lead to non-positive β. To avoid this awkward situation, we take the Newton steps in the space of log(β) by adjusting the derivatives according to the chain rule. We obtain
\log(\beta_m^{t+1}) = \log(\beta_m^t) - \frac{\nabla_m^t\, \beta_m^t}{h_m^t (\beta_m^t)^2 + \nabla_m^t\, \beta_m^t},   (10)

which corresponds to a multiplicative update of β:

\beta_m^{t+1} = \beta_m^t \cdot \exp\!\left( \frac{-\nabla_m^t}{h_m^t \beta_m^t + \nabla_m^t} \right).   (11)
Furthermore, we enhance the Newton step with a line search.
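A compact sketch of one such multiplicative update, following the reconstruction of Eqs. (10)-(11) above (the gradient and Hessian entries are the ∇_m and h_m defined at the start of this subsection; a practical implementation would additionally safeguard the denominator within the line search):

```python
import numpy as np

def newton_update(beta, w_sqnorms, delta, p, step=1.0):
    """One multiplicative Newton step on the kernel weights, cf. Eq. (11).

    beta      : current positive mixing coefficients (length M)
    w_sqnorms : squared norms ||w_m||^2 from the current SVM solution
    delta     : the constant of Eq. (8)
    step      : line-search scaling of the Newton step
    """
    grad = -0.5 * w_sqnorms / beta**2 + delta * beta**(p - 1)     # nabla_m
    hess = w_sqnorms / beta**3 + (p - 1) * delta * beta**(p - 2)  # h_m
    # Stepping in log(beta) keeps the weights strictly positive.
    return beta * np.exp(-step * grad / (hess * beta + grad))
```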
2.4.2 Cutting Planes
In order to obtain an alternative optimization strategy, we fix β and build the partial Lagrangian w.r.t. all other primal variables w, b, ξ. The derivation is analogous to [18, 27] and we omit details for lack of space. The resulting dual problem is a min-max problem of the form
\min_{\beta} \max_{\alpha} \;\; 1^T \alpha - \frac{1}{2}\, \alpha^T \Big( \sum_{m=1}^{M} \beta_m Q_m \Big) \alpha \quad \text{s.t.} \;\; 0 \leq \alpha \leq C \mathbf{1}; \;\; y^T \alpha = 0; \;\; \beta \geq 0; \;\; \|\beta\|_p^p \leq 1.
The above optimization problem is a saddle point problem and can be solved by alternating α and β optimization steps. While the former can simply be carried out by a support vector machine for a fixed mixture β, the latter has been optimized for p = 1 by reduced gradients [18].
We take a different approach and translate the min-max problem into an equivalent semi-infinite program (SIP) as follows. Denote the value of the target function by t(α, β) and suppose α* is optimal. Then, according to the max-min inequality [5], we have t(α*, β) ≥ t(α, β) for all α and β. Hence, we can equivalently minimize an upper bound θ on the optimal value and arrive at
\min_{\theta, \beta} \; \theta \quad \text{s.t.} \quad \theta \geq 1^T \alpha - \frac{1}{2}\, \alpha^T \Big( \sum_{m=1}^{M} \beta_m Q_m \Big) \alpha   (12)

for all α ∈ ℝ^n with 0 ≤ α ≤ C1 and y^T α = 0, as well as ‖β‖_p^p ≤ 1 and β ≥ 0.
[21] optimize the above SIP for p = 1 with interleaving cutting plane algorithms. The solution of a quadratic program (here the regular SVM) generates the most strongly violated constraint for the current mixture β. The optimal (β*, θ) is then identified by solving a linear program with respect to the set of active constraints. The optimal mixture is then used for computing a new constraint, and so on.
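In pseudocode, the interleaved cutting-plane scheme reads as follows (our sketch; solve_svm stands for the SVM solver that returns the optimal α for a fixed mixture, and solve_master for the restricted master problem over the active constraints, a linear program for p = 1 and the sequentially quadratically constrained step described next for p > 1):

```python
import numpy as np

def t_value(alpha, beta, Qs):
    """t(alpha, beta) = 1'a - 0.5 * a' (sum_m beta_m Q_m) a."""
    Q = sum(b * Qm for b, Qm in zip(beta, Qs))
    return alpha.sum() - 0.5 * alpha @ Q @ alpha

def cutting_plane_mkl(solve_svm, solve_master, Qs, p, eps=1e-5, max_iter=100):
    """Column generation for the semi-infinite program (12) (a sketch)."""
    M = len(Qs)
    beta = np.full(M, M ** (-1.0 / p))      # uniform start with ||beta||_p^p = 1
    theta, active = -np.inf, []
    for _ in range(max_iter):
        alpha = solve_svm(beta)             # most strongly violated constraint
        if t_value(alpha, beta, Qs) <= theta + eps:
            break                           # (12) satisfied up to precision eps
        active.append(alpha)
        beta, theta = solve_master(active)  # re-optimize over the active set
    return beta, theta
```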
Unfortunately, for p > 1, a non-linearity is introduced by requiring ‖β‖_p^p ≤ 1, and such a constraint is unlikely to be found in standard optimization toolboxes, which often handle only linear and quadratic constraints. As a remedy, we propose to approximate ‖β‖_p^p ≤ 1 by a sequential second-order Taylor expansion of the form

\|\beta\|_p^p \;\approx\; 1 + \frac{p(p-3)}{2} - \sum_{m=1}^{M} p(p-2)\, \tilde{\beta}_m^{\,p-1} \beta_m + \frac{p(p-1)}{2} \sum_{m=1}^{M} \tilde{\beta}_m^{\,p-2} \beta_m^2,

where β^p is defined element-wise, that is, β^p := (β_1^p, ..., β_M^p). The sequence (β^0, β^1, ...) is initialized with a uniform mixture satisfying ‖β^0‖_p^p = 1 as a starting point. Successively, β^{t+1} is computed using β̃ = β^t. Note that the quadratic term in the approximation is diagonal, wherefore the subsequent quadratically constrained problem can be solved efficiently. Finally, note that this approach can be further sped up by an additional projection onto the level sets in the β-optimization phase, similar to [26]. In our case, the level-set projection is a convex quadratic problem with ℓp-norm constraints and can again be approximated by successive second-order Taylor expansions.
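The surrogate above is a per-coordinate quadratic in β, so its coefficients are cheap to form; a sketch matching the expansion (with β̃ the current iterate):

```python
import numpy as np

def lp_constraint_taylor(beta_tilde, p):
    """Coefficients of the second-order Taylor surrogate of ||beta||_p^p
    around beta_tilde, so that approximately
        ||beta||_p^p ~= const + lin @ beta + 0.5 * beta @ (quad * beta)."""
    const = 1.0 + p * (p - 3.0) / 2.0
    lin = -p * (p - 2.0) * beta_tilde ** (p - 1)
    quad = p * (p - 1.0) * beta_tilde ** (p - 2)   # diagonal Hessian entries
    return const, lin, quad

# The surrogate constraint handed to the quadratically constrained solver:
#   const + lin @ beta + 0.5 * beta @ (quad * beta) <= 1
```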
[Figure 1 image: log-log plots of training time in seconds vs. sample size (left) and vs. number of kernels (right)]
Figure 1: Execution times of SVM training, ℓp-norm MKL based on interleaved optimization via the Newton and the cutting plane algorithm (CPA), and the SimpleMKL wrapper. (Left) Training using a fixed number of 50 kernels and varying training set size. (Right) For 500 examples and varying numbers of kernels. Our proposed Newton and CPA methods obtain speedups of over an order of magnitude. Notice the tiny error bars.
3 Computational Experiments
In this section we study non-sparse MKL in terms of efficiency and accuracy.1 We apply the method of [21] for ℓ1-norm results, as it is contained as a special case of our cutting-plane strategy. We write ℓ∞-norm MKL for a regular SVM with the unweighted-sum kernel K = Σ_m K_m.
3.1 Execution Time
We demonstrate the efficiency of our implementations of non-sparse MKL. We experiment on the MNIST data set, where the task is to separate odd vs. even digits. We compare our ℓp-norm MKL with two methods for ℓ1-norm MKL, simpleMKL [19] and SILP-based chunking [21], and to SVMs using the unweighted-sum kernel (ℓ∞-norm MKL) as an additional baseline. We optimize all methods up to a precision of 10^-3 for the outer SVM-ε and 10^-5 for the "inner" SIP precision, and computed relative duality gaps. To provide a fair stopping criterion to simpleMKL, we set the stopping criterion of simpleMKL to the relative duality gap of its ℓ1-norm counterpart. This way, the deviations of relative objective values of ℓ1-norm MKL variants are guaranteed to be smaller than 10^-4. SVM trade-off parameters are set to C = 1 for all methods.
Figure 1 (left) displays the results for varying sample sizes and 50 precomputed Gaussian kernels with different bandwidths. Error bars indicate standard error over 5 repetitions. Unsurprisingly, the SVM with the unweighted-sum kernel is the fastest method. Non-sparse MKL scales similarly to ℓ1-norm chunking; the Newton strategy (Section 2.4.1) is slightly faster than the cutting plane variant (Section 2.4.2), which needs additional Taylor expansions within each β-step. SimpleMKL suffers from training an SVM to full precision for each gradient evaluation and performs worst.2
Figure 1 (right) shows the results for varying the number of precomputed RBF kernels for a fixed sample size of 500. The SVM with the unweighted-sum kernel is hardly affected by this setup and performs constantly. The ℓ1-norm MKL by [21] handles the increasing number of kernels best and is the fastest MKL method. Non-sparse approaches to MKL show reasonable run-times, the Newton-based ℓp-norm MKL being again slightly faster than its peer. SimpleMKL again performs worst. Overall, our proposed Newton and cutting plane based optimization strategies achieve a speedup of often more than one order of magnitude.
3.2 Protein Subcellular Localization
The prediction of the subcellular localization of proteins is one of the rare empirical success stories
of ℓ1-norm-regularized MKL [17, 27]: after defining 69 kernels that capture diverse aspects of
1. Available at http://www.shogun-toolbox.org/
2. SimpleMKL could not be evaluated for 2000 instances (ran out of memory on a 4GB machine).
Table 1: Results for Protein Subcellular Localization

ℓp-norm       1      32/31   16/15   8/7    4/3    2      4      ∞
1 - MCC [%]   9.13   9.12    9.64    9.84   9.56   10.18  10.08  10.41
protein sequences, ℓ1-norm MKL could raise the predictive accuracy significantly above that of
the unweighted sum of kernels (thereby also improving on established prediction systems for this
problem). Here we investigate the performance of non-sparse MKL.
We download the kernel matrices of the dataset plant3 and follow the experimental setup of [17]
with the following changes: instead of a genuine multiclass SVM, we use the 1-vs-rest decomposition; instead of performing cross-validation for model selection, we report results for the best
models, as we are only interested in the relative performance of the MKL regularizers. Specifically,
for each C ∈ {1/32, 1/8, 1/2, 1, 2, 4, 8, 32, 128}, we compute the average Matthews correlation coefficient (MCC) on the test data. For each norm, the best average MCC is recorded. Table 1 shows
the averages over several splits of the data.
The results indicate that, indeed, with a proper choice of a non-sparse regularizer, the accuracy of ℓ1-norm MKL can be recovered. This is remarkable, as this dataset is particular in that it fulfills the rare condition that ℓ1-norm MKL performs better than ℓ∞-norm MKL. In other words, selecting these data may imply a bias towards the ℓ1-norm. Nevertheless, our novel non-sparse MKL can keep up with this, essentially by approximating the ℓ1-norm.
3.3 Gene Start Recognition
This experiment aims at detecting transcription start sites (TSS) of RNA Polymerase II binding genes
in genomic DNA sequences. Accurate detection of the transcription start site is crucial to identify
genes and their promoter regions and can be regarded as a first step in deciphering the key regulatory
elements in the promoter region that determine transcription. For our experiments we use the dataset
from [22] which contains a curated set of 8,508 TSS annotated genes built from dbTSS version 4
[23] and refseq genes. These are translated into positive training instances by extracting windows of
size [-1000, +1000] around the TSS. Similar to [4], 85,042 negative instances are generated from
the interior of the gene using the same window size.
Following [22], we employ five different kernels representing the TSS signal (weighted degree with
shift), the promoter (spectrum), the 1st exon (spectrum), angles (linear), and energies (linear). Optimal kernel parameters are determined by model selection in [22]. Every kernel is normalized such
that all points have unit length in feature space. We reserve 13,000 and 20,000 randomly drawn
instances for holdout and test sets, respectively, and use the remaining 60,000 as the training pool.
Figure 2 shows test errors for varying training set sizes drawn from the pool; training sets of the
same size are disjoint. Error bars indicate standard errors of repetitions for small training set sizes.
Regardless of the sample size, ℓ1-MKL is significantly outperformed by the sum-kernel. On the contrary, non-sparse MKL achieves significantly higher AUC values than ℓ∞-MKL for sample sizes up to 20k. The scenario is well suited for ℓ2-norm MKL, which performs best. Finally, for 60k training instances, all methods but ℓ1-norm MKL yield the same performance.
performance of non-sparse MKL is remarkable, and of significance for the application domain: the
method using the unweighted sum of kernels [22] has recently been confirmed to be the leading one in
a comparison of 19 state-of-the-art promoter prediction programs [1], and our experiments suggest
that its accuracy can be further elevated by non-sparse MKL.
4 Conclusion and Discussion
We presented an efficient and accurate approach to non-sparse multiple kernel learning and showed that our ℓp-norm MKL can be motivated as Tikhonov and Ivanov regularization of the mixing coefficients, respectively. Applied to previous MKL research, our result allows for a unified view, as so far seemingly different approaches turned out to be equivalent. Furthermore, we devised two efficient approaches to non-sparse multiple kernel learning for arbitrary ℓp-norms, p > 1. The resulting
3. From http://www.fml.tuebingen.mpg.de/raetsch/suppl/protsubloc/
[Figure 2 image: (left) AUC between 0.88 and 0.93 as a function of training set size from 0 to 60K for ℓ1-, ℓ4/3-, ℓ2-, ℓ4-norm MKL and the SVM; (right) corresponding kernel mixtures for n = 5k, 20k, 60k across the norms and the unweighted sum]
Figure 2: Left: Area under ROC curve (AUC) on test data for TSS recognition as a function of the training set size. Notice the tiny bars indicating standard errors w.r.t. repetitions on disjoint training sets. Right: Corresponding kernel mixtures. For p = 1, consistently sparse solutions are obtained, while the optimal p = 2 distributes weights over the weighted degree and the 2 spectrum kernels, in good agreement with [22].
The resulting optimization strategies are based on semi-infinite programming and Newton descent, both interleaved
with chunking-based SVM training. Execution times moreover revealed that our interleaved
optimization vastly outperforms commonly used wrapper approaches.
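As an illustration of the analytic part of the interleaved optimization, the mixing coefficients of ℓp-norm MKL admit a closed-form update given the current block norms ‖w_m‖; the sketch below reflects our reading of the first-order conditions under the constraint ‖θ‖_p ≤ 1 and is not code from the paper:

```python
import numpy as np

def lp_mkl_theta_update(w_norms, p):
    """One analytic mixing-coefficient update for l_p-norm MKL (p > 1):
    theta_m is proportional to ||w_m||^(2/(p+1)) and the vector is
    renormalized so that ||theta||_p = 1 (assumed active constraint)."""
    theta = np.asarray(w_norms, dtype=float) ** (2.0 / (p + 1.0))
    return theta / np.linalg.norm(theta, ord=p)
```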
We would like to note that there is a certain preference/obsession for sparse models in the scientific
community, for various reasons. The present paper, however, shows clearly that sparsity by itself
is not the ultimate virtue to be strived for. Rather, on the contrary: non-sparse models may improve
quite impressively over sparse ones. The reason for this is less obvious, and its theoretical exploration goes well beyond the scope of this submission. We remark nevertheless that some interesting
asymptotic results exist that show model-selection consistency of sparse MKL (or the closely related
group lasso) [2, 14]; in other words, in the limit n → ∞, MKL is guaranteed to find the correct subset
of kernels. However, the rate of convergence to the true estimator also needs to be considered; thus
we conjecture that the rate slower than √n which is common to sparse estimators [11] may be one
of the reasons for finding excellent (non-asymptotic) results in non-sparse MKL. In addition to the
convergence rate, the variance properties of MKL estimators may play an important role in elucidating
the performance seen in our various simulation experiments.
Intuitively speaking, we observe clearly that in some cases all features, even though they may contain
redundant information, are to be kept, since setting their contributions to zero does not improve
prediction; i.e., all of them are informative to our MKL models. Note, however, that this result is
also class-specific, i.e., for some classes we may sparsify. Cross-validation-based model building that
includes the choice of p will inevitably tell us which classes should be treated as sparse and
which as non-sparse.
Large-scale experiments on TSS recognition even raised the bar for ℓ1-norm MKL: non-sparse MKL
proved consistently better than its sparse counterpart, which was outperformed by an unweighted-sum kernel. This exemplifies how the unprecedented combination of accuracy and scalability of our
MKL approach paves the way for progress in other real-world applications of machine
learning.
Authors' Contributions
The authors contributed in the following way: MK and UB had the initial idea. MK, UB, SS, and AZ each
contributed substantially to the mathematical modelling, the design and implementation of algorithms, the conception
and execution of experiments, and the writing of the manuscript. PL had some share in the initial phase and KRM
contributed to the text. Most of the work was done at previous affiliations of several authors: Fraunhofer
Institute FIRST (Berlin), Technical University Berlin, and the Friedrich Miescher Laboratory (Tübingen).
Acknowledgments
This work was supported in part by the German BMBF grant REMIND (FKZ 01-IS07007A) and by the European Community under the PASCAL2 Network of Excellence (ICT-216886).
References
[1] T. Abeel, Y. V. de Peer, and Y. Saeys. Towards a gold standard for promoter prediction evaluation. Bioinformatics, 2009.
[2] F. R. Bach. Consistency of the group lasso and multiple kernel learning. J. Mach. Learn. Res., 9:1179–1225, 2008.
[3] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proc. 21st ICML. ACM, 2004.
[4] V. B. Bajic, S. L. Tan, Y. Suzuki, and S. Sugano. Promoter prediction analysis on the whole human genome. Nature Biotechnology, 22(11):1467–1473, 2004.
[5] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.
[6] O. Chapelle and A. Rakotomamonjy. Second order optimization of kernel parameters. In Proc. of the NIPS Workshop on Kernel Learning: Automatic Selection of Optimal Kernels, 2008.
[7] C. Cortes, A. Gretton, G. Lanckriet, M. Mohri, and A. Rostamizadeh. Proceedings of the NIPS Workshop on Kernel Learning: Automatic Selection of Optimal Kernels, 2008.
[8] R. Hettich and K. O. Kortanek. Semi-infinite programming: theory, methods, and applications. SIAM Rev., 35(3):380–429, 1993.
[9] S. Ji, L. Sun, R. Jin, and J. Ye. Multi-label multiple kernel learning. In Advances in Neural Information Processing Systems, 2009.
[10] G. Lanckriet, N. Cristianini, L. E. Ghaoui, P. Bartlett, and M. I. Jordan. Learning the kernel matrix with semi-definite programming. JMLR, 5:27–72, 2004.
[11] H. Leeb and B. M. Pötscher. Sparse estimators and the oracle property, or the return of Hodges' estimator. Journal of Econometrics, 142:201–211, 2008.
[12] C. Longworth and M. J. F. Gales. Combining derivative and parametric kernels for speaker verification. IEEE Transactions in Audio, Speech and Language Processing, 17(4):748–757, 2009.
[13] C. A. Micchelli and M. Pontil. Learning the kernel function via regularization. Journal of Machine Learning Research, 6:1099–1125, 2005.
[14] Y. Nardi and A. Rinaldo. On the asymptotic properties of the group lasso estimator for linear models. Electron. J. Statist., 2:605–633, 2008.
[15] S. Olhede, M. Pontil, and J. Shawe-Taylor. Proceedings of the PASCAL2 Workshop on Sparsity in Machine Learning and Statistics, 2009.
[16] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[17] C. S. Ong and A. Zien. An automated combination of kernels for predicting protein subcellular localization. In Proc. of the 8th Workshop on Algorithms in Bioinformatics, 2008.
[18] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. More efficiency in multiple kernel learning. In ICML, pages 775–782, 2007.
[19] A. Rakotomamonjy, F. Bach, S. Canu, and Y. Grandvalet. SimpleMKL. Journal of Machine Learning Research, 9:2491–2521, 2008.
[20] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[21] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. Journal of Machine Learning Research, 7:1531–1565, July 2006.
[22] S. Sonnenburg, A. Zien, and G. Rätsch. ARTS: Accurate recognition of transcription starts in human. Bioinformatics, 22(14):e472–e480, 2006.
[23] Y. Suzuki, R. Yamashita, K. Nakai, and S. Sugano. dbTSS: Database of human transcriptional start sites and full-length cDNAs. Nucleic Acids Research, 30(1):328–331, 2002.
[24] M. Szafranski, Y. Grandvalet, and A. Rakotomamonjy. Composite kernel learning. In Proceedings of the International Conference on Machine Learning, 2008.
[25] M. Varma and D. Ray. Learning the discriminative power-invariance trade-off. In IEEE 11th International Conference on Computer Vision (ICCV), pages 1–8, 2007.
[26] Z. Xu, R. Jin, I. King, and M. Lyu. An extended level method for efficient multiple kernel learning. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1825–1832. 2009.
[27] A. Zien and C. S. Ong. Multiclass multiple kernel learning. In Proceedings of the 24th International Conference on Machine Learning (ICML), pages 1191–1198. ACM, 2007.
Varun Kanade
Harvard University
[email protected]
Adam Tauman Kalai
Microsoft Research
[email protected]
Abstract
We prove strong noise-tolerance properties of a potential-based boosting algorithm, similar to MadaBoost (Domingo and Watanabe, 2000) and SmoothBoost
(Servedio, 2003). Our analysis is in the agnostic framework of Kearns, Schapire
and Sellie (1994), giving polynomial-time guarantees in the presence of arbitrary
noise. A remarkable feature of our algorithm is that it can be implemented without reweighting examples, by randomly relabeling them instead. Our boosting
theorem gives, as easy corollaries, alternative derivations of two recent nontrivial results in computational learning theory: agnostically learning decision trees
(Gopalan et al, 2008) and agnostically learning halfspaces (Kalai et al, 2005).
Experiments suggest that the algorithm performs similarly to MadaBoost.
1 Introduction
Boosting procedures attempt to improve the accuracy of general machine learning algorithms,
through repeated executions on reweighted data. Aggressive reweighting of data may lead to poor
performance in the presence of certain types of noise [1]. This has been addressed by a number of
?robust? boosting algorithms, such as SmoothBoost [2, 3] and MadaBoost [4] as well as boosting
by branching programs [5, 6]. Some of these algorithms are potential-based boosters, i.e., natural variants on AdaBoost [7], while others are perhaps less practical but have stronger theoretical
guarantees in the presence of noise.
The present work gives a simple potential-based boosting algorithm with guarantees in the (arbitrary noise) agnostic learning setting [8, 9]. A unique feature of our algorithm, illustrated in Figure
1, is that it does not alter the distribution on unlabeled examples but rather it alters the labels. This
enables us to prove a strong boosting theorem in which the weak learner need only succeed for one
distribution on unlabeled examples. To the best of our knowledge, earlier weak-to-strong boosting
theorems have always relied on the ability of the weak learner to succeed under arbitrary distributions. The utility of our boosting theorem is demonstrated by re-deriving two non-trivial results
in computational learning theory, namely agnostically learning decision trees [10] and agnostically
learning halfspaces [11], which were previously solved using very different techniques.
The main contributions of this paper are, first, giving the first provably noise-tolerant analysis of a
potential-based boosting algorithm, and, second, giving a distribution-specific boosting theorem that
does not require the weak learner to learn over all distributions on x ? X. This is in contrast to recent
work by Long and Servedio, showing that convex potential boosters cannot work in the presence of
random classification noise [12]. The present algorithm circumvents that impossibility result in two
ways. First, the algorithm has the possibility of negating the current hypothesis and hence is not
technically a standard potential-based boosting algorithm. Second, weak agnostic learning is more
challenging than weak learning with random classification noise, in the sense that an algorithm
which is a weak-learner in the random classification noise setting need not be a weak-learner in the
agnostic setting.
Related work. There is a substantial literature on robust boosting algorithms, including algorithms
already mentioned, MadaBoost, SmoothBoost, as well as LogitBoost [13], BrownBoost [14],
Simplified Boosting by Relabeling Procedure
Inputs: (x_1, y_1), ..., (x_m, y_m) ∈ X × {−1, 1}, T ≥ 1, and weak learner W.
Output: classifier h : X → {−1, 1}.
1. Let H^0 = 0.
2. For t = 1, ..., T:
   (a) For i = 1, ..., m:
       • w_i^t = min{1, exp(−H^{t−1}(x_i) y_i)}
       • With probability w_i^t, set ŷ_i^t = y_i; otherwise pick ŷ_i^t ∈ {−1, 1} uniformly at random.
   (b) g^t = W((x_1, ŷ_1^t), ..., (x_m, ŷ_m^t)).
   (c) h^t = argmax_{g ∈ {g^t, −sign(H^{t−1})}} Σ_i w_i^t y_i g(x_i).   /* possibly take negated hypothesis */
   (d) γ^t = (1/m) Σ_{i=1}^m w_i^t y_i h^t(x_i).
   (e) H^t(x) = H^{t−1}(x) + γ^t h^t(x).
3. Output h = sign(H^T) as hypothesis.

Figure 1: Simplified Boosting by Relabeling Procedure. Each epoch, the algorithm runs the weak
learner on relabeled data ⟨(x_i, ŷ_i^t)⟩_{i=1}^m. In traditional boosting, on each epoch, H^t is a linear combination of weak hypotheses. For our agnostic analysis, we also need to include the negated current
hypothesis, −sign(H^{t−1}) : X → {−1, 1}, as a possible weak classifier. In practice, to avoid
adding noise, each example would be replaced with three weighted examples: (x_i, y_i) with weight
w_i^t, and (x_i, ±1) each with weight (1 − w_i^t)/2.
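For concreteness, a minimal implementation of Figure 1, evaluated on the training points themselves, might look as follows; the weak-learner interface and the function names are ours, not prescribed by the paper:

```python
import numpy as np

def relabel_boost(X, y, weak_learner, T, seed=0):
    """Minimal sketch of Figure 1 on the training set. `weak_learner(X, labels)`
    is assumed to return predictions in {-1, +1} on X."""
    rng = np.random.default_rng(seed)
    m = len(y)
    H = np.zeros(m)                                   # values H^{t-1}(x_i)
    for t in range(T):
        w = np.minimum(1.0, np.exp(-H * y))           # w_i^t
        keep = rng.random(m) < w                      # keep y_i with prob. w_i^t
        y_til = np.where(keep, y, rng.choice([-1, 1], size=m))
        g = weak_learner(X, y_til)                    # predictions of g^t on X
        neg = np.where(H >= 0, -1.0, 1.0)             # -sign(H^{t-1}), sign(0) := 1
        # step (c): possibly take the negated current hypothesis
        h = g if np.dot(w * y, g) >= np.dot(w * y, neg) else neg
        gamma = np.mean(w * y * h)                    # step size of step (d)
        H = H + gamma * h
    return np.where(H >= 0, 1, -1)                    # labels of sign(H^T) on X
```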
NadaBoost [15] and others [16, 17], including extensive experimentation [18, 15, 19]. These are all simple boosting algorithms whose output is a weighted majority of classifiers. Many have been shown
to have formal boosting properties (weak to strong PAC-learning) in a noiseless setting, or partial
boosting properties in noisy settings. There has also been a line of work on boosting algorithms that
provably boost from weak to strong learners either under agnostic or random classification noise,
using branching programs [17, 20, 5, 21, 6]. Our results are stronger than those in the recent work
of Kalai, Mansour, Verbin [6], for two main reasons. First, we propose a simple potential-based
algorithm that can be implemented efficiently. Second, since we don?t change the distribution over
unlabeled examples, we can boost distribution-specific weak learners. In recent work, using a similar idea of relabeling, Kalai, Kanade and Mansour [22] proved that the class of DNFs is learnable in
a one-sided-error agnostic learning model. Their algorithm is essentially a simpler form of boosting.
Experiments. Our boosting procedure is quite similar to MadaBoost. The main differences are: (1)
there is the possibility of using the negation of the current hypothesis at each step, (2) examples are
relabeled rather than reweighted, and (3) the step size is slightly different. The goal of experiments
was to understand how significant these differences may be in practice. Preliminary experimental
results, presented in Section 5, suggest that all of these modifications are less important in practice
than theory. Hence, the present simple analysis can be viewed as a theoretical justification for the
noise-tolerance of MadaBoost and SmoothBoost.
1.1 Preliminaries
In the agnostic setting, we consider learning with respect to a distribution over X × Y. For simplicity,
we will take X to be finite or countable and Y = {−1, 1}. Formally, learning is with respect to some
class of functions, C, where each c ∈ C is a binary classifier c : X → {−1, 1}. There is an arbitrary
distribution μ over X and an arbitrary target function f : X → [−1, 1]. Together these determine
an arbitrary joint distribution D = ⟨μ, f⟩ over X × {−1, 1}, where D(x, y) = μ(x)(1 + y f(x))/2, i.e.,
f(x) = E_D[y|x]. The error and correlation¹ of a classifier h : X → {−1, 1} with respect to D are
respectively defined as

    err(h, D) = Pr_{(x,y)∼D}[h(x) ≠ y],
    cor(h, D) = E_{(x,y)∼D}[h(x) y] = E_{x∼μ}[h(x) f(x)] = 1 − 2 err(h, D).

¹This quantity is typically referred to as the edge in the boosting literature. However, cor(h, D) = 2 edge(h, D)
according to the standard notation, hence we use the notation cor.
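In empirical form, these two quantities are simply sample averages; a tiny sketch (the helper names are ours):

```python
import numpy as np

def err(h, x, y):
    """Empirical error Pr[h(x) != y] over a sample with labels in {-1, +1}."""
    return float(np.mean(h(x) != y))

def cor(h, x, y):
    """Empirical correlation E[h(x) y]; equals 1 - 2*err for +-1 labels."""
    return float(np.mean(h(x) * y))
```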
We will omit D when understood from context. The goal of the learning algorithm is to achieve
error (equivalently, correlation) arbitrarily close to that of the best classifier in C, namely,

    err(C) = err(C, D) = inf_{c∈C} err(c, D);    cor(C) = cor(C, D) = sup_{c∈C} cor(c, D).
A γ-weakly accurate classifier [23] for PAC (noiseless) learning is simply one whose correlation is
at least γ (for some γ ∈ (0, 1)). A different definition of a weakly accurate classifier is appropriate in
the agnostic setting. Namely, for some γ ∈ (0, 1), h : X → {−1, 1} is said to be γ-optimal for C
(and D) if

    cor(h, D) ≥ γ cor(C, D).

Hence, if the labels are totally random then a weak hypothesis need not have any correlation over
random guessing. On the other hand, in a noiseless setting, where cor(C) = 1, this is equivalent
to a γ-weakly accurate hypothesis. The goal is to boost from an algorithm capable of outputting
γ-optimal hypotheses to one which outputs a nearly 1-optimal hypothesis, even for small γ.
Let D be a distribution over X × {−1, 1}. Let w : X × {−1, 1} → [0, 1] be a weighting function.
We now define the distribution D relabeled by w, denoted R_{D,w}. Procedurally, one can think of generating a
sample from R_{D,w} by drawing an example (x, y) from D, then with probability w(x, y), outputting
(x, y) directly, and with probability 1 − w(x, y), outputting (x, y′) where y′ is uniformly random in
{−1, 1}. Formally,

    R_{D,w}(x, y) = D(x, y) ( w(x, y) + (1 − w(x, y))/2 ) + D(x, −y) (1 − w(x, −y))/2.
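The procedural description translates directly into code; a minimal sketch (our own helper, with w a callable weighting function):

```python
import numpy as np

def sample_relabeled(x, y, w, rng=np.random.default_rng(0)):
    """Draw from R_{D,w} given a draw (x, y) from D: keep the label with
    probability w(x, y), otherwise replace it with a fair coin flip,
    mirroring the procedural definition above."""
    if rng.random() < w(x, y):
        return x, y
    return x, int(rng.choice([-1, 1]))
```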
Note that D and R_{D,w} have the same marginal distribution over unlabeled examples x ∈ X. Also,
observe that, for any D, w, and h : X → R,

    E_{(x,y)∼R_{D,w}}[h(x) y] = E_{(x,y)∼D}[h(x) y w(x, y)].    (1)
This can be seen from the procedural interpretation above. When (x, y) is returned directly, which
happens with probability w(x, y), we get a contribution of h(x)y, while E[h(x)y′] = 0 for uniform
y′ ∈ {−1, 1}.
It is possible to describe traditional supervised learning and active (query) learning in the same
framework. A general (m, q)-learning algorithm is given m unlabeled examples ⟨x_1, ..., x_m⟩, and
may make q label queries to a query oracle L : X → {−1, 1}, and it outputs a classifier h : X →
{−1, 1}. The queries may be active, meaning that queries may only be made on training examples
x_i, or membership queries, meaning that arbitrary examples x ∈ X may be queried. The active query
setting where q = m is the standard supervised learning setting where all m labels may be queried.
One can similarly model semi-supervised learning.
Since our boosting procedure does not change the distribution over unlabeled examples, it offers
two advantages: (1) agnostic weak learning may be defined with respect to a single distribution μ
over unlabeled examples, and (2) the weak learning algorithms may be active (or use membership
queries). In particular, the agnostic weak learning hypothesis for C and μ is that for any f : X →
[−1, 1], given examples from D = ⟨μ, f⟩, the learner will output a γ-optimal classifier for C. The
advantages of this new definition are: (a) it is not with respect to every distribution on unlabeled
examples (the algorithm may only have guarantees for certain distributions), and (b) it is more
realistic, as it does not assume noiseless data. Finding such a weak learner may be quite challenging,
since it has to succeed in the agnostic model (where no assumption is made on f); however, it may
be a bit easier in the sense that the learning algorithm need only handle one particular μ.
Definition 1. A learning algorithm is a (γ, ε₀, δ) agnostic weak learner for C and μ over X if,
for any f : X → [−1, 1], with probability ≥ 1 − δ over its random input, the algorithm outputs
h : X → [−1, 1] such that, if D = ⟨μ, f⟩,

    cor(h, D) = E_{x∼μ}[h(x) f(x)] ≥ γ sup_{c∈C} E_{x∼μ}[c(x) f(x)] − ε₀.
The ε₀ parameter typically decreases quickly with the size of the training data, e.g., O(m^{−1/2}). To see
why it is necessary, consider a class C = {c₁, c₂} consisting of only two classifiers, where one of them
has correlation 0 and the other has minuscule positive correlation. Then one cannot even identify
which one has better correlation to within O(m^{−1/2}) using m examples. Note that δ can easily be
made exponentially small (boosting confidence) using standard techniques.
Lastly, we define sign(z) to be 1 if z ≥ 0 and −1 if z < 0.
2 Formal boosting procedure and main results
The formal boosting procedure we analyze is given in Figure 2.
AGNOSTIC BOOSTER
Inputs: ⟨x_1, ..., x_{Tm+s}⟩, T, s ≥ 1, label oracle L : X → {−1, 1}, (m, q)-learner W.
Output: classifier h : X → {−1, 1}.
1. Let H^0 = 0.
2. Query the labels of the first s examples to get y_1 = L(x_1), ..., y_s = L(x_s).
3. For t = 1, ..., T:
   a) Define w^t(x, y) = −φ′(H^{t−1}(x) y) = min{1, exp(−H^{t−1}(x) y)}.
      Define L^t : X → {−1, 1} by:
      i) On input x ∈ X, let y = L(x).
      ii) With probability w^t(x, y), return y.
      iii) Otherwise return −1 or 1 with equal probability.
   b) Let g^t = W(⟨x_{s+(t−1)m+1}, ..., x_{s+tm}⟩, L^t).
   c) Let
      i) γ^t = (1/s) Σ_{i=1}^s g^t(x_i) w^t(x_i, y_i),
      ii) β^t = (1/s) Σ_{i=1}^s −sign(H^{t−1}(x_i)) w^t(x_i, y_i).
   d) If γ^t ≥ β^t: h^t = g^t, α^t = γ^t; else: h^t = −sign(H^{t−1}), α^t = β^t.
   e) H^t(x) = H^{t−1}(x) + α^t h^t(x).
4. Output h = sign(H^τ), where τ is chosen so as to minimize empirical error on
⟨(x_1, y_1), ..., (x_s, y_s)⟩.

Figure 2: Formal Boosting by Relabeling Procedure.
Theorem 1. If W is a (γ, ε₀, δ) weak learner with respect to C and μ, s = (200/(γ²ε²)) log(1/δ), and T = 29/(γ²ε²),
then Algorithm AGNOSTIC BOOSTER (Figure 2) with probability at least 1 − 4δT outputs a hypothesis
h satisfying:

    cor(h, D) ≥ cor(C, D) − ε₀/γ − ε.
Recall that ε₀ is intended to be very small, e.g., O(m^{−1/2}). Also note that the number of calls to
the query oracle L is s plus T times the number of calls made by the weak learner (if the weak
learner is active, then so is the boosting algorithm). We show that two recent non-trivial results,
viz. agnostically learning decision trees and agnostically learning halfspaces, follow as corollaries to
Theorem 1. The two results are stated below:
Theorem 2 ([10]). Let C be the class of binary decision trees on {−1, 1}^n with at most t leaves,
and let U be the uniform distribution on {−1, 1}^n. There exists an algorithm that, when given
t, n, ε, δ > 0 and a label oracle for an arbitrary f : {−1, 1}^n → [−1, 1], makes q = poly(nt/(εδ))
membership queries and, with probability ≥ 1 − δ, outputs h : {−1, 1}^n → {−1, 1} such that, for
U_f = ⟨U, f⟩, err(h, U_f) ≤ err(C, U_f) + ε.
Theorem 3 ([11]). For any fixed ε > 0, there exists a univariate polynomial p such that the following
holds: Let n ≥ 1, let C be the class of halfspaces in n dimensions, let U be the uniform distribution
on {−1, 1}^n, and let f : {−1, 1}^n → [−1, 1] be an arbitrary function. There exists a polynomial-time algorithm that, when given m = p(n log(1/δ)) labeled examples from U_f = ⟨U, f⟩, outputs a
classifier h : {−1, 1}^n → {−1, 1} such that err(h, U_f) ≤ err(C, U_f) + ε. (The algorithm makes no
queries.)
Note that a related theorem was shown for halfspaces over log-concave distributions over X = R^n.
The boosting approach here similarly generalizes to that case in a straightforward manner. This
illustrates how, from the point of view of designing provably efficient agnostic learning algorithms,
the current boosting procedure may be useful.
3 Analysis of Boosting Algorithm
This section is devoted to the analysis of the algorithm AGNOSTIC BOOSTER (see Fig. 2). As is standard,
the boosting algorithm can be viewed as minimizing a convex potential function. However, the proof
is significantly different from the analysis of AdaBoost [7], where they simply use the fact that the
potential is an upper bound on the error rate.
Our analysis has two parts. First, we define a conservative relabeling, such as the one we use, to
be one which never relabels/downweights examples that the booster currently misclassifies. We
show that for a conservative reweighting, either the weak learner will make progress, returning a
hypothesis correlated with the relabeled distribution, or −sign(H^{t−1}) will be correlated with the
relabeled distribution.
Second, if we find a hypothesis correlated with the relabeled distribution, then the potential on round
t will be noticeably lower than that of round t − 1. This is essentially a simple gradient descent
analysis, using a bound on the second derivative of the potential. Since the potential is between 0
and 1, it can only drop so many rounds. This implies that sign(H^t) must be a near-optimal classifier
for some t (though the only sure way we have of knowing which one to pick is by testing accuracy
on held-out data).
The potential function we consider, as in MadaBoost, is defined by φ : R → R,

    φ(z) = 1 − z    if z ≤ 0,
    φ(z) = e^{−z}   if z > 0.

Define the potential of a (real-valued) hypothesis H with respect to a distribution D over X × {−1, 1}
as:

    Φ(H, D) = E_{(x,y)∼D}[φ(y H(x))].    (2)
Note that Φ(H^0, D) = Φ(0, D) = 1. We will show that the potential decreases every round of
the algorithm. Notice that the weights in the boosting algorithm correspond to the derivative of the
potential, because −φ′(z) = min{1, exp(−z)} ∈ [0, 1]. In other words, the weak learning step is
essentially a gradient descent step.
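A small sketch of the potential and the induced weights (our own code; the clipping before exp merely avoids numerical overflow and does not change the values):

```python
import numpy as np

def phi(z):
    """MadaBoost-style potential: 1 - z for z <= 0 and exp(-z) for z > 0."""
    z = np.asarray(z, dtype=float)
    return np.where(z <= 0, 1.0 - z, np.exp(-np.maximum(z, 0.0)))

def weight(margin):
    """Boosting weight -phi'(z) = min{1, exp(-z)} at z = y * H(x):
    equals 1 for nonpositive margins and exp(-z) otherwise."""
    return np.exp(-np.maximum(margin, 0.0))
```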
We next state a key fact about agnostic learning in Lemma 1.
Definition 2. Let h : X → {−1, 1} be a hypothesis. Then a weighting function w : X × {−1, 1} →
[0, 1] is called conservative for h if w(x, −h(x)) = 1 for all x ∈ X.
Note that, if the hypothesis is sign(H^t(x)), then the weighting function defined by −φ′(H^t(x) y) is
conservative if and only if φ′(z) = −1 for all z < 0. We first show that relabeling according to a
conservative weighting function is good in the sense that, if h is far from optimal according to the
original distribution, then after relabeling by w it is even further from optimal.
Lemma 1. For any distribution D over X × {−1, 1}, classifiers c, h : X → {−1, 1}, and any
weighting function w : X × {−1, 1} → [0, 1] conservative for h,

    cor(c, R_{D,w}) − cor(h, R_{D,w}) ≥ cor(c, D) − cor(h, D).
Proof. By the definition of correlation and Eq. (1), cor(c, R_{D,w}) = E_D[c(x) y w(x, y)]. Hence,

    cor(c, R_{D,w}) − cor(h, R_{D,w}) = cor(c, D) − cor(h, D) − E_{(x,y)∼D}[(c(x) − h(x)) y (1 − w(x, y))].

Finally, consider two cases. In the first case, when 1 − w(x, y) > 0, we have h(x)y = 1 while
c(x)y ≤ 1. The second case is 1 − w(x, y) = 0. In either case, (c(x) − h(x)) y (1 − w(x, y)) ≤ 0.
Thus the above equation implies the lemma.
We will use Lemma 1 to show that the weak learner will return a useful hypothesis. The case in
which the weak learner may not return a useful hypothesis is when cor(C, R_{D,w}) = 0, i.e., when even the
optimal classifier has no correlation on the reweighted distribution. This can happen, but in this case
it means that either our current hypothesis is close to optimal, or h = sign(H^{t−1}) is even worse
than random guessing, and hence we can use its negation as a weak agnostic learner.
We next explain how a γ-optimal classifier on the reweighted distribution decreases the potential.
We will use the following linear-approximation property of φ.
Lemma 2. For any x, Δ ∈ R, |φ(x + Δ) − φ(x) − φ′(x)Δ| ≤ Δ²/2.
Proof. This follows from Taylor's theorem and the fact that the function φ is differentiable everywhere,
and that the left and right second derivatives exist everywhere and are bounded by 1.
Let h^t : X → {−1, 1} be the weak hypothesis that the algorithm finds on round t. This may
either be the hypothesis returned by the weak learner W or −sign(H^{t−1}). The following lemma
lower bounds the decrease in potential caused by adding α^t h^t to H^{t−1}. We will apply the following
lemma on each round of the algorithm to show that the potential decreases on each round, as long
as the weak hypothesis h^t has non-negligible correlation and α^t is suitably chosen.
Lemma 3. Consider any function H : X → R, hypothesis h : X → [−1, 1], α ∈ R,
and distribution D over X × {−1, 1}. Let D′ = R_{D,w} be the distribution D relabeled by
w(x, y) = −φ′(y H(x)). Then,

    Φ(H, D) − Φ(H + αh, D) ≥ α cor(h, D′) − α²/2.
Proof. For any (x, y) ∈ X × {−1, 1}, using Lemma 2 we know that:

    φ(H(x) y) − φ((H(x) + αh(x)) y) ≥ αh(x) y (−φ′(H(x) y)) − α²/2.

In the step above we use the fact that h(x)² y² ≤ 1. Taking expectation over (x, y) drawn from D,

    Φ(H, D) − Φ(H + αh, D) ≥ α E_{(x,y)∼D}[h(x) y (−φ′(H(x) y))] − α²/2 = α E_{(x,y)∼D′}[h(x) y] − α²/2.

In the above we have used Eq. (1). We are done, by the definition of cor(h, D′).
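As a purely illustrative sanity check (not part of the argument), the inequality of Lemma 3 can be verified numerically on a random finite distribution; the construction below is entirely our own:

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(z):
    z = np.asarray(z, dtype=float)
    return np.where(z <= 0, 1.0 - z, np.exp(-np.maximum(z, 0.0)))

k = 200
p = rng.dirichlet(np.ones(k))          # D over k atoms (x_i, y_i)
y = rng.choice([-1.0, 1.0], size=k)
H = rng.normal(size=k)                 # values H(x_i)
h = rng.uniform(-1.0, 1.0, size=k)     # values h(x_i) in [-1, 1]
alpha = 0.3
w = np.exp(-np.maximum(y * H, 0.0))    # w(x, y) = -phi'(y H(x)) = min{1, exp(-yH)}
drop = np.sum(p * (phi(y * H) - phi(y * (H + alpha * h))))
bound = alpha * np.sum(p * h * y * w) - alpha ** 2 / 2   # alpha*cor(h, D') - alpha^2/2, via Eq. (1)
assert drop >= bound - 1e-12           # Lemma 3 holds on this instance
```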
Using all the above lemmas, we will show that the algorithm AGNOSTIC BOOSTER returns a hypothesis with correlation (or error) close to that of the best classifier from C. We are now ready to
prove the main theorem.

Proof of Theorem 1. Suppose there exists c ∈ C such that cor(c, D) > cor(sign(H^{t−1}), D) + ε₀/γ + ε. Then,
applying Lemma 1 to H^{t−1} and setting w^t(x, y) = −φ′(H^{t−1}(x) y), we get that

    cor(c, R_{D,w^t}) > cor(sign(H^{t−1}), R_{D,w^t}) + ε₀/γ + ε.    (3)

In this case we want to show that the algorithm successfully finds h^t with cor(h^t, R_{D,w^t}) ≥ γε/3.

Let g^t be the hypothesis returned by the weak learner W. From Step 3c) of the algorithm:

    γ^t = (1/s) Σ_{i=1}^s g^t(x_i) w^t(x_i, y_i);    β^t = (1/s) Σ_{i=1}^s −sign(H^{t−1}(x_i)) w^t(x_i, y_i).

When s = (200/(γ²ε²)) log(1/δ), by Chernoff-Hoeffding bounds we know that γ^t and β^t are within an
additive γε/20 of cor(g^t, R_{D,w^t}) and cor(−sign(H^{t−1}), R_{D,w^t}), respectively, with probability at least
1 − 2δ. As defined in Step 3d) of the algorithm, let α^t = max(γ^t, β^t). We allow the algorithm to fail
with probability 3δ at this stage, possibly caused by the weak learner and by the estimation of γ^t and β^t.

Consider two cases. First, suppose cor(c, R_{D,w^t}) ≥ ε₀/γ + ε/2; in this case, by the weak learning assumption,
cor(g^t, R_{D,w^t}) ≥ γε/2. In the second case, if this does not hold, then cor(−sign(H^{t−1}), R_{D,w^t}) ≥ ε/2
using (3). Thus, even after taking into account the fact that the empirical estimates may be off from
the true correlations by γε/20, we get that cor(h^t, R_{D,w^t}) ≥ γε/3 and that |α^t − cor(h^t, R_{D,w^t})| ≤ γε/20.
Using this and Lemma 3, we get that by setting H^t = H^{t−1} + α^t h^t, the potential decreases by at
least γ²ε²/29.

When t = 0 and H^0 = 0, Φ(H^0, D) = 1. Since Φ(H, D) > 0 for any H : X → R, we can
have at most T = 29/(γ²ε²) such rounds. This guarantees that when the algorithm is run for T rounds, on
some round t the hypothesis sign(H^t) will have correlation at least sup_{c∈C} cor(c, D) − ε₀/γ − 2ε/3. For
s = (200/(γ²ε²)) log(1/δ), the empirical estimate of the correlation of the constructed hypothesis on each
round is within an additive ε/6 of its true correlation, allowing a further failure probability of δ on each
round. Thus the final hypothesis H^τ, which has the highest empirical correlation, satisfies

    cor(H^τ, D) ≥ sup_{c∈C} cor(c, D) − ε₀/γ − ε.

Since there is a failure probability of at most 4δ on each round, the algorithm succeeds with probability at least 1 − 4Tδ.
4 Applications
We show that recent agnostic learning analyses can be dramatically simplified using our boosting
algorithm. Both of the agnostic algorithms are distribution-specific, meaning that they only work for
one distribution (or a family of distributions) μ over unlabeled examples.
4.1 Agnostically Learning Decision Trees
Recent work has shown how to agnostically learn polynomial-sized decision trees using membership queries, by an L1 gradient-projection algorithm [10]. Here, we show that learning decision
trees is quite simple using our distribution-specific boosting theorem and the Kushilevitz-Mansour
membership-query parity learning algorithm as a weak learner [24].
Lemma 4. Running the KM algorithm, using q = poly(n, t, 1/ε₀) queries, and outputting the parity
with the largest-magnitude estimated Fourier coefficient, is a (γ = 1/t, ε₀) agnostic weak learner for
size-t decision trees over the uniform distribution.
The proof of this lemma is simple using results in [24] and is given in Appendix A. Theorem 2 now
follows easily from Lemma 4 and Theorem 1.
4.2 Agnostically Learning Halfspaces
In the case of learning halfspaces, the weak learner simply finds the degree-d term χ_S(x), with
|S| ≤ d, with the greatest empirical correlation (1/m) Σ_{i=1}^m χ_S(x_i) y_i on a data set (x_1, y_1), ..., (x_m, y_m).
The following lemma is useful in analyzing it.
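A brute-force sketch of this weak learner, enumerating all O(n^d) candidate sets and sign-adjusting the winner so its correlation is nonnegative (an implementation choice of ours, consistent with the learner outputting h ∈ [−1, 1]):

```python
import numpy as np
from itertools import combinations

def best_parity(X, y, d):
    """Exhaustively search all parities chi_S with |S| <= d and return the
    one whose empirical correlation (1/m) sum_i chi_S(x_i) y_i has the
    largest magnitude. X is in {-1, +1}^{m x n}; y is in {-1, +1}^m."""
    n = X.shape[1]
    best, best_c = [], float(np.mean(y))        # S = {} gives the constant +1
    for k in range(1, d + 1):
        for S in combinations(range(n), k):
            c = float(np.mean(np.prod(X[:, list(S)], axis=1) * y))
            if abs(c) > abs(best_c):
                best, best_c = list(S), c
    s = 1.0 if best_c >= 0 else -1.0            # negate so the correlation is >= 0
    return lambda Z: s * np.prod(Z[:, best], axis=1)
```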
Lemma 5. For any ε₀ > 0, there exists d ≥ 1 such that the following holds. Let n ≥ 1, let C be the class
of halfspaces in n dimensions, let U be the uniform distribution on {−1, 1}^n, and let f : {−1, 1}^n →
[−1, 1] be an arbitrary function. Then there exists a set S ⊆ [n] of size |S| ≤ d = 20/ε₀⁴ such that

    |cor(χ_S, U_f)| ≥ (cor(C, U_f) − ε₀)/n^d.
Using results from [25], the proofs of Lemma 5 and Theorem 3 are straightforward; they are given in
Appendix B.
5 Experiments
We performed preliminary experiments with the new boosting algorithm presented here on 8 datasets
from the UCI repository [26]. We converted multiclass problems into binary classification problems
by arbitrarily grouping classes, and ran AdaBoost, MadaBoost and Agnostic Boost on these datasets,
using stumps as weak learners. Since stumps can accept weighted examples, we passed the exact
weighted distribution to the weak learner.
Our experiments were performed with fractional relabeling, which means the following: rather than
keeping the label with probability w^t(x, y) and making it completely random with the remaining
probability, we added both (x, y) and (x, −y), with weights (1 + w^t(x, y))/2 and (1 − w^t(x, y))/2,
respectively. Experiments with random relabeling showed that random relabeling performs much
worse than fractional relabeling.
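In code, fractional relabeling simply duplicates the data with the two label signs and the corresponding weights; a minimal sketch (our own, with H holding the values H(x_i)):

```python
import numpy as np

def fractional_relabel(X, y, H):
    """Each (x_i, y_i) is replaced by two weighted examples: (x_i, y_i)
    with weight (1 + w_i)/2 and (x_i, -y_i) with weight (1 - w_i)/2,
    where w_i = min{1, exp(-y_i H(x_i))}."""
    w = np.exp(-np.maximum(y * H, 0.0))        # equals min{1, exp(-y*H)}
    X2 = np.vstack([X, X])
    y2 = np.concatenate([y, -y])
    weights = np.concatenate([(1.0 + w) / 2.0, (1.0 - w) / 2.0])
    return X2, y2, weights
```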
Table 1 summarizes the final test errors on the datasets. In the case of the pima and german datasets,
we observed overfitting, and the reported test errors are the minimum test errors observed for all the
algorithms. In all other cases, the test error rate at the end of round 500 is reported. Only pendigits
had a separate test dataset; for the rest of the datasets we performed 10-fold cross-validation. We also added
random classification noise of 5%, 10% and 20% to the datasets and ran the boosting algorithms on
the modified datasets.
Dataset      | No Added Noise    | 5% Noise          | 10% Noise         | 20% Noise
             | Ada   Mada  Agn   | Ada   Mada  Agn   | Ada   Mada  Agn   | Ada   Mada  Agn
sonar        | 12.4  14.8  15.3  | 23.9  20.6  24.0  | 26.5  26.3  25.1  | 34.2  32.7  34.5
ionosphere   |  8.6   9.1   8.1  | 15.8  17.2  14.4  | 24.2  23.8  21.8  | 32.0  28.2  27.8
pima         | 23.7  23.0  23.6  | 26.1  24.9  25.7  | 27.6  26.4  26.7  | 34.3  34.5  34.0
german       | 23.1  23.6  23.1  | 28.5  27.7  27.5  | 29.0  29.5  30.0  | 35.0  34.5  35.1
waveform     | 10.4  10.2  10.3  | 14.9  15.0  13.9  | 20.1  19.2  19.1  | 27.9  27.3  27.1
magic        | 14.7  14.9  14.5  | 18.2  18.3  18.1  | 21.9  22.0  21.5  | 29.4  29.1  28.7
letter       | 17.4  18.2  18.3  | 20.9  21.4  21.5  | 24.6  24.9  25.2  | 31.4  31.8  31.6
pendigits    |  7.4   7.3   8.2  | 12.1  12.0  13.0  | 16.8  16.3  16.9  | 25.5  25.2  25.3

Table 1: Final test error rates of AdaBoost, MadaBoost and Agnostic Boosting on 8 datasets. The
first column block reports error rates on the original datasets, and the next three report errors on datasets
with 5%, 10% and 20% classification noise added.
6 Conclusion
We show that potential-based agnostic boosting is possible in theory, and also that this may be
done without changing the distribution over unlabeled examples. We show that non-trivial agnostic
learning results, for learning decision trees and halfspaces, can be viewed as simple applications of
our boosting theorem combined with well-known weak learners. Our analysis can be viewed as a
theoretical justification of the noise-tolerance properties of algorithms like MadaBoost and SmoothBoost.
Preliminary experiments show that the performance of our boosting algorithm is comparable to that
of MadaBoost and AdaBoost. A more thorough empirical evaluation of our boosting procedure using
different weak learners is part of future research.
References
[1] T. G. Dietterich. An experimental comparison of three methods for constructing ensembles of decision trees: bagging, boosting, and randomization. Machine Learning, 40(2):139–158, 2000.
[2] R. Servedio. Smooth boosting and learning with malicious noise. Journal of Machine Learning Research, 4:633–648, 2003.
[3] D. Gavinsky. Optimally-smooth adaptive boosting and application to agnostic learning. Journal of Machine Learning Research, 4:101–117, 2003.
[4] C. Domingo and O. Watanabe. MadaBoost: A modification of AdaBoost. In Proceedings of the Thirteenth Annual Conference on Learning Theory, pages 180–189, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc.
[5] A. Kalai and R. Servedio. Boosting in the presence of noise. In Proceedings of the 35th Annual Symposium on Theory of Computing (STOC), pages 196–205, 2003.
[6] A. T. Kalai, Y. Mansour, and E. Verbin. On agnostic boosting and parity learning. In STOC '08: Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pages 629–638, New York, NY, USA, 2008. ACM.
[7] Y. Freund and R. Schapire. Game theory, on-line prediction and boosting. In Proceedings of the Ninth Annual Conference on Computational Learning Theory, pages 325–332, 1996.
[8] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Inf. Comput., 100(1):78–150, 1992.
[9] M. Kearns, R. Schapire, and L. Sellie. Toward efficient agnostic learning. Machine Learning, 17(2):115–141, 1994.
[10] P. Gopalan, A. T. Kalai, and A. R. Klivans. Agnostically learning decision trees. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, pages 527–536, New York, NY, USA, 2008. ACM.
[11] A. T. Kalai, A. R. Klivans, Y. Mansour, and R. Servedio. Agnostically learning halfspaces. In Proc. 46th IEEE Symp. on Foundations of Computer Science (FOCS'05), 2005.
[12] P. M. Long and R. A. Servedio. Random classification noise defeats all convex potential boosters. In ICML, pages 608–615, 2008.
[13] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28:2000, 1998.
[14] Y. Freund. An adaptive version of the boost-by-majority algorithm. In Proceedings of the Twelfth Annual Conference on Computational Learning Theory, pages 102–113, 1999.
[15] M. Nakamura, H. Nomiya, and K. Uehara. Improvement of boosting algorithm by modifying the weighting rule. Annals of Mathematics and Artificial Intelligence, 41(1):95–109, 2004.
[16] T. Bylander and L. Tate. Using validation sets to avoid overfitting in AdaBoost. In G. Sutcliffe and R. Goebel, editors, FLAIRS Conference, pages 544–549. AAAI Press, 2006.
[17] S. Ben-David, P. M. Long, and Y. Mansour. Agnostic boosting. In Proceedings of the 14th Annual Conference on Computational Learning Theory, COLT 2001, volume 2111 of Lecture Notes in Artificial Intelligence, pages 507–516. Springer, 2001.
[18] R. A. McDonald, D. J. Hand, and I. A. Eckley. An empirical comparison of three boosting algorithms on real data sets with artificial class noise. In T. Windeatt and F. Roli, editors, Multiple Classifier Systems, volume 2709 of Lecture Notes in Computer Science, pages 35–44. Springer, 2003.
[19] J. K. Bradley and R. Schapire. FilterBoost: Regression and classification on large datasets. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 185–192. MIT Press, Cambridge, MA, 2008.
[20] Y. Mansour and D. McAllester. Boosting using branching programs. Journal of Computer and System Sciences, 64(1):103–112, 2002.
[21] P. M. Long and R. A. Servedio. Adaptive martingale boosting. In NIPS, pages 977–984, 2008.
[22] A. T. Kalai, V. Kanade, and Y. Mansour. Reliable agnostic learning. In COLT '09: Proceedings of the 22nd Annual Conference on Learning Theory, 2009.
[23] M. Kearns and L. Valiant. Cryptographic limitations on learning Boolean formulae and finite automata. Journal of the ACM, 41(1):67–95, 1994.
[24] E. Kushilevitz and Y. Mansour. Learning decision trees using the Fourier spectrum. SIAM J. on Computing, 22(6):1331–1348, 1993.
[25] A. Klivans, R. O'Donnell, and R. Servedio. Learning intersections and thresholds of halfspaces. Journal of Computer & System Sciences, 68(4):808–840, 2004.
[26] A. Asuncion and D. J. Newman. UCI Machine Learning Repository [http://www.ics.uci.edu/~mlearn/MLRepository.html]. Irvine, CA: University of California, School of Information and Computer Science, 2007.
L1-Penalized Robust Estimation for a Class of Inverse Problems Arising in Multiview Geometry
Arnak S. Dalalyan and Renaud Keriven
IMAGINE/LabIGM,
Université Paris Est - Ecole des Ponts ParisTech,
Marne-la-Vallée, France
dalalyan,[email protected]
Abstract
We propose a new approach to the problem of robust estimation in multiview geometry. Inspired by recent advances in the sparse recovery problem of statistics,
we define our estimator as a Bayesian maximum a posteriori with a multivariate
Laplace prior on the vector describing the outliers. This leads to an estimator
in which the fidelity to the data is measured by the L∞-norm while the regularization is done by the L1-norm. The proposed procedure is fairly fast since the
outlier removal is done by solving one linear program (LP). An important difference compared to existing algorithms is that for our estimator it is not necessary
to specify either the number or the proportion of the outliers. We present strong
theoretical results assessing the accuracy of our procedure, as well as a numerical
example illustrating its efficiency on real data.
1 Introduction
In the present paper, we are concerned with a class of non-linear inverse problems appearing in the
structure and motion problem of multiview geometry. This problem, that have received a great deal
of attention by the computer vision community in last decade, consists in recovering a set of 3D
points (structure) and a set of camera matrices (motion), when only 2D images of the aforementioned 3D points by some cameras are available. Throughout this work we assume that the internal
parameters of cameras as well as their orientations are known. Thus, only the locations of camera
centers and 3D points are to be estimated. In solving the structure and motion problem by state-ofthe-art methods, it is customary to start by establishing correspondences between pairs of 2D data
points. We will assume in the present study that these point correspondences have been already
established.
One can think of the structure and motion problem as the inverse problem of inverting the operator O
that takes as input the set of 3D points and the set of cameras, and produces as output the 2D images
of the 3D points by the cameras. This approach will be further formalized in the next section.
Generally, the operator O is not injective, but in many situations (for example, when for each pair
of cameras there are at least five 3D points in general position that are seen by these cameras [23]),
there is only a small number of inputs, up to an overall similarity transform, having the same image
by O. In such cases, the solutions to the structure and motion problem can be found using algebraic
arguments.
The main flaw of algebraic solutions is their sensitivity to the noise in the data: very often, thanks
to the noise in the measurements, there is no input that could have generated the observed output.
A natural approach to cope with such situations consists in searching for the input providing the
closest possible output to the observed data. Then, a major issue is how to choose the metric in the
output space. A standard approach [16] consists in measuring the distance between two elements
Figure 1: (a) One image from the dinosaur sequence. Camera locations and scene points estimated by the blind L∞-cost minimization (b, c) and by the proposed "outlier-aware" procedure (d, e).
of the output space in the Euclidean L2-norm. In the structure and motion problem with more than two cameras, this leads to a hard non-convex optimization problem. A particularly elegant way of circumventing the non-convexity issues inherent in the use of the L2-norm consists in replacing it by the L∞-norm [15, 18, 24, 25, 27, 13, 26]. It has been shown that, for a number of problems, L∞-norm based estimators can be computed very efficiently using, for example, the iterative bisection method [18, Algorithm 1, p. 1608] that solves a convex program at each iteration. There is however an issue with the L∞-techniques that dampens the enthusiasm of practitioners: they are highly sensitive to outliers (cf. Fig. 1). In fact, among all Lq-metrics with q ≥ 1, the L∞-metric is the most seriously affected by outliers in the data. Two procedures have been introduced [27, 19] that make the L∞-estimator less sensitive to outliers. Although these procedures demonstrate satisfactory empirical performance, they suffer from a lack of sufficient theoretical support assessing the accuracy of the produced estimates.
The purpose of the present work is to introduce and theoretically investigate a new procedure of estimation in the presence of noise and outliers. Our procedure combines the L∞-norm for measuring the fidelity to the data and the L1-norm for regularization. It can be seen as a maximum a posteriori (MAP) estimator under uniformly distributed random noise and a sparsity-favoring prior on the vector of outliers. Interestingly, this study bridges the work on robust estimation in multiview geometry [12, 27, 19, 21] and the theory of sparse recovery in statistics and signal processing [10, 2, 5, 6].
The rest of the paper is organized as follows. The next section gives the precise formulation of the translation estimation and triangulation problem to which the presented methodology can be applied. A brief review of the L∞-norm minimization algorithm is presented in Section 3. In Section 4, we introduce the statistical framework and derive a new procedure as a MAP estimator. The main result on the accuracy of this procedure is stated and proved in Section 5, while Section 6 contains some numerical experiments. The methodology of our study is summarized in Section 7.
2
Translation estimation and triangulation
Let us start by presenting a problem of multiview geometry to which our approach can be successfully applied, namely the problem of translation estimation and triangulation in the case of known rotations. For rotation estimation algorithms, we refer the interested reader to [22, 14] and the references therein.
Let P*_i, i = 1, ..., m, be a sequence of m cameras that are known up to a translation. Recall that a camera is characterized by a 3 × 4 matrix P with real entries that can be written as P = K[R|t], where K is an invertible 3 × 3 matrix called the camera calibration matrix, R is a 3 × 3 rotation matrix and t ∈ R³. We will refer to t as the translation of the camera P. We can thus write P*_i = K_i [R_i | t*_i], i = 1, ..., m. For a set of unknown scene points U*_j, j = 1, ..., n, expressed in homogeneous coordinates (i.e., U*_j is an element of the projective space P³), we assume that noisy images of each U*_j by some cameras P*_i are observed. Thus, we have at our disposal the measurements

    x_ij = ( e_1^T P*_i U*_j / e_3^T P*_i U*_j ; e_2^T P*_i U*_j / e_3^T P*_i U*_j ) + ξ_ij,    j = 1, ..., n,  i ∈ I_j,    (1)

where e_ℓ, ℓ = 1, 2, 3, stands for the unit vector of R³ having one as the ℓ-th coordinate and I_j is the set of indices of the cameras for which the point U*_j is visible. We assume that the set {U*_j} does not contain points at infinity: U*_j = [X*_j^T | 1]^T for some X*_j ∈ R³ and for every j = 1, ..., n.
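To make the measurement model (1) concrete, here is a minimal sketch (Python with NumPy; the cameras, points, and noise level below are made-up illustrative values, not the data used in Section 6) that generates synthetic observations by projecting 3D points through known cameras and adding uniform noise:

    import numpy as np

    rng = np.random.default_rng(0)

    def project(P, U):
        # Project a homogeneous 3D point U (4-vector) with camera P (3x4), as in Eq. (1):
        # returns (e_1^T P U / e_3^T P U, e_2^T P U / e_3^T P U).
        v = P @ U
        return v[:2] / v[2]

    # Made-up scene: m cameras with identity calibration, n points in front of them.
    m, n, sigma = 3, 5, 0.5
    K = np.eye(3)
    cameras = []
    for i in range(m):
        R = np.eye(3)                              # known rotations (identity here)
        t = np.array([0.2 * i, 0.0, 1.0 + i])      # unknown translations to be estimated
        cameras.append(K @ np.hstack([R, t[:, None]]))
    X = rng.uniform(-1, 1, size=(n, 3)) + np.array([0.0, 0.0, 5.0])  # positive depth (cheirality)
    U = np.hstack([X, np.ones((n, 1))])            # homogeneous coordinates [X^T | 1]^T

    # Noisy measurements x_ij = projection + xi_ij, with xi_ij uniform in [-sigma, sigma]^2.
    x = {(i, j): project(P, U[j]) + rng.uniform(-sigma, sigma, size=2)
         for i, P in enumerate(cameras) for j in range(n)}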
We are now in a position to state the problem of translation estimation and triangulation in the context of multiview geometry. It consists in recovering the 3-vectors {t*_i} (translation estimation) and the 3D points {X*_j} (triangulation) from the noisy measurements {x_ij; j = 1, ..., n; i ∈ I_j} ⊂ R². In what follows, we use the notation θ* = (t*_1^T, ..., t*_m^T, X*_1^T, ..., X*_n^T)^T ∈ R^{3(m+n)}. Thus, we are interested in estimating θ*.
Remark 1 (Cheirality). It should be noted right away that if the point U*_j is in front of the camera P*_i, then e_3^T P*_i U*_j ≥ 0. This is termed the cheirality condition. Furthermore, we will assume that none of the true 3D points U*_j lies on the principal plane of a camera P*_i. This assumption implies that e_3^T P*_i U*_j > 0, so that the quotients e_ℓ^T P*_i U*_j / e_3^T P*_i U*_j, ℓ = 1, 2, are well defined.
Remark 2 (Identifiability). The parameter θ we have just defined is, in general, not identifiable from the measurements {x_ij}. In fact, one easily checks that, for every λ ≠ 0 and for every t ∈ R³, the parameters {t*_i, X*_j} and {λ(t*_i − R_i t), λ(X*_j + t)} generate the same measurements. To cope with this issue, we assume that t*_1 = 0_3 and that min_{i,j} e_3^T P*_i U*_j = 1. Thus, in what follows we assume that t*_1 is removed from θ* and θ* ∈ R^{3(m+n−1)}. Further assumptions ensuring the identifiability of θ* are given below.
3
Estimation by Sequential Convex Programming
This section presents results on the estimation of θ based on reprojection error (RE) minimization. This material is essential for understanding the results that are at the core of the present work. In what follows, for every s ≥ 1, we denote by ||x||_s the L_s-norm of a vector x, i.e., ||x||_s^s = Σ_j |x_j|^s if x = (x_1, ..., x_d)^T. As usual, we extend this to s = +∞ by setting ||x||_∞ = max_j |x_j|.
A classical method [16] for estimating the parameter θ is based on minimizing the sum of the squared REs. This defines the estimator θ̂ as a minimizer of the cost function C_{2,2}(θ) = Σ_{i,j} ||x_ij − x_ij(θ)||_2^2, where x_ij(θ) := (e_1^T P_i U_j / e_3^T P_i U_j ; e_2^T P_i U_j / e_3^T P_i U_j) is the 2-vector that we would obtain if θ were the true parameter. It can also be written as

    x_ij(θ) = ( e_1^T K_i (R_i X_j + t_i) / e_3^T K_i (R_i X_j + t_i) ; e_2^T K_i (R_i X_j + t_i) / e_3^T K_i (R_i X_j + t_i) ).    (2)

The minimization of C_{2,2} is a hard nonconvex problem. In general, it does not admit a closed-form solution, and the existing iterative algorithms may often get stuck in local minima. An ingenious idea to overcome this difficulty [15, 17] is based on the minimization of the L∞ cost function

    C_{∞,s}(θ) = max_{j=1,...,n} max_{i∈I_j} ||x_ij − x_ij(θ)||_s,    s ∈ [1, +∞].    (3)

Note that the substitution of the L2-cost function by the L∞-cost function has been proved to lead to improved algorithms in other estimation problems as well, cf., e.g., [8]. This cost function has a clear practical advantage in that all its sublevel sets are convex. This property ensures that all minima of C_{∞,s} form a convex set and that an element of this set can be computed by solving a sequence of convex programs [18], e.g., by the bisection algorithm. Note that for s = 1 and s = +∞, the minimization of C_{∞,s} can be recast as a sequence of LPs.
The main idea behind the bisection algorithm can be summarized as follows. We aim to design an algorithm computing θ̂_s ∈ arg min_θ C_{∞,s}(θ), for any prespecified s ≥ 1, over the set of all vectors θ satisfying the cheirality condition. Let us introduce the residuals r_ij(θ) = x_ij − x_ij(θ), which can be represented as

    r_ij(θ) = ( a_ij1^T θ / c_ij^T θ ; a_ij2^T θ / c_ij^T θ ),    (4)

for some vectors a_ij1, a_ij2, c_ij ∈ R^{3(m+n−1)}. Furthermore, as presented in Remark 2, the cheirality conditions imply the set of linear constraints c_ij^T θ ≥ 1. Thus, the problem of computing θ̂_s can be rewritten as

    minimize γ    subject to    ||r_ij(θ)||_s ≤ γ,  c_ij^T θ ≥ 1.    (5)

Note that the inequality ||r_ij(θ)||_s ≤ γ can be replaced by ||A_ij^T θ||_s ≤ γ c_ij^T θ with A_ij = [a_ij1; a_ij2]. Although (5) is not a convex problem, its solution can be well approximated by solving a sequence of convex feasibility problems.
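For illustration, here is a minimal sketch of the bisection principle for s = +∞ (Python with SciPy; we pose each feasibility test in γ as an LP via scipy.optimize.linprog, whereas the experiments in Section 6 rely on SeDuMi, and the helper names are ours):

    import numpy as np
    from scipy.optimize import linprog

    def feasible(A, C, gamma):
        # Feasibility test for level gamma (case s = infinity): does there exist theta with
        # |a_p^T theta| <= gamma * c_p^T theta for all p, and c_p^T theta >= 1 ?
        # A, C: M x N matrices whose columns are the vectors a_p and c_p.
        M, N = A.shape
        G = np.vstack([(A - gamma * C).T,      #  a_p^T theta - gamma c_p^T theta <= 0
                       (-A - gamma * C).T,     # -a_p^T theta - gamma c_p^T theta <= 0
                       -C.T])                  #  c_p^T theta >= 1
        h = np.concatenate([np.zeros(2 * N), -np.ones(N)])
        res = linprog(np.zeros(M), A_ub=G, b_ub=h,
                      bounds=[(None, None)] * M, method="highs")
        return res.status == 0, res.x

    def bisection(A, C, gamma_hi, tol=1e-4):
        # Halve the interval on gamma until the smallest feasible level is bracketed.
        lo, hi, theta = 0.0, gamma_hi, None
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            ok, x = feasible(A, C, mid)
            if ok:
                hi, theta = mid, x
            else:
                lo = mid
        return hi, theta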
4
Robust estimation by linear programming
This and the next sections contain the main theoretical contribution of the present work. We start
with the precise formulation of the statistical model. We then exhibit a prior distribution on the
unknown parameters of the model that leads to a MAP estimator.
4.1 The statistical model
Let us first observe that, in view of (1) and (4), the model we are considering can be rewritten as

    ( a_ij1^T θ* / c_ij^T θ* ; a_ij2^T θ* / c_ij^T θ* ) = ξ_ij,    j = 1, ..., n;  i ∈ I_j.    (6)

Let N = 2 Σ_{j=1}^n |I_j| be the total number of measurements and let M = 3(n + m − 1) be the size of the vector θ*. Let us denote by A (resp. C) the M × N matrix formed by the concatenation of the column-vectors a_ijℓ (resp. c_ij; to get a matrix of the same size as A, each column of C is duplicated two times). Similarly, let us denote by ξ the N-vector formed by concatenating the vectors ξ_ij. In this notation, Eq. (6) is equivalent to a_p^T θ* = (c_p^T θ*) ξ_p, p = 1, ..., N. This equation defines the statistical model in the case where there is no outlier. To extend this model to the situation where some outliers are present in the measurements, we introduce the vector ω* ∈ R^N defined by ω*_p = a_p^T θ* − (c_p^T θ*) ξ_p, so that ω*_p = 0 if the p-th measurement is an inlier and |ω*_p| > 0 otherwise. This leads us to the model:

    A^T θ* = ω* + diag(C^T θ*) ξ,    (7)

where diag(v) stands for the diagonal matrix having the components of v as diagonal entries.
Statement of the problem: Given the matrices A and C, estimate the parameter vector φ* = [θ*^T; ω*^T]^T based on the following prior information:
C1: Eq. (7) holds with some small noise vector ξ,
C2: min_p c_p^T θ* = 1,
C3: ω* is sparse, i.e., only a small number of coordinates of ω* are different from zero.
4.2 Sparsity prior and MAP estimator
To derive an estimator of the parameter φ*, we place ourselves in the Bayesian framework. To this end, we impose a probabilistic structure on the noise vector ξ and introduce a prior distribution on the unknown vector ω.
Since the noise ξ represents the difference (in pixels) between the measurements and the true image points, it is naturally bounded and generally does not exceed the level of a few pixels. Therefore, it is reasonable to assume that the components of ξ are uniformly distributed in some compact set of R², centered at the origin. We assume in what follows that the subvectors ξ_ij of ξ are uniformly distributed in the square [−σ, σ]² and are mutually independent. Note that this implies that all the coordinates of ξ are independent. In practice, this assumption can be enforced by decorrelating the measurements using the empirical covariance matrix [20]. We define the prior on θ as the uniform distribution on the polytope P = {θ ∈ R^M : C^T θ ≥ 1}, where the inequality is understood componentwise. The density of this distribution is p_1(θ) ∝ 1_P(θ), where ∝ stands for the proportionality relation and 1_P(θ) = 1 if θ ∈ P and 0 otherwise. When P is unbounded, this results in an improper prior, which is however not a problem for defining the Bayes estimator.
The task of choosing a prior on ω is more delicate in that it should reflect the information that ω is sparse. The most natural prior would be one having a density which is a decreasing function of the L0-norm of ω, i.e., of the number of its nonzero coefficients. However, the computation of estimators based on this type of prior is NP-hard. An approach for overcoming this difficulty relies on using the L1-norm instead of the L0-norm. Following this idea, we define the prior distribution on ω by the probability density p_2(ω) ∝ f(||ω||_1), where f is some decreasing function defined on [0, ∞); the most common choice is f(x) = e^{−x}, corresponding to the multivariate Laplace density. Assuming in addition that θ and ω are independent, we get the following prior on φ:

    π(φ) = π(θ; ω) ∝ 1_P(θ) · f(||ω||_1).    (8)
Theorem 1. Assume that the noise ξ has independent entries which are uniformly distributed in [−σ, σ] for some σ > 0. Then the MAP estimator φ̂ = [θ̂^T; ω̂^T]^T based on the prior π defined by Eq. (8) is the solution of the optimization problem:

    minimize ||ω||_1    subject to    |a_p^T θ − ω_p| ≤ σ c_p^T θ, ∀p;    c_p^T θ ≥ 1, ∀p.    (9)

The proof of this theorem is a simple exercise and is left to the reader.
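After the standard split ω = ω⁺ − ω⁻, problem (9) is a single LP. A minimal sketch, assuming the matrices A and C are already assembled (Python/SciPy; the function name and solver choice are ours, not the paper's MATLAB/SeDuMi implementation):

    import numpy as np
    from scipy.optimize import linprog

    def robust_map_lp(A, C, sigma):
        # Solve (9): min ||omega||_1  s.t.  |A^T theta - omega| <= sigma * C^T theta
        # (componentwise) and C^T theta >= 1.  A, C: M x N.  Returns (theta_hat, omega_hat).
        M, N = A.shape
        # Variables: [theta (M), omega_plus (N), omega_minus (N)], omega = plus - minus.
        cost = np.concatenate([np.zeros(M), np.ones(2 * N)])
        I = np.eye(N)
        G = np.vstack([
            np.hstack([(A - sigma * C).T, -I,  I]),   #  a^T th - om <= sigma c^T th
            np.hstack([(-A - sigma * C).T, I, -I]),   # -a^T th + om <= sigma c^T th
            np.hstack([-C.T, np.zeros((N, 2 * N))]),  #  c^T th >= 1
        ])
        h = np.concatenate([np.zeros(2 * N), -np.ones(N)])
        bounds = [(None, None)] * M + [(0, None)] * (2 * N)
        res = linprog(cost, A_ub=G, b_ub=h, bounds=bounds, method="highs")
        theta = res.x[:M]
        omega = res.x[M:M + N] - res.x[M + N:]
        return theta, omega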
Remark 3 (Condition C2). One easily checks that any solution of (9) satisfies condition C2. Indeed, if for some solution φ̂ it were not the case, then φ̄ = φ̂ / min_p c_p^T θ̂ would satisfy the constraints of (9) and ω̄ would have a smaller L1-norm than that of ω̂, which is in contradiction with the fact that φ̂ solves (9).
Remark 4 (The role of σ). In the definition of φ̂, σ is a free parameter that can be interpreted as the level of separation of inliers from outliers. The proposed algorithm implicitly assumes that all the measurements x_ij for which ||ξ_ij||_∞ > σ are outliers, while all the others are treated as inliers. If σ is unknown, a reasonable way of acting is to impose a prior distribution on the possible values of σ and to define the estimator φ̂ as a MAP estimator based on the prior incorporating the uncertainty on σ. When there are no outliers and the prior on σ is decreasing, this approach leads to the estimator minimizing the L∞ cost function. In the presence of outliers, the shape of the prior on σ becomes more important for the definition of the estimator. This is an interesting point for future investigation.
4.3 Two-step procedure
Building on the previous arguments, we introduce the following two-step algorithm.
Input: {a_p, c_p; p = 1, ..., N} and σ.
Step 1: Compute [θ̂^T; ω̂^T]^T as a solution to (9) and set J = {p : ω̂_p = 0}.
Step 2: Apply the bisection algorithm to the reduced data set {x_p; p ∈ J}.
Two observations are in order. First, when applying the bisection algorithm at Step 2, we can use C_{∞,s}(θ̂) as the initial value of γ_u. The second observation is that a better way of acting would be to minimize a weighted L1-norm of ω, where the weight assigned to ω_p is inversely proportional to the depth c_p^T θ*. Since θ* is unknown, a reasonable strategy consists in adding a step between Step 1 and Step 2 which performs the weighted minimization with weights {(c_p^T θ̂)^{−1}; p = 1, ..., N}.
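A sketch of the resulting pipeline, reusing the hypothetical helpers robust_map_lp and bisection from the sketches above (the inlier tolerance is our choice):

    import numpy as np

    def two_step(A, C, sigma, gamma_hi=10.0, tol=1e-6):
        # Step 1: LP-based outlier detection; Step 2: bisection restricted to the inliers.
        theta, omega = robust_map_lp(A, C, sigma)
        J = np.flatnonzero(np.abs(omega) <= tol)   # indices declared inliers
        return bisection(A[:, J], C[:, J], gamma_hi)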
5
Accuracy of estimation
Let us introduce some additional notation. Recall the definition of P and set ∂P = {θ : min_p c_p^T θ = 1} and ∂P− = {θ − θ′ : θ, θ′ ∈ ∂P, θ ≠ θ′}. For every subset of indices J ⊂ {1, ..., N}, we denote by A_J the M × N matrix obtained from A by replacing the columns that have an index outside J by zero. Furthermore, let us define

    δ_J(θ) = sup_{θ′ ∈ ∂P, A^T θ′ ≠ A^T θ} ||A_J^T (θ′ − θ)||_2 / ||A^T (θ′ − θ)||_2,    ∀J ⊂ {1, ..., N},  ∀θ ∈ ∂P.    (10)

One easily checks that δ_J ∈ [0, 1] and δ_J ≤ δ_{J′} if J ⊂ J′.
Assumption A: The real number κ defined by κ = min_{g ∈ ∂P−} ||A^T g||_2 / ||g||_2 is strictly positive.
Assumption A is necessary for identifying the parameter vector θ* even in the case without outliers. In fact, if ω* = 0 and if Assumption A is not fulfilled, then (assuming for simplicity that ∂P is compact) there exists g ∈ ∂P− such that A^T g = 0. That is, given the matrices A and C, there are two distinct vectors θ¹ and θ² in ∂P such that A^T θ¹ = A^T θ². Therefore, if eventually θ¹ is the true parameter vector satisfying C1 and C3, then θ² satisfies these conditions as well. As a consequence, the true vector cannot be accurately estimated.
5.1 The noise free case
To evaluate the quality of estimation, we first place ourselves in the case where ξ = 0. The estimator φ̂ of φ* is then defined as a solution to the optimization problem

    min ||ω||_1 over φ = [θ^T; ω^T]^T    s.t.    A^T θ = ω,  C^T θ ≥ 1.    (11)

From now on, for every index set T and for every vector h, h_T stands for the vector equal to h on the index set T and zero elsewhere. The complementary set of T will be denoted by T^c.
Theorem 2. Let Assumption A be fulfilled and let T0 (resp. T1) denote the index set corresponding to the locations of the S largest entries (in absolute value) of ω* (resp. of (ω* − ω̂)_{T0^c}). If δ_{T0}(θ*) + δ_{T0∪T1}(θ*) < 1 then, for some constant C0, it holds:

    ||φ̂ − φ*||_2 ≤ C0 ||ω* − ω*_S||_1,    (12)

where ω*_S stands for the vector ω* with all but the S largest entries set to zero. In particular, if ω* has no more than S nonzero entries, then the estimation is exact: φ̂ = φ*.
Proof. We set h = ω* − ω̂ and g = θ* − θ̂. It follows from Remark 3 that g ∈ ∂P−. To proceed with the proof, we need the following auxiliary result, the proof of which can be easily deduced from [4].
Lemma 1. Let v ∈ R^d be some vector and let S ≤ d be a positive integer. If we denote by T the indices of the S largest entries of the vector |v|, then ||v_{T^c}||_2 ≤ S^{−1/2} ||v||_1.
Applying Lemma 1 to the vector v = h_{T0^c} and to the index set T = T1, we get

    ||h_{(T0∪T1)^c}||_2 ≤ S^{−1/2} ||h_{T0^c}||_1.    (13)

On the other hand, summing up the inequalities ||h_{T0^c}||_1 ≤ ||(ω* − h)_{T0^c}||_1 + ||ω*_{T0^c}||_1 and ||ω*_{T0}||_1 ≤ ||(ω* − h)_{T0}||_1 + ||h_{T0}||_1, and using the relation ||(ω* − h)_{T0}||_1 + ||(ω* − h)_{T0^c}||_1 = ||ω* − h||_1 = ||ω̂||_1, we get

    ||h_{T0^c}||_1 + ||ω*_{T0}||_1 ≤ ||ω̂||_1 + ||ω*_{T0^c}||_1 + ||h_{T0}||_1.    (14)

Since ω* satisfies the constraints of the optimization problem (11), a solution of which is φ̂, we have ||ω̂||_1 ≤ ||ω*||_1. This inequality, in conjunction with (13) and (14), implies

    ||h_{(T0∪T1)^c}||_2 ≤ S^{−1/2} ||h_{T0}||_1 + 2 S^{−1/2} ||ω*_{T0^c}||_1 ≤ ||h_{T0}||_2 + 2 S^{−1/2} ||ω*_{T0^c}||_1,    (15)

where the last step follows from the Cauchy-Schwarz inequality. Using once again the fact that both φ̂ and φ* satisfy the constraints of (11), we get h = A^T g. Therefore,

    ||h||_2 ≤ ||h_{T0∪T1}||_2 + ||h_{(T0∪T1)^c}||_2 ≤ ||h_{T0∪T1}||_2 + ||h_{T0}||_2 + 2 S^{−1/2} ||ω*_{T0^c}||_1
           = ||A_{T0∪T1}^T g||_2 + ||A_{T0}^T g||_2 + 2 S^{−1/2} ||ω*_{T0^c}||_1 ≤ (δ_{T0∪T1} + δ_{T0}) ||A^T g||_2 + 2 S^{−1/2} ||ω*_{T0^c}||_1
           = (δ_{T0∪T1} + δ_{T0}) ||h||_2 + 2 S^{−1/2} ||ω*_{T0^c}||_1.    (16)

Since ω*_{T0^c} = ω* − ω*_S, the last inequality yields ||h||_2 ≤ 2 S^{−1/2} / (1 − δ_{T0} − δ_{T0∪T1}) · ||ω* − ω*_S||_1. To complete the proof, it suffices to observe that

    ||φ̂ − φ*||_2 ≤ ||g||_2 + ||h||_2 ≤ κ^{−1} ||A^T g||_2 + ||h||_2 = (κ^{−1} + 1) ||h||_2 ≤ C0 ||ω* − ω*_S||_1.
Remark 5. The assumption δ_{T0}(θ*) + δ_{T0∪T1}(θ*) < 1 is close in spirit to the restricted isometry assumption (cf., e.g., [10, 6, 3] and the references therein). It is very likely that results similar to that of Theorem 2 hold under other kinds of assumptions recently introduced in the theory of L1-minimization [11, 29, 2]. This investigation is left for future research.
We emphasize that the constant C0 is rather small. For example, if δ_{T0}(θ*) + δ_{T0∪T1}(θ*) = 0.5, then max(||ω̂ − ω*||_2, ||A^T (θ̂ − θ*)||_2) ≤ (4/√S) ||ω* − ω*_S||_1.
5.2 The noisy case
The assumption ξ = 0 is an idealization of the reality that has the advantage of simplifying the mathematical derivations. While such a simplified setting is useful for conveying the main ideas behind the proposed methodology, it is of major practical importance to discuss the extension to the more realistic noisy model. To this end, we introduce the vector ξ̂ of estimated residuals satisfying

    A^T θ̂ = ω̂ + diag(C^T θ̂) ξ̂    and    ||ξ̂||_∞ ≤ σ.

Theorem 3. Let the assumptions of Theorem 2 be fulfilled. If for some ε > 0 we have max(||diag(C^T θ̂) ξ̂||_2; ||diag(C^T θ*) ξ||_2) ≤ ε, then

    ||φ̂ − φ*||_2 ≤ C0 ||ω* − ω*_S||_1 + C1 ε,    (17)

where C0 and C1 are some constants.
Proof. Let us define η = diag(C^T θ*) ξ and η̂ = diag(C^T θ̂) ξ̂. On the one hand, in view of (15), we have ||h_{(T0∪T1)^c}||_2 ≤ ||h_{T0}||_2 + 2 S^{−1/2} ||ω*_{T0^c}||_1 with h = ω* − ω̂. On the other hand, since h = A^T g + η̂ − η, we have

    ||h_{(T0∪T1)^c}||_2 ≥ ||A^T_{(T0∪T1)^c} g||_2 − ||(η̂ − η)_{(T0∪T1)^c}||_2 ≥ ||A^T_{(T0∪T1)^c} g||_2 − 2ε

and ||h_{T0}||_2 ≤ ||A^T_{T0} g||_2 + ||(η̂ − η)_{T0}||_2 ≤ ||A^T_{T0} g||_2 + 2ε. These inequalities imply that

    ||A^T g||_2 ≤ ||A^T_{T0∪T1} g||_2 + ||A^T_{T0} g||_2 + 4ε + 2 S^{−1/2} ||ω*_{T0^c}||_1
               ≤ (δ_{T0∪T1} + δ_{T0}) ||A^T g||_2 + 4ε + 2 S^{−1/2} ||ω*_{T0^c}||_1.

To complete the proof, it suffices to remark that

    ||φ̂ − φ*||_2 ≤ ||h||_2 + ||g||_2 ≤ ||A^T g||_2 + ||g||_2 + 2ε ≤ (1 + κ^{−1}) ||A^T g||_2 + 2ε
               ≤ (1 + κ^{−1}) / (1 − δ_{T0∪T1} − δ_{T0}) · (4ε + 2 S^{−1/2} ||ω*_{T0^c}||_1) + 2ε.
5.3 Discussion
The main assumption in Theorems 2 and 3 is that δ_{T0}(θ*) + δ_{T0∪T1}(θ*) < 1. While this assumption is by no means necessary, it should be recognized that it cannot be significantly relaxed. In fact, the condition δ_{T0}(θ*) < 1 is necessary for θ* to be consistently estimated. Indeed, if δ_{T0}(θ*) = 1, then it is possible to find θ′ ∈ ∂P such that A^T_{T0^c} θ′ = A^T_{T0^c} θ*, which makes the problem of robust estimation ill-posed, since both θ* and θ′ satisfy (7) with the same number of outliers.
Note also that the mapping J ↦ δ_J(θ) is subadditive, that is δ_{J∪J′}(θ) ≤ δ_J(θ) + δ_{J′}(θ). Therefore, the condition of Thm. 2 is fulfilled as soon as δ_J(θ*) < 1/3 for every index set J of cardinality ≤ S. Thus, the condition max_{J:|J|≤S} δ_J(θ*) < 1/3 is sufficient for identifying θ* in the presence of S outliers, while max_{J:|J|≤S} δ_J(θ*) < 1 is necessary.
A simple upper bound on δ_J, obtained by replacing the sup over ∂P by the sup over R^M, is δ_J(θ) ≤ ||O_J^T||, ∀θ ∈ ∂P, where O = O(A) stands for the rank(A) × N matrix with orthonormal rows spanning the image of A^T (the matrix norm is understood as the largest singular value). Note that for a given J, the computation of ||O_J^T|| is far easier than that of δ_J(θ).
We emphasize that the model we have investigated comprises the robust linear model as a particular case. Indeed, if the last row of the matrix A is equal to zero, as well as all the rows of C except the last one, which has all its entries equal to one, then the model described by (7) is nothing else but a linear model with unknown noise variance.
To close this section, let us stress that other approaches (cf., for instance, [9, 7, 1]) recently introduced in sparse learning and estimation may potentially be useful for the problem of robust estimation.
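This bound is straightforward to evaluate numerically; a minimal sketch (Python/NumPy, with a helper name of our choosing) is:

    import numpy as np

    def delta_upper_bound(A, J):
        # Upper bound on delta_J(theta): the largest singular value of O_J^T, where the
        # rows of O form an orthonormal basis of the image of A^T (computed from the SVD of A).
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        O = Vt[s > 1e-12 * s.max(), :]       # rank(A) x N, orthonormal rows spanning Im(A^T)
        return np.linalg.norm(O[:, J], 2)    # spectral norm of O_J (columns outside J zeroed)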
6
Numerical illustration
We implemented the algorithm in MatLab, using the SeDuMi package for solving LPs [28]. We applied our algorithm of robust estimation to the well-known dinosaur sequence (available at http://www.robots.ox.ac.uk/~vgg/data1.html), which consists
Figure 2: (a)-(c) Overhead view of the scene points estimated by the KK-procedure (a), by the SH-procedure (b), and by our procedure (c). (d) Boxplots of the errors when estimating the camera centers by our procedure (left) and by the KK-procedure. (e) Boxplots of the errors when estimating the camera centers by our procedure (left) and by the SH-procedure.
of 36 images of a dinosaur on a turntable; see Fig. 1 (a) for one example. The 2D image points tracked across the image sequence and the projection matrices of the 36 cameras are provided as well. There are 16,432 image points corresponding to 4,983 scene points. This data is severely affected by outliers, which results in a very poor accuracy of the "blind" L∞-cost minimization procedure. Its maximal RE equals 63 pixels and, as shown in Fig. 1, the estimated camera centers are not on the same plane and the scatter plot of scene points is inaccurate.
We ran our procedure with σ = 0.5 pixel. If for the p-th measurement |ω̂_p / c_p^T θ̂| was larger than σ/4, then it was considered an outlier and removed from the dataset. The corresponding 3D scene point was also removed if, after the step of outlier removal, it was seen by only one camera. This resulted in removing 1,306 image points and 297 scene points. Plots (d) and (e) of Fig. 1 show the estimated camera centers and estimated scene points. We see, in particular, that the camera centers are almost coplanar. Note that in this example, the second step of the procedure described in Section 4.3 does not improve on the estimator computed at the first step. Thus, an accurate estimate is obtained by solving only one linear program.
We compared our procedure with the procedures proposed by Sim and Hartley [27], hereafter referred to as the SH-procedure, and by Kanade and Ke [19], hereafter the KK-procedure. For the SH-procedure, we iteratively computed the L∞-cost minimizer by removing, at each step j, the measurements that had an RE larger than E_{max,j} − 0.5, where E_{max,j} was the largest RE. We stopped the SH-procedure when the number of removed measurements exceeded 1,500. This number was attained after 53 cycles. Therefore, the execution time was approximately 50 times larger than for our procedure. The estimator obtained by the SH-procedure has a maximal RE equal to 1.33 pixels, whereas the maximal RE for our estimator is 0.62 pixels. Concerning the KK-procedure, we ran it with the parameter value m = N − N_O = 15,000, which is approximately the number of inliers detected by our method. Recall that the KK-procedure aims at minimizing the m-th largest RE. As shown in Fig. 2, our procedure performs better than that of [19].
7
Conclusion
In this paper, we presented a rigorous Bayesian framework for the problem of translation estimation and triangulation that has led to a new robust estimation procedure. We have formulated the problem under consideration as a nonlinear inverse problem with a high-dimensional unknown parameter vector. This parameter vector encapsulates the information on the scene points and the camera locations, as well as the information on the location of the outliers in the data. The proposed estimator exploits the sparse nature of the vector of outliers through L1-norm minimization. We have given a mathematical proof of the result demonstrating the efficiency of the proposed estimator under mild assumptions. Real data analysis conducted on the dinosaur sequence supports our theoretical results.
Acknowledgments
The work of the first author was partially supported by ANR under grants Callisto and Parcimonie.
References
[1] F. Bach. Bolasso: model consistent Lasso estimation through the bootstrap. In Twenty-fifth International Conference on Machine Learning (ICML), 2008.
[2] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Ann. Statist., 37(4):1705-1732, 2009.
[3] E. Candès and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. Ann. Statist., 35(6):2313-2351, 2007.
[4] E. J. Candès. The restricted isometry property and its implications for compressed sensing. C. R. Math. Acad. Sci. Paris, 346(9-10):589-592, 2008.
[5] E. J. Candès and P. A. Randall. Highly robust error correction by convex programming. IEEE Trans. Inform. Theory, 54(7):2829-2840, 2008.
[6] E. J. Candès, J. K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8):1207-1223, 2006.
[7] C. Chesneau and M. Hebiri. Some theoretical results on the grouped variables Lasso. Math. Methods Statist., 17(4):317-326, 2008.
[8] A. S. Dalalyan, A. Juditsky, and V. Spokoiny. A new algorithm for estimating the effective dimension-reduction subspace. Journal of Machine Learning Research, 9:1647-1678, Aug. 2008.
[9] A. S. Dalalyan and A. B. Tsybakov. Aggregation by exponential weighting, sharp PAC-Bayesian bounds and sparsity. Machine Learning, 72(1-2):39-61, 2008.
[10] D. Donoho, M. Elad, and V. Temlyakov. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inform. Theory, 52(1):6-18, 2006.
[11] D. L. Donoho and X. Huo. Uncertainty principles and ideal atomic decomposition. IEEE Trans. Inform. Theory, 47(7):2845-2862, 2001.
[12] O. Enqvist and F. Kahl. Robust optimal pose estimation. In ECCV, pages I:141-153, 2008.
[13] R. Hartley and F. Kahl. Optimal algorithms in multiview geometry. In ACCV, volume 1, pages 13-34, Nov. 2007.
[14] R. Hartley and F. Kahl. Global optimization through rotation space search. IJCV, 2009.
[15] R. I. Hartley and F. Schaffalitzky. L∞ minimization in geometric reconstruction problems. In CVPR (1), pages 504-509, 2004.
[16] R. I. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, June 2004.
[17] F. Kahl. Multiple view geometry and the L∞-norm. In ICCV, pages 1002-1009. IEEE Computer Society, 2005.
[18] F. Kahl and R. I. Hartley. Multiple-view geometry under the L∞ norm. IEEE Trans. Pattern Analysis and Machine Intelligence, 30(9):1603-1617, Sep. 2008.
[19] T. Kanade and Q. Ke. Quasiconvex optimization for robust geometric reconstruction. In ICCV, pages II:986-993, 2005.
[20] Q. Ke and T. Kanade. Uncertainty models in quasiconvex optimization for geometric reconstruction. In CVPR, pages I:1199-1205, 2006.
[21] H. D. Li. A practical algorithm for L∞ triangulation with outliers. In CVPR, pages 1-8, 2007.
[22] D. Martinec and T. Pajdla. Robust rotation and translation estimation in multiview reconstruction. In CVPR, pages 1-8, 2007.
[23] D. Nistér. An efficient solution to the five-point relative pose problem. IEEE Trans. Pattern Anal. Mach. Intell., 26(6):756-777, 2004.
[24] C. Olsson, A. P. Eriksson, and F. Kahl. Efficient optimization for L∞ problems using pseudoconvexity. In ICCV, pages 1-8, 2007.
[25] Y. D. Seo and R. I. Hartley. A fast method to minimize L∞ error norm for geometric vision problems. In ICCV, pages 1-8, 2007.
[26] Y. D. Seo, H. J. Lee, and S. W. Lee. Sparse structures in L-infinity norm minimization for structure and motion reconstruction. In ECCV, pages I:780-793, 2008.
[27] K. Sim and R. Hartley. Removing outliers using the L∞ norm. In CVPR, pages I:485-494, 2006.
[28] J. F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optim. Methods Softw., 11/12(1-4):625-653, 1999.
[29] P. Zhao and B. Yu. On model selection consistency of Lasso. J. Mach. Learn. Res., 7:2541-2563, 2006.
2,954 | 3,678 | Learning Bregman Distance Functions and Its
Application for Semi-Supervised Clustering
?
Lei Wu, Rong Jin, Steven C.H. Hoi, Jianke Zhu, and Nenghai Yu
School of Computer Engineering, Nanyang Technological University, Singapore
Department of Computer Science & Engineering, Michigan State University
Computer Vision Lab, ETH Zurich, Switzerland
University of Science and Technology of China, P.R. China
Abstract
Learning distance functions with side information plays a key role in many machine learning and data mining applications. Conventional approaches often assume a Mahalanobis distance function. These approaches are limited in two aspects: (i) they are computationally expensive (even infeasible) for high-dimensional data because the number of parameters in the metric grows quadratically with the dimensionality; (ii) they assume a fixed metric for the entire input space and are therefore unable
to handle heterogeneous data. In this paper, we propose a novel scheme that
learns nonlinear Bregman distance functions from side information using a nonparametric approach that is similar to support vector machines. The proposed
scheme avoids the assumption of fixed metric by implicitly deriving a local distance from the Hessian matrix of a convex function that is used to generate the
Bregman distance function. We also present an efficient learning algorithm for
the proposed scheme for distance function learning. The extensive experiments
with semi-supervised clustering show the proposed technique (i) outperforms the
state-of-the-art approaches for distance function learning, and (ii) is computationally efficient for high dimensional data.
1
Introduction
An effective distance function plays an important role in many machine learning and data mining
techniques. For instance, many clustering algorithms depend on distance functions for the pairwise
distance measurements; most information retrieval techniques rely on distance functions to identify
the data points that are most similar to a given query; k-nearest-neighbor classifier depends on distance functions to identify the nearest neighbors for data classification. In general, learning effective
distance functions is a fundamental problem in both data mining and machine learning.
Recently, learning distance functions from data has been actively studied in machine learning. Instead of using a predefined distance function (e.g., Euclidean distance), researchers have attempted
to learn distance functions from side information that is often provided in the form of pairwise constraints, i.e., must-link constraints for pairs of similar data points and cannot-link constraints for
pairs of dissimilar data points. Example algorithms include [16, 2, 8, 11, 7, 15].
Most distance learning methods assume a Mahalanobis distance. Given two data points x and x′, the distance between x and x′ is calculated by d(x, x′) = (x − x′)^T A (x − x′), where A is the distance metric that needs to be learned from the side information. [16] learns a global distance metric (GDM) by minimizing the distance between similar data points while keeping dissimilar data points far apart. It requires solving a Semi-Definite Programming (SDP) problem, which is computationally expensive when the dimensionality is high. Bar-Hillel et al. [2] proposed Relevant Components Analysis (RCA), which is computationally efficient and achieves results comparable to GDM.
The main drawback with RCA is that it is unable to handle the cannot-link constraints. This problem
was addressed by Discriminative Component Analysis (DCA) in [8], which learns a distance metric
by minimizing the distance between similar data points and in the meantime maximizing the distance
1
between dissimilar data points. The authors in [4] proposed an information-theoretic based metric
learning approach (ITML) that learns the Mahalanobis distance by minimizing the differential relative entropy between two multivariate Gaussians. Neighborhood Component Analysis (NCA) [5]
learns a distance metric by extending the nearest neighbor classifier. The large margin nearest neighbor (LMNN) classifier [14] extends NCA through a maximum margin framework. Yang et
al. [17] propose a Local Distance Metric (LDM) that addresses multimodal data distributions. Hoi
et al. [7] propose a semi-supervised distance metric learning approach that explores the unlabeled
data for metric learning. In addition to learning a distance metric, several studies [12, 6] are devoted
to learning a distance function, mostly non-metric, from the side information.
Despite these successes, the existing approaches for distance metric learning are limited in two aspects. First, most existing methods assume a fixed distance metric for the entire input space, which makes it difficult for them to handle heterogeneous data. This issue was already demonstrated in [17] when learning distance metrics from multi-modal data distributions. Second, the existing methods aim to learn a full matrix for the target distance metric, whose size is quadratic in the dimensionality, making it computationally unattractive for high-dimensional data. Although the computation can be reduced significantly by assuming certain forms of the distance metric (e.g., a diagonal matrix), these simplifications often lead to suboptimal solutions. To address these two limitations, we propose a novel scheme that learns Bregman distance functions from the given side information. Bregman distance, or Bregman divergence [3], has several salient properties for distance measurement. Bregman distance generalizes the class of Mahalanobis distances by deriving a distance function from a given convex function φ(x). Since the local distance metric can be derived from the local Hessian matrix of φ(x), the Bregman distance function avoids the assumption of a fixed distance metric. Recent studies [1] also reveal the connection between Bregman distances and exponential families of distributions. For example, the Kullback-Leibler divergence is a special Bregman distance obtained by choosing the negative entropy function for the convex function φ(x).
The objective of this work is to design an efficient and effective algorithm that learns a Bregman
distance function from pairwise constraints. Although Bregman distance or Bregman divergence has
been explored in [1], all these studies assume a predefined Bregman distance function. To the best of
our knowledge, this is the first work that addresses the problem of learning Bregman distances from
the pairwise constraints. We present a non-parametric framework for Bregman distance learning,
together with an efficient learning algorithm. Our empirical study with semi-supervised clustering
show that the proposed approach (i) outperforms the state-of-the-art algorithms for distance metric
learning, and (ii) is computationally efficient for high dimensional data.
The rest of the paper is organized as follows. Section 2 presents the proposed framework of learning
Bregman distance functions from the pairwise constraints, together with an efficient learning algorithm. Section 3 presents the experimental results with semi-supervised clustering by comparing
the proposed algorithms with a number of state-of-the-art algorithms for distance metric learning.
Section 5 concludes this work.
2
Learning Bregman Distance Functions
2.1
Bregman Distance Function
The Bregman distance function is defined based on a given convex function. Let φ(x) : R^d → R be a strictly convex function that is twice differentiable. Given φ(x), the Bregman distance function is defined as

    d(x1, x2) = φ(x1) − φ(x2) − (x1 − x2)^T ∇φ(x2).

For the convenience of discussion, we consider a symmetrized version of the Bregman distance function that is defined as follows:

    d(x1, x2) = (∇φ(x1) − ∇φ(x2))^T (x1 − x2).    (1)

The following proposition shows the properties of d(x1, x2).
Proposition 1. The distance function defined in (1) satisfies the following properties if φ(x) is a strictly convex function: (a) d(x1, x2) = d(x2, x1), (b) d(x1, x2) ≥ 0, (c) d(x1, x2) = 0 ⟺ x1 = x2.
Remark. To better understand the Bregman distance function, we can rewrite d(x1, x2) in (1) as

    d(x1, x2) = (x1 − x2)^T ∇²φ(x̄) (x1 − x2),

where x̄ is a point on the line segment between x1 and x2. As indicated by the above expression, the Bregman distance function can be viewed as a general Mahalanobis distance that introduces a local distance metric A = ∇²φ(x̄). Unlike the conventional Mahalanobis distance, where the metric A is a constant matrix throughout the entire space, the local distance metric A = ∇²φ(x̄) is introduced via the Hessian matrix of the convex function φ(x) and therefore depends on the location of x1 and x2.
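For concreteness, here is a minimal sketch evaluating the symmetrized Bregman distance (1) for a differentiable strictly convex φ (Python/NumPy; the choice of φ and the test points are illustrative assumptions):

    import numpy as np

    def bregman_sym(grad_phi, x1, x2):
        # Symmetrized Bregman distance (1): (grad phi(x1) - grad phi(x2))^T (x1 - x2).
        return (grad_phi(x1) - grad_phi(x2)) @ (x1 - x2)

    # Example: phi(x) = sum(exp(x)), strictly convex with grad phi(x) = exp(x) elementwise.
    grad_phi = np.exp
    x1, x2 = np.array([0.0, 1.0]), np.array([1.0, -1.0])
    d12 = bregman_sym(grad_phi, x1, x2)   # nonnegative, symmetric, zero iff x1 == x2

    # With phi(x) = x^T A x / 2 (so grad phi(x) = A x) this reduces to
    # (x1 - x2)^T A (x1 - x2), i.e., the Mahalanobis form noted in the remark above.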
Although the Bregman distance function defined in (1) does not satisfy the triangle inequality, the following proposition shows that the degree of violation can be bounded if the Hessian matrix of φ(x) is bounded.
Proposition 2. Let Ω be the closed domain for x. If there exist m, M ∈ R with M > m > 0 such that

    mI ⪯ min_{x∈Ω} ∇²φ(x) ⪯ max_{x∈Ω} ∇²φ(x) ⪯ MI,

where I is the identity matrix, then we have the following inequality:

    √d(xa, xb) ≤ √d(xa, xc) + √d(xc, xb) + (√M − √m) [d(xa, xc) d(xc, xb)]^{1/4}.    (2)

The proof of this proposition can be found in Appendix A. As indicated by Proposition 2, the degree of violation of the triangle inequality is essentially controlled by √M − √m. Given a smooth convex function with an almost constant Hessian matrix, we would expect the Bregman distance to satisfy the triangle inequality to a large degree. In the extreme case when φ(x) = x^T A x / 2 and ∇²φ(x) = A, we have a constant Hessian matrix, leading to complete satisfaction of the triangle inequality.
2.2
Problem Formulation
To learn a Bregman distance function, the key is to find an appropriate convex function φ(x) that is consistent with the given pairwise constraints. In order to learn the convex function φ(x), we take a non-parametric approach by assuming that φ(·) belongs to a Reproducing Kernel Hilbert Space H_κ. Given a kernel function κ(x, x′) : R^d × R^d → R, our goal is to search for a convex function φ(x) ∈ H_κ such that the induced Bregman distance function, denoted by d_φ(x, x′), minimizes the overall training error with respect to the given pairwise constraints.
We denote by D = {(x_i^1, x_i^2, y_i), i = 1, ..., n} the collection of pairwise constraints for training. Each pairwise constraint consists of a pair of instances x_i^1 and x_i^2, and a label y_i that is +1 if x_i^1 and x_i^2 are similar and −1 if x_i^1 and x_i^2 are dissimilar. We also introduce X = (x_1, ..., x_N) to include the input patterns of all training instances in D.
Following the maximum margin framework for classification, we cast the problem of learning a Bregman distance function from pairwise constraints into the following optimization problem:

    min_{φ∈Ω(H_κ), b∈R_+}  (1/2) |φ|²_{H_κ} + C Σ_{i=1}^n ℓ(y_i [d(x_i^1, x_i^2) − b]),    (3)

where Ω(H) = {f ∈ H : f is convex} refers to the subspace of the functional space H that only includes convex functions, ℓ(z) = max(0, 1 − z) is the hinge loss, and C is a penalty cost parameter.
The main challenge with solving the variational problem in (3) is that it is difficult to derive a representer theorem for φ(x), because it is ∇φ(x) that is used in the definition of the distance function, not φ(x). Note that although it seems convenient to regularize ∇φ(x), it would then be difficult to restrict φ(x) to be convex. To resolve this problem, we consider a special family of kernel functions κ(x, x′) of the form κ(x1, x2) = h(x1^T x2), where h : R → R is a strictly convex function. Examples of h(z) that guarantee κ(·,·) to be positive semi-definite are h(z) = |z|^d (d ≥ 1), h(z) = |z + 1|^d (d ≥ 1), and h(z) = exp(z). For the convenience of discussion, we assume h(0) = 0 throughout this paper.
First, since φ(x) ∈ H_κ, we have

    φ(x) = ∫ dy κ(x, y) q(y) = ∫ dy h(x^T y) q(y),    (4)

where q(y) is a weighting function. Given the training instances x_1, ..., x_N, we divide the space R^d into A and A⊥, defined as

    A = span{x_1, ..., x_N},    A⊥ = Null(x_1, ..., x_N).    (5)

We define H_k and H⊥ as follows:

    H_k = span{κ(x, ·), ∀x ∈ A},    H⊥ = span{κ(x, ·), ∀x ∈ A⊥}.    (6)

The following proposition summarizes an important property of the reproducing kernel Hilbert space H_κ when the kernel function κ(·,·) is restricted to the form in Eq. (2.2).
Proposition 3. If the kernel function κ(·,·) is written in the form of Equation (2.2) with h(0) = 0, then H_k and H⊥ form a complete partition of H_κ, i.e., H_κ = H_k ⊕ H⊥, and H_k ⊥ H⊥.
We therefore have the following representer theorem for the function φ(x) that minimizes (3).
Theorem 1. The function φ(x) that minimizes (3) admits the following expression:

    φ(x) ∈ H_k,  i.e.,  φ(x) = ∫_{y∈A} dy q(y) h(x^T y) = ∫ du q(u) h(x^T X u),    (7)

where u ∈ R^N and X = (x_1, ..., x_N).
The proof of the above theorem can be found in Appendix B.
2.3
Algorithm
To further derive a concrete expression for φ(x), we restrict q(y) in (7) to the special form q(y) = Σ_{i=1}^N α_i δ(y − x_i), where α_i ≥ 0, i = 1, ..., N, are non-negative combination weights. This results in φ(x) = Σ_{i=1}^N α_i h(x_i^T x), and consequently in d(xa, xb) of the form

    d(xa, xb) = Σ_{i=1}^N α_i (h′(xa^T x_i) − h′(xb^T x_i)) x_i^T (xa − xb).    (8)

By defining h(xa) = (h′(xa^T x_1), ..., h′(xa^T x_N))^T, we can express d(xa, xb) as

    d(xa, xb) = (xa − xb)^T X (α ∘ [h(xa) − h(xb)]),    (9)

where ∘ denotes the element-wise product. Notice that when h(z) = z²/2, we have d(xa, xb) expressed as

    d(xa, xb) = (xa − xb)^T X diag(α) X^T (xa − xb).    (10)

This is a Mahalanobis distance with metric A = X diag(α) X^T = Σ_{i=1}^N α_i x_i x_i^T. When h(z) = exp(z), we have h(x) = (exp(x^T x_1), ..., exp(x^T x_N))^T, and the resulting distance function is no longer stationary due to the non-linear function exp(z).
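A minimal sketch of the learned distance (9) for given coefficients α (Python/NumPy; the array layout, with X stored as d × N as in Eq. (9), and the function name are ours):

    import numpy as np

    def bregman_distance(X, alpha, hprime, xa, xb):
        # Eq. (9): d(xa, xb) = (xa - xb)^T X (alpha * [h(xa) - h(xb)]), where
        # h(x) = (h'(x^T x_1), ..., h'(x^T x_N))^T and X is d x N here.
        hx = hprime(X.T @ xa) - hprime(X.T @ xb)   # N-vector h(xa) - h(xb)
        return (xa - xb) @ (X @ (alpha * hx))

    # With hprime = np.exp (h(z) = exp(z)) the distance is non-stationary;
    # with hprime = lambda z: z (h(z) = z^2 / 2) it reduces to the Mahalanobis form (10).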
Given the assumption that q(y) = Σ_{i=1}^N α_i δ(y − x_i), problem (3) simplifies to

    min_{α∈R^N, b}  (1/2) α^T K α + C Σ_{i=1}^n ε_i    (11)
    s.t.  y_i ((x_i^1 − x_i^2)^T X (α ∘ [h(x_i^1) − h(x_i^2)]) − b) ≥ 1 − ε_i,
          ε_i ≥ 0, i = 1, ..., n;   α_k ≥ 0, k = 1, ..., N.

Note that the constraint α_k ≥ 0 is introduced to ensure that φ(x) = Σ_{k=1}^N α_k h(x^T x_k) is a convex function. By defining

    z_i = [h(x_i^1) − h(x_i^2)] ∘ [X^T (x_i^1 − x_i^2)],    (12)

we simplify the problem in (11) as follows:

    min_{α∈R^N_+, b}  L = (1/2) α^T K α + C Σ_{i=1}^n ℓ(y_i [z_i^T α − b]),    (13)

where ℓ(z) = max(0, 1 − z).
We solve the above problem by a simple subgradient descent approach. In particular, at iteration t, given the current solution α^t and b^t, we compute the gradients as

    ∇_α L = K α^t + C Σ_{i=1}^n ∂ℓ(y_i [z_i^T α^t − b^t]) y_i z_i,    ∇_b L = −C Σ_{i=1}^n ∂ℓ(y_i [z_i^T α^t − b^t]) y_i,    (14)

where ∂ℓ(z) stands for the subgradient of ℓ(z). Let S_t^+ ⊆ D denote the set of training instances on which (α^t, b^t) suffers a non-zero loss, i.e.,

    S_t^+ = {(z_i, y_i) ∈ D : y_i (z_i^T α^t − b^t) < 1}.    (15)

We can then express the subgradients of L at α^t and b^t as follows:

    ∇_α L = K α − C Σ_{(z_i,y_i)∈S_t^+} y_i z_i,    ∇_b L = C Σ_{(z_i,y_i)∈S_t^+} y_i.    (16)

The new solution, denoted by α^{t+1} and b^{t+1}, is computed as follows:

    α_k^{t+1} = Π_{[0,+∞)}(α_k^t − η_t [∇_α L]_k),    b^{t+1} = b^t − η_t ∇_b L,    (17)

where α_k^{t+1} is the k-th element of the vector α^{t+1}, Π_G(x) projects x onto the domain G, and η_t is the step size, set to η_t = C/t by following the Pegasos algorithm [10] for solving SVMs. The pseudo-code of the proposed algorithm is summarized in Algorithm 1.
Algorithm 1 Algorithm of Learning Bregman Distance Functions
INPUT:
  - data matrix: X ∈ R^{N×d}
  - pairwise constraints {(x_i^1, x_i^2, y_i), i = 1, ..., n}
  - kernel function: κ(x1, x2) = h(x1^T x2)
  - penalty cost parameter C
OUTPUT:
  - Bregman coefficients α ∈ R^N_+, b ∈ R
PROCEDURE
1: initialize Bregman coefficients: α = α_0, b = b_0
2: calculate the kernel matrix: K = [h(x_i^T x_j)]_{N×N}
3: calculate the vectors z_i: z_i = [h(x_i^1) − h(x_i^2)] ∘ [X^T (x_i^1 − x_i^2)]
4: set iteration step t = 1
5: repeat
6:   (1) update the learning rate: η = C/t, t = t + 1
7:   (2) update the subset of training instances: S_t^+ = {(z_i, y_i) ∈ D : y_i (z_i^T α − b) < 1}
8:   (3) compute the gradients w.r.t. α and b:
9:       ∇_α L = K α − C Σ_{z_i∈S_t^+} y_i z_i,    ∇_b L = C Σ_{z_i∈S_t^+} y_i
10:  (4) update the Bregman coefficients α = (α_1, ..., α_N) and the threshold b:
11:      b ← b − η ∇_b L,    α_k ← Π_{[0,+∞)}(α_k − η [∇_α L]_k), k = 1, ..., N
12: until convergence
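A compact rendering of Algorithm 1 as a sketch (Python/NumPy; the iteration budget in place of a convergence test and the function signature are our illustrative choices; h and hprime must be vectorized functions):

    import numpy as np

    def learn_bregman(X, pairs, y, h, hprime, C=1.0, T=1000):
        # X: N x d data; pairs: list of (i1, i2) row indices; y: +/-1 labels;
        # h, hprime: the kernel generator and its derivative.
        N, d = X.shape
        K = h(X @ X.T)                       # kernel matrix K = [h(x_i^T x_j)]
        G = hprime(X @ X.T)                  # G[k] = (h'(x_k^T x_1), ..., h'(x_k^T x_N))
        # z_i per Eq. (12): elementwise product of [h(x_i^1) - h(x_i^2)] and X^T (x_i^1 - x_i^2).
        Z = np.stack([(G[i1] - G[i2]) * (X @ (X[i1] - X[i2])) for (i1, i2) in pairs])
        y = np.asarray(y, dtype=float)
        alpha, b = np.zeros(N), 0.0
        for t in range(1, T + 1):
            eta = C / t                      # Pegasos-style step size, Eq. (17)
            margin = y * (Z @ alpha - b)
            S = margin < 1                   # active set S_t^+, Eq. (15)
            grad_a = K @ alpha - C * (y[S] @ Z[S])   # Eq. (16)
            grad_b = C * y[S].sum()
            alpha = np.maximum(alpha - eta * grad_a, 0.0)  # projection onto alpha >= 0
            b = b - eta * grad_b
        return alpha, b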
Computational complexity. One of the major computational costs of Algorithm 1 is the preparation of the kernel matrix K and the vectors {z_i}_{i=1}^n, which fortunately can be pre-computed. Each step of the subgradient descent algorithm has linear complexity, i.e., O(max(N, n)), which makes it reasonable even for large data sets with high dimensionality. The number of iterations needed for convergence is O(1/ε²), where ε is the target accuracy. The algorithm thus works well when we are not critical about the accuracy of the solution.
3
Experiments
We evaluate the proposed distance learning technique by semi-supervised clustering. In particular,
we first learn a distance function from the given pairwise constraints and then apply the learned
distance function to data clustering. We verify the efficacy and efficiency of the proposed technique
by comparing it with a number of state-of-the-art algorithms for distance metric learning.
3.1
Experimental Testbed and Settings
We adopt six well-known datasets from the UCI machine learning repository, and six popular text benchmark datasets (the Reuter dataset is available at http://renatocorrea.googlepages.com/textcategorizationdatasets) in our experiments. These datasets are chosen for clustering because they vary significantly in properties such as the number of clusters/classes, the number of features, and the number of instances. The diversity of datasets allows us to examine the effectiveness of the proposed learning technique more comprehensively. The details of the datasets are shown in Table 1.
dataset           #samples   #feature   #classes        dataset      #samples   #feature   #classes
breast-cancer     683        10         2               w1a          2,477      300        2
diabetes          768        8          2               w2a          3,470      300        2
ionosphere        251        34         2               w6a          17,188     300        2
liver-disorders   345        6          2               WebKB        4,291      19,687     6
sonar             208        60         2               newsgroup    7,149      47,411     11
a1a               1,605      123        2               Reuter       10,789     5,189      79

Table 1: The details of our experimental testbed
Similar to previous work [16], the pairwise constraints are created by random sampling. More specifically, we randomly sample a subset of pairs from the pool of all possible pairs (every two instances form a pair). Two instances form a must-link constraint (i.e., y_i = +1) if they share the same class label, and form a cannot-link constraint (i.e., y_i = −1) if they are assigned to different classes. To calculate the Bregman function, in this experiment we adopt the non-linear function h(x) = (exp(x^T x_1), ..., exp(x^T x_N)).
To perform data clustering, we run the k-means algorithm using the distance function learned from 500 randomly sampled positive constraints and 500 randomly sampled negative constraints. The number of clusters is simply set to the number of classes in the ground truth. The initial cluster centroids are randomly chosen from the dataset. To enable fair comparisons, all comparing algorithms start with the same set of initial centroids. We repeat each clustering experiment 20 times, and report the final results by averaging over the 20 runs.
We compare the proposed Bregman distance learning method using the k-means algorithm for semi-supervised clustering, termed Bk-means, with the following approaches: (1) standard k-means, (2) constrained k-means [13] (Ck-means), (3) Ck-means with distance learned by RCA [2], (4) Ck-means with distance learned by DCA [8], (5) Ck-means with distance learned by Xing's algorithm [16] (Xing), (6) Ck-means with information-theoretic metric learning (ITML) [4], and (7) Ck-means with a distance function learned by a boosting algorithm (DistBoost) [12].
To evaluate the clustering performance, we use standard performance metrics, including pairwise precision, pairwise recall, and pairwise F1 measures [9], all evaluated on the pairwise results. Specifically, pairwise precision is the ratio of the number of correct pairs placed in the same cluster over the total number of pairs placed in the same cluster; pairwise recall is the ratio of the number of correct pairs placed in the same cluster over the total number of pairs that truly belong to the same cluster; and pairwise F1 equals 2 × precision × recall / (precision + recall).
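These pairwise measures follow directly from the definitions; a minimal sketch (function name ours):

from itertools import combinations

def pairwise_prf(pred, truth):
    """Pairwise precision, recall and F1 from predicted and true assignments."""
    same_pred = same_true = correct = 0
    for i, j in combinations(range(len(pred)), 2):
        p = pred[i] == pred[j]
        t = truth[i] == truth[j]
        same_pred += p                    # pairs the clustering puts together
        same_true += t                    # pairs that truly belong together
        correct += p and t                # pairs it puts together correctly
    precision = correct / same_pred if same_pred else 0.0
    recall = correct / same_true if same_true else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1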
3.2 Performance Evaluation on Low-dimensional Datasets
The first set of experiments evaluates the clustering performance on six UCI datasets. Table 2 shows
the average precision, recall, and F1 measurements of all the competing algorithms given a set of
1,000 random constraints. The two highest average F1 scores on each dataset are highlighted in bold font. From the results in Table 2, we observe that the proposed Bregman distance based
k-means clustering approach (Bk-means) is either the best or the second best for almost all datasets,
indicating that the proposed algorithm is in general more effective than the other algorithms for
distance metric learning.
3.3 Performance Evaluation on High-dimensional Text Data
We evaluate the clustering performance on six text datasets. Since some of the methods are infeasible
for text clustering due to the high dimensionality, we only include the results for the methods which
are feasible for this experiment (OOT indicates that a method needs more than 10 hours, and OOM indicates that it needs more than 16GB of RAM). Table 3 summarizes the F1 performance of all feasible methods for the datasets w1a, w2a, w6a, WebKB, 20newsgroup and Reuter. Since cosine similarity is commonly used in the textual domain, we use k-means and Ck-means in both Euclidean space and cosine similarity space as baselines. The best F1 scores are marked in bold in Table 3. The
results show that the learned Bregman distance function is applicable for high dimensional data, and
it outperforms the other commonly used text clustering methods for four out of six datasets.
breast:
method      precision      recall         F1
baseline    72.85±3.77     72.52±2.30     72.73±3.42
Ck-means    98.10±2.20     81.01±0.10     85.31±1.48
ITML        97.05±2.77     88.96±0.30     91.94±2.15
Xing        93.61±0.14     84.19±0.83     88.11±0.22
RCA         85.40±0.14     94.16±0.29     90.18±2.94
DCA         94.53±0.34     93.23±0.29     93.88±0.22
DistBoost   94.76±0.24     93.83±0.31     94.29±0.29
Bk-means    99.04±0.10     98.33±0.24     98.37±0.19

diabetes:
method      precision      recall         F1
baseline    52.47±8.93     57.17±3.68     56.41±4.53
Ck-means    60.06±1.13     55.98±0.64     57.57±0.85
ITML        73.93±1.28     70.11±0.41     71.55±0.81
Xing        58.11±0.48     58.31±0.16     58.21±0.31
RCA         59.86±2.99     62.70±2.18     61.22±2.59
DCA         61.23±2.05     64.88±0.56     63.00±0.75
DistBoost   64.45±1.02     68.33±0.98     66.33±1.00
Bk-means    99.42±0.40     64.68±0.63     77.43±0.92

ionosphere:
method      precision      recall         F1
baseline    62.35±6.30     53.39±2.74     57.28±6.20
Ck-means    57.05±1.24     51.28±1.58     61.46±1.36
ITML        97.10±2.70     59.99±0.31     72.62±1.24
Xing        63.46±0.11     64.10±0.03     63.52±0.39
RCA         100.00±6.19    50.36±1.44     66.99±0.45
DCA         66.36±3.01     67.01±2.12     66.68±0.00
DistBoost   75.91±1.11     69.34±0.91     72.72±1.03
Bk-means    97.64±1.93     62.71±1.94     73.28±1.93

liver-disorders:
method      precision      recall         F1
baseline    63.92±8.60     50.50±0.40     55.67±5.96
Ck-means    62.90±8.43     50.35±1.68     55.13±1.63
ITML        93.53±3.28     55.57±0.10     68.73±1.40
Xing        95.42±2.85     49.65±0.08     65.31±1.10
RCA         59.56±18.95    52.15±1.68     54.92±5.76
DCA         70.18±4.27     50.41±0.07     58.67±1.63
DistBoost   51.60±1.43     52.88±1.31     52.23±1.37
Bk-means    96.89±4.11     50.29±2.09     66.86±3.10

sonar:
method      precision      recall         F1
baseline    52.98±2.05     50.84±1.69     51.87±1.47
Ck-means    60.44±4.53     51.71±1.17     55.32±1.37
ITML        98.68±2.46     56.31±2.28     70.46±2.35
Xing        96.99±4.53     69.81±0.05     79.83±2.70
RCA         100.00±13.69   69.81±1.33     79.83±5.85
DCA         100.00±0.64    59.75±0.30     73.11±0.57
DistBoost   76.64±0.57     74.48±0.69     75.54±0.62
Bk-means    99.20±1.62     74.24±1.23     82.52±1.44

a1a:
method      precision      recall         F1
baseline    55.81±1.01     69.99±0.91     62.10±0.99
Ck-means    69.91±0.08     80.34±0.18     77.01±0.12
ITML        99.99±0.98     70.30±0.54     81.76±0.76
Xing        57.70±1.32     70.89±1.01     63.62±1.21
RCA         76.64±0.08     66.96±0.35     69.96±0.18
DCA         57.15±1.32     71.76±1.87     63.63±1.55
DistBoost   n/a            n/a            n/a
Bk-means    99.98±0.21     77.72±0.17     86.32±0.19
Table 2: Evaluation of clustering performance (average precision, recall, and F1) on six UCI
datasets. The top two F1 scores are highlighted in bold font for each dataset.
methods          w1a           w2a           w6a           WebKB         newsgroup     Reuter
k-means(EU)      76.68±0.25    72.59±0.77    76.52±0.97    35.78±0.17    16.54±0.05    43.88±0.23
k-means(Cos)     76.87±5.61    73.47±1.35    77.16±1.27    35.18±3.41    18.87±0.14    45.42±0.73
Ck-means(EU)     87.04±1.15    97.23±1.21    76.52±1.01    70.84±2.29    19.12±0.54    56.00±0.42
Ck-means(Cos)    87.14±2.14    97.14±2.12    75.32±0.91    75.84±1.08    20.08±0.49    58.24±0.82
RCA              91.00±1.02    96.45±1.17    93.51±1.13    OOM           OOM           OOT
DCA              92.13±1.04    94.30±2.56    87.44±1.99    OOM           OOM           OOT
ITML             92.31±0.84    94.12±0.92    96.95±0.13    OOT           OOM           OOT
Bk-means         93.43±1.07    96.92±1.02    98.64±0.24    73.94±1.25    25.17±1.27    64.51±0.95

Table 3: Evaluation of clustering F1 performance on the high dimensional text data. Only applicable methods are shown. OOM indicates "out of memory", and OOT indicates "out of time".
3.4 Computational Complexity
Here, we evaluate the running time of semi-supervised clustering. For a conventional clustering
algorithm such as k-means, its computational complexity is determined by both the calculation of
distance and the clustering scheme. For a semi-supervised clustering algorithm based on distance
learning, the overall computational time includes both the time for training an appropriate distance
function and the time for clustering data points. The average running times of semi-supervised
clustering over the six UCI datasets are listed in Table 4. It is clear that the Bregman distance based
clustering has comparable efficiency with simple methods like RCA and DCA on low dimensional
data, and runs much faster than Xing, ITML, and DistBoost. On the high dimensional text data, it is
much faster than other applicable DML methods.
Algorithm         k-means   Ck-means   ITML    Xing   RCA     DCA     DistBoost   Bk-means
UCI data (sec)    0.51      0.72       7.59    8.56   0.88    0.90    13.09       1.70
Text data (min)   0.78      4.56       71.55   n/a    68.90   69.34   n/a         3.84
Table 4: Comparison of average running time over the six UCI datasets and subsets of six text
datasets (10% sampling from the datasets in Table 1).
4
Conclusions
In this paper, we propose to learn a Bregman distance function for clustering algorithms using a non-parametric approach. The proposed scheme explicitly addresses two shortcomings of existing approaches for distance function/metric learning, i.e., assuming a fixed distance metric for the entire input space and high computational cost for high dimensional data. We incorporate the Bregman distance function into the k-means clustering algorithm for semi-supervised data clustering. Experiments on semi-supervised clustering with six UCI datasets and six high dimensional text datasets have shown that the Bregman distance function outperforms other distance metric learning algorithms in F1 measure. They also verify that the proposed distance learning algorithm is computationally efficient and capable of handling high dimensional data.
Acknowledgements
This work was done when Mr. Lei Wu was an RA at Nanyang Technological University, Singapore. This work was supported in part by MOE tier-1 Grant (RG67/07), NRF IDM Grant
(NRF2008IDM-IDM-004-018), National Science Foundation (IIS-0643494), and US Navy Research Office (N00014-09-1-0663).
APPENDIX A: Proof of Proposition 2
Proof. First, let us define f as follows:
    f = (√M − √m) [d(x_a, x_c) d(x_c, x_b)]^{1/4}
The square of the right-hand side of Eq. (2) is
    (√(d(x_a, x_c)) + √(d(x_c, x_b)) + f)² = d(x_a, x_b) − π(x_a, x_b, x_c) + ρ(x_a, x_b, x_c)
where
    ρ(x_a, x_b, x_c) = f² + 2f √(d(x_a, x_c)) + 2f √(d(x_c, x_b)) + 2 √(d(x_a, x_c) d(x_c, x_b))
    π(x_a, x_b, x_c) = (∇φ(x_a) − ∇φ(x_c))^⊤ (x_c − x_b) + (∇φ(x_c) − ∇φ(x_b))^⊤ (x_a − x_c).
From the above equation, the proposition holds if and only if ρ(x_a, x_b, x_c) − π(x_a, x_b, x_c) ≥ 0. From the fact that
    ρ(x_a, x_b, x_c) − π(x_a, x_b, x_c)
      = [ (√M − √m)² + 2(√M − √m) d(x_a, x_c)^{3/4} d(x_c, x_b)^{1/4} + d(x_c, x_b)^{3/4} d(x_a, x_c)^{1/4} + 2 d(x_a, x_c) d(x_c, x_b) ] / √(d(x_a, x_c) d(x_c, x_b)),
and since √M > √m and the distance function satisfies d(·) ≥ 0, we get ρ(x_a, x_b, x_c) − π(x_a, x_b, x_c) ≥ 0.
APPENDIX B: Proof of Theorem 1
Proof. We write φ(x) = φ_k(x) + φ_⊥(x), where
    φ_k(x) ∈ H_k = { ∫_{y∈A} q(y) h(x^⊤ y) dy },    φ_⊥(x) ∈ H_⊥ = { ∫_{y∈Ā} q(y) h(x^⊤ y) dy }.
Thus, the distance function defined in (1) is expressed as
    d(x_a, x_b) = (x_a − x_b)^⊤ (∇φ_k(x_a) − ∇φ_k(x_b)) + (x_a − x_b)^⊤ (∇φ_⊥(x_a) − ∇φ_⊥(x_b))
                = ∫_{y∈A} q(y) (h′(x_a^⊤ y) − h′(x_b^⊤ y)) y^⊤ (x_a − x_b) dy + ∫_{y∈Ā} q(y) (h′(x_a^⊤ y) − h′(x_b^⊤ y)) y^⊤ (x_a − x_b) dy.
Since ‖φ(x)‖² = ‖φ_k(x)‖² + ‖φ_⊥(x)‖², the minimizer of (1) should have ‖φ_⊥(x)‖² = 0. Since
    |φ_⊥(x)| = ⟨φ_⊥(·), κ(x, ·)⟩_{H_⊥} ≤ ‖κ(x, ·)‖_{H_⊥} ‖φ_⊥‖_{H_⊥} = 0,
we have φ_⊥(x) = 0 for any x. We thus have φ(x) = φ_k(x), so that
    d(x_a, x_b) = ∫_{y∈A} q(y) (h′(x_a^⊤ y) − h′(x_b^⊤ y)) y^⊤ (x_a − x_b) dy = (x_a − x_b)^⊤ (∇φ_k(x_a) − ∇φ_k(x_b)),
which leads to the result in the theorem.
References
[1] A. Banerjee, S. Merugu, I. Dhillon, and J. Ghosh. Clustering with Bregman divergences. Journal of Machine Learning Research, pages 234-245, 2004.
[2] A. Bar-Hillel, T. Hertz, N. Shental, and D. Weinshall. Learning a Mahalanobis metric from equivalence constraints. JMLR, 6:937-965, 2005.
[3] L. Bregman. The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7:200-217, 1967.
[4] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In ICML '07, pages 209-216, Corvallis, Oregon, 2007.
[5] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighborhood component analysis. In NIPS.
[6] T. Hertz, A. B. Hillel, and D. Weinshall. Learning a kernel function for classification with small training samples. In ICML '06: Proceedings of the 23rd International Conference on Machine Learning, pages 401-408. ACM, 2006.
[7] S. C. H. Hoi, W. Liu, and S.-F. Chang. Semi-supervised distance metric learning for collaborative image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), June 2008.
[8] S. C. H. Hoi, W. Liu, M. R. Lyu, and W.-Y. Ma. Learning distance metrics with contextual constraints for image retrieval. In Proc. CVPR 2006, New York, US, June 17-22, 2006.
[9] Y. Liu, R. Jin, and A. K. Jain. BoostCluster: boosting clustering by pairwise constraints. In KDD '07, pages 450-459, San Jose, California, USA, 2007.
[10] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: primal estimated sub-gradient solver for SVM. In ICML '07: Proceedings of the 24th International Conference on Machine Learning, pages 807-814, New York, NY, USA, 2007. ACM.
[11] L. Si, R. Jin, S. C. H. Hoi, and M. R. Lyu. Collaborative image retrieval via regularized metric learning. ACM Multimedia Systems Journal, 12(1):34-44, 2006.
[12] T. Hertz, A. Bar-Hillel, and D. Weinshall. Boosting margin based distance functions for clustering. In Proceedings of the Twenty-First International Conference on Machine Learning, pages 393-400, 2004.
[13] K. Wagstaff, C. Cardie, S. Rogers, and S. Schrödl. Constrained k-means clustering with background knowledge. In ICML '01, pages 577-584, San Francisco, CA, USA, 2001. Morgan Kaufmann Publishers Inc.
[14] K. Weinberger, J. Blitzer, and L. Saul. Distance metric learning for large margin nearest neighbor classification. In NIPS 18, pages 1473-1480, 2006.
[15] L. Wu, S. C. H. Hoi, J. Zhu, R. Jin, and N. Yu. Distance metric learning from uncertain side information with application to automated photo tagging. In Proceedings of the ACM International Conference on Multimedia (MM 2009), Beijing, China, Oct. 19-24, 2009.
[16] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In NIPS 2002, 2002.
[17] L. Yang, R. Jin, R. Sukthankar, and Y. Liu. An efficient algorithm for local distance metric learning. In Proceedings of the Twenty-Second Conference on Artificial Intelligence (AAAI), 2006.
| 3678 |@word kulis:1 repository:1 version:1 seems:1 euclidian:1 initial:2 liu:4 efficacy:1 score:3 outperforms:4 existing:4 current:1 comparing:3 com:1 contextual:1 goldberger:1 si:1 must:2 written:1 partition:1 kdd:1 update:3 stationary:1 intelligence:1 xk:1 boosting:3 location:1 mathematical:1 differential:1 consists:1 introduce:1 x0:8 pairwise:20 tagging:1 ra:1 examine:1 sdp:1 multi:1 rem:1 lmnn:1 salakhutdinov:1 resolve:1 solver:1 provided:1 project:1 bounded:2 webkb:3 null:1 weinshall:3 minimizes:3 ghosh:1 finding:1 guarantee:1 pseudo:1 every:1 classifier:3 grant:2 positive:2 engineering:2 local:7 despite:1 twice:1 china:3 studied:1 equivalence:1 co:2 limited:2 nca:2 nanyang:2 definite:2 swiss:1 procedure:1 empirical:1 oot:6 significantly:1 eth:1 convenient:1 pre:1 refers:1 get:1 cannot:3 pegasos:2 unlabeled:1 convenience:2 sukthankar:1 conventional:3 demonstrated:1 maximizing:1 convex:18 disorder:2 deriving:2 regularize:1 handle:3 target:2 play:2 oom:7 programming:2 diabetes:2 element:1 expensive:2 recognition:1 mahanalobis:1 steven:1 role:2 calculate:3 eu:2 russell:1 technological:2 highest:1 complexity:4 depend:1 rewrite:1 solving:3 segment:1 efficiency:2 triangle:4 multimodal:1 w1a:3 jain:2 effective:4 shortcoming:1 query:1 artificial:1 neighborhood:2 choosing:1 h0:5 navy:1 hillel:3 shalev:1 solve:1 highlighted:2 final:1 differentiable:1 propose:5 relevant:1 uci:7 roweis:1 convergence:2 cluster:7 extending:1 derive:2 blitzer:1 liver:2 nearest:5 school:1 b0:1 eq:2 drawback:1 correct:2 enable:1 hoi:6 rogers:1 f1:16 proposition:10 rong:1 strictly:3 a1a:2 hold:1 ground:1 exp:7 lyu:2 major:1 achieves:1 adopt:2 vary:1 proc:1 applicable:3 label:2 aim:1 ck:13 pn:4 office:1 derived:1 ax:1 june:2 indicates:4 hk:7 centroid:2 baseline:4 entire:4 bt:8 issue:1 classification:4 overall:2 denoted:2 ussr:1 univeristy:1 art:4 special:3 initialize:1 constrained:2 equal:1 ng:1 sampling:2 nrf:1 yu:2 icml:4 representer:2 report:1 dml:1 simplify:1 randomly:3 divergence:4 national:1 mining:3 evaluation:4 introduces:1 violation:2 extreme:1 primal:1 devoted:1 xb:49 predefined:2 kt:3 bregman:43 capable:1 euclidean:1 divide:1 uncertain:1 instance:9 cost:4 subset:3 w2a:3 itml:8 st:5 fundamental:1 explores:1 international:4 physic:1 pool:1 together:2 concrete:1 aaai:1 leading:1 actively:1 de:1 diversity:1 summarized:1 bold:3 includes:1 coefficient:3 sec:1 oregon:1 satisfy:2 inc:1 explicitly:1 depends:2 lab:1 closed:1 start:1 xing:8 collaborative:2 square:3 ni:1 accuracy:2 merugu:1 kaufmann:1 identify:2 cardie:1 xdiag:2 researcher:1 suffers:1 definition:1 evaluates:1 proof:6 mi:1 sampled:1 dataset:6 nenghai:1 popular:1 recall:12 knowledge:2 dimensionality:5 organized:1 hilbert:2 actually:1 dca:8 supervised:12 dyh:1 modal:1 formulation:1 evaluated:1 done:1 xa:47 until:1 nonlinear:1 banerjee:1 reveal:1 indicated:2 lei:2 semisupervised:1 usa:3 verify:1 assigned:1 leibler:1 dhillon:2 mahalanobis:7 davis:1 cosine:2 theoretic:3 complete:2 reuter:4 image:3 variational:1 wise:1 novel:2 recently:1 common:1 functional:1 measurement:2 rd:5 mathematics:1 longer:1 similarity:2 base:1 multivariate:1 recent:1 belongs:1 apart:1 termed:1 certain:1 n00014:1 inequality:5 success:1 yi:21 morgan:1 fortunately:1 ldm:1 mr:1 semi:14 ii:4 full:1 jianke:1 smooth:1 faster:2 calculation:1 retrieval:4 controlled:1 heterogeneous:2 vision:2 metric:42 essentially:1 breast:2 iteration:3 kernel:10 addition:1 background:1 fine:1 addressed:1 publisher:1 rest:1 unlike:1 induced:1 gdm:2 effectiveness:1 jordan:1 yang:2 automated:1 xj:1 zi:18 
restrict:2 suboptimal:1 competing:1 expression:3 six:11 penalty:2 dyq:3 hessian:6 york:2 remark:1 clear:1 listed:1 nonparametric:1 svms:1 reduced:1 generate:1 http:1 singapore:2 notice:1 estimated:1 odl:1 write:1 shental:1 express:2 key:2 salient:1 four:1 threshold:1 subgradient:3 relaxation:1 beijing:1 run:3 jose:1 extends:1 family:2 throughout:2 almost:2 wu:3 reasonable:1 appendix:4 dy:1 summarizes:2 comparable:2 ct:1 simplification:1 constraint:24 x2:23 aspect:2 min:5 span:3 department:1 combination:1 hertz:2 making:1 restricted:1 rca:9 wagstaff:1 tier:1 computationally:7 equation:2 zurich:1 singer:1 photo:1 generalizes:1 gaussians:1 available:1 apply:1 observe:1 appropriate:2 weinberger:1 symmetrized:1 denotes:1 clustering:35 include:4 ensure:1 top:2 running:3 hinge:1 xc:34 objective:1 already:1 font:2 parametric:3 diagonal:1 gradient:4 subspace:1 distance:111 unable:2 link:5 idm:2 assuming:3 code:1 ratio:2 minimizing:3 difficult:3 mostly:1 negative:3 design:1 twenty:2 perform:1 datasets:17 benchmark:1 jin:5 descent:2 defining:2 hinton:1 schr:1 rn:4 reproducing:2 introduced:2 bk:7 pair:11 cast:1 moe:1 extensive:1 connection:1 california:1 learned:8 textual:1 testbed:2 datasets1:1 hour:1 nip:2 address:4 bar:2 pattern:2 challenge:1 max:4 including:1 memory:1 critical:1 satisfaction:1 rely:1 regularized:1 meantime:1 zhu:2 scheme:6 technology:1 x2i:12 created:1 concludes:1 text:10 acknowledgement:1 relative:1 loss:2 expect:1 limitation:1 srebro:1 foundation:1 degree:2 consistent:1 share:1 cancer:1 repeat:2 placed:4 keeping:1 supported:1 infeasible:2 side:9 understand:1 icantly:1 neighbor:5 comprehensively:1 saul:1 calculated:1 xn:8 stand:1 avoids:2 author:1 collection:1 commonly:2 corvalis:1 simplified:1 san:2 far:1 implicitly:1 kullback:1 global:1 francisco:1 discriminative:1 xi:5 fuction:1 shwartz:1 search:1 sonar:2 table:11 learn:6 ca:1 sra:1 domain:3 main:2 verifies:1 fair:1 x1:28 xu:1 distboost:6 ny:1 precision:12 sub:2 exponential:1 x1i:12 jmlr:1 weighting:1 learns:7 theorem:6 explored:1 admits:1 svm:1 ionosphere:2 unattractive:1 ih:1 margin:5 entropy:2 michigan:1 simply:1 tomboy:1 expressed:2 chang:1 truth:1 satisfies:1 minimizer:1 acm:4 ma:1 oct:1 viewed:1 identity:1 goal:1 consequently:1 marked:1 feasible:2 specifically:2 determined:1 averaging:1 total:2 multimedia:2 experimental:3 attempted:1 newsgroup:3 indicating:1 support:1 dissimilar:4 preparation:1 incorporate:1 evaluate:4 handling:1 |
2,955 | 3,679 | Toward Provably Correct Feature Selection in
Arbitrary Domains
Dimitris Margaritis
Department of Computer Science
Iowa State University
Ames, IA 50010, USA
[email protected]
Abstract
In this paper we address the problem of provably correct feature selection in arbitrary domains. An optimal solution to the problem is a Markov boundary, which
is a minimal set of features that make the probability distribution of a target variable conditionally invariant to the state of all other features in the domain. While
numerous algorithms for this problem have been proposed, their theoretical correctness and practical behavior under arbitrary probability distributions is unclear.
We address this by introducing the Markov Boundary Theorem that precisely characterizes the properties of an ideal Markov boundary, and use it to develop algorithms that learn a more general boundary that can capture complex interactions
that only appear when the values of multiple features are considered together. We
introduce two algorithms: an exact, provably correct one as well a more practical randomized anytime version, and show that they perform well on artificial as
well as benchmark and real-world data sets. Throughout the paper we make minimal assumptions that consist of only a general set of axioms that hold for every
probability distribution, which gives these algorithms universal applicability.
1
Introduction and Motivation
The problem of feature selection has a long history due to its significance in a wide range of important problems, from early ones like pattern recognition to recent ones such as text categorization, gene expression analysis and others. In such domains, using all available features may be
prohibitively expensive, unnecessarily wasteful, and may lead to poor generalization performance,
especially in the presence of irrelevant or redundant features. Thus, selecting a subset of features of
the domain for use in subsequent application of machine learning algorithms has become a standard
preprocessing step. A typical task of these algorithms is learning a classifier: Given a number of
input features and a quantity of interest, called the target variable, choose a member of a family of
classifiers that can predict the target variable's value as well as possible. Another task is understanding the domain and the quantities that interact with the target quantity.
Many algorithms have been proposed for feature selection. Unfortunately, little attention has been
paid to the issue of their behavior under a variety of application domains that can be encountered in
practice. In particular, it is known that many can fail under certain probability distributions such as
ones that contain a (near) parity function [1], which contain interactions that only appear when the
values of multiple features are considered together. There is therefore an acute need for algorithms
that are widely applicable and can be theoretically proven to work under any probability distribution.
In this paper we present two such algorithms, an exact and a more practical randomized approximate
one. We use the observation (first made in Koller and Sahami [2]) that an optimal solution to the
problem is a Markov boundary, defined to be a minimal set of features that make the probability
distribution of a target variable conditionally invariant to the state of all other features in the domain
(a more precise definition is given later in Section 3) and present a family of algorithms for learning
the Markov boundary of a target variable in arbitrary domains. We first introduce a theorem that
exactly characterizes the minimal set of features necessary for probabilistically isolating a variable,
and then relax this definition to derive a family of algorithms that learn a parameterized approximation of the ideal boundary that are provably correct under a minimal set of assumptions, including a
set of axioms that hold for any probability distribution.
In the following section we present work on feature selection, followed by notation and definitions in
Section 3. We subsequently introduce an important theorem and the aforementioned parameterized
family of algorithms in Sections 4 and 5 respectively, including a practical anytime version. We
evaluate these algorithms in Section 6 and conclude in Section 7.
2
Related Work
Numerous algorithms have been proposed for feature selection. At the highest level algorithms can
be classified as filter, wrapper, or embedded methods. Filter methods work without consulting the
classifier (if any) that will make use of their output i.e., the resulting set of selected features. They
therefore have typically wider applicability since they are not tied to any particular classifier family. In contrast, wrappers make the classifier an integral part of their operation, repeatedly invoking
it to evaluate each of a sequence of feature subsets, and selecting the subset that results in minimum estimated classification error (for that particular classifier). Finally, embedded algorithms are
classifier-learning algorithms that perform feature selection implicitly during their operation e.g.,
decision tree learners.
Early work was motivated by the problem of pattern recognition which inherently contains a large
number of features (pixels, regions, signal responses at multiple frequencies etc.). Narendra and
Fukunaga [3] first cast feature selection as a problem of maximization of an objective function over
the set of features to use, and proposed a number of search approaches including forward selection and backward elimination. Later work by machine learning researchers includes the FOCUS
algorithm of Almuallim and Dietterich [4], which is a filter method for deterministic, noise-free
domains. The RELIEF algorithm [5] instead uses a randomized selection of data points to update a
weight assigned to each feature, selecting the features whose weight exceeds a given threshold. A
large number of additional algorithms have appeared in the literature, too many to list here; good
surveys are included in Dash and Liu [6]; Guyon and Elisseeff [1]; Liu and Motoda [7]. An important concept for feature subset selection is relevance. Several notions of relevance are discussed in
a number of important papers such as Blum and Langley [8]; Kohavi and John [9]. The argument
that the problem of feature selection can be cast as the problem of Markov blanket discovery was
first made convincingly in Koller and Sahami [2], who also presented an algorithm for learning an
approximate Markov blanket using mutual information. Other algorithms include the GS algorithm
[10], originally developed for learning of the structure of a Bayesian network of a domain, and extensions to it [11] including the recent MMMB algorithm [12]. Meinshausen and Bühlmann [13]
recently proposed an optimal theoretical solution to the problem of learning the neighborhood of
a Markov network when the distribution of the domain can be assumed to be a multidimensional
Gaussian i.e., linear relations among features with Gaussian noise. This assumption implies that
the Composition axiom holds in the domain (see Pearl [14] for a definition of Composition); the
difference with our work is that we address here the problem in general domains where it may not
necessarily hold.
3
Notation and Preliminaries
In this section we present notation, fundamental definitions and axioms that will be subsequently
used in the rest of the paper. We use the terms "feature" and "variable" interchangeably, and denote variables by capital letters (X, Y etc.) and sets of variables by bold letters (S, T etc.). We
denote the set of all variables/features in the domain (the ?universe?) by U. All algorithms presented are independence-based, learning the Markov boundary of a given target variable using the
truth value of a number of conditional independence statements. The use of conditional independence for feature selection subsumes many other criteria proposed in the literature. In particular, the
use of classification accuracy of the target variable can be seen as a special case of testing for its
conditional independence with some of its predictor variables (conditional on the subset selected at
any given moment). A benefit of using conditional independence is that, while classification error
estimates depend on the classifier family used, conditional independence does not. In addition, algorithms utilizing conditional independence for feature selection are applicable to all domain types,
e.g., discrete, ordinal, continuous with non-linear or arbitrary non-degenerate associations or mixed
domains, as long as a reliable estimate of probabilistic independence is available.
We denote probabilistic independence by the symbol "⊥⊥", i.e., (X ⊥⊥ Y | Z) denotes the fact that the variables in set X are (jointly) conditionally independent from those in set Y given the values of the variables in set Z; (X ⊥̸⊥ Y | Z) denotes their conditional dependence. We assume the existence of a probabilistic independence query oracle that is available to answer any query of the form (X, Y | Z), corresponding to the question "Is the set of variables in X independent of the variables in Y given the value of the variables in Z?" (This is similar to the approach of learning from statistical queries of Kearns and Vazirani [15].) In practice however, such an oracle does not exist, but can be approximated by a statistical independence test on a data set. Many tests of independence have appeared and been studied extensively in the statistical literature over the last century; in this work we use the χ² (chi-square) test of independence [16].
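As an illustration, such a test-based oracle can be built for discrete data by applying a χ² test within each stratum of the conditioning set and summing the statistics; this stratify-and-sum construction is one common approximation, all function names are ours, and it is shown for single variables X and Y (set-valued queries can be handled by compound columns).

import pandas as pd
from scipy.stats import chi2, chi2_contingency

def independent(df, x, y, z, alpha=0.05):
    """Approximate the oracle (X indep Y | Z) on a discrete data set (a sketch).

    Sums the chi-square statistic and degrees of freedom over the strata
    defined by the conditioning columns z; the paper only requires *some*
    reliable test of independence, so this is one possible instantiation.
    """
    groups = [df] if not z else [g for _, g in df.groupby(list(z))]
    stat, dof = 0.0, 0
    for g in groups:
        table = pd.crosstab(g[x], g[y])
        if table.shape[0] < 2 or table.shape[1] < 2:
            continue  # this stratum carries no evidence either way
        s, _, d, _ = chi2_contingency(table)
        stat, dof = stat + s, dof + d
    if dof == 0:
        return True  # no evidence of dependence anywhere
    return chi2.sf(stat, dof) > alpha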
A Markov blanket of variable X is a set of variables such that, after fixing (by "knowing") the value of all of its members, the set of remaining variables in the domain, taken together as a single set-valued variable, is statistically independent of X. More precisely, we have the following definition.
Definition 1. A set of variables S ⊆ U is called a Markov blanket of variable X if and only if (X ⊥⊥ U − S − {X} | S).
Intuitively, a Markov blanket S of X captures all the information in the remaining domain variables U − S − {X} that can affect the probability distribution of X, making their value redundant as far as X is concerned (given S). The blanket therefore captures the essence of the feature selection problem for target variable X: By completely "shielding" X, a Markov blanket precludes the existence of any possible information about X that can come from variables not in the blanket, making it an ideal solution to the feature selection problem. A minimal Markov blanket is called a Markov boundary.
Definition 2. A set of variables S ⊆ U − {X} is called a Markov boundary of variable X if it is a minimal Markov blanket of X, i.e., none of its proper subsets is a Markov blanket.
Pearl [14] proved that the axioms of Symmetry, Decomposition, Weak Union, and Intersection are sufficient to guarantee a unique Markov boundary. These are shown below together with the axiom of Contraction.

    (Symmetry)        (X ⊥⊥ Y | Z) ⟹ (Y ⊥⊥ X | Z)
    (Decomposition)   (X ⊥⊥ Y ∪ W | Z) ⟹ (X ⊥⊥ Y | Z) ∧ (X ⊥⊥ W | Z)
    (Weak Union)      (X ⊥⊥ Y ∪ W | Z) ⟹ (X ⊥⊥ Y | Z ∪ W)
    (Contraction)     (X ⊥⊥ Y | Z) ∧ (X ⊥⊥ W | Y ∪ Z) ⟹ (X ⊥⊥ Y ∪ W | Z)
    (Intersection)    (X ⊥⊥ Y | Z ∪ W) ∧ (X ⊥⊥ W | Z ∪ Y) ⟹ (X ⊥⊥ Y ∪ W | Z)        (1)
The Symmetry, Decomposition, Contraction and Weak Union axioms are very general: they are
necessary axioms for the probabilistic definition of independence i.e., they hold in every probability
distribution, as their proofs are based on the axioms of probability theory. Intersection is not universal but it holds in distributions that are positive, i.e., any value combination of the domain variables
has a non-zero probability of occurring.
4
The Markov Boundary Theorem
According to Definition 2, a Markov boundary is a minimal Markov blanket. We first introduce a
theorem that provides an alternative, equivalent definition of the concept of Markov boundary that
we will relax later in the paper to produce a more general boundary definition.
Theorem 1 (Markov Boundary Theorem). Assuming that the Decomposition and Contraction
axioms hold, S ? U ? {X} is a Markov
only if
n boundary of variable X ? U if and o
? T ? U ? {X}, T ? U ? S ?? (X?? T | S ? T) .
(2)
A detailed proof cannot be included here due to space constraints but a proof sketch appears in
Appendix A. According to the above theorem, a Markov boundary S partitions the powerset of
U ? {X} into two parts: (a) set P1 that contains all subsets of U ? S, and (b) set P2 containing
the remaining subsets. All sets in P1 are conditionally independent of X, and all sets in P2 are
conditionally dependent with X.
Intuitively, the two directions of the logical equivalence relation of Eq. (2) correspond to the concept
of Markov blanket and its minimality i.e.,
n the equation
o
? T ? U ? {X}, T ? U ? S =? (X?? T | S ? T)
Algorithm 1 The abstract GS(m) (X) algorithm. Returns an m-Markov boundary of X.
 1: S ← ∅
 2: /* Growing phase. */
 3: for all Y ⊆ U − S − {X} such that 1 ≤ |Y| ≤ m do
 4:    if (X ⊥̸⊥ Y | S) then
 5:       S ← S ∪ Y
 6:       goto line 3    /* Restart loop. */
 7: /* Shrinking phase. */
 8: for all Y ∈ S do
 9:    if (X ⊥⊥ Y | S − {Y}) then
10:       S ← S − {Y}
11:       goto line 8    /* Restart loop. */
12: return S
or, equivalently, (∀ T ⊆ U − S − {X}, (X ⊥⊥ T | S)) (as T and S are disjoint) corresponds to the definition of Markov blanket, as it includes T = U − S − {X}. In the opposite direction, the contrapositive form is
    ∀ T ⊆ U − {X}: [ T ⊄ U − S ⟹ (X ⊥̸⊥ T | S − T) ].
This corresponds to the concept of minimality of the Markov boundary: It states that all sets that contain a part of S cannot be independent of X given the remainder of S. Informally, this is because if there existed some set T that contained a non-empty subset T' of S such that (X ⊥⊥ T | S − T), then one would be able to shrink S by T' (by the property of Contraction) and therefore S would not be minimal (more details in Appendix A).
A Family of Algorithms for Arbitrary Domains
Theorem 1 defines conditions that precisely characterize a Markov boundary and thus can be thought
of as an alternative definition of a boundary. By relaxing these conditions we can produce a more
general definition. In particular, an m-Markov boundary is defined as follows.
Definition 3. A set of variables S ? U ? {X} of a domain U is called an m-Markov boundary of
variable X ? U if and only if
n
o
? T ? U ? {X} such that |T| ? m, T ? U ? S ?? (X?? T | S ? T) .
We call the parameter m of an m-Markov boundary the Markov boundary margin. Intuitively, an
m-boundary S guarantees that (a) all subsets of its complement (excluding X) of size m or smaller
are independent of X given S, and (b) all sets T of size m or smaller that are not subsets of its
complement are dependent of X given the part of S that is not contained in T. This definition is a
special case of the properties of a boundary stated in Theorem 1, with each set T mentioned in the
theorem now restricted to having size m or smaller. For m = n ? 1, where n = |U |, the condition
|T| ? m is always satisfied and can be omitted; in this case the definition of an (n ? 1)-Markov
boundary results in exactly Eq. (2) of Theorem 1.
We now present an algorithm called GS(m) , shown in Algorithm 1, that provably correctly learns
an m-boundary of a target variable X. GS(m) operates in two phases, a growing and a shrinking
phase (hence the acronym). During the growing phase it examines sets of variables of size up to m,
where m is a user-specified parameter. During the shrinking phase, single variables are examined for
conditional independence and possible removal from S (examining sets in the shrinking phase is not
necessary for provably correct operation; see Appendix B). The orders of examination of the sets for possible addition and deletion from the candidate boundary are left intentionally unspecified in Algorithm 1; one can therefore view it as an abstract representative of a family of algorithms, with each member specifying one such ordering. All members of this family are m-correct, as the proof of correctness does not depend on the ordering. In practice numerous choices for the ordering exist; one possibility is to examine the sets in the growing phase in order of increasing set size and, for each such size, in order of decreasing conditional mutual information I(X; Y | S) between X and Y given S. The rationale for this heuristic choice is that (usually) tests with smaller conditioning sets tend to be more reliable, and sorting by mutual information tends to lessen the chance of adding false members of the Markov boundary. We used this implementation in all our experiments, presented later in Section 6.
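For concreteness, here is a compact Python sketch of one member of this family; it enumerates candidate sets in plain size order (the mutual-information ordering described above would slot in where the subsets are enumerated), and the oracle indep(X, Y, S) is assumed to be supplied, e.g., by a statistical test such as the one sketched in Section 3.

from itertools import combinations

def gs_m(X, universe, m, indep):
    """A GS(m) family member: grow with subsets of size <= m, then shrink."""
    S = set()
    grew = True
    while grew:  # growing phase: restart the scan after every addition
        grew = False
        rest = sorted(universe - S - {X})
        for size in range(1, m + 1):
            for Y in combinations(rest, size):
                if not indep(X, set(Y), S):
                    S |= set(Y)
                    grew = True
                    break
            if grew:
                break
    shrunk = True
    while shrunk:  # shrinking phase: single-variable removals
        shrunk = False
        for Y in sorted(S):
            if indep(X, {Y}, S - {Y}):
                S -= {Y}
                shrunk = True
                break
    return S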
Intuitively, the margin represents a trade-off between sample and computational complexity and completeness: For m = n − 1 = |U| − 1, the algorithm returns a Markov boundary in unrestricted
Algorithm 2 The RGS(m,k) (X) algorithm, a randomized anytime version of the GS(m) algorithm,
utilizing k random subsets for the growing phase.
 1: S ← ∅
 2: /* Growing phase. */
 3: repeat
 4:    Schanged ← false
 5:    Y ← a subset of U − S − {X} of size 1 ≤ |Y| ≤ m of maximum dependence out of k random subsets
 6:    if (X ⊥̸⊥ Y | S) then
 7:       S ← S ∪ Y
 8:       Schanged ← true
 9: until Schanged = false
10: /* Shrinking phase. */
11: for all Y ∈ S do
12:    if (X ⊥⊥ Y | S − {Y}) then
13:       S ← S − {Y}
14:       goto line 11    /* Restart loop. */
15: return S
(arbitrary) domains. For 1 ≤ m < n − 1, GS(m) may recover the correct boundary depending on characteristics of the domain. For example, it will recover the correct boundary in domains containing embedded parity functions such that the number of variables involved in every k-bit parity function is m + 1 or less, i.e., if k ≤ m + 1 (parity functions are corner cases in the space of probability distributions that are known to be hard to learn [17]). The proof of m-correctness of GS(m) is included in Appendix B. Note that it is based on Theorem 1 and the universal axioms of Eq. (1) only, i.e., Intersection is not needed, and thus it is widely applicable (to any domain).
A Practical Randomized Anytime Version
While GS(m) is provably correct even in difficult domains such as those that contain parity functions,
it may be impractical with a large number of features as its asymptotic complexity is O(n^m). We therefore also provide a more practical randomized version called RGS(m,k) (Randomized GS(m)), shown in Algorithm 2. The RGS(m,k) algorithm has an additional parameter k that limits its computational requirements: instead of exhaustively examining all possible subsets of U − S − {X} (as GS(m) does), it samples k subsets from the set of all possible subsets of U − S − {X}, where k is user-specified. It is therefore a randomized algorithm that becomes equivalent to GS(m) given a large enough k. Many possibilities for the method of random selection of the subsets exist; in our experiments we select a subset Y = {Y_i} (1 ≤ |Y| ≤ m) with probability proportional to Σ_{i=1}^{|Y|} 1/p(X, Y_i | S), where p(X, Y_i | S) is the p-value of the corresponding (univariate) test
between X and Yi given S, which has a low computational cost.
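A sketch of this randomized selection step; p_value(X, Y_i, S) is the assumed univariate-test interface, and the exact way the k sampled subsets compete is our own choice of detail (the sampled subsets are assumed non-empty).

import numpy as np

def sample_candidate(X, S, universe, m, k, p_value, rng):
    """Draw k random subsets of U - S - {X} (sizes 1..m) and pick one with
    probability proportional to sum_i 1 / p(X, Y_i | S), as described above."""
    rest = sorted(universe - S - {X})
    candidates, weights = [], []
    for _ in range(k):
        size = int(rng.integers(1, m + 1))
        Y = rng.choice(rest, size=min(size, len(rest)), replace=False)
        candidates.append(set(Y.tolist()))
        weights.append(sum(1.0 / max(p_value(X, y, S), 1e-12) for y in Y))
    probs = np.asarray(weights) / np.sum(weights)
    return candidates[rng.choice(len(candidates), p=probs)]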
The RGS(m,k) algorithm is useful in situations where the amount of time to produce an answer
may be limited and/or the limit unknown beforehand: it is easy to show that the growing phase of
GS(m) produces an upper bound of the m-boundary of X. Therefore, the RGS(m,k) algorithm,
if interrupted, will return an approximation of this upper bound. Moreover, if there exists time
for the shrinking phase to be executed (which conducts a number of tests linear in n and is thus
fast), extraneous variables will be removed and a minimal blanket (boundary) approximation will
be returned. These features make it an anytime algorithm, which is a more appropriate choice for
situations where critical events may occur that require the interruption of computation, e.g., during
the planning phase of a robot, which may be interrupted at any time due to an urgent external event
that requires a decision to be made based on the present state's feature values.
6
Experiments
We evaluated the GS(m) and the RGS(m,k) algorithms on synthetic as well as real-world and
benchmark data sets. We first systematically examined the performance on the task of recovering near-parity functions, which are known to be hard to learn [17]. We compared GS(m)
and RGS(m,k) with respect to accuracy of recovery of the original boundary as well as computational cost. We generated domains of sizes ranging from 10 to 100 variables, of which
4 variables (X1 to X4 ) were related through a near-parity relation with bit probability 0.60
and various degrees of noise. The remaining independent variables (X5 to Xn) act as "distractors" and had randomly assigned probabilities, i.e., the correct boundary of X1 is B1 = {X2, X3, X4}. In such domains, learning the boundary of X1 is difficult because of the large number of distractors and because each Xi ∈ B1 is independent of X1 given any proper subset of B1 − {Xi} (they only become dependent when including all of them in the conditioning set).

[Figure 2: Left: F1 measure of GS(m), RGS(m,k) and RELIEVED under increasing amounts of noise. Middle: Probabilistic isolation performance comparison between GS(3) and RELIEVED on real-world and benchmark data sets. Right: Same for GS(3) and RGS(3,1000).]
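A minimal sketch of the synthetic domain generator just described; drawing each distractor's bias uniformly at random is our assumption, since the text only says the probabilities were randomly assigned.

import numpy as np

def near_parity_domain(n_vars, n_samples, p=0.60, noise=0.10, rng=None):
    """Columns: X1 (target), X2..X4 (parity parents), X5..Xn (distractors)."""
    rng = rng or np.random.default_rng()
    parents = (rng.random((n_samples, 3)) < p).astype(int)   # X2, X3, X4
    target = parents.sum(axis=1) % 2                         # parity of the parents
    flip = rng.random(n_samples) < noise                     # label noise on X1
    target = np.where(flip, 1 - target, target)
    biases = rng.random(n_vars - 4)                          # assumed uniform biases
    distractors = (rng.random((n_samples, n_vars - 4)) < biases).astype(int)
    return np.column_stack([target, parents, distractors])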
To measure an algorithm's feature selection performance, accuracy (fraction of variables correctly included or excluded) is inappropriate as the accuracy of trivial algorithms such as returning the empty set will tend to 1 as n increases. Precision and recall are therefore more appropriate, with precision defined as the fraction of features returned that are in the correct boundary (3 features for X1), and recall as the fraction of the features present in the correct boundary that are returned by the algorithm. A convenient and frequently used measure that combines precision and recall is the F1 measure, defined as the harmonic mean of precision and recall [18]. In Fig. 1 (top) we report 95% confidence intervals for the F1 measure and execution time of GS(m) (margins m = 1 to 3) and RGS(m,k) (margins 1 to 3 and k = 1000 random subsets), using 20 data sets containing 10 to 100 variables, with the target variable X1 perturbed (inverted) by noise with 10% probability. As can be seen, RGS(m,k) and GS(m) using the same value for margin perform comparably with respect to F1, up to their 95% confidence intervals. With respect to execution time, however, RGS(m,k) exhibits much greater scalability (Fig. 1 bottom, log scale); for example, it executes in about 10 seconds on average in domains containing 100 variables, while GS(m) executes in 1,000 seconds on average for this domain size.

[Figure 1: GS(m) and RGS(m,k) performance with respect to domain size (number of variables). Top: F1 measure, reflecting accuracy. Bottom: Execution time in seconds (log scale).]
We also compared GS(m) and RGS(m,k) to RELIEF [5], a well-known algorithm for feature selection that is known to be able to recover parity functions in certain cases [5]. RELIEF learns a weight
for each variable and compares it to a threshold τ to decide on its inclusion in the set of relevant variables. As it has been reported [9] that RELIEF can exhibit large variance due to randomization that
is necessary only for very large data sets, we instead used a deterministic variant called RELIEVED
[9], whose behavior corresponds to RELIEF at the limit of infinite execution time. We calculated
the F1 measure for GS(m) , RGS(m,k) and RELIEVED in the presence of varying amounts of noise,
with noise probability ranging from 0 (no noise) to 0.4. We used domains containing 50 variables, as
GS(m) becomes computationally demanding in larger domains. In Figure 2 (left) we show the performance of GS(m) and RGS(m,k) for m equal to 1 and 3, k = 1000 and RELIEVED for thresholds
τ = 0.01 and 0.03 for various amounts of noise on the target variable. Again, each experiment was
repeated 20 times to generate 95% confidence intervals. We can observe that even though m = 1
(equivalent to the GS algorithm) performs poorly, increasing the margin m makes it more likely to
recover the correct Markov boundary, and GS(3) (m = 3) recovers the exact blanket even with few
(1,000) data points. RELIEVED does comparably to GS(3) for little noise and for a large threshold,
but appears to deteriorate for more noisy domains. As we can see, it is difficult to choose the "right" threshold for RELIEVED: a τ that performs better at low noise can become worse in noisy environments; in particular, small τ tends to include irrelevant variables while large τ tends to miss actual members.
We also evaluated GS(m), RGS(m,k), and RELIEVED on benchmark and real-world data sets from the UCI Machine Learning repository. As the true Markov boundary for these is impossible to know, we used as performance measure the probabilistic isolation, by the returned Markov boundary, of subsets outside the boundary. For each domain variable X, we measured the independence of subsets Y of size 1, 2 and 3 given the blanket S of X returned by GS(3) and RELIEVED for τ = 0.03 (as this value seemed to do better in the previous set of experiments), as measured by the average p-value of the χ² test between X and Y given S (with p-values of 0 and 1 indicating ideal dependence and independence, respectively). Due to the large number of subsets outside the boundary when the boundary is small, we limited the estimation of isolation performance to 2,000 subsets per variable. We plot the results in Figure 2 (middle and right). Each point represents a variable in the corresponding data set. Points under the diagonal indicate better probabilistic isolation performance for that variable for GS(3) compared to RELIEVED (middle plot) or to RGS(3,1000) (right plot). To obtain a statistically significant comparison, we used the non-parametric Wilcoxon paired signed-rank test, which indicated that GS(3) and RGS(3,1000) are statistically equivalent to each other, while both outperformed RELIEVED at the 99.99% significance level (p < 10⁻⁷).
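The significance comparison can be reproduced with SciPy's paired Wilcoxon signed-rank test; a minimal sketch, where a and b hold the per-variable isolation scores of the two methods (treating failure to reject as equivalence, as in the text above).

from scipy.stats import wilcoxon

def compare_methods(a, b, alpha=1e-4):
    """Paired Wilcoxon signed-rank test on per-variable isolation measures."""
    stat, p = wilcoxon(a, b)  # raises if the two score lists are identical
    return ("statistically equivalent" if p > alpha else "significantly different"), p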
7
Conclusion
In this paper we presented algorithms for the problem of feature selection in unrestricted (arbitrary
distribution) domains that may contain complex interactions that only appear when the values of
multiple features are considered together. We introduced two algorithms: an exact, provably correct one as well as a more practical randomized anytime version, and evaluated them on artificial,
benchmark and real-world data, demonstrating that they perform well, even in the presence of noise.
We also introduced the Markov Boundary Theorem that precisely characterizes the properties of a
boundary, and used it to prove m-correctness of the exact family of algorithms presented. We made
minimal assumptions that consist of only a general set of axioms that hold for every probability
distribution, giving our algorithms universal applicability.
Appendix A: Proof sketch of the Markov Boundary Theorem
Proof sketch. (⟹ direction) We need to prove that if S is a Markov boundary of X then (a) for every set T ⊆ U − S − {X}, (X ⊥⊥ T | S − T), and (b) for every set T' ⊄ U − S that does not contain X, (X ⊥̸⊥ T' | S − T'). Case (a) is immediate from the definition of the boundary and the Decomposition theorem. Case (b) can be proven by contradiction: Assuming the independence of a T' that contains a non-empty part T'1 in S and a part T'2 in U − S, we get (from Decomposition) (X ⊥⊥ T'1 | S − T'1). We can then use Contraction to show that the set S − T'1 satisfies the independence property of a Markov boundary, i.e., that (X ⊥⊥ U − (S − T'1) − {X} | S − T'1), which contradicts the assumption that S is a boundary (and thus minimal).
(⟸ direction) We need to prove that if Eq. (2) holds, then S is a minimal Markov blanket. The proof that S is a blanket is immediate. We can prove minimality by contradiction: Assume S = S1 ∪ S2 with S1 a blanket and S2 ≠ ∅, i.e., S1 is a blanket strictly smaller than S. Then (X ⊥⊥ S2 | S1) = (X ⊥⊥ S2 | S − S2). However, since S2 ⊄ U − S, from Eq. (2) we get (X ⊥̸⊥ S2 | S − S2), which is a contradiction.
Appendix B: Proof of m-Correctness of GS(m)
Let the value of the set S at the end of the growing phase be SG, its value at the end of the shrinking phase SS, and their difference S̄ = SG − SS. The following two observations are immediate.
Observation 1. For every Y ⊆ U − SG − {X} such that 1 ≤ |Y| ≤ m, (X ⊥⊥ Y | SG).
Observation 2. For every Y ∈ SS, (X ⊥̸⊥ Y | SS − {Y}).
Lemma 2. Consider variables Y1, Y2, ..., Yt for some t ≥ 1 and let Y = {Y_j}_{j=1}^t. Assuming that Contraction holds, if (X ⊥⊥ Y_i | S − {Y_j}_{j=1}^i) for all i = 1, ..., t, then (X ⊥⊥ Y | S − Y).
Proof. By induction on Y_j, j = 1, 2, ..., t, using Contraction to decrease the conditioning set S down to S − {Y_j}_{j=1}^i for all i = 1, 2, ..., t. Since Y = {Y_j}_{j=1}^t, we immediately obtain the desired relation (X ⊥⊥ Y | S − Y).
Lemma 2 can be used to show that the variables found individually independent of X during the shrinking phase are actually jointly independent of X, given the final set SS. Let S̄ = {Y1, Y2, ..., Yt} be the set of variables removed (in that order) from SG to form the final set SS, i.e., S̄ = SG − SS. Using the above lemma, the following is immediate.
Corollary 3. Assuming that the Contraction axiom holds, (X ⊥⊥ S̄ | SS).
Lemma 4. If the Contraction, Decomposition and Weak Union axioms hold, then for every set T ⊆ U − SG − {X} such that (X ⊥⊥ T | SG),
    (X ⊥⊥ T ∪ (SG − SS) | SS).        (3)
Furthermore SS is minimal, i.e., there does not exist a proper subset of SS for which Eq. (3) is true.
Proof. From Corollary 3, (X ⊥⊥ S̄ | SS). Also, by the hypothesis, (X ⊥⊥ T | SG) = (X ⊥⊥ T | SS ∪ S̄), where S̄ = SG − SS as usual. From these two relations and Contraction we obtain (X ⊥⊥ T ∪ S̄ | SS).
To prove minimality, let us assume that SS ≠ ∅ (if SS = ∅ then it is already minimal). We prove by contradiction: Assume that there exists a set S' ⊂ SS such that (X ⊥⊥ T ∪ (SG − S') | S'). Let W = SS − S' ≠ ∅. Note that W and S' are disjoint. We have that
    SS ⊆ SS ∪ S̄ ⟹ SS − S' ⊆ SS ∪ S̄ − S' ⊆ T ∪ (SS ∪ S̄ − S')
                 ⟹ W ⊆ T ∪ (SS ∪ S̄ − S') = T ∪ (SG − S')
• Since (X ⊥⊥ T ∪ (SG − S') | S') and W ⊆ T ∪ (SG − S'), from Decomposition we get (X ⊥⊥ W | S').
• From (X ⊥⊥ W | S') and Weak Union we have that for every Y ∈ W, (X ⊥⊥ Y | S' ∪ (W − {Y})).
• Since S' and W are disjoint and since Y ∈ W, Y ∉ S'. Applying the set equality A ∪ (B − C) = (A ∪ B) − (C − A) to S' ∪ (W − {Y}) we obtain S' ∪ W − ({Y} − S') = SS − {Y}.
• Therefore, ∀ Y ∈ W, (X ⊥⊥ Y | SS − {Y}).
However, at the end of the shrinking phase, all variables Y in SS (and therefore in W, as W ⊆ SS) have been evaluated for independence and found dependent (Observation 2). Thus, since W ≠ ∅, there exists at least one Y such that (X ⊥̸⊥ Y | SS − {Y}), producing a contradiction.
Theorem 5. Assuming that the Contraction, Decomposition, and Weak Union axioms hold, Algorithm 1 is m-correct with respect to X.
Proof. We use the Markov Boundary Theorem. We first prove that
    ∀ T ⊆ U − {X} such that |T| ≤ m: [ T ⊆ U − SS ⟹ (X ⊥⊥ T | SS − T) ]
or, equivalently, ∀ T ⊆ U − SS − {X} such that |T| ≤ m, (X ⊥⊥ T | SS).
Since U − SS − {X} = S̄ ∪ (U − SG − {X}), and S̄ and U − SG − {X} are disjoint, there are three kinds of sets of size m or less to consider: (i) all sets T ⊆ S̄, (ii) all sets T ⊆ U − SG − {X}, and (iii) all sets (if any) T = T' ∪ T'', T' ∩ T'' = ∅, that have a non-empty part T' ⊆ S̄ and a non-empty part T'' ⊆ U − SG − {X}.
(i) From Corollary 3, (X ⊥⊥ S̄ | SS). Therefore, from Decomposition, for any set T ⊆ S̄, (X ⊥⊥ T | SS).
(ii) By Observation 1, for every set T ⊆ U − SG − {X} such that |T| ≤ m, (X ⊥⊥ T | SG). By Lemma 4 we get (X ⊥⊥ T ∪ S̄ | SS), from which we obtain (X ⊥⊥ T | SS) by Decomposition.
(iii) Since |T| ≤ m, we have that |T''| ≤ m. Since T'' ⊆ U − SG − {X}, by Observation 1, (X ⊥⊥ T'' | SG). Therefore, by Lemma 4, (X ⊥⊥ T'' ∪ S̄ | SS). Since T' ⊆ S̄ implies T'' ∪ T' ⊆ T'' ∪ S̄, we apply Decomposition to obtain (X ⊥⊥ T'' ∪ T' | SS) = (X ⊥⊥ T | SS).
To complete the proof we need to prove that
    ∀ T ⊆ U − {X} such that |T| ≤ m: [ T ⊄ U − SS ⟹ (X ⊥̸⊥ T | SS − T) ].
Let T = T1 ∪ T2, with T1 ⊆ SS and T2 ⊆ U − SS. Since T ⊄ U − SS, T1 contains at least one variable Y ∈ SS. From Observation 2, (X ⊥̸⊥ Y | SS − {Y}). From this and (the contrapositive of) Weak Union, we get (X ⊥̸⊥ {Y} ∪ (T1 − {Y}) | SS − {Y} − (T1 − {Y})) = (X ⊥̸⊥ T1 | SS − T1). From (the contrapositive of) Decomposition we get (X ⊥̸⊥ T1 ∪ T2 | SS − T1) = (X ⊥̸⊥ T | SS − T1), which is equal to (X ⊥̸⊥ T | SS − T1 − T2) = (X ⊥̸⊥ T | SS − T) as SS and T2 are disjoint.
References
[1] Isabelle Guyon and André Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research, 3:1157-1182, 2003.
[2] Daphne Koller and Mehran Sahami. Toward optimal feature selection. In Proceedings of the Tenth International Conference on Machine Learning (ICML), pages 284-292, 1996.
[3] P. M. Narendra and K. Fukunaga. A branch and bound algorithm for feature subset selection. IEEE Transactions on Computers, C-26(9):917-922, 1977.
[4] H. Almuallim and T. G. Dietterich. Learning with many irrelevant features. In Proceedings of the National Conference of the American Association for Artificial Intelligence (AAAI), 1991.
[5] K. Kira and L. A. Rendell. The feature selection problem: Traditional methods and a new algorithm. In Proceedings of the National Conference of the American Association for Artificial Intelligence (AAAI), pages 129-134, 1992.
[6] M. Dash and H. Liu. Feature selection for classification. Intelligent Data Analysis, 1(3):131-156, 1997.
[7] Huan Liu and Hiroshi Motoda, editors. Feature Extraction, Construction and Selection: A Data Mining Perspective, volume 453 of The Springer International Series in Engineering and Computer Science. 1998.
[8] Avrim Blum and Pat Langley. Selection of relevant features and examples in machine learning. Artificial Intelligence, 97(1-2):245-271, 1997.
[9] R. Kohavi and G. H. John. Wrappers for feature subset selection. Artificial Intelligence, 97(1-2):273-324, 1997.
[10] Dimitris Margaritis and Sebastian Thrun. Bayesian network induction via local neighborhoods. In Advances in Neural Information Processing Systems 12 (NIPS), 2000.
[11] I. Tsamardinos, C. Aliferis, and A. Statnikov. Algorithms for large scale Markov blanket discovery. In Proceedings of the 16th International FLAIRS Conference, 2003.
[12] I. Tsamardinos, C. Aliferis, and A. Statnikov. Time and sample efficient discovery of Markov blankets and direct causal relations. In Proceedings of the 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 673-678, 2003.
[13] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436-1462, 2006.
[14] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. 1988.
[15] Michael Kearns and Umesh V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.
[16] A. Agresti. Categorical Data Analysis. John Wiley and Sons, 1990.
[17] M. Kearns. Efficient noise-tolerant learning from statistical queries. Journal of the ACM, 45(6):983-1006, 1998.
[18] C. J. van Rijsbergen. Information Retrieval. Butterworth-Heinemann, London, 1979.
| 3679 |@word repository:1 middle:3 version:6 motoda:2 decomposition:13 contraction:12 invoking:1 paid:1 elisseeff:2 moment:1 wrapper:3 liu:4 contains:4 series:1 selecting:3 john:3 interrupted:2 subsequent:1 partition:1 plot:3 update:1 v:3 intelligence:4 selected:2 monk:2 provides:1 consulting:1 completeness:1 ames:1 daphne:1 direct:1 become:3 prove:8 combine:1 introduce:4 deteriorate:1 theoretically:1 behavior:3 p1:2 examine:1 growing:8 planning:1 frequently:1 chi:1 decreasing:1 little:2 actual:1 inappropriate:1 increasing:3 becomes:2 notation:3 moreover:1 tic:2 kind:1 unspecified:1 developed:1 impractical:1 guarantee:2 every:11 multidimensional:1 act:1 exactly:2 prohibitively:1 classifier:8 returning:1 appear:3 producing:1 positive:1 t1:11 engineering:1 local:1 tends:1 limit:3 signed:1 studied:1 examined:2 meinshausen:2 equivalence:1 relaxing:1 specifying:1 limited:2 range:1 statistically:3 practical:7 unique:1 testing:1 yj:5 practice:3 union:7 x3:1 langley:2 axiom:15 universal:4 thought:1 convenient:1 confidence:3 get:6 cannot:2 selection:29 impossible:1 applying:1 equivalent:4 deterministic:2 yt:2 attention:1 survey:1 recovery:1 immediately:1 contradiction:5 examines:1 utilizing:2 century:1 notion:1 annals:1 target:12 construction:1 user:2 exact:5 us:1 hypothesis:1 recognition:2 expensive:1 approximated:1 bottom:1 capture:3 region:1 ordering:3 trade:1 highest:1 removed:2 balloon:2 decrease:1 mentioned:1 environment:1 complexity:2 exhaustively:1 depend:2 learner:1 completely:1 various:2 fast:1 london:1 hiroshi:1 artificial:4 query:4 neighborhood:2 outside:2 whose:2 heuristic:1 agresti:1 widely:2 larger:1 aliferis:2 relax:2 s:58 precludes:1 plausible:1 statistic:1 jointly:2 noisy:2 final:2 sequence:1 interaction:3 remainder:1 relevant:2 loop:3 uci:1 degenerate:1 poorly:1 scalability:1 empty:5 requirement:1 produce:4 categorization:1 wider:1 derive:1 develop:1 depending:1 fixing:1 measured:2 ij:2 eq:6 p2:2 recovering:1 c:1 blanket:23 implies:1 come:1 indicate:1 direction:4 correct:15 filter:3 subsequently:2 elimination:1 require:1 f1:9 generalization:1 preliminary:1 randomization:1 extension:1 strictly:1 hold:13 considered:3 credit:2 predict:1 narendra:2 early:2 omitted:1 estimation:1 outperformed:1 applicable:3 uhlmann:2 individually:1 correctness:5 butterworth:1 mit:1 gaussian:2 always:1 varying:1 probabilistically:1 corollary:3 focus:1 bernoulli:3 rank:1 contrast:1 sigkdd:1 inference:1 dependent:4 typically:1 koller:3 relation:6 provably:8 pixel:1 issue:1 aforementioned:1 classification:4 among:1 extraneous:1 special:2 mutual:3 equal:2 having:1 extraction:1 x4:2 represents:2 unnecessarily:1 icml:1 others:1 report:1 t2:5 intelligent:2 curacy:1 few:1 randomly:1 national:2 lessen:1 powerset:1 relief:5 phase:18 interest:1 screening:2 possibility:2 mining:2 evaluation:2 tj:2 beforehand:1 integral:1 necessary:4 huan:1 tree:1 conduct:1 desired:1 causal:1 isolating:1 theoretical:2 minimal:15 maximization:1 applicability:3 introducing:1 cost:2 subset:28 predictor:1 examining:2 too:1 characterize:1 reported:1 answer:2 perturbed:1 synthetic:1 fundamental:1 randomized:9 international:4 minimality:4 probabilistic:10 off:1 michael:1 together:5 again:1 aaai:2 satisfied:1 nm:1 containing:4 choose:2 worse:1 corner:1 external:1 return:5 bold:1 subsumes:1 includes:2 sec:1 nwe:1 later:4 view:1 characterizes:3 recover:4 contrapositive:3 square:1 accuracy:4 formance:1 who:1 characteristic:1 variance:1 correspond:1 weak:7 bayesian:2 comparably:2 none:1 researcher:1 executes:2 history:1 classified:1 
sebastian:1 definition:19 frequency:1 intentionally:1 involved:1 toe:2 proof:13 recovers:1 judea:1 proved:1 logical:1 recall:4 anytime:6 car:2 distractors:1 knowledge:1 actually:1 reflecting:1 appears:2 originally:1 tom:1 response:1 evaluated:4 shrink:1 though:1 furthermore:1 until:1 sketch:3 defines:1 indicated:1 usa:1 dietterich:2 contain:6 concept:4 true:6 y2:2 hence:1 assigned:2 equality:1 excluded:1 conditionally:5 x5:1 during:5 interchangeably:1 essence:1 flair:1 criterion:1 complete:1 performs:1 reasoning:1 ranging:2 harmonic:1 rendell:1 umesh:1 recently:1 conditioning:2 volume:1 discussed:1 association:3 significant:1 composition:2 isabelle:1 tac:2 inclusion:1 had:1 robot:1 acute:1 etc:3 wilcoxon:1 recent:2 perspective:1 irrelevant:3 certain:2 yi:5 inverted:1 seen:2 minimum:1 additional:2 unrestricted:2 greater:1 redundant:2 signal:1 ii:2 branch:1 multiple:4 exceeds:1 ing:1 long:2 retrieval:1 paired:1 variant:1 breast:2 mehran:1 addition:2 interval:3 kohavi:2 rest:1 sure:1 goto:3 tend:4 member:6 call:1 near:3 presence:3 ideal:4 iii:2 enough:1 concerned:1 easy:1 variety:1 independence:20 affect:1 isolation:10 lasso:1 opposite:1 knowing:1 americal:2 motivated:1 expression:1 returned:5 statnikov:2 repeatedly:1 useful:1 detailed:1 informally:1 tsamardinos:2 amount:4 extensively:1 generate:1 exist:4 andr:1 bot:1 estimated:1 disjoint:5 correctly:2 per:2 kira:1 nursery:2 discrete:1 threshold:8 blum:2 demonstrating:1 capital:1 wasteful:1 tenth:1 backward:1 graph:1 fraction:3 parameterized:2 letter:2 audiology:2 throughout:1 family:10 guyon:2 decide:1 decision:2 appendix:6 bit:2 bound:3 followed:1 dash:2 existed:1 encountered:1 g:48 oracle:2 occur:1 precisely:4 constraint:1 relieved:16 x2:1 argument:1 fukunaga:2 performing:1 department:1 according:2 combination:1 poor:1 smaller:5 iastate:1 son:1 contradicts:1 urgent:1 making:2 s1:4 chess:2 intuitively:4 invariant:2 restricted:1 taken:1 computationally:1 equation:1 fail:1 sahami:3 ordinal:1 needed:1 know:1 end:3 acronym:1 available:3 operation:3 observe:1 appropriate:2 alternative:2 existence:2 original:1 denotes:2 remaining:4 include:2 top:2 giving:1 especially:1 objective:1 question:1 quantity:3 already:1 parametric:1 dependence:3 usual:1 interruption:1 diagonal:1 unclear:1 exhibit:2 traditional:1 thrun:1 restart:3 trivial:1 toward:2 induction:2 assuming:5 rijsbergen:1 balance:2 equivalently:2 difficult:3 unfortunately:1 executed:1 statement:1 margaritis:2 stated:1 implementation:1 proper:2 unknown:1 perform:4 upper:2 observation:8 markov:52 benchmark:7 pat:1 immediate:4 situation:2 excluding:1 precise:1 y1:2 arbitrary:8 introduced:2 complement:2 cast:2 specified:2 deletion:1 pearl:3 nip:1 address:3 able:2 below:1 pattern:2 dimitris:2 usually:1 appeared:2 convincingly:1 including:5 reliable:2 ia:1 critical:1 event:2 demanding:1 examination:1 numerous:3 categorical:1 text:1 understanding:1 literature:3 discovery:4 removal:1 sg:22 asymptotic:1 tractor:1 embedded:3 rationale:1 mixed:1 proportional:1 proven:2 iowa:1 degree:1 sufficient:1 editor:1 systematically:1 cancer:2 repeat:1 parity:8 free:1 last:1 wide:1 benefit:1 van:1 boundary:59 calculated:1 xn:1 world:7 seemed:1 forward:1 made:4 preprocessing:1 far:1 transaction:1 vazirani:2 approximate:2 implicitly:1 gene:1 tolerant:1 b1:3 conclude:1 assumed:1 xi:2 search:1 continuous:1 learn:4 inherently:1 symmetry:3 interact:1 complex:2 necessarily:1 domain:41 significance:2 universe:1 motivation:1 noise:17 s2:8 repeated:1 x1:6 fig:2 representative:1 wiley:1 shrinking:9 precision:4 acf:1 
candidate:1 tied:1 learns:2 theorem:18 down:1 shielding:1 symbol:1 almuallim:2 list:1 consist:2 exists:3 schanged:3 false:3 adding:1 avrim:1 execution:5 occurring:1 margin:6 sorting:1 rg:32 intersection:4 univariate:1 likely:1 contained:2 springer:1 corresponds:3 truth:1 chance:1 satisfies:1 acm:2 conditional:11 hard:2 included:4 typical:1 infinite:1 operates:1 heinemann:1 miss:1 kearns:3 lemma:6 called:8 indicating:1 select:1 relevance:2 artifical:2 evaluate:2 |
A four neuron circuit accounts for change sensitive
inhibition in salamander retina
Jeffrey L. Teeters
Lawrence Livennore Lab
PO Box 808, L-426
Livennore CA 94550
Frank H. Eeckman
Lawrence Livennore Lab
PO Box 808, L-270
Livennore CA 94550
Frank S. Werblin
UC-Berkeley
Room 145, LSA
Berkeley CA 94720
Abstract
In salamander retina, the response of On-Off ganglion cells to a central
flash is reduced by movement in the receptive field surround. Through
computer simulation of a 2-D model which takes into account their
anatomical and physiological properties, we show that interactions
between four neuron types (two bipolar and two amacrine) may be
responsible for the generation and lateral conductance of this change
sensitive inhibition. The model shows that the four neuron circuit can
account for previously observed movement sensitive reductions in
ganglion cell sensitivity and allows visualization and prediction of the
spatio-temporal pattern of activity in change sensitive retinal cells.
1 INTRODUCTION
In the salamander retina, the response of transient (On-Off) ganglion cells to a central
flash is reduced by movement in the receptive field surround (Werblin, 1972; Werblin &
Copenhagen, 1974) as illustrated in Fig. 1. This phenomenon requires the detection of
change in the surround and the lateral transmission of this change sensitive inhibition to
the ganglion cell dendrites. Wunk & Werblin (1979) showed that all ganglion cells
receive change-sensitive inhibition, and Barnes & Werblin (1987) implicated a change-sensitive amacrine cell with widely distributed processes. The change-sensitivity of these
amacrine cells has been traced in part to a truncation of synaptic release from the bipolar
terminals that presumably drive them (Maguire et al., 1989). The transient response of
these amacrine cells, mediated by voltage gated currents (Barnes & Werblin, 1986; Eliasof
et al., 1987) also contributes to this change sensitivity.
These and other experiments suggest that interactions between four neuron types underlie
both the change detection and the lateral transmission of inhibition (Werblin et al., 1988;
Maguire et al., 1989). To test this hypothesis and make predictions that could be
compared with later experiments, we have constructed a computational model of the four
neuron circuit and incorporated it into an overall model of the retina. This model allows
us to simulate the effect of inhibition generated by the four neuron circuit on ganglion
cells.
[Figure 1: Change-Sensitive Inhibition. Stimulus: a windmill with a central test spot. Ganglion cell responses to the central flash are shown for a stationary and for a spinning windmill (1 second time scale; resting level indicated). Data is from Werblin (1972).]
2 IMPLEMENTING THE HYPOTHETICAL CIRCUIT
The proposed change-sensitive circuit (Werblin et al., 1988; Maguire et al., 1989) is
reproduced in Figure 2. This is meant to describe a very local region of the retina where
the receptive fields of the two bipolar cells are spatially overlapping. When a visual
target enters this receptive field, the bipolar cells are both depolarized. The sustained
bipolar cell activates the narrow field amacrine cell that, in turn, feeds back to the synaptic
terminal of the transient bipolar cell to truncate transmitter release after a brief (ca. 100
msec) delay. Because the signal reaching the wide field amacrine cell is truncated after
about 100 msec, the wide field amacrine cell will receive excitation when the target enters
the receptive field, but will not continue to respond in the presence of the target.
The spatial profiles of synaptic input and output for the cell types involved in the model
are summarized in Figure 3. The bipolar and narrow field amacrine cell sensitivities
extend over a region corresponding roughly to their dendritic spread. The wide field
amacrine cell appears to receive input over a local region near the cell body, but delivers
its inhibitory output over a much wider region corresponding to the full extent (ca. 500
µm) of its processes.
Figure 4 shows the electrical circuit model for each cell type and illustrates the
interactions between cells that are implemented in the model. In Figure 4, boxes contain
the circuit for each cell, and arrows between them represent synaptic interactions thought
[Figure 2: Circuitry to be analyzed. Labels include the narrow field amacrine and the output to ganglion cells.]
[Figure 3: Spatial profiles of input sensitivity and output transmission for the bipolar (input and output), the narrow field amacrine (input and output), and the wide field amacrine (input); x-axis: distance from cell center (µm), out to 500 µm on either side.]
to occur as determined through experiments in which a neurotransmitter is puffed onto
bipolar dendrites. Bipolar cells are modeled using two compartments, corresponding to
the cell body and axon terminal, as suggested in Maguire et al. (1989). Amacrine cells are
modeled using only one compartment, as in Eliasof et al. (1987).
Each compartment has a voltage (Vbs, Vbst, Vbtt, Van, Vaw). The cell bodies of the
sustained and transient bipolars are assumed to be the same. Batteries in the figure
correspond to excitatory (E+, ENa) or inhibitory reversal potentials (E-, EK, ECl).
Resistors represent ionic conductances. Circles and arrows through resistors indicate
transmitter-dependent conductances, which are controlled by the voltage of a presynaptic
or the same cell. Functions relating voltages to conductances are mostly linear with a
threshold. More details are given in Teeters et al. (1991).
[Figure 4: Details of circuitry. Labels include the neurotransmitter input and the wide field amacrine.]
3 TESTING THE COMPUTATIONAL MODEL
Computer simulation was used to tune model parameters and to test whether the single cell
properties and proposed interactions between cells shown in Figure 4 are consistent with
the responses recorded from the neurons during applications of a neurotransmitter puff.
Results are shown in Figure 5. Voltage clamp experiments electrically clamp the cell
membrane potential to a constant voltage and determine the current required to maintain
the voltage over time. Downward traces indicate that current is flowing into the cell;
upward traces indicate outward current. For simplicity, scales are not shown, but in all
cases the magnitude of the simulated response is close to that of the observed response.
The simulated and observed responses to voltage clamps of the wide field amacrine, shown in
the fourth row, differ because there is a sustained outward current observed experimentally
that is not apparent in the simulations. This shows that the model is not perfect and is
something that needs further investigation.
This difference between the model and observed response does not prevent the
hypothesized function of the circuit from being simulated. This is shown on the bottom
row where both the observed and simulated voltage responses from the wide field amacrine
are transient.
4 SIMULATING INHIBITION TO GANGLION CELLS
Figure 5 illustrates that we have, to a large degree, succeeded in combining the
characteristics of single cells into a model which can explain many of the observed
properties thought to be due to the interaction between these cells in a local region.
[Figure 5: Example puff simulations. Rows: neurotransmitter puff input; voltage clamp of the bipolar cell body; voltage clamp of the narrow field amacrine; wide field amacrine voltage clamp; voltage clamp with picrotoxin block; wide field amacrine voltage response. Columns: observed response and simulated response.]
The next step in our analysis is to investigate how this circuit influences the response of
ganglion cells. Doing this requires simulating the input to the bipolar dendrites and
simulating the ganglion cells which receive the transient inhibition generated by the wide
field amacrine. This amounts to an integrated model of an entire patch of retina, including
receptors, horizontal cells, the four neuron circuit discussed earlier, and ganglion cells.
The manner in which we accomplish this is illustrated in Figure 6.
The left side of Figure 6 shows the model elements. Receptors and horizontal cells are
modeled as low-pass filters with different time constants and different spatial inputs. The
ganglion cell model receives a transient excitatory input generated phenomenologically by
a thresholded high-pass filter from the transient bipolar. Inhibitory input to the ganglion
cell is implemented as coming from the transient wide field amacrine cells described
previously. For simplicity, voltage gated currents and spiking are not implemented in the
ganglion cell model, and only the off bipolar pathways are simulated.
The right hand side of Figure 6 illustrates how the model is implemented spatially. The
circuit for each cell type is duplicated across the retina patch in a matrix format. The
known spatial properties of each cell, such as the spatial range of transmitter sensitivity
and release, are incorporated into the model. Details are given in Teeters et al. (1991).
5 SIMULATING INHIBITION TO GANGLION CELLS
To test whether the model can account for the observed reduction in ganglion cell response
during movement in the receptive field surround, we simulated the experiment depicted in
Figure 1, namely the flashing of a central light in the presence of a stationary and of a
spinning windmill. The results are shown in Figure 7.
[Figure 6: Integrated retinal model. Left, model elements: receptor, horizontal cell, threshold high-pass filter, On-Off ganglion cells. Right: spatial implementation across the retina patch.]
Rather than displaying a single curve representing the response of a single unit over time,
Figure 7 shows the simultaneous pattern of activity in an array of neurons spatially
distributed across the retina patch at an instant in time (just after a central light spot is
turned on). The neuron responses are the transient bipolar terminal, the wide field
amacrine neurotransmitter release, and the ganglion cell voltage response. On the left
column is shown the response to a flashing spot when the windmill is stationary. On the
right is shown the response to the same flashing spot but with a spinning windmill.
When the windmill is stationary, the transient bipolar terminal responds only to the
center flash. Responses to the windmill vanes are suppressed by the narrow field
amacrine cell, causing the appearance of four regions of hyperpolarizing responses around
the center. The wide field amacrine responds to the central test flash and releases
transmitter as shown in the second row. The array of ganglion cells responds to both the
excitatory input generated by the spot at the bipolar terminals and the inhibitory input
generated by the wide field amacrines. Because the wide field inhibition has not yet taken
effect at this point in time, the ganglion cells respond well to the flashing spot.
When the windmill is spinning, as shown in the right-hand column, the transient
bipolar terminals generate a response to the leading edge of the windmill vanes. The wide
field amacrine cells receive excitatory input from the transient bipolar terminal responses
to the vane, and consequently release inhibitory neurotransmitter over a wide area as
shown in the right column. Because inhibition is being continuously generated by the
spinning windmill, the response of the ganglion cells across the retinal patch has a large
[Figure 7: Ganglion cell inhibition caused by a spinning windmill. Rows: transient bipolar terminal, wide field amacrine, ganglion cell; columns: stationary windmill and spinning windmill.]
bowl-shaped area of hyperpolarization which reduces the response of the ganglion
cells to the central test flash. This is seen by the fact that the height of depolarization in
the centrally located ganglion cells is much smaller when the windmill is spinning
than when it is stationary. This is consistent with the results found
experimentally, which are illustrated in Figure 1. Experimental data not yet attained, but
predicted by the model simulations illustrated in Figure 7, are the spatial
patterns of activity generated in the bipolar, amacrine, and ganglion cells in response to
the different stimuli.
6 SUMMARY
Using computer simulation of a neurophysiologically based model, we demonstrate that
the experimental data describing properties of four neurons in the inner retina are
compatible with the hypothesis that these neurons are involved in the detection of change
and the feedforward of change-sensitive inhibition to ganglion cells. First, we build a
computational model of the hypothesized four neuron circuit and determine that the
proposed interactions between the cells are sufficient to reproduce many of the observed
network properties in response to a puff of neurotransmitter. Next, we integrate this
model into a full retina model to simulate their influence on ganglion cell responses.
The model verifies the consistency of presently available data, and allows the formation of
predictions of neural activity that are subject to refutation or verification by new experiments.
We are currently recording the spatio-temporal response of ganglion cells to moving
stimuli so that direct comparisons to these model predictions can be made.
References
Barnes, S. and Werblin, F.S. (1986). Gated currents generate single spike activity in
amacrine cells of the tiger salamander. Proc. Natl. Acad. Sci. USA 83: 1509-1512.
Barnes, S. and Werblin, F.S. (1987). Direct excitatory and lateral inhibitory synaptic
inputs to amacrine cells in the tiger salamander retina. Brain Res. 406: 233-237.
Eliasof, S., Barnes, S. and Werblin, F.S. (1987). The interaction of ionic currents
mediating single spike activity in retinal amacrine cells of the tiger salamander. J.
Neurosci. 7: 3512-3524.
Maguire, G., Lukasiewicz, P. and Werblin, F.S. (1989). Amacrine cell interactions underlying the response to change in the tiger salamander retina. J. Neurosci. 9: 726-735.
Teeters, J.L., Eeckman, F.H. and Werblin, F.S. (1991). A computer model to visualize
change sensitive responses in the salamander retina. In M.A. Arbib and J.-P. Ewert (eds.)
Visuomotor Coordination: Amphibians, Comparisons, Models and Robots. Plenum.
Werblin, F.S. (1972). Lateral interactions at inner plexiform layer of a vertebrate retina:
antagonistic response to change. Science, 175: 1008-1010.
Werblin, F.S. and Copenhagen, D.R. (1974). Control of retinal sensitivity. III. Lateral
interactions at the inner plexiform layer. J. Gen. Physiol. 63: 88-110.
Werblin, F.S., Maguire, G., Lukasiewicz, P., Eliasof, S. and Wu, S. (1988). Neural
interactions mediating the detection of motion in the retina of the tiger salamander. Visual
Neurosci. 1: 317-329.
Wunk, D.F. and Werblin, F.S. (1979). Synaptic inputs to ganglion cells in the tiger
salamander retina. J. Gen. Physiol. 73: 265-286.
Unsupervised Detection of Regions of Interest
Using Iterative Link Analysis
Gunhee Kim
School of Computer Science
Carnegie Mellon University
[email protected]
Antonio Torralba
Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology
[email protected]
Abstract
This paper proposes a fast and scalable alternating optimization technique to detect regions of interest (ROIs) in cluttered Web images without labels. The proposed approach discovers highly probable regions of object instances by iteratively repeating the following two functions: (1) choose the exemplar set (i.e. a
small number of highly ranked reference ROIs) across the dataset and (2) refine
the ROIs of each image with respect to the exemplar set. These two subproblems
are formulated as ranking in two different similarity networks of ROI hypotheses
by link analysis. The experiments with the PASCAL 06 dataset show that our
unsupervised localization performance is better than one of state-of-the-art techniques and comparable to supervised methods. Also, we test the scalability of our
approach with five objects in Flickr dataset consisting of more than 200K images.
1 Introduction
This paper proposes an unsupervised approach to the detection of regions of interest (ROIs) from a
Web-sized dataset (Fig.1). We define the regions of interest as highly probable rectangular regions of
object instances in the images. The extraction of ROIs is extremely helpful for recognition and Web
user interfaces. For example, [3, 5] showed comparative studies in which ROI detection is useful to
learn more accurate models, which leads to nontrivial improvement of classification and localization
performance. In the recognition of indoor scenes [17], the local regions that contain objects may
have special meaning to characterize the scene description. Also, many Web applications allow a
user to attach notes on user-specified regions in a cluttered image (e.g. Flickr Notes). Our algorithm
can make this cumbersome annotation easier by suggesting the regions a user may be interested in.
Our solution to the problem of unsupervised ROI detection is inspired by alternating optimization, a widely used heuristic for settings where joint optimization over two sets of
variables is not straightforward, but optimization with respect to one while keeping the other fixed
is much easier and solvable. This approach has been successful in a wide range of areas such as
K-means, Expectation-Maximization, and Iterative Closest Point algorithms [2].
Figure 1: Detection of regions of interest (ROIs). Given a Web-sized dataset, our algorithm detects bounding
box-shaped ROIs that are statistically significant across the dataset in an unsupervised manner. The yellow
boxes are groundtruth labels, and the red and blue ones are ROIs detected by the proposed method.
The unsupervised ROI detection can be thought of as a chicken-and-egg problem between (1) finding
exemplars of objects in the dataset and (2) localizing object instances in each image. If class-representative exemplars are given, the detection of objects in images is solvable (i.e. a conventional
detection or localization problem). Conversely, if object instances are clearly annotated beforehand,
the exemplars can be easily obtained (i.e. a conventional modeling or ranking problem).
Given an image set, first we assume that each image itself is the best ROI (i.e. the most confident
object region). Then a small number of highly ranked ones among the selected ROIs are chosen as
exemplars (called hub seeking), which serve as references to refine the ROIs of each image (called
ROI refinement). We repeat these two updates until convergence. The two steps are formulated as
ranking in two different similarity networks of ROI hypotheses by link analysis. The hub seeking
corresponds to finding a central and diverse hub set in a network of the selected ROIs (i.e. inter-image level). The ROI refinement is the ranking in a bipartite graph between the hub sets and all
possible ROI hypotheses of each image (i.e. intra-image level).
Our work is closely related to topics on ROI detection [3, 5, 17, 14], unsupervised localization
[9, 24, 21, 18, 1, 12], and online image collection [13, 19, 6]. The ROI detection and unsupervised
localization share a similar goal of detecting the regions of objects in cluttered images. However,
most previous work has been successful for standard datasets with thousands of images. On the other
hand, our goal is to propose a simple and fast method that can take advantage of enormous amounts
of Web data. The main objective of online image collection is to collect relevant images from highly
noisy data queried by keywords from the Web. Its main limitation is that much of the previous work
requires additional assumptions such as a small number of seed images in the beginning [13], texts
and HTML tags associated with images [19], and user-labeled images [6]. On the other hand, no
additional meta-data are required in our approach.
Recently, link analysis techniques on visual similarity networks were successfully exploited in computer vision problems [12, 15, 11, 16]. [15] applied the random walk with restart technique to
the auto-captioning task. However, their work is a supervised method requiring annotated caption
words for the segmented regions in training images. [12] is similar to ours in that the unsupervised
classification and localization are the main objectives. However, their method suffers from a scalability issue, and thus their experiments were performed using only 600 images. [11] successfully
applied the PageRank technique to a large-scale image search, but unlike ours their approach is evaluated with quite clean images and sub-image level localization is not dealt with. Likewise, [16] also
exploited the matching graph of a large-scale image set, but the localization was not discussed.
The main advantages of our approach are summarized as follows. First, the proposed method is
extremely simple and fast, with compelling performance. Our approach shows superior results over
a state-of-the-art unsupervised localization method [18] for the PASCAL 06 dataset. We proposed
a simple heuristic for scalability to make the computation time linear with the data size without
severe performance drop. For example, the localization of 200K images took only 4.5 hours with
naive matlab implementation on a single PC equipped with Intel Xeon 2.83 GHz CPU (once image
segmentation and feature extraction were done). Second, our approach is dynamic thanks to the
evolving network representation. At every iteration, new ROI hypotheses are added and trivial ones
are removed from the network while reusing a large portion of previously computed information.
Third, unlike most previous work, our approach requires neither human annotation, meta-data, nor
initial seed images. Finally, we evaluate our approach with a challenging Flickr dataset of up to
200K images. Although some work [22] in image retrieval uses millions of images, this work has a
different goal from ours. The objective of image retrieval is to quickly index and search the nearest
images to a given query. On the other hand, our goal is to localize objects in every single image of
a dataset without supervision.
2 ROI Candidates and Description
The input to our algorithm is a set of images I = {I1 , I2 , ..., I|I| }. The first task is to define a set
of ROI hypotheses from the image set R = {R1 , R2 , ..., R|I| }. Ideally, the set of ROI hypotheses
Ra = {ra1 , ..., ram } of an image Ia enumerates all plausible bounding boxes, and at least one of
them is supposed to be a good object annotation. Fig.2 shows the procedure of ROI hypothesis
generation. Given an image, 15 segments are extracted by Normalized cuts [20]. The minimum
rectangle to enclose each segment is defined as initial ROI hypotheses. Since the over-segmentation
Figure 2: An example of ROI extraction and description. From left to right: (a) An input image. (b) 15
segments. (c) 43 ROI hypotheses. (d) Distribution of visual words. (e) Edge gradients.
is unavoidable in most cases, the combinations of the initial hypotheses are also considered. We first
compute pairwise minimum paths between the initial hypotheses using the Dijkstra algorithm. Then
the bounding boxes to enclose those minimum paths are added to the ROI hypothesis set. Finally,
a largely overlapped pair of ROIs is merged if |r_ai ∩ r_aj| / |r_ai ∪ r_aj| > 0.8. Note that the hypothesis set always
includes the image itself as the largest candidate, and the average set size is about 50.
Each ROI hypothesis is represented by two types of descriptors, which are spatial pyramids of
visual words [17] and HOG [3]. As usual, the visual words are generated by vector quantization to
randomly selected SIFT descriptors. K-means is applied to form a dictionary of 200 visual words. A
visual word is assigned to each pixel of an image by finding the nearest cluster center in the dictionary,
and then binned using a two-level spatial pyramid. The oriented gradients are computed by Canny
edge detection and Sobel mask. Then the HOG descriptor is discretized into 20 orientation bins in
the range of [0°, 180°] by following [3]. The pyramid level is up to three. The similarity measure
between a pair of ROIs is cosine similarity, which is simply calculated by dot product of two L2
normalized histograms. Here both descriptors are equally weighted.
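A minimal sketch of the two mechanical pieces of this section follows. The >0.8 overlap criterion and the dot product of L2-normalized histograms are stated above; the choice of the enclosing rectangle as the merge result and the greedy merging order are our assumptions, since the exact merge rule is not spelled out.

```python
import numpy as np

def overlap_ratio(a, b):
    # |a ∩ b| / |a ∪ b| for boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def merge_hypotheses(boxes, thresh=0.8):
    # Greedily merge any pair whose overlap ratio exceeds the threshold,
    # replacing it by the enclosing rectangle (assumed merge rule).
    boxes = [tuple(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if overlap_ratio(boxes[i], boxes[j]) > thresh:
                    a, b = boxes[i], boxes[j]
                    boxes[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes

def roi_similarity(h1, h2):
    # Cosine similarity: dot product of the two L2-normalized descriptor
    # histograms (the two descriptor types are weighted equally upstream).
    h1 = h1 / (np.linalg.norm(h1) + 1e-12)
    h2 = h2 / (np.linalg.norm(h2) + 1e-12)
    return float(h1 @ h2)
```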
3 The Algorithm
3.1 Similarity Networks and Link Analysis Techniques
All inferences in our approach are based on the link analysis of k-nearest neighbor similarity network
between ROI hypotheses. The similarity network is a weighted graph G = (V, E, W), where V is
the set of vertices that are ROI hypotheses. E and W are edge and weight sets discovered by the
similarity measure in the previous section. Each vertex is only connected to its k-nearest neighbors
with k = a · log |V| [23], where a is a constant set to 10. It results in a sparse network, which is more
advantageous in terms of computational speed and accuracy. It guarantees that the complexity of
network analysis is O(|V| log |V|) at worst. Finally, the network is row normalized so that the edge
weight from note i and j indicates the probability of a random surfer jumping from i to j. The link
analysis technique we use is PageRank [4, 10]. Given a similarity matrix G, it computes the same
length of PageRank vector p, which assigns a ranked score to each vertex of the network. Intuitively,
the PageRank scores of the network of ROI hypotheses are indices of the goodness of hypotheses.
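The construction can be sketched as follows, assuming a precomputed pairwise similarity matrix. The k = a·log|V| rule with a = 10 and the row normalization follow the text; the damping factor of 0.85 in the PageRank power iteration is a conventional choice we assume, since no value is stated here.

```python
import numpy as np

def knn_graph(sim, a=10):
    # Keep each vertex's k = a * log|V| strongest neighbors, then row-normalize
    # so each row gives the jump probabilities of a random surfer.
    sim = np.asarray(sim, dtype=float)
    n = sim.shape[0]
    k = max(1, int(a * np.log(n)))
    W = np.zeros_like(sim)
    for i in range(n):
        order = np.argsort(sim[i])[::-1]
        order = order[order != i][:k]            # drop the self-loop
        W[i, order] = sim[i, order]
    return W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)

def pagerank(W, damping=0.85, iters=100):
    # Power iteration: p <- (1 - d)/n + d * W^T p, renormalized each step.
    n = W.shape[0]
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        p = (1 - damping) / n + damping * (W.T @ p)
        p /= p.sum()
    return p
```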
3.2 Overview of the Algorithm
Algorithm 1 summarizes the proposed algorithm. The main input is the set of ROI hypotheses R
generated by the method of Section 2. The output is the set of selected ROIs S* (⊆ R). In each
image, usually one or two, and rarely more than three, of the most promising ROIs are chosen.
The basic idea of our approach is to jointly optimize the ROI selection of each image and the exemplar detection among the selected ROIs. Exemplars correspond to hubs in our network representation. We begin with the images themselves as an initial set of ROI selections S^(0) (Step 1). Even though
this initialization is quite poor, highly ranked hubs among the ROIs are likely to be much more reliable. They are detected by the function Hub seeking (Step 3). Then, the hub sets are exploited
to refine the ROIs of each image by the function ROI refinement (Steps 4-5). In turn, those refined
ROIs are likely to lead to a better hub set at the next iteration. The alternating iterations of those
two functions are expected to lead to convergence not only to the best ROI selection of each image
but also to the most representative ROIs of the data set. An example of the evolution of an ROI selection is
shown in Fig.4.(c). Although our algorithm forces the selection of at least one ROI in each image, the
PageRank vector by Hub seeking can indicate the confidence of each ROI, which can be used to
filter out wrongly selected ROIs. Conceptually, both functions share a similar ranking problem: to
select a small subset of highly ranked nodes from the input networks of ROI hypotheses. They will
be discussed in the following subsections in detail.
Figure 3: Examples of hub images. The pictures illustrate the highest-ranked images in 10,000 randomly selected
images from five objects of our Flickr dataset and all {train+val} images from two objects of the PASCAL06.
Inherently, a good initialization is essential for alternating optimization. Our key assumption is as
follows: Provided that the similarity network includes a sufficiently large number of images, the
hub images are likely to be good references. This is based on the finding of our previous work
[12]: If each visual entity votes for others that are similar to itself, this democratic voting can reveal
the dominant statistics of the image set. Although the images in a dataset are highly variable, the
more repetitive visual information may get more similarity votes, which can be easily and quickly
discovered as hubs by link analysis. Fig.3 supports this argument in our dataset. It illustrates the top-ranked images of our dataset, in which the objects are clearly shown in the center with significant
size. Obviously, they are excellent initialization candidates.
Since we deal with discrete patches from unordered natural images on the Web, it is extremely
difficult to analytically understand several important behaviors of our algorithm such as convexity,
convergence, sensitivity to initial guess, and quality of our solution. One widely used assumption
in the optimization with image patches is linearity with small incremental displacement (e.g. AAM
[8]). However, it is not the case in our problem and causes severe computation increase. These
issues may be open challenges for the optimization of large-scale image analysis.
Algorithm 1 The Algorithm
Input: ROI hypotheses R associated with image set I.
Output: The set of selected ROIs S* (⊆ R), where S* = S^(T) when converged at T.
1: S^(0) ← largest ROI hypothesis in each image.
while S^(t-1) ≠ S^(t) do
  2: Generate k-NN similarity network G^(t) of S^(t).
  3: H^(t) ← Hub seeking(G^(t)), where the hub set H^(t) ⊆ S^(t).
  for all I_a ∈ I, unless the ROI selection of I_a has not changed for several consecutive iterations, do
    4: s_a^(t) ← ROI refinement(H^(t), R_a), where s_a^(t): ROI selection of I_a; R_a: ROI hypotheses of I_a.
    5: S^(t) ← S^(t-1) ∪ s_a^(t) \ s_a^(t-1).
  end for
end while
Algorithm 2 Hub seeking function
Input: (1) Network G^(t). (2) Window size: d.
Output: (1) Hub set H^(t).
1: Compute PageRank vector p of G^(t).
for all vertices v ∈ G^(t) do
  2: Find the neighbor set of v, N_v = {u | max reachable probability from v to u > d}.
  3: Find the local maximum node of v, m(v) = arg max_u p(N_v), where u ∈ N_v.
  4: H^(t) ← v if v = m(v).
end for

Algorithm 3 ROI refinement function
Input: (1) Hub set H^(t). (2) R_a, ROI hypotheses of I_a.
Output: (1) The selected ROIs s_a^(t) (⊆ R_a).
1: Generate the k-NN self-similarity matrix W_i of R_a and the k-NN similarity matrix W_o between R_a and H^(t). Both of them are row-normalized.
2: Generate the augmented bipartite graph
   W = [ εW_i   (1-ε)W_o ]
       [ W_o^T     0     ]
3: Compute PageRank vector p of W.
4: s_a* = arg max_{r_aj} p(r_aj), where r_aj ∈ R_a.

3.3 Hub Seeking with Centrality and Diversity
The goal of this step is to detect a hub set H^(t) from S^(t) by analyzing the network G^(t). The main
criteria are centrality and diversity. In other words, the selected hub set should be not only highly
ranked but also diverse enough not to lose various aspects of an object. To meet this requirement,
we design the hub seeking inspired by Mean Shift [7]. Given feature points, the algorithm creates
a fixed-radius window at each point. Then each window iteratively moves in the direction of
the maximum increase in the local density function until it reaches a local maximum. Those local
maxima become the modes, and the data points that converge to the same maxima are clustered.
Figure 4: (a) An example of a bipartite graph between the hub set and the ROI hypotheses of an image. The
similarity between hubs and hypotheses is captured by W_o and the affinity between the hypotheses by W_i. The
hub set is sorted by PageRank values from left to right. The values of the leftmost and rightmost are 0.0081 and
0.0024, respectively. They successfully capture various aspects related to the car object. (b) The effect of the
augmented bipartite graph. The left image is with ε = 0 and the right with ε = 0.1. The ranking of hypotheses
is represented by a jet colormap from red (high) to blue (low). In the left, the weights from the red box to the blue
one are (0.052, 0.050, 0.049, 0.049, 0.049); in the right, (0.060, 0.060, 0.059, 0.059, 0.057). (c) An example of
ROI evolution. At T = 0, the selected ROI is the image itself, but it converges to the real object after T = 5.
The proposed Algorithm 2 works in the same manner. For each vertex, we define the search window
in the form of a maximum reachable probability d (Step 2). The window covers the vertices whose
maximum reachable probability is larger than d. For example, given d = 0.1, w_ij = 0.6, w_jk = 0.2,
the probability from vertex i to k is 0.6 × 0.2 = 0.12 > d. Then k is considered inside the search
window of i. For the density function, we use the PageRank vector because it is proportional to the
vertex degree if the graph is symmetric and connected [25]. In Step 3, we compute the vector m
that assigns the local maximum vertex within the window of each vertex in G^(t). If v = m(v), then v
is a local maximum, and it is added to H^(t). Additionally, we can easily perform clustering from
m. For each node, the search window keeps moving in the maximum direction indicated by m until it
reaches a local maximum. Then the nodes that converge to the same maxima can be clustered.
3.4 ROI Refinement
Formally, this step defines a nonparametric function for each image, f_a : R_a → ℝ+ (positive real
numbers), with respect to the hub set H^(t). Then the hypothesis with the maximum ranked value is chosen
as the best ROI. In order to solve this problem, we first construct an augmented bipartite graph W
between the hub set H^(t) and all possible ROIs R_a, as shown in Step 2 of Algorithm 3 (see Fig.4(a)).
For better understanding, let us first consider a pure bipartite graph with ε = 0. Then the matrix W
represents the similarity voting between the ROI candidates and the hub set. If the PageRank vector
p of W is computed, then p(R_a) summarizes the relative importance of each ROI hypothesis with
respect to H^(t), which is exactly what we require. Rather than a pure bipartite graph (ε = 0),
we augment it by a nonzero ε. Fig.4.(b) explains the effect of ε. The left image shows the result of
ε = 0. Even though the red hypothesis is the maximum, several hypotheses near the dark gray car
have significant values. With a nonzero ε = 0.1, those hypotheses are allowed to augment each other,
so the maximum ROI is changed to a hypothesis on the car. In terms of link analysis, if a random
surfer visits the nodes of ROI hypotheses (R_a), it jumps to other hypotheses with probability ε or to
hubs with probability 1 - ε. Since nearby hypotheses share large portions of their rectangles, they have higher
similarity, which results in more votes for nearby hypotheses.
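The refinement step can be sketched as below. The block layout of W follows Step 2 of Algorithm 3, and ε = 0.1 follows the example in Fig.4(b); the row re-normalization and the damping factor 0.85 are our assumptions about details left implicit.

```python
import numpy as np

def roi_refinement(W_i, W_o, eps=0.1, damping=0.85, iters=100):
    # Rank one image's ROI hypotheses against the hub set.
    # W_i: row-normalized m x m self-similarity among the hypotheses (float array).
    # W_o: row-normalized m x h similarity from hypotheses to hubs (float array).
    m, h = W_o.shape
    top = np.hstack([eps * W_i, (1 - eps) * W_o])
    bottom = np.hstack([W_o.T, np.zeros((h, h))])
    W = np.vstack([top, bottom])
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-12)  # keep rows stochastic
    n = m + h
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        p = (1 - damping) / n + damping * (W.T @ p)
        p /= p.sum()
    return p[:m]   # PageRank mass on the hypotheses; the argmax is the ROI
```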
3.5 Scalability Setting
The bottleneck of our approach is Step 2 of Algorithm 1: the network generation requires
quadratic computation of the cosine similarity over S^(t). In order to bound the computational complexity,
we limit the maximum number of images to be considered in each run of Algorithm 1 by a constant
number N. N should be small enough not to incur excessive computational burden. Simultaneously, it
should be large enough to successfully detect the meaningful statistics from an extremely variable
dataset. (In experiments, N is set to 10,000.) If the dataset size |I| > N, we randomly sample N
images from I and construct an initial consideration set I_c ⊂ I. Algorithm 1 is applied to the image
set I_c to obtain S_c*. Then we generate a new I_c by sampling unvisited images from I. In order to
reuse the result of S_c* for the next iteration, we sample x% of N from the previous S_c* based on the
PageRank values of the network G* of S_c*. In other words, the highly ranked (i.e. highly confident)
ROIs in the previous step are reused to expedite the convergence of the next iteration. We iterate the
above strategy until all images are examined. This simple heuristic allows our technique to analyze
an extremely large dataset in linear time without a significant performance drop.
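The heuristic amounts to the loop below, sketched with hypothetical glue code: run_algorithm1 is a stand-in for one pass of Algorithm 1 that returns per-image ROI selections together with PageRank scores, and the carry-over fraction x is left unspecified in the text, so the default here is arbitrary.

```python
import random

def localize_large_dataset(images, run_algorithm1, N=10000, carry_frac=0.2):
    # Chunked alternating optimization: at most N items per pass; highly
    # ranked ROIs from the previous chunk seed the next one (carry_frac < 1).
    unvisited = list(images)
    random.shuffle(unvisited)
    results, carried = {}, []
    while unvisited:
        fresh = N - len(carried)
        chunk = list(carried) + unvisited[:fresh]
        unvisited = unvisited[fresh:]
        selected, scores = run_algorithm1(chunk)   # hypothetical interface
        results.update(selected)                   # per-image best ROIs
        ranked = sorted(scores, key=scores.get, reverse=True)
        carried = ranked[: int(carry_frac * N)]    # reuse confident ROIs
    return results
```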
4 Results
We evaluate our approach with two different experiments, (1) performance tests with PASCAL
VOC 2006¹ and (2) scalability tests with Flickr images. The PASCAL dataset provides groundtruth
labels, so our approach is quantitatively evaluated and compared with other approaches. Using
Flickr dataset, we examine the scalability of our method in a real-world problem. The images
are collected by a query that consists of one object word and one context word. We downloaded
images of the objects {butterfly+insect(69,990), classic+car(265,731), motorcycle+bike(106,590),
sunflower(165,235), giraffe+zoo(53,620)}. The numbers in parentheses are dataset sizes.
4.1 Performance Tests
The input of our algorithm consists of unlabeled images, which may include a single object (called as
weakly supervised) or multiple objects (called unsupervised). For unsupervised cases, we perform
not only localization but also classification according to object types. The PASCAL 06 dataset is so
challenging to use that only very rare previous work has used it for unsupervised localization. For
comparison, we ran publicly available code of one of the state-of-the-art techniques proposed by
Russell et al.² [18] in an identical setting.
The PASCAL dataset consists of {train+val+test}. However, our approach requires only images
as an input, and thus all of the {train+val+test} images are used without discrimination between
them. Note that our task is image annotation, not a learning problem that requires training and
testing steps. The performance is measured by following the protocol of PASCAL evaluation: (1)
The performance is evaluated from only the {test} set. In practice, there is very little performance
difference between analysis of all {train+val+test} and {test} only. (2) The detection is considered
correct if the overlap between the prediction and ground truth exceeds 50%.
Weakly supervised localization. Fig.5 shows the detection performance as Precision-Recall (PR)
curves. For [18], we iterate experiments by changing the number of topics from two to six, and the
best results are reported. For clear comparison between our results and [18], we select only the best
bounding box in each image. We also present the best result of each object in VOC06 competition.
Strictly speaking, it is not a valid comparison because the experimental setups of VOC06 competition and ours are totally different. However, we illustrate them as references to show how closely
our approach can reach the best supervised methods in VOC 06 for the localization. Although the
performance varies according to objects, our approach significantly outperformed [18] except in
cow. Promisingly, the performances of our approach for bicycle and motorbike are comparable, and
those for bus, cat, and dog objects are superior to the bests of the supervised methods in VOC06.
Unsupervised classification and localization. Here we evaluate how well our approach works
for unsupervised classification and localization tasks (i.e. images of multiple objects without any
annotation are given). Since both our method and [18] aim at sub-image level classification and
detection, we first find out the most confident region of each image, and run the clustering by LDA
in [18] and spectral clustering [20] in our method. The evaluation of classification follows the rule
of VOC06 by the ROC curves as shown in Fig.6. We also show the best of the VOC06 submissions
for supervised classification as a reference. As shown in Fig.6.(a)?(c), our method and [18] present
similar ROC performance. In other words, both methods are quite good at ranking for classification.
However, the classification rates of our method are better by about 10% for both 3-object and 4object cases. (Ours: 69.08%; [18]: 59.05% for {bicycle, car, dog}. Ours: 59.51%; [18]: 50.99%
¹ The dataset is available at http://www.pascal-network.org/challenges/VOC/
² The code is available at http://www.di.ens.fr/~russell/projects/mult_seg_discovery/index.html
Figure 5: Results of weakly supervised localization. PR curves for the {test} sets of all objects in the PASCAL
06 dataset. (Ours: blue; [18]: red; the best of VOC06: green). Note that our localization and that of [18] are
unsupervised, but the VOC06 localization is supervised. (X-axis: recall; Y-axis: precision).
Figure 6: Results of unsupervised classification and localization. (a)-(c) ROC curves for the {test} set of
{bicycle, car, dog}. (Ours: blue; [18]: red; the best of VOC06: green). The AUCs of ours, [18], and
the best of VOC06 are bicycle:(0.892, 0.869, 0.948), car:(0.968, 0.965, 0.977), and dog:(0.932, 0.954, 0.876),
respectively. (X-axis: false positive rates, Y-axis: true positive rates). (d)-(f) PR curves for unsupervised
localization of ours (blue) and [18] (magenta). For comparison, we also represent the results of our weakly
supervised localization (red) and the best of VOC 06 (green). (X-axis: recall, Y-axis: precision).
for {bicycle, car, dog, sheep}.) We also show the unsupervised localization performance as PR curves in Fig.6.(d)-(f). For comparison, we also represent the results of our weakly
supervised experiments and the best results of VOC 06 for the corresponding objects. The nontrivial performance drop is
observed due to the classification errors and distraction by other objects in the dataset.
4.2 Scalability Tests
It is an open question how to evaluate the results of a large number of Web-downloaded images that
have no ground-truth. For a quantitative evaluation, we manually annotated 0.5% randomly selected
images of datasets, and they are used as limited but approximate indices of performance measures.
According to the data sizes used in experiments, we randomly pick x% from the annotated set and
(100 ? x)% from the non-annotated set. The x is {20, 10, 5, 1, 0.5, 0.5} for the dataset size of
{500, 5K, 10K, 50K, 100K, 200K}.
Weakly supervised localization. One interesting question we address here is how performances
and computation times vary as a function of data sizes. The experiments are repeated ten times
for each dataset size, and the median (i.e. fifth-best) performance scores are reported. Similarly
to previous tests, we select only the best ROI per image. As shown in Fig.7, the performances of
500 images fluctuate, but the results of the dataset size above 5K are stable. As the dataset size
increases, a small performance improvement is observed. Since the maximum number of images at
each run of the algorithm is bounded by N (= 10,000), the computation times are linear in the
number of images, and the performances for dataset sizes above N are similar to each other.
Perturbation tests. Here we test the goodness of selected ROIs from a different view: robustness of
ROI detection against random network formation. For example, given an image Ia , we can generate
100 sets of 200 randomly selected images that include Ia . If the ROI selection for Ia is repetitive
across 100 different sets, we can say the ROI estimator for Ia is confident. This procedure is similar
to bootstrapping or cross-validation.
Figure 7: Weakly supervised localization. (a) PR curves for five objects of our Flickr dataset by varying
dataset sizes from 500 to 200K. (b) The log-log plot between the number of images and computation times for
the car object. The slope of each range is {1.23, 2.05, 0.95, 1.05, 1.28} from left to right.
Figure 8: Examples of perturbation tests. The histograms summarize how many times each ROI is selected
in 100 random sets. The frequencies of particular ROIs are represented by the thickness of bounding boxes
and the jet colormap from red (high) to blue (low). From left to right, the entropies of the distributions are
{0.2419, 1.6846, 2.4331}, respectively. (X-axis: ROI hypotheses; Y-axis: Frequency).
Fig.8 shows some examples of the perturbation tests. The histogram indicates how many times each
ROI hypothesis is selected among 100 random sets. From left to right, one can see the increase of
the difficulty of ROI detection. A peak is observed in an obvious image, but the distribution is wider
in a challenging image. The entropy of the distribution can be an index of the measure of difficulty
or the confidence of the estimator of the image.
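For instance, the confidence index can be computed directly from the selection histogram of the perturbation test; whether the entropies reported in Figure 8 use natural or base-2 logarithms is not stated, so the natural logarithm below is an assumption.

```python
import numpy as np

def selection_entropy(counts):
    # Entropy of the ROI-selection histogram over the 100 random sets;
    # a single sharp peak (an easy image) gives low entropy.
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

print(round(selection_entropy([96, 2, 1, 1]), 4))   # peaked -> low entropy
```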
More localization examples. Fig.9 shows more examples of localization in our approach. The third
row illustrates some typical examples of failure. Frequently co-occurring objects can be detected
instead, such as flowers in butterfly images, insects on sunflowers, other animals in the zoo, and
persons everywhere. Also, sometimes small multiple instances are detected by one ROI or a part of
an object is discovered (e.g. a giraffe face rather than the whole body).
5 Discussion
We proposed an alternating optimization approach for scalable unsupervised ROI detection by analyzing the statistics of similarity links between ROI hypotheses. Both tests with PASCAL 06 and
Flickr datasets showed that our approach is not only comparable to other unsupervised and supervised techniques but also applicable to real images on the Web.
Acknowledgement. Funding for this research was provided by NSF Career award (IIS 0747120).
Figure 9: More examples of object localization. The first and second rows represent successful detection, and
the third row illustrates some typical failures. The yellow boxes are groundtruth labels, and the red and blue
ones are ROIs detected by the proposed method.
References
[1] N. Ahuja and S. Todorovic. Learning the taxonomy and models of categories present in arbitrary images.
In ICCV, 2007.
[2] P. J. Besl and N. D. McKay. A method for registration of 3-d shapes. IEEE Trans. on Pattern Analysis
and Machine Intelligence, 14(2):239-256, 1992.
[3] A. Bosch, A. Zisserman, and X. Munoz. Image classification using random forests and ferns. In ICCV,
2007.
[4] S. Brin and L. Page. The anatomy of a large-scale hypertextual web search engine. In WWW, 1998.
[5] O. Chum and A. Zisserman. An exemplar model for learning object classes. In CVPR, 2007.
[6] B. Collins, J. Deng, K. Li, and L. Fei-Fei. Towards scalable dataset construction: An active learning
approach. In ECCV, 2008.
[7] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Trans.
on Pattern Analysis and Machine Intelligence, 24(5):603-619, 2002.
[8] T. F. Cootes, G. J. Edwards, and C. J. Taylor. Active appearance models. IEEE Trans. on Pattern Analysis
and Machine Intelligence, 23(6):681-685, 2001.
[9] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from Google's image
search. In ICCV, pages 1816-1823, Oct. 2005.
[10] G. Jeh and J. Widom. Scaling personalized web search. In WWW, 2003.
[11] Y. Jing and S. Baluja. VisualRank, PageRank for Google image search. IEEE Trans. on Pattern Analysis
and Machine Intelligence, 30(11):1-31, 2008.
[12] G. Kim, C. Faloutsos, and M. Hebert. Unsupervised modeling of object categories using link analysis
techniques. In CVPR, 2008.
[13] L.-J. Li, G. Wang, and L. Fei-Fei. Optimol: automatic object picture collection via incremental model
learning. In CVPR, 2007.
[14] T. Liu, J. Sun, N.-N. Zheng, X. Tang, and H.-Y. Shum. Learning to detect a salient object. In CVPR,
2007.
[15] J.-Y. Pan, H.-J. Yang, C. Faloutsos, and P. Duygulu. Automatic multimedia cross-modal correlation
discovery. In SIGKDD, 2004.
[16] J. Philbin and A. Zisserman. Object mining using a matching graph on very large image collections. In
ICVGIP, 2008.
[17] A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, 2009.
[18] B. C. Russell, A. A. Efros, J. Sivic, W. T. Freeman, and A. Zisserman. Using multiple segmentations to
discover objects and their extent in image collections. In CVPR, 2006.
[19] F. Schroff, A. Criminisi, and A. Zisserman. Harvesting image databases from the web. In ICCV, 2007.
[20] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. on Pattern Analysis and
Machine Intelligence, 22(8):888-905, 2000.
[21] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. Discovering objects and their
location in images image features. In ICCV, 2005.
[22] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: a large dataset for non-parametric
object and scene recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence, 30(11):1958-1970, 2008.
[23] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395-416, 2007.
[24] J. Winn and N. Jojic. Locus: Learning object classes with unsupervised segmentation. In ICCV, 2005.
[25] D. Zhou, J. Weston, A. Gretton, O. Bousquet, and B. Schölkopf. Ranking on data manifolds. In NIPS,
2004.
Sparse Estimation Using General Likelihoods and
Non-Factorial Priors
David Wipf and Srikantan Nagarajan*
Biomagnetic Imaging Lab, UC San Francisco
{david.wipf, sri}@mrsc.ucsf.edu
Abstract
Finding maximally sparse representations from overcomplete feature dictionaries
frequently involves minimizing a cost function composed of a likelihood (or data
fit) term and a prior (or penalty function) that favors sparsity. While typically the
prior is factorial, here we examine non-factorial alternatives that have a number of
desirable properties relevant to sparse estimation and are easily implemented using
an efficient and globally-convergent, reweighted $\ell_1$-norm minimization procedure.
The first method under consideration arises from the sparse Bayesian learning
(SBL) framework. Although based on a highly non-convex underlying cost function, in the context of canonical sparse estimation problems, we prove uniform
superiority of this method over the Lasso in that, (i) it can never do worse, and (ii)
for any dictionary and sparsity profile, there will always exist cases where it does
better. These results challenge the prevailing reliance on strictly convex penalty
functions for finding sparse solutions. We then derive a new non-factorial variant
with similar properties that exhibits further performance improvements in some
empirical tests. For both of these methods, as well as traditional factorial analogs,
we demonstrate the effectiveness of reweighted `1 -norm algorithms in handling
more general sparse estimation problems involving classification, group feature
selection, and non-negativity constraints. As a byproduct of this development, a
rigorous reformulation of sparse Bayesian classification (e.g., the relevance vector
machine) is derived that, unlike the original, involves no approximation steps and
descends a well-defined objective function.
1
Introduction
With the advent of compressive sensing and other related applications, there has been growing interest in finding sparse signal representations from redundant dictionaries [3, 5]. The canonical form
of this problem is given by,
$$\min_x\; \|x\|_0, \quad \text{s.t. } y = \Phi x, \qquad (1)$$

where $\Phi \in \mathbb{R}^{n \times m}$ is a matrix whose columns $\phi_i$ represent an overcomplete or redundant basis (i.e., $\mathrm{rank}(\Phi) = n$ and $m > n$), $x \in \mathbb{R}^m$ is a vector of unknown coefficients to be learned, and $y$ is the signal vector. The cost function being minimized represents the $\ell_0$ norm of $x$ (i.e., a count of the number of nonzero elements in $x$). If measurement noise or modeling errors are present, we instead solve the alternative problem

$$\min_x\; \|y - \Phi x\|_2^2 + \lambda \|x\|_0, \quad \lambda > 0, \qquad (2)$$
noting that in the limit as $\lambda \to 0$, the two problems are equivalent (the limit must be taken outside of the minimization). From a Bayesian perspective, optimization of either problem can be viewed, after an $\exp[-(\cdot)]$ transformation, as a challenging MAP estimation task with a quadratic likelihood function and a prior that is both improper and discontinuous. Unfortunately, an exhaustive search for the optimal representation requires the solution of up to $\binom{m}{n}$ linear systems of size $n \times n$, a

*This research was supported by NIH grants R01DC04855 and R01DC006435.
prohibitively expensive procedure for even modest values of m and n. Consequently, in practical
situations there is a need for approximate methods that efficiently solve (1) or (2) with high probability. Moreover, we would ideally like these methods to generalize to other likelihood functions
and priors for applications such as non-negative sparse coding, classification, and group variable
selection.
One common strategy is to replace $\|x\|_0$ with a more manageable penalty function $g(x)$ (or prior) that still favors sparsity. Typically this replacement is a concave, non-decreasing function of $|x| \triangleq [|x_1|, \ldots, |x_m|]^T$. It is also generally assumed to be factorial, meaning $g(x) = \sum_i g(x_i)$. Given this selection, a recent, very successful optimization technique involves iterative reweighted $\ell_1$ minimization, a process that produces more focal estimates with each passing iteration [3, 19]. To implement this procedure, at the $(k+1)$-th iteration we compute

$$x^{(k+1)} \to \arg\min_x\; \|y - \Phi x\|_2^2 + \lambda \sum_i w_i^{(k)} |x_i|, \qquad (3)$$
where $w_i^{(k)} \triangleq \partial g(x_i)/\partial |x_i|$ evaluated at $x_i = x_i^{(k)}$. As discussed in [6], these updates are guaranteed to converge to a local minimum of the underlying cost function by satisfying the conditions of the Global Convergence Theorem (see for example [24]). Moreover, empirical evidence from [3] suggests that generally only a few iterations, which can be readily computed using standard convex programming packages, are required. Note that a single iteration with unit weights is equivalent to the traditional Lasso estimator [14]. However, given an appropriate selection for $g(\cdot)$, e.g., $g(x_i) = \log(|x_i| + \epsilon)$ with $\epsilon > 0$, subsequent iterations have been shown to exhibit substantial improvements over the Lasso in approximating the solution of (1) or (2) [3].
While certainly successful in practice, there remain fundamental limitations as to what can be achieved using factorial penalties to approximate $\|x\|_0$. Perhaps counterintuitively, it has been shown in [19] that by considering the wider class of non-factorial penalties, more effective surrogates for $\|x\|_0$ can be obtained, potentially leading to better approximate solutions of either (1) or (2). In this paper we consider two non-factorial methods that rely on the same basic iterative reweighted $\ell_1$ minimization procedure outlined above. In Section 2, we briefly introduce the non-factorial penalty function first proposed in [19] (based on a dual-form interpretation of sparse Bayesian learning) and then derive a new iterative reweighted $\ell_1$ implementation that builds upon these ideas. We then demonstrate that this algorithm satisfies two desirable properties pertaining to problem (1): (i) each iteration can only improve the sparsity and, (ii) for any $\Phi$ and sparsity profile, there will always exist cases where performance improves over standard $\ell_1$ minimization, which represents the best convex approximation to (1). Together, these results imply that this reweighting scheme can never do worse than Lasso (assuming $w_i^{(0)} = 1, \forall i$), and that there will always be cases
where improvement over Lasso is achieved. To a large extent, this removes much of the stigma commonly associated with using non-convex sparsity penalties. Later in Section 3, we derive a second promising non-factorial variant by starting with a plausible $\ell_1$ reweighting scheme and then working backwards to determine the form and properties of the underlying penalty function.

In general, iterative reweighted $\ell_1$ procedures of any kind are attractive for our purposes because they can easily be augmented to handle other likelihoods and priors, provided convexity of the update (3) is preserved (of course the overall cost function being minimized will be non-convex). For example, to address the extensions mentioned above, in Section 4 we explore adding constraints such as $x_i \geq 0$, replacing $|x_i|$ with a norm on groups of variables, and using a logistic instead of quadratic likelihood term for classification. The latter extension leads to a rigorous reformulation
of sparse Bayesian classification (e.g., the relevance vector machine [15]) that, unlike the original,
involves no approximation steps and descends a well-defined objective function. Finally, Section 5
contains empirical comparisons while Section 6 provides brief concluding remarks.
2
Non-Factorial Methods Based on Sparse Bayesian Learning
A particularly useful non-factorial penalty emerges from a dual-space view [19] of sparse Bayesian
learning (SBL) [15], which is based on the notion of automatic relevance determination (ARD)
[10]. SBL assumes a Gaussian likelihood function p(y|x) = N (y; ?x, ?I), consistent with the
data fit term from (2). The basic ARD prior incorporated by SBL is p(x; ?) = N (x; 0, diag[?]),
where ? ? Rm
+ is a vector of m non-negative hyperparameters governing the prior variance of each
unknown coefficient. These hyperparameters are estimated from the data by first marginalizing over
the coefficients x and then performing what is commonly referred to as evidence maximization or
type-II maximum likelihood [10, 15]. Mathematically, this is equivalent to minimizing
Z
(4)
L(?) , ? log p(y|x)p(x; ?)dx = ? log p(y; ?) ? log |?y | + y T ??1
y y,
where ?y , ?I + ???T and ? , diag[?]. Once some ?? = arg min? L(?) is computed, an
estimate of the unknown coefficients can be obtained by setting xSBL to the posterior mean computed
using ?? :
(5)
xSBL = E[x|y; ?? ] = ?? ?T ??1
y? y.
Note that if any ??,i = 0, as often occurs during the learning process, then xSBL,i = 0 and the
corresponding feature is effectively pruned from the model. The resulting coefficient vector x SBL is
therefore sparse, with nonzero elements corresponding with the ?relevant? features.
It is not immediately apparent how the SBL procedure, which requires optimizing a cost function in
?-space and is based on a factorial prior p(x; ?), relates to solving/approximating (1) and/or (2) via
a non-factorial penalty in x-space. However, it has been shown in [19] that x SBL satisfies
xSBL = arg min ky ? ?xk22 + ?gSBL (x),
(6)
gSBL (x) , min xT ??1 x + log |?I + ???T |,
(7)
x
where
??0
assuming ? = ? and |x| , [|x1 |, . . . , |xm |]T . While not discussed in [19], gSBL (x) is a general
penalty function that only need have ? = ? to obtain equivalence with SBL; other selections may
lead to better performance (more on this in Section 4 below).
The analysis in [19] reveals that replacing $\|x\|_0$ with $g_{SBL}(x)$ and $\lambda \to 0$ leaves the globally minimizing solution to (1) unchanged but drastically reduces the number of local minima (more so than any possible factorial penalty function). While space precludes the details here, these ideas can be extended significantly to form conditions, which again are only satisfiable by a non-factorial penalty, whereby all local minima are smoothed away [21]. Note that while basic $\ell_1$-norm minimization also has no local minima, the global minimum need not always correspond with the global solution to (1), unlike when using $g_{SBL}(x)$.
It can also be shown that $g_{SBL}(x)$ is a non-decreasing, concave function of $|x|$ (see Appendix), a desirable property of sparsity-promoting penalties. Importantly, as a direct consequence of this concavity, (6) can be optimized using a reweighted $\ell_1$ algorithm (in an analogous fashion to the factorial case) using

$$w_i^{(k+1)} = \left.\frac{\partial g_{SBL}(x)}{\partial |x_i|}\right|_{x = x^{(k+1)}}. \qquad (8)$$
Although this quantity is not available in closed form (except for the special case where $\alpha \to 0$), it can be estimated by executing: Step I - Initialize by setting $w^{(k+1)} \leftarrow w^{(k)}$, the $k$-th vector of weights; Step II - Repeat until convergence

$$w_i^{(k+1)} \leftarrow \left[\phi_i^T \left(\alpha I + \Phi \widetilde{W}^{(k+1)} \widetilde{X}^{(k+1)} \Phi^T\right)^{-1} \phi_i\right]^{\frac{1}{2}}, \qquad (9)$$

where $\widetilde{W}^{(k+1)} \triangleq \mathrm{diag}[w^{(k+1)}]^{-1}$ and $\widetilde{X}^{(k+1)} \triangleq \mathrm{diag}[|x^{(k+1)}|]$. The derivation is shown in the
Appendix, while further details and analyses are deferred to [20]. Note that cost function descent is guaranteed with only a single iteration, so we need not execute (9) until convergence. In fact, it can be shown that a more rudimentary form of reweighted $\ell_1$ applied to this model in [19] amounts to performing exactly one such iteration. However, repeated execution of (9) is computationally cheap since it scales as $O\big(nm\|x^{(k+1)}\|_0\big)$, where typically $\|x^{(k+1)}\|_0 \ll n$, and is substantially less intensive than the subsequent $\ell_1$ step given by (3).
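A direct numpy transcription of Steps I and II might look as follows; the inner iteration count and the assumption that all weights stay strictly positive (which holds for $\alpha > 0$) are our own choices.

import numpy as np

def sbl_weight_update(Phi, x, w_prev, alpha, inner_iters=20):
    # Non-factorial weight update (9): repeat
    #   w_i <- [ phi_i^T (alpha*I + Phi W~ X~ Phi^T)^{-1} phi_i ]^(1/2),
    # with W~ = diag(w)^{-1} and X~ = diag(|x|).
    n = Phi.shape[0]
    w = w_prev.copy()
    for _ in range(inner_iters):
        d = np.abs(x) / w                      # diagonal of W~ X~
        A = alpha * np.eye(n) + (Phi * d) @ Phi.T
        B = np.linalg.solve(A, Phi)            # A^{-1} Phi
        w = np.sqrt(np.sum(Phi * B, axis=0))   # phi_i^T A^{-1} phi_i, per column
    return w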
From a theoretical standpoint, $\ell_1$ reweighting applied to $g_{SBL}(x)$ is guaranteed to aid performance in the sense described by the following two results, which apply in the case where $\alpha \to 0$, $\lambda \to 0$. Before proceeding, we define $\mathrm{spark}(\Phi)$ as the smallest number of linearly dependent columns in $\Phi$ [5]. It follows then that $2 \leq \mathrm{spark}(\Phi) \leq n + 1$.

Theorem 1. When applying iterative reweighted $\ell_1$ using (9) and $w_i^{(1)} \neq 0, \forall i$, the solution sparsity satisfies $\|x^{(k+1)}\|_0 \leq \|x^{(k)}\|_0$ (i.e., continued iteration can never do worse).

Theorem 2. Assume that $\mathrm{spark}(\Phi) = n+1$ and consider any instance where standard $\ell_1$ minimization fails to find some $x^*$ drawn from support set $S$ with cardinality $|S| < \frac{(n+1)}{2}$. Then there exists a set of signals $y$ (with non-zero measure) generated from $S$ such that non-factorial reweighted $\ell_1$, with $\widetilde{W}^{(k+1)}$ updated using (9), always succeeds but standard $\ell_1$ always fails.
Note that Theorem 2 does not in any way indicate what is the best non-factorial reweighting scheme in practice (for example, in our limited experience with empirical simulations, the selection $\alpha \to 0$ is not necessarily always optimal). However, it does suggest that reweighting with non-convex, non-factorial penalties is potentially very effective, motivating other selections as discussed next. Taken together, Theorems 1 and 2 challenge the prevailing reliance on strictly convex cost functions, since they ensure that we can never do worse than the Lasso (which uses the tightest convex approximation to the $\ell_0$ norm), and that there will always be cases where improvement over the Lasso is obtained.
3
Bottom-Up Construction of Non-Factorial Penalty
In the previous section, we described what amounts to a top-down formulation of a non-factorial
penalty function that emerges from a particular hierarchical Bayesian model. Based on the insights
gleaned from this procedure (and its distinction from factorial penalties), it is possible to stipulate
alternative penalty functions from the bottom up by creating plausible, non-factorial reweighting
schemes. The following is one such possibility.
Assume for simplicity that $\lambda \to 0$. The Achilles heel of standard, factorial penalties is that if we want to retain a global minimum similar to that of (1), we require a highly concave penalty on each $x_i$ [21]. However, this implies that almost all basic feasible solutions (BFS) to $y = \Phi x$, defined as a solution with $\|x\|_0 \leq n$, will form local minima of the penalty function constrained to the feasible region. This is a very undesirable property since there are on the order of $\binom{m}{n}$ BFS with $\|x\|_0 = n$, which is equal to the signal dimension and not very sparse. We would really like to find degenerate
BFS, where $\|x\|_0$ is strictly less than $n$. Such solutions are exceedingly rare and difficult to find.
Consequently we would like to utilize a non-factorial, yet highly concave penalty that explicitly
favors degenerate BFS. We can accomplish this by constructing a reweighting scheme designed to
avoid non-degenerate BFS whenever possible.
Now consider the covariance-like quantity $\alpha I + \Phi (\widetilde{X}^{(k+1)})^2 \Phi^T$, where $\alpha$ may be small, and then construct weights using the projection of each basis vector $\phi_i$ as defined via

$$w_i^{(k+1)} \leftarrow \phi_i^T \left(\alpha I + \Phi (\widetilde{X}^{(k+1)})^2 \Phi^T\right)^{-1} \phi_i. \qquad (10)$$
Ideally, if at iteration $k+1$ we are at a bad or non-degenerate BFS, we do not want the newly computed $w_i^{(k+1)}$ to favor the present position at the next iteration of (3) by assigning overly large weights to the zero-valued $x_i$. In such a situation, the factor $\Phi (\widetilde{X}^{(k+1)})^2 \Phi^T$ in (10) will be full rank and so all weights will be relatively modest sized. In contrast, if a rare, degenerate BFS is found, then $\Phi (\widetilde{X}^{(k+1)})^2 \Phi^T$ will no longer be full rank, and the weights associated with zero-valued coefficients will be set to large values, meaning this solution will be favored in the next iteration.
In some sense, the distinction between (10) and its factorial counterparts, such as the method of Candès et al. [3] which uses $w_i^{(k+1)} \leftarrow 1/(|x_i^{(k+1)}| + \epsilon)$, can be summarized as follows: the factorial methods assign the largest weight whenever the associated coefficient goes to zero; with (10) the largest weight is only assigned when the associated coefficient goes to zero and $\|x^{(k+1)}\|_0 < n$.
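In code, (10) differs from (9) only in the matrix being inverted and in requiring no inner loop. A numpy sketch (our own transcription, with $\alpha$ left as a free parameter):

import numpy as np

def bottom_up_weights(Phi, x, alpha):
    # Bottom-up update (10): w_i <- phi_i^T (alpha*I + Phi X~^2 Phi^T)^{-1} phi_i.
    n = Phi.shape[0]
    A = alpha * np.eye(n) + (Phi * (x ** 2)) @ Phi.T   # x**2 = |x|^2 elementwise
    B = np.linalg.solve(A, Phi)
    return np.sum(Phi * B, axis=0)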
The reweighting option (10), which bears some resemblance to (9), also has some very desirable properties beyond the intuitive justification given above. First, since we are utilizing (10) in the context of reweighted $\ell_1$ minimization, it would be productive to know what cost function, if any, we are minimizing when we compute each iteration. Using the fundamental theorem of calculus for line integrals (or the gradient theorem), it follows that the bottom-up (BU) penalty function associated with (10) is

$$g_{BU}(x) \triangleq \int_0^1 \mathrm{trace}\left[\widetilde{X}\, \Phi^T \left(\alpha I + \Phi (\beta \widetilde{X})^2 \Phi^T\right)^{-1} \Phi\right] d\beta. \qquad (11)$$
Moreover, because each weight $w_i$ is a non-increasing function of each $x_j$, $\forall j$, from Kachurovskii's theorem [12] it directly follows that (11) is concave and non-decreasing in $|x|$, and thus naturally promotes sparsity. Additionally, for $\alpha$ sufficiently small, it can be shown that the global minimum of (11) on the constraint $y = \Phi x$ must occur at a degenerate BFS (Theorem 1 from above also holds
when using (10); Theorem 2 may as well, although we have not formally shown this). And finally,
regarding implementational issues and interpretability, (10) avoids any recursive weight assignments
or inner-loop optimization as when using (9).
4
Extensions
One of the motivating factors for using iterative reweighted $\ell_1$ optimization is that it is very easy to
incorporate alternative likelihoods and priors. This section addresses three such examples.
Non-Negative Sparse Coding: Numerous applications require sparse solutions where all coefficients
$x_i$ are constrained to be non-negative [2]. By adding the constraint $x \geq 0$ to (3) at each iteration, we can easily compute such solutions using $g_{SBL}(x)$, $g_{BU}(x)$, or any other appropriate penalty function.
Note that in the original SBL formulation, this is not a possibility since the integrals required to
compute the associated cost function or update rules no longer have closed-form expressions.
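Concretely, the only change to a proximal-gradient inner solver is the shrinkage step: the prox of $\lambda \sum_i w_i x_i$ restricted to $x \geq 0$ is a one-sided soft-threshold. A minimal sketch, under the same illustrative ISTA assumptions as before:

import numpy as np

def nonneg_weighted_lasso_ista(Phi, y, w, lam, n_iters=1000):
    # min_x ||y - Phi x||_2^2 + lam * sum_i w_i x_i   subject to x >= 0.
    x = np.zeros(Phi.shape[1])
    step = 1.0 / (2.0 * np.linalg.norm(Phi, 2) ** 2)
    for _ in range(n_iters):
        z = x - step * 2.0 * Phi.T @ (Phi @ x - y)
        x = np.maximum(z - step * lam * w, 0.0)   # one-sided soft-threshold
    return x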
Group Feature Selection: Another common generalization is to seek sparsity at the level of groups
of features, e.g., the group Lasso [23]. The simultaneous sparse approximation problem [17] is a particularly useful adaptation of this idea relevant to compressive sensing [18], manifold learning [13],
and neuroimaging [22]. In this situation, we are presented with $r$ signals $Y \triangleq [y_{\cdot 1}, y_{\cdot 2}, \ldots, y_{\cdot r}]$ that were produced by coefficient vectors $X \triangleq [x_{\cdot 1}, x_{\cdot 2}, \ldots, x_{\cdot r}]$ characterized by the same sparsity profile or support, meaning that the coefficient matrix $X$ is row sparse. Here we adopt the notation that $x_{\cdot j}$ represents the $j$-th column of $X$ while $x_{i\cdot}$ represents the $i$-th row of $X$. The sparse recovery problems (1) and (2) then become

$$\min_X\; d(X), \quad \text{s.t. } Y = \Phi X, \qquad \text{and} \qquad \min_X\; \|Y - \Phi X\|_F^2 + \lambda\, d(X), \quad \lambda > 0, \qquad (12)$$

where $d(X) \triangleq \sum_{i=1}^m I\left[\|x_{i\cdot}\| > 0\right]$ and $I[\cdot]$ is an indicator function. $d(X)$ favors row sparsity and is a natural extension of the $\ell_0$ norm to the simultaneous approximation problem.
As before, the combinatorial nature of each optimization problem renders them intractable and so approximate procedures are required. All of the algorithms discussed herein can naturally be expanded to this domain essentially by substituting the scalar coefficient magnitudes from a given iteration $|x_i^{(k)}|$ with some row-vector penalty, such as a norm. If we utilize $\|x_{i\cdot}\|_2$, then the coefficient matrix update analogous to (3) requires the solution of the more complicated weighted second-order cone (SOC) program

$$X^{(k+1)} \to \arg\min_X\; \|Y - \Phi X\|_F^2 + \lambda \sum_i w_i^{(k)} \|x_{i\cdot}\|_2. \qquad (13)$$

Other selections such as the $\ell_\infty$ norm are possible as well, providing added generality.
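The SOC program (13) can be handled with the same proximal machinery, since the prox of a weighted sum of row norms is row-wise block soft-thresholding. A minimal sketch (step size and iteration count are again illustrative assumptions, not the solver used in the paper):

import numpy as np

def weighted_group_lasso_ista(Phi, Y, w, lam, n_iters=1000):
    # min_X ||Y - Phi X||_F^2 + lam * sum_i w_i ||x_{i.}||_2
    X = np.zeros((Phi.shape[1], Y.shape[1]))
    step = 1.0 / (2.0 * np.linalg.norm(Phi, 2) ** 2)
    for _ in range(n_iters):
        Z = X - step * 2.0 * Phi.T @ (Phi @ X - Y)
        norms = np.linalg.norm(Z, axis=1)
        shrink = np.maximum(1.0 - step * lam * w / np.maximum(norms, 1e-12), 0.0)
        X = Z * shrink[:, None]                 # block soft-threshold each row
    return X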
Sparse Classifier Design: At a high level, sparse classifiers can be trained by substituting a (preferably) convex likelihood function for the quadratic term in (2). For example, to perform sparse logistic regression we would solve

$$\min_x\; -\sum_j \left[ y_j \log \sigma\big(\phi_{j\cdot}^T x\big) + (1 - y_j) \log\big(1 - \sigma(\phi_{j\cdot}^T x)\big) \right] + \lambda\, g(x), \qquad (14)$$

where now $y_j \in \{0,1\}$, $\sigma(\cdot)$ denotes the logistic sigmoid, and $g(x)$ is an arbitrary, concave-in-$|x|$ penalty. This can be implemented by iteratively solving an $\ell_1$-norm penalized logistic regression problem, which can be efficiently accomplished using a simple majorization-maximization approach [7]. Note that cost function descent does not require that we compute the full reweighted $\ell_1$ solution; the iterations from [7] naturally lend themselves to an efficient partial (or greedy) update before recomputing the weights.
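For reference, a bare-bones inner solver for (14) with fixed weights is sketched below; it uses plain proximal gradient on the logistic negative log-likelihood rather than the majorization scheme of [7], purely for brevity, and the fixed step size is an illustrative assumption.

import numpy as np

def sparse_logistic_l1(Phi, y, w, lam, step=1e-2, n_iters=2000):
    # min_x  logistic NLL(Phi, y; x) + lam * sum_i w_i |x_i|,  y_j in {0,1}.
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iters):
        p = 1.0 / (1.0 + np.exp(-(Phi @ x)))   # predicted probabilities
        grad = Phi.T @ (p - y)                 # gradient of the logistic NLL
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    return x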
It is very insightful to compare this methodology with the original SBL (or relevance vector machine) classifier derived in [15]. When the Gaussian likelihood p(y|x) is replaced with a Bernoulli
distribution (which leads to the logistic data fit term above), it is no longer possible to compute
the marginalization (4) or the posterior distribution $p(x|y;\gamma)$, which is used both for optimization purposes and to make predictive statements about test data. Consequently, a heuristic Laplace approximation is adopted, which requires a second-order Newton inner-loop to fit a Gaussian about the mode of $p(x|y;\gamma)$. This Gaussian is then used to transform the classification problem into a
standard regression one with data-dependent (herteroscedastic) noise, and then whatever approach
is used to minimize (4), either the MacKay update rules [15] or a greedy constructive method [16],
can be used in the outer-loop. When (if) a fixed point $\gamma_*$ is reached, the corresponding classifier coefficients are chosen as the mode of $p(x|y;\gamma_*)$.
While demonstrably effective in a wide variety of empirical classification tests, the problem with
this formulation of SBL is threefold. First, there are no convergence guarantees of any kind, regardless of which method is used for the outer-loop. Secondly, it is completely unclear what, if any, cost
function is being descended (even approximately) to obtain the classifier coefficients, making it difficult to explore the model for enhancements or analytical purposes. Thirdly, in certain applications
it has been observed that SBL achieves extreme sparsity at the expense of classification accuracy
[4, 11]. There is currently no flexibility in the model to remedy this problem.
These issues are directly addressed by dispensing with the Bayesian hierarchical derivation of SBL
altogether and considering classification in light of (14). Both the MacKay and greedy SBL updates
are equivalent to minimizing (14) with $g(x) = g_{SBL}(x)$, and assuming $\alpha = \lambda = 1$, using coordinate
descent over a set of auxiliary functions (details provided in a forthcoming paper). Unfortunately
however, because these auxiliary functions are based in part on a second-order Laplace approximation, they do not form a strict upper bound and so provable convergence (or even descent) is not
possible. Of course we can always substitute the reweighted $\ell_1$ scheme discussed above to avoid
this issue, since the underlying cost function in x-space is the same. Perhaps more importantly, to
properly regulate sparsity, when we deviate from the original Bayesian inspiration for this model,
we are free to adjust ? and/or ?. For example, with ? small, the penalty gSBL (x) is more highly
concave favoring sparsity, while in the limit at ? becomes large, it acts like a standard ` 1 norm, still
favoring sparsity but not exceedingly so (the same phenomena occurs when using the penalty (11)).
Likewise, ? is as a natural trade-off parameter balancing the contribution from the two terms in (6)
or (14). Both ? and ? can be tuned via cross-validation if desired.
There is one additional concern regarding SBL that involves marginal likelihood (sometimes called
evidence) calculations. In the standard regression case where marginalization was possible, the optimized quantity ? log p(y; ?) represents an approximation to ? log p(y) that can be used, among
other things, for model comparison. This notion is completely lost when we move to the classification case under consideration. While space precludes the details, if we are willing to substitute
a probit likelihood function for the logistic, it is possible to revert (14) back to the original hierarchical, ?-dependent Bayesian model and obtain a rigorous upper bound on ? log p(y; ?). Finally,
detailed empirical simulations with both logistic- and probit-based classifiers is an area of future
research; preliminary results are promising.
5
Empirical Comparisons
To further examine the algorithms discussed herein, we performed simulations similar to those in [3].
In the first experiment, each trial consisted of generating a $100 \times 256$ dictionary $\Phi$ with iid Gaussian entries and a sparse vector $x^*$ with 60 nonzero, non-negative (truncated Gaussian) coefficients. A signal is then computed using $y = \Phi x^*$. We then attempted to recover $x^*$ by applying non-negative $\ell_1$ reweighting strategies with four different penalty functions: (i) $g_{SBL}(x)$ implemented using a single iteration of (9), referred to as SBL-I (equivalent to the method from [19]); (ii) $g_{SBL}(x)$ implemented using multiple iterations of (9) as discussed in Section 2, referred to as SBL-II; (iii) $g_{BU}(x)$; and finally (iv) $g(x) = \sum_i \log(|x_i| + \epsilon)$, the factorial method of Candès et al., which represents the current state-of-the-art in reweighted $\ell_1$ algorithms. In all cases $\alpha$ was chosen via coarse cross-validation. Additionally, since we are working with a noise-free signal, we assume $\lambda \to 0$ and so the requisite coefficient update (3) with $x_i \geq 0$ reduces to a standard linear program.
Given $w_i^{(0)} = 1, \forall i$ for each algorithm, the first iteration amounts to the non-negative minimum $\ell_1$-norm solution (i.e., the Lasso). Average results from 1000 random trials are displayed in Figure 1 (left), which plots the empirical probability of success in recovering $x^*$ versus the iteration number. We observe that standard non-negative $\ell_1$ never succeeds (see first iteration results); however, with only a few reweighted iterations drastic improvement is possible, especially for the bottom-up approach. By 10 iterations, the non-factorial variants have all exceeded the method of Candès et al.
(There was no appreciable improvement by any method after 10 iterations.) This shows both the
efficacy of non-factorial reweighting and the ability to handle constraints on x.
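The data-generation protocol for this experiment is easy to reproduce; a sketch of one trial follows (the exact truncated-Gaussian draw and the random seed are our assumptions):

import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 256, 60
Phi = rng.standard_normal((n, m))                  # iid Gaussian dictionary
x_true = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x_true[support] = np.abs(rng.standard_normal(k))   # nonneg. (truncated Gaussian)
y = Phi @ x_true                                   # noise-free signal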
For the second experiment, we used a randomly generated $50 \times 100$ dictionary for each trial with iid Gaussian entries as above, and created 5 coefficient vectors $X^* = [x^*_{\cdot 1}, \ldots, x^*_{\cdot 5}]$ with matching sparsity profile and iid Gaussian nonzero coefficients. We then generate the signal matrix $Y = \Phi X^*$ and attempt to learn $X^*$ using various group-level reweighting schemes. In this experiment we varied the row sparsity of $X^*$ from $d(X^*) = 30$ to $d(X^*) = 40$; in general, the more nonzero rows, the harder the recovery problem becomes. A total of five algorithms modified to the simultaneous sparse approximation problem were tested using an $\ell_2$-norm penalty on each coefficient row: the four methods from above (executed for 5 iterations each) plus the standard group Lasso (equivalent to a single iteration of any of the other algorithms). Results are presented in Figure 1 (right),
where the performance gap between the factorial and non-factorial approaches is very significant.
Additionally, we have successfully applied this methodology to large neuroimaging data sets [22],
obtaining significant improvements over existing convex approaches such as the group Lasso, consistent with the results in Figure 1. Other related simulation results are contained in [20].
[Figure 1 appears here. Left panel: $p(\text{success})$ vs. $\ell_1$ iteration number; legend: SBL-I, SBL-II, Bottom-Up, Candès et al. Right panel: $p(\text{success})$ vs. row sparsity $d(X^*)$; legend adds Group Lasso.]
Figure 1: Left: Probability of success recovering sparse non-negative coefficients as a function of reweighted $\ell_1$ iterations. Right: Iterative reweighted results using 5 simultaneous signal vectors. Probability of success recovering sparse coefficients for different row sparsity values, i.e., $d(X^*)$.
6
Conclusion
In this paper we have examined concave, non-factorial priors (which previously have received little
attention) for the purpose of estimating sparse coefficients. When coupled with general likelihood
models and minimized using efficient iterative reweighted `1 methods, these priors offer a powerful
alternative to existing state-of-the-art sparse estimation techniques. We have also shown (for the first
time) exactly what the underlying cost function associated with the SBL classifier is and provided a
more principled algorithm for minimizing it.
Appendix
Concavity of $g_{SBL}(x)$ and derivation of weight updates (9): Because $\log|\alpha I + \Phi\Gamma\Phi^T|$ is concave and non-decreasing with respect to $\gamma \geq 0$, we can express it as

$$\log\left|\alpha I + \Phi\Gamma\Phi^T\right| = \min_{z \geq 0}\; z^T \gamma - h^*(z), \qquad (15)$$

where $h^*(z)$ is defined as the concave conjugate of $h(\gamma) \triangleq \log|\alpha I + \Phi\Gamma\Phi^T|$ [1]. We can then express $g_{SBL}(x)$ via

$$g_{SBL}(x) = \min_{\gamma \geq 0}\; x^T \Gamma^{-1} x + \log\left|\alpha I + \Phi\Gamma\Phi^T\right| = \min_{\gamma, z \geq 0}\; \sum_i \frac{x_i^2}{\gamma_i} + z_i \gamma_i - h^*(z). \qquad (16)$$

Minimizing over $\gamma$ for fixed $x$ and $z$, we get

$$\gamma_i = z_i^{-1/2} |x_i|, \quad \forall i. \qquad (17)$$
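For completeness, (17) follows from a one-line stationarity condition, since each term of (16) is separable in $\gamma_i$:

$$\frac{\partial}{\partial \gamma_i}\left( \frac{x_i^2}{\gamma_i} + z_i \gamma_i \right) = -\frac{x_i^2}{\gamma_i^2} + z_i = 0 \;\Longrightarrow\; \gamma_i = z_i^{-1/2} |x_i|,$$

and the second derivative $2 x_i^2/\gamma_i^3 > 0$ confirms that this stationary point is the minimizer.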
Substituting this expression into (16) gives the representation

$$g_{SBL}(x) = \min_{z \geq 0}\; \sum_i \left( \frac{x_i^2}{z_i^{-1/2}|x_i|} + z_i^{1/2}|x_i| \right) - h^*(z) = \min_{z \geq 0}\; \sum_i 2 z_i^{1/2} |x_i| - h^*(z), \qquad (18)$$

which implies that $g_{SBL}(x)$ can be represented as a minimum of upper-bounding hyperplanes with respect to $|x|$, and thus must be concave and non-decreasing since $z \geq 0$ [1]. We also observe that for fixed $z$, solving (6) is a weighted $\ell_1$ minimization problem.
To derive the weight update (9), we only need the optimal value of each $z_i$, which from basic convex analysis will satisfy

$$z_i^{1/2} = \frac{\partial g_{SBL}(x)}{2\,\partial |x_i|}. \qquad (19)$$

Since this quantity is not available in closed form, we can instead iteratively minimize (16) over $\gamma$ and $z$. We start by initializing $z_i^{1/2} \leftarrow w_i^{(k)}, \forall i$, and then minimize over $\gamma$ using (17). We then compute the optimal $z$ for fixed $\gamma$, which can be done analytically using

$$z = \nabla_\gamma \log\left|\alpha I + \Phi\Gamma\Phi^T\right| = \mathrm{diag}\left[\Phi^T \left(\alpha I + \Phi\Gamma\Phi^T\right)^{-1} \Phi\right]. \qquad (20)$$

By substituting (17) into (20) and defining $w_i^{(k+1)} \triangleq z_i^{1/2}$, we obtain the weight update (9). This procedure is guaranteed to converge to a solution satisfying (19) [20] although, as mentioned previously, only one iteration is actually required for the overall algorithm.
Proof of Theorem 1: Before we begin, we should point out that for $\alpha \to 0$, the weight update (9) is still well-specified regardless of the value of the diagonal matrix $\widetilde{W}^{(k+1)}\widetilde{X}^{(k+1)}$. If $\phi_i$ is not in the span of $\Phi\widetilde{W}^{(k+1)}\widetilde{X}^{(k+1)}\Phi^T$, then $w_i^{(k+1)} \to \infty$ and the corresponding coefficient $x_i^{(k+1)}$ can be set to zero for all future iterations. Otherwise $w_i^{(k+1)}$ can be computed efficiently using the Moore-Penrose pseudoinverse and will be strictly nonzero.
For simplicity we will now assume that $\mathrm{spark}(\Phi) = n + 1$, which is equivalent to requiring that each subset of $n$ columns of $\Phi$ forms a basis in $\mathbb{R}^n$. The extension to the more general case is discussed in [20]. From basic linear programming [8], at any iteration the coefficients will satisfy $\|x^{(k)}\|_0 \leq n$ for arbitrary weights $\widetilde{W}^{(k-1)}$. Given our simplifying assumptions, there exist only two possibilities. If $\|x^{(k)}\|_0 = n$, then we will automatically satisfy $\|x^{(k+1)}\|_0 \leq \|x^{(k)}\|_0$ at the next iteration regardless of $\widetilde{W}^{(k)}$. In contrast, if $\|x^{(k)}\|_0 < n$, then $\mathrm{rank}\big[\widetilde{W}^{(k)}\big] \leq \|x^{(k)}\|_0$ for all evaluations of (9) with $\alpha \to 0$, enforcing $\|x^{(k+1)}\|_0 \leq \|x^{(k)}\|_0$.
Proof of Theorem 2: For a fixed dictionary $\Phi$ and coefficient vector $x^*$, we are assuming that $\|x^*\|_0 < \frac{(n+1)}{2}$. Now consider a second coefficient vector $x'$ with support and sign pattern equal to $x^*$, and define $x'_{(i)}$ as the $i$-th largest coefficient magnitude of $x'$. Then there exists a set of $\|x^*\|_0 - 1$ scaling constants $\theta_i \in (0, 1]$ (i.e., strictly greater than zero) such that, for any signal $y$ generated via $y = \Phi x'$ and $x'_{(i+1)} \leq \theta_i x'_{(i)}$, $i = 1, \ldots, \|x^*\|_0 - 1$, the minimization problem

$$\hat{x} \triangleq \arg\min_x\; g_{SBL}(x), \quad \text{s.t. } \Phi x' = \Phi x, \; \alpha \to 0, \qquad (21)$$

is unimodal and has a unique minimizing stationary point which satisfies $\hat{x} = x'$. This result follows from [21] and the dual-space characterization of the penalty $g_{SBL}(x)$ from [19]. Note that (21) is equivalent to (6) with $\lambda \to 0$, so the reweighted non-factorial update (9) can be applied. Furthermore, based on the global convergence of these updates discussed above, the sequence of estimates is guaranteed to satisfy $x^{(k)} \to \hat{x} = x'$. So we will necessarily learn the generative $x'$.
Let $x_{\ell_1} \triangleq \arg\min_x \|x\|_1$, subject to $\Phi x^* = \Phi x$. By assumption we know that $x_{\ell_1} \neq x^*$. Moreover, we can conclude using [9, Theorem 6] that if $x_{\ell_1}$ fails for some $x^*$, it will fail for any other $x$ with matching support and sign pattern; it will therefore fail for any $x'$ as defined above. Finally, by construction, the set of feasible $x'$ will have nonzero measure over the support $S$ since each $\theta_i$ is strictly nonzero. Note also that this result can likely be extended to the case where $\mathrm{spark}(\Phi) < n + 1$ and to any $x^*$ that satisfies $\|x^*\|_0 < \mathrm{spark}(\Phi) - 1$. The more specific case addressed above was only assumed to allow direct application of [9, Theorem 6].
References
[1] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
[2] A. Bruckstein, M. Elad, and M. Zibulevsky, "A non-negative and sparse enough solution of an underdetermined linear system of equations is unique," IEEE Trans. Information Theory, vol. 54, no. 11, pp. 4813-4820, Nov. 2008.
[3] E. Candès, M. Wakin, and S. Boyd, "Enhancing sparsity by reweighted $\ell_1$ minimization," J. Fourier Anal. Appl., vol. 14, no. 5, pp. 877-905, 2008.
[4] G. Cawley and N. Talbot, "Gene selection in cancer classification using sparse logistic regression with Bayesian regularization," Bioinformatics, vol. 22, no. 19, pp. 2348-2355, 2006.
[5] D. Donoho and M. Elad, "Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell_1$ minimization," Proc. Nat. Acad. Sci., vol. 100, no. 5, pp. 2197-2202, 2003.
[6] M. Fazel, H. Hindi, and S. Boyd, "Log-Det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices," Proc. American Control Conf., vol. 3, pp. 2156-2162, June 2003.
[7] B. Krishnapuram, L. Carin, M. Figueiredo, and A. Hartemink, "Sparse multinomial logistic regression: Fast algorithms and generalization bounds," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, pp. 957-968, 2005.
[8] D. Luenberger, Linear and Nonlinear Programming, Addison-Wesley, Reading, Massachusetts, second edition, 1984.
[9] D. Malioutov, M. Çetin, and A.S. Willsky, "Optimal sparse representations in general overcomplete bases," IEEE Int. Conf. Acoust., Speech, and Sig. Proc., vol. 2, pp. II-793-796, 2004.
[10] R. Neal, Bayesian Learning for Neural Networks, Springer-Verlag, New York, 1996.
[11] Y. Qi, T. Minka, R. Picard, and Z. Ghahramani, "Predictive automatic relevance determination by expectation propagation," Int. Conf. Machine Learning (ICML), pp. 85-92, 2004.
[12] R. Showalter, "Monotone operators in Banach space and nonlinear partial differential equations," Mathematical Surveys and Monographs 49, AMS, Providence, RI, 1997.
[13] J. Silva, J. Marques, and J. Lemos, "Selecting landmark points for sparse manifold learning," Advances in Neural Information Processing Systems 18, pp. 1241-1248, 2006.
[14] R. Tibshirani, "Regression shrinkage and selection via the Lasso," Journal of the Royal Statistical Society, vol. 58, no. 1, pp. 267-288, 1996.
[15] M. Tipping, "Sparse Bayesian learning and the relevance vector machine," J. Machine Learning Research, vol. 1, pp. 211-244, 2001.
[16] M. Tipping and A. Faul, "Fast marginal likelihood maximisation for sparse Bayesian models," Ninth Int. Workshop Artificial Intelligence and Statistics, Jan. 2003.
[17] J. Tropp, "Algorithms for simultaneous sparse approximation. Part II: Convex relaxation," Signal Processing, vol. 86, pp. 589-602, April 2006.
[18] M. Wakin, M. Duarte, S. Sarvotham, D. Baron, and R. Baraniuk, "Recovery of jointly sparse signals from a few random projections," Advances in Neural Information Processing Systems 18, pp. 1433-1440, 2006.
[19] D. Wipf and S. Nagarajan, "A new view of automatic relevance determination," Advances in Neural Information Processing Systems 20, pp. 1625-1632, 2008.
[20] D. Wipf and S. Nagarajan, "Iterative reweighted $\ell_1$ and $\ell_2$ methods for finding sparse solutions," Submitted, 2009.
[21] D. Wipf and S. Nagarajan, "Latent variable Bayesian models for promoting sparsity," Submitted, 2009.
[22] D. Wipf, J. Owen, H. Attias, K. Sekihara, and S. Nagarajan, "Robust Bayesian estimation of the location, orientation, and time course of multiple correlated neural sources using MEG," NeuroImage, vol. 49, no. 1, pp. 641-655, Jan. 2010.
[23] M. Yuan and Y. Lin, "Model selection and estimation in regression with grouped variables," J. R. Statist. Soc. B, vol. 68, pp. 49-67, 2006.
[24] W. Zangwill, Nonlinear Programming: A Unified Approach, Prentice Hall, New Jersey, 1969.
Particle-based Variational Inference
for Continuous Systems
Alexander T. Ihler
Dept. of Computer Science
Univ. of California, Irvine
[email protected]
Andrew J. Frank
Dept. of Computer Science
Univ. of California, Irvine
[email protected]
Padhraic Smyth
Dept. of Computer Science
Univ. of California, Irvine
[email protected]
Abstract
Since the development of loopy belief propagation, there has been considerable
work on advancing the state of the art for approximate inference over distributions
defined on discrete random variables. Improvements include guarantees of convergence, approximations that are provably more accurate, and bounds on the results of exact inference. However, extending these methods to continuous-valued
systems has lagged behind. While several methods have been developed to use belief propagation on systems with continuous values, recent advances for discrete
variables have not as yet been incorporated.
In this context we extend a recently proposed particle-based belief propagation
algorithm to provide a general framework for adapting discrete message-passing
algorithms to inference in continuous systems. The resulting algorithms behave
similarly to their purely discrete counterparts, extending the benefits of these more
advanced inference techniques to the continuous domain.
1
Introduction
Graphical models have proven themselves to be an effective tool for representing the underlying
structure of probability distributions and organizing the computations required for exact and approximate inference. Early examples of the use of graph structure for inference include join or
junction trees [1] for exact inference, Markov chain Monte Carlo (MCMC) methods [2], and variational methods such as mean field and structured mean field approaches [3]. Belief propagation
(BP), originally proposed by Pearl [1], has gained in popularity as a method of approximate inference, and in the last decade has led to a number of more sophisticated algorithms based on conjugate
dual formulations and free energy approximations [4, 5, 6].
However, the progress on approximate inference in systems with continuous random variables has
not kept pace with that for discrete random variables. Some methods, such as MCMC techniques, are
directly applicable to continuous domains, while others such as belief propagation have approximate
continuous formulations [7, 8]. Sample-based representations, such as are used in particle filtering,
are particularly appealing as they are relatively easy to implement, have few numerical issues, and
have no inherent distributional assumptions. Our aim is to extend particle methods to take advantage
of recent advances in approximate inference algorithms for discrete-valued systems.
Several recent algorithms provide significant advantages over loopy belief propagation. Double-loop algorithms such as CCCP [9] and UPS [10] use the same approximations as BP but guarantee
convergence. More general approximations can be used to provide theoretical bounds on the results
of exact inference [5, 3] or are guaranteed to improve the quality of approximation [6], allowing
an informed trade-off between computation and accuracy. Like belief propagation, they can be
formulated as local message-passing algorithms on the graph, making them amenable to parallel
computation [11] or inference in distributed systems [12, 13].
1
In short, the algorithmic characteristics of these recently-developed algorithms are often better, or at
least more flexible, than those of BP. However, these methods have not been applied to continuous
random variables, and in fact this subject was one of the open questions posed at a recent NIPS
workshop [14].
In order to develop particle-based approximations for these algorithms, we focus on one particular
technique for concreteness: tree-reweighted belief propagation (TRW) [5]. TRW represents one
of the earliest of a recent class of inference algorithms for discrete systems, but as we discuss in
Section 2.2 the extensions of TRW can be incorporated into the same framework if desired.
The basic idea of our algorithm is simple and extends previous particle formulations of exact inference [15] and loopy belief propagation [16]. We use collections of samples drawn from the continuous state space of each variable to define a discrete problem, "lifting" the inference task from the original space to a restricted, discrete domain on which TRW can be performed. At any point, the current results of the discrete inference can be used to re-select the sample points from a variable's continuous domain. This iterative interaction between the sample locations and the discrete
messages produces a dynamic discretization that adapts itself to the inference results.
We demonstrate that TRW and similar methods can be naturally incorporated into the lifted, discrete
phase of particle belief propagation and that they confer similar benefits on the continuous problem
as hold in truly discrete systems. To this end we measure the performance of the algorithm on an
Ising grid, an analogous continuous model, and the sensor localization problem. In each case, we
show that tree-reweighted particle BP exhibits behavior similar to TRW and produces significantly
more robust marginal estimates than ordinary particle BP.
2
Graphical Models and Inference
Graphical models provide a convenient formalism for describing structure within a probability distribution p(X) defined over a set of variables X = {x1 , . . . , xn }. This structure can then be applied
to organize computations over p(X) and construct efficient algorithms for many inference tasks,
including optimization to find a maximum a posteriori (MAP) configuration, marginalization, or
computing the likelihood of observed data.
2.1
Factor Graphs
Factor graphs [17] are a particular type of graphical model that describe the factorization structure of the distribution p(X) using a bipartite graph consisting of factor nodes and variable nodes.
Specifically, suppose such a graph G consists of factor nodes F = {f1 , . . . , fm } and variable nodes
$X = \{x_1, \ldots, x_n\}$. Let $X_u \subseteq X$ denote the neighbors of factor node $f_u$ and $F_s \subseteq F$ denote the neighbors of variable node $x_s$. Then, $G$ is consistent with a distribution $p(X)$ if and only if

$$p(x_1, \ldots, x_n) = \frac{1}{Z} \prod_{u=1}^{m} f_u(X_u). \qquad (1)$$
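As a concrete reading of (1) for the discrete case, a factor graph can be stored as a list of (variable indices, table) pairs, with the unnormalized density given by the product over factors. The representation below is our own minimal sketch, not taken from [17]:

import numpy as np
from itertools import product

# Each factor: (indices of its variables, nonnegative table over their values).
factors = [
    ((0, 1), np.array([[1.0, 2.0], [2.0, 1.0]])),   # f1(x0, x1)
    ((1, 2), np.array([[3.0, 1.0], [1.0, 3.0]])),   # f2(x1, x2)
]

def unnormalized_p(assignment, factors):
    # Product over factors in (1), before dividing by Z.
    val = 1.0
    for idx, table in factors:
        val *= table[tuple(assignment[i] for i in idx)]
    return val

# The partition function Z sums the product over all joint configurations.
Z = sum(unnormalized_p(a, factors) for a in product([0, 1], repeat=3))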
In a common abuse of notation, we use the same symbols to represent each variable node and its
associated variable xs , and similarly for each factor node and its associated function fu . Each factor
fu corresponds to a strictly positive function over a subset of the variables. The graph connectivity
captures the conditional independence structure of p(X), enabling the development of efficient exact
and approximate inference algorithms [1, 17, 18]. The quantity Z, called the partition function, is
also of importance in many problems; for example in normalized distributions such as Bayes nets, it
corresponds to the probability of evidence and can be used for model comparison.
A common inference problem is that of computing the marginal distributions of p(X). Specifically,
for each variable xs we are interested in computing the marginal distribution
$$p_s(x_s) = \int_{X \setminus x_s} p(X) \, dX.$$
For discrete-valued variables X, the integral is replaced by a summation.
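For intuition, exact marginalization on a tiny discrete factor graph can be written down directly. The sketch below (plain Python with NumPy; the variables and factor tables are illustrative, not from the paper) accumulates the unnormalized product of factors over every joint configuration to recover Z and each marginal. Its exponential cost in the number of variables is exactly what the algorithms discussed next avoid.

import itertools
import numpy as np

variables = {'a': 2, 'b': 2, 'c': 2}           # variable -> number of states
factors = [
    (('a', 'b'), np.array([[0.9, 0.1], [0.1, 0.9]])),   # f(a, b)
    (('b', 'c'), np.array([[0.8, 0.2], [0.2, 0.8]])),   # f(b, c)
]

names = list(variables)
marginals = {v: np.zeros(k) for v, k in variables.items()}
Z = 0.0
for joint in itertools.product(*(range(variables[v]) for v in names)):
    state = dict(zip(names, joint))
    p = 1.0
    for args, table in factors:
        p *= table[tuple(state[v] for v in args)]
    Z += p                                      # accumulate partition function
    for v in names:
        marginals[v][state[v]] += p             # accumulate unnormalized marginal

for v in names:
    marginals[v] /= Z                           # normalized marginal p_s(x_s)
print(Z, marginals)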
When the variables are discrete and the graph G representing p(X) forms a tree (G has no cycles), marginalization can be performed efficiently using the belief propagation or sum-product algorithm [1, 17]. For inference in more general graphs, the junction tree algorithm [19] creates a
tree-structured hypergraph of G and then performs inference on this hypergraph. The computational
complexity of this process is O(nd^b), where d is the number of possible values for each variable and
b is the maximal clique size of the hypergraph. Unfortunately, for even moderate values of d, this
complexity becomes prohibitive for even relatively small b.
2.2 Approximate Inference
Loopy BP [1] is a popular alternative to exact methods and proceeds by iteratively passing "messages" between variable and factor nodes in the graph as though the graph were a tree (ignoring
cycles). The algorithm is exact when the graph is tree-structured and can provide excellent approximations in some cases even when the graph has loops. However, in other cases loopy BP may
perform poorly, have multiple fixed points, or fail to converge at all.
Many of the more recent varieties of approximate inference are framed explicitly as an optimization of local approximations over locally defined cost functions. Variational or free-energy based
approaches convert the problem of exact inference into the optimization of a free energy function
over the set of realizable marginal distributions M, called the marginal polytope [18]. Approximate
inference then corresponds to approximating the constraint set and/or energy function. Formally,
$$\max_{\mu \in \mathbb{M}} \mathbb{E}_\mu[\log P(X)] + H(\mu) \;\approx\; \max_{\mu \in \widehat{\mathbb{M}}} \mathbb{E}_\mu[\log P(X)] + \widehat{H}(\mu)$$
where H is the entropy of the distribution corresponding to μ. Since the solution μ may not correspond to the marginals of any consistent joint distribution, these approximate marginals are typically referred to as pseudomarginals. If both the constraints in M̂ and the approximate entropy Ĥ decompose
locally on the graph, the optimization process can be interpreted as a message-passing procedure,
and is often performed using fixed-point equations like those of BP.
Belief propagation can be understood in this framework as corresponding to an outer approximation M̂ ⊇ M enforcing local consistency and the Bethe approximation to H [4]. This viewpoint
provides a clear path to directly improve upon the properties of BP, leading to a number of different algorithms. For example, CCCP [9] and UPS [10] make the same approximations but use an
alternative, direct optimization procedure to ensure convergence. Fractional belief propagation [20]
corresponds to a more general Bethe-like approximation with additional parameters, which can be
modified to ensure that the cost function is convex and used with convergent algorithms [21]. A
special case includes tree-reweighted belief propagation [5], which both ensures convexity and provides an upper bound on the partition function Z. The approximation of M can also be improved
using cutting plane methods, which include additional, higher-order consistency constraints on the
pseudomarginals [6]. Other choices of local cost functions lead to alternative families of approximations [8].
Overall, these advances have provided significant improvements in the state of the art for approximate inference in discrete-valued systems. They provide increased flexibility, theoretical bounds on
the results of exact inference, and can provably increase the quality of the estimates. However, these
advances have not been carried over into the continuous domain.
For concreteness, in the rest of the paper we will use tree-reweighted belief propagation (TRW) [5]
as our inference method of choice, although the same ideas can be applied to any of the discussed
inference algorithms. As we will see shortly, the details specific to TRW are nicely encapsulated
and can be swapped out for those of another algorithm with minimal effort.
The fixed-point equations for TRW lead to a message-passing algorithm similar to BP, defined by
$$m_{x_s f_u}(x_s) \propto \frac{\prod_{f_v \in F_s} m_{f_v x_s}(x_s)^{\rho_v}}{m_{f_u x_s}(x_s)}, \qquad m_{f_u x_s}(x_s) \propto \sum_{X_u \setminus x_s} f_u(X_u)^{1/\rho_u} \prod_{x_t \in X_u \setminus x_s} m_{x_t f_u}(x_t) \qquad (2)$$
The parameters ρv are called edge weights or appearance probabilities. For TRW, the ρ are required to correspond to the fractional occurrence rates of the edges in some collection of tree-structured subgraphs of G. The choice of ρ affects the quality of the approximation; the tightest upper bound can be obtained via a convex optimization of ρ which computes the pseudomarginals as an inner loop.
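As a concrete reference point, the sketch below (plain Python with NumPy) implements the TRW fixed-point updates (2) for a small pairwise model; the 3-cycle, the uniform edge appearance probabilities ρ = 2/3 (valid for the uniform distribution over the triangle's spanning trees), and the factor values are illustrative assumptions, not anything from the paper's experiments.

import numpy as np

d = 2
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]                 # a single cycle
rho = {e: 2.0 / 3.0 for e in edges}              # edge appearance probabilities
psi_node = {s: np.ones(d) for s in nodes}        # uniform local factors
psi_edge = {e: np.array([[0.9, 0.1], [0.1, 0.9]]) for e in edges}

# messages m[(t, s)] from node t to node s, initialized uniformly
msgs = {(t, s): np.ones(d) for (a, b) in edges for (t, s) in [(a, b), (b, a)]}

def neighbors(s):
    nb = []
    for (a, b) in edges:
        if a == s: nb.append(b)
        elif b == s: nb.append(a)
    return nb

def key(s, t):
    return (s, t) if (s, t) in psi_edge else (t, s)

for _ in range(50):
    for (t, s) in list(msgs):
        e = key(t, s)
        # product of incoming messages raised to their edge weights, then
        # divided by the reverse message, per the TRW fixed point (2)
        incoming = psi_node[t].copy()
        for u in neighbors(t):
            incoming *= msgs[(u, t)] ** rho[key(u, t)]
        incoming /= msgs[(s, t)]
        pot = psi_edge[e] ** (1.0 / rho[e])
        if e != (t, s):
            pot = pot.T                           # orient as [x_t, x_s]
        new = pot.T.dot(incoming)                 # sum over x_t
        msgs[(t, s)] = new / new.sum()

beliefs = {}
for s in nodes:
    b = psi_node[s].copy()
    for t in neighbors(s):
        b *= msgs[(t, s)] ** rho[key(t, s)]
    beliefs[s] = b / b.sum()
print(beliefs)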
3 Continuous Random Variables
For continuous-valued random variables, many of these algorithms cannot be applied directly. In
particular, any reasonably fine-grained discretization produces a discrete variable whose domain size
d is quite large. The domain size is typically exponential in the dimension of the variable and the
complexity of the message-passing algorithms is O(nd^b), where n is the total number of variables
and b is the number of variables in the largest factor. Thus, the computational cost can quickly
become intractable even with pairwise factors over low dimensional variables. Our goal is to adapt
the algorithms of Section 2.2 to perform efficient approximate inference in such systems.
For time-series problems, in which G forms a chain, a classical solution is to use sequential Monte
Carlo approximations, generally referred to as particle filtering [22]. These methods use samples to
define an adaptive discretization of the problem with fine granularity in regions of high probability.
The stochastic nature of the discretization is simple to implement and enables probabilistic assurances of quality including convergence rates which are independent of the problem's dimensionality.
(In sufficiently few dimensions, deterministic adaptive discretizations can also provide a competitive
alternative, particularly if the factors are analytically tractable [23, 24].)
3.1 Particle Representations for Message-Passing
Particle-based approximations have been extended to loopy belief propagation as well. For example,
in the nonparametric belief propagation (NBP) algorithm [7], the BP messages are represented as
Gaussian mixtures and message products are approximated by drawing samples, which are then
smoothed to form new Gaussian mixture distributions. A key aspect of this approach is the fact that
the product of several mixtures of Gaussians is also a mixture of Gaussians, and thus can be sampled
from with relative ease. However, it is difficult to see how to extend this algorithm to more general
message-passing algorithms, since for example the TRW fixed point equations (2) involve ratios and
powers of messages, which do not have a simple form for Gaussian mixtures and may not even form
finitely integrable functions.
Instead, we adapt a recent particle belief propagation (PBP) algorithm [16] to work on the tree-reweighted formulation. In PBP, samples (particles) are drawn for each variable, and each message
is represented as a set of weights over the available values of the target variable. At a high level,
the procedure iterates between sampling particles from each variable's domain, performing inference
over the resulting discrete problem, and adaptively updating the sampling distributions. This process
is illustrated in Figure 1. Formally, we define a proposal distribution Ws (xs ) for each variable xs
such that Ws (xs ) is non-zero over the domain of xs . Note that we may rewrite the factor message
computation (2) as an importance reweighted expectation:
$$m_{f_u x_s}(x_s) \propto \mathbb{E}_{X_u \setminus x_s}\left[ f_u(X_u)^{1/\rho_u} \prod_{x_t \in X_u \setminus x_s} \frac{m_{x_t f_u}(x_t)}{W_t(x_t)} \right] \qquad (3)$$
Let us index the variables that are neighbors of factor $f_u$ as $X_u = \{x_{u_1}, \ldots, x_{u_b}\}$. Then, after sampling particles $\{x_s^{(1)}, \ldots, x_s^{(N)}\}$ from $W_s(x_s)$, we can index a particular assignment of particle values to the variables in $X_u$ with $X_u^{(\vec{j})} = [x_{u_1}^{(j_1)}, \ldots, x_{u_b}^{(j_b)}]$. We then obtain a finite-sample approximation of the factor message in the form
$$m_{f_u x_{u_k}}\left(x_{u_k}^{(j)}\right) \approx \frac{1}{N^{b-1}} \sum_{\vec{i}:\, i_k = j} f_u\left(X_u^{(\vec{i})}\right)^{1/\rho_u} \prod_{l \neq k} \frac{m_{x_{u_l} f_u}\left(x_{u_l}^{(i_l)}\right)}{W_{u_l}\left(x_{u_l}^{(i_l)}\right)} \qquad (4)$$
In other words, we construct a Monte Carlo approximation to the integral using importance weighted
samples from the proposal. Each of the values in the message then represents an estimate of the
continuous function (2) evaluated at a single particle. Observe that the sum is over N^(b-1) elements, and hence the complexity of computing an entire factor message is O(N^b); this could be made more efficient at the price of increased stochasticity by summing over a random subsample of the vectors $\vec{i}$.
Figure 1: Schematic view of particle-based inference. (1) Samples for each variable provide a dynamic discretization of the continuous space; (2) inference proceeds by optimization or message-passing in the discrete space; (3) the resulting local functions can be used to change the proposals Ws(·) and choose new sample locations for each variable.
Likewise, we compute variable messages and beliefs as simple point-wise products:
$$m_{x_s f_u}\left(x_s^{(j)}\right) \propto \frac{\prod_{f_v \in F_s} m_{f_v x_s}\left(x_s^{(j)}\right)^{\rho_v}}{m_{f_u x_s}\left(x_s^{(j)}\right)}, \qquad \hat{b}_s\left(x_s^{(j)}\right) \propto \prod_{f_v \in F_s} m_{f_v x_s}\left(x_s^{(j)}\right)^{\rho_v} \qquad (5)$$
This parallels the development in [16], except here we use factor weights $\vec{\rho}$ to compute messages according to TRW rather than standard loopy BP.
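To make the estimator concrete, the sketch below (plain Python with NumPy; the factor, proposal, and particle count are illustrative assumptions, not the paper's experimental settings) computes the importance-weighted factor-to-variable message of (4) for a pairwise factor, where the sum reduces to a single average over the neighbor's particles. Because the estimate can be evaluated at arbitrary points, the same function also serves the Rao-Blackwellized evaluation of Section 3.2 below.

import numpy as np

rng = np.random.default_rng(0)
N = 100
rho_u = 2.0 / 3.0                       # appearance probability of this factor

def f(x_t, x_s):                        # an attractive pairwise factor
    return np.exp(-(x_t - x_s) ** 2 / (2 * 0.5 ** 2))

# particles for the neighboring variable x_t, drawn from proposal W_t = N(0, 1)
x_t = rng.normal(0.0, 1.0, size=N)
W_t = np.exp(-x_t ** 2 / 2) / np.sqrt(2 * np.pi)   # proposal density at x_t
m_xt_fu = np.ones(N)                    # incoming variable-to-factor message

def factor_message(points):
    """Estimate m_{f_u -> x_s} of (4) at arbitrary points (Rao-Blackwellized)."""
    return np.array([np.mean(f(x_t, xs) ** (1.0 / rho_u) * m_xt_fu / W_t)
                     for xs in points])

x_s = rng.normal(0.0, 1.0, size=N)      # particles for the target variable
msg = factor_message(x_s)               # one message value per target particle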
Just as in discrete problems, it is often desirable to obtain estimates of the log partition function for
use in goodness-of-fit testing or model comparison. Our implementation of TRW-PBP gives us a
stochastic estimate of an upper bound on the true partition function. Using other message passing
approaches that fit into this framework, such as mean field, can provide a similar a lower bound.
These bounds provide a possible alternative to Monte Carlo estimates of marginal likelihood [25].
3.2 Rao-Blackwellized Estimates
Quantities about xs such as expected values under the pseudomarginal can be computed using the samples xs^(i). However, for any given variable node xs, the incoming messages to xs given in (4) are
defined in terms of the importance weights and sampled values of the neighboring variables. Thus,
we can compute an estimate of the messages and beliefs defined in (4)-(5) at arbitrary values of xs,
simply by evaluating (4) at that point. This allows us to perform Rao-Blackwellization, conditioning
on the samples at the neighbors of xs rather than using xs's samples directly.
Using this trick we can often get much higher quality estimates from the inference for small N . In
particular, if the variable state spaces are sufficiently small that they can be discretized (for example,
in 3 or fewer dimensions the discretized domain size d may be manageable) but the resulting factor
domain size, d^b, is intractably large, we can evaluate (4) on the discretized grid for only O(dN^(b-1)).
More generally, we can substitute a larger number of samples N' ≫ N with cost that grows only linearly in N'.
3.3 Resampling and Proposal Distributions
Another critical point is that the efficiency of this procedure hinges on the quality of the proposal
distributions Ws. Unfortunately, this forms a circular problem: W must be chosen to perform
inference, but the quality of W depends on the distribution and its pseudomarginals. This interdependence motivates an attempt to learn the sampling distributions in an online fashion, adaptively
updating them based on the results of the partially completed inference procedure. Note that this
procedure depends on the same properties as Rao-Blackwellized estimates: that we be able to compute our messages and beliefs at a new set of points given the message weights at the other nodes.
Both [15] and [16] suggest using the current belief at each iteration to form a new proposal distribution. In [15], parametric density estimates are formed using the message-weighted samples
at the current iteration, which form the sampling distributions for the next phase. In [16], a short
Metropolis-Hastings MCMC sequence is run at a single node, using the Rao-Blackwellized belief
estimate to compute an acceptance probability. A third possibility is to use a sampling/importance resampling (SIR) procedure, drawing a large number of samples, computing weights, and probabilistically retaining only N. In our experiments we draw samples from the current beliefs, as approximated by Rao-Blackwellized estimation over a fine grid of particles. For variables in more than 2 dimensions, we recommend the Metropolis-Hastings approach.

Figure 2: 2-D Ising model performance. L1 error for PBP (left) and TRW-PBP (center) for varying numbers of particles; (right) PBP and TRW-PBP juxtaposed to reveal the gap for high η.
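A minimal sketch of the SIR variant, assuming a one-dimensional variable and treating the belief as a black-box unnormalized density (in practice, the Rao-Blackwellized belief estimate): draw a large candidate set from the proposal, weight each candidate by the belief over the proposal density, and keep N probabilistically. The target and proposal below are illustrative placeholders.

import numpy as np

rng = np.random.default_rng(1)

def sir_resample(belief, draw, pdf, N, M=2000):
    """Draw M candidates from the proposal, retain N by importance weight."""
    cand = draw(M)
    w = belief(cand) / pdf(cand)          # unnormalized importance weights
    w /= w.sum()
    return cand[rng.choice(M, size=N, replace=True, p=w)]

# illustrative use: bimodal target belief, broad Gaussian proposal N(0, 2)
belief = lambda x: np.exp(-x ** 2 / 0.08) + np.exp(-(x - 1.0) ** 2 / 0.08)
particles = sir_resample(
    belief,
    draw=lambda m: rng.normal(0.0, 2.0, size=m),
    pdf=lambda x: np.exp(-x ** 2 / 8.0) / np.sqrt(8.0 * np.pi),
    N=100)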
4 Ising-like Models
The Ising model corresponds to a graphical model, typically a grid, over binary-valued variables with
pairwise factors. Originating in statistical physics, similar models are common in many applications
including image denoising and stereo depth estimation. Ising models are well understood, and
provide a simple example of how BP can fail and the benefits of more general forms such as TRW.
We initially demonstrate the behavior of our particle-based algorithms on a small (3 × 3) lattice
of binary-valued variables to compare with the exact discrete implementations, then show that the
same observed behavior arises in an analagous continuous-valued problem.
4.1 Ising model
Our factors consist of single-variable and pairwise functions, given by
$$f(x_s) = \begin{bmatrix} 0.5 & 0.5 \end{bmatrix}, \qquad f(x_s, x_t) = \begin{bmatrix} \eta & 1-\eta \\ 1-\eta & \eta \end{bmatrix} \qquad (6)$$
for η > 0.5. By symmetry, it is easy to see that the true marginal of each variable is uniform, [0.5 0.5].
However, around η ≈ 0.78 there is a phase transition; the uniform fixed point becomes unstable and several others appear, becoming more skewed toward one state or another as η increases. As the
strength of coupling in an Ising model increases, the performance of BP often degrades sharply,
while TRW is comparatively robust and remains near the true marginals [5].
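For reference, the factors of (6) and the lattice structure can be constructed as follows (a plain Python sketch; the node indexing scheme is an illustrative choice, not from the paper).

import numpy as np

eta = 0.8                                # coupling strength, varied in Figure 2
f_node = np.array([0.5, 0.5])
f_edge = np.array([[eta, 1.0 - eta],
                   [1.0 - eta, eta]])

def grid_edges(n):
    """Edges of an n x n lattice; node (i, j) is indexed as i * n + j."""
    out = []
    for i in range(n):
        for j in range(n):
            if j + 1 < n:
                out.append((i * n + j, i * n + j + 1))
            if i + 1 < n:
                out.append((i * n + j, (i + 1) * n + j))
    return out

edges = grid_edges(3)                    # 12 edges for the 3 x 3 lattice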
Figure 2 shows the performance of PBP and TRW-PBP on this model. Each data point represents
the median L1 error between the beliefs and the true marginals, across all nodes and 40 randomly
initialized trials, after 50 iterations. The left plot (BP) clearly shows the phase shift; in contrast,
the error of TRW remains low even for very strong interactions. In both cases, as N increases the
particle versions of the algorithms converge to their discrete equivalents.
4.2 Continuous grid model
The results for discrete systems, and their corresponding intuition, carry over naturally into continuous systems as well. To illustrate on an interpretable analogue of the Ising model, we use the same
graph structure but with real-valued variables, and factors given by:
$$f(x_s) = \exp\left(-\frac{x_s^2}{2\sigma_l^2}\right) + \exp\left(-\frac{(x_s-1)^2}{2\sigma_l^2}\right), \qquad f(x_s, x_t) = \exp\left(-\frac{|x_s - x_t|^2}{2\sigma_p^2}\right). \qquad (7)$$
Local factors consist of bimodal Gaussian mixtures centered at 0 and 1, while pairwise factors
encourage similarity using a zero-mean Gaussian on the distance between neighboring variables.
We set σl = 0.2 and vary σp analogously to η in the discrete model. Since all potentials are Gaussian
mixtures, the joint distribution is also a Gaussian mixture and can be computed exactly.
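The corresponding construction for the continuous model of (7) is equally direct (a sketch; the value of σp shown is an arbitrary placeholder, since σp is the quantity varied in the experiment).

import numpy as np

sigma_l = 0.2                            # as stated in the text
sigma_p = 0.5                            # placeholder; varied in the experiment

def f_node(x):
    # bimodal mixture of Gaussians centered at 0 and 1
    return (np.exp(-x ** 2 / (2 * sigma_l ** 2))
            + np.exp(-(x - 1.0) ** 2 / (2 * sigma_l ** 2)))

def f_edge(x_s, x_t):
    # zero-mean Gaussian on the distance between neighboring variables
    return np.exp(-np.abs(x_s - x_t) ** 2 / (2 * sigma_p ** 2))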
Figure 3: Continuous grid model performance. L1 error for PBP (left) and TRW-PBP (center) for varying numbers of particles; (right) PBP and TRW-PBP juxtaposed to reveal the gap for low σp.
Figure 3 shows the results of running PBP and TRW-PBP on the continuous grid model, demonstrating similar characteristics to the discrete model. The left panel reveals that our continuous grid
model also induces a phase shift in PBP, much like that of the Ising model. For sufficiently small
values of σp (large values on our transformed axis), the beliefs in PBP collapse to unimodal distributions with an L1 error of 1. In contrast, TRW-PBP avoids this collapse and maintains multi-modal distributions throughout; its primary source of error (0.2 at 500 particles) corresponds to overdispersed bimodal beliefs. This is expected in attractive models, in which BP tends to "overcount"
information leading to underestimates of variance; TRW removes some of this overcounting and
may overestimate uncertainty.
As mentioned in Section 3.1, we can use the results of TRW-PBP to compute an upper bound on the log partition function. We implement naive mean field within this same framework to achieve a lower bound as well. The resulting bounds, computed for a continuous grid model in which mean field collapses to a single mode, are shown in Figure 4. With sufficiently many particles, the values produced by TRW-PBP and MF inference bound the true value, as they should. With only 20 particles per variable, however, TRW-PBP occasionally fails and yields "upper bounds" below the true value. This is not surprising; the consistency guarantees associated with the importance-reweighted expectation take effect only when N is sufficiently large.

Figure 4: Bounds on the log partition function.
5 Sensor Localization
We also demonstrate the presence of these effects in a simulation of a real-world application. Sensor
localization considers the task of estimating the position of a collection of sensors in a network given
noisy estimates of a subset of the distances between pairs of sensors, along with known positions
for a small number of anchor nodes. Typical localization algorithms operate by optimizing to find
the most likely joint configuration of sensor positions. A classical model consists of (at a minimum)
three anchor nodes, and a Gaussian model on the noise in the distance observations.
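The pairwise factor implied by this model is simply a Gaussian likelihood on the discrepancy between the observed and implied distances; a minimal sketch follows (the noise scale sigma is an assumed placeholder, not a value from the paper).

import numpy as np

def distance_factor(pos_s, pos_t, d_obs, sigma=0.05):
    """Gaussian likelihood of an observed distance d_obs between two sensors."""
    d_hat = np.linalg.norm(np.asarray(pos_s) - np.asarray(pos_t))
    return np.exp(-(d_hat - d_obs) ** 2 / (2 * sigma ** 2))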
In [12], this problem is formulated as a graphical model and an alternative solution is proposed
using nonparametric belief propagation to perform approximate marginalization. A significant advantage of this approach is that by providing approximate marginals, we can estimate the degree
of uncertainty in the sensor positions. Gauging this uncertainty can be particularly important when
the distance information is sufficiently ambiguous that the posterior belief is multi-modal, since in
this case the estimated sensor position may be quite far from its true value. Unfortunately, belief
propagation is not ideal for identifying multimodality, since the model is essentially attractive. BP
may underestimate the degree of uncertainty in the marginal distributions and (as in the case of the
Ising-like models in the previous section) collapse into a single mode, providing beliefs which are
misleadingly overconfident.
Figure 5 shows a set of sensor configurations where this is the case. The distance observations
induce a fully connected graph; the edges are omitted for clarity. In this network the anchor nodes
are nearly collinear.

Figure 5: Sensor location belief at the target node. (a) Exact belief computed using importance sampling. (b) PBP collapses and represents only one of the two modes. (c) TRW-PBP overestimates the uncertainty around each mode, but represents both.

This induces a bimodal uncertainty about the locations of the remaining nodes: the configuration in which they are all reflected across the crooked line formed by the anchors is
nearly as likely as the true configuration. Although this example is anecdotal, it reflects a situation
which can arise regularly in practice [26].
Figure 5a shows the true marginal distribution for one node, estimated exhaustively using importance
sampling with 5 × 10^6 samples. It shows a clear bimodal structure: a slightly larger mode near the sensor's true location and a smaller mode at a point corresponding to the reflection. In this system
there is not enough information in the measurements to resolve the sensor positions. We compare
these marginals to the results found using PBP.
Figure 5b displays the Rao-Blackwellized belief estimate for one node after 20 iterations of PBP
with each variable represented by 100 particles. Only one mode is present, suggesting that PBP's beliefs have "collapsed," just as in the highly attractive Ising model. Examination of the other nodes' beliefs (not shown for space) confirms that all are unimodal distributions centered around
their reflected locations. It is worth noting that PBP converged to the alternative set of unimodal
beliefs (supporting the true locations) in about half of our trials. Such an outcome is only slightly
better; an accurate estimate of confidence is equally important.
The corresponding belief estimate generated by TRW-PBP is shown in Figure 5c. It is clearly
bimodal, with significant probability mass supporting both the true and reflected locations. Also,
each of the two modes is less concentrated than the belief in 5b. As with the continuous grid model
we see increased stability at the price of conservative overdispersion. Again, similar effects occur
for the other nodes in the network.
6 Conclusion
We propose a framework for extending recent advances in discrete approximate inference for application to continuous systems. The framework directly integrates reweighted message passing algorithms such as TRW into the lifted, discrete phase of PBP. Furthermore, it allows us to iteratively
adjust the proposal distributions, providing a discretization that adapts to the results of inference,
and allows us to use Rao-Blackwellized estimates to improve our final belief estimates.
We consider the particular case of TRW and show that its benefits carry over directly to continuous
problems. Using an Ising-like system, we argue that phase transitions exist for particle versions of
BP similar to those found in discrete systems, and that TRW significantly improves the quality of the
estimate in those regimes. This improvement is highly relevant to approximate marginalization for
sensor localization tasks, in which it is important to accurately represent the posterior uncertainty.
The flexibility in the choice of message passing algorithm makes it easy to consider several instantiations of the framework and use the one best suited to a particular problem. Furthermore, future
improvements in message-passing inference algorithms on discrete systems can be directly incorporated into continuous problems.
Acknowledgements: This material is based upon work partially supported by the Office of Naval
Research under MURI grant N00014-08-1-1015.
References
[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San Mateo, 1988.
[2] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. PAMI, 6(6):721-741, November 1984.
[3] M. Jordan, Z. Ghahramani, T. Jaakkola, and L. Saul. An introduction to variational methods for graphical models. Machine Learning, 37:183-233, 1999.
[4] J. Yedidia, W. Freeman, and Y. Weiss. Constructing free energy approximations and generalized belief propagation algorithms. Technical Report 2004-040, MERL, May 2004.
[5] M. Wainwright, T. Jaakkola, and A. Willsky. A new class of upper bounds on the log partition function. IEEE Trans. Info. Theory, 51(7):2313-2335, July 2005.
[6] D. Sontag and T. Jaakkola. New outer bounds on the marginal polytope. In NIPS 20, pages 1393-1400. MIT Press, Cambridge, MA, 2008.
[7] E. Sudderth, A. Ihler, W. Freeman, and A. Willsky. Nonparametric belief propagation. In CVPR, 2003.
[8] T. Minka. Divergence measures and message passing. Technical Report 2005-173, Microsoft Research Ltd, January 2005.
[9] A. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: convergent alternatives to belief propagation. Neural Comput., 14(7):1691-1722, 2002.
[10] Y.-W. Teh and M. Welling. The unified propagation and scaling algorithm. In NIPS 14, 2002.
[11] J. Gonzalez, Y. Low, and C. Guestrin. Residual splash for optimally parallelizing belief propagation. In Artificial Intelligence and Statistics (AISTATS), Clearwater Beach, Florida, April 2009.
[12] A. Ihler, J. Fisher, R. Moses, and A. Willsky. Nonparametric belief propagation for self-calibration in sensor networks. IEEE J. Select. Areas Commun., pages 809-819, April 2005.
[13] J. Schiff, D. Antonelli, A. Dimakis, D. Chu, and M. Wainwright. Robust message-passing for statistical inference in sensor networks. In IPSN, pages 109-118, April 2007.
[14] A. Globerson, D. Sontag, and T. Jaakkola. Approximate inference: How far have we come? (NIPS'08 Workshop), 2008. http://www.cs.huji.ac.il/~gamir/inference-workshop.html.
[15] D. Koller, U. Lerner, and D. Angelov. A general algorithm for approximate inference and its application to hybrid Bayes nets. In UAI 15, pages 324-333, 1999.
[16] A. Ihler and D. McAllester. Particle belief propagation. In AI & Statistics: JMLR W&CP, volume 5, pages 256-263, April 2009.
[17] F. Kschischang, B. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Trans. Info. Theory, 47(2):498-519, February 2001.
[18] M. Wainwright and M. Jordan. Graphical models, exponential families, and variational inference. Technical Report 629, UC Berkeley Dept. of Statistics, September 2003.
[19] S. L. Lauritzen and D. J. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society, Series B (Methodological), pages 157-224, 1988.
[20] W. Wiegerinck and T. Heskes. Fractional belief propagation. In NIPS 15, pages 438-445, 2003.
[21] T. Hazan and A. Shashua. Convergent message-passing algorithms for inference over general graphs with convex free energies. In UAI 24, pages 264-273, July 2008.
[22] M. S. Arulampalam, S. Maskell, N. Gordon, and T. Clapp. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Processing, 50(2):174-188, February 2002.
[23] J. Coughlan and H. Shen. Dynamic quantization for belief propagation in sparse spaces. Comput. Vis. Image Underst., 106(1):47-58, 2007.
[24] M. Isard, J. MacCormick, and K. Achan. Continuously-adaptive discretization for message-passing algorithms. In NIPS 21, pages 737-744, 2009.
[25] S. Chib. Marginal likelihood from the Gibbs output. JASA, 90(432):1313-1321, 1995.
[26] D. Moore, J. Leonard, D. Rus, and S. Teller. Robust distributed network localization with noisy range measurements. In 2nd Int'l Conf. on Embedded Networked Sensor Systems (SenSys'04), pages 50-61, 2004.
2,960 | 3,683 | Skill Discovery in Continuous Reinforcement
Learning Domains using Skill Chaining
Andrew Barto
Computer Science Department
University of Massachusetts Amherst
Amherst MA 01003 USA
[email protected]
George Konidaris
Computer Science Department
University of Massachusetts Amherst
Amherst MA 01003 USA
[email protected]
Abstract
We introduce a skill discovery method for reinforcement learning in continuous
domains that constructs chains of skills leading to an end-of-task reward. We
demonstrate experimentally that it creates appropriate skills and achieves performance benefits in a challenging continuous domain.
1 Introduction
Much recent research in reinforcement learning (RL) has focused on hierarchical RL methods [1]
and in particular the options framework [2], which adds to the RL framework principled methods
for planning and learning using high level-skills (called options). An important research goal is
the development of methods by which an agent can discover useful new skills autonomously, and
thereby construct its own high-level skill hierarchies. Although several methods exist for creating
new options in discrete domains, none are immediately extensible to, or have been successfully
applied in, continuous domains.
We introduce skill chaining, a skill discovery method for agents in continuous domains. Skill chaining produces chains of skills, each leading to one of a list of designated target events, where the list
can simply contain the end-of-episode event or more sophisticated heuristic events (e.g., intrinsically
interesting events [3]). The goal of each skill in the chain is to enable the agent to reach a state where
its successor skill can be successfully executed. We demonstrate experimentally that skill chaining
creates appropriate skills and achieves performance improvements in the Pinball domain.
2 Background and Related Work
An option, o, consists of three components [2]: an option policy, πo, giving the probability of executing each action in each state in which the option is defined; an initiation set indicator function, Io, which is 1 for states where the option can be executed and 0 elsewhere; and a termination condition, βo, giving the probability of option execution terminating in states where it is defined. The
options framework adds methods for planning and learning using options as temporally-extended
actions to the standard RL framework based on the Markov decision process (MDP) framework [4].
Options can be added to an agent's action repertoire alongside its primitive actions, and the agent
chooses when to execute them in the same way it chooses when to execute primitive actions.
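A minimal sketch of this interface in Python (the types and the availability test are illustrative, not a prescribed API):

from dataclasses import dataclass
from typing import Callable

@dataclass
class Option:
    policy: Callable[[object], object]       # pi_o: state -> action
    initiation: Callable[[object], bool]     # I_o: state -> 1 if executable here
    termination: Callable[[object], float]   # beta_o: state -> P(terminate)

def action_repertoire(state, primitives, options):
    """Options sit alongside primitive actions; only executable ones appear."""
    return list(primitives) + [o for o in options if o.initiation(state)]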
Methods for creating new options must determine when to create an option and how to define its
termination condition (skill discovery), how to expand its initiation set, and how to learn its policy.
Given an option reward function, policy learning can be viewed as just another RL problem. Creation
and termination are typically performed by the identification of option goal states, with an option
created to reach one of its goal states and terminate when it does so. The initiation set is then the set
of states from which a goal state can be reached. In previous research, option goal states have been
selected by a variety of methods, the most common relying on computing visit or reward statistics
over individual states to identify useful subgoals [5, 6, 7, 8]. Graph-based methods [9, 10, 11]
build a state graph and use its properties (e.g., local graph cuts [11]) to identify option goals. In
domains with factored state spaces, the agent may create options to change infrequently changing
variables [12, 13]. Finally, some methods extract options by exploiting commonalities in collections
of policies over a single state space [14, 15, 16, 17]. All of these methods compute some statistic over
individual states, in graphs derived from a set of state transitions, or rely on having state variables
with finitely many values. These properties are unlikely to easily generalize to continuous spaces,
where an agent may never see the same state twice.
We know of very little work on skill acquisition in continuous domains where the skills or action
hierarchy are not designed in advance. Mugan and Kuipers [18] use learned qualitatively-discretized
factored models of a continuous state space to derive options. This approach is restricted to domains
where learning such a model is appropriate and feasible. In Neumann et al. [19], an agent learns to
solve a complex task by sequencing motion templates. Both the template parameters and which templates to execute for each state are learned, although the agent?s choices are constrained. However,
the motion templates are parametrized policies designed specifically for the task.
The idea of arranging controllers so that executing one allows the next be executed is known in
robotics as pre-image backchaining or sequential composition [20]. In such work the controllers
and their pre-images (initiation sets) are typically given. Our work can be thought of as providing
the means for learning control policies (and their regions of stability) that are suitable for sequential
composition. The most recent relevant work in this line is by Tedrake [21], who builds a similar tree
to ours in the model-based control setting, where the controllers are locally valid LQR controllers
and their regions of stability (initiation sets) are computed using convex optimization. By contrast,
our work does not require a model and may find superior (optimized) policies but does not provide
formal guarantees.
3 Skill Discovery in Continuous Domains
In discrete domains, the primary reason for creating an option to reach a goal state is to make that
state prominent in learning: a state that may once have been difficult to reach can now be reached
using a single decision (to invoke the option). This effectively modifies the connectivity of the MDP
by connecting the option?s goal states to every state in its initiation set. Another reason for creating
options is transfer: if options are learned in an appropriate space they can be used in later tasks to
speed up learning. If the agent faces a sequence of tasks having the same state space, then options
learned in it are portable [14, 15, 16, 17]; if it faces a sequence of tasks having different but related
state spaces, then the options must be learned using features common to all the tasks [22].
In continuous domains, there is a further reason to create new options. An agent using function
approximation to solve a problem must necessarily obtain an approximate solution. Creating new
options that each have their own function approximator concentrated on a subset of the state space
may result in better overall policies by freeing the primary value function from having to simultaneously represent the complexities of the individual option value functions. Thus, skill discovery
offers an additional representational benefit in continuous domains. However, several difficulties
that are absent or less apparent in discrete domains become important in continuous domains.
Target regions. Most existing skill discovery methods identify a single state as an option target. In
continuous domains, where the agent may never see the same state twice, this must be generalized
to a target region. However, simply defining the target region as a neighborhood about a point will
not necessarily capture the goal of a skill. For example, many of the above methods generate target
regions that are difficult to reach: a too-small neighborhood may make the target nearly impossible
to reach; conversely, a too-large neighborhood may include regions that are not difficult to reach at
all. Similarly, we cannot easily compute statistics over state space regions without first describing
these regions, which is a nontrivial aspect of the problem.
Initiation sets. While in discrete domains it is common for an option?s initiation set to expand
arbitrarily as the agent learns a policy for successfully executing the option, this is not desirable in
continuous domains. In discrete domains without function approximation a policy to reach a subgoal
can always be represented exactly; in continuous domains (or even discrete domains with function
approximation), it may only be possible to represent such a policy locally. We are thus required to
determine the extent of a new option?s initiation set either analytically or through trial-and-error.
Representation. An option policy in both discrete and continuous domains should be able to consistently solve a simpler problem than the overall task using a simpler policy. A value table in a domain
with a finite state set is a relatively simple data structure, and updates to it take constant time. Thus,
in a discrete domain it is perfectly feasible to create a new value table for each learned option of the
same dimension as the task value table. In continuous domains with many variables, however, value
function approximation may require hundreds of even thousands of features to represent the overall
task?s value function, and updates are usually linear time. Therefore, ?lightweight? options that use
fewer features than needed to solve the overall problem are desirable in high-dimensional domains,
or when we may wish to create many skills.
Characterization. Şimşek and Barto [8] characterize useful subgoals as those likely to lie on a
solution path of the task the agent is facing. Options that are useful across a collection of problems
should have goals that have high probability of falling on the solution paths of some of those problems (although not necessarily the one the agent is currently solving). In a discrete domain where the
agent faces a finite number of tasks, one characterization of an option's usefulness may be obtained
by treating the MDP as a graph and computing the likelihood that its goal lies on a solution path.
Such a characterization is much more difficult in a continuous domain.
In the following section we develop an algorithm for skill discovery in continuous domains by
addressing these challenges.
4 Skill Chaining
Since a useful option lies on a solution path, it seems natural to first create an option to reach the
task's goal. The high likelihood that the option can only do so from a local neighborhood about this
region suggests a follow-on step: create an option to reach the states where the first option can be
successfully executed. This section describes skill chaining, a method that formalizes this intuition to
create chains of options to reach a given target event by repeatedly creating options to reach options
created earlier in the chain. First, we describe how to create an option given a target event.
4.1 Creating an Option to Trigger a Target Event
Given an episodic task defined over state space S with reward function R, assume we are given a
goal trigger function T defined over S that evaluates to 1 on states in the goal event and 0 otherwise.
To create an option oT to trigger T , i.e., to reach a state on which T evaluates to 1, we must define
oT's termination condition, reward function, and initiation set.
For oT's termination condition we simply use T. We set oT's reward function to R plus an option completion reward for triggering T. We can then use a standard RL algorithm to learn oT's policy, for example, using linear function approximation with a suitable set of basis functions to represent the option's value function. Obtaining oT's initiation set is more difficult because it should consist of the states from which executing oT succeeds in triggering T. We can treat this as a standard
classification problem, using as positive training examples states in which oT has been executed and
triggered T , and as negative training examples states in which it has been executed and failed to
trigger T . A classifier suited to a potentially non-stationary classification problem with continuous
features can be used to learn oT's initiation set.
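This construction can be summarized in a short sketch (plain Python; the class and method names are hypothetical, and the initiation-set test is a crude stand-in for the trained classifier described in Section 5.1):

class TargetOption:
    """Sketch of an option o_T created to trigger a target event T."""
    def __init__(self, trigger, task_reward, completion_reward=1.0):
        self.trigger = trigger                    # T: state -> bool
        self.task_reward = task_reward            # R(s, a, s')
        self.completion_reward = completion_reward
        self.positive, self.negative = [], []     # initiation-set examples

    def reward(self, s, a, s_next):
        """Task reward plus a completion bonus for triggering T."""
        bonus = self.completion_reward if self.trigger(s_next) else 0.0
        return self.task_reward(s, a, s_next) + bonus

    def beta(self, s):
        """Termination condition: terminate exactly when T fires."""
        return 1.0 if self.trigger(s) else 0.0

    def record(self, start_state, reached_goal):
        """Label start states of executions for the initiation-set classifier."""
        (self.positive if reached_goal else self.negative).append(start_state)

    def initiates(self, s):
        # crude stand-in for a trained classifier (assumes scalar states)
        return any(abs(s - p) < 0.1 for p in self.positive)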
4.2 Creating Skill Chains
Given an initial target event with trigger function T0 , which for the purposes of this discussion we
consider to be the indicator function of the goal region of task, the agent creates a chain of skills as
follows. First, the agent creates option oT0 to trigger T0 , learns a good policy for this option, and
obtains a good estimate, $\hat{I}_{T_0}$, of its initiation set. We then add event $T_1 = \hat{I}_{T_0}$ to the list of target events, so that when the agent first enters $\hat{I}_{T_0}$, it creates a new option $o_{T_1}$ whose goal is to trigger $T_1$. That is, the new option's termination function is set to the indicator function $\hat{I}_{T_0}$, and its reward function becomes the task's reward function plus an option completion reward for triggering $T_1$.
Repeating this procedure results in a chain of skills leading from any state in which the agent may
start to the task?s goal region as depicted in Figure 1.
Figure 1: An agent creates options using skill chaining. (a) First, the agent encounters a target
event and creates an option to reach it. (b) Entering the initiation set of this first option triggers the
creation of a second option whose target is the initiation set of the first option. (c) Finally, after
many trajectories the agent has created a chain of options to reach the original target.
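Built on the TargetOption sketch above, the event-handling core of skill chaining might look as follows (illustrative plumbing, not a literal transcription of the experimental implementation):

def skill_chaining_step(state, target_events, options, task_reward):
    """When a target event fires outside every existing initiation set,
    create an option for it and promote the new option's initiation set
    to a target event for a predecessor option."""
    for T in list(target_events):
        covered = any(o.initiates(state) for o in options)
        if T(state) and not covered:
            o = TargetOption(T, task_reward)
            options.append(o)                  # learn pi_o and I_o from here on
            target_events.append(o.initiates)  # next link in the chain
    return options, target_events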
Note that although the options lie on a chain, the decision to execute each option is part of the agent's
overall learning problem. Thus, they may not necessarily be executed sequentially; in particular, if
an agent has learned a better policy for some parts of the chain, it may learn to skip some options.
4.3 Creating Skill Trees
The procedure above can create more general structures than chains. More than one option may be
created to reach a target event if that event remains on the target event list after the first option is
created to reach it. Each "child" option then creates its own chain, resulting in a skill tree, depicted
in Figure 2. This will most likely occur when there are multiple solution trajectories (e.g., when
the agent has multiple start states), or when noise or exploration create multiple segments along a
solution path that cannot be covered by just one option.
Figure 2: (a) A skill chaining agent in an environment with multiple start states and two initial target
events. (b) When the agent initially encounters target events it creates options to trigger them. (c)
The initiation sets of these options then become target events, later triggering the creation of new
options so that the agent eventually creates a skill tree covering all solution trajectories.
To control the branching factor of this tree, we need to place three further conditions on option
creation. First, we do not create a new option when a target event is triggered from a state already
in the initiation set of an option targeting that event. Second, we require that the initiation set of
an option does not overlap that of its siblings or parents. (Note that although these conditions seem
computationally expensive, they can be implemented using at most one execution of each initiation
set classifier per visited state, which is required for action selection anyway). Finally, we may find
it necessary to set a limit on the branching factor of the tree by removing a target event once it has
some number of options targeting it.
4.4 More General Target Events
Although we have assumed that triggering the task?s end-of-episode event is the only initial target
event, we are free to start with any set of target events. We may thus include measures of novelty
or other intrinsically motivating events [3] as triggers, events that are interesting for domain-specific
reasons (e.g., physically meaningful events for a robot), or more general skill discovery techniques
that can identify regions of interest before the goal is reached.
5 The Pinball Domain
Our experiments use two instances of the Pinball domain, shown in Figure 3.¹ The goal is to maneuver the small ball (which always starts in the same place in the first instance, and one of two places in the second) into the large red hole. The ball is dynamic (drag coefficient 0.995), so its state is described by four variables: x, y, ẋ and ẏ. Collisions with obstacles are fully elastic and cause the ball to bounce, so rather than merely avoiding obstacles the agent may choose to use them to efficiently reach the hole. There are five primitive actions: incrementing or decrementing ẋ or ẏ by a small amount (which incurs a reward of −5 per action), or leaving them unchanged (which incurs a reward of −1 per action); reaching the goal obtains a reward of 10,000.
Figure 3: The two Pinball Domain instances used for our experiments.
The Pinball domain is an appropriate continuous domain for skill discovery because its dynamic
aspects, sharp discontinuities, and extended dynamic control characteristics make it difficult for
control and function approximation, much more difficult than a simple navigation task, or typical
benchmarks like Acrobot. While a solution with a flat learning system is possible, there is scope for
acquiring skills that could result in a better solution.
5.1 Implementation Details
To learn to solve the overall task for both standard and option-learning agents, we used Sarsa (γ = 1, ε = 0.01) with linear function approximation, using a 4th-order Fourier basis [23] (625 basis functions per action) with α = 0.001 for the first instance and a 5th-order Fourier basis (1296 basis functions per action) with α = 0.0005 for the second (in both cases α was systematically varied and the best performing value used). Option policy learning was accomplished using Q-learning (αo = 0.0005, γ = 1, ε = 0.01) with a 3rd-order Fourier basis (256 basis functions per action).
Off-policy updates to an option for states outside its initiation set were ignored (because its policy
does not need to be defined in those states), as were updates from unsuccessful on-policy trajectories
(because their start states were then removed from the initiation set).
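For reference, the full Fourier basis of [23] can be sketched in a few lines; a 4th-order basis over the four Pinball state variables yields (4 + 1)^4 = 625 features, matching the count quoted above. The sketch assumes the state has already been rescaled to [0, 1]^4.

import itertools
import numpy as np

def fourier_basis(order, dim):
    """Full Fourier basis of a given order on [0, 1]^dim, as in [23]."""
    coeffs = np.array(list(itertools.product(range(order + 1), repeat=dim)))
    def features(x):
        return np.cos(np.pi * coeffs.dot(np.asarray(x, dtype=float)))
    return features

phi = fourier_basis(order=4, dim=4)      # (4 + 1)^4 = 625 features
print(phi([0.2, 0.5, 0.1, 0.9]).shape)   # -> (625,)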
To initialize the option's policy before attempting to learn its initiation set, a newly created option was first allowed a "gestation period" of 10 episodes where it could not be executed and its policy was updated using only off-policy learning. After its gestation period, the option was added to the agent's action repertoire. For new option o, this requires expanding the overall action-value function
Q to include o and assigning appropriate initial values to Q(s, o). We therefore sampled the Q values
of transitions that triggered the option's target event during its gestation, and initialized Q(s, o) to
¹Java source code for Pinball can be downloaded at http://www-all.cs.umass.edu/~gdk/pinball
the maximum of these values. This reliably resulted in an optimistic but still fairly accurate initial
value that encouraged the agent to execute the option.
Each option's initiation set was learned by a logistic regression classifier, initialized to be true everywhere, using 2nd-order polynomial features, learning rate α = 0.1 and 100 sweeps per new data
point. When the agent executed the option, states on trajectories that reached its goal within 250
steps were used as positive examples, and the start states of trajectories that did not were used as
negative examples. We considered an option's initiation set learned well enough to be added to the
list of target events when its weights changed on average less than 0.15 per episode for two consecutive episodes. Since the Pinball domain has such strong discontinuities, to avoid over-generalization
after this learning period we additionally constrained the initiation set to contain only points within
a Euclidean distance of 0.1 of a positive example. We used a maximum branching factor of 3.
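A sketch of such a classifier (plain Python with NumPy; the feature map and the optimistic initialization are simplified stand-ins for the exact implementation):

import numpy as np

def poly2_features(s):
    """Bias, linear, and 2nd-order polynomial terms of a state vector."""
    s = np.asarray(s, dtype=float)
    quad = np.outer(s, s)[np.triu_indices(len(s))]
    return np.concatenate(([1.0], s, quad))

class InitiationSet:
    def __init__(self, dim, alpha=0.1):
        self.w = np.zeros(len(poly2_features(np.zeros(dim))))
        self.w[0] = 5.0                  # optimistic start: "true everywhere"
        self.alpha = alpha

    def prob(self, s):
        return 1.0 / (1.0 + np.exp(-self.w.dot(poly2_features(s))))

    def update(self, s, label, sweeps=100):
        x = poly2_features(s)
        for _ in range(sweeps):          # repeated sweeps per new data point
            pred = 1.0 / (1.0 + np.exp(-self.w.dot(x)))
            self.w += self.alpha * (label - pred) * x   # logistic gradient step

    def contains(self, s):
        return self.prob(s) > 0.5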
6 Results
Figure 4(a) shows the performance (averaged over 100 runs) in the first Pinball instance for agents
using a flat policy (without options) against agents employing skill chaining, and agents using given
(pre-learned) options that were obtained using skill chaining over 250 episodes in the same task.
Figure 4: (a) Performance in the first Pinball instance (averaged over 100 runs) for agents employing
skill chaining, agents with given options, and agents without options. (b) A good example solution
to the first Pinball instance, showing the acquired options executed along the sample trajectory in
different colors. Primitive actions are in black.
Figure 4(a) shows that the skill chaining agents performed significantly better than flat agents by
50 episodes, and went on to obtain consistently good solutions by 250 episodes, whereas the flat
agents did much worse and were less consistent. Agents that started with given options did very
well initially, with an initial episode return far greater than the average solution eventually learned
by agents without options, and proceeded quickly to the same quality of solution as the agents that
discovered their options. This shows that the options themselves, and not the process of acquiring
them, were responsible for the increase in performance.
Figure 4(b) shows a sample solution trajectory from an agent performing skill chaining in the first
Pinball instance, with the options executed shown in different colors. The figure illustrates that this
agent discovered options corresponding to simple, efficient policies covering segments of the sample
trajectory. It also illustrates that in some places (in this case, the beginning of the trajectory) the
agent learned to bypass a learned option: the black portions of the trajectory show where the agent
employed primitive actions rather than a learned option. In some cases this occurred because poor
policies were learned for those options. In this particular case, the presence of other options freed the
overall policy (using a more complex function approximator) to represent the remaining trajectory
segment better than could an option (with its less complex function approximator). Figure 5 shows
the initiation sets and three sample trajectories from the options used in the trajectory shown in
Figure 4(b). These learned initiation sets show that the discovered option policies are only locally
valid, even though they are represented using Fourier basis functions, which have global support.
Figure 5: Initiation sets and sample policy trajectories for the options used in Figure 4(b). Each
initiation set is shown using a density plot, with lightness increasing proportionally to the number
of points in the set for a given (x, y) coordinate, with ẋ and ẏ sampled over {−1, −1/2, 0, 1/2, 1}.
Figures 6, 7 and 8 show similar results for the second Pinball instance, although Figure 6 shows a
slight and transient initial penalty for skill chaining agents, before they go on to obtain far better and
more consistent solutions than flat agents. The example trajectory in Figure 7 and initiation sets in
Figure 8 show portions of a successfully formed skill tree.
Figure 6: Performance in the second Pinball instance (averaged over 100 runs) for agents employing
skill chaining, agents with given options, and agents without options.
Figure 7: Good solutions to the second Pinball experimental domain, showing the acquired options
executed along the sample trajectory in different colors. Primitive actions are shown in black.
Figure 8: Initiation sets and sample trajectories for the options used in Figure 7.
7 Discussion and Conclusions
The performance gains demonstrated in the previous section show that skill chaining (at least using
an end-of-episode target event) can significantly improve the performance of an RL agent in a challenging continuous domain, by breaking the solution into subtasks and learning lower-order option
policies for each one.
Further benefits could be obtained by including more sophisticated initial target events: any indicator
functions could be used in addition to the end-of-episode event. We expect that methods that identify
regions likely to lie on the solution trajectory before a solution is found will result in the kinds of
early performance gains sometimes seen in discrete skill discovery methods (e.g., [11]).
The primary benefit of skill chaining is that it reduces the burden of representing the task's value
function, allowing each option to focus on representing its own local value function and thereby
achieving a better overall solution. This implies that skill acquisition is best suited to high-dimensional problems where a single value function cannot be well represented using a feasible
number of basis functions in reasonable time. In tasks where a good solution can be well represented using a low-order function approximator, we do not expect to see any benefits when using
skill chaining.
Similar benefits may be obtainable using representation discovery methods [24], which construct
basis functions to compactly represent complex value functions. We expect that such methods will
prove most effective for extended control problems when combined with skill acquisition, where
they can tailor a separate representation for each option rather than for the entire problem.
In this paper we used "lightweight" function approximators to represent option value functions. In
domains such as robotics where the state space may contain thousands of state variables, we may
require a more sophisticated approach that takes advantage of the notion that although the entire task
may not be reducible to a feasibly sized state space, it is often possible to split it into subtasks that
are. One such approach is abstraction selection [25, 26], where an agent uses sample trajectories
(as obtained during gestation) to select an appropriate abstraction for a new option from a library of
candidate abstractions, potentially resulting in a much easier learning problem.
We conjecture that the ability to discover new skills, and for each skill to employ its own abstraction,
will prove a key advantage of hierarchical reinforcement learning as we try to scale up to extended
control problems in high-dimensional spaces.
Acknowledgments
We thank Jeff Johns, Özgür Şimşek, and our reviewers for their helpful input. Andrew Barto was
supported by the Air Force Office of Scientific Research under grant FA9550-08-1-0418.
References
[1] A.G. Barto and S. Mahadevan. Recent advances in hierarchical reinforcement learning. Discrete Event
Systems, 13:41–77, 2003. Special Issue on Reinforcement Learning.
[2] R.S. Sutton, D. Precup, and S.P. Singh. Between MDPs and semi-MDPs: A framework for temporal
abstraction in reinforcement learning. Artificial Intelligence, 112(1-2):181–211, 1999.
[3] S. Singh, A.G. Barto, and N. Chentanez. Intrinsically motivated reinforcement learning. In Proceedings
of the 18th Annual Conference on Neural Information Processing Systems, 2004.
[4] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA,
1998.
[5] B.L. Digney. Learning hierarchical control structures for multiple tasks and changing environments. In
From Animals to Animats 5: Proceedings of the Fifth International Conference on Simulation of Adaptive
Behavior. MIT Press, 1998.
[6] A. McGovern and A.G. Barto. Automatic discovery of subgoals in reinforcement learning using diverse
density. In Proceedings of the 18th International Conference on Machine Learning, pages 361–368, 2001.
[7] Ö. Şimşek and A.G. Barto. Using relative novelty to identify useful temporal abstractions in reinforcement
learning. In Proceedings of the 21st International Conference on Machine Learning, pages 751–758, 2004.
[8] Ö. Şimşek and A.G. Barto. Skill characterization based on betweenness. In Advances in Neural Information Processing Systems 22, 2009.
[9] I. Menache, S. Mannor, and N. Shimkin. Q-cut: dynamic discovery of sub-goals in reinforcement learning. In Proceedings of the 13th European Conference on Machine Learning, pages 295–306, 2002.
[10] S. Mannor, I. Menache, A. Hoze, and U. Klein. Dynamic abstraction in reinforcement learning via
clustering. In Proceedings of the 21st International Conference on Machine Learning, pages 560–567, 2004.
[11] Ö. Şimşek, A.P. Wolfe, and A.G. Barto. Identifying useful subgoals in reinforcement learning by local
graph partitioning. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[12] B. Hengst. Discovering hierarchy in reinforcement learning with HEXQ. In Proceedings of the 19th
International Conference on Machine Learning, pages 243–250, 2002.
[13] A. Jonsson and A.G. Barto. A causal approach to hierarchical decomposition of factored MDPs. In
Proceedings of the 22nd International Conference on Machine Learning, 2005.
[14] S. Thrun and A. Schwartz. Finding structure in reinforcement learning. In Advances in Neural Information
Processing Systems, volume 7, pages 385–392. The MIT Press, 1995.
[15] D.S. Bernstein. Reusing old policies to accelerate learning on new MDPs. Technical Report UM-CS-1999-026, Department of Computer Science, University of Massachusetts at Amherst, April 1999.
[16] T.J. Perkins and D. Precup. Using options for knowledge transfer in reinforcement learning. Technical
Report UM-CS-1999-034, Department of Computer Science, University of Massachusetts Amherst, 1999.
[17] M. Pickett and A.G. Barto. Policyblocks: An algorithm for creating useful macro-actions in reinforcement
learning. In Proceedings of the 19th International Conference on Machine Learning, pages 506–513, 2002.
[18] J. Mugan and B. Kuipers. Autonomously learning an action hierarchy using a learned qualitative state
representation. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, 2009.
[19] G. Neumann, W. Maass, and J. Peters. Learning complex motions by sequencing simpler motion templates. In Proceedings of the 26th International Conference on Machine Learning, 2009.
[20] R.R. Burridge, A.A. Rizzi, and D.E. Koditschek. Sequential composition of dynamically dextrous robot
behaviors. International Journal of Robotics Research, 18(6):534–555, 1999.
[21] R. Tedrake. LQR-Trees: Feedback motion planning on sparse randomized trees. In Proceedings of
Robotics: Science and Systems, 2009.
[22] G.D. Konidaris and A.G. Barto. Building portable options: Skill transfer in reinforcement learning. In
Proceedings of the 20th International Joint Conference on Artificial Intelligence, 2007.
[23] G.D. Konidaris and S. Osentoski. Value function approximation in reinforcement learning using the
Fourier basis. Technical Report UM-CS-2008-19, Department of Computer Science, University of Massachusetts Amherst, June 2008.
[24] S. Mahadevan. Learning representation and control in Markov Decision Processes: New frontiers. Foundations and Trends in Machine Learning, 1(4):403–565, 2009.
[25] G.D. Konidaris and A.G. Barto. Sensorimotor abstraction selection for efficient, autonomous robot skill
acquisition. In Proceedings of the 7th IEEE International Conference on Development and Learning,
2008.
[26] G.D. Konidaris and A.G. Barto. Efficient skill learning using abstraction selection. In Proceedings of the
21st International Joint Conference on Artificial Intelligence, July 2009.
Manifold Regularization for SIR with Rate Root-n
Convergence
Wei Bian
School of Computer Engineering
Nanyang Technological University
Singapore, 639798
[email protected]
Dacheng Tao
School of Computer Engineering
Nanyang Technological University
Singapore, 639798
[email protected]
Abstract
In this paper, we study the manifold regularization for the Sliced Inverse Regression (SIR). The manifold regularization improves the standard SIR in two aspects:
1) it encodes the local geometry for SIR and 2) it enables SIR to deal with transductive and semi-supervised learning problems. We prove that the proposed graph
Laplacian based regularization is convergent at rate root-n. The projection directions of the regularized SIR are optimized by using a conjugate gradient method
on the Grassmann manifold. Experimental results support our theory.
1 Introduction
Sliced inverse regression (SIR) [7] was proposed for sufficient dimension reduction. In a regression
setting, with the predictors X and the response Y, the sufficient dimension reduction (SDR) subspace B is defined by the conditional independence $Y \perp X \mid B^T X$. Under the assumption that the
distribution of X is elliptically symmetric [7], it has been proved that the SDR subspace B is related
to the inverse regression curve E(X|Y). It can be estimated at least partially by a generalized eigendecomposition between the covariance matrix of the predictors Cov(X) and the covariance matrix of
the inverse regression curve Cov(E(X|Y)). When Y is a continuous random variable, it is discretized
by slicing its range into several slices so as to estimate E(X|Y) empirically. This procedure reflects
the name of SIR.
For practical applications, the elliptic symmetric assumption on P (X) in SIR cannot be fully satisfied, because many real datasets are embedded on manifolds [1]. Therefore, SIR cannot select an
efficient subspace for predicting the response Y because the local geometry of the predictors X is
ignored. Additionally, SIR only utilizes labeled (given response) data (predictors). Thus, it is valuable to extend SIR to deal with transductive and semi-supervised learning problems by considering
unlabelled samples.
We solve the above two problems of SIR by using the manifold regularization [2], which has been
developed to incorporate the local geometry in learning classification or regression functions. In
this paper, we utilize it to preserve the local geometry of predictors in learning the SDR subspace
B. In addition, it helps SIR to solve transductive/semi-supervised learning problems because the
regularization encodes the marginal distribution of the unlabelled predictors.
Different regularizations for SIR have been well studied, e.g., the non-singular regularization [14],
the ridge regularization [9], and the sparse regularization [8]. However, all existing regularizations
do not encode the local geometry of the predictors. Although the localized sliced inverse regression
[12] considers the local geometry, it is heuristic and does not follow up the regularization framework.
The rest of the paper is organized as follows. Section 2 presents the manifold regularization for
SIR. Section 3 proves the convergence of the new manifold regularization. We discuss the optimization algorithm of the regularized SIR by using the conjugate gradient method on the Grassmann
manifold in Section 4. Section 5 presents the experimental results on synthetic and real datasets.
Section 6 concludes this paper.
2 Manifold Regularization for SIR
In the rest of the paper, we use terminologies in regression and deem classification as regression
with the category response. Upper case letters $X \in \mathbb{R}^p$ and $Y \in \mathbb{R}$ are respectively the predictors
and the response, and lower case letters x and y are corresponding realizations. Given a sample
set containing $n_l$ labeled samples $\{x_i, y_i\}_{i=1}^{n_l}$ and $n_u$ unlabeled samples $\{x_i\}_{i=n_l+1}^{n=n_l+n_u}$, we seek an
optimal k-dimensional subspace spanned by $B = [\beta_1, \ldots, \beta_k]$ such that the response Y is predictable
from the projected predictors $B^T X$. We also use the matrix $X = [x_1, x_2, \ldots, x_n]$ to denote all predictors
in the sample set.
2.1 Sliced Inverse Regression
Suppose the response Y is predictable with a sufficient k-dimensional projection of the original
predictors X. We can consider the following regression model [7].
$$Y = f\left(\beta_1^T X,\, \beta_2^T X,\, \ldots,\, \beta_k^T X,\, \varepsilon\right) \qquad (1)$$
where the $\beta$'s are linearly independent projection vectors and $\varepsilon$ is the independent noise. Given a set
of samples $\{x_i, y_i\}_{i=1}^{n_l}$, SIR estimates the projection subspace $B = [\beta_1, \ldots, \beta_k]$ via the following steps:
discretize Y by slicing its range into H slices; calculate the sample frequency $f_h$ of Y falling into the
h-th slice and the sample estimate of the conditional mean $\bar{X}_h = E(X \mid Y = h)$; estimate the mean
$\bar{X}$ and covariance matrix $\Sigma$ of the predictors X; calculate the matrix $\Gamma = \sum_h f_h (\bar{X}_h - \bar{X})(\bar{X}_h - \bar{X})^T$;
and B is finally obtained by using the generalized eigen-decomposition $\Gamma \beta = \lambda \Sigma \beta$. It can be proved
that the generalized eigen-decomposition is equivalent to the following optimization,
$$\max_B \; \mathrm{trace}\left(\left(B^T \Sigma B\right)^{-1} B^T \Gamma B\right). \qquad (2)$$
We refer to (2) as the objective function of SIR; it is on this objective that we impose the manifold regularization.
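To make the procedure above concrete, here is a minimal numpy sketch of SIR: slice the response, form $\Gamma$ from the slice means, and solve the generalized eigen-problem. The quantile-based slicing is one common choice (the paper only requires slicing the range of Y), and all names are illustrative assumptions.

import numpy as np
from scipy.linalg import eigh

def sir_directions(X, y, H=10, k=2):
    # X: n x p predictors; y: length-n responses.
    n, p = X.shape
    Xbar = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    edges = np.quantile(y, np.linspace(0, 1, H + 1))
    idx = np.clip(np.digitize(y, edges[1:-1]), 0, H - 1)  # slice index per sample
    Gamma = np.zeros((p, p))
    for h in range(H):
        mask = idx == h
        if not mask.any():
            continue
        d = X[mask].mean(axis=0) - Xbar          # Xbar_h - Xbar
        Gamma += mask.mean() * np.outer(d, d)    # weight by sample frequency f_h
    # Generalized eigen-decomposition Gamma b = lambda Sigma b;
    # the top-k eigenvectors span the estimated subspace B.
    vals, vecs = eigh(Gamma, Sigma)
    return vecs[:, np.argsort(vals)[::-1][:k]]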
Remark 2.1 Another way to get the objective (2) is based on the least square formulation for SIR
proposed in [3],
$$\min_B\; L(B, C) = \sum_{h=1}^{H} f_h \left(\bar{X}_h - \bar{X} - \Sigma B C_h\right)^T \Sigma^{-1} \left(\bar{X}_h - \bar{X} - \Sigma B C_h\right) \qquad (3)$$
where $C = [C_1, C_2, \ldots, C_H]$ are auxiliary variables. Eliminating $C_h$ by setting the partial derivative
$\partial L / \partial C_h = 0$, we obtain (2) directly. Additionally, (2) shows that SIR has an objective similar to that of linear discriminant analysis, although the two are obtained from different views of discriminative dimension reduction.
2.2 Manifold Regularization for SIR
Each dimension reduction projection $\beta$ can be deemed a linear function or mapping $g(x) = \beta^T x$. We expect to preserve the local geometry of the distribution of the predictors X under the
mapping g(x). Supposing the predictors X are embedded on a manifold M, this can be achieved by
penalizing the gradient $\nabla_M g$ along the manifold M. Because we are dealing with random variables
with distribution P(X), the following formulation applies:
$$R = \int_{X \in M} \left\|\nabla_M g\right\|^2 \, dP(X). \qquad (4)$$
The above formulation differs from the original manifold regularization [2] in that the function g(x) here is a dimension reduction mapping, whereas in [2] it is a classification or regression
function. Usually, both the manifold and the marginal distribution of X are unknown. It is
well established in manifold learning, however, that the regularization (4) can be approximated
using the graph Laplacian associated with the labeled and unlabeled samples $\{x_i\}_{i=1}^{n=n_l+n_u}$.
Construct an adjacency graph for $\{x_i\}_{i=1}^{n=n_l+n_u}$, where the pairwise edge weight $(W)_{ij} = \kappa(\|x_i - x_j\|)$ is defined by the kernel function $\kappa(\cdot)$, e.g., the heat kernel $\kappa(d) = \exp(-d^2)$. The
associated graph Laplacian is $L = D - W$, where $D$ is the diagonal matrix given by $D_{ii} = \sum_j W_{ij}$. Thus, the regularization in (4) can be approximated by $R = g^T L g$, where
$g = [\beta^T x_1, \ldots, \beta^T x_n]^T$. Furthermore, because there are k independent projections $B = [\beta_1, \ldots, \beta_k]$,
we take the summation of the k regularizations
$$R = \sum_{i=1}^{k} g_i^T L g_i = \mathrm{trace}\left(G^T L G\right) \qquad (5)$$
where $G = [g_1, \ldots, g_k]$.
In manifold learning, it is suggested to use the normalized graph Laplacian $D^{-1/2} L D^{-1/2}$ in place of $L$, or to use the equivalent constraint $G^T D G = I$, to obtain better performance [1]; moreover, the
solution obtained with the normalized graph Laplacian is consistent under weaker conditions than
the unnormalized one [13]. In the proposed regularized SIR, we normalize the regularization
(5) as $R = \mathrm{trace}\left(\left(G^T D G\right)^{-1} G^T L G\right)$, which is equivalent to the constraint $G^T D G = I$.
This normalization makes R invariant to scalar and rotation transformations of the projections
$B = [\beta_1, \ldots, \beta_k]$, which is preferred for dimension reduction problems. By adding the regularization
$R = \mathrm{trace}\left(\left(G^T D G\right)^{-1} G^T L G\right)$ to SIR (2), and substituting $G = X^T B$, we get the regularized SIR
$$\max_B \; \mathrm{SIR}_r(B) = \mathrm{trace}\left(\left(B^T \Sigma B\right)^{-1} B^T \Gamma B\right) - \lambda\, \mathrm{trace}\left(\left(B^T S B\right)^{-1} B^T Q B\right) \qquad (6)$$
where $Q = \frac{1}{n(n-1)} X L X^T$, $S = \frac{1}{n(n-1)} X D X^T$, and $\lambda$ is a positive weighting factor.
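As an illustration of how the data-dependent matrices in (6) can be formed, here is a small numpy sketch. The heat-kernel bandwidth sigma is an assumption (the paper writes the kernel without an explicit bandwidth), and all names are illustrative.

import numpy as np

def manifold_regularizer_terms(X_all, sigma=1.0):
    # X_all: p x n matrix of labeled + unlabeled predictors (as columns).
    n = X_all.shape[1]
    sq = ((X_all[:, :, None] - X_all[:, None, :]) ** 2).sum(axis=0)
    W = np.exp(-sq / sigma ** 2)          # heat-kernel edge weights
    D = np.diag(W.sum(axis=1))            # degree matrix
    L = D - W                             # graph Laplacian
    Q = X_all @ L @ X_all.T / (n * (n - 1))
    S = X_all @ D @ X_all.T / (n * (n - 1))
    return Q, S

def regularized_sir_objective(B, Sigma, Gamma, Q, S, lam):
    # SIR_r(B) from Eq. (6).
    inv1 = np.linalg.inv(B.T @ Sigma @ B)
    inv2 = np.linalg.inv(B.T @ S @ B)
    return np.trace(inv1 @ (B.T @ Gamma @ B)) - lam * np.trace(inv2 @ (B.T @ Q @ B))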
3 Convergence of the Regularization
Different from the existing regularizations [8,9,14] for SIR, which are constructed as deterministic
terms, the manifold regularization in (6) is a random term that involves two data dependent variables
(matrices) Q and S. Therefore, it is necessary to discuss the convergence property of the proposed
manifold regularization.
It has been well proved that both the sample estimates of $\Sigma$ and $\Gamma$ converge at rate root-n [7,11,15]. Therefore, the convergence rate of the objective (6) depends on whether the regularization term converges at rate
root-n. Below, we prove that both the sample-based estimates $Q = \frac{1}{n(n-1)} X L X^T$ and
$S = \frac{1}{n(n-1)} X D X^T$ converge to deterministic matrices at rate root-n. Note that the convergence of a special case, where the graph Laplacian is built with the kernel function $\kappa(d) = \mathbf{1}(d < \epsilon)$,
was proved in [6]. Our proof scheme, however, is quite different from the one used in [6]. Additionally, we
target a general choice of kernel $\kappa(\cdot)$ and also prove the root-n convergence rate, which had not been
obtained before.
Although the samples $\{x_i\}_{i=1}^{n=n_l+n_u}$ are independent, the dependency of L and D on the samples means
that Q and S cannot be expanded as sums of independent terms. Therefore, it is difficult to
apply the law of large numbers and the central limit theorem to prove the convergence and obtain
the corresponding convergence rate. Instead, we construct the limits directly and show that the
variance of the sample-based estimates around these limits decays at rate root-n. Throughout this
section, we assume the following conditions hold.
Conditions 3.1 The kernel function $\kappa(d)$ satisfies $\kappa(0) = 1$ and $|\kappa(d)| \le 1$. For the distribution
of predictors P(X), the fourth-order moment exists, i.e., $E\left[\left(\mathrm{vec}(x x^T)\right)^T \mathrm{vec}(x x^T)\right] < \infty$,
where $\mathrm{vec}(\cdot)$ vectorizes a matrix into a column vector.
We start by splitting Q into two parts T1 and T2 ,
$$Q = \frac{1}{n(n-1)} X L X^T = \frac{1}{n(n-1)}\sum_{i=1}^{n}(D_{ii} - W_{ii})\, x_i x_i^T - \frac{1}{n(n-1)}\sum_{i \neq j} W_{ij}\, x_i x_j^T = T_1 - T_2. \qquad (7)$$
Substituting the kernel function $\kappa(\cdot)$ into (7), we have
$$T_1 = \frac{1}{n(n-1)}\sum_{i=1}^{n}\left(\sum_{j=1}^{n}\kappa(\|x_i - x_j\|) - \kappa(0)\right)x_i x_i^T = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{n-1}\sum_{j \neq i}\kappa(\|x_i - x_j\|)\right)x_i x_i^T,$$
$$T_2 = \frac{1}{n(n-1)}\sum_{i \neq j} W_{ij}\, x_i x_j^T = \frac{1}{n}\sum_{i} x_i\left(\frac{1}{n-1}\sum_{j \neq i}\kappa(\|x_i - x_j\|)\, x_j^T\right). \qquad (8)$$
Under Conditions 3.1, the next two lemmas show the convergence of T1 and T2, respectively.
Lemma 3.1 Let the conditional expectation $\phi(x) = E(\kappa(\|z - x\|) \mid x)$, where z and x are independent and both sampled from P(X). Then $E[\phi(x)\, x x^T]$ exists, and $T_1$ in (8) converges almost
surely at rate $n^{-1/2}$, i.e.,
$$T_1 \stackrel{a.s.}{=} E\left[\phi(x)\, x x^T\right] + O\left(n^{-1/2}\right). \qquad (9)$$
Lemma 3.2 Let the conditional expectation $\psi(x) = E(\kappa(\|z - x\|)\, z \mid x)$, where z and x are independent and both sampled from P(X). Then $E[x\, \psi(x)^T]$ exists, and $T_2$ in (8) converges almost
surely at rate $n^{-1/2}$, i.e.,
$$T_2 \stackrel{a.s.}{=} E\left[x\, \psi(x)^T\right] + O\left(n^{-1/2}\right). \qquad (10)$$
The proofs of the above two lemmas are given in Section 6. Based on Lemmas 3.1 and 3.2, we have the
following two theorems for the convergence of Q and S.
Theorem 3.1 Given Conditions 3.1, the sample-based estimate Q converges almost surely to
the deterministic matrix $E(Q) = E\left[\phi(x)\, x x^T\right] - E\left[x\, \psi(x)^T\right]$ at rate $n^{-1/2}$, i.e., $Q \stackrel{a.s.}{=} E(Q) + O\left(n^{-1/2}\right)$.
Proof. Because $Q = T_1 - T_2$, the theorem follows immediately from Lemmas 3.1 and 3.2.
Theorem 3.2 Given Conditions 3.1, the sample-based estimate S converges almost surely to the
deterministic matrix $E\left[\phi(x)\, x x^T\right]$ at rate $n^{-1/2}$, i.e., $S \stackrel{a.s.}{=} E\left[\phi(x)\, x x^T\right] + O\left(n^{-1/2}\right)$.
Proof. Since $D_{ii} = \sum_j W_{ij} = \sum_{j=1}^{n}\kappa(\|x_i - x_j\|)$, we have
$$S = \frac{1}{n(n-1)}\sum_{i=1}^{n} D_{ii}\, x_i x_i^T = \frac{1}{n(n-1)}\sum_{i=1}^{n}\left(\sum_{j \neq i}\kappa(\|x_i - x_j\|) + \kappa(0)\right)x_i x_i^T = T_1 + \frac{1}{n(n-1)}\sum_{i=1}^{n} x_i x_i^T.$$
Because $\frac{1}{n-1}\sum_{i=1}^{n} x_i x_i^T$ is an unbiased estimation of $\mathrm{Cov}(X)$, we have $\frac{1}{n(n-1)}\sum_{i=1}^{n} x_i x_i^T \stackrel{a.s.}{=} O\left(n^{-1}\right)$. Therefore, according to Lemma 3.1, we have $S \stackrel{a.s.}{=} T_1 + O\left(n^{-1}\right) = E\left[\phi(x)\, x x^T\right] + O\left(n^{-1/2}\right)$. Note that here $E(S) \neq E\left[\phi(x)\, x x^T\right]$, but equality
is achieved asymptotically as $n \to \infty$.
4 Optimization on the Grassmann Manifold
The optimization of the regularized SIR (6) is much more difficult than that of the standard SIR (2),
which can be solved by a generalized eigen-decomposition. In this section, we present a conjugate
gradient method on the Grassmann manifold to solve (6), based on the fact that it is invariant to scalar and
rotation transformations of the projection B. By exploiting the geometry of the Grassmann manifold,
the conjugate gradient algorithm converges faster than the gradient scheme in the Euclidean space.
Given a constrained optimization problem min F(A) subject to $A \in \mathbb{R}^{p \times k}$ and $A^T A = I$, if the
problem further satisfies F(A) = F(AO) for an arbitrary orthonormal matrix O, then it is called
an optimization problem defined on the Grassmann manifold $G_{p,k}$. By the following theorem, we
can transform (6) into its equivalent form (11) which is defined on the Grassmann manifold.
Theorem 4.1 Suppose that $\Sigma$ is nonsingular, and let the eigen-decomposition $\Sigma^{-1/2} S \Sigma^{-1/2} = U \Lambda U^T$ be given. Then problem (6) is equivalent to
$$\min_{A^T A = I} F(A) = -\mathrm{trace}\left(A^T \tilde{\Gamma} A\right) + \lambda\, \mathrm{trace}\left(\left(A^T \Lambda A\right)^{-1} A^T \tilde{Q} A\right) \qquad (11)$$
where $\tilde{\Gamma} = U^T \Sigma^{-1/2} \Gamma \Sigma^{-1/2} U$ and $\tilde{Q} = U^T \Sigma^{-1/2} Q \Sigma^{-1/2} U$. Given the optimal solution $A^\star$ of
(11), the optimal solution of (6) is given by $B^\star = \Sigma^{-1/2} U A^\star$.
Proof. Substituting $B = \Sigma^{-1/2} U A$ into (6), we have $\mathrm{SIR}_r(A) = \mathrm{trace}\left(\left(A^T A\right)^{-1} A^T \tilde{\Gamma} A\right) - \lambda\, \mathrm{trace}\left(\left(A^T \Lambda A\right)^{-1} A^T \tilde{Q} A\right)$. Given a nonsingular $\Sigma$, $B = \Sigma^{-1/2} U A$ is an invertible variable
transform, so if $A^\star$ maximizes $\mathrm{SIR}_r(A)$ then $B^\star$ maximizes $\mathrm{SIR}_r(B)$. Because
$\mathrm{SIR}_r(A)$ is invariant to scalar and rotation transformations, the constraint $A^T A = I$ can be added to
(6). We then get (11), which completes the proof.
To implement the conjugate gradient method on the Grassmann manifold, the gradient of F (A)
in (11) is required. According to [4], the gradient GA of F (A) on the manifold is defined by
$G_A = \Pi_A F_A$, where $F_A$ is the gradient of $F(A)$ in the Euclidean space and $\Pi_A = I - A A^T$ is the
projection onto the tangent space of the manifold at A. For $F(A)$ in (11), it is given (up to a constant factor) by
$$G_A = \left(I - A A^T\right)\left[-\tilde{\Gamma} A + \lambda\left(\tilde{Q} A - \Lambda A \left(A^T \Lambda A\right)^{-1} A^T \tilde{Q} A\right)\left(A^T \Lambda A\right)^{-1}\right]. \qquad (12)$$
Next, we present the conjugate gradient method on the Grassmann manifold [4] to solve (11). The
algorithm is given by the following three steps:
- 1-D search along the geodesic: given the current position $A_k$, the gradient $G_k$, and the search direction $H_k$, the 1-D search along the geodesic is given by
$$\min_t F(A(t)) \quad \text{s.t.} \quad A(t) = A_k V \cos(\Theta t) V^T + U \sin(\Theta t) V^T \qquad (13)$$
where $U \Theta V^T$ is the compact SVD of $H_k$. Record the minimizer $t_k = t_{\min}$, and take $A_{k+1} = A(t_k)$ as the starting position for the next search.
- Transporting the gradient and search direction: parallel transport $G_k$ and $H_k$ from $A_k$ to $A_{k+1}$ using
$$\tau G_k = G_k - \left(A_k V \sin(\Theta t_k) + U\left(I - \cos(\Theta t_k)\right)\right) U^T G_k \qquad (14)$$
$$\tau H_k = \left(-A_k V \sin(\Theta t_k) + U \cos(\Theta t_k)\right) \Theta V^T \qquad (15)$$
- Calculating the conjugate direction: given the gradient $G_{k+1}$ at $A_{k+1}$, the conjugate search direction is
$$H_{k+1} = -G_{k+1} + \left[\mathrm{trace}\left(\left(G_{k+1} - \tau G_k\right)^T G_{k+1}\right) / \mathrm{trace}\left(G_k^T G_k\right)\right] \tau H_k. \qquad (16)$$
Initialize $A_0$ by a random guess (subject to $A_0^T A_0 = I$), let $H_0 = -G_0$, and repeat the
above three steps iteratively to minimize F(A) until convergence, i.e., $|F(A_{k+1}) - F(A_k)| < \epsilon_0$.
Note that, as with the conjugate gradient method in Euclidean space, the search direction
$H_k$ has to be reset to $H_k = -G_k$ with a period of $p(n - p)$, i.e., the dimension of the search
space.
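A minimal sketch of the geodesic update in Eq. (13); the 1-D line search is reduced to a coarse grid search for illustration, and all names are assumptions rather than the authors' implementation.

import numpy as np

def geodesic_point(A, H, t):
    # A(t) = A V cos(Theta t) V^T + U sin(Theta t) V^T, with H = U Theta V^T
    # the compact SVD of the search direction (Eq. (13)).
    U, theta, Vt = np.linalg.svd(H, full_matrices=False)
    ct = np.diag(np.cos(theta * t))
    st = np.diag(np.sin(theta * t))
    return A @ Vt.T @ ct @ Vt + U @ st @ Vt

def line_search_on_geodesic(F, A, H, ts):
    # Coarse 1-D search over candidate step sizes ts for min_t F(A(t)).
    vals = [F(geodesic_point(A, H, t)) for t in ts]
    t_min = ts[int(np.argmin(vals))]
    return t_min, geodesic_point(A, H, t_min)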
5 Experiments
In this section, we evaluate the proposed regularized SIR on two real datasets. We show the results
of the standard SIR and the localized SIR on the same experiments for reference.
5.1 USPS Test
The USPS dataset contains 9,298 handwritten characters of digits 0 to 9. The entire USPS database
is divided into two parts: a training set with 7,291 samples and a test set with 2,007 samples
[5]. In our experiment, dimension reduction is first performed and then the nearest neighbor
rule is used for classification. Using 1/3 of the training set as labeled data and the
remaining 2/3 as unlabeled data, we conduct supervised and semi-supervised dimension reduction with the
following five methods: supervised training of standard SIR, the manifold regularized SIR, and the
localized SIR, and semi-supervised training of the manifold regularized SIR and the localized SIR.
Performances are evaluated on the independent testing set. Table 1 summarizes the experimental
results. It shows that both the regularized SIR and the localized SIR [12] can achieve superior
performance to the standard SIR, and the manifold regularized SIR performs better than the localized
SIR in both the supervised and the semi-supervised training. The experimental results show that the
manifold regularized SIR is effective at exploiting the local geometry of a dataset.
Table 1: Experimental results on the USPS dataset: SIR; the manifold regularized SIR (RSIR);
the localized SIR (LSIR); semi-supervised training of the manifold regularized SIR (sRSIR); semi-supervised training of the localized SIR (sLSIR).
Dimensionality |   7    |   9    |   11   |   13   |   15   |   17   |   19   |   21
SIR            | 0.8635 | 0.8794 |   -    |   -    |   -    |   -    |   -    |   -
RSIR           | 0.8575 | 0.8809 | 0.8859 | 0.8889 | 0.9028 | 0.9108 | 0.9148 | 0.9193
sRSIR          | 0.8685 | 0.8864 | 0.8934 | 0.8909 | 0.9053 | 0.9128 | 0.9208 | 0.9193
LSIR           | 0.8301 | 0.8421 | 0.8535 | 0.8724 | 0.8789 | 0.8949 | 0.8989 | 0.9003
sLSIR          | 0.8526 | 0.8675 | 0.8795 | 0.8826 | 0.8914 | 0.8954 | 0.9038 | 0.9063

5.2 Transductive Visualization
In the Coil-20 database [10], each object has 72 images taken from different view angles. All images
are cropped into 128×128 pixel arrays with 256 gray levels. We then reduce the size to 32×32 and
use the first 10 objects for 2-D visualization, with 6 of each object's 72 images randomly labeled. Figure
1 shows the visualization results obtained by SIR, the proposed regularized SIR and the localized
SIR [12]. The figure shows that by exploiting the unlabeled data via the manifold regularization
for dimension reduction, the performance for data visualization can be significantly improved. The
localized SIR performs better than SIR, but not as well as the regularized SIR.
Figure 1: Visualization of the first 10 objects in Coil-20 database: from left to right, by the standard
SIR, the manifold regularized SIR, and the localized SIR.
6 Proofs of Lemmas
Proof of Lemma 3.1. Because the kernel function $\kappa(\cdot)$ is bounded by $|\kappa(d)| \le 1$, we have
$|\phi(x)| = |E(\kappa(\|z - x\|) \mid x)| \le 1$, which implies that $E\left[\phi(x)\, x x^T\right]$ exists. Then, to prove
$T_1 \stackrel{a.s.}{=} E\left[\phi(x)\, x x^T\right] + O\left(n^{-1/2}\right)$, it is sufficient to show that $E(T_1) = E\left[\phi(x)\, x x^T\right]$ and
$$\mathrm{Cov}(\mathrm{vec}(T_1)) = E\left[(\mathrm{vec}(T_1))(\mathrm{vec}(T_1))^T\right] - \left(\mathrm{vec}(E(T_1))\right)\left(\mathrm{vec}(E(T_1))\right)^T = O\left(n^{-1}\right).$$
First, because $x_i$ and $x_j$ are independent when $i \neq j$, it follows that
$$E(T_1) = E\left[\frac{1}{n}\sum_{i=1}^{n}\left(\frac{1}{n-1}\sum_{j \neq i}\kappa(\|x_i - x_j\|)\right)x_i x_i^T\right] = \frac{1}{n}\sum_{i=1}^{n} E\left[x_i x_i^T\, \frac{1}{n-1}\sum_{j \neq i} E\left(\kappa(\|x_i - x_j\|) \mid x_i\right)\right] = \frac{1}{n}\sum_{i=1}^{n} E\left[x_i x_i^T \phi(x_i)\right] = E\left[\phi(x)\, x x^T\right]. \qquad (17)$$
Next, we show that $E\left[(\mathrm{vec}(T_1))(\mathrm{vec}(T_1))^T\right]$ is a summation of two terms, of which one is
$\left(\mathrm{vec}(E(T_1))\right)\left(\mathrm{vec}(E(T_1))\right)^T$ and the other is $O\left(n^{-1}\right)$:
$$E\left[(\mathrm{vec}(T_1))(\mathrm{vec}(T_1))^T\right] = \frac{1}{n^2(n-1)^2}\sum_{i \neq j}\sum_{i' \neq j'} E\left[\mathrm{vec}\left(\kappa(\|x_i - x_j\|)\, x_i x_i^T\right)\left(\mathrm{vec}\left(\kappa(\|x_{i'} - x_{j'}\|)\, x_{i'} x_{i'}^T\right)\right)^T\right] = \frac{1}{n^2(n-1)^2}\sum_{i,j,i',j'\ \mathrm{distinct}} E(\Phi_{i,j,i',j'}) + \frac{1}{n^2(n-1)^2}\sum_{\mathrm{else}} E(\Phi_{i,j,i',j'}), \qquad (18)$$
where $\Phi_{i,j,i',j'} = \mathrm{vec}\left(\kappa(\|x_i - x_j\|)\, x_i x_i^T\right)\left(\mathrm{vec}\left(\kappa(\|x_{i'} - x_{j'}\|)\, x_{i'} x_{i'}^T\right)\right)^T$.
When $i, j, i', j'$ are distinct, $x_i$, $x_j$, $x_{i'}$, and $x_{j'}$ are independent, and we have
$$E(\Phi_{i,j,i',j'}) = E\left[\mathrm{vec}\left(\kappa(\|x_i - x_j\|)\, x_i x_i^T\right)\right]\left(E\left[\mathrm{vec}\left(\kappa(\|x_{i'} - x_{j'}\|)\, x_{i'} x_{i'}^T\right)\right]\right)^T = \mathrm{vec}\left(E\left[\phi(x)\, x x^T\right]\right)\left(\mathrm{vec}\left(E\left[\phi(x)\, x x^T\right]\right)\right)^T = \left(\mathrm{vec}(E(T_1))\right)\left(\mathrm{vec}(E(T_1))\right)^T. \qquad (19)$$
Therefore, the first term in (18) is
$$\frac{1}{n^2(n-1)^2}\sum_{i,j,i',j'\ \mathrm{distinct}} E(\Phi_{i,j,i',j'}) = \frac{n(n-1)(n-2)(n-3)}{n^2(n-1)^2}\left(\mathrm{vec}(E(T_1))\right)\left(\mathrm{vec}(E(T_1))\right)^T = \left(\mathrm{vec}(E(T_1))\right)\left(\mathrm{vec}(E(T_1))\right)^T + O\left(n^{-1}\right). \qquad (20)$$
For the second term in (18), $E(\Phi_{i,j,i',j'})$ is bounded by a constant (matrix) $M$ under Conditions 3.1, and thus we have
$$\left\|\frac{1}{n^2(n-1)^2}\sum_{\mathrm{else}} E(\Phi_{i,j,i',j'})\right\| \le \frac{1}{n^2(n-1)^2}\sum_{\mathrm{else}} M = \frac{n(n-1)(4n-6)}{n^2(n-1)^2}\, M = O\left(n^{-1}\right). \qquad (21)$$
Combining the above two results, we have
$$\mathrm{Cov}(\mathrm{vec}(T_1)) = E\left[(\mathrm{vec}(T_1))(\mathrm{vec}(T_1))^T\right] - \left(\mathrm{vec}(E(T_1))\right)\left(\mathrm{vec}(E(T_1))\right)^T = O\left(n^{-1}\right). \qquad (22)$$
Proof of Lemma 3.2. Similar to the proof of Lemma 3.1, $E\left[x\, \psi(x)^T\right]$ exists. Then, it is sufficient
to show that $E(T_2) = E\left[x\, \psi(x)^T\right]$ and $\mathrm{Cov}(\mathrm{vec}(T_2)) = O\left(n^{-1}\right)$. First, we have
$$E(T_2) = E\left[\frac{1}{n}\sum_{i=1}^{n} x_i\, \frac{1}{n-1}\sum_{j \neq i}\kappa(\|x_i - x_j\|)\, x_j^T\right] = \frac{1}{n}\sum_{i=1}^{n} E\left[x_i\, \frac{1}{n-1}\sum_{j \neq i} E\left(\kappa(\|x_i - x_j\|)\, x_j^T \mid x_i\right)\right] = \frac{1}{n}\sum_{i=1}^{n} E\left[x_i\, \psi(x_i)^T\right] = E\left[x\, \psi(x)^T\right]. \qquad (23)$$
Next, we split $E\left[(\mathrm{vec}(T_2))(\mathrm{vec}(T_2))^T\right]$ into two terms:
$$E\left[(\mathrm{vec}(T_2))(\mathrm{vec}(T_2))^T\right] = \frac{1}{n^2(n-1)^2}\sum_{i \neq j}\sum_{i' \neq j'} E\left[\mathrm{vec}\left(\kappa(\|x_i - x_j\|)\, x_i x_j^T\right)\left(\mathrm{vec}\left(\kappa(\|x_{i'} - x_{j'}\|)\, x_{i'} x_{j'}^T\right)\right)^T\right] = \frac{1}{n^2(n-1)^2}\sum_{i,j,i',j'\ \mathrm{distinct}} E(\Psi_{i,j,i',j'}) + \frac{1}{n^2(n-1)^2}\sum_{\mathrm{else}} E(\Psi_{i,j,i',j'}), \qquad (24)$$
where $\Psi_{i,j,i',j'} = \mathrm{vec}\left(\kappa(\|x_i - x_j\|)\, x_i x_j^T\right)\left(\mathrm{vec}\left(\kappa(\|x_{i'} - x_{j'}\|)\, x_{i'} x_{j'}^T\right)\right)^T$.
Following the same method used in the proof of Lemma 3.1, we have
$$\frac{1}{n^2(n-1)^2}\sum_{i,j,i',j'\ \mathrm{distinct}} E(\Psi_{i,j,i',j'}) = \left(\mathrm{vec}(E(T_2))\right)\left(\mathrm{vec}(E(T_2))\right)^T + O\left(n^{-1}\right) \qquad (25)$$
and
$$\left\|\frac{1}{n^2(n-1)^2}\sum_{\mathrm{else}} E(\Psi_{i,j,i',j'})\right\| \le O\left(n^{-1}\right). \qquad (26)$$
Therefore, we have $\mathrm{Cov}(\mathrm{vec}(T_2)) = O\left(n^{-1}\right)$.

7 Conclusion
We have studied manifold regularization for Sliced Inverse Regression (SIR). The regularized
SIR extends the original SIR in two ways: it utilizes the local geometry that the original ignores,
and it enables SIR to deal with transductive/semi-supervised learning problems. We also
discussed the statistical properties of the proposed regularization, showing that under mild conditions the
manifold regularization converges at rate root-n. To solve the regularized SIR problem, we presented
a conjugate gradient method on the Grassmann manifold. Experiments on real datasets
validate the effectiveness of the regularized SIR.
Acknowledgments
This project was supported by the Nanyang Technological University Nanyang SUG Grant (under
project number M58020010).
8
References
[1] Belkin, M. & Niyogi, P. (2003) Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6): 1373-1396.
[2] Belkin, M., Niyogi, P. & Sindhwani, V. (2006) Manifold regularization: A geometric framework
for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 1:
1-48.
[3] Cook, R.D.(2004) Testing predictor contributions in sufficient dimension reduction. Annals of
Statistics, 32: 1061-1092.
[4] Edelman, A., Arias, T.A., & Smith, S.T. (1998) The geometry of algorithms with orthogonality
constraints. SIAM J. Matrix Anal. Appl., 20(2):303-353.
[5] Hastie, T., Buja, A., & Tibshirani, R. (1995) Penalized discriminant analysis. Annals of Statistics,
2: 73-102.
[6] He, X., Deng, C., & Min, W. (2005) Statistical and computational analysis of locality preserving
projection. In 22th International Conference on Machine Learning (ICML).
[7] Li, K. (1991) Sliced inverse regression for dimension reduction (with discussion). J. Amer.
Statist. Assoc., 86:316-342.
[8] Li, L. (2007). Sparse sufficient dimension reduction. Biometrika 94(3): 603-613.
[9] Li, L., & YIN, X. (2008). Sliced inverse regression with regularizations. Biometrics 64: 124-131.
[10] Nene, S.A., Nayar, S.K., & Murase, H. (1996) Columbia object image library: COIL-20. Technical Report No. CUCS-006-96, Dept. of Computer Science, Columbia University.
[11] Saracco, J. (1997). An asymptotic theory for sliced inverse regression. Comm. Statist. Theory
Methods 26: 2141-2171.
[12] Wu, Q., Mukherjee, S., & Liang, F. (2008) Localized sliced inverse regression. Advances in
neural information processing systems 20, Cambridge, MA: MIT Press.
[13] von Luxburg, U., Bousquet, O., & Belkin, M. (2005) Limits of spectral clustering. In L. K.
Saul, Y. Weiss and L. Bottou (Eds.), Advances in neural information processing systems 17,
Cambridge, MA: MIT Press.
[14] Zhong, W., Zeng, P., Ma, P., Liu, J. S., & Zhu, Y. (2005) RSIR: Regularized sliced inverse
regression for motif discovery. Bioinformatics 21: 4169-4175.
[15] Zhu, L.X., & NG, K.W. (1995) Asymptotics of sliced inverse regression. Statistica Sinica 5:
727-736.
Sharing Features among Dynamical Systems
with Beta Processes
Emily B. Fox
Electrical Engineering & Computer Science, Massachusetts Institute of Technology
[email protected]
Erik B. Sudderth
Computer Science, Brown University
[email protected]
Michael I. Jordan
Electrical Engineering & Computer Science and Statistics, University of California, Berkeley
[email protected]
Alan S. Willsky
Electrical Engineering & Computer Science, Massachusetts Institute of Technology
[email protected]
Abstract
We propose a Bayesian nonparametric approach to the problem of modeling related time series. Using a beta process prior, our approach is based on the discovery of a set of latent dynamical behaviors that are shared among multiple time
series. The size of the set and the sharing pattern are both inferred from data. We
develop an efficient Markov chain Monte Carlo inference method that is based on
the Indian buffet process representation of the predictive distribution of the beta
process. In particular, our approach uses the sum-product algorithm to efficiently
compute Metropolis-Hastings acceptance probabilities, and explores new dynamical behaviors via birth/death proposals. We validate our sampling algorithm using
several synthetic datasets, and also demonstrate promising results on unsupervised
segmentation of visual motion capture data.
1 Introduction
In many applications, one would like to discover and model dynamical behaviors which are shared
among several related time series. For example, consider video or motion capture data depicting
multiple people performing a number of related tasks. By jointly modeling such sequences, we
may more robustly estimate representative dynamic models, and also uncover interesting relationships among activities. We specifically focus on time series where behaviors can be individually
modeled via temporally independent or linear dynamical systems, and where transitions between
behaviors are approximately Markovian. Examples of such Markov jump processes include the hidden Markov model (HMM), switching vector autoregressive (VAR) process, and switching linear
dynamical system (SLDS). These models have proven useful in such diverse fields as speech recognition, econometrics, remote target tracking, and human motion capture. Our approach envisions
a large library of behaviors, and each time series or object exhibits a subset of these behaviors.
We then seek a framework for discovering the set of dynamic behaviors that each object exhibits.
We particularly aim to allow flexibility in the number of total and sequence-specific behaviors, and
encourage objects to share similar subsets of the large set of possible behaviors.
One can represent the set of behaviors an object exhibits via an associated list of features. A standard featural representation for N objects, with a library of K features, employs an N × K binary
matrix F = {f_ik}. Setting f_ik = 1 implies that object i exhibits feature k. Our desiderata motivate
a Bayesian nonparametric approach based on the beta process [10, 22], allowing for infinitely many
potential features. Integrating over the latent beta process induces a predictive distribution on features known as the Indian buffet process (IBP) [9]. Given a feature set sampled from the IBP, our
model reduces to a collection of Bayesian HMMs (or SLDS) with partially shared parameters.
Other recent approaches to Bayesian nonparametric representations of time series include the HDP-HMM [2, 4, 5, 21] and the infinite factorial HMM [24]. These models are quite different from
our framework: the HDP-HMM does not select a subset of behaviors for a given time series, but
assumes that all time series share the same set of behaviors and switch among them in exactly the
same manner. The infinite factorial HMM models a single time-series with emissions dependent
on a potentially infinite dimensional feature that evolves with independent Markov dynamics. Our
work focuses on modeling multiple time series and on capturing dynamical modes that are shared
among the series.
Our results are obtained via an efficient and exact Markov chain Monte Carlo (MCMC) inference algorithm. In particular, we exploit the finite dynamical system induced by a fixed set of features to efficiently compute acceptance probabilities, and reversible jump birth and death proposals to explore
new features. We validate our sampling algorithm using several synthetic datasets, and also demonstrate promising unsupervised segmentation of data from the CMU motion capture database [23].
2 Binary Features and Beta Processes
The beta process is a completely random measure [12]: draws are discrete with probability one, and
realizations on disjoint sets are independent random variables. Consider a probability space $\Theta$, and
let $B_0$ denote a finite base measure on $\Theta$ with total mass $B_0(\Theta) = \alpha$. Assuming $B_0$ is absolutely
continuous, we define the following Lévy measure on the product space $[0, 1] \times \Theta$:
$$\nu(d\omega, d\theta) = c\, \omega^{-1} (1 - \omega)^{c-1}\, d\omega\, B_0(d\theta). \qquad (1)$$
Here, c > 0 is a concentration parameter; we denote such a beta process by BP(c, B0 ). A draw
$B \sim \mathrm{BP}(c, B_0)$ is then described by
$$B = \sum_{k=1}^{\infty} \omega_k \delta_{\theta_k}, \qquad (2)$$
where $(\omega_1, \theta_1), (\omega_2, \theta_2), \ldots$ are the atoms in a realization of a nonhomogeneous Poisson
process with rate measure $\nu$. If there are atoms in $B_0$, then these are treated separately; see [22].
The beta process is conjugate to a class of Bernoulli processes [22], denoted by BeP(B), which
provide our sought-for featural representation. A realization $X_i \sim \mathrm{BeP}(B)$, with B an atomic
measure, is a collection of unit-mass atoms on $\Theta$ located at some subset of the atoms in B. In
particular, $f_{ik} \sim \mathrm{Bernoulli}(\omega_k)$ is sampled independently for each atom $\theta_k$ in Eq. (2), and then
$X_i = \sum_k f_{ik} \delta_{\theta_k}$.
In many applications, we interpret the atom locations $\theta_k$ as a shared set of global features. A
Bernoulli process realization $X_i$ then determines the subset of features allocated to object i:
$$B \mid B_0, c \sim \mathrm{BP}(c, B_0), \qquad X_i \mid B \sim \mathrm{BeP}(B), \quad i = 1, \ldots, N. \qquad (3)$$
Because beta process priors are conjugate to the Bernoulli process [22], the posterior distribution
given N samples $X_i \sim \mathrm{BeP}(B)$ is a beta process with updated parameters:
$$B \mid X_1, \ldots, X_N, B_0, c \sim \mathrm{BP}\left(c + N,\; \frac{c}{c+N} B_0 + \sum_{k=1}^{K_+} \frac{m_k}{c+N} \delta_{\theta_k}\right). \qquad (4)$$
Here, $m_k$ denotes the number of objects $X_i$ which select the k-th feature $\theta_k$. For simplicity, we have
reordered the feature indices to list the $K_+$ features used by at least one object first.
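As a small sketch of the conjugacy in Eq. (4), the following computes the updated beta process parameters from a binary feature matrix; all names are illustrative assumptions.

import numpy as np

def bp_posterior_params(F, c, alpha):
    # F: N x K binary feature matrix of Bernoulli process draws.
    # Returns the posterior concentration c + N, the weight c/(c+N) * alpha
    # on the base measure, and a mass m_k/(c+N) for each used feature.
    N = F.shape[0]
    m = F.sum(axis=0)
    return c + N, c / (c + N) * alpha, m[m > 0] / (c + N)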
Computationally, Bernoulli process realizations Xi are often summarized by an infinite vector of
binary indicator variables $f_i = [f_{i1}, f_{i2}, \ldots]$, where $f_{ik} = 1$ if and only if object i exhibits feature k. As shown by Thibaux and Jordan [22], marginalizing over the beta process measure B,
and taking c = 1, yields the predictive distribution on indicators known as the Indian buffet process (IBP) of Griffiths and Ghahramani [9]. The IBP is a culinary metaphor inspired by the Chinese
restaurant process, which is itself the predictive distribution on partitions induced by the Dirichlet
process [21]. The Indian buffet consists of an infinitely long buffet line of dishes, or features. The
first arriving customer, or object, chooses Poisson(α) dishes. Each subsequent customer i selects
a previously tasted dish k with probability $m_k / i$, proportional to the number of previous customers
$m_k$ to sample it, and also samples Poisson(α/i) new dishes.
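For illustration, the following sketch draws a binary feature matrix from the IBP predictive process just described; the function and variable names are assumptions, not the authors' code.

import numpy as np

def sample_ibp(N, alpha, rng=np.random.default_rng()):
    dishes = []                   # m_k counts for each instantiated feature
    rows = []
    for i in range(1, N + 1):
        row = []
        for k, m_k in enumerate(dishes):
            take = int(rng.random() < m_k / i)   # previously tasted dish
            row.append(take)
            dishes[k] += take
        new = rng.poisson(alpha / i)             # Poisson(alpha/i) new dishes
        row.extend([1] * new)
        dishes.extend([1] * new)
        rows.append(row)
    K = len(dishes)
    return np.array([r + [0] * (K - len(r)) for r in rows])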
3 Describing Multiple Time Series with Beta Processes
Assume we have a set of N objects, each of whose dynamics is described by a switching vector
autoregressive (VAR) process, with switches occurring according to a discrete-time Markov process.
Such autoregressive HMMs (AR-HMMs) provide a simpler, but often equally effective, alternative
to SLDS [17]. Let $y_t^{(i)}$ represent the observation vector of the i-th object at time t, and $z_t^{(i)}$ the latent
dynamical mode. Assuming an order-r switching VAR process, denoted by VAR(r), we have
$$z_t^{(i)} \sim \pi^{(i)}_{z_{t-1}^{(i)}} \qquad (5)$$
$$y_t^{(i)} = \sum_{j=1}^{r} A_{j, z_t^{(i)}}\, y_{t-j}^{(i)} + e_t^{(i)}(z_t^{(i)}) \triangleq A_{z_t^{(i)}}\, \tilde{y}_t^{(i)} + e_t^{(i)}(z_t^{(i)}), \qquad (6)$$
where $e_t^{(i)}(k) \sim \mathcal{N}(0, \Sigma_k)$, $A_k = [A_{1,k} \ldots A_{r,k}]$, and $\tilde{y}_t^{(i)} = [y_{t-1}^{(i)T} \ldots y_{t-r}^{(i)T}]^T$. The
standard HMM with Gaussian emissions arises as a special case of this model when $A_k = 0$ for
all k. We refer to these VAR processes, with parameters $\theta_k = \{A_k, \Sigma_k\}$, as behaviors, and use a
beta process prior to couple the dynamic behaviors exhibited by different objects or sequences.
As in Sec. 2, let f_i be a vector of binary indicator variables, where f_ik denotes whether object i
exhibits behavior k for some t ∈ {1, . . . , T_i}. Given f_i, we define a feature-constrained transition
distribution π^(i) = {π_j^(i)}, which governs the ith object's Markov transitions among its set of dynamic
behaviors. In particular, motivated by the fact that a Dirichlet-distributed probability mass
function can be interpreted as a normalized collection of gamma-distributed random variables, for
each object i we define a doubly infinite collection of random variables:

    η_jk^(i) | γ, κ ~ Gamma(γ + κ δ(j, k), 1),      (7)

where δ(j, k) indicates the Kronecker delta function. We denote this collection of transition variables
by η^(i), and use them to define object-specific, feature-constrained transition distributions:

    π_j^(i) = [η_j1^(i)  η_j2^(i)  . . .] ⊗ f_i  /  ∑_{k | f_ik = 1} η_jk^(i).      (8)

Here, ⊗ denotes the element-wise vector product. This construction defines π_j^(i) over the full set of
positive integers, but assigns positive mass only at indices k where f_ik = 1.
The preceding generative process can be equivalently represented via a sample π̃_j^(i) from a finite
Dirichlet distribution of dimension K_i = ∑_k f_ik, containing the non-zero entries of π_j^(i):

    π̃_j^(i) | f_i, γ, κ ~ Dir([γ, . . . , γ, γ + κ, γ, . . . , γ]).      (9)
The κ hyperparameter places extra expected mass on the component of π̃_j^(i) corresponding to a
self-transition π_jj^(i), analogously to the sticky hyperparameter of Fox et al. [4]. We refer to this model,
which is summarized in Fig. 1, as the beta process autoregressive HMM (BP-AR-HMM).
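To make Eqs. (7)-(9) concrete, here is a minimal sketch of building one object's feature-constrained transition matrix, restricted to its active behaviors; the helper name and the finite truncation are our own illustration.

    import numpy as np

    def transition_matrix(f_i, gamma, kappa, rng=np.random.default_rng(0)):
        """Build pi^(i) over object i's active behaviors, per Eqs. (7)-(8)."""
        active = np.flatnonzero(f_i)               # indices k with f_ik = 1
        K_i = len(active)
        # eta_jk ~ Gamma(gamma + kappa * delta(j, k), 1), only where needed.
        shape = gamma + kappa * np.eye(K_i)
        eta = rng.gamma(shape, 1.0)
        pi = eta / eta.sum(axis=1, keepdims=True)  # normalize over active features
        return active, pi                          # rows/cols indexed by `active`

By the gamma-Dirichlet relationship noted above, each row of pi is distributed exactly as the finite Dirichlet draw of Eq. (9).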
4 MCMC Methods for Posterior Inference
We have developed an MCMC method which alternates between resampling binary feature assignments given observations and dynamical parameters, and dynamical parameters given observations
and features. The sampler interleaves Metropolis-Hastings (MH) and Gibbs sampling updates,
which are sometimes simplified by appropriate auxiliary variables. We leverage the fact that fixed
feature assignments instantiate a set of finite AR-HMMs, for which dynamic programming can be
used to efficiently compute marginal likelihoods. Our novel approach to resampling the potentially
infinite set of object-specific features employs incremental ?birth? and ?death? proposals, improving
on previous exact samplers for IBP models with non-conjugate likelihoods.
4.1 Sampling binary feature assignments
Let F^{−ik} denote the set of all binary feature indicators excluding f_ik, and let K_+^{−i} be the number of
behaviors currently instantiated by objects other than i. For notational simplicity, we assume that
Figure 1: Graphical model of the BP-AR-HMM. The beta process distributed measure B | B_0 ~ BP(1, B_0)
is represented by its masses ω_k and locations θ_k, as in Eq. (2). The features are then conditionally
independent draws f_ik | ω_k ~ Bernoulli(ω_k), and are used to define feature-constrained transition distributions
π_j^(i) | f_i, γ, κ ~ Dir([γ, . . . , γ, γ + κ, γ, . . .] ⊗ f_i). The switching VAR dynamics are as in Eq. (6).
these behaviors are indexed by {1, . . . , K_+^{−i}}. Given the ith object's observation sequence y_{1:T_i}^(i),
transition variables η^(i) = η_{1:K_+^{−i}, 1:K_+^{−i}}^(i), and shared dynamic parameters θ_{1:K_+^{−i}}, feature indicators
f_ik for currently used features k ∈ {1, . . . , K_+^{−i}} have the following posterior distribution:

    p(f_ik | F^{−ik}, y_{1:T_i}^(i), η^(i), θ_{1:K_+^{−i}}, α) ∝ p(f_ik | F^{−ik}, α) p(y_{1:T_i}^(i) | f_i, η^(i), θ_{1:K_+^{−i}}).      (10)
Here, the IBP prior implies that p(f_ik = 1 | F^{−ik}, α) = m_k^{−i}/N, where m_k^{−i} denotes the number of
objects other than object i that exhibit behavior k. In evaluating this expression, we have exploited
the exchangeability of the IBP [9], which follows directly from the beta process construction [22].
For binary random variables, MH proposals can mix faster [6] and have greater statistical efficiency [14]
than standard Gibbs samplers. To update f_ik given F^{−ik}, we thus use the posterior
of Eq. (10) to evaluate a MH proposal which flips f_ik to the complement f̄ of its current value f:

    f_ik ~ ρ(f̄ | f) δ(f_ik, f̄) + (1 − ρ(f̄ | f)) δ(f_ik, f),

    ρ(f̄ | f) = min{ p(f_ik = f̄ | F^{−ik}, y_{1:T_i}^(i), η^(i), θ_{1:K_+^{−i}}, α) / p(f_ik = f | F^{−ik}, y_{1:T_i}^(i), η^(i), θ_{1:K_+^{−i}}, α), 1 }.      (11)

To compute likelihoods, we combine f_i and η^(i) to construct feature-constrained transition
distributions π_j^(i) as in Eq. (8), and apply the sum-product message passing algorithm [19].
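A minimal sketch of this shared-feature flip, assuming a caller-supplied log-likelihood routine that runs the message passing recursion for a fixed feature vector (all names here are ours):

    import numpy as np

    def mh_flip_feature(f_i, k, m_minus_ik, N, log_lik, rng=np.random.default_rng(0)):
        """One MH flip of f_ik, following Eqs. (10)-(11).

        m_minus_ik : number of *other* objects exhibiting behavior k.
        log_lik(f) : log p(y^(i) | f, eta^(i), theta), e.g., via sum-product.
        """
        f_prop = f_i.copy()
        f_prop[k] = 1 - f_i[k]
        # IBP prior: p(f_ik = 1) = m_k^{-i} / N.
        log_prior = np.log([1.0 - m_minus_ik / N, m_minus_ik / N])
        log_post_prop = log_prior[f_prop[k]] + log_lik(f_prop)
        log_post_curr = log_prior[f_i[k]] + log_lik(f_i)
        accept_prob = min(1.0, np.exp(log_post_prop - log_post_curr))
        return f_prop if rng.random() < accept_prob else f_i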
An alternative approach is needed to resample the Poisson(α/N) "unique" features associated only
with object i. Let K_+ = K_+^{−i} + n_i, where n_i is the number of features unique to object i, and define
f_{−i} = f_{i,1:K_+^{−i}} and f_{+i} = f_{i,K_+^{−i}+1:K_+}. The posterior distribution over n_i is then given by

    p(n_i | f_{−i}, y_{1:T_i}^(i), η^(i), θ_{1:K_+^{−i}}, α) ∝ [(α/N)^{n_i} e^{−α/N} / n_i!] ∫∫ p(y_{1:T_i}^(i) | f_{−i}, f_{+i} = 1, η^(i), η^+, θ_{1:K_+^{−i}}, θ^+) dB_0(θ^+) dH(η^+),      (12)

where H is the gamma prior on transition variables, θ^+ = θ_{K_+^{−i}+1:K_+} are the parameters of unique
features, and η^+ are transition parameters η_jk^(i) to or from unique features j, k ∈ {K_+^{−i} + 1 : K_+}.
Exact evaluation of this integral is intractable due to dependencies induced by the AR-HMMs.
One early approach to approximate Gibbs sampling in non-conjugate IBP models relies on a finite
truncation [7]. Meeds et al. [15] instead consider independent Metropolis proposals which replace
the existing unique features by n'_i ~ Poisson(α/N) new features, with corresponding parameters
θ'_+ drawn from the prior. For high-dimensional models like that considered in this paper, however,
moves proposing large numbers of unique features have low acceptance rates. Thus, mixing rates
are greatly affected by the beta process hyperparameter α. We instead develop a "birth and death"
reversible jump MCMC (RJMCMC) sampler [8], which proposes to either add a single new feature,
or eliminate one of the existing features in f_{+i}. Some previous work has applied RJMCMC to finite
binary feature models [3, 27], but not to the IBP. Our proposal distribution factors as follows:
    q(f'_{+i}, θ'_+, η'_+ | f_{+i}, θ_+, η_+) = q_f(f'_{+i} | f_{+i}) q_θ(θ'_+ | f'_{+i}, f_{+i}, θ_+) q_η(η'_+ | f'_{+i}, f_{+i}, η_+).      (13)

Let n_i = ∑_k f_{+ik}. The feature proposal q_f(· | ·) encodes the probabilities of birth and death
moves: a new feature is created with probability 0.5, and each of the n_i existing features is deleted
with probability 0.5/n_i. For parameters, we define our proposal using the generative model:

    q_θ(θ'_+ | f'_{+i}, f_{+i}, θ_+) = { b_0(θ'_{+,n_i+1}) ∏_{k=1}^{n_i} δ_{θ_{+k}}(θ'_{+k}),   birth of feature n_i + 1;
                                       ∏_{k≠ℓ} δ_{θ_{+k}}(θ'_{+k}),                          death of feature ℓ,      (14)

where b_0 is the density associated with α^{−1}B_0. The distribution q_η(· | ·) is defined similarly, but
using the gamma prior on transition variables of Eq. (7). The MH acceptance probability is then
    ρ(f'_{+i}, θ'_+, η'_+ | f_{+i}, θ_+, η_+) = min{ r(f'_{+i}, θ'_+, η'_+ | f_{+i}, θ_+, η_+), 1 }.      (15)

Canceling parameter proposals with corresponding prior terms, the acceptance ratio r(· | ·) equals

    [ p(y_{1:T_i}^(i) | [f_{−i} f'_{+i}], θ_{1:K_+^{−i}}, θ'_+, η^(i), η'_+) Poisson(n'_i | α/N) q_f(f_{+i} | f'_{+i}) ]
    / [ p(y_{1:T_i}^(i) | [f_{−i} f_{+i}], θ_{1:K_+^{−i}}, θ_+, η^(i), η_+) Poisson(n_i | α/N) q_f(f'_{+i} | f_{+i}) ],      (16)

with n'_i = ∑_k f'_{+ik}. Because our birth and death proposals do not modify the values of existing
parameters, the Jacobian term normally arising in RJMCMC algorithms simply equals one.
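The sketch below outlines one birth/death move in the spirit of Eqs. (13)-(16); the likelihood and prior-sampling hooks are placeholders we introduce for illustration, and the n_i = 0 boundary is handled loosely.

    import numpy as np
    from scipy.stats import poisson

    def birth_death_move(theta_plus, log_lik, sample_theta_prior,
                         alpha, N, rng=np.random.default_rng(0)):
        """One RJMCMC birth/death update of object i's unique features.

        theta_plus : parameters of the n_i current unique features.
        log_lik(thetas) : marginal log-likelihood of y^(i) with these features.
        """
        n_i = len(theta_plus)
        birth = (n_i == 0) or (rng.random() < 0.5)   # n_i = 0 forces a birth
        if birth:                                    # add one new feature
            proposal = theta_plus + [sample_theta_prior()]
            log_qf_ratio = np.log(0.5 / (n_i + 1)) - np.log(0.5)
            n_prop = n_i + 1
        else:                                        # delete feature ell
            ell = rng.integers(n_i)
            proposal = theta_plus[:ell] + theta_plus[ell + 1:]
            log_qf_ratio = np.log(0.5) - np.log(0.5 / n_i)
            n_prop = n_i - 1
        # Eq. (16): likelihood ratio x Poisson prior ratio x proposal ratio;
        # parameter proposal terms cancel with the prior, Jacobian = 1.
        log_r = (log_lik(proposal) - log_lik(theta_plus)
                 + poisson.logpmf(n_prop, alpha / N)
                 - poisson.logpmf(n_i, alpha / N)
                 + log_qf_ratio)
        return proposal if np.log(rng.random()) < log_r else theta_plus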
4.2 Sampling dynamic parameters and transition variables
Posterior updates to transition variables η^(i) and shared dynamic parameters θ_k are greatly simplified
if we instantiate the mode sequences z_{1:T_i}^(i) for each object i. We treat these mode sequences as
auxiliary variables: they are sampled given the current MCMC state, conditioned on when resampling
model parameters, and then discarded for subsequent updates of feature assignments f_i.
Given feature-constrained transition distributions π^(i) and dynamic parameters {θ_k}, along with
the observation sequence y_{1:T_i}^(i), we jointly sample the mode sequence z_{1:T_i}^(i) by computing backward
messages m_{t+1,t}(z_t^(i)) ∝ p(y_{t+1:T_i}^(i) | z_t^(i), ỹ_t^(i), π^(i), {θ_k}), and then recursively sampling each z_t^(i):

    z_t^(i) | z_{t−1}^(i), y_{1:T_i}^(i), π^(i), {θ_k} ~ π_{z_{t−1}^(i)}^(i)(z_t^(i)) N(y_t^(i); A_{z_t^(i)} ỹ_t^(i), Σ_{z_t^(i)}) m_{t+1,t}(z_t^(i)).      (17)
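A compact sketch of this backward-filtering, forward-sampling step for a single AR-HMM, assuming the per-mode emission likelihoods have been precomputed (variable names are ours):

    import numpy as np

    def sample_modes(pi, lik, rng=np.random.default_rng(0)):
        """Jointly sample z_1:T as in Eq. (17).

        pi  : (K, K) feature-constrained transition matrix for this object.
        lik : (T, K) array, lik[t, k] = N(y_t; A_k ytilde_t, Sigma_k).
        """
        T, K = lik.shape
        m = np.ones((T + 1, K))              # m[t][k] ~ p(y_{t:T} | z_{t-1} = k)
        for t in range(T - 1, 0, -1):
            m[t] = pi @ (lik[t] * m[t + 1])
            m[t] /= m[t].sum()               # normalize for numerical stability
        z = np.empty(T, dtype=int)
        prev = np.ones(K) / K                # initial distribution (assumed uniform)
        for t in range(T):
            probs = prev * lik[t] * m[t + 1]
            z[t] = rng.choice(K, p=probs / probs.sum())
            prev = pi[z[t]]
        return z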
Because Dirichlet priors are conjugate to multinomial observations z_{1:T}^(i), the posterior of π_j^(i) is

    π_j^(i) | f_i, z_{1:T}^(i), γ, κ ~ Dir([γ + n_j1^(i), . . . , γ + n_{j,j−1}^(i), γ + κ + n_jj^(i), γ + n_{j,j+1}^(i), . . .] ⊗ f_i).      (18)

Here, n_jk^(i) are the number of transitions from mode j to k in z_{1:T}^(i). Since the mode sequence z_{1:T}^(i) is
generated from feature-constrained transition distributions, n_jk^(i) is zero for any k such that f_ik = 0.
Thus, to arrive at the posterior of Eq. (18), we only update η_jk^(i) for instantiated features:

    η_jk^(i) | z_{1:T}^(i), γ, κ ~ Gamma(γ + κ δ(j, k) + n_jk^(i), 1),   k ∈ {ℓ | f_iℓ = 1}.      (19)
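In code, this conjugate update reduces to a one-line gamma draw over the active behaviors (a sketch with our own naming):

    import numpy as np

    def resample_eta(n_counts, gamma, kappa, rng=np.random.default_rng(0)):
        """Resample eta_jk over the active behaviors, per Eq. (19).

        n_counts : (K_i, K_i) transition counts n_jk from the mode sequence.
        """
        K_i = n_counts.shape[0]
        shape = gamma + kappa * np.eye(K_i) + n_counts
        return rng.gamma(shape, 1.0)   # renormalize rows as in Eq. (8) when needed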
We now turn to posterior updates for dynamic parameters. We place a conjugate matrix-normal
inverse-Wishart (MNIW) prior [26] on {A_k, Σ_k}, comprised of an inverse-Wishart prior IW(S_0, n_0)
on Σ_k and a matrix-normal prior MN(A_k; M, Σ_k, K) on A_k given Σ_k. We consider the following
sufficient statistics based on the sets Y_k = {y_t^(i) | z_t^(i) = k} and Ỹ_k = {ỹ_t^(i) | z_t^(i) = k} of
observations and lagged observations, respectively, associated with behavior k:

    S_ỹỹ^(k) = ∑_{(t,i)|z_t^(i)=k} ỹ_t^(i) ỹ_t^(i)T + K,      S_yỹ^(k) = ∑_{(t,i)|z_t^(i)=k} y_t^(i) ỹ_t^(i)T + MK,

    S_yy^(k) = ∑_{(t,i)|z_t^(i)=k} y_t^(i) y_t^(i)T + MKM^T,      S_{y|ỹ}^(k) = S_yy^(k) − S_yỹ^(k) S_ỹỹ^{−(k)} S_yỹ^(k)T.

Following Fox et al. [5], the posterior can then be shown to equal

    A_k | Σ_k, Y_k ~ MN( A_k; S_yỹ^(k) S_ỹỹ^{−(k)}, Σ_k, S_ỹỹ^(k) ),      Σ_k | Y_k ~ IW( S_{y|ỹ}^(k) + S_0, |Y_k| + n_0 ).
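A sketch of these sufficient statistics and the resulting posterior parameters, assuming the data assigned to behavior k have been gathered into arrays (our own helper, not the authors' code):

    import numpy as np

    def mniw_posterior(Y, Ytilde, M, K, S0, n0):
        """Posterior MNIW parameters for behavior k.

        Y      : (d, n_k) observations y_t with z_t = k.
        Ytilde : (d*r, n_k) lagged observations ytilde_t with z_t = k.
        """
        S_tt = Ytilde @ Ytilde.T + K
        S_yt = Y @ Ytilde.T + M @ K
        S_yy = Y @ Y.T + M @ K @ M.T
        S_tt_inv = np.linalg.inv(S_tt)
        S_y_given_t = S_yy - S_yt @ S_tt_inv @ S_yt.T
        A_mean = S_yt @ S_tt_inv          # mean of the matrix-normal on A_k
        return A_mean, S_tt, S_y_given_t + S0, Y.shape[1] + n0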
Figure 2: (a) Observation sequences for each of 5 switching AR(1) time series colored by true mode sequence,
and offset for clarity. (b) True feature matrix (top) of the five objects and estimated feature matrix (bottom)
averaged over 10,000 MCMC samples taken from 100 trials every 10th sample. White indicates active features.
The estimated feature matrices are produced from mode sequences mapped to the ground-truth labels according
to the minimum Hamming distance metric, and selecting modes with more than 2% of the object's observations.
4.3 Sampling the beta process and Dirichlet transition hyperparameters

We additionally place priors on the Dirichlet hyperparameters γ and κ, as well as the beta process
parameter α. Let F = {f_i}. As derived in [9], p(F | α) can be expressed as

    p(F | α) ∝ α^{K_+} exp( −α ∑_{n=1}^N 1/n ),      (20)

where, as before, K_+ is the number of unique features activated in F. As in [7], we place a conjugate
Gamma(a_α, b_α) prior on α, which leads to the following posterior distribution:

    p(α | F, a_α, b_α) ∝ p(F | α) p(α | a_α, b_α), so that α | F ~ Gamma( a_α + K_+, b_α + ∑_{n=1}^N 1/n ).      (21)
Transition hyperparameters are assigned similar priors γ ~ Gamma(a_γ, b_γ), κ ~ Gamma(a_κ, b_κ).
Because the generative process of Eq. (7) is non-conjugate, we rely on MH steps which iteratively
resample γ given κ, and κ given γ. Each sub-step uses a gamma proposal distribution q(· | ·) with
fixed variance σ_γ² or σ_κ², and mean equal to the current hyperparameter value. To update γ given κ,
the acceptance probability is min{r(γ' | γ), 1}, where r(γ' | γ) is defined to equal

    r(γ' | γ) = p(γ' | π, κ, F) q(γ | γ') / [p(γ | π, κ, F) q(γ' | γ)]
              = p(π | γ', κ, F) p(γ') q(γ | γ') / [p(π | γ, κ, F) p(γ) q(γ' | γ)]
              = [f(γ') Γ(ϑ) e^{−γ' b_γ} (γ')^{ϑ'−ϑ+a_γ} σ_γ^{−2ϑ'}] / [f(γ) Γ(ϑ') e^{−γ b_γ} γ^{ϑ−ϑ'+a_γ} σ_γ^{−2ϑ}].

Here, ϑ = γ²/σ_γ², ϑ' = (γ')²/σ_γ², and

    f(γ) = ∏_i [ Γ(γK_i + κ)^{K_i} / ( Γ(γ)^{K_i²−K_i} Γ(γ + κ)^{K_i} ) ] ∏_{(j,k)=1}^{K_i} (π_kj^(i))^{γ+κδ(k,j)−1}.

The MH sub-step for resampling κ given γ is similar, but with an appropriately redefined f(κ).
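One such sub-step for γ might look like the following sketch, where log_f evaluates log f(·) from the current transition distributions; the parameterization matches the mean-matched gamma proposal described above, but the code is our own illustration.

    import numpy as np
    from scipy.stats import gamma as gamma_dist

    def resample_gamma(g, log_f, a_g, b_g, sigma2, rng=np.random.default_rng(0)):
        """One MH update of the Dirichlet hyperparameter gamma."""
        def propose_logpdf(new, cur):
            shape = cur**2 / sigma2          # mean = cur, variance = sigma2
            return gamma_dist.logpdf(new, a=shape, scale=cur / shape)
        shape = g**2 / sigma2
        g_new = rng.gamma(shape, g / shape)
        log_r = (log_f(g_new) - log_f(g)
                 + gamma_dist.logpdf(g_new, a=a_g, scale=1.0 / b_g)
                 - gamma_dist.logpdf(g, a=a_g, scale=1.0 / b_g)
                 + propose_logpdf(g, g_new) - propose_logpdf(g_new, g))
        return g_new if np.log(rng.random()) < log_r else g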
5 Synthetic Experiments

To test the ability of the BP-AR-HMM to discover shared dynamics, we generated five time series that
switched between AR(1) models

    y_t^(i) = a_{z_t^(i)} y_{t−1}^(i) + e_t^(i)(z_t^(i))      (22)

with a_k ∈ {−0.8, −0.6, −0.4, −0.2, 0, 0.2, 0.4, 0.6, 0.8} and process noise covariance Σ_k drawn
from an IW(0.5, 3) prior. The object-specific features, shown in Fig. 2(b), were sampled from a
truncated IBP [9] using α = 10 and then used to generate the observation sequences of Fig. 2(a).
The resulting feature matrix estimated over 10,000 MCMC samples is shown in Fig. 2. Comparing
to the true feature matrix, we see that our model is indeed able to discover most of the underlying
latent structure of the time series despite the challenging setting defined by the close AR coefficients.
One might propose, as an alternative to the BP-AR-HMM, using an architecture based on the
hierarchical Dirichlet process of [21]; specifically, we could use the HDP-AR-HMMs of [5] tied
together with a shared set of transition and dynamic parameters. To demonstrate the difference
between these models, we generated data for three switching AR(1) processes. The first two objects,
with four times the data points of the third, switched between dynamical modes defined
Figure 3: (a)-(b) The 10th, 50th, and 90th Hamming distance quantiles for object 3 over 1000 trials for the
HDP-AR-HMMs and BP-AR-HMM, respectively. (c)-(d) Examples of typical segmentations into behavior
modes for the three objects at Gibbs iteration 1000 for the two models (top = estimate, bottom = truth).
Figure 4: Each skeleton plot displays the trajectory of a learned contiguous segment of more than 2 seconds.
To reduce the number of plots, we preprocessed the data to bridge segments separated by fewer than 300 msec.
The boxes group segments categorized under the same feature label, with the color indicating the true feature
label. Skeleton rendering done by modifications to Neil Lawrence's Matlab MoCap toolbox [13].
by a_k ∈ {−0.8, −0.4, 0.8}, and the third object used a_k ∈ {−0.3, 0.8}. The results shown in
Fig. 3 indicate that the multiple HDP-AR-HMM model typically describes the third object using
a_k ∈ {−0.4, 0.8}, since this assignment better matches the parameters defined by the other (lengthy)
time series. These results reiterate that the feature model emphasizes choosing behaviors rather than
assuming all objects are performing minor variations of the same dynamics.

For the experiments above, we placed a Gamma(1, 1) prior on α and κ, and a Gamma(100, 1) prior
on γ. The gamma proposals used σ_γ² = 1 and σ_κ² = 100, while the MNIW prior was given M = 0,
K = 0.1 · I_d, n_0 = d + 2, and S_0 set to 0.75 times the empirical variance of the joint set of
first-difference observations. At initialization, each time series was segmented into five contiguous
blocks, with feature labels unique to that sequence.
6 Motion Capture Experiments

The linear dynamical system is a common model for describing simple human motion [11], and the
more complicated SLDS has been successfully applied to the problem of human motion synthesis,
classification, and visual tracking [17, 18]. Other approaches develop non-linear dynamical models
using Gaussian processes [25] or based on a collection of binary latent features [20]. However, there
has been little effort in jointly segmenting and identifying common dynamic behaviors amongst a
set of multiple motion capture (MoCap) recordings of people performing various tasks. The BP-AR-HMM
provides an ideal way of handling this problem. One benefit of the proposed model, versus
the standard SLDS, is that it does not rely on manually specifying the set of possible behaviors.
Figure 5: (a) MoCap feature matrices associated with BP-AR-HMM (top-left) and HDP-AR-HMM (top-right)
estimated sequences over iterations 15,000 to 20,000, and MAP assignment of the GMM (bottom-left) and
HMM (bottom-right) using first-difference observations and 12 clusters/states. (b) Hamming distance versus
number of GMM clusters / HMM states on raw observations (blue/green) and first-difference observations
(red/cyan), with the BP- and HDP-AR-HMM segmentations (black) and true feature count (magenta) shown for
comparison. Results are for the most likely of 10 EM initializations using Murphy's HMM Matlab toolbox [16].
As an illustrative example, we examined a set of six CMU MoCap exercise routines [23], three
from Subject 13 and three from Subject 14. Each of these routines used some combination of the
following motion categories: running in place, jumping jacks, arm circles, side twists, knee raises,
squats, punching, up and down, two variants of toe touches, arch over, and a reach out stretch.
From the set of 62 position and joint angles, we selected 12 measurements deemed most informative
for the gross motor behaviors we wish to capture: one body torso position, two waist angles, one
neck angle, one set of right and left (R/L) shoulder angles, the R/L elbow angles, one set of R/L hip
angles, and one set of R/L ankle angles. The MoCap data are recorded at 120 fps, and we block-average
the data using non-overlapping windows of 12 frames. Using these measurements, the prior
distributions were set exactly as in the synthetic data experiments, except the scale matrix, S_0, of the
MNIW prior, which was set to 5 times the empirical covariance of the first-difference observations.
This allows more variability in the observed behaviors. We ran 25 chains of the sampler for 20,000
iterations and then examined the chain whose segmentation minimized the expected Hamming distance to the set of segmentations from all chains over iterations 15,000 to 20,000. Future work
includes developing split-merge proposals to further improve mixing rates in high dimensions.
The resulting MCMC sample is displayed in Fig. 4 and in the supplemental video available online.
Although some behaviors are merged or split, the overall performance shows a clear ability to find
common motions. The split behaviors shown in green and yellow can be attributed to the two
subjects performing the same motion in a distinct manner (e.g., knee raises in combination with
upper body motion or not, running with hands in or out of sync with knees, etc.). We compare
our performance both to the HDP-AR-HMM and to the Gaussian mixture model (GMM) method
of Barbić et al. [1] using EM initialized with k-means. Barbić et al. [1] also present an approach
based on probabilistic PCA, but this method focuses primarily on change-point detection rather than
behavior clustering. As further comparisons, we look at a GMM on first difference observations,
and an HMM on both data sets. The results of Fig. 5(b) demonstrate that the BP-AR-HMM provides
more accurate frame labels than any of these alternative approaches over a wide range of mixture
model settings. In Fig. 5(a), we additionally see that the BP-AR-HMM provides a superior ability
to discover the shared feature structure.
7 Discussion
Utilizing the beta process, we developed a coherent Bayesian nonparametric framework for discovering dynamical features common to multiple time series. This formulation allows for object-specific variability in how the dynamical behaviors are used. We additionally developed a novel
exact sampling algorithm for non-conjugate beta process models. The utility of our BP-AR-HMM
was demonstrated both on synthetic data, and on a set of MoCap sequences where we showed performance exceeding that of alternative methods. Although we focused on switching VAR processes,
our approach could be equally well applied to a wide range of other switching dynamical systems.
Acknowledgments
This work was supported in part by MURIs funded through AFOSR Grant FA9550-06-1-0324 and ARO Grant
W911NF-06-1-0076.
References
[1] J. Barbić, A. Safonova, J.-Y. Pan, C. Faloutsos, J.K. Hodgins, and N.S. Pollard. Segmenting motion capture data into distinct behaviors. In Proc. Graphics Interface, pages 185–194, 2004.
[2] M.J. Beal, Z. Ghahramani, and C.E. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems, volume 14, pages 577–584, 2002.
[3] A.C. Courville, N. Daw, G.J. Gordon, and D.S. Touretzky. Model uncertainty in classical conditioning. In Advances in Neural Information Processing Systems, volume 16, pages 977–984, 2004.
[4] E.B. Fox, E.B. Sudderth, M.I. Jordan, and A.S. Willsky. An HDP-HMM for systems with state persistence. In Proc. International Conference on Machine Learning, July 2008.
[5] E.B. Fox, E.B. Sudderth, M.I. Jordan, and A.S. Willsky. Nonparametric Bayesian learning of switching dynamical systems. In Advances in Neural Information Processing Systems, volume 21, pages 457–464, 2009.
[6] A. Frigessi, P. Di Stefano, C.R. Hwang, and S.J. Sheu. Convergence rates of the Gibbs sampler, the Metropolis algorithm and other single-site updating dynamics. Journal of the Royal Statistical Society, Series B, pages 205–219, 1993.
[7] D. Görür, F. Jäkel, and C.E. Rasmussen. A choice model with infinitely many latent features. In Proc. International Conference on Machine Learning, June 2006.
[8] P.J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711–732, 1995.
[9] T.L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. Gatsby Computational Neuroscience Unit, Technical Report #2005-001, 2005.
[10] N.L. Hjort. Nonparametric Bayes estimators based on beta processes in models for life history data. The Annals of Statistics, pages 1259–1294, 1990.
[11] E. Hsu, K. Pulli, and J. Popović. Style translation for human motion. In SIGGRAPH, pages 1082–1089, 2005.
[12] J.F.C. Kingman. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78, 1967.
[13] N. Lawrence. MATLAB motion capture toolbox. http://www.cs.man.ac.uk/~neill/mocap/.
[14] J.S. Liu. Peskun's theorem and a modified discrete-state Gibbs sampler. Biometrika, 83(3):681–682, 1996.
[15] E. Meeds, Z. Ghahramani, R.M. Neal, and S.T. Roweis. Modeling dyadic data with binary latent factors. In Advances in Neural Information Processing Systems, volume 19, pages 977–984, 2007.
[16] K.P. Murphy. Hidden Markov model (HMM) toolbox for MATLAB. http://www.cs.ubc.ca/~murphyk/Software/HMM/hmm.html.
[17] V. Pavlović, J.M. Rehg, T.J. Cham, and K.P. Murphy. A dynamic Bayesian network approach to figure tracking using learned dynamic models. In Proc. International Conference on Computer Vision, September 1999.
[18] V. Pavlović, J.M. Rehg, and J. MacCormick. Learning switching linear models of human motion. In Advances in Neural Information Processing Systems, volume 13, pages 981–987, 2001.
[19] L.R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.
[20] G.W. Taylor, G.E. Hinton, and S.T. Roweis. Modeling human motion using binary latent variables. In Advances in Neural Information Processing Systems, volume 19, pages 1345–1352, 2007.
[21] Y.W. Teh, M.I. Jordan, M.J. Beal, and D.M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[22] R. Thibaux and M.I. Jordan. Hierarchical beta processes and the Indian buffet process. In Proc. International Conference on Artificial Intelligence and Statistics, volume 11, 2007.
[23] Carnegie Mellon University. Graphics lab motion capture database. http://mocap.cs.cmu.edu/.
[24] J. Van Gael, Y.W. Teh, and Z. Ghahramani. The infinite factorial hidden Markov model. In Advances in Neural Information Processing Systems, volume 21, pages 1697–1704, 2009.
[25] J.M. Wang, D.J. Fleet, and A. Hertzmann. Gaussian process dynamical models for human motion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(2):283–298, 2008.
[26] M. West and J. Harrison. Bayesian Forecasting and Dynamic Models. Springer, 1997.
[27] F. Wood, T.L. Griffiths, and Z. Ghahramani. A non-parametric Bayesian method for inferring hidden causes. In Proc. Conference on Uncertainty in Artificial Intelligence, volume 22, 2006.
2,963 | 3,686 | Directed Regression
Yi-hao Kao
Stanford University
Stanford, CA 94305
[email protected]
Benjamin Van Roy
Stanford University
Stanford, CA 94305
[email protected]
Xiang Yan
Stanford University
Stanford, CA 94305
[email protected]
Abstract
When used to guide decisions, linear regression analysis typically involves estimation of regression coefficients via ordinary least squares and their subsequent
use to make decisions. When there are multiple response variables and features
do not perfectly capture their relationships, it is beneficial to account for the decision objective when computing regression coefficients. Empirical optimization
does so but sacrifices performance when features are well-chosen or training data
are insufficient. We propose directed regression, an efficient algorithm that combines merits of ordinary least squares and empirical optimization. We demonstrate
through a computational study that directed regression can generate significant
performance gains over either alternative. We also develop a theory that motivates
the algorithm.
1 Introduction
When used to guide decision-making, linear regression analysis typically treats estimation of regression coefficients separately from their use to make decisions. In particular, estimation is carried
out via ordinary least squares (OLS) without consideration of the decision objective. The regression
coefficients are then used to optimize decisions.
When there are multiple response variables and features do not perfectly capture their relationships,
it is beneficial to account for the decision objective when computing regression coefficients. Imperfections in feature selection are common since it is difficult to identify the right features and the
number of features is typically restricted in order to avoid over-fitting.
Empirical optimization (EO) is an alternative to OLS which selects coefficients that minimize empirical loss in the training data. Though it accounts for the decision objective when computing
regression coefficients, EO sacrifices performance when features are well-chosen or training data is
insufficient.
In this paper, we propose a new algorithm, directed regression (DR), which is a hybrid between
OLS and EO. DR selects coefficients that are a convex combination of those that would be selected by OLS and those by EO. The weights of the OLS and EO coefficients are optimized via cross-validation.
We study DR for the case of decision problems with quadratic objective functions. The algorithm
takes as input a training set of data pairs, each consisting of feature vectors and response variables,
together with a quadratic loss function that depends on decision variables and response variables.
Regression coefficients are computed for subsequent use in decision-making. Each future decision
depends on newly sampled feature vectors and is made prior to observing response variables with
the goal of minimizing expected loss.
We present computational results demonstrating that DR can substantially outperform both OLS and
EO. These results are for synthetic problems with regression models that include subsets of relevant
1
features. In some cases, OLS and EO deliver comparable performance while DR reduces expected
loss by about 20%. In none of the cases considered does either OLS or EO outperform DR.
We also develop a theory that motivates DR. This theory is based on a model in which selected
features do not perfectly capture relationships among response variables. We prove that, for this
model, the optimal vector of coefficients is a convex combination of those that would be generated
by OLS and EO.
2
Linear Regression for Decision-Making
Suppose we are given a set of training data pairs O = {(x(1) , y (1) ), ? ? ? , (x(N ) , y (N ) )}. Each nth
(n)
(n)
data pair is comprised of feature vectors x1 , . . . , xK ? <M and a vector y (n) ? <M of response
K
variables. We would like
Pto compute regression coefficients r ? < so that given a data pair (x, y),
the linear combination k rk xk of feature vectors estimates the expectation of y conditioned on x.
We restrict attention to cases where M > 1, with special interest in problems where M is large,
because it is in such situations that DR offers the largest performance gains.
We consider a setting where the regression model is used to guide future decisions. In particular,
after computing regression coefficients, each time we observe feature vectors x1 , . . . , xK we will
have to select a decision u ? <L before observing the response vector y. The choice incurs a loss
`(u, y) = u> G1 u + u> G2 y,
where the matrices G1 ? <L?L and G2 ? <L?M are known, and the former is positive definite and
symmetric. We aim to minimize expected loss, assuming that the conditional expectation of y given
PK
x is k=1 rk xk . As such, given x and r, we select a decision
? K
!
K
X
1 ?1 X
rk xk .
ur (x) = argmin ` u,
rk xk = ? G1 G2
2
u
k=1
k=1
The question is how best to compute the regression coefficients r for this purpose.
To motivate the setting we have described, we offer a hypothetical application.
Example 1. Consider an Internet banner ad campaign that targets M classes of customers. An
average revenue of ym is received per customer of class m that the campaign reaches. This quantity
is random and influenced by K observable factors x1m , . . . , xKm . These factors may be correlated
across customers classes; for example, they could capture customer preferences as they relate to
ad content or how current economic conditions affect customers. For each mth class, the cost of
reaching the um th customer increases with um because ads are first targeted at customers that can
be reached at lower cost. This cost is quadratic, so that we pay ?m u2m to reach um customers, where
?m is a known constant.
The application we have described fits our general
P problem context. It is natural to predict the
response vector y using a linear combination k rk xk of factors with the regression coefficients
rk computed based on past observations O = {(x(1) , y (1) ), ? ? ? , (x(N ) , y (N ) )}. The goal is to
maximize expected revenue less advertising costs. This gives rise to a loss function that is quadratic
in u and y:
M
X
`(u, y) =
(?m u2m ? um ym ).
m=1
One might ask why not construct M separate linear regression models, one for each response variable, each with a separate set of K coefficients. The reason is that this gives rise to M K coefficients;
when M is large and data is limited, this could lead to over-fitting. Models of the sort we consider,
where regression coefficients are shared across multiple response variables, are sometimes referred
to as general linear models and have seen a wide range of applications [7, 8]. It is well-known
that the quality of results is highly sensitive to the choice of features, even more so than for models
involving a single response variable [7].
2
3
Algorithms
Ordinary least squares (OLS) is a conventional approach to computing regression coefficients. This
would produce a coefficient vector
?
?2
N ?
K
?
X
? (n) X
(n) ?
OLS
r
= argmin
rk xk ? .
(1)
?y ?
?
?
r?<K
n=1
k=1
Note that OLS does not take the decision objective into account when computing regression coefficients. Empirical optimization (EO), as studied for example in [2, 6], offers an alternative that does
so. This approach minimizes empirical loss on the training data:
rEO = argmin
N
X
`(ur (x(n) ), y (n) ).
(2)
r?<K n=1
Note that EO does not explicitly aim to estimate the conditional expectation of the response vector.
Instead it focusses on decision loss that would be incurred with the training data. Both rOLS and
rEO can be computed efficiently by minimizing convex quadratic functions.
As we will see in our computational and theoretical analyses, OLS and EO can be viewed as two
extremes, each offering room for improvement. In this paper, we propose an alternative algorithm
? directed regression (DR) ? which produces a convex combination rDR = (1 ? ?)rOLS + ?rEO
of coefficients computed by OLS and EO. The term directed is chosen to indicate that DR is influenced by the decision objective though, unlike EO, it does not simply minimize empirical loss. The
parameter ? ? [0, 1] is computed via cross-validation, with an objective of minimizing average loss
on validation data. Average loss is a convex quadratic function of ?, and therefore can be easily
minimized over ? ? [0, 1].
DR is designed to generate decisions that are more robust to imperfections in feature selection than
OLS. As such, DR addresses issues similar to those that have motivated work in data-driven robust
optimization, as surveyed in [3]. Our focus on making good decisions despite modeling inaccuracies
also complements recent work that studies how models deployed in practice can generate effective
decisions despite their failure to pass basic statistical tests [4].
4
Computational Results
In this section, we present results from applying OLS, EO, and DR to synthetic data. To generate a
data set, we first sample parameters of a generative model as follows:
1. Sample P matrices C1 , . . . , CP ? <M ?Q , with each entry from each matrix drawn independently from N (0, 1).
2. Sample a vector r? ? <P from N (0, I).
3. Sample Ga ? <L?L and Gb ? <L?M , with each entry of each matrix drawn from N (0, 1).
>
Let G1 = G>
a Ga and G2 = Ga Gb .
Given generative model parameters C1 , . . . , CP and r?, we sample each training data pair (x(n) , y (n) )
as follows:
2
1. Sample a vector ?(n) ? <Q from N (0, I) and a vector w(n) ? <M from N (0, ?w
I).
P
P
(n)
(n)
(n)
2. Let y = i=1 r?i Ci ? + w .
(n)
3. For each k = 1, 2, ? ? ? , K, let xk
= Ck ?(n) .
The vector ?(n) can be viewed as a sample from an underlying information space. The matrices
C1 , . . . , CP extract feature vectors from ?(n) . Note that, though response variables depend on P
feature vectors, only K ? P are used in the regression model.
Given generative model parameters and a coefficient vector r ? <K , it is easy to evaluate the
expected loss `(r) = Ex,y [`(ur (x), y)]. It is also easy to evaluate the minimal expected loss `? =
3
Figure 1: (a) Excess losses delivered by OLS, EO, and DR, for different numbers N of training
samples. (b) Excess losses delivered by OLS, EO, and DR, using different numbers K of the 60
features.
We will assess each algorithm in terms of the excess loss ℓ(r) − ℓ* delivered
by the coefficient vector r that the algorithm computes. Excess loss is nonnegative, and this allows
us to make comparisons in percentage terms.

We carried out two sets of experiments to compare the performance of OLS, EO, and DR. In the
first set, we let M = 15, L = 15, P = 60, Q = 20, σ_w = 5, and K = 50. For each N ∈
{10, 15, 20, 30, 50}, we ran 100 trials, each with an independently sampled generative model and
training data set. In each trial, each algorithm computes a coefficient vector given the training data
and loss function. With DR, λ is selected via leave-one-out cross-validation when N ≤ 20, and via
5-fold cross-validation when N > 20. Figure 1(a) plots excess losses averaged over trials. Note that
the excess loss incurred by DR is never larger than that of OLS or EO. Further, when N = 20, the
excess losses of OLS and EO are both around 20% larger than that of DR. For small N, OLS is as
effective as DR, while EO becomes as effective as DR as N grows large.

In the second set of experiments, we use the same parameter values as in the first set, except we fix
N = 20 and consider use of K ∈ {45, 50, 55, 58, 60} feature vectors. Again, we ran 100 trials for
each K, applying the three algorithms as in the first set of experiments. Figure 1(b) plots excess
losses averaged over trials. Note that when K = 55, DR delivers excess loss around 20% less than
EO and OLS. When K = P = 60, there are no missing features and OLS matches the performance
of DR.

Figure 2 plots the values of λ selected by cross-validation, each averaged over the 100 trials, as
a function of N and K. As the number of training samples N grows, so does λ, indicating that
DR is weighted more heavily toward EO. As the number of feature vectors K grows, λ diminishes,
indicating that DR is weighted more heavily toward OLS.
5 Theoretical Analysis

In this section, we formulate a generative model for the training data and future observations. For
this model, optimal coefficients are convex combinations of r^OLS and r^EO. As such, our model and
analysis motivate the use of DR.

5.1 Model
In this section, we describe a generative model that samples the training data set, as well as "missing
features," and a representative future observation. We then formulate an optimization problem where
the objective is to minimize expected loss on the future observation conditioned on the training data
and missing features. It may seem strange to condition on missing features since in practice they are
unavailable when computing regression coefficients. However, we will later establish that optimal
Figure 2: (a) The average values of selected λ, for different numbers N of training samples. (b) The
average values of selected λ, using different numbers K of the 60 features.
coefficients are convex combinations of r^OLS and r^EO, each of which can be computed without
observing missing features. Since directed regression searches over these convex combinations, it
should approximate what would be generated by a hypothetical algorithm that observes missing
features.

We will assume that each feature, whether observed or missing, is a linear function of an "information
vector" drawn from ℝ^Q. Specifically, the N training data samples depend on information
vectors β^(1), . . . , β^(N) ∈ ℝ^Q. A linear function mapping an information vector to a feature vector
can be represented by a matrix in ℝ^{M×Q}, and to describe our generative model, it is useful to define
an inner product for such matrices. In particular, we define the inner product between matrices A
and B by

    ⟨A, B⟩ = (1/N) ∑_{n=1}^N (Aβ^(n))^T (Bβ^(n)).

Our generative model takes several parameters as input. First, there are the number of samples
N, the number of response variables M, and the number of feature vectors K. Second, a parameter
μ_Q specifies the expected dimension of the information vector. Finally, there are standard deviations
σ_r, σ_z, and σ_w, of observed feature coefficients, missing feature coefficients, and noise, respectively.
Given parameters N, M, K, μ_Q, σ_r, σ_z, and σ_w, the generative model produces data as follows:

1. Sample Q from the geometric distribution with mean μ_Q.
2. Sample β^(1), . . . , β^(N) ∈ ℝ^Q from N(0, I_Q).
3. Sample C_1, . . . , C_K and D_1, . . . , D_J ∈ ℝ^{M×Q} with each entry i.i.d. from N(0, 1), where
   K + J = MQ.
4. Apply the Gram-Schmidt algorithm with respect to the inner product defined
   above to generate an orthonormal basis C̃_1, . . . , C̃_K, D̃_1, . . . , D̃_J from the sequence
   C_1, . . . , C_K, D_1, . . . , D_J.
5. Sample r* ∈ ℝ^K from N(0, σ_r² I_K) and r̄* ∈ ℝ^J from N(0, σ_z² I_J).
6. For n = 1, . . . , N, sample w^(n) ∈ ℝ^M from N(0, σ_w² I_M), and let

       x^(n) = [C_1 β^(n) · · · C_K β^(n)],      (3)
       z^(n) = [D̃_1 β^(n) · · · D̃_J β^(n)],      (4)
       y^(n) = ∑_{k=1}^K r*_k x_k^(n) + ∑_{j=1}^J r̄*_j z_j^(n) + w^(n).      (5)

7. Sample β̄ uniformly from {β^(1), . . . , β^(N)} and w̄ ∈ ℝ^M from N(0, σ_w² I_M). Generate x̄,
   z̄, and ȳ by the same functions in (3), (4), and (5).
The samples z^(1), . . . , z^(N), z̄ represent missing features. The Gram-Schmidt procedure ensures two
properties. First, since ⟨C_k, D̃_j⟩ = 0, missing features are uncorrelated with observed features. If
this were not the case, observed features would provide information about missing features. Second,
since D̃_1, . . . , D̃_J are orthonormal, the distribution of missing features is invariant to rotations in the
J-dimensional subspace from which they are drawn. In other words, all directions in that space are
equally likely.

We define an augmented training set O = {(x^(1), z^(1), y^(1)), . . . , (x^(N), z^(N), y^(N))} and consider
selecting regression coefficients r̂ ∈ ℝ^K that solve

    min_{r∈ℝ^K} E[ℓ(u_r(x̄), ȳ) | O].

Note that the probability distribution here is implicitly defined by our generative model, and as such,
r̂ may depend on N, M, K, μ_Q, σ_r, σ_z, σ_w, and O.
5.2 Optimal Solutions

Our primary interest is in cases where prior knowledge about the coefficients r* is weak and does
not significantly influence r̂. As such, we will from here on restrict attention to the case where σ_r is
asymptotically large. Hence, r̂ will no longer depend on σ_r.

It is helpful to consider two special cases. One is where σ_z = 0 and the other is where σ_z is
asymptotically large. We will refer to r̂ in these extreme cases as r̂_0 and r̂_∞. The following theorem
establishes that these extremes are delivered by OLS and EO.

Theorem 1. For all N, M, K, μ_Q, σ_w, and O,

    r̂_0 = argmin_{r∈ℝ^K} ∑_{n=1}^N ‖ y^(n) − ∑_{k=1}^K r_k x_k^(n) ‖²

and

    r̂_∞ = argmin_{r∈ℝ^K} ∑_{n=1}^N ℓ(u_r(x^(n)), y^(n)).
Note that σ_z represents the degree of bias in a regression model that assumes there are no missing
features. Hence, the above theorem indicates that OLS is optimal when there is no bias while EO
is optimal as the bias becomes asymptotically large. It is also worth noting that the coefficient
vectors r̂_0 and r̂_∞ can be computed without observing the missing features, though r̂ is defined by
an expectation that is conditioned on their realizations. Further, computation of r̂_0 and r̂_∞ does not
require knowledge of Q or σ_w.

Our next theorem establishes that the coefficient vector r̂ is always a convex combination of r̂_0 and
r̂_∞.

Theorem 2. For all N, M, K, μ_Q, σ_w, σ_z, and O,

    r̂ = (1 − λ) r̂_0 + λ r̂_∞,

where λ = 1 / (1 + σ_w² / (N σ_z²)).
Our two theorems together imply that, with an appropriately selected λ ∈ [0, 1], (1 − λ)r^OLS +
λr^EO = r̂. This suggests that directed regression, which optimizes λ via cross-validation to generate
a coefficient vector r^DR = (1 − λ)r^OLS + λr^EO, should approximate r̂ well without observing the
missing features or requiring knowledge of Q, σ_z, or σ_w.
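If σ_z and σ_w were known, the Theorem 2 blend could be computed directly; a two-line sketch of the formula (our own illustration):

    def theorem2_blend(r0, r_inf, N, sigma_w, sigma_z):
        """r_hat = (1 - lam) r0 + lam r_inf, with lam from Theorem 2."""
        lam = 1.0 / (1.0 + sigma_w**2 / (N * sigma_z**2))
        return (1 - lam) * r0 + lam * r_inf

In practice these parameters are unknown, which is why DR selects λ by cross-validation.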
5.3 Interpretation

To develop intuition for our results, we consider an idealized situation where the coefficients r* and
r̄* are provided to us by an oracle. Then the optimal coefficient vector would be

    r^O = argmin_{r∈ℝ^K} E[ℓ(u_r(x̄), ȳ) | O, r*, r̄*].

It can be shown that r^OLS is a biased estimator of r^O, while r^EO is an unbiased one. However,
the variance of r^OLS is smaller than that of r^EO. The optimal tradeoff is indeed captured by the
value of λ provided in Theorem 2. In particular, as the number of training samples N increases,
variance diminishes and λ approaches 1, placing increasing weight on EO. On the other hand, as
the number of observed features K increases, model bias decreases and λ approaches 0, placing
increasing weight on OLS. Our experimental results demonstrate that the value of λ selected by
cross-validation exhibits the same behavior.
6 Extensions
Though we only treated linear models and quadratic objective functions, our work suggests that
there can be significant gains in broader problem settings from a tighter coupling between machine
learning and decision-making. In particular, machine learning algorithms should factor decision
objectives into the learning process. It will be interesting to explore how to do this with other
classes of models and objectives.
One might argue that feature mis-specification is not a critical issue in light of effective methods
for subset selection. In particular, rather than selecting a few features and facing the consequences
of model bias, one might select an enormous set of features and apply a method like the lasso [10]
to identify a small subset. Our view is that even this enormous set will result in model biases that
might be ameliorated by generalizations of DR. There is also the concern that data requirements
grow with the size of the large feature set, albeit slowly. Understanding how to synthesize DR with
subset selection methods is an interesting direction for future research.
Another issue that should be explored is the effectiveness of cross-validation in optimizing ?. In
particular, it would be helpful to understand how the estimate relates to the ideal value of ? identified
by Theorem 2. More general work on the selection of convex combinations of models (e.g., [1, 5])
may lend insights to our setting.
Let us close by mentioning that the ideas behind DR ought to play a role in reinforcement learning
(RL) as presented in [9]. RL algorithms learn from experience to predict a sum of future rewards
as a function of a state, typically by fitting a linear combination of features of the state. This socalled approximate value function is then used to guide sequential decision-making. The problem
we addressed in this paper can be viewed as a single-period version of RL, in the sense that each
decision incurs an immediate cost but bears no further consequences. It would be interesting to
extend our idea to the multi-period case.
Acknowledgments
We thank James Robins for helpful comments and suggestions. The first author is supported by a
Stanford Graduate Fellowship. This research was supported in part by the National Science Foundation through grant CMMI-0653876.
Appendix

Proof of Theorem 1. For each $n$, let $x^{(n)} = \big[ x_1^{(n)} \cdots x_K^{(n)} \big]$ and $z^{(n)} = \big[ z_1^{(n)} \cdots z_J^{(n)} \big]$. Let
$$X = \big[ x^{(1)\top} \cdots x^{(N)\top} \big]^\top, \quad Z = \big[ z^{(1)\top} \cdots z^{(N)\top} \big]^\top, \quad Y = \big[ y^{(1)\top} \cdots y^{(N)\top} \big]^\top,$$
$\bar{r} = E[r\,|\,O]$, $\bar{r}' = E[r'\,|\,O]$. For any matrix $V$, let $V^\dagger$ denote $(V^\top V)^{-1} V^\top$. Recall that $\langle C_k, \bar{D}_j \rangle = 0$, $\forall k, j$, implies that each column of $X$ is orthogonal to each column of $Z$. Because $\bar{r}$, $\bar{r}'$, $O$ are jointly Gaussian, as $\sigma_r \to \infty$, we have
$$(\bar{r}, \bar{r}') = \operatorname*{argmin}_{(r, r')} \frac{1}{2\sigma_w^2} \sum_{n=1}^{N} \Bigg\| y^{(n)} - \sum_{k=1}^{K} r_k x_k^{(n)} - \sum_{j=1}^{J} r'_j z_j^{(n)} \Bigg\|^2 + \frac{1}{2\tilde{\sigma}^2} \sum_{j=1}^{J} r_j'^2$$
$$= \operatorname*{argmin}_{(r, r')} \Bigg\| \begin{bmatrix} \frac{1}{\sigma_w} Y \\ 0 \end{bmatrix} - \begin{bmatrix} \frac{1}{\sigma_w} X & \frac{1}{\sigma_w} Z \\ 0 & \frac{1}{\tilde{\sigma}} I_J \end{bmatrix} \begin{bmatrix} r \\ r' \end{bmatrix} \Bigg\|^2 = \begin{bmatrix} (X^\top X)^{-1} X^\top Y \\ \big( Z^\top Z + \frac{\sigma_w^2}{\tilde{\sigma}^2} I \big)^{-1} Z^\top Y \end{bmatrix}.$$

Let $a^{(n)} = G_1^{-\frac{1}{2}} G_2 x^{(n)}$, $b^{(n)} = G_1^{-\frac{1}{2}} G_2 z^{(n)}$, $A = \big[ a^{(1)\top} \cdots a^{(N)\top} \big]^\top$, $B = \big[ b^{(1)\top} \cdots b^{(N)\top} \big]^\top$. We have
$$r^* = \operatorname*{argmin}_r E[\ell(u_r(\tilde{x}), \tilde{y})\,|\,O] = \operatorname*{argmin}_r \frac{1}{N} \sum_{n=1}^{N} E_{\tilde{y}}\big[\ell(u_r(\tilde{x}), \tilde{y})\,\big|\,\tilde{x} = x^{(n)}, O\big]$$
$$= \operatorname*{argmin}_r \sum_{n=1}^{N} u_r(x^{(n)})^\top G_1 u_r(x^{(n)}) + u_r(x^{(n)})^\top G_2 E[\tilde{y}\,|\,\tilde{x} = x^{(n)}, O]$$
$$= \operatorname*{argmin}_r \sum_{n=1}^{N} \frac{1}{4} r^\top a^{(n)\top} a^{(n)} r - \frac{1}{2} r^\top a^{(n)\top} \big( a^{(n)} \bar{r} + b^{(n)} \bar{r}' \big)$$
$$= \bar{r} + A^\dagger B \bar{r}' = X^\dagger Y + A^\dagger B \Big( Z^\top Z + \frac{\sigma_w^2}{\tilde{\sigma}^2} I \Big)^{-1} Z^\top Y. \qquad (6)$$
Taking $\tilde{\sigma} \to 0$ and $\tilde{\sigma} \to \infty$ yields
$$\hat{r}_0 = X^\dagger Y, \qquad (7)$$
$$\hat{r}_\infty = X^\dagger Y + A^\dagger B Z^\dagger Y. \qquad (8)$$
The first part of the theorem then follows because
$$\hat{r}_0 = X^\dagger Y = \operatorname*{argmin}_r \| Y - Xr \|^2 = \operatorname*{argmin}_r \sum_{n=1}^{N} \Bigg\| y^{(n)} - \sum_{k=1}^{K} r_k x_k^{(n)} \Bigg\|^2.$$
We now prove the second part. Note that
$$\operatorname*{argmin}_r \sum_{n=1}^{N} \ell(u_r(x^{(n)}), y^{(n)}) = \operatorname*{argmin}_r \sum_{n=1}^{N} u_r(x^{(n)})^\top G_1 u_r(x^{(n)}) + u_r(x^{(n)})^\top G_2 y^{(n)}$$
$$= \operatorname*{argmin}_r\; r^\top A^\top A r - 2 r^\top \sum_{n=1}^{N} h^{(n)\top} y^{(n)} = (A^\top A)^{-1} H^\top Y,$$
where $h^{(n)} = G_2^\top G_1^{-1} G_2 x^{(n)}$ and $H = \big[ h^{(1)\top} \cdots h^{(N)\top} \big]^\top$. Each $k$th column of $H$,
$$h_k = \begin{bmatrix} G_2^\top G_1^{-1} G_2\, C_k^{(1)} \\ \vdots \\ G_2^\top G_1^{-1} G_2\, C_k^{(N)} \end{bmatrix},$$
is in $\operatorname{span}\{\operatorname{col} X, \operatorname{col} Z\}$ because $G_2^\top G_1^{-1} G_2\, C_k \in \operatorname{span}\{C_1, \ldots, C_K, \bar{D}_1, \ldots, \bar{D}_J\}$. Since the residual $Y^0 = Y - XX^\dagger Y - ZZ^\dagger Y$ upon projecting $Y$ onto $\operatorname{span}\{\operatorname{col} X, \operatorname{col} Z\}$ is orthogonal to the subspace, we have $h_k^\top Y^0 = 0$, $\forall k$, and hence $H^\top Y^0 = 0$. This implies $H^\top Y = H^\top XX^\dagger Y + H^\top ZZ^\dagger Y$. Further, since $a^{(n)\top} a^{(n)} = h^{(n)\top} x^{(n)}$ and $a^{(n)\top} b^{(n)} = h^{(n)\top} z^{(n)}$, $\forall n$, we have
$$\hat{r}_\infty = X^\dagger Y + A^\dagger B Z^\dagger Y = (A^\top A)^{-1} \big( A^\top A X^\dagger Y + A^\top B Z^\dagger Y \big) = (A^\top A)^{-1} \big( H^\top X X^\dagger Y + H^\top Z Z^\dagger Y \big) = (A^\top A)^{-1} H^\top Y.$$

Proof of Theorem 2. Because $\langle \bar{D}_i, \bar{D}_j \rangle = \mathbf{1}\{i = j\}$, we have $Z^\top Z = NI$. Plugging this into (6) and comparing the resultant expression with (7) and (8) yields the desired result.
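The following small numerical check is ours, not part of the paper: it verifies that the stacked least-squares problem in the proof decouples as in the displayed matrix identity when the columns of X are orthogonal to the columns of Z, and that the limits σ̃ → 0 and σ̃ → ∞ of the ridge block recover the ingredients of equations (7) and (8).

```python
# Numerical sanity check (ours) for the decoupling used in the proof of
# Theorem 1 and for the sigma-tilde limits behind equations (7) and (8).
import numpy as np

rng = np.random.default_rng(0)
N, K, J = 50, 3, 2
X = rng.normal(size=(N, K))
Z = rng.normal(size=(N, J))
Z -= X @ np.linalg.lstsq(X, Z, rcond=None)[0]   # make columns of Z orthogonal to X
Y = rng.normal(size=N)
sigma_w, sigma_t = 1.0, 0.1                     # sigma_t plays the role of sigma-tilde

def dagger(V):
    # V^dagger = (V'V)^{-1} V'
    return np.linalg.solve(V.T @ V, V.T)

# Solve the stacked ridge problem directly ...
top = np.hstack([X, Z]) / sigma_w
bottom = np.hstack([np.zeros((J, K)), np.eye(J) / sigma_t])
M = np.vstack([top, bottom])
rhs = np.concatenate([Y / sigma_w, np.zeros(J)])
stacked = np.linalg.lstsq(M, rhs, rcond=None)[0]

# ... and compare with the decoupled closed form from the proof.
r_bar = dagger(X) @ Y
rp_bar = np.linalg.solve(Z.T @ Z + (sigma_w / sigma_t) ** 2 * np.eye(J), Z.T @ Y)
assert np.allclose(stacked, np.concatenate([r_bar, rp_bar]))

# sigma-tilde -> 0 kills the second block (eq. (7)); sigma-tilde -> infinity
# turns it into Z^dagger Y, the ingredient of eq. (8).
rp_small = np.linalg.solve(Z.T @ Z + (sigma_w / 1e-6) ** 2 * np.eye(J), Z.T @ Y)
rp_large = np.linalg.solve(Z.T @ Z + (sigma_w / 1e6) ** 2 * np.eye(J), Z.T @ Y)
assert np.allclose(rp_small, 0, atol=1e-8)
assert np.allclose(rp_large, dagger(Z) @ Y, atol=1e-6)
```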
References
[1] J.-Y. Audibert. Aggregated estimators and empirical complexity for least square regression. Annales de l'Institut Henri Poincaré, Probability and Statistics, 40(6):685–736, 2004.
[2] P. L. Bartlett and S. Mendelson. Empirical minimization. Probability Theory and Related Fields, 135(3):311–334, 2006.
[3] D. Bertsimas and A. Thiele. Robust and data-driven optimization: Modern decision-making under uncertainty. In Tutorials on Operations Research. INFORMS, 2006.
[4] O. Besbes, R. Philips, and A. Zeevi. Testing the validity of a demand model: An operations perspective. 2007.
[5] F. Bunea, A. B. Tsybakov, and M. H. Wegkamp. Aggregation for Gaussian regression. The Annals of Statistics, 35(4):1674–1697, 2007.
[6] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100:78–150, 1992.
[7] K. Kim and N. Timm. Univariate and Multivariate General Linear Models: Theory and Applications with SAS. Chapman & Hall/CRC, 2006.
[8] K. E. Muller and P. W. Stewart. Linear Model Theory: Univariate, Multivariate, and Mixed Models. Wiley, 2006.
[9] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[10] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
2,964 | 3,687 | Non-stationary continuous dynamic Bayesian networks
Marco Grzegorczyk
Department of Statistics, TU Dortmund University, 44221 Dortmund, Germany
[email protected]
Dirk Husmeier
Biomathematics & Statistics Scotland (BioSS)
JCMB, The King's Buildings, Edinburgh EH9 3JZ, United Kingdom
[email protected]
Abstract
Dynamic Bayesian networks have been applied widely to reconstruct the structure
of regulatory processes from time series data. The standard approach is based on
the assumption of a homogeneous Markov chain, which is not valid in many real-world scenarios. Recent research efforts addressing this shortcoming have considered undirected graphs, directed graphs for discretized data, or over-flexible
models that lack any information sharing among time series segments. In the
present article, we propose a non-stationary dynamic Bayesian network for continuous data, in which parameters are allowed to vary among segments, and in
which a common network structure provides essential information sharing across
segments. Our model is based on a Bayesian multiple change-point process, where
the number and location of the change-points is sampled from the posterior distribution.
1 Introduction
There has recently been considerable interest in structure learning of Bayesian networks. Examples from the topical field of systems biology are the reconstruction of transcriptional regulatory
networks from gene expression data [1], the inference of signal transduction pathways from protein concentrations [2], and the identification of neural information flow operating in the brains of
songbirds [3]. In particular, dynamic Bayesian networks (DBNs) have been applied, as they allow
feedback loops and recurrent regulatory structures to be modelled while avoiding the ambiguity
about edge directions common to static Bayesian networks. The standard assumption underpinning
DBNs is that of stationarity: time-series data are assumed to have been generated from a homogeneous Markov process. However, regulatory interactions and signal transduction processes in the
cell are usually adaptive and change in response to external stimuli. Likewise, neural information
flow slowly adapts via Hebbian learning to make the processing of sensory information more efficient. The assumption of stationarity is therefore too restrictive in many circumstances, and can
potentially lead to erroneous conclusions.
In the recent past, various research efforts have addressed this issue and proposed models that relax
the stationarity assumption. Talih and Hengartner [4] proposed a time-varying Gaussian graphical
model (GGM), in which the time-varying variance structure of the data was inferred with reversible
jump (RJ) Markov chain Monte Carlo (MCMC). A limitation of this approach is that changes of the
network structure between different segments are restricted to changing at most a single edge, and
the total number of segments is assumed known a priori. Xuan and Murphy [5] developed a related
non-stationary GGM based on a product partition model. The method allows for separate structures
in different segments, where the number of structures is inferred from the data.

                      Proposed        Robinson &          Lèbre           Grzegorczyk      Ko et al.
                      here            Hartemink (2009)    (2008)          et al. (2008)    (2007)
  Score               Marginal        Marginal            Marginal        Marginal         BIC
                      Likelihood      Likelihood          Likelihood      Likelihood
  Changepoints        node-specific   whole network       node-specific   whole network    node-specific
  Structure constant  Yes             No                  No              Yes              Yes
  Data format         Continuous      Discrete            Continuous      Continuous       Continuous
  Latent variables    Change-point    Change-point        Change-point    Free             Free
                      process         process             process         allocation       allocation

Table 1: Overview of how our model compares with various related, recently published models.

The inference
algorithm iterates between a convex optimization for determining the graph structure and a dynamic
programming algorithm for calculating the segmentation. The latter aspect imposes restrictions on
the graph structure (decomposability), though. Moreover, both the models of [4] and [5] are based
on undirected graphs, whereas most processes in systems biology, like neural information flow,
signal transduction and transcriptional regulation, are intrinsically of a directed nature. To address
this shortcoming, Robinson and Hartemink [6] and Lèbre [7] proposed a non-stationary dynamic
Bayesian network. Both methods allow for different network structures in different segments of the
time series, where the location of the change-points and the total number of segments are inferred
from the data with RJMCMC. The essential difference between the two methods is that the model
proposed in [6] is a non-stationary version of the BDe score [8], which requires the data to be
discretized. The method proposed in [7] is based on the Bayesian linear regression model of [9],
which avoids the need for data discretization.
Allowing the network structure to change between segments leads to a highly flexible model. However, this approach faces a conceptual and a practical problem. The practical problem is potential
model over-flexibility1. Owing to the high costs of postgenomic high-throughput experiments, time
series in systems biology are typically rather short. Modelling short time series segments with separate network structures will almost inevitably lead to inflated inference uncertainty, which calls
for some information sharing between the segments. The conceptual problem is related to the very
premise of a flexible network structure. This assumption is reasonable for some scenarios, like morphogenesis, where the different segments are e.g. associated with the embryonic, larval, pupal, and
adult stages of fruit fly (as discussed in [6]). However, for most cellular processes on a shorter time
scale, it is questionable whether it is the structure rather than just the strength of the regulatory interactions that changes with time. To use the analogy of the traffic flow network invoked in [6]: it
is not the road system (the network structure) that changes between off-peak and rush hours, but the
intensity of the traffic flow (the strength of the interactions). In the same vein, it is not the ability of
a transcription factor to potentially bind to the promoter of a gene and thereby initiate transcription
(the interaction structure), but the extent to which this happens (the interaction strength).
The objective of the present work is to propose and assess a non-stationary continuous-valued DBN
that introduces information sharing among different time series segments via a constrained structure.
Our model is non-stationary with respect to the parameters, while the network structure is kept fixed
among segments. Our model complements the one proposed in [6] in two other aspects: the score
is a non-stationary generalization of the BGe [10] rather than the BDe score, thus avoiding the need
for data discretization, and the patterns of non-stationarity are node-specific, thereby providing extra
model flexibility. Our work is based on [11], [12], and [13]. Like [11], our model is effectively a
mixture of BGe models. We replace the free allocation model of [11] by a change-point process
to incorporate our prior notion that adjacent time points in a time series are likely to be governed
by similar distributions. We borrow from [12] the concept of node-specific change-points to enable
greater model flexibility. However, as opposed to [12], we do not approximate the scoring function
by BIC [14], but compute the proper marginal likelihood. The objective of inference is to infer the
location and the node-specific number of change-points from the posterior distribution. An overview of how our method is related to various recently published related models is provided in Table 1.

1 Note that as opposed to [7], [6] partially addresses this issue via a prior distribution that discourages changes in the network structure.
2 Methodology

2.1 The dynamic BGe network
DBNs are flexible models for representing probabilistic relationships between interacting variables (nodes) $X_1, \ldots, X_N$ via a directed graph $\mathcal{G}$. An edge pointing from $X_i$ to $X_j$ indicates that the realization of $X_j$ at time point $t$, symbolically $X_j(t)$, is conditionally dependent on the realization of $X_i$ at time point $t-1$, symbolically $X_i(t-1)$. The parent node set of node $X_n$ in $\mathcal{G}$, $\pi_n = \pi_n(\mathcal{G})$, is the set of all nodes from which an edge points to node $X_n$ in $\mathcal{G}$. Given a data set $D$, where $D_{n,t}$ and $D_{(\pi_n,t)}$ are the $t$th realizations $X_n(t)$ and $\pi_n(t)$ of $X_n$ and $\pi_n$, respectively, and $1 \leq t \leq m$ represents time, DBNs are based on the following homogeneous Markov chain expansion:

$$P(D\,|\,\mathcal{G}, \theta) = \prod_{n=1}^{N} \prod_{t=2}^{m} P\big(X_n(t) = D_{n,t} \,\big|\, \pi_n(t-1) = D_{(\pi_n,t-1)}, \theta_n\big) \qquad (1)$$

where $\theta$ is the total parameter vector, composed of node-specific subvectors $\theta_n$, which specify the local conditional distributions in the factorization. From Eq. (1) and under the assumption of parameter independence, $P(\theta|\mathcal{G}) = \prod_n P(\theta_n|\mathcal{G})$, the marginal likelihood is given by

$$P(D\,|\,\mathcal{G}) = \int P(D\,|\,\mathcal{G}, \theta)\, P(\theta|\mathcal{G})\, d\theta = \prod_{n=1}^{N} \Psi(D_n^{\pi_n}, \mathcal{G}) \qquad (2)$$

$$\Psi(D_n^{\pi_n}, \mathcal{G}) = \int \prod_{t=2}^{m} P\big(X_n(t) = D_{n,t} \,\big|\, \pi_n(t-1) = D_{(\pi_n,t-1)}, \theta_n\big)\, P(\theta_n|\mathcal{G})\, d\theta_n \qquad (3)$$

where $D_n^{\pi_n} := \{(D_{n,t}, D_{\pi_n,t-1}) : 2 \leq t \leq m\}$ is the subset of data pertaining to node $X_n$ and parent set $\pi_n$. We choose a linear Gaussian distribution for the local conditional distribution $P(X_n|\pi_n, \theta_n)$ in Eq. (1). Under fairly weak regularity conditions discussed in [10] (parameter modularity and conjugacy of the prior2), the integral in Eq. (3) has a closed form solution, given by Eq. (24) in [10]. The resulting expression is called the BGe score3.

2 The conjugate prior is a normal-Wishart distribution. For the present study, we chose the hyperparameters of this distribution maximally uninformative subject to the regularity conditions discussed in [10].
3 The score equivalence aspect of the BGe model is not required for DBNs, because edge reversals are not permissible. However, formulating our method in terms of the BGe score is advantageous when adapting the proposed framework to non-linear static Bayesian networks along the lines of [12].
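As a concrete illustration of the data layout behind these local scores, here is a minimal sketch in our own notation (not the authors' code) of assembling the subset $D_n^{\pi_n}$: the realizations of $X_n$ at times 2, ..., m are paired with the parent realizations at the preceding time points.

```python
# Minimal sketch (ours) of assembling D_n^{pi_n}: targets X_n(t) for
# t = 2..m paired with the parent values pi_n(t-1).
import numpy as np

def node_parent_data(D, n, parents):
    """D: (N, m) data matrix; n: target node index; parents: list of indices."""
    targets = D[n, 1:]              # X_n(t) for t = 2..m
    regressors = D[parents, :-1].T  # pi_n(t-1), one row per transition
    return targets, regressors

# Example: N = 3 nodes, m = 5 time points, parents of node 2 are {0, 1}.
D = np.arange(15.0).reshape(3, 5)
y, P = node_parent_data(D, n=2, parents=[0, 1])
print(y.shape, P.shape)             # (4,) (4, 2)
```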
2.2 The non-stationary dynamic change-point BGe model (cpBGe)
To obtain a non-stationary DBN, we generalize Eq. (1) with a node-specific mixture model:
$$P(D\,|\,\mathcal{G}, V, K, \theta) = \prod_{n=1}^{N} \prod_{t=2}^{m} \prod_{k=1}^{K_n} P\big(X_n(t) = D_{n,t} \,\big|\, \pi_n(t-1) = D_{(\pi_n,t-1)}, \theta_n^k\big)^{\delta_{V_n(t),k}} \qquad (4)$$

where $\delta_{V_n(t),k}$ is the Kronecker delta, $V$ is a matrix of latent variables $V_n(t)$, $V_n(t) = k$ indicates that the realization of node $X_n$ at time $t$, $X_n(t)$, has been generated by the $k$th component of a mixture with $K_n$ components, and $K = (K_1, \ldots, K_N)$. Note that the matrix $V$ divides the data into several disjoint subsets, each of which can be regarded as pertaining to a separate BGe model with parameters $\theta_n^k$. The vectors $V_n$ are node-specific, i.e. different nodes can have different breakpoints. The probability model defined in Eq. (4) is effectively a mixture model with local probability distributions $P(X_n|\pi_n, \theta_n^k)$ and it can hence, under a free allocation of the latent variables, approximate any probability distribution arbitrarily closely. In the present work, we change the assignment of data points to mixture components from a free allocation to a change-point process. This effectively reduces the complexity of the latent variable space and incorporates our prior belief that, in a
time series, adjacent time points are likely to be assigned to the same component. From Eq. (4), the
marginal likelihood conditional on the latent variables V is given by
$$P(D\,|\,\mathcal{G}, V, K) = \int P(D\,|\,\mathcal{G}, V, K, \theta)\, P(\theta)\, d\theta = \prod_{n=1}^{N} \prod_{k=1}^{K_n} \Psi(D_n^{\pi_n}[k, V_n], \mathcal{G}) \qquad (5)$$

$$\Psi(D_n^{\pi_n}[k, V_n], \mathcal{G}) = \int \prod_{t=2}^{m} P\big(X_n(t) = D_{n,t} \,\big|\, \pi_n(t-1) = D_{(\pi_n,t-1)}, \theta_n^k\big)^{\delta_{V_n(t),k}}\, P(\theta_n^k|\mathcal{G})\, d\theta_n^k \qquad (6)$$

Eq. (6) is similar to Eq. (3), except that it is restricted to the subset $D_n^{\pi_n}[k, V_n] := \{(D_{n,t}, D_{\pi_n,t-1}) : V_n(t) = k,\; 2 \leq t \leq m\}$. Hence when the regularity conditions defined in
[10] are satisfied, then the expression in Eq.(6) has a closed-form solution: it is given by Eq. (24) in
[10] restricted to the subset of the data that has been assigned to the kth mixture component (or kth
segment). The joint probability distribution of the proposed cpBGe model is given by:
$$P(\mathcal{G}, V, K, D) = P(D\,|\,\mathcal{G}, V, K) \cdot P(\mathcal{G}) \cdot P(V|K) \cdot P(K) = P(\mathcal{G}) \cdot \prod_{n=1}^{N} \Big\{ P(V_n|K_n) \cdot P(K_n) \cdot \prod_{k=1}^{K_n} \Psi(D_n^{\pi_n}[k, V_n], \mathcal{G}) \Big\} \qquad (7)$$
In the absence of genuine prior knowledge about the regulatory network structure, we assume for $P(\mathcal{G})$ a uniform distribution on graphs, subject to a fan-in restriction of $|\pi_n| \leq 3$. As prior probability distributions on the node-specific numbers of mixture components $K_n$, $P(K_n)$, we take iid truncated Poisson distributions with shape parameter $\lambda = 1$, restricted to $1 \leq K_n \leq K_{MAX}$ (we set $K_{MAX} = 10$ in our simulations). The prior distribution on the latent variable vectors, $P(V|K) = \prod_{n=1}^{N} P(V_n|K_n)$, is implicitly defined via the change-point process as follows. We identify $K_n$ with $K_n - 1$ change-points $b_n = \{b_{n,1}, \ldots, b_{n,K_n-1}\}$ on the continuous interval $[2, m]$. For notational convenience we introduce the pseudo change-points $b_{n,0} = 2$ and $b_{n,K_n} = m$. For node $X_n$ the observation at time point $t$ is assigned to the $k$th component, symbolically $V_n(t) = k$, if $b_{n,k-1} \leq t < b_{n,k}$. Following [15] we assume that the change-points are distributed as the even-numbered order statistics of $L := 2(K_n - 1) + 1$ points $u_1, \ldots, u_L$ uniformly and independently distributed on the interval $[2, m]$. The motivation for this prior, instead of taking $K_n$ uniformly distributed points, is to encourage a priori an equal spacing between the change-points, i.e. to discourage mixture components (i.e. segments) that contain only a few observations. The even-numbered order statistics prior on the change-point locations $b_n$ induces a prior distribution on the node-specific allocation vectors $V_n$. Deriving a closed-form expression is involved. However, the MCMC scheme we discuss in the next section does not sample $V_n$ directly, but is based on local modifications of $V_n$ based on birth, death and reallocation moves. All that is required for the acceptance probabilities of these moves are $P(V_n|K_n)$ ratios, which are straightforward to compute.
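A minimal sketch of this prior, in our own code (function names are ours): the $K_n - 1$ change-points are drawn as the even-numbered order statistics of $L = 2(K_n - 1) + 1$ i.i.d. uniform points on $[2, m]$, and are then converted into the allocation vector $V_n$ via $V_n(t) = k$ iff $b_{n,k-1} \leq t < b_{n,k}$.

```python
# Sketch (ours) of the even-numbered order statistics change-point prior
# and the induced allocation vector.
import numpy as np

def sample_changepoints(K_n, m, rng):
    L = 2 * (K_n - 1) + 1
    u = np.sort(rng.uniform(2.0, m, size=L))
    return u[1::2]        # even-numbered order statistics u_2, u_4, ...

def allocation_vector(changepoints, m):
    V = np.empty(m - 1, dtype=int)          # allocations for t = 2..m
    for i, t in enumerate(range(2, m + 1)):
        # Component index = 1 + number of change-points <= t.
        V[i] = np.searchsorted(changepoints, t, side='right') + 1
    return V

rng = np.random.default_rng(1)
cps = sample_changepoints(K_n=3, m=41, rng=rng)
print(allocation_vector(cps, m=41))         # segment labels in {1, 2, 3}
```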
2.3 MCMC inference
We now describe an MCMC algorithm to obtain a sample $\{\mathcal{G}^i, V^i, K^i\}_{i=1,\ldots,I}$ from the posterior distribution $P(\mathcal{G}, V, K\,|\,D) \propto P(\mathcal{G}, V, K, D)$ of Eq. (7). We combine the structure MCMC algorithm4 [17, 18] with the change-point model used in [15], and draw on the fact that conditional on the allocation vectors $V$, the model parameters can be integrated out to obtain the marginal likelihood terms $\Psi(D_n^{\pi_n}[k, V_n], \mathcal{G})$ in closed form, as shown in the previous section. Note that this approach is equivalent to the idea underlying the allocation sampler proposed in [13]. The resulting algorithm is effectively an RJMCMC scheme [15] in the discrete space of network structures and latent allocation vectors, where the Jacobian in the acceptance criterion is always 1 and can be omitted. With probability $p_{\mathcal{G}} = 0.5$ we perform a structure MCMC move on the current graph $\mathcal{G}^i$ and leave the latent variable matrix and the numbers of mixture components unchanged, symbolically: $V^{i+1} = V^i$ and $K^{i+1} = K^i$. A new candidate graph $\mathcal{G}^{i+1}$ is randomly drawn out of the set of graphs $\mathcal{N}(\mathcal{G}^i)$ that can be reached from the current graph $\mathcal{G}^i$ by deletion or addition of a single edge. The proposed graph $\mathcal{G}^{i+1}$ is accepted with probability:

$$A(\mathcal{G}^{i+1}\,|\,\mathcal{G}^i) = \min\left(1,\; \frac{P(D\,|\,\mathcal{G}^{i+1}, V^i, K^i)}{P(D\,|\,\mathcal{G}^i, V^i, K^i)} \cdot \frac{P(\mathcal{G}^{i+1})}{P(\mathcal{G}^i)} \cdot \frac{|\mathcal{N}(\mathcal{G}^i)|}{|\mathcal{N}(\mathcal{G}^{i+1})|}\right) \qquad (8)$$

where $|\cdot|$ is the cardinality, and the marginal likelihood terms have been specified in Eq. (5). The graph is left unchanged, symbolically $\mathcal{G}^{i+1} := \mathcal{G}^i$, if the move is not accepted.

4 An MCMC algorithm based on Eq. (10) in [16] is computationally less efficient than when applied to static Bayesian networks or stationary DBNs, since the local scores would have to be re-computed every time the positions of the change-points change.
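Schematically, the structure move looks as follows. This is our own sketch, not the authors' implementation: `log_score(G)` stands in for $\log P(D\,|\,\mathcal{G}, V, K) + \log P(\mathcal{G})$ (the cpBGe score computation is omitted), and the fan-in restriction is left out for brevity.

```python
# Schematic structure move (ours) implementing the acceptance rule (8).
import math, random

def neighbourhood(G):
    """All graphs reachable from G by toggling (adding/deleting) one edge.
    G is a dict {'N': number_of_nodes, 'edges': frozenset of (i, j) pairs}."""
    out = []
    for i in range(G['N']):
        for j in range(G['N']):
            if i == j:
                continue
            new = set(G['edges'])
            new.symmetric_difference_update({(i, j)})   # toggle edge i -> j
            out.append({'N': G['N'], 'edges': frozenset(new)})
    return out

def structure_move(G, log_score, rng=random):
    cand = rng.choice(neighbourhood(G))
    log_ratio = (log_score(cand) - log_score(G)
                 + math.log(len(neighbourhood(G)))      # Hastings correction
                 - math.log(len(neighbourhood(cand))))
    if math.log(rng.random()) < min(0.0, log_ratio):
        return cand     # accept the candidate graph
    return G            # reject: keep the current graph
```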
Figure 1: Networks from which synthetic data were generated. Panels (a-c) show elementary network motifs [20]. Panel (d) shows a protein signal transduction network studied in [2] (nodes: raf, mek, erk, p38, jnk, pkc, pka, akt, pip2, pip3, plcg), with an added feedback loop on the root node.
With the complementary probability $1 - p_{\mathcal{G}}$ we leave the graph $\mathcal{G}^i$ unchanged and perform a move on $(V^i, K^i)$, where $V_n^i$ is the latent variable vector of $X_n$ in $V^i$, and $K^i = (K_1^i, \ldots, K_N^i)$. We randomly select a node $X_n$ and change its current number of components $K_n^i$ via a change-point birth or death move, or its latent variable vector $V_n^i$ by a change-point re-allocation move. The change-point birth (death) move increases (decreases) $K_n^i$ by 1 and may also have an effect on $V_n^i$. The change-point reallocation move leaves $K_n^i$ unchanged and may have an effect on $V_n^i$. Under fairly mild regularity conditions (ergodicity), the MCMC sampling scheme converges to the desired posterior distribution if the acceptance probabilities for the three change-point moves $(K_n^i, V_n^i) \to (K_n^{i+1}, V_n^{i+1})$ are chosen of the form $\min(1, R)$, see [15], with

$$R = \frac{\prod_{k=1}^{K_n^{i+1}} \Psi(D_n^{\pi_n}[k, V_n^{i+1}], \mathcal{G})}{\prod_{k=1}^{K_n^i} \Psi(D_n^{\pi_n}[k, V_n^i], \mathcal{G})} \times A \times B \qquad (9)$$

where $A = P(V_n^{i+1}|K_n^{i+1})\,P(K_n^{i+1}) / P(V_n^i|K_n^i)\,P(K_n^i)$ is the prior probability ratio, and $B$ is the inverse proposal probability ratio. The exact form of these factors depends on the move type and is provided in the supplementary material. We note that the implementation of the dynamic programming scheme proposed in [19] has the prospect to improve the convergence and mixing of the Markov chain, which we will investigate in our future work.
3 Results on synthetic data
To assess the performance of the proposed model, we applied it to a set of synthetic data generated
from different networks, as shown in Figure 1. The structures in Figure panels 1a-c constitute
elementary network motifs, as studied e.g. in [20]. The network in Figure 1d was extracted from
the systems biology literature [2] and represents a well-studied protein signal transduction pathway.
We added an extra feedback loop on the root node to allow the generation of a Markov chain with
non-zero autocorrelation; note that this modification is not biologically implausible [21].
We generated data with a mixture of piece-wise linear processes and sinusoidal transfer functions.
The advantage of the first approach is the exact knowledge of the true process change-points; the
second approach is more realistic (smooth function) with a stronger mismatch between model and
data-generation mechanism. For example, the network in Figure 1c was modelled as
$$X(t+1) = \varphi_X(t); \qquad Y(t+1) = \varphi_Y(t); \qquad W(t+1) = W(t) + \frac{2\pi}{m} + c_W \cdot \varphi_W(t)$$
$$Z(t+1) = c_X \cdot X(t) + c_Y \cdot Y(t) + \sin(W(t)) + c_Z \cdot \varphi_Z(t+1) \qquad (10)$$

where the $\varphi_\cdot(\cdot)$ are iid standard Normally distributed. We employed different values $c_X = c_Y \in \{0.25, 0.5\}$ and $c_Z, c_W \in \{0.25, 0.5, 1\}$ to vary the signal-to-noise ratio and the amount of autocorrelation in $W$. For each parameter configuration, 25 time series with 41 time points were
independently generated. For the other networks, data were generated in a similar way. Owing
to space restrictions, the complete model specifications have to be relegated to the supplementary
material.
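For concreteness, here is a minimal re-implementation (ours, not the authors' code) of the generator in Eq. (10) for the network of Figure 1c; the φ terms are i.i.d. standard normal draws, and the default coefficient values are just one of the settings listed above.

```python
# Sketch (ours) of the synthetic data generator of Eq. (10).
import numpy as np

def generate_series(m=41, cX=0.5, cY=0.5, cZ=0.25, cW=0.25, rng=None):
    rng = rng or np.random.default_rng()
    X = rng.normal(size=m)                   # X(t+1) = phi_X(t)
    Y = rng.normal(size=m)                   # Y(t+1) = phi_Y(t)
    W = np.empty(m); W[0] = rng.normal()
    Z = np.empty(m); Z[0] = rng.normal()
    for t in range(m - 1):
        W[t + 1] = W[t] + 2 * np.pi / m + cW * rng.normal()
        Z[t + 1] = cX * X[t] + cY * Y[t] + np.sin(W[t]) + cZ * rng.normal()
    return np.vstack([X, Y, W, Z])           # rows: X, Y, W, Z

series = generate_series()                   # one (4, 41) time series
```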
Figure 2: Comparison of AUC scores on the synthetic data. The panels (a-d) correspond to those of Figure 1. The horizontal axis in each panel represents the proposed cpBGe model. The vertical axis represents the following competing models, plotted with different symbols: BDe, BGe, the method of Ko et al. [12], and the method of Grzegorczyk et al. [11], adapted as described in the text. Different symbols of the same shape correspond to different signal-to-noise ratios (SNR) and autocorrelation times (ACT). Each symbol shows a comparison of two average AUC scores, averaged over 25 (panels a-c) or 5 (panel d) time series independently generated for a given SNR/ACT setting. The diagonal line indicates equal performance; symbols below this line indicate that the proposed cpBGe model outperforms the competing model. The table below shows an overview of the corresponding p-values obtained from a two-sided paired t-test with Bonferroni correction. For all but three cases the cpBGe model outperforms the competing model at the standard 5% significance level.

  cpBGe vs. ...          (a)        (b)        (c)        (d)
  ... vs. Grz. et al.    0.753      <0.0001    <0.0001    0.013
  ... vs. Ko et al.      <0.0001    0.074      <0.0001    0.002
  ... vs. BGe            <0.0001    <0.0001    <0.0001    0.060
  ... vs. BDe            <0.0001    <0.0001    <0.0001    <0.0001
To each data set, we applied the proposed cpBGe model as described in Section 2. We compared its
performance with four alternative schemes. We chose the classical stationary DBNs based on BDe
[8] and BGe [10]. Note that for these models the parameters can be integrated out analytically, and
only the network structure has to be learned. The latter was sampled from the posterior distribution
with structure MCMC [17, 18]. Note that the BDe model requires discretized data, which we effected with the information bottleneck algorithm [22]. Our comparative evaluation also included two
non-linear/non-stationary models with a clearly defined network structure (for the sake of comparability with our approach). We chose the method of Ko et al. [12] for its flexibility and comparative
ease of implementation. The inference scheme is based on the application of the EM algorithm [23]
to a node-specific mixture model subject to a BIC penalty term [14]. We implemented this algorithm
according to the authors' specification in MATLAB, using the software package NETLAB [24].
We also compared our model with the approach proposed by Grzegorczyk et al. [11]. We applied the
software available from the authors' website. We replaced the authors' free allocation model by the
change-point process used for our model. This was motivated by the fact that for a fair comparison,
the same prior knowledge about the data structure (time series) should be used. In all other aspects
we applied the method as described in [11]. All MCMC simulations were divided into a burn-in and
a sampling phase, where the length of the burn-in phase was chosen such that standard convergence
criteria based on potential scale reduction factors [25] were met. The software implementations of
all methods used in our study are available upon request. For lack of space, further details have to
be relegated to the supplementary material.
To assess the network reconstruction accuracy, various criteria have been proposed in the literature. In the present study, we chose receiver-operator-characteristic (ROC) curves computed from the marginal posterior probabilities of the edges (and the ranking thereby induced). Owing to the large number of simulations (for each network and parameter setting the simulations were repeated on 25 (Figures 2a-c) or 5 (Figure 2d) independently generated time series), we summarized the performance by the area under the curve (AUC), ranging from 0.5 (expected random predictor) to 1.0 (perfect predictor). The results are shown in Figure 2 and suggest that the proposed cpBGe model tends to significantly outperform the competing models. A more detailed analysis with an investigation of how the signal-to-noise ratio and the autocorrelation parameters affect the relative performance of the methods has to be relegated to the supplementary material for lack of space.
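The AUC computation can be sketched as follows; this is our own code (the paper's exact tie-handling for equal posterior probabilities is not specified): off-diagonal edges are ranked by marginal posterior probability and the area under the resulting ROC curve is computed against the true adjacency matrix.

```python
# Sketch (ours) of the AUC evaluation from marginal edge posteriors.
import numpy as np

def auc_from_posteriors(post, truth):
    """post, truth: (N, N) arrays; only off-diagonal entries are scored."""
    mask = ~np.eye(post.shape[0], dtype=bool)
    scores, labels = post[mask], truth[mask].astype(bool)
    order = np.argsort(-scores)               # ties broken arbitrarily
    labels = labels[order]
    pos, neg = labels.sum(), (~labels).sum()  # assumes both classes present
    tpr = np.concatenate([[0.0], np.cumsum(labels) / pos])
    fpr = np.concatenate([[0.0], np.cumsum(~labels) / neg])
    return np.trapz(tpr, fpr)                 # trapezoidal area under ROC
```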
Figure 3: Results on the Arabidopsis gene expression time series. Top panels: Average posterior probability of a change-point (vertical axis) at a specific transition time plotted against the transition time (horizontal axis) for two selected circadian genes (left: LHY, centre: TOC1) and averaged over all 9 genes (right). The vertical dotted lines indicate the boundaries of the time series segments, which are related to different entrainment conditions and time intervals. Bottom left and centre panels: Co-allocation matrices for the two selected genes LHY and TOC1. The axes represent time. The grey shading indicates the posterior probability of two time points being assigned to the same mixture component, ranging from 0 (black) to 1 (white). Bottom right panel: Predicted regulatory network of nine circadian genes in Arabidopsis thaliana. Empty circles represent morning genes. Shaded circles represent evening genes. Edges indicate predicted interactions with a marginal posterior probability greater than 0.5.
4 Results on Arabidopsis gene expression time series
We have applied our method to microarray gene expression time series related to the study of circadian regulation in plants. Arabidopsis thaliana seedlings, grown under artificially controlled $T_e$-hour-light/$T_e$-hour-dark cycles, were transferred to constant light and harvested at 13 time points in $\tau$-hour intervals. From these seedlings, RNA was extracted and assayed on Affymetrix GeneChip oligonucleotide arrays. The data were background-corrected and normalized according to standard procedures5, using the GeneSpring software (Agilent Technologies). We combined four time series, which differed with respect to the pre-experiment entrainment condition and the time intervals: $T_e \in \{10h, 12h, 14h\}$, and $\tau \in \{2h, 4h\}$. The data, with detailed information about the experimental protocols, can be obtained from [27], [11], and [28]. We focused our analysis on 9 circadian
genes6 (i.e. genes involved in circadian regulation). We combined all four time series into a single
set. The objective was to test whether the proposed cpBGe model would detect the different experimental phases. Since the gene expression values at the first time point of a time series segment have
no relation with the expression values at the last time point of the preceding segment, the corresponding boundary time points were appropriately removed from the data7. This ensures that for all pairs of consecutive time points a proper conditional dependence relation determined by the nature of the regulatory cellular processes is given.

5 We used RMA rather than GCRMA for reasons discussed in [26].
6 These 9 circadian genes are LHY, TOC1, CCA1, ELF4, ELF3, GI, PRR9, PRR5, and PRR3.
7 A proper mathematical treatment is given in Section 3 of the supplementary material.

The top panel of Figure 3 shows the marginal posterior
probability of a change-point for two selected genes (LHY and TOC1), and averaged over all genes.
It is seen that the three concatenation points are clearly detected. There is a slight difference between
the heights of the posterior probability peaks for LHY and TOC1. This behaviour is also captured by
the co-allocation matrices in the bottom row of Figure 3. This deviation indicates that the two genes
are effected by the changing experimental conditions (entrainment, time interval) in different ways
and thus provides a useful tool for further exploratory analysis. The bottom right panel of Figure 3
shows the gene interaction network that is predicted when keeping all edges with marginal posterior
probability above 0.5. There are two groups of genes. Empty circles in the figure represent morning
genes (i.e. genes whose expression peaks in the morning), shaded circles represent evening genes
(i.e. genes whose expression peaks in the evening). There are several directed edges pointing from
the group of morning genes to the evening genes, mostly originating from gene CCA1. This result
is consistent with the findings in [29], where the morning genes were found to activate the evening
genes, with CCA1 being a central regulator. Our reconstructed network also contains edges pointing
into the opposite direction, from the evening genes back to the morning genes. This finding is also
consistent with [29], where the evening genes were found to inhibit the morning genes via a negative
feedback loop. In the reconstructed network, the connectivity within the group of evening genes is
sparser than within the group of morning genes. This finding is consistent with the fact that following the light-dark cycle entrainment, the experiments were carried out in constant-light condition,
resulting in a higher activity of the morning genes overall. Within the group of evening genes, the
reconstructed network contains an edge between GI and TOC1. This interaction has been confirmed
in [30]. Hence while a proper evaluation of the reconstruction accuracy is currently unfeasible (like [6] and many related studies, we lack a gold-standard owing to the unknown nature of the true interaction network), our study suggests that the essential features of the reconstructed network are
biologically plausible and consistent with the literature.
5 Discussion
We have proposed a continuous-valued non-stationary dynamic Bayesian network, which constitutes
a non-stationary generalization of the BGe model. This complements the work of [6], where a
non-stationary BDe model was proposed. We have argued that a flexible network structure can
lead to practical and conceptual problems, and we therefore only allow the parameters to vary
with time. We have presented a comparative evaluation of the network reconstruction accuracy
on synthetic data. Note that such a study is missing from recent related studies on this topic, like [6]
and [7], presumably because their overall network structure is not properly defined. Our findings
suggest that the proposed non-stationary BGe model achieves a clear performance improvement
over the classical stationary models BDe and BGe as well as over the non-linear/non-stationary
models of [12] and [11]. The application of our model to gene expression time series from circadian
clock-regulated genes in Arabidopsis thaliana has led to a plausible data segmentation, and the
reconstructed network shows features that are consistent with the biological literature.
The proposed model is based on a multiple change-point process. This scheme provides the approximation of a non-linear regulation process by a piecewise linear process under the assumption
that the temporal processes are sufficiently smooth. A straightforward modification would be the
replacement of the change-point process by the allocation model of [13] and [11]. This modification
would result in a fully-flexible mixture model, which is preferable if the smoothness assumption for
the temporal processes is violated. It would also provide a non-linear Bayesian network for static
rather than time series data. While the algorithmic implementation is straightforward, the increased
complexity of the latent variable configuration space would introduce additional challenges for the
mixing and convergence properties of the MCMC sampler. The development of more effective proposal moves, as well as a comparison with alternative non-linear Bayesian network models, like
[31], is a promising subject for future research.
Acknowledgements
Marco Grzegorczyk is supported by the Graduate School 'Statistische Modellbildung' of the Department of Statistics, University of Dortmund. Dirk Husmeier is supported by the Scottish Government Rural and Environment Research and Analysis Directorate (RERAD).
References
[1] N. Friedman, M. Linial, I. Nachman, and D. Pe'er. Using Bayesian networks to analyze expression data. Journal of Computational Biology, 7:601–620, 2000.
[2] K. Sachs, O. Perez, D. Pe'er, D. A. Lauffenburger, and G. P. Nolan. Protein-signaling networks derived from multiparameter single-cell data. Science, 308:523–529, 2005.
[3] V. A. Smith, J. Yu, T. V. Smulders, A. J. Hartemink, and E. D. Jarvis. Computational inference of neural information flow networks. PLoS Computational Biology, 2:1436–1449, 2006.
[4] M. Talih and N. Hengartner. Structural learning with time-varying components: Tracking the cross-section of financial time series. Journal of the Royal Statistical Society B, 67(3):321–341, 2005.
[5] X. Xuan and K. Murphy. Modeling changing dependency structure in multivariate time series. In Zoubin Ghahramani, editor, Proceedings of the 24th Annual International Conference on Machine Learning (ICML 2007), pages 1055–1062. Omnipress, 2007.
[6] J. W. Robinson and A. J. Hartemink. Non-stationary dynamic Bayesian networks. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1369–1376. Morgan Kaufmann Publishers, 2009.
[7] S. Lèbre. Analyse de processus stochastiques pour la génomique : étude du modèle MTD et inférence de réseaux bayésiens dynamiques. PhD thesis, Université d'Évry-Val-d'Essonne, 2008.
[8] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:245–274, 1995.
[9] C. Andrieu and A. Doucet. Joint Bayesian model selection and estimation of noisy sinusoids via reversible jump MCMC. IEEE Transactions on Signal Processing, 47(10):2667–2676, 1999.
[10] D. Geiger and D. Heckerman. Learning Gaussian networks. In Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pages 235–243, San Francisco, CA, 1994. Morgan Kaufmann.
[11] M. Grzegorczyk, D. Husmeier, K. Edwards, P. Ghazal, and A. Millar. Modelling non-stationary gene regulatory processes with a non-homogeneous Bayesian network and the allocation sampler. Bioinformatics, 24(18):2071–2078, 2008.
[12] Y. Ko, C. Zhai, and S. L. Rodriguez-Zas. Inference of gene pathways using Gaussian mixture models. In BIBM International Conference on Bioinformatics and Biomedicine, pages 362–367. Fremont, CA, 2007.
[13] A. Nobile and A. T. Fearnside. Bayesian finite mixtures with an unknown number of components: The allocation sampler. Statistics and Computing, 17(2):147–162, 2007.
[14] G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6:461–464, 1978.
[15] P. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82:711–732, 1995.
[16] N. Friedman and D. Koller. Being Bayesian about network structure. Machine Learning, 50:95–126, 2003.
[17] P. Giudici and R. Castelo. Improving Markov chain Monte Carlo model search for data mining. Machine Learning, 50:127–158, 2003.
[18] D. Madigan and J. York. Bayesian graphical models for discrete data. International Statistical Review, 63:215–232, 1995.
[19] P. Fearnhead. Exact and efficient Bayesian inference for multiple changepoint problems. Statistics and Computing, 16:203–213, 2006.
[20] S. S. Shen-Orr, R. Milo, S. Mangan, and U. Alon. Network motifs in the transcriptional regulation network of Escherichia coli. Nature Genetics, 31:64–68, 2002.
[21] M. K. Dougherty, J. Muller, D. A. Ritt, M. Zhou, X. Z. Zhou, T. D. Copeland, T. P. Conrads, T. D. Veenstra, K. P. Lu, and D. K. Morrison. Regulation of Raf-1 by direct feedback phosphorylation. Molecular Cell, 17:215–224, 2005.
[22] A. J. Hartemink. Principled Computational Methods for the Validation and Discovery of Genetic Regulatory Networks. PhD thesis, MIT, 2001.
[23] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B39(1):1–38, 1977.
[24] I. T. Nabney. NETLAB: Algorithms for Pattern Recognition. Springer Verlag, New York, 2004.
[25] A. Gelman and D. B. Rubin. Inference from iterative simulation using multiple sequences. Statistical Science, 7:457–472, 1992.
[26] W. K. Lim, K. Wang, C. Lefebvre, and A. Califano. Comparative analysis of microarray normalization procedures: effects on reverse engineering gene networks. Bioinformatics, 23(13):i282–i288, 2007.
[27] K. D. Edwards, P. E. Anderson, A. Hall, N. S. Salathia, J. C. W. Locke, J. R. Lynn, M. Straume, J. Q. Smith, and A. J. Millar. Flowering locus C mediates natural variation in the high-temperature response of the Arabidopsis circadian clock. The Plant Cell, 18:639–650, 2006.
[28] T. C. Mockler, T. P. Michael, H. D. Priest, R. Shen, C. M. Sullivan, S. A. Givan, C. McEntee, S. A. Kay, and J. Chory. The diurnal project: Diurnal and circadian expression profiling, model-based pattern matching and promoter analysis. Cold Spring Harbor Symposia on Quantitative Biology, 72:353–363, 2007.
[29] C. R. McClung. Plant circadian rhythms. Plant Cell, 18:792–803, 2006.
[30] J. C. W. Locke, M. M. Southern, L. Kozma-Bognar, V. Hibberd, P. E. Brown, M. S. Turner, and A. J. Millar. Extension of a genetic network model by iterative experimentation and mathematical analysis. Molecular Systems Biology, 1:(online), 2005.
[31] S. Imoto, S. Kim, T. Goto, S. Aburatani, K. Tashiro, Satoru Kuhara, and Satoru Miyano. Bayesian networks and nonparametric heteroscedastic regression for nonlinear modeling of genetic networks. Journal of Bioinformatics and Computational Biology, 1(2):231–252, 2003.
2,965 | 3,688 | Lower bounds on minimax rates for nonparametric regression with additive sparsity and smoothness
Garvesh Raskutti1, Martin J. Wainwright1,2, Bin Yu1,2
1 UC Berkeley Department of Statistics
2 UC Berkeley Department of Electrical Engineering and Computer Science
Abstract
We study minimax rates for estimating high-dimensional nonparametric regression models with sparse additive structure and smoothness constraints. More precisely, our goal is to estimate a function $f^* : \mathbb{R}^p \to \mathbb{R}$ that has an additive decomposition of the form $f^*(X_1, \ldots, X_p) = \sum_{j \in S} h_j^*(X_j)$, where each component function $h_j^*$ lies in some class $\mathcal{H}$ of "smooth" functions, and $S \subseteq \{1, \ldots, p\}$ is an unknown subset with cardinality $s = |S|$. Given $n$ i.i.d. observations of $f^*(X)$ corrupted with additive white Gaussian noise, where the covariate vectors $(X_1, X_2, X_3, \ldots, X_p)$ are drawn with i.i.d. components from some distribution $\mathbb{P}$, we determine lower bounds on the minimax rate for estimating the regression function with respect to squared-$L^2(\mathbb{P})$ error. Our main result is a lower bound on the minimax rate that scales as $\max\big\{\frac{s \log(p/s)}{n},\, s\,\epsilon_n^2(\mathcal{H})\big\}$. The first term reflects the sample size required for performing subset selection, and is independent of the function class $\mathcal{H}$. The second term $s\,\epsilon_n^2(\mathcal{H})$ is an $s$-dimensional estimation term corresponding to the sample size required for estimating a sum of $s$ univariate functions, each chosen from the function class $\mathcal{H}$. It depends linearly on the sparsity index $s$ but is independent of the global dimension $p$. As a special case, if $\mathcal{H}$ corresponds to functions that are $m$-times differentiable (an $m$th-order Sobolev space), then the $s$-dimensional estimation term takes the form $s\,\epsilon_n^2(\mathcal{H}) \asymp s\, n^{-2m/(2m+1)}$. Either of the two terms may be dominant in different regimes, depending on the relation between the sparsity and smoothness of the additive decomposition.
1 Introduction
Many problems in modern science and engineering involve high-dimensional data, by which we mean that the
ambient dimension p in which the data lies is of the same order or larger than the sample size n. A simple
example is parametric linear regression under high-dimensional scaling, in which the goal is to estimate a
regression vector $\beta^* \in \mathbb{R}^p$ based on $n$ samples. In the absence of additional structure, it is impossible to obtain consistent estimators unless the ratio $p/n$ converges to zero, which precludes the regime $p \geq n$. In many applications, it is natural to impose sparsity conditions, such as requiring that $\beta^*$ have at most $s$ non-zero parameters for some $s \ll p$. The method of $\ell_1$-regularized least squares, also known as the Lasso algorithm [14],
has been shown to have a number of attractive theoretical properties for such high-dimensional sparse models
(e.g., [1, 19, 10]).
Of course, the assumption of a parametric linear model may be too restrictive for some applications. Accordingly, a natural extension is the non-parametric regression model $y = f^*(x_1, \ldots, x_p) + w$, where $w \sim N(0, \sigma^2)$ is additive observation noise. Unfortunately, this general non-parametric model is known to suffer severely from the "curse of dimensionality", in that for most natural function classes, the sample size $n$ required to achieve a given estimation accuracy grows exponentially in the dimension. This challenge motivates the use of additive non-parametric models (see the book [6] and references therein), in which the function $f^*$ is decomposed additively as a sum $f^*(x_1, x_2, \ldots, x_p) = \sum_{j=1}^p h^*_j(x_j)$ of univariate functions $h^*_j$. A natural sub-class of these
models are the sparse additive models, studied by Ravikumar et al. [12], in which
$$f^*(x_1, x_2, \ldots, x_p) = \sum_{j \in S} h^*_j(x_j), \qquad (1)$$
where $S \subset \{1, 2, \ldots, p\}$ is some unknown subset of cardinality $|S| = s$.
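To make the observation model concrete, the following small simulation sketch draws data from a sparse additive model; the specific component functions, dimensions, and noise level are arbitrary illustrative choices, not taken from the paper. The components are chosen mean-zero under the uniform covariate distribution, matching the centering convention of Section 2.1.

```python
import numpy as np

def sample_sparse_additive(n, p, S, h_funcs, sigma=1.0, rng=None):
    """Draw n observations y = sum_{j in S} h_j(x_j) + noise, with x_ij ~ Uniform[0,1]."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(0.0, 1.0, size=(n, p))           # i.i.d. covariates from P
    f_star = sum(h(X[:, j]) for j, h in zip(S, h_funcs))
    y = f_star + sigma * rng.normal(size=n)          # additive Gaussian noise
    return X, y

# Example: p = 100 ambient dimensions, s = 3 active coordinates.
S = [4, 17, 62]
h_funcs = [lambda x: np.sin(2 * np.pi * x),          # each component is mean-zero on [0,1]
           lambda x: x**2 - 1.0 / 3.0,
           lambda x: np.abs(x - 0.5) - 0.25]
X, y = sample_sparse_additive(n=200, p=100, S=S, h_funcs=h_funcs, sigma=0.5, rng=0)
```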
A line of past work has proposed and analyzed computationally efficient algorithms for estimating regression
functions of this form. Just as $\ell_1$-based relaxations such as the Lasso have desirable properties for sparse parametric models, similar $\ell_1$-based approaches have proven to be successful. Ravikumar et al. [12] propose a back-fitting algorithm to recover the component functions $h_j$ and prove consistency in both subset recovery and in empirical $L^2(\mathbb{P}_n)$ norm. Meier et al. [9] propose a method that involves a sparsity-smoothness penalty term, and also demonstrate consistency in $L^2(\mathbb{P})$ norm. In the special case that $\mathcal{H}$ is a reproducing kernel Hilbert space (RKHS), Koltchinskii and Yuan [7] analyze a least-squares estimator based on imposing a combined $\ell_1$-$\|\cdot\|_{\mathcal{H}}$ penalty. The analysis in these papers demonstrates that under certain conditions on the covariates, such regularized procedures can yield estimators that are consistent in the $L^2(\mathbb{P})$-norm even when $n \ll p$.
Of complementary interest to the rates achievable by practical methods are the fundamental limits of estimating sparse additive models, meaning lower bounds that apply to any algorithm. Although such lower bounds are well-known under classical scaling (where $p$ remains fixed independent of $n$), to the best of our knowledge, lower bounds for minimax rates on sparse additive models have not been determined. In this paper, our main result is to establish a lower bound on the minimax rate in $L^2(\mathbb{P})$ norm that scales as $\max\big(\frac{s \log(p/s)}{n}, \, s\epsilon_n^2(\mathcal{H})\big)$. The first term $\frac{s \log(p/s)}{n}$ is a subset selection term, independent of the univariate function space $\mathcal{H}$ in which the additive components lie, that reflects the difficulty of finding the subset $S$. The second term $s\epsilon_n^2(\mathcal{H})$ is an $s$-dimensional estimation term, which depends on the low dimension $s$ but not the ambient dimension $p$, and reflects the difficulty of estimating the sum of $s$ univariate functions, each drawn from function class $\mathcal{H}$. Either the subset selection or $s$-dimensional estimation term dominates, depending on the relative sizes of $n$, $p$, and $s$, as well as $\mathcal{H}$. Importantly, our analysis applies both in the low-dimensional setting ($n \gg p$) and the high-dimensional setting ($p \gg n$), provided that $n$, $p$ and $s$ tend to infinity. Our analysis is based on information-theoretic techniques centered around the use of metric entropy, mutual information, and Fano's inequality in order to obtain lower bounds. Such techniques are standard in the analysis of non-parametric procedures under classical scaling [5, 2, 17], and have also been used more recently to develop lower bounds for high-dimensional inference problems [16, 11].
The remainder of the paper is organized as follows. In the next section, the results are stated including appropriate preliminary concepts, notation and assumptions. In Section 3, we state the main results, and provide some
comparisons to the rates achieved by existing algorithms. In Section 4, we provide an overview of the proof.
We discuss and summarize the main consequences in Section 5.
2 Background and problem formulation
In this paper, we consider a non-parametric regression model with random design, meaning that we make n
observations of the form
$$y^{(i)} = f^*(X^{(i)}) + w^{(i)}, \qquad \text{for } i = 1, 2, \ldots, n. \qquad (2)$$
Here the random vectors $X^{(i)} \in \mathbb{R}^p$ are the covariates, and have elements $X_j^{(i)}$ drawn i.i.d. from some underlying distribution $\mathbb{P}$. We assume that the noise variables $w^{(i)} \sim N(0, \sigma^2)$ are drawn independently, and independent of all $X^{(i)}$'s. Given a base class $\mathcal{H}$ of univariate functions with norm $\|\cdot\|_{\mathcal{H}}$, consider the class of functions $f : \mathbb{R}^p \to \mathbb{R}$ that have an additive decomposition:
$$\mathcal{F} := \Big\{ f : \mathbb{R}^p \to \mathbb{R} \;\Big|\; f(x_1, x_2, \ldots, x_p) = \sum_{j=1}^p h_j(x_j), \text{ and } \|h_j\|_{\mathcal{H}} \leq 1 \;\; \forall j = 1, \ldots, p \Big\}.$$
Given some integer $s \in \{1, \ldots, p\}$, we define the function class $\mathcal{F}_0(s)$, which is a union of $\binom{p}{s}$ $s$-dimensional subspaces of $\mathcal{F}$, given by
$$\mathcal{F}_0(s) := \Big\{ f \in \mathcal{F} \;\Big|\; \sum_{j=1}^p \mathbb{I}(h_j \neq 0) \leq s \Big\}. \qquad (3)$$
The minimax rate of estimation over $\mathcal{F}_0(s)$ is defined by the quantity $\min_{\hat f} \max_{f^* \in \mathcal{F}_0(s)} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})}$, where the expectation is taken over the noise $w$ and randomness in the sampling, and $\hat f$ ranges over all (measurable) functions of the observations $\{(y^{(i)}, X^{(i)})\}_{i=1}^n$. The goal of this paper is to determine lower bounds on this minimax rate.
2.1 Inner products and norms
Given univariate functions $h_j, \tilde h_j \in \mathcal{H}$, we define the usual $L^2(\mathbb{P})$ inner product
$$\langle h_j, \tilde h_j \rangle_{L^2(\mathbb{P})} := \int_{\mathbb{R}} h_j(x)\, \tilde h_j(x)\, d\mathbb{P}(x).$$
(With a slight abuse of notation, we use P to refer to the measure over Rp as well as the induced marginal
measure in each direction defined over R). Without loss of generality (re-centering the functions as needed), we
may assume
$$\mathbb{E}[h_j(X)] = \int_{\mathbb{R}} h_j(x)\, d\mathbb{P}(x) = 0,$$
for all $h_j \in \mathcal{H}$. As a consequence, we have $\mathbb{E}[f(X_1, \ldots, X_p)] = 0$ for all functions $f \in \mathcal{F}_0(s)$. Given our assumption that the covariate vector $X = (X_1, \ldots, X_p)$ has independent components, the $L^2(\mathbb{P})$ inner product on $\mathcal{F}$ has the additive decomposition $\langle f, \tilde f \rangle_{L^2(\mathbb{P})} = \sum_{j=1}^p \langle h_j, \tilde h_j \rangle_{L^2(\mathbb{P})}$. (Note that if independence were not assumed, the $L^2(\mathbb{P})$ inner product over $\mathcal{F}$ would involve cross-terms.)
2.2 Kullback-Leibler divergence
Since we are using information-theoretic techniques, we will be using the Kullback-Leibler (KL) divergence as a measure of "distance" between distributions. For a given pair of functions $f$ and $\tilde f$, consider the $n$-dimensional vectors $f(X) = \big(f(X^{(1)}), f(X^{(2)}), \ldots, f(X^{(n)})\big)^T$ and $\tilde f(X) = \big(\tilde f(X^{(1)}), \tilde f(X^{(2)}), \ldots, \tilde f(X^{(n)})\big)^T$. Since $Y \mid f(X) \sim N(f(X), \sigma^2 I_{n \times n})$ and $Y \mid \tilde f(X) \sim N(\tilde f(X), \sigma^2 I_{n \times n})$,
$$D\big(Y \mid f(X) \,\|\, Y \mid \tilde f(X)\big) = \frac{1}{2\sigma^2}\, \| f(X) - \tilde f(X) \|_2^2. \qquad (4)$$
We also use the notation $D(f \,\|\, \tilde f)$ to mean the average KL divergence between the distributions of $Y$ induced by the functions $f$ and $\tilde f$ respectively. Therefore we have the relation
$$D(f \,\|\, \tilde f) = \mathbb{E}_X\, D\big(Y \mid f(X) \,\|\, Y \mid \tilde f(X)\big) = \frac{n}{2\sigma^2}\, \| f - \tilde f \|^2_{L^2(\mathbb{P})}. \qquad (5)$$
This relation between average KL divergence and squared $L^2(\mathbb{P})$ distance plays an important role in our proof.
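For completeness, here is the one-line computation behind (5), which is left implicit above: taking expectations over the i.i.d. covariates in (4), each of the $n$ coordinates contributes the same squared $L^2(\mathbb{P})$ distance.

```latex
\mathbb{E}_X\Big[\frac{1}{2\sigma^2}\,\|f(X)-\tilde f(X)\|_2^2\Big]
 = \frac{1}{2\sigma^2}\sum_{i=1}^{n}\mathbb{E}\big[(f(X^{(i)})-\tilde f(X^{(i)}))^2\big]
 = \frac{n}{2\sigma^2}\,\|f-\tilde f\|_{L^2(\mathbb{P})}^2 .
```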
2.3 Metric entropy for function classes
In this section, we define the notion of metric entropy, which provides a way in which to measure the relative
sizes of different function classes with respect to some metric $\rho$. More specifically, central to our results is the metric entropy of $\mathcal{F}_0(s)$ with respect to the $L^2(\mathbb{P})$ norm.
Definition 1 (Covering and packing numbers). Consider a metric space consisting of a set $S$ and a metric $\rho : S \times S \to \mathbb{R}_+$.
(a) An $\epsilon$-covering of $S$ in the metric $\rho$ is a collection $\{f^1, \ldots, f^N\} \subset S$ such that for all $f \in S$, there exists some $i \in \{1, \ldots, N\}$ with $\rho(f, f^i) \leq \epsilon$. The $\epsilon$-covering number $N_\rho(\epsilon)$ is the cardinality of the smallest $\epsilon$-covering.
(b) An $\epsilon$-packing of $S$ in the metric $\rho$ is a collection $\{f^1, \ldots, f^M\} \subset S$ such that $\rho(f^i, f^j) \geq \epsilon$ for all $i \neq j$. The $\epsilon$-packing number $M_\rho(\epsilon)$ is the cardinality of the largest $\epsilon$-packing.
The covering and packing entropy (denoted by $\log N_\rho(\epsilon)$ and $\log M_\rho(\epsilon)$ respectively) are simply the logarithms of the covering and packing numbers, respectively. It can be shown that for any convex set, the quantities $\log N_\rho(\epsilon)$ and $\log M_\rho(\epsilon)$ are of the same order (within constant factors independent of $\epsilon$).
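Covering and packing numbers are rarely computed exactly, but for a finite point cloud a simple greedy pass produces a feasible packing whose maximality also certifies a covering at the same radius. The sketch below is a generic illustration of Definition 1, not part of the paper's argument; the point cloud and radius are arbitrary.

```python
import numpy as np

def greedy_packing(points, eps):
    """Greedily select a maximal eps-packing of a finite set under Euclidean distance.

    Every selected pair of centers is >= eps apart; by maximality, every input
    point is within eps of some center, so the centers are also an eps-covering.
    """
    centers = []
    for x in points:
        if all(np.linalg.norm(x - c) >= eps for c in centers):
            centers.append(x)
    return np.array(centers)

rng = np.random.default_rng(1)
cloud = rng.uniform(-1, 1, size=(2000, 2))
pack = greedy_packing(cloud, eps=0.25)
print(len(pack), "centers; feasible packing entropy log M >=", np.log(len(pack)))
```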
In this paper, we are interested in packing (and covering) subsets of the function class $\mathcal{F}_0(s)$ in the $L^2(\mathbb{P})$ metric, and so drop the subscript $\rho$ from here onwards. En route to characterizing the metric entropy of $\mathcal{F}_0(s)$, we need to understand the metric entropy of the unit balls of our univariate function class $\mathcal{H}$, namely the sets
$$B_{\mathcal{H}}(1) := \{ h \in \mathcal{H} \mid \|h\|_{\mathcal{H}} \leq 1 \}.$$
The metric entropy (both covering and packing entropy) for many classes of functions is known. We provide some concrete examples here:
(i) Consider the class $\mathcal{H} = \{h_\theta : \mathbb{R} \to \mathbb{R} \mid h_\theta(x) = \theta x\}$ of all univariate linear functions with the norm $\|h_\theta\|_{\mathcal{H}} = |\theta|$. Then it is known [15] that the metric entropy of $B_{\mathcal{H}}(1)$ scales as $\log M(\epsilon; \mathcal{H}) \asymp \log(1/\epsilon)$.
(ii) Consider the class $\mathcal{H} = \{h : [0,1] \to [0,1] \mid |h(x) - h(y)| \leq |x - y|\}$ of all 1-Lipschitz functions on $[0,1]$ with the norm $\|h\|_{\mathcal{H}} = \sup_{x \in [0,1]} |h(x)|$. In this case, it is known [15] that the metric entropy scales as $\log M(\epsilon; \mathcal{H}) \asymp 1/\epsilon$. Compared to the previous example of linear models, note that the metric entropy grows much faster as $\epsilon \to 0$, indicating that the class of Lipschitz functions is much richer.
(iii) Consider the class of Sobolev spaces $W^m$ for $m \geq 1$, consisting of all functions that have $m$ derivatives, with the $m$th derivative bounded in $L^2(\mathbb{P})$ norm. In this case, it is known that $\log M(\epsilon; \mathcal{H}) \asymp \epsilon^{-1/m}$ (e.g., [3]). Clearly, increasing the smoothness constraint $m$ leads to smaller classes. Such Sobolev spaces are a particular class of functions whose packing/covering entropy grows at a rate polynomial in $1/\epsilon$.
In our analysis, we require that the metric entropy of $B_{\mathcal{H}}(1)$ satisfy the following technical condition:
Assumption 1. Using $\log M(\epsilon; \mathcal{H})$ to denote the packing entropy of the unit ball $B_{\mathcal{H}}(1)$ in the $L^2(\mathbb{P})$-norm, assume that there exists some $\alpha \in (0, 1)$ such that
$$\lim_{\epsilon \to 0} \frac{\log M(\alpha\epsilon; \mathcal{H})}{\log M(\epsilon; \mathcal{H})} > 1.$$
The condition is required to ensure that $\log M(c\epsilon)/\log M(\epsilon)$ can be made arbitrarily small or large uniformly over small $\epsilon$ by changing $c$, so that a bound due to Yang and Barron [17] can be applied. It is satisfied for most
non-parametric classes, including (for instance) the Lipschitz and Sobolev classes defined in Examples (ii) and
(iii) above. It may fail to hold for certain parametric classes, such as the set of linear functions considered
in Example (i); however, we can use an alternative technique to derive bounds for the parametric case (see
Corollary 2).
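As a quick sanity check (not spelled out in the text), Assumption 1 can be verified directly for the polynomial-entropy classes of Example (iii): if $\log M(\epsilon; \mathcal{H}) \asymp \epsilon^{-1/m}$, then

```latex
\lim_{\epsilon \to 0}\frac{\log M(\alpha\epsilon;\mathcal{H})}{\log M(\epsilon;\mathcal{H})}
 = \lim_{\epsilon \to 0}\frac{(\alpha\epsilon)^{-1/m}}{\epsilon^{-1/m}}
 = \alpha^{-1/m} > 1 \qquad \text{for any fixed } \alpha \in (0,1).
```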
3 Main result and some consequences
In this section, we state our main result and then develop some of its consequences. We begin with a theorem
that covers the function class $\mathcal{F}_0(s)$ in which the univariate function classes $\mathcal{H}$ have metric entropy satisfying Assumption 1. We state a corollary for the special case of univariate classes $\mathcal{H}$ with metric entropy growing polynomially in $(1/\epsilon)$, and also a corollary for the special case of sparse linear regression.
Consider the observation model (2) where the covariate vectors have i.i.d. elements $X_j \sim \mathbb{P}$, and the regression function $f^* \in \mathcal{F}_0(s)$. Suppose that the univariate function class $\mathcal{H}$ that underlies $\mathcal{F}_0(s)$ satisfies Assumption 1.
Under these conditions, we have the following result:
Theorem 1. Given $n$ i.i.d. samples from the sparse additive model (2), the minimax risk in squared $L^2(\mathbb{P})$ norm is lower bounded as
$$\min_{\hat f} \max_{f^* \in \mathcal{F}_0(s)} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})} \;\geq\; \max\Big( \sigma^2\,\frac{s \log(p/s)}{32\, n}, \; \frac{s}{16}\,\epsilon_n^2(\mathcal{H}) \Big), \qquad (6)$$
where, for a fixed constant $c$, the quantity $\epsilon_n(\mathcal{H}) = \epsilon_n > 0$ is the largest positive number satisfying the inequality
$$\frac{n \epsilon_n^2}{2\sigma^2} \;\leq\; \log M(c\,\epsilon_n). \qquad (7)$$
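The critical radius defined by (7) can be found numerically. The sketch below solves $\frac{n\epsilon^2}{2\sigma^2} = (c\epsilon)^{-1/m}$ by bisection for a Sobolev-type entropy $\log M(\epsilon) = \epsilon^{-1/m}$; the constants $\sigma = c = 1$ are arbitrary choices for illustration. The last printed column stays roughly constant, confirming the predicted $\epsilon_n \asymp n^{-m/(2m+1)}$ scaling.

```python
import numpy as np

def critical_radius(n, m, sigma=1.0, c=1.0, lo=1e-8, hi=1.0):
    """Bisection for eps solving n*eps^2/(2*sigma^2) = (c*eps)**(-1/m)."""
    gap = lambda e: n * e**2 / (2 * sigma**2) - (c * e) ** (-1.0 / m)
    for _ in range(200):                     # gap(e) is increasing in e
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if gap(mid) > 0 else (mid, hi)
    return hi

m = 2
for n in [10**3, 10**4, 10**5]:
    eps = critical_radius(n, m)
    print(n, eps, eps * n ** (m / (2 * m + 1)))   # last column ~ constant
```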
For the case where $\mathcal{H}$ has an entropy that grows to $\infty$ at a polynomial rate as $\epsilon \to 0$, say $\log M(\epsilon; \mathcal{H}) = \Theta(\epsilon^{-1/m})$ for some $m > \frac{1}{2}$, we can compute the rate for the $s$-dimensional estimation term explicitly.
Corollary 1. For the sparse additive model (2) with univariate function space $\mathcal{H}$ such that $\log M(\epsilon; \mathcal{H}) = \Theta(\epsilon^{-1/m})$, we have
$$\min_{\hat f} \max_{f^* \in \mathcal{F}_0(s)} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})} \;\geq\; \max\Big( \sigma^2\,\frac{s \log(p/s)}{32\, n}, \; C\, s \Big(\frac{\sigma^2}{n}\Big)^{\frac{2m}{2m+1}} \Big), \qquad (8)$$
for some $C > 0$.
3.1 Some consequences
In this section, we discuss some consequences of our results.
Effect of smoothness: Focusing on Corollary 1, for spaces with $m$ bounded derivatives (i.e., functions in the Sobolev space $W^m$), the minimax rate is $n^{-\frac{2m}{2m+1}}$ (for details, see e.g. Stone [13]). Clearly, faster rates are obtained for larger smoothness indices $m$, and as $m \to \infty$, the rate approaches the parametric rate of $n^{-1}$. Since we are estimating over an $s$-dimensional space (under the assumption of independence), we are effectively estimating $s$ univariate functions, each lying within the function space $\mathcal{H}$. Therefore the uni-dimensional rate is multiplied by $s$.
Smoothness versus sparsity: It is worth noting that depending on the relative scalings of $s$, $n$ and $p$ and the metric entropy of $\mathcal{H}$, it is possible for either the subset selection term or the $s$-dimensional estimation term to dominate the lower bound. In general, if $\frac{\log(p/s)}{n} = o(\epsilon_n^2(\mathcal{H}))$, the $s$-dimensional estimation term dominates, and vice versa (at the boundary, either term determines the minimax rate). In the case of a univariate function class $\mathcal{H}$ with polynomial entropy as in Corollary 1, it can be seen that for $n = o((\log(p/s))^{2m+1})$ the $s$-dimensional estimation term dominates, while for $n = \Omega((\log(p/s))^{2m+1})$ the subset selection term dominates.
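The crossover between the two regimes can be seen numerically. The sketch below evaluates both terms of the bound for a Sobolev-type class, with all constants set to one (an arbitrary normalization, for illustration only).

```python
import numpy as np

def lower_bound_terms(n, p, s, m):
    subset_selection = s * np.log(p / s) / n          # first term of the bound
    estimation = s * n ** (-2 * m / (2 * m + 1))      # s-dimensional term, W^m class
    return subset_selection, estimation

p, s, m = 10**6, 10, 2
for n in [10**2, 10**4, 10**6]:
    sel, est = lower_bound_terms(n, p, s, m)
    print(n, "selection" if sel > est else "estimation", "term dominates")
```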
Rates for linear models: Using an alternative proof technique (not the one used in this paper), it is possible [11] to derive the exact minimax rate for estimation in the sparse linear regression model, in which we observe
$$y^{(i)} = \sum_{j \in S} \beta_j X_j^{(i)} + w^{(i)}, \qquad \text{for } i = 1, 2, \ldots, n. \qquad (9)$$
Note that this is a special case of the general model (2) in which $\mathcal{H}$ corresponds to the class of univariate linear functions (see Example (i)).
Corollary 2. For the sparse linear regression model (9), the minimax rate scales as $\max\big(\frac{s \log(p/s)}{n}, \frac{s}{n}\big)$.
In this case, we see clearly that the subset selection term dominates as $p \to \infty$, meaning the subset selection problem is always "harder" (in a statistical sense) than the $s$-dimensional estimation problem. As shown by Bickel et al. [1], the rate achieved by $\ell_1$-regularized methods is $\frac{s \log p}{n}$ under suitable conditions on the covariates $X$.
Upper bounds: To show that the lower bounds are tight, matching upper bounds need to be derived. Upper bounds (matching up to constant factors) can be derived via a classical information-theoretic approach (e.g., [5, 2]), which involves constructing an estimator based on a covering set and bounding the covering entropy of $\mathcal{F}_0(s)$. While this estimation approach does not lead to an implementable algorithm, it is a simple theoretical device to demonstrate that lower bounds are tight. We turn our focus to implementable algorithms in the next point.
Comparison to existing bounds: We now provide a brief comparison of the minimax lower bounds with upper bounds on rates achieved by existing implementable algorithms provided by past work [12, 7, 9]. Ravikumar et al. [12] propose a back-fitting algorithm to minimize the least-squares objective with a sparsity constraint on the function $f$. The rates derived in Koltchinskii and Yuan [7] do not match the lower bounds derived in Theorem 1. Further, it is difficult to directly compare the rates in Ravikumar et al. [12] and Meier et al. [9] with our minimax lower bounds, since their analysis does not explicitly track the sparsity index $s$. We are currently in the process of conducting a thorough comparison with the above-mentioned $\ell_1$-based methods.
4 Proof outline
In this section, we provide an outline of the proof of Theorem 1; due to space constraints, we defer some of
the technical details to the full-length version. The proof is based on a combination of information-theoretic
techniques and the concepts of packing and covering entropy, as defined previously in Section 2.3. First, we
provide a high-level overview of the proof. The basic idea is to carefully choose two subsets T1 and T2 of the
function class F0 (s) and lower bound the minimax rates over these two subsets. In Section 4.1, application of
the generalized Fano method, a technique based on Fano's inequality, to the set $T_1$ defined in equation (10) yields a lower bound on the subset selection term. In Section 4.2, we apply an alternative method for obtaining lower bounds over a second set $T_2$ defined in equation (11) that captures the difficulty of estimating the sum of $s$ univariate functions. The second technique also exploits Fano's inequality but uses a more refined upper bound on the mutual information developed by Yang and Barron [17].
Before proceeding, we first note that for any $T \subseteq \mathcal{F}_0(s)$, we have
$$\min_{\hat f} \max_{f^* \in \mathcal{F}_0(s)} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})} \;\geq\; \min_{\hat f} \max_{f^* \in T} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})}.$$
Moreover, for any subsets $T_1, T_2 \subseteq \mathcal{F}_0(s)$, we have
$$\min_{\hat f} \max_{f^* \in \mathcal{F}_0(s)} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})} \;\geq\; \max\Big( \min_{\hat f} \max_{f^* \in T_1} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})}, \;\; \min_{\hat f} \max_{f^* \in T_2} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})} \Big),$$
since the bound holds for each of the two terms. We apply this lower bound using the subsets $T_1$ and $T_2$ defined in equations (10) and (11).
4.1 Bounding the complexity of subset selection
For part of the proof, we use the generalized Fano method [4], which we state below without proof. Given some parameter space, we let $d$ be a metric on it.
Lemma 1 (Generalized Fano Method). For a given integer $r \geq 2$, consider a collection $\mathcal{M}_r = \{P_1, \ldots, P_r\}$ of $r$ probability distributions such that
$$d(\theta(P_i), \theta(P_j)) \geq \alpha_r \quad \text{for all } i \neq j,$$
and the pairwise KL divergence satisfies
$$D(P_i \,\|\, P_j) \leq \beta_r \quad \text{for all } i, j = 1, \ldots, r.$$
Then the minimax risk over the family is lower bounded as
$$\max_j \mathbb{E}_j\, d(\theta(P_j), \hat\theta) \;\geq\; \frac{\alpha_r}{2}\Big(1 - \frac{\beta_r + \log 2}{\log r}\Big).$$
The proof of Lemma 1 involves applying Fano's inequality over the discrete set of parameters $\theta \in \Theta$ indexed by the set of distributions $\mathcal{M}_r$. Now we construct the set $T_1$ which creates the set of probability distributions $\mathcal{M}_r$.
Let $g$ be an arbitrary function in $\mathcal{H}$ such that $\|g\|_{L^2(\mathbb{P})} = \frac{\sigma}{4}\sqrt{\frac{\log(p/s)}{n}}$. The set $T_1$ is defined as
$$T_1 := \Big\{ f : f(X_1, X_2, \ldots, X_p) = \sum_{j=1}^p c_j\, g(X_j), \; c_j \in \{-1, 0, 1\}, \; \|c\|_0 = s \Big\}. \qquad (10)$$
$T_1$ may be viewed as a hypercube of $\mathcal{F}_0(s)$ and will lead to the lower bound for the "subset selection" term. This hypercube construction is often used to prove lower bounds (see Yu [18]). Next, we require a further reduction of the set $T_1$ to a set $A$ (defined in Lemma 2) to ensure that elements of $A$ are well-separated in $L^2(\mathbb{P})$ norm.
The construction of A is as follows:
Lemma 2. There exists a subset $A \subseteq T_1$ such that:
(i) $\log |A| \geq \frac{1}{2}\, s \log(p/s)$,
(ii) $\|f - f'\|^2_{L^2(\mathbb{P})} \geq \frac{\sigma^2 s \log(p/s)}{16\, n}$ for all $f, f' \in A$, and
(iii) $D(f \,\|\, f') \leq \frac{1}{8}\, s \log(p/s)$ for all $f, f' \in A$.
The proof involves using a combinatorial argument to construct the set $A$. For an argument on how the set is constructed, see Kühn [8]. For $s \log\frac{p}{s} \geq 8 \log 2$, applying the Generalized Fano Method (Lemma 1) together with Lemma 2 yields the bound
$$\min_{\hat f} \max_{f^* \in \mathcal{F}_0(s)} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})} \;\geq\; \min_{\hat f} \max_{f^* \in A} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})} \;\geq\; \frac{\sigma^2 s \log(p/s)}{32\, n}.$$
This completes the proof for the subset selection term $\big(\frac{s \log(p/s)}{n}\big)$ in Theorem 1.
4.2 Bounding the complexity of s-dimensional estimation
Next we derive a bound for the $s$-dimensional estimation term by determining a lower bound over $T_2$. Let $S$ be an arbitrary subset of $s$ integers in $\{1, 2, \ldots, p\}$, and define the set $\mathcal{F}_S$ as
$$T_2 := \mathcal{F}_S := \Big\{ f \in \mathcal{F} : f(X) = \sum_{j \in S} h_j(X_j) \Big\}. \qquad (11)$$
Clearly $\mathcal{F}_S \subseteq \mathcal{F}_0(s)$, meaning that
$$\min_{\hat f} \max_{f^* \in \mathcal{F}_0(s)} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})} \;\geq\; \min_{\hat f} \max_{f^* \in \mathcal{F}_S} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})}.$$
We use a technique from Yang and Barron [17] to lower bound the minimax rate over $\mathcal{F}_S$. The idea is to construct a maximal $\epsilon_n$-packing set for $\mathcal{F}_S$ and a minimal $\delta_n$-covering set for $\mathcal{F}_S$, and then to apply Fano's inequality to a carefully chosen mixture distribution involving the covering and packing sets (see the full-length version for details). Following these steps yields the following result:
Lemma 3.
$$\min_{\hat f} \max_{f^* \in \mathcal{F}_S} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})} \;\geq\; \frac{\epsilon_n^2}{4}\Big(1 - \frac{\log N(\delta_n; \mathcal{F}_S) + n\delta_n^2/2\sigma^2 + \log 2}{\log M(\epsilon_n; \mathcal{F}_S)}\Big).$$
Now we have a bound with expressions involving the covering and packing entropies of the $s$-dimensional space $\mathcal{F}_S$. The following lemma allows bounds on $\log M(\epsilon; \mathcal{F}_S)$ and $\log N(\epsilon; \mathcal{F}_S)$ in terms of the one-dimensional packing and covering entropies respectively:
Lemma 4. Let $\mathcal{H}$ be a function space with a packing entropy $\log M(\epsilon; \mathcal{H})$ that satisfies Assumption 1. Then we have the bounds
$$\log M(\epsilon; \mathcal{F}_S) \geq s \log M(\epsilon/\sqrt{s}; \mathcal{H}), \quad \text{and} \quad \log N(\epsilon; \mathcal{F}_S) \leq s \log N(\epsilon/\sqrt{s}; \mathcal{H}).$$
The proof involves constructing $\epsilon/\sqrt{s}$-packing and covering sets in each of the $s$ dimensions and showing that these are $\epsilon$-packing and covering sets in $\mathcal{F}_S$ (respectively). Combining Lemmas 3 and 4 leads to the inequality
$$\min_{\hat f} \max_{f^* \in \mathcal{F}_S} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})} \;\geq\; \frac{\epsilon_n^2}{4}\Big(1 - \frac{s \log N(\delta_n/\sqrt{s}; \mathcal{H}) + n\delta_n^2/2\sigma^2 + \log 2}{s \log M(\epsilon_n/\sqrt{s}; \mathcal{H})}\Big). \qquad (12)$$
Now we choose $\delta_n$ and $\epsilon_n$ to meet the following constraints:
$$\frac{n}{2\sigma^2}\,\delta_n^2 \;\leq\; s \log N\Big(\frac{\delta_n}{\sqrt{s}}; \mathcal{H}\Big), \qquad (13a)$$
and
$$4 \log N\Big(\frac{\delta_n}{\sqrt{s}}; \mathcal{H}\Big) \;\leq\; \log M\Big(\frac{\epsilon_n}{\sqrt{s}}; \mathcal{H}\Big). \qquad (13b)$$
Combining Assumption 1 with the well-known relations $\log M(2\epsilon; \mathcal{H}) \leq \log N(2\epsilon; \mathcal{H}) \leq \log M(\epsilon; \mathcal{H})$, we conclude that in order to satisfy inequalities (13a) and (13b), it is sufficient to choose $\epsilon_n = c\,\delta_n$ for a constant $c$, and then require that $s \log M\big(\frac{c\,\delta_n}{\sqrt{s}}; \mathcal{H}\big) \geq \frac{n\delta_n^2}{2\sigma^2}$. Furthermore, if we define $\tilde\epsilon_n = \delta_n/\sqrt{s}$, then this inequality can be re-expressed as $\log M(c\,\tilde\epsilon_n) \geq \frac{n\tilde\epsilon_n^2}{2\sigma^2}$. For $\frac{n}{2\sigma^2}\delta_n^2 \geq \log 2$, using inequalities (13a) and (13b) together with equation (12) yields the desired rate
$$\min_{\hat f} \max_{f^* \in \mathcal{F}_S} \mathbb{E}\|\hat f - f^*\|^2_{L^2(\mathbb{P})} \;\geq\; \frac{s\,\tilde\epsilon_n^2}{16},$$
thereby completing the proof.
5 Discussion
In this paper, we have derived lower bounds for the minimax risk in squared $L^2(\mathbb{P})$ error for estimating sparse additive models based on the sum of univariate functions from a function class $\mathcal{H}$. The rates show that the estimation problem effectively decomposes into a subset selection problem and an $s$-dimensional estimation problem, and the "harder" of the two problems (in a statistical sense) determines the rate of convergence. More concretely, we demonstrated that the subset selection term scales as $\frac{s \log(p/s)}{n}$, depending linearly on the number of components $s$ and only logarithmically on the ambient dimension $p$. This subset selection term is independent of the univariate function space $\mathcal{H}$. On the other hand, the $s$-dimensional estimation term depends on the "richness" of the univariate function class, measured by its metric entropy; it scales linearly with $s$ and is independent of $p$. Ongoing work suggests that our lower bounds are tight in many cases, meaning that the rates derived in Theorem 1 are minimax optimal for many function classes.
There are a number of ways in which the work can be extended. One implicit and strong assumption in our
analysis was that the covariates Xj , j = 1, 2, ..., p are independent. It would be interesting to investigate the case
when the random variables are endowed with some correlation structure. One would expect the rates to change,
particularly if many of the variables are collinear. It would also be interesting to develop a more complete
understanding of whether computationally efficient algorithms [7, 12, 9] based on regularization achieve the
lower bounds on the minimax rate derived in this paper.
References
[1] P. Bickel, Y. Ritov, and A. Tsybakov. Simultaneous analysis of the Lasso and Dantzig selector. Annals of Statistics, 2009. To appear.
[2] L. Birgé. Approximation dans les espaces métriques et théorie de l'estimation. Z. Wahrsch. verw. Gebiete, 65:181-327, 1983.
[3] M. S. Birman and M. Z. Solomjak. Piecewise-polynomial approximations of functions of the classes $W_p^\alpha$. Math. USSR-Sbornik, 2(3):295-317, 1967.
[4] T. S. Han and S. Verdu. Generalizing the Fano inequality. IEEE Transactions on Information Theory, 40:1247-1251, 1994.
[5] R. Z. Has'minskii. A lower bound on the risks of nonparametric estimates of densities in the uniform metric. Theory Prob. Appl., 23:794-798, 1978.
[6] T. Hastie and R. Tibshirani. Generalized Additive Models. Chapman and Hall Ltd, Boca Raton, 1999.
[7] V. Koltchinskii and M. Yuan. Sparse recovery in large ensembles of kernel machines. In Proceedings of COLT, 2008.
[8] T. Kühn. A lower estimate for entropy numbers. Journal of Approximation Theory, 110:120-124, 2001.
[9] L. Meier, S. van de Geer, and P. Buhlmann. High-dimensional additive modeling. Annals of Statistics, to appear.
[10] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. Annals of Statistics, 37(1):246-270, 2009.
[11] G. Raskutti, M. J. Wainwright, and B. Yu. Minimax rates of estimation for high-dimensional linear regression over $\ell_q$-balls. Technical Report arXiv:0910.2042, UC Berkeley, Department of Statistics, 2009.
[12] P. Ravikumar, H. Liu, J. Lafferty, and L. Wasserman. Sparse additive models. Journal of the Royal Statistical Society, to appear.
[13] C. J. Stone. Optimal global rates of convergence for nonparametric regression. Annals of Statistics, 10:1040-1053, 1982.
[14] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
[15] S. van de Geer. Empirical Processes in M-Estimation. Cambridge University Press, 2000.
[16] M. J. Wainwright. Information-theoretic bounds for sparsity recovery in the high-dimensional and noisy setting. IEEE Trans. Info. Theory, December 2009. Presented at the International Symposium on Information Theory, June 2007.
[17] Y. Yang and A. Barron. Information-theoretic determination of minimax rates of convergence. Annals of Statistics, 27(5):1564-1599, 1999.
[18] B. Yu. Assouad, Fano and Le Cam. Research Papers in Probability and Statistics: Festschrift in Honor of Lucien Le Cam, pages 423-435, 1996.
[19] C. H. Zhang and J. Huang. The sparsity and bias of the lasso selection in high-dimensional linear regression. Annals of Statistics, 36:1567-1594, 2006.
Information-theoretic lower bounds on the oracle
complexity of convex optimization
Alekh Agarwal
Computer Science Division
UC Berkeley
[email protected]
Peter Bartlett
Computer Science Division
Department of Statistics
UC Berkeley
[email protected]
Pradeep Ravikumar
Department of Computer Sciences
UT Austin
[email protected]
Martin J. Wainwright
Department of EECS, and
Department of Statistics
UC Berkeley
[email protected]
Abstract
Despite a large literature on upper bounds on complexity of convex optimization,
relatively less attention has been paid to the fundamental hardness of these problems. Given the extensive use of convex optimization in machine learning and
statistics, gaining an understanding of these complexity-theoretic issues is important. In this paper, we study the complexity of stochastic convex optimization
in an oracle model of computation. We improve upon known results and obtain
tight minimax complexity estimates for various function classes. We also discuss implications of these results for understanding the inherent complexity of
large-scale learning and estimation problems.
1 Introduction
Convex optimization forms the backbone of many algorithms for statistical learning and estimation.
In large-scale learning problems, in which the problem dimension and/or data are large, it is essential to exploit bounded computational resources in a (near)-optimal manner. For such problems,
understanding the computational complexity of convex optimization is a key issue.
A large body of literature is devoted to obtaining rates of convergence of specific procedures for
various classes of convex optimization problems. A typical outcome of such analysis is an upper
bound on the error (for instance, the gap to the optimal cost) as a function of the number of iterations. Such analyses have been performed for many standard optimization algorithms, among them gradient descent, mirror descent, interior point programming, and stochastic gradient descent, to name a
few. We refer the reader to standard texts on optimization (e.g., [4, 1, 10]) for further details on such
results.
On the other hand, there has been relatively little study of the inherent complexity of convex optimization problems. To the best of our knowledge, the first formal study in this area was undertaken
in the seminal work of Nemirovski and Yudin [8] (hereafter referred to as NY). One obstacle to
a classical complexity-theoretic analysis, as the authors observed, was that of casting convex optimization problems in a Turing Machine model. They avoided this problem by instead considering a
natural oracle model of complexity in which at every round, the optimization procedure queries an
oracle for certain information on the function being optimized. Working within this framework, the
authors obtained a series of lower bounds on the computational complexity of convex optimization
problems. In addition to the original text NY [8], we refer the reader to Nesterov [10] or the lecture
notes by Nemirovski [7].
In this paper, we consider the computational complexity of stochastic convex optimization in the oracle model. Our results lead to a characterization of the inherent difficulty of learning and estimation
problems when computational resources are constrained. In particular, we improve upon the work of
NY [8] in two ways. First, our lower bounds have an improved dependence on the dimension of the
space. In the context of statistical estimation, these bounds show how the difficulty of the estimation
problem increases with the number of parameters. Second, our techniques naturally extend to give
sharper results for optimization over simpler function classes. For instance, they show that the optimal oracle complexity of statistical estimation with quadratic loss is significantly smaller than the
corresponding complexity with absolute loss. Our proofs exploit a new notion of the discrepancy
between two functions that appears to be natural for optimization problems. They are based on a
reduction from a statistical parameter estimation problem to the stochastic optimization problem,
and an application of information-theoretic lower bounds for the estimation problem.
2 Background and problem formulation
In this section, we introduce background on the oracle model of complexity for convex optimization,
and then define the oracles considered in this paper.
2.1 Convex optimization in the oracle model
Convex optimization is the task of minimizing a convex function $f$ over a convex set $S \subseteq \mathbb{R}^d$. Assuming that the minimum is achieved, it corresponds to computing an element $x^*_f$ that achieves the minimum, that is, $x^*_f \in \arg\min_{x \in S} f(x)$. An optimization method is any procedure that solves this task, typically by repeatedly selecting values from $S$. Our primary focus in this paper is the following question: given any class of convex functions $\mathcal{F}$, what is the minimum computational labor any such optimization method would expend for any function in $\mathcal{F}$?
In order to address this question, we follow the approach of Nemirovski and Yudin [8], based on the
oracle model of optimization. More precisely, an oracle is a (possibly random) function $\phi : S \mapsto \mathcal{I}$ that answers any query $x \in S$ by returning an element $\phi(x)$ in an information set $\mathcal{I}$. The information set varies depending on the oracle; for instance, for an exact oracle of $k$th order, the answer to a query $x_t$ consists of $f(x_t)$ and the first $k$ derivatives of $f$ at $x_t$. For the case of stochastic oracles studied
in this paper, these values are corrupted with zero-mean noise with bounded variance.
Given some number of rounds T , an optimization method M designed to approximately minimize
the convex function $f$ over the convex set $S$ proceeds as follows: at any given round $t = 1, \ldots, T$, the method $M$ queries at $x_t \in S$, and the oracle reveals the information $\phi(x_t, f)$. The method then uses this information to decide at which point $x_{t+1}$ the next query should be made. For a given oracle function $\phi$, let $\mathcal{M}_T$ denote the class of all optimization methods $M$ that make $T$ queries according to the procedure outlined above. For any method $M \in \mathcal{M}_T$, we define its error on function $f$ after $T$ steps as
$$\epsilon(M, f, S, \phi) := f(x_T) - \inf_{x \in S} f(x) = f(x_T) - f(x^*_f), \qquad (1)$$
where $x_T$ is the method's query at time $T$. Note that by definition of $x^*_f$, this error is a non-negative quantity.
2.2 Minimax error
When the oracle is stochastic, the method's query $x_T$ at time $T$ is itself random, since it depends on the random answers provided by the oracle. In this case, the optimization error $\epsilon(M, f, S, \phi)$ is also a random variable. Accordingly, for the case of stochastic oracles, we measure the accuracy in terms of the expected value $\mathbb{E}_\phi[\epsilon(M, f, S, \phi)]$, where the expectation is taken over the oracle randomness. Given a class of functions $\mathcal{F}$, and the class $\mathcal{M}_T$ of optimization methods making $T$ oracle queries, we can define the minimax error
$$\epsilon^*(\mathcal{F}, S, \phi) := \inf_{M_T \in \mathcal{M}_T} \sup_{f \in \mathcal{F}} \mathbb{E}_\phi[\epsilon(M_T, f, S, \phi)]. \qquad (2)$$
Note that this definition depends on the optimization set $S$. In order to obtain uniform bounds, we define $\mathbb{S} := \{S \subseteq \mathbb{R}^d : S \text{ convex},\; \|x - y\|_\infty \leq 1 \text{ for } x, y \in S\}$, and consider the worst-case average error over all $S \in \mathbb{S}$, given by
$$\epsilon^*(\mathcal{F}, \phi) := \sup_{S \in \mathbb{S}} \epsilon^*(\mathcal{F}, S, \phi). \qquad (3)$$
In the sequel, we provide results for particular classes of oracles. So as to ease the notation, when the function $\phi$ is clear from the context, we simply write $\epsilon^*(\mathcal{F})$.
It is worth noting that oracle complexity measures only the number of queries to the oracle, for instance, the number of (approximate) function or gradient evaluations. However, it does not track
computational cost within each component of the oracle query (e.g., the actual flop count associated
with evaluating the gradient).
2.3 Types of Oracle
In this paper we study the class of stochastic first-order oracles, which we will denote simply by $\mathcal{O}$. For this class of oracles, the information set $\mathcal{I}$ consists of pairs of noisy function and gradient evaluations; consequently, any oracle $\phi$ in this class can be written as
$$\phi(x, f) = (\hat f(x), \hat g(x)), \qquad (4)$$
where $\hat f(x)$ and $\hat g(x)$ are random variables that are unbiased as estimators of the function and gradient values respectively (i.e., $\mathbb{E}\hat f(x) = f(x)$ and $\mathbb{E}\hat g(x) = \nabla f(x)$). Moreover, we assume that both $\hat f(x)$ and $\hat g(x)$ have variances bounded by one. When the gradient is not defined at $x$, the notation $\nabla f(x)$ should be understood to mean any arbitrary subgradient at $x$. Recall that a subgradient of a convex function $f$ is any vector $v \in \mathbb{R}^d$ such that
$$f(y) \geq f(x) + v^\top(y - x).$$
Stochastic gradient methods are popular examples of algorithms for such oracles.
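As a concrete example of such an algorithm, the following sketch runs projected stochastic subgradient descent against a stochastic first-order oracle of the form (4). The test objective, noise model, and step sizes are illustrative choices, not the paper's construction.

```python
import numpy as np

def sgd(oracle, project, x0, T):
    """Projected SGD: query the stochastic oracle at x_t, step along -g_hat."""
    x = np.array(x0, dtype=float)
    for t in range(1, T + 1):
        _f_hat, g_hat = oracle(x)               # unbiased value/gradient pair, as in (4)
        x = project(x - g_hat / np.sqrt(t))     # standard 1/sqrt(t) step size
    return x

d, rng = 10, np.random.default_rng(0)
f = lambda x: np.abs(x).sum() / d               # a convex 1-Lipschitz test function
oracle = lambda x: (f(x) + rng.normal(scale=0.1),                       # noisy value
                    np.sign(x) / d + rng.normal(scale=0.1, size=d))     # noisy subgradient
project = lambda x: np.clip(x, -0.5, 0.5)       # S = l_inf ball of radius 1/2
x_T = sgd(oracle, project, x0=np.full(d, 0.5), T=2000)
print(f(x_T))   # should be near the optimum f(0) = 0
```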
Notation: For the convenience of the reader, we collect here some notation used throughout the
paper. We use $x_1^t$ to refer to the sequence $(x_1, \ldots, x_t)$. We refer to the $i$-th coordinate of any vector $x \in \mathbb{R}^d$ as $x(i)$. For a convex set $S$, the radius of the largest inscribed $\ell_\infty$ ball is denoted as $r_\infty$. For a convex function $f$, its minimizer over a set $S$ will be denoted as $x^*_f$ when $S$ is obvious from context. We will often use the notation $x^*_\alpha$ to denote the minimizer of $f_\alpha$ if $\alpha$ is an index variable over a class. For two distributions $p$ and $q$, $\mathrm{KL}(p\|q)$ refers to the Kullback-Leibler divergence between the distributions. The notation $\mathbb{I}(A)$ is the 0-1 valued indicator random variable of the set (equivalently, event) $A$. For two vectors $\alpha, \beta \in \{-1, +1\}^d$, we define the Hamming distance $\Delta_H(\alpha, \beta) := \sum_{i=1}^d \mathbb{I}[\alpha_i \neq \beta_i]$.
3 Main results and their consequences
With the setup of stochastic convex optimization in place, we are now in a position to state the
main results of this paper. In particular, we provide some tight lower bounds on the complexity of
stochastic oracle optimization. We begin by analyzing the minimax oracle complexity of optimization for the class of convex Lipschitz functions. Recall that a function $f : \mathbb{R}^d \to \mathbb{R}$ is convex if for all $x, y \in \mathbb{R}^d$ and $\lambda \in (0, 1)$, we have the inequality $f(\lambda x + (1-\lambda)y) \leq \lambda f(x) + (1-\lambda) f(y)$. For some constant $L > 0$, we say that the function $f$ is $L$-Lipschitz on $S$ if $|f(x) - f(y)| \leq L\|x - y\|_\infty$ for all $x, y \in S$.
Before stating the results, we note that scaling the Lipschitz constant scales minimax optimization
error linearly. Hence, to keep our results scale-free, we consider 1-Lipschitz functions only. As the
diameter of $S$ is also bounded by 1, this automatically enforces that $|f(x)| \leq 1$ for all $x \in S$.
Theorem 1. Let $\mathcal{F}^C$ be the class of all bounded convex 1-Lipschitz functions on $\mathbb{R}^d$. Then there is a constant $c$ (independent of $d$) such that
$$\sup_{\phi \in \mathcal{O}} \epsilon^*(\mathcal{F}^C, \phi) \;\geq\; c\,\sqrt{\frac{d}{T}}. \qquad (5)$$
Remarks: This lower bound is tight in the minimax sense, since the method of stochastic gradient
descent attains a matching upper bound for all stochastic first order oracles for any convex set S
(see Chapter 5 of NY [8]). Also, even though this lower bound requires the oracle to have only
bounded variance, we will use an oracle based on Bernoulli random variables, which has all moments bounded. As a result, there is no hope of getting faster rates in a simple way by assuming bounds on higher moments for the oracle. This stands in interesting contrast to the case of having fewer than 2
bounded moments where we get slower rates (again, see Chapter 5 of NY [8]).
The above lower bound is obtained by considering the worst case over all convex sets. However,
we expect optimization over a smaller convex set to be easier than over a large set. Indeed, we can
easily obtain a corollary of Theorem 1 that quantifies this intuition.
Corollary 1. Let $\mathcal{F}^C$ be the class of all bounded convex 1-Lipschitz functions on $\mathbb{R}^d$. Let $S$ be a convex set such that it contains an $\ell_\infty$ ball of radius $r_\infty$ and is contained in an $\ell_\infty$ ball of radius $R_\infty$. Then there is a universal constant $c$ such that
$$\sup_{\phi \in \mathcal{O}} \epsilon^*(\mathcal{F}^C, S, \phi) \;\geq\; c\,\frac{r_\infty}{R_\infty}\sqrt{\frac{d}{T}}. \qquad (6)$$
Remark: The ratio $\frac{r_\infty}{R_\infty}$ is also common in results of [8], and is called the asphericity of $S$. As a particular application of the above corollary, consider $S$ to be the unit $\ell_2$ ball. Then $r_\infty = \frac{1}{\sqrt{d}}$ and $R_\infty = 1$, which gives a dimension-independent lower bound. This lower bound for the case of the $\ell_2$ ball is indeed tight, and is recovered by the stochastic gradient descent algorithm [8].
Just as optimization over simpler sets gets easier, optimization over simple function classes should
be easier too. A natural function class that has been studied extensively in the context of better upper
bounds is that of strongly convex functions. For any given norm $\|\cdot\|$ on $S$, a function $f$ is strongly convex with coefficient $\kappa$ if $f(x) \geq f(y) + \nabla f(y)^\top(x - y) + \frac{\kappa}{2}\|x - y\|^2$ for all $x, y \in S$.
For this class of functions, we obtain a smaller lower bound on the minimax oracle complexity of
optimization.
Theorem 2. Let $\mathcal{F}^S$ be the class of all bounded strongly convex and 1-Lipschitz functions on $\mathbb{R}^d$. Then there is a universal constant $c$ such that
$$\sup_{\phi \in \mathcal{O}} \epsilon^*(\mathcal{F}^S, \phi) \;\geq\; c\,\frac{d}{T}. \qquad (7)$$
Once again there is a matching upper bound, using stochastic gradient descent for example, when the strong convexity is with respect to the $\ell_2$ norm. The corollary depending on the geometry of $S$ follows again.
Corollary 2. Let $\mathcal{F}^S$ be the class of all bounded strongly convex 1-Lipschitz functions on $\mathbb{R}^d$. Let $S$ be a convex set such that it contains an $\ell_\infty$ ball of radius $r_\infty$ and is contained in an $\ell_\infty$ ball of radius $R_\infty$. Then there is a universal constant $c$ such that $\sup_{\phi \in \mathcal{O}} \epsilon^*(\mathcal{F}^S, S, \phi) \geq c\,\frac{r_\infty^2}{R_\infty^2}\,\frac{d}{T}$.
In comparison, Nemirovski and Yudin [8] obtained a lower bound scaling as $\Omega(1/\sqrt{T})$ for the class $\mathcal{F}^C$. Their bound applies only to the class $\mathcal{F}^C$, and does not provide any dimension dependence,
as opposed to the bounds provided here. Obtaining the correct dependence yields tight minimax
results, and allows us to highlight the dependence of bounds on the geometry of the set S. Our
proofs are information-theoretic in nature. We characterize the hardness of optimization in terms of
a relatively easy to compute complexity measure. As a result, our technique provides tight lower
bounds for smaller function classes like strongly convex functions rather easily. Indeed, we will also
state a result for general function classes.
3.1 An application to statistical estimation
We now describe a simple application of the results developed above to obtain results on the oracle
complexity of statistical estimation, where the typical setup is the following: given a convex loss
function $\ell$, a class of functions $\mathcal{F}$ indexed by a $d$-dimensional parameter $\theta$ so that $\mathcal{F} = \{f_\theta : \theta \in \mathbb{R}^d\}$, find a function $f \in \mathcal{F}$ such that $\mathbb{E}\ell(f) - \inf_{f \in \mathcal{F}} \mathbb{E}\ell(f) \leq \epsilon$. If the distribution were known,
this is exactly the problem of computing the $\epsilon$-accurate optimizer of a convex function, assuming the function class $\mathcal{F}$ is convex. Even though we do not have the distribution in practice, we typically are provided with i.i.d. samples from it, which can be used to obtain unbiased estimates of the value and gradients of the risk functional $\mathbb{E}\ell(f)$ for any given $f$. If indeed the computational model of the estimator were restricted to querying these values and gradients, then the lower bounds in the previous sections would apply. Our bounds then allow us to deduce the oracle complexity of statistical estimation problems in this realistic model. In particular, a case of interest is when we fix a convex loss function $\ell$ and consider the worst oracle complexity over all possible distributions
under which expectation is taken. From our bounds, it is straightforward to deduce:
- For the absolute loss $\ell(f(x), y) = |f(x) - y|$, the oracle complexity of $\epsilon$-accurate estimation over all possible distributions is $\Theta(d/\epsilon^2)$.
- For the quadratic loss $\ell(f(x), y) = (f(x) - y)^2$, the oracle complexity of $\epsilon$-accurate estimation over all possible distributions is $\Theta(d/\epsilon)$.
We can use such an analysis to determine the limits of statistical estimation under computational
constraints. Several authors have recently considered this problem [3, 9], and provided upper bounds
for particular algorithms. In contrast, our results provide algorithm-independent lower bounds on
the complexity of statistical estimation within the oracle model. An interesting direction for future
work is to broaden the oracle model so as to more accurately reflect the computational trade-offs in
learning and estimation problems, for instance by allowing a method to pay a higher price to query
an oracle with lower variance.
4 Proofs of results
We now turn to the proofs of our main results, beginning with a high-level outline of the main ideas
common to our proofs.
4.1 High-level outline
Our main idea is to embed the problem of estimating the parameter of a Bernoulli vector (alternatively, the biases of d coins) into a convex optimization problem. We start with an appropriately
chosen subset of the vertices of a d-dimensional hypercube each of which corresponds to some value
of the Bernoulli vector. For any given function class, we then construct a "difficult" subclass of
functions parameterized by these hypercube vertices. We then show that being able to optimize any
function in this subclass requires estimating its hypercube vertex, that is, the corresponding biases
of the d coins. But the only information for this estimation would be from the coin toss outcomes
revealed by the oracle in T queries. With this set-up, we are able to apply the Fano lower bound for
statistical estimation, as has been done in past work on nonparametric estimation (e.g., [5, 2, 11]).
In more detail, the proofs of Theorems 1 and 2 are both based on a common set of steps, which we
describe here.
Step I: Constructing a difficult subclass of functions. Our first step is to construct a subclass
of functions $\mathcal{G} \subseteq \mathcal{F}$ that we use to derive lower bounds. Any such subclass is parameterized by a subset $\mathcal{V} \subseteq \{-1, +1\}^d$ of the hypercube, chosen as follows. Recalling that $\Delta_H$ denotes the Hamming metric on the space $\{-1, +1\}^d$, we choose $\mathcal{V}$ to be a $d/4$-packing of this hypercube. That is, $\mathcal{V}$ is a subset of the hypercube such that for all $\alpha, \beta \in \mathcal{V}$, the Hamming distance satisfies $\Delta_H(\alpha, \beta) \geq d/4$. By standard arguments [6], we can construct such a packing set $\mathcal{V}$ with cardinality $|\mathcal{V}| \geq (2/\sqrt{e})^{d/2}$.
We then let $\mathcal{G}_{\text{base}} = \{f_i^+, f_i^-,\; i = 1, \ldots, d\}$ denote some base set of $2d$ functions (to be chosen depending on the problem at hand). Given the packing set $\mathcal{V}$ and some parameter $\delta \in [0, 1/4]$, we define a larger class (with a total of $|\mathcal{V}|$ functions) via $\mathcal{G}(\delta) := \{g_\alpha,\; \alpha \in \mathcal{V}\}$, where each function $g_\alpha \in \mathcal{G}(\delta)$ has the form
$$g_\alpha(x) = \frac{1}{d} \sum_{i=1}^d \Big[ (1/2 + \alpha_i \delta)\, f_i^+(x) + (1/2 - \alpha_i \delta)\, f_i^-(x) \Big]. \qquad (8)$$
In our proofs, the subclasses $\mathcal{G}_{\text{base}}$ and $\mathcal{G}(\delta)$ are chosen such that $\mathcal{G}(\delta) \subseteq \mathcal{F}$, the functions $f_i^+, f_i^-$ are bounded over the convex set $S$ with a Lipschitz constant independent of dimension $d$, and the minimizers $x^*_\alpha$ of $g_\alpha$ over $\mathbb{R}^d$ are contained in $S$ for all $\alpha \in \mathcal{V}$. We demonstrate specific choices in the proofs of Theorems 1 and 2.
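The $d/4$-packing $\mathcal{V}$ can be built by a simple greedy, Gilbert-Varshamov style procedure. The sketch below is one way to realize the construction, not the specific argument of [6]; the dimension and trial budget are arbitrary.

```python
import numpy as np

def hypercube_packing(d, min_dist, max_tries=20000, rng=None):
    """Greedily collect sign vectors whose pairwise Hamming distance is >= min_dist."""
    rng = np.random.default_rng(rng)
    V = []
    for _ in range(max_tries):
        alpha = rng.choice([-1, 1], size=d)
        if all(np.sum(alpha != beta) >= min_dist for beta in V):
            V.append(alpha)
    return V

d = 16
V = hypercube_packing(d, min_dist=d // 4, rng=0)
print(len(V), "vertices; the guarantee is |V| >= (2/sqrt(e))^(d/2) ~", (2 / np.e**0.5) ** (d / 2))
```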
Step II: Optimizing well is equivalent to function identification. In this step, we show that if a method can optimize over the subclass $\mathcal{G}(\delta)$ up to a certain tolerance $\psi(\mathcal{G}(\delta))$, then it must be capable of identifying which function $g_\alpha \in \mathcal{G}(\delta)$ was chosen. We first require a measure for the closeness of functions in terms of their behavior near each other's minima. Recall that we use $x^*_f \in \mathbb{R}^d$ to denote a minimizing point of the function $f$. Given a convex set $S \subseteq \mathbb{R}^d$ and two functions $f, g$, we define
$$\rho(f, g) = \inf_{x \in S} \big[ f(x) + g(x) - f(x^*_f) - g(x^*_g) \big]. \qquad (9)$$
The discrepancy measure is non-negative, symmetric in its arguments, and satisfies $\rho(f, g) = 0$ if and only if $x^*_f = x^*_g$, so that we may refer to it as a semimetric (it fails to satisfy the triangle inequality, and so is not a metric).
Given the subclass $\mathcal{G}(\delta)$, we quantify how densely it is packed with respect to the semimetric $\rho$ using the quantity
$$\psi(\mathcal{G}(\delta)) = \min_{\alpha \neq \beta \in \mathcal{V}} \rho(g_\alpha, g_\beta), \qquad (10)$$
which we also denote by $\psi(\delta)$ when the class $\mathcal{G}$ is clear from the context. We now state a simple result that demonstrates the utility of maintaining a separation under $\rho$ among functions in $\mathcal{G}(\delta)$. Note that $x^*_\alpha$ denotes a minimizing argument of the function $g_\alpha$.
Lemma 1. For any $\tilde x \in S$, there can be at most one function $g_\alpha \in \mathcal{G}(\delta)$ for which
$$g_\alpha(\tilde x) - g_\alpha(x^*_\alpha) \leq \frac{\psi(\delta)}{3}.$$
Thus, if we have an element $\tilde x$ that approximately minimizes (meaning up to tolerance $\psi(\delta)$) one function in the set $\mathcal{G}(\delta)$, then it cannot approximately minimize any other function in the set.
Proof. For a given $\tilde x \in S$, suppose that there exists an $\alpha \in \mathcal{V}$ such that $g_\alpha(\tilde x) - g_\alpha(x^*_\alpha) \leq \frac{\psi(\delta)}{3}$. From the definition of $\psi(\delta)$ in (10), for any $\beta \in \mathcal{V}$, $\beta \neq \alpha$, we have
$$\psi(\delta) \leq g_\alpha(\tilde x) - g_\alpha(x^*_\alpha) + g_\beta(\tilde x) - g_\beta(x^*_\beta) \leq \psi(\delta)/3 + g_\beta(\tilde x) - g_\beta(x^*_\beta),$$
which implies that $g_\beta(\tilde x) - g_\beta(x^*_\beta) \geq 2\psi(\delta)/3$, from which the claim follows.
Suppose that we choose some function $g_{\alpha^*} \in \mathcal{G}(\delta)$, and some method $M_T$ is allowed to make $T$ queries to an oracle with information function $\phi(\cdot, g_{\alpha^*})$. Our next lemma shows that in this set-up, if the method $M_T$ can optimize well over the class $\mathcal{G}(\delta)$, then it must be capable of determining the true function $g_{\alpha^*}$. Recall the definition (2) of the minimax error in optimization:
Lemma 2. Suppose that some method $M_T$ has minimax optimization error upper bounded as
$$\mathbb{E}\,\epsilon^*(M_T, \mathcal{G}(\delta), S, \phi) \leq \frac{\psi(\delta)}{9}. \qquad (11)$$
Then the method $M_T$ can construct an estimator $\hat\alpha(M_T)$ such that $\max_{\alpha^* \in \mathcal{V}} \mathbb{P}_{\alpha^*}[\hat\alpha(M_T) \neq \alpha^*] \leq \frac{1}{3}$.
Proof. Given a method $M_T$ that satisfies the bound (11), we construct an estimator $\hat\alpha(M_T)$ of the true vertex $\alpha^*$ as follows. If there exists some $\alpha \in \mathcal{V}$ such that $g_\alpha(x_T) - g_\alpha(x^*_\alpha) \leq \frac{\psi(\delta)}{3}$, then we set $\hat\alpha(M_T)$ equal to $\alpha$. If no such $\alpha$ exists, then we choose $\hat\alpha(M_T)$ uniformly at random from $\mathcal{V}$. From Lemma 1, there can exist only one such $\alpha \in \mathcal{V}$ that satisfies this inequality. Consequently, using Markov's inequality, we have $\mathbb{P}_\alpha[\hat\alpha(M_T) \neq \alpha^*] \leq \mathbb{P}_\alpha\big[\epsilon(M_T, g_{\alpha^*}, S, \phi) \geq \psi(\delta)/3\big] \leq \frac{1}{3}$. Maximizing over $\alpha^*$ completes the proof.
We have thus shown that having a low minimax optimization error over $\mathcal{G}(\delta)$ implies that the vertex $\alpha^* \in \mathcal{V}$ can be identified.
Step III: Oracle answers and coin tosses. We now demonstrate a stochastic first-order oracle $\phi$ for which the samples $\{\phi(x_1, g_\alpha), \ldots, \phi(x_T, g_\alpha)\}$ can be related to coin tosses. In particular, we associate a coin with each dimension $i \in \{1, 2, \ldots, d\}$, and consider the set of coin bias vectors lying in the set
$$\Theta(\delta) = \big\{ (1/2 + \alpha_1 \delta, \ldots, 1/2 + \alpha_d \delta) \;\big|\; \alpha \in \mathcal{V} \big\}. \qquad (12)$$
Given a particular function $g_\alpha \in \mathcal{G}(\delta)$ (or equivalently, vertex $\alpha \in \mathcal{V}$), we consider the oracle $\phi$ that presents noisy value and gradient samples from $g_\alpha$ according to the following prescription:
- Pick an index $i_t \in \{1, \ldots, d\}$ uniformly at random.
- Draw $b_{i_t} \in \{0, 1\}$ according to a Bernoulli distribution with parameter $1/2 + \alpha_{i_t}\delta$.
- Return the value and sub-gradient of the function
$$\hat g_\alpha(x) = b_{i_t}\, f_{i_t}^+(x) + (1 - b_{i_t})\, f_{i_t}^-(x).$$
By construction, the function value and gradient samples are unbiased estimates of those of $g_\alpha$; moreover, the variance of the effective "noise" is bounded independently of $d$ as long as the Lipschitz constant is independent of $d$, since the function values and gradients are bounded on $S$.
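A direct simulation makes the coin-toss reduction transparent. In the sketch below the base functions are the linear ones used later in the proof of Theorem 1; everything else (dimensions, delta, query point) is an arbitrary illustrative choice.

```python
import numpy as np

def coin_oracle(x, alpha, delta, rng):
    """One stochastic first-order query for g_alpha built from f_i^{+/-}(x) = x(i) +/- 1/2."""
    d = len(alpha)
    i = rng.integers(d)                           # coin i_t, uniform over dimensions
    b = rng.random() < 0.5 + alpha[i] * delta     # Bernoulli(1/2 + alpha_i * delta)
    value = x[i] + 0.5 if b else x[i] - 0.5       # g_hat(x) = b f_i^+(x) + (1-b) f_i^-(x)
    grad = np.zeros(d)
    grad[i] = 1.0                                 # subgradient of x(i) +/- 1/2
    return value, grad

rng = np.random.default_rng(0)
alpha, delta = np.array([1, -1, 1, 1]), 0.1
vals = [coin_oracle(np.zeros(4), alpha, delta, rng)[0] for _ in range(10000)]
print(np.mean(vals))   # close to g_alpha(0) = delta * alpha.mean() = 0.05, confirming unbiasedness
```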
Step IV: Lower bounds on coin tossing. Finally, we use information-theoretic methods to lower bound the probability of correctly estimating the true vertex $\alpha^* \in \mathcal{V}$ in our model.
Lemma 3. Given an arbitrary vertex $\alpha^* \in \mathcal{V}$, suppose that we toss a set of $d$ coins with bias $\theta^* = (\frac{1}{2} + \alpha^*_1\delta, \ldots, \frac{1}{2} + \alpha^*_d\delta)$ a total of $T$ times, but that the outcome of only one coin chosen uniformly at random is revealed at every round. Then for all $\delta \leq 1/4$, any estimator $\hat\alpha$ satisfies
$$\inf_{\hat\alpha} \max_{\alpha^* \in \mathcal{V}} \mathbb{P}[\hat\alpha \neq \alpha^*] \;\geq\; 1 - \frac{16T\delta^2 + \log 2}{\frac{d}{2}\log(2/\sqrt{e})}.$$
Proof. Denote the Bernoulli distribution for the i-th coin by P_{θ_i}. Let Y_t ∈ {1, ..., d} be the variable
indicating the coin revealed at time t, and let X_t ∈ {0, 1} denote its outcome. With some abuse of
notation, we also denote the distribution of (X_t, Y_t) by P_θ, and that of the entire data {(X_t, Y_t)}_{t=1}^T
by P_θ^T. Note that P_θ(i, b) = (1/d) P_{θ_i}(b). We now apply a version of Fano's lemma [11] to the set of
distributions P_θ^T for θ ∈ Θ(δ). In particular, using the proof of Lemma 3 in [11] we get:
$$\mathrm{KL}(P_\theta^T \,\|\, P_{\theta'}^T) \le b \;\;\forall\, \theta, \theta' \in \Theta(\delta) \;\;\Longrightarrow\;\; \inf_{\hat{\theta}}\; \max_{\theta \in \Theta(\delta)} P_\theta[\hat{\theta} \neq \theta] \;\ge\; 1 - \frac{b + \log 2}{\log|\Theta|}. \qquad (13)$$
In our case, we upper bound b as follows:
$$b = \mathrm{KL}(P_\theta^T \,\|\, P_{\theta'}^T) = \sum_{t=1}^{T} \mathrm{KL}\big(P_\theta(X_t, Y_t) \,\|\, P_{\theta'}(X_t, Y_t)\big) = \frac{1}{d}\sum_{t=1}^{T}\sum_{i=1}^{d} \mathrm{KL}\big(P_{\theta_i}(X_t) \,\|\, P_{\theta'_i}(X_t)\big).$$
Each term KL(P_{θ_i}(X_t) ‖ P_{θ'_i}(X_t)) is at most the KL divergence g(δ) between Bernoulli variates
with parameters 1/2 + δ and 1/2 − δ. A little calculation shows that
$$g(\delta) = 2\delta \log\Big(1 + \frac{4\delta}{1 - 2\delta}\Big) \;\le\; \frac{8\delta^2}{1 - 2\delta},$$
which is less than 16δ² as long as δ ≤ 1/4. Consequently, we conclude that b ≤ 16Tδ². Also, we
note that P[α̂ ≠ α*] = P_θ[θ̂ ≠ θ*]. Substituting these values and the size of V into (13) yields the
claim.
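As a quick numerical sanity check of the Bernoulli KL bound used above (our snippet, not part of the proof):

```python
import numpy as np

def bernoulli_kl(p, q):
    """KL(Ber(p) || Ber(q))."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

for delta in np.linspace(0.01, 0.25, 7):
    g = bernoulli_kl(0.5 + delta, 0.5 - delta)   # g(delta) in the text
    assert g <= 16 * delta**2                    # holds for all delta <= 1/4
    print(f"delta={delta:.3f}  g={g:.5f}  16*delta^2={16*delta**2:.5f}")
```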
4.2 Proofs of main results
We are now in a position to prove our main theorems.
Proof of Theorem 1: By the construction of our oracle, it is clear that, at each round, only one
coin is revealed to the method M_T. Thus Lemma 3 applies to the estimator α̂(M_T):
$$P[\hat{\alpha}(M_T) \neq \alpha^*] \;\ge\; 1 - \frac{2\,(16T\delta^2 + \log 2)}{d \log(2/\sqrt{e})}. \qquad (14)$$
In order to obtain an upper bound on P[α̂(M_T) ≠ α*] using Lemma 2, we need to identify the
subclass G_base of F_C. For i = 1, ..., d, define:
$$f_i^+(x) := x(i) + 1/2, \quad \text{and} \quad f_i^-(x) := x(i) - 1/2.$$
We take S to be the ℓ∞ ball of radius 1/2. It is clear then that the minimizers of g_α are contained in
S. Also, the functions f_i⁺, f_i⁻ are bounded in [0, 1] and 1-Lipschitz in the ∞-norm, giving the same
properties for each function g_α. Finally, we note that ρ(g_α, g_β) = (2δ/d) Δ_H(α, β) ≥ δ/2 for α ≠ β ∈ V,
so that ψ(δ) ≥ δ/2. Setting ε = δ/18 < 1/2, we obtain ε = δ/18 ≤ ψ(δ)/9. Then by Lemma 2, we have
P_α*[α̂(M_T) ≠ α*] ≤ 1/3, which, when combined with equation (14), yields
$$\frac{1}{3} \;\ge\; 1 - \frac{2\,(16T\delta^2 + \log 2)}{d \log(2/\sqrt{e})}.$$
Substituting δ = 18ε yields T = Ω(d/ε²) for all d ≥ 11. Combining this with Theorem 5.3.1 of
NY [8] gives T = Ω(d/ε²) for all d.
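Spelling out the last substitution (our rearrangement of the display above; this is also where the condition d ≥ 11 enters, since log(2/√e) = log 2 − 1/2 ≈ 0.19):
$$\frac{1}{3} \ge 1 - \frac{2(16T\delta^2 + \log 2)}{d\log(2/\sqrt{e})}
\;\Longrightarrow\;
T \ge \frac{1}{16\delta^2}\left(\frac{d}{3}\log(2/\sqrt{e}) - \log 2\right) = \Omega\!\left(\frac{d}{\delta^2}\right)
\;\stackrel{\delta = 18\epsilon}{=}\; \Omega\!\left(\frac{d}{\epsilon^2}\right),$$
with the right-hand side positive exactly when (d/3) log(2/√e) > log 2, i.e. for d ≥ 11.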
To prove Corollary 1, we note that the proof of Theorem 1 required r∞ ≤ 1/2, where r∞ is the radius
of the ℓ∞ ball S. If not, it is easy to see that the computation of ψ on G(δ) scales by r∞. Further, if
the set is contained in a ball of radius R∞, then we need to scale the function by 1/R∞ to keep the
function values bounded. Taking both these dependences into account gives the desired result.
Proof of Theorem 2: In this case, we define the base class
$$f_i^+(x) = \big(x(i) + 1/2\big)^2, \quad \text{and} \quad f_i^-(x) = \big(x(i) - 1/2\big)^2, \qquad \text{for } i = 1, \ldots, d.$$
Then the functions g_α are strongly convex w.r.t. the Euclidean norm with coefficient κ = 1/d.
Some calculation shows that ρ(g_α, g_β) = (2δ²/d) Δ_H(α, β) for all α ≠ β. The remainder of the proof
is identical to Theorem 1.
The reader might suspect that the dimension dependence in our lower bound for strongly convex
functions is not tight, due to the dependence of κ on the dimension d. However, this is the largest
possible value of κ under the assumptions of the theorem.
4.3 A general result
Armed with the greater understanding from these proofs, we can now state a general result for any
function class F. The proof is similar to that of earlier results.
Theorem 3. For any function class F ⊆ F_C, suppose a given base set of functions G_base yields the
measure Ψ as defined in (10). Then there exists a universal constant c such that
$$\sup_{\phi \in \mathbb{O}} \epsilon^*(\mathcal{F}, S, \phi) \;\ge\; c\,\Psi\sqrt{\frac{d}{T}}.$$
Acknowledgements We gratefully acknowledge the support of the NSF under award DMS-0830410
and of DARPA under award HR0011-08-2-0002. Alekh is supported in part by an MSR PhD Fellowship.
References
[1] D. P. Bertsekas. Nonlinear programming. Athena Scientific, Belmont, MA, 1995.
[2] L. Birgé. Approximation dans les espaces métriques et théorie de l'estimation. Z. Wahrsch. verw. Gebiete, 65:181–327, 1983.
[3] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In NIPS, 2008.
[4] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, Cambridge, UK, 2004.
[5] R. Z. Has'minskii. A lower bound on the risks of nonparametric estimates of densities in the uniform metric. Theory Prob. Appl., 23:794–798, 1978.
[6] J. Matousek. Lectures on discrete geometry. Springer-Verlag, New York, 2002.
[7] A. S. Nemirovski. Efficient methods in convex programming. Lecture notes.
[8] A. S. Nemirovski and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. John Wiley & Sons, 1983.
[9] S. Shalev-Shwartz and N. Srebro. SVM optimization: inverse dependence on training set size. In ICML, 2008.
[10] Y. Nesterov. Introductory lectures on convex optimization: Basic course. Kluwer Academic Publishers, 2004.
[11] B. Yu. Assouad, Fano and Le Cam. In Festschrift in Honor of L. Le Cam on his 70th Birthday. Springer-Verlag, 1993.
2,967 | 369 | Translating Locative Prepositions
Paul W. Munro and Mary Tabasko
Department of Information Science
University of Pittsburgh
Pittsburgh, PA 15260
ABSTRACT
A network was trained by back propagation to map locative expressions
of the form "noun-preposition-noun" to a semantic representation, as in
Cosic and Munro (1988). The network's performance was analyzed
over several simulations with training sets in both English and
German. Translation of prepositions was attempted by presenting a
locative expression to a network trained in one language to generate a
semantic representation; the semantic representation was then presented
to the network trained in the other language to generate the appropriate
preposition.
1 INTRODUCTION
Connectionist approaches have enjoyed success, relative to competing frameworks, in
accounting for context sensitivity and have become an attractive approach to NLP. An architecture (Figure 1) was put forward by Cosic and Munro (1988) to map locative expressions of the form "noun-preposition-noun" to a representation of the spatial relationship
between the referents of the two nouns. The features used in the spatial representations
were abstracted from Herskovits (1986). The network was trained using the generalized
delta rule (Rumelhart, Hinton, and Williams, 1986) on a set of patterns with four components, three syntactic and one semantic. The syntactic components are a pair of nouns
separated by a locative preposition [N1-LP-N2], and the semantic component is a representation of the spatial relationship [SR].
The architecture of the network includes two encoder banks, E1 and E2, inspired by
Hinton (1986), to force the development of distributed representations of the nouns. This
was not done to enhance the performance of the network but rather to facilitate analysis of
the network's function, since an important component of Herskovits' theory is the role of
nouns as modifiers of the preposition's ideal meaning.
The networks were trained to perform a pattern-completion task. That is, three components from a pattern are selected from the training set and presented to the input layer; either the LP or the SR component is missing. The task is to provide both the LP and SR
components at the output. Analysis of a network after the learning phase consists of several tests, such as presenting prepositions with no accompanying nouns, in order to obtain an "ideal meaning" for each preposition, and comparing the noun representations at
the encoder banks E1 and E2.
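To make the architecture concrete, the following sketch (ours; the sizes of the input and output banks follow Figure 1, while the encoder size, the hidden size, and the single shared hidden layer are simplifying assumptions) runs one forward pass of such a pattern-completion network before training:

```python
import numpy as np

# Bank sizes from Figure 1; encoder and hidden sizes are our assumption.
N_NOUNS, N_PREP, N_SR, N_ENC, N_HID = 25, 5, 10, 6, 20

rng = np.random.default_rng(1)
def layer(n_in, n_out):                  # random weights + zero bias, pre-training
    return rng.normal(0, 0.1, (n_out, n_in)), np.zeros(n_out)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W_e1, b_e1 = layer(N_NOUNS, N_ENC)       # encoder bank E1 for the first noun
W_e2, b_e2 = layer(N_NOUNS, N_ENC)       # encoder bank E2 for the second noun
W_h,  b_h  = layer(2 * N_ENC + N_PREP + N_SR, N_HID)
W_lp, b_lp = layer(N_HID, N_PREP)        # preposition output bank
W_sr, b_sr = layer(N_HID, N_SR)          # spatial-relation output bank

def forward(n1, lp, n2, sr):
    """Pattern completion: either the lp or the sr input bank is all-zero."""
    e1 = sigmoid(W_e1 @ n1 + b_e1)
    e2 = sigmoid(W_e2 @ n2 + b_e2)
    h = sigmoid(W_h @ np.concatenate([e1, lp, e2, sr]) + b_h)
    return sigmoid(W_lp @ h + b_lp), sigmoid(W_sr @ h + b_sr)

n1 = np.eye(N_NOUNS)[12]                 # one-hot noun, e.g. "house"
lp = np.eye(N_PREP)[2]                   # one-hot preposition, e.g. "on"
out_lp, out_sr = forward(n1, lp, np.eye(N_NOUNS)[5], np.zeros(N_SR))
```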
[Figure 1. The table of units lists 25 noun units (including clouds, sky, plane, boat, water, lake, river, road, city, island, campsite, school, house, floor, room, table, book, glass, flowers, grass, crack, man, chip, fish), 5 preposition units (in, at, on, under, above), and 10 spatial relation units (N1 over N2, N2 over N1, N1 at edge of N2, N1 embedded in N2, N2 contains N1, N1 within border of N2, N1 touching N2, N1 near N2, N1 far from N2, N2 supports N1).]

Figure 1: Network Architecture. Inputs are presented at the lowest layer, either across input banks N1, LP, and N2 or across input banks N1, SR, and N2. The bold lines indicate connectivity from all the units in the lower bank to all the units in the upper bank. The units used to represent the patterns are listed in the table on the right.
2 METHODOLOGY
2.1 THE TRAINING SETS
3125 (25 × 5 × 25) pattern combinations can be formed with the 25 nouns and five prepositions; of these, 137 meaningful expressions were chosen to constitute an English "training corpus". For each phrase, a set of one to three SR units was chosen to represent the position of the second noun's referent relative to the first noun's. To generate the German corpus, we picked the best German preposition to describe the spatial relationship between the nouns. So, each training set consists of the same set of 137 spatial relationships between pairs of nouns. The correspondences between prepositions in the two languages across training sets are given in Table 1.
Table 1: The number of correspondences between the prepositions used in the English and German training sets.

ENG \ GER    IN    AN    AUF   UNTER   ÜBER
IN           53     0     0       0      0
AT            9    12     8       0      0
ON            4     0    20       0      0
UNDER         0     0     0      18      0
ABOVE         0     0     0       0     13
2.2 TRANSLATION OF THE PREPOSITIONS
Transforming syntactic expressions to semantic representations and inverting the process
in another language is known as the interlingua approach to machine translation. The network described in this paper is particularly well-suited to this approach since it can perform this transformation in either direction (encoding or decoding). Networks trained
using expressions from two languages can be attached in sequence to accomplish the
translation task. A syntactic triple (NI-LP-N2) from the source language is presented to
the network trained in that language. The resulting SR output is then presented with the
corresponding nouns in the target language as input to the network trained in the target
language, yielding the appropriate preposition in the target language as output. In this
procedure, it is assumed that, relative to the prepositions, the nouns are easy to translate;
that is, the translation of the nouns is assumed to be much less dependent on context. An
example translation of the preposition on in the expression "house on lake" is illustrated
in Figure 2.
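A minimal sketch of this two-step translation procedure (ours; `net_src` and `net_tgt` stand for trained forward functions with the interface of the sketch above, and the target-language noun vectors are assumed to be supplied by the caller, since noun translation is taken to be largely context-independent):

```python
import numpy as np

def translate_preposition(net_src, net_tgt, n1_src, lp_src, n2_src,
                          n1_tgt, n2_tgt, n_sr=10):
    """Decode in the source language, then encode in the target language."""
    _, sr = net_src(n1_src, lp_src, n2_src, np.zeros(n_sr))        # N1-LP-N2 -> SR
    lp_tgt, _ = net_tgt(n1_tgt, np.zeros_like(lp_src), n2_tgt, sr)  # N1-SR-N2 -> LP
    return int(np.argmax(lp_tgt))   # index of the winning target preposition
```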
3 RESULTS
Eight networks were trained using the two-stroke procedure described above; four using
English language inputs and four using German, with two different learning rates in each
language, and two different initializations for the random number generator in each case.
Various tests were performed on the trained network in order to determine the ideal meaning of each preposition, the network's classification of the various nouns, and the contextual interaction of the nouns with the prepositions. Also, translation of prepositions
from English to German was attempted. The various test modes are described in detail
below.
Figure 2: A Schematic View of the Translation Procedure. After training networks in two languages, a preposition can be appropriately translated from one language to the other by performing a decoding task in the source language followed by an encoding task in the target language. The figure shows the resulting activity patterns from the expression "house on lake". The system correctly translates on in English to an in German. In other contexts, on could correspond to the German auf.
3.1 CONVERGENCE
In each case, the networks converged to states of very low average error (less than 0.5%).
However, in no case did a network learn to respond correctly to every phrase in the training set. The performance of each training run was measured by computing the total sum of squared error over the output units across all 137 training patterns. The errors were analyzed into four types:
LP-LP errors: Errors in the LP output units for (N1-LP-N2) input
SR-LP errors (encoding): Errors in the LP output units for (N1-SR-N2) input
LP-SR errors (decoding): Errors in the SR output units for (N1-LP-N2) input
SR-SR errors: Errors in the SR output units for (N1-SR-N2) input
In assessing the performance of the network after learning, the error measure driving the training (that is, the difference between desired and actual activity levels for every output unit) is inappropriate. In cases such as this, where the output units are being trained to binary values, it is much more informative to compare the relative activity of the output units to the desired pattern and simply count the number of inputs that are "wrong". This approach was used to determine whether each phrase had been processed correctly or incorrectly by the network. Preposition output errors were counted by identifying the most highly activated output unit and checking whether it matched the correct preposition. Since the number of active units in the SR component of each training pattern varies from one to three, a response was registered as incorrect if any of the units that should have been off were more active than any of those that should have been on. These results are reported in Table 2 as total errors out of the 137 in the training corpus.
Table 2: Number of errors for each task in each simulation (out of 137).

           LP-LP   SR-LP   LP-SR   SR-SR
ENG 1        0       0       3       0
ENG 2        0       0       2       0
ENG 3        0       0       2       0
ENG 4        0       0       2       0
ENG AVG    0.00    0.00    2.25    0.00
GER 1        0       1       2       0
GER 2        0       1       3       0
GER 3        0       0       2       0
GER 4        0       0       4       0
GER AVG    0.00    0.50    2.75    0.00
3.2 IDEAL MEANINGS OF THE PREPOSITIONS
To find the unmodified spatial representation the net associates with each preposition, the
prepositions were presented individually to the net and the resulting spatial responses
recorded. This gives a context-free interpretation of each preposition. Figure 3 shows the
output activity on the spatial units for one simulation in each language. The results were
similar for all simulations within a language, demonstrating that the network finds fairly
stable representations for the prepositions. Note that the representations of German auf,
an, and in share much of their activation with those of English on, at, and in, although the distribution of this activation across the prepositions varies. For example, the preposition auf is activated much like English on, but without the units indicating the first object at the edge
of and near the second. These units are found weakly activated in German an, along with
the unit indicating coincidence. The ideal meaning of auf, then, may be somewhere between those of on and at in English.
[Figure 3 plots, for one simulation per language, the activation of the ten spatial relation units produced by each preposition presented alone: panels IN, AT, ON, UNDER, ABOVE for the English network and IN, AN, AUF, UNTER, ÜBER for the German network.]

Figure 3: Ideal Meanings of the Prepositions.
3.3 TRANSLATION
We made eight translations of the 137-phrase training corpus, four from English to German and four from German to English. The performance of each network over the training corpus is shown in Table 3. The maximum number of phrases translated incorrectly was eight (94.2 percent correct), and the minimum was one wrong (99.3 percent correct). The fact that the English networks learned the training corpus better than the German networks (especially in generating a semantic description for two nouns and a preposition) shows up in the translation task. The English-to-German translations are consistently better than the German-to-English.
Table 3: Number of phrases translated incorrectly (out of 137).

SIMULATION NUMBER    1    2    3    4    AVG
ENG to GER           1    3    2    1    1.75
GER to ENG           6    7    6    8    6.75
4 DISCUSSION
Even in this highly constrained and very limited demonstration, the simulations performed using the two databases illustrate how connectionist networks can capture structures in different languages and interact.
The "interlingua" approach to machine translation has not shown promise in practical
systems using frameworks based in traditional linguistic theory (Slocum, 1985). The
network presented in this paper, however, supports such an approach using a connectionist framework. Of course, even if it is feasible to construct a space with which to represent semantics adequately for the limited domain of concrete uses of locative prepositions,
representation of arbitrary semantics is quite another story. On the other hand, semantic
representations must be components of any full-scale machine-translation system. In any
event, a system that can learn bidirectional mappings between syntax and semantics from
a set of examples and extend this learning to novel expressions is a candidate for machine
translation (and NLP in general) that warrants further investigation.
We anticipate that any extensive application of back propagation, or any other neural network algorithm, to NLP will involve processing temporal patterns and keeping a dynamic representation of semantic hypotheses, such as the temporal scheme proposed by
Elman (1988).
Acknowledgements
This research was supported in part by NSF grant IRI-8910368 to the first author and by the International Computer Science Institute, which kindly provided the first author with financial support and a stimulating research environment during the summer of 1988.
References
Cosic, C. and Munro, P. W. (1988) Learning to represent and understand locative prepositional phrases. 10th Ann. Conf. Cognitive Science Society, 257–262.
Elman, J. L. (1988) Finding structure in time. CRL TR 8801, Center for Research in Language, University of California, San Diego.
Herskovits, Annette (1986) Language and Spatial Cognition. Cambridge University Press, Cambridge.
Hinton, Geoffrey (1986) Learning distributed representations of concepts. 8th Ann. Conf. Cognitive Science Society, 1–12.
Rumelhart, D. E., Hinton, G. and Williams, R. J. (1986) Learning internal representations by error propagation. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Vol 1. D. E. Rumelhart and J. L. McClelland, eds. Cambridge: MIT Press/Bradford.
Slocum, J. (1985) A survey of machine translation: its history, current status, and future prospects. Computational Linguistics, 11, 1–17.
2,968 | 3,690 | Learning from Multiple Partially Observed Views ?
an Application to Multilingual Text Categorization
Massih R. Amini
Interactive Language Technologies Group
National Research Council Canada
Nicolas Usunier
Laboratoire d'Informatique de Paris 6
Université Pierre et Marie Curie, France
[email protected]
[email protected]
Cyril Goutte
Interactive Language Technologies Group
National Research Council Canada
[email protected]
Abstract
We address the problem of learning classifiers when observations have multiple
views, some of which may not be observed for all examples. We assume the
existence of view generating functions which may complete the missing views
in an approximate way. This situation corresponds for example to learning text
classifiers from multilingual collections where documents are not available in all
languages. In that case, Machine Translation (MT) systems may be used to translate each document in the missing languages. We derive a generalization error
bound for classifiers learned on examples with multiple artificially created views.
Our result uncovers a trade-off between the size of the training set, the number
of views, and the quality of the view generating functions. As a consequence,
we identify situations where it is more interesting to use multiple views for learning instead of classical single view learning. An extension of this framework is
a natural way to leverage unlabeled multi-view data in semi-supervised learning.
Experimental results on a subset of the Reuters RCV1/RCV2 collections support
our findings by showing that additional views obtained from MT may significantly
improve the classification performance in the cases identified by our trade-off.
1 Introduction
We study the learning ability of classifiers trained on examples generated from different sources,
but where some observations are partially missing. This problem occurs for example in non-parallel
multilingual document collections, where documents may be available in different languages, but
each document in a given language may not be translated in all (or any) of the other languages.
Our framework assumes the existence of view generating functions which may approximate missing examples using the observed ones. In the case of multilingual corpora these view generating
functions may be Machine Translation systems which for each document in one language produce
its translations in all other languages. Compared to other multi-source learning techniques [6],
we address a different problem here by transforming our initial problem of learning from partially
observed examples obtained from multiple sources into the classical multi-view learning. The contributions of this paper are twofold. We first introduce a supervised learning framework in which
we define different multi-view learning tasks. Our main result is a generalization error bound for
classifiers trained over multi-view observations. From this result we induce a trade-off between the
number of training examples, the number of views and the ability of view generating functions to
produce accurate additional views. This trade-off helps us identify situations in which artificially
generated views may lead to substantial performance gains. We then show how the agreement of
classifiers over their class predictions on unlabeled training data may lead to a much tighter trade-off.
Experiments are carried out on a large part of the Reuters RCV1/RCV2 collections, freely available
from Reuters, using 5 well-represented languages for text classification. Our results show that our
approach yields improved classification performance in both the supervised and semi-supervised
settings.
In the following two sections, we first define our framework, then the learning tasks we address.
Section 4 describes our trade-off bound in the Empirical Risk Minimization (ERM) setting, and
shows how and when the additional, artificially generated views may yield a better generalization
performance in a supervised setting. Section 5 shows how to exploit these results when additional
unlabeled training data are available, in order to obtain a more accurate trade-off. Finally, section 6
describes experimental results that support this approach.
2 Framework and Definitions
In this section, we introduce basic definitions and the learning objectives that we address in our
setting of artificially generated representations.
2.1 Observed and Generated Views
A multi-view observation is a sequence x := (x¹, ..., x^V), where different views x^v provide a representation of the same object in different sets X_v. A typical example is given in [3] where each Web page is represented either by its textual content (first view) or by the anchor texts which point to it (second view). In the setting of multilingual classification, each view is the textual representation of a document written in a given language (e.g. English, French, German).

We consider binary classification problems where, given a multi-view observation, some of the views are not observed (we obviously require that at least one view is observed). This happens, for instance, when documents may be available in different languages, yet a given document may only be available in a single language. Formally, our observations x belong to the input set X := (X₁ ∪ {⊥}) × ... × (X_V ∪ {⊥}), where x^v = ⊥ means that the v-th view is not observed. In binary classification, we assume that examples are pairs (x, y), with y ∈ Y := {0, 1}, drawn according to a fixed, but unknown distribution D over X × Y, such that P_{(x,y)∼D}(∀v : x^v = ⊥) = 0 (at least one view is available). In multilingual text classification, a parallel corpus is a dataset where all views are always observed (i.e. P_{(x,y)∼D}(∃v : x^v = ⊥) = 0), while a comparable corpus is a dataset where only one view is available for each example (i.e. P_{(x,y)∼D}(|{v : x^v ≠ ⊥}| ≠ 1) = 0).
For a given observation x, the views v such that x^v ≠ ⊥ will be called the observed views. The originality of our setting is that we assume that we have view generating functions ψ_{v→v'} : X_v → X_{v'} which take as input a given view x^v and output an element of X_{v'}, that we assume is close to what x^{v'} would be if it was observed. In our multilingual text classification example, the view generating functions are Machine Translation systems. These generating functions can then be used to create surrogate observations, such that all views are available. For a given partially observed x, the completed observation x̄ is obtained as:
$$\forall v, \quad \bar{x}^v = \begin{cases} x^v & \text{if } x^v \neq \bot \\ \psi_{v^\star \to v}(x^{v^\star}) & \text{otherwise, where } v^\star \text{ is such that } x^{v^\star} \neq \bot \end{cases} \qquad (1)$$
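A minimal sketch of this completion step (ours; a view is represented as `None` when unobserved, and `generators[u][v]` plays the role of ψ_{u→v}, e.g. a wrapper around an MT system):

```python
def complete_views(x, generators):
    """x: list of V views, with None for the unobserved ones (Eq. 1).
    generators[u][v]: view generating function psi_{u -> v}."""
    observed = [v for v, xv in enumerate(x) if xv is not None]
    assert observed, "at least one view must be observed"
    v_star = observed[0]          # any observed view can serve as the source
    return [xv if xv is not None else generators[v_star][v](x[v_star])
            for v, xv in enumerate(x)]
```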
In this paper, we focus on the case where only one view is observed for each example. This setting
corresponds to the problem of learning from comparable corpora, which will be the focus of our
experiments. Our study extends to the situation where two or more views may be observed in a
straightforward manner. Our setting differs from previous multi-view learning studies [5] mainly on
the straightforward generalization to more than two views and the use of view generating functions
to induce the missing views from the observed ones.
2.2 Learning objective
The learning task we address is to find, in some predefined classifier set C, the stochastic classifier c
that minimizes the classification error on multi-view examples (with, potentially, unobserved views)
drawn according to some distribution D as described above. Following the standard multi-view
framework, in which all views are observed [3, 13], we assume that we are given V deterministic
classifier sets (Hv )Vv=1 , each working on one specific view1 . That is, for each view v, Hv is a set
of functions hv : Xv ? {0, 1}. The final set of classifiers C contains stochastic classifiers, whose
output only depends on the outputs of the view-specific classifiers. That is, associated to a set of
classifiers C, there is a function ?C : (Hv )Vv=1 ? X ? [0, 1] such that:
C = {x 7? ?C (h1 , ..., hV , x) |?v, hv ? Hv }
For simplicity, in the rest of the paper, when the context is clear, the function x 7? ?C (h1 , ..., hV , x)
will be denoted by ch1 ,...,hV . The overall objective of learning is therefore to find c ? C with low
generalization error, defined as:
?(c) =
E
(x,y)?D
e (c, (x, y))
(2)
where e is a pointwise error, for instance the 0/1 loss: e(c, (x, y)) = c(x)(1 ? y) + (1 ? c(x))y.
In the following sections, we address this learning task in our framework in terms of supervised and
semi-supervised learning.
3 Supervised Learning Tasks
We first focus on the supervised learning case. We assume that we have a training set S of m
examples drawn i.i.d. according to a distribution D, as presented in the previous section. Depending
on how the generated views are used at both training and test stages, we consider the following
learning scenarios:
- Baseline: This setting corresponds to the case where each view-specific classifier is trained using
the corresponding observed view on the training set, and prediction for a test example is
done using the view-specific classifier corresponding to the observed view:
$$\forall v, \quad h_v \in \operatorname*{arg\,min}_{h \in H_v} \sum_{(x,y)\in S \,:\, x^v \neq \bot} e\big(h, (x^v, y)\big) \qquad (3)$$
In this case we pose ∀x, c^b_{h₁,...,h_V}(x) = h_v(x^v), where v is the observed view for x. Notice that this is the most basic way of learning a text classifier from a comparable corpus.
- Generated Views as Additional Training Data: The most natural way to use the generated
views for learning is to use them as additional training material for the view-specific classifiers:
$$\forall v, \quad h_v \in \operatorname*{arg\,min}_{h \in H_v} \sum_{(x,y)\in S} e\big(h, (\bar{x}^v, y)\big) \qquad (4)$$
with x̄ defined by Eq. (1). Prediction is still done using the view-specific classifiers corresponding to the observed view, i.e. ∀x, c^b_{h₁,...,h_V}(x) = h_v(x^v). Although the test set distribution is a subdomain of the training set distribution [2], this mismatch is (hopefully) compensated by the addition of new examples.
- Multi-view Gibbs Classifier: In order to avoid the potential bias introduced by the use of generated views only during training, we consider them also during testing. This becomes a standard multi-view setting, where generated views are used exactly as if they were observed.
The view-specific classifiers are trained exactly as above (eq. 4), but the prediction is carried out with respect to the probability distribution of classes, by estimating the probability
of class membership in class 1 from the mean prediction of each view-specific classifier:
$$\forall x, \quad c^{mg}_{h_1,\ldots,h_V}(x) = \frac{1}{V}\sum_{v=1}^{V} h_v(\bar{x}^v) \qquad (5)$$
¹We assume deterministic view-specific classifiers for simplicity and with no loss of generality.
- Multi-view Majority Voting: With view generating functions involved in training and test, a natural way to obtain a (generally) deterministic classifier with improved performance is to
take the majority vote associated with the Gibbs classifier. The view-specific classifiers are
again trained as in eq. 4, but the final prediction is done using a majority vote:
$$\forall x, \quad c^{mv}_{h_1,\ldots,h_V}(x) = \begin{cases} \frac{1}{2} & \text{if } \sum_{v=1}^{V} h_v(\bar{x}^v) = \frac{V}{2} \\ I\Big(\sum_{v=1}^{V} h_v(\bar{x}^v) > \frac{V}{2}\Big) & \text{otherwise} \end{cases} \qquad (6)$$
where I(·) is the indicator function. The classifier outputs either the majority voted class, or either one of the classes with probability 1/2 in case of a tie.
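The two multi-view prediction rules (5) and (6) are straightforward to implement; a sketch (ours), where `h` is a list of view-specific 0/1 classifiers and `xbar` a completed observation:

```python
import random

def gibbs_score(h, xbar):
    """Eq. (5): probability of class 1 as the mean of the view-specific votes."""
    return sum(hv(xv) for hv, xv in zip(h, xbar)) / len(h)

def majority_vote(h, xbar, rng=random):
    """Eq. (6): majority class, breaking exact ties uniformly at random."""
    votes = sum(hv(xv) for hv, xv in zip(h, xbar))
    if 2 * votes == len(h):            # tie: output either class w.p. 1/2
        return rng.randint(0, 1)
    return int(2 * votes > len(h))
```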
4 The trade-offs with the ERM principle
We now analyze how the generated views can improve generalization performance. Essentially,
the trade-off is that generated views offer additional training material, therefore potentially helping
learning, but can also be of lower quality, which may degrade learning.
The following theorem sheds light on this trade-off by providing bounds on the baseline vs. multi-view strategies. Note that such trade-offs have already been studied in the literature, although in
different settings (see e.g. [2, 4]). Our first result is the following theorem. The notion of function class capacity used here is the empirical Rademacher complexity [1]. Proof is given in the
supplementary material.
Theorem 1 Let D be a distribution over X × Y, satisfying P_{(x,y)∼D}(|{v : x^v ≠ ⊥}| ≠ 1) = 0.
Let S = ((x_i, y_i))_{i=1}^m be a dataset of m examples drawn i.i.d. according to D. Let e be the 0/1
loss, and let (H_v)_{v=1}^V be the view-specific deterministic classifier sets. For each view v, denote
e∘H_v := {(x^v, y) ↦ e(h, (x^v, y)) | h ∈ H_v}, and denote, for any sequence S^v ∈ (X_v × Y)^{m_v} of
size m_v, R̂_{m_v}(e∘H_v, S^v) the empirical Rademacher complexity of e∘H_v on S^v. Then, we have:

Baseline setting: for all 1 > δ > 0, with probability at least 1 − δ over S:
$$\epsilon(c^{b}_{h_1,\ldots,h_V}) \;\le\; \inf_{h^*_v \in H_v}\Big[\epsilon(c^{b}_{h^*_1,\ldots,h^*_V})\Big] + 2\sum_{v=1}^{V}\frac{m_v}{m}\,\hat{R}_{m_v}(e \circ H_v, S^v) + 6\sqrt{\frac{\ln(2/\delta)}{2m}}$$
where, for all v, S^v := {(x_i^v, y_i) | i = 1..m and x_i^v ≠ ⊥}, m_v = |S^v|, and h_v ∈ H_v is the
classifier minimizing the empirical risk on S^v.

Multi-view Gibbs classification setting: for all 1 > δ > 0, with probability at least 1 − δ over S:
$$\epsilon(c^{mg}_{h_1,\ldots,h_V}) \;\le\; \inf_{h^*_v \in H_v}\Big[\epsilon(c^{mg}_{h^*_1,\ldots,h^*_V})\Big] + \frac{2}{V}\sum_{v=1}^{V}\hat{R}_{m}(e \circ H_v, \bar{S}^v) + 6\sqrt{\frac{\ln(2/\delta)}{2m}} + \eta$$
where, for all v, S̄^v := {(x̄_i^v, y_i) | i = 1..m}, h_v ∈ H_v is the classifier minimizing the
empirical risk on S̄^v, and
$$\eta \;=\; \inf_{h^*_v \in H_v}\Big[\epsilon(c^{mg}_{h^*_1,\ldots,h^*_V})\Big] - \inf_{h^*_v \in H_v}\Big[\epsilon(c^{b}_{h^*_1,\ldots,h^*_V})\Big] \qquad (7)$$
This theorem gives us a rule for deciding whether it is preferable to learn only with the observed views (the baseline setting) or preferable to use the view-generating functions in the multi-view Gibbs classification setting: we should use the former when
$$2\sum_{v=1}^{V}\frac{m_v}{m}\,\hat{R}_{m_v}(e \circ H_v, S^v) \;<\; \frac{2}{V}\sum_{v=1}^{V}\hat{R}_{m}(e \circ H_v, \bar{S}^v) + \eta,$$
and the latter otherwise.
Let us first explain the role of η (Eq. 7). The difference between the two settings is in the train and test distributions for the view-specific classifiers. η compares the best achievable error under each of the distributions: inf_{h*_v∈H_v}[ε(c^b_{h*_1,...,h*_V})] is the best achievable error in the baseline setting (i.e. without generated views), while with the automatically generated views, the best achievable error becomes inf_{h*_v∈H_v}[ε(c^{mg}_{h*_1,...,h*_V})].

Therefore η measures the loss incurred by using the view generating functions. In a favorable situation, the quality of the generating functions will be sufficient to make η small.
The terms depending on the complexity of the class of functions may be better explained using orders of magnitude. Typically, the Rademacher complexity for a sample of size n is of order O(1/√n) [1]. Assuming, for simplicity, that all empirical Rademacher complexities in Theorem 1 are approximately equal to d/√n, where n is the size of the sample on which they are computed, and assuming that m_v = m/V for all v, the trade-off becomes:
$$\text{Choose the multi-view Gibbs classification setting when:}\quad d\,\frac{\sqrt{V} - 1}{\sqrt{m}} > \eta.$$
This means that we expect important performance gains when the number of examples is small, the generated views are of sufficiently high quality for the given classification task, and/or there are many views available. Note that our theoretical framework does not take the quality of the MT system into account in a standard way: in our setup, a good translation system is (roughly) one which generates bag-of-words representations that allow to correctly discriminate between classes.
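To make the rule concrete, the snippet below (ours; the values of d, V, m and η are arbitrary illustrations) evaluates the condition for a few sample sizes:

```python
import math

def prefer_multiview(d, V, m, eta):
    """Rule derived above: use the multi-view Gibbs setting when
    d * (sqrt(V) - 1) / sqrt(m) exceeds the quality penalty eta."""
    return d * (math.sqrt(V) - 1) / math.sqrt(m) > eta

# With few labeled examples the generated views win...
print(prefer_multiview(d=1.0, V=5, m=50, eta=0.1))     # True
# ...but with enough labeled data the baseline catches up.
print(prefer_multiview(d=1.0, V=5, m=10000, eta=0.1))  # False
```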
Majority voting. One advantage of the multi-view setting at prediction time is that we can use a majority voting scheme, as described in Section 2. In such a case, we expect that ε(c^{mv}_{h*_1,...,h*_V}) ≤ ε(c^{mg}_{h*_1,...,h*_V}) if the view-specific classifiers are not correlated in their errors. This cannot be guaranteed in general, though, since, in general, we cannot prove any better than ε(c^{mv}_{h*_1,...,h*_V}) ≤ 2 ε(c^{mg}_{h*_1,...,h*_V}) (see e.g. [9]).
5 Agreement-Based Semi-Supervised Learning
One advantage of the multi-view settings described in the previous section is that unlabeled training
examples may naturally be taken into account in a semi-supervised learning scheme, using existing
approaches for multi-view learning (e.g. [3]).
In this section, we describe how, under the framework of [11], the supervised learning trade-off
presented above can be improved using extra unlabeled examples. This framework is based on
the notion of disagreement between the various view-specific classifiers, defined as the expected
variance of their outputs:
$$\mathcal{V}(h_1, \ldots, h_V) := \mathop{\mathbb{E}}_{(x,y)\sim D}\left[\frac{1}{V}\sum_{v} h_v(\bar{x}^v)^2 - \Big(\frac{1}{V}\sum_{v} h_v(\bar{x}^v)\Big)^{2}\right] \qquad (8)$$
The overall idea is that a set of good view-specific classifiers should agree on their predictions, making the expected variance small. This notion of disagreement has two key advantages. First, it does not depend on the true class labels, making its estimation easy over a large, unlabeled training set. The second advantage is that if, during training, it turns out that the view-specific classifiers have a disagreement of at most τ on the unlabeled set, the set of possible view-specific classifiers that needs to be considered in the supervised learning stage is reduced to:
$$H_v(\tau) := \big\{h_v \in H_v \;\big|\; \forall v' \neq v, \; \exists h_{v'} \in H_{v'}, \; \mathcal{V}(h_1, \ldots, h_V) \le \tau\big\}$$
Thus, the more the various view-specific classifiers tend to agree, the smaller the possible set of functions will be. This suggests a simple way to do semi-supervised learning: the unlabeled data can be used to choose, among the classifiers minimizing the empirical risk on the labeled training set, those with best generalization performance (by choosing the classifiers with highest agreement on the unlabeled set). This is particularly interesting when the number of labeled examples is small, as the train error is usually close to 0.
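The disagreement (8) can be estimated on unlabeled data by a simple Monte-Carlo average; a sketch (ours):

```python
def empirical_disagreement(h, unlabeled):
    """Monte-Carlo estimate of Eq. (8): expected variance of the votes,
    computed over completed, unlabeled multi-view observations."""
    V, total = len(h), 0.0
    for xbar in unlabeled:
        votes = [hv(xv) for hv, xv in zip(h, xbar)]
        mean = sum(votes) / V
        total += sum(v * v for v in votes) / V - mean * mean
    return total / len(unlabeled)
```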
Theorem 3 of [11] provides a theoretical value B(τ, δ) for the minimum number of unlabeled examples required to estimate Eq. 8 with precision τ and probability 1 − δ (this bound depends on {H_v}_{v=1..V}). The following result gives a tighter bound on the generalization error of the multi-view Gibbs classifier when unlabeled data are available. The proof is similar to Theorem 4 in [11].

Proposition 2. Let 0 ≤ τ ≤ 1 and 0 < δ < 1. Under the conditions and notations of Theorem 1, assume furthermore that we have access to u ≥ B(τ/2, δ/2) unlabeled examples drawn i.i.d. according to the marginal distribution of D on X.

Then, with probability at least 1 − δ, if the empirical risk minimizers h_v ∈ argmin_{h∈H_v} Σ_{(x̄^v,y)∈S̄^v} e(h, (x̄^v, y)) have a disagreement less than τ/2 on the unlabeled set, we have:
$$\epsilon(c^{mg}_{h_1,\ldots,h_V}) \;\le\; \inf_{h^*_v \in H_v}\Big[\epsilon(c^{mg}_{h^*_1,\ldots,h^*_V})\Big] + \frac{2}{V}\sum_{v=1}^{V}\hat{R}_{m}(e \circ H_v(\tau), \bar{S}^v) + 6\sqrt{\frac{\ln(4/\delta)}{2m}} + \eta$$
We can now rewrite the trade-off between the baseline setting and the multi-view Gibbs classifier, taking semi-supervised learning into account. Using orders of magnitude, and assuming that for each view, R̂_m(e∘H_v(τ), S̄^v) is O(d_u/√m), with the proportionality factor d_u ≤ d, the trade-off becomes:
$$\text{Choose the multi-view Gibbs classification setting when:}\quad d\sqrt{V/m} - d_u/\sqrt{m} > \eta.$$
Thus, the improvement is even more important than in the supervised setting. Also note that the more views we have, the greater the reduction in classifier set complexity should be.
Notice that this semi-supervised learning principle enforces agreement between the view specific
classifiers. In the extreme case where they almost always give the same output, majority voting is
then nearly equivalent to the Gibbs classifier (when all voters agree, any vote is equal to the majority
vote). We therefore expect the majority vote and the Gibbs classifier to yield similar performance in
the semi-supervised setting.
6 Experimental Results
In our experiments, we address the problem of learning document classifiers from a comparable
corpus. We build the comparable corpus by sampling parts of the Reuters RCV1 and RCV2 collections [12, 14]. We used newswire articles written in 5 languages, English, French, German,
Italian and Spanish. We focused on 6 relatively populous classes: C15, CCAT, E21, ECAT,
GCAT, M11.
For each language and each class, we sampled up to 5000 documents from the RCV1 (for English)
or RCV2 (for other languages). Documents belonging to more than one of our 6 classes were assigned the label of their smallest class. This resulted in 12-30K documents per language, and 11-34K
documents per class (see Table 1). In addition, we reserved a test split containing 20% of the documents (respecting class and language proportions) for testing. For each document, we indexed
the text appearing in the title (headline tag), and the body (body tags) of each article. As preprocessing, we lowercased, mapped digits to a single digit token, and removed non alphanumeric
tokens. We also filtered out function words using a stop-list, as well as tokens occurring in less than
5 documents.
Documents were then represented as a bag of words, using a TFIDF-based weighting scheme. The
final vocabulary size for each language is given in table 1. The artificial views were produced using
Table 1: Distribution of documents over languages and classes in the comparable corpus.

Language   # docs    (%)     # tokens
English    18,758    16.78   21,531
French     26,648    23.45   24,893
German     29,953    26.80   34,279
Italian    24,039    21.51   15,506
Spanish    12,342    11.46   11,547
Total      111,740

Class   Size (all lang.)   (%)
C15     18,816             16.84
CCAT    21,426             19.17
E21     13,701             12.26
ECAT    19,198             17.18
GCAT    19,178             17.16
M11     19,421             17.39
PORTAGE, a statistical machine translation system developed at NRC [15]. Each document from
the comparable corpus was thus translated to the other 4 languages.2
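A sketch of the preprocessing and indexing pipeline described above (ours; the regular expressions and the scikit-learn vectorizer are implementation choices, not those of the paper):

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer

def normalize(text, stopwords):
    text = text.lower()
    text = re.sub(r"\d+", "0", text)        # map digits to a single digit token
    tokens = re.findall(r"[a-z0]+", text)   # drop non-alphanumeric tokens
    return " ".join(t for t in tokens if t not in stopwords)

stopwords = {"the", "of", "and"}            # stand-in for a real stop-list
docs = [normalize(d, stopwords)
        for d in ["The house on the lake.", "2 boats on the river."]]
# The paper additionally removes tokens occurring in fewer than 5 documents
# (min_df=5); we use min_df=1 here only because the toy corpus is tiny.
vectorizer = TfidfVectorizer(min_df=1)
X = vectorizer.fit_transform(docs)          # TFIDF bag-of-words matrix
```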
For each class, we set up a binary classification task by using all documents from that class as
positive examples, and all others as negative. We first present experimental results obtained in
supervised learning, using various amounts of labeled examples. We rely on linear SVM models as
base classifiers, using the SVM-Perf package [8]. For comparisons, we employed the four learning
strategies described in section 3: 1? the single-view baseline svb (Eq. 3), 2? generated views as
additional training data gvb (Eq. 4), 3? multi-view Gibbs mvg (Eq. 5), and 4? multi-view majority
voting mvm (Eq. 6). Recall that the second setting, gvb , is the most straightforward way to train and
test classifiers when additional examples are available (or generated) from different sources. It can
thus be seen as a baseline approach, as opposed to the last two strategies (mvg and mvm ), where
view-specific classifiers are both trained and tested over both original and translated documents.
Note also that in our case (V = 5 views), additional training examples obtained from machine
translation represent 4 times as many labeled examples as the original texts used to train the baseline
svb . All test results were averaged over 10 randomly sampled training sets.
Table 2: Test classification accuracy and F1 in the supervised setting, for both baselines (svb, gvb), Gibbs (mvg) and majority voting (mvm) strategies, averaged over 10 random sets of 10 labeled examples per view. ↓ indicates statistically significantly worse performance than the best result, according to a Wilcoxon rank sum test (p < 0.01) [10].

Strategy   C15 Acc.  F1      CCAT Acc.  F1     E21 Acc.  F1      ECAT Acc.  F1     GCAT Acc.  F1     M11 Acc.  F1
svb        .559↓  .388↓     .639↓  .403↓     .557↓  .294↓     .579↓  .374↓     .800↓  .501↓     .651↓  .483↓
gvb        .705   .474↓     .691↓  .464↓     .665↓  .351↓     .623↓  .424↓     .835↓  .595↓     .786↓  .589↓
mvg        .693↓  .494↓     .681↓  .445↓     .665↓  .375↓     .620↓  .420↓     .834↓  .594↓     .787↓  .600↓
mvm        .716   .521      .708   .478      .693   .405      .636   .441      .860   .642      .820   .644
Results obtained in a supervised setting with only 10 labeled documents per language for training are
summarized in table 2. All learning strategies using the generated views during training outperform
the single-view baseline. This shows that, although imperfect, artificial views do bring additional
information that compensates for the lack of labeled data. Although the multi-view Gibbs classifier
predicts based on a translation rather than the original in 80% of cases, it produces almost identical
performance to the gvb run (which only predicts using the original text). These results indicate that
the translation produced by our MT system is of sufficient quality for indexing and classification
purposes. Multi-view majority voting reaches the best performance, yielding a 6–17% improvement in accuracy over the baseline. A similar increase in performance is observed using F1, which
suggests that the multi-view SVM appropriately handles unbalanced classes.
Figure 1 shows the learning curves obtained on 3 classes, C15, ECAT and M11. These figures show
that when there are enough labeled examples (around 500 for these 3 classes), the artificial views do
not provide any additional useful information over the original-language examples. These empirical
results illustrate the trade-off discussed in the previous section. When there are sufficient original
labeled examples, additional generated views do not provide more useful information for learning
than what view-specific classifiers have available already.
We now investigate the use of unlabeled training examples for learning the view-specific classifiers.
Our overall aim is to illustrate our findings from section 5. Recall that in the case where view-specific
classifiers are in agreement over the class labels of a large number of unlabeled examples, the multiview Gibbs and majority vote strategies should have the same performance. In order to enforce
agreement between classifiers on the unlabeled set, we use a variant of the iterative co-training
algorithm [3]. Given the view-specific classifiers trained on an initial set of labeled examples, we
iteratively assign pseudo-labels to the unlabeled examples for which all classifier predictions agree.
We then train new view-specific classifiers on the joint set of the original labeled examples, and those
unanimously (pseudo-)labeled ones. Key differences between this algorithm and co-training are the
number of views used for learning (5 instead of 2), and the use of unanimous and simultaneous
labeling.
²The dataset is available from http://multilingreuters.iit.nrc.ca/ReutersMultiLingualMultiView.htm
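A compact sketch of the self-learning loop just described (ours; `fit_view(v, data)` is a hypothetical helper that trains one view-specific classifier, e.g. a linear SVM, on view v of the given examples):

```python
def self_learn(fit_view, labeled, unlabeled, n_iter=5):
    """labeled: list of (views, y); unlabeled: list of completed view tuples.
    Iteratively pseudo-labels the unlabeled examples on which all
    view-specific classifiers agree, then retrains on the enlarged set."""
    n_views = len(labeled[0][0])
    pseudo = []
    for _ in range(n_iter):
        data = labeled + pseudo
        h = [fit_view(v, data) for v in range(n_views)]
        pseudo = []
        for xbar in unlabeled:
            votes = {hv(xv) for hv, xv in zip(h, xbar)}
            if len(votes) == 1:            # unanimous prediction
                pseudo.append((xbar, votes.pop()))
    return h
```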
[Three panels (C15, ECAT, M11) plot F1, roughly in the range 0.35–0.8, against the number of labeled training examples (10 to 500) for the mvm, mvg and svb strategies.]

Figure 1: F1 vs. size of the labeled training set for classes C15, ECAT and M11.
We call this iterative process the self-learning multiple-view algorithm, as it also bears a similarity with the self-training paradigm [16]. Prediction from the multi-view SVM models obtained from this self-learning multiple-view algorithm is done either using Gibbs (mvg^s) or majority voting (mvm^s). These results are shown in table 3. For comparison we also trained a TSVM model [7] on each view separately, a semi-supervised equivalent to the single-view baseline strategy. Note that the TSVM model mostly outperforms the supervised baseline svb, although the F1 suffers on some classes. This suggests that the TSVM has trouble handling unbalanced classes in this setting.
Table 3: Test classification accuracy and F1 in the semi-supervised setting, for single-view TSVM and multi-view self-learning using either Gibbs (mvg^s) or majority voting (mvm^s), averaged over 10 random sets using 10 labeled examples per view to start. For comparison we provide the single-view baseline and multi-view majority voting performance for supervised learning.

Strategy   C15 Acc.  F1      CCAT Acc.  F1     E21 Acc.  F1      ECAT Acc.  F1     GCAT Acc.  F1     M11 Acc.  F1
svb        .559↓  .388↓     .639↓  .403↓     .557↓  .294↓     .579↓  .374↓     .800↓  .501↓     .651↓  .483↓
mvm        .716↓  .521↓     .708↓  .478↓     .693↓  .405↓     .636↓  .441↓     .860↓  .642↓     .820↓  .644↓
TSVM       .721↓  .482↓     .721↓  .405↓     .746↓  .269↓     .665↓  .263↓     .876↓  .606↓     .834↓  .706↓
mvg^s      .772   .586      .762   .538      .765   .470      .691   .504      .903   .729      .900   .764
mvm^s      .773   .589      .766   .545      .767   .473      .701   .508      .905   .734      .901   .766
The multi-view self-learning algorithm achieves the best classification performance in both accuracy and F1, and significantly outperforms both the TSVM and the supervised multi-view strategy in all classes. As expected, the performances of the mvg^s and mvm^s strategies are similar.
7 Conclusion
The contributions of this paper are twofold. First, we proposed a bound on the risk of the Gibbs
classifier trained over artificially completed multi-view observations, which directly corresponds to
our target application of learning text classifiers from a comparable corpus. We showed that our
bound may lead to a trade-off between the size of the training set, the number of views, and the
quality of the view generating functions. Our result identifies the cases in which it is advantageous to learn with additional artificial views, as opposed to sticking with the baseline setting in which a classifier is trained over single-view observations. This result leads to our second contribution, which is a natural way of using unlabeled data in semi-supervised multi-view learning. We showed that in the case where view-specific classifiers agree over the class labels of additional unlabeled training data, the previous trade-off becomes even tighter. Empirical results on a comparable multilingual corpus support our findings by showing that additional views obtained using a Machine Translation system may significantly increase classification performance in the most interesting situation, when few labeled examples are available for training.
Acknowledgements. This work was supported in part by the IST Program of the European Community, under the PASCAL2 Network of Excellence, IST-2002-506778.
References
[1] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2003.
[2] J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. Wortman. Learning bounds for domain adaptation. In NIPS, 2007.
[3] A. Blum and T. M. Mitchell. Combining labeled and unlabeled data with co-training. In COLT, pages 92–100, 1998.
[4] K. Crammer, M. Kearns, and J. Wortman. Learning from multiple sources. Journal of Machine Learning Research, 9:1757–1774, 2008.
[5] J. D. R. Farquhar, D. Hardoon, H. Meng, J. Shawe-Taylor, and S. Szedmak. Two view learning: SVM-2K, theory and practice. In Advances in Neural Information Processing Systems 18, pages 355–362, 2006.
[6] D. R. Hardoon, G. Leen, S. Kaski, and J. Shawe-Taylor (eds). NIPS workshop on learning from multiple sources, 2008.
[7] T. Joachims. Transductive inference for text classification using support vector machines. In ICML, pages 200–209, 1999.
[8] T. Joachims. Training linear SVMs in linear time. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD), pages 217–226, 2006.
[9] J. Langford and J. Shawe-Taylor. PAC-Bayes and margins. In NIPS 15, pages 439–446, 2002.
[10] E. Lehmann. Nonparametric Statistical Methods Based on Ranks. McGraw-Hill, New York, 1975.
[11] B. Leskes. The value of agreement, a new boosting algorithm. In COLT, pages 95–110, 2005.
[12] D. D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397, 2004.
[13] I. Muslea. Active learning with multiple views. PhD thesis, USC, 2002.
[14] Reuters. Corpus, volume 2, multilingual corpus, 1996-08-20 to 1997-08-19, 2005.
[15] N. Ueffing, M. Simard, S. Larkin, and J. H. Johnson. NRC's PORTAGE system for WMT. In ACL-2007 Second Workshop on SMT, pages 185–188, 2007.
[16] X. Zhu. Semi-supervised learning literature survey. Technical report, University of Wisconsin–Madison, 2007.
Group Sparse Coding
Samy Bengio
Google
Mountain View, CA
[email protected]
Fernando Pereira
Google
Mountain View, CA
[email protected]
Yoram Singer
Google
Mountain View, CA
[email protected]
Dennis Strelow
Google
Mountain View, CA
[email protected]
Abstract
Bag-of-words document representations are often used in text, image and video
processing. While it is relatively easy to determine a suitable word dictionary for
text documents, there is no simple mapping from raw images or videos to dictionary terms. The classical approach builds a dictionary using vector quantization
over a large set of useful visual descriptors extracted from a training set, and uses a
nearest-neighbor algorithm to count the number of occurrences of each dictionary
word in documents to be encoded. More robust approaches have been proposed
recently that represent each visual descriptor as a sparse weighted combination of
dictionary words. While favoring a sparse representation at the level of visual descriptors, those methods, however, do not ensure that images have sparse representations. In this work, we use mixed-norm regularization to achieve sparsity at the
image level as well as a small overall dictionary. This approach can also be used to
encourage using the same dictionary words for all the images in a class, providing
a discriminative signal in the construction of image representations. Experimental results on a benchmark image classification dataset show that when compact
image or dictionary representations are needed for computational efficiency, the
proposed approach yields better mean average precision in classification.
1 Introduction
Bag-of-words document representations are widely used in text, image, and video processing [14, 1].
Those representations abstract from spatial and temporal order to encode a document as a vector of
the numbers of occurrences in the document of descriptors from a suitable dictionary. For text
documents, the dictionary might consist of all the words or of all the n-grams of a certain minimum
frequency in the document collection [1].
For images or videos, however, there is no simple mapping from the raw document to descriptor
counts. Instead, visual descriptors must be first extracted and then represented in terms of a carefully constructed dictionary. We will not discuss further here the intricate processes of identifying
useful visual descriptors, such as color, texture, angles, and shapes [14], and of measuring them at
appropriate document locations, such as on regular grids, on special interest points, or at multiple
scales [6].
For dictionary construction, the standard approach in computer vision is to use some unsupervised
vector quantization (VQ) technique, often k-means clustering [14], to create the dictionary. A new
image is then represented by a vector indexed by dictionary elements (codewords), which for element d counts the number of visual descriptors in the image whose closest codeword is d. VQ
representations are maximally sparse per descriptor occurrence since they pick a single codeword
for each occurrence, but they may not be sparse for the image as a whole; furthermore, such representations are not that robust with respect to descriptor variability.
Sparse representations have obvious computational benefits, by saving both processing time in handling visual descriptors and space in storing encoded images. To alleviate the brittleness of VQ
representations, several studies proposed representation schemes where each visual descriptor is encoded as a weighted sum of dictionary elements, where the encoding optimizes a tradeoff between
reconstruction error and the ℓ1 norm of the reconstruction weights [3, 5, 7, 8, 9, 16]. These techniques promote sparsity in determining a small set of codewords from the dictionary that can be
used to efficiently represent each visual descriptor of each image [13].
However, those approaches consider each visual descriptor in the image as a separate coding problem and do not take into account the fact that descriptor coding is just an intermediate step in creating
a bag of codewords representation for the whole image. Thus, sparse coding of each visual descriptor does not guarantee sparse coding of the whole image. This might prevent the use of such methods
in real large-scale applications that are constrained by either time or space resources. In this study, we propose and evaluate mixed-norm regularizers [12, 10, 2] to take into account the structure
of bags of visual descriptors present in images. Using this approach, we can for example specify an
encoder that exploits the fact that once a codeword has been selected to help represent one of the
visual descriptors of an image, it may as well be used to represent other visual descriptors of the
same image without much additional regularization cost.
Furthermore, while images are represented as bags, the same idea could be used for sets of images,
such as all the images from a given category. In this case, mixed regularization can be used to
specify that when a codeword has been selected to help represent one of the visual descriptors of an
image of a given category, it could as well be used to represent other visual descriptors of any image
of the same category at no additional regularization cost. This form of regularization thus promotes
the use of a small subset of codewords for each category that could be different from category to
category, thus including an indirect discriminative signal in code construction.
Mixed regularization can be applied at two levels: for image encoding, which can be expressed
as a convex optimization problem, and for dictionary learning, using an alternating minimization
procedure. Dictionary regularization promotes a small dictionary size directly, instead of indirectly
through the sparse encoding step.
The paper is organized as follows: Sec. 2 introduces the notation used in the rest of the paper, and
summarizes the technical approach. Sec. 3 describes and solves the convex optimization problem for
mixed-regularization encoding. Sec. 4 extends the technique to learn the dictionary by alternating
optimization. Finally, Sec. 5 presents experimental results on a well-known image database.
2 Problem Statement
We denote scalars with lower-case letters and vectors with bold lower-case letters such as v. We assume that the instance space is R^n, endowed with the standard inner product between two vectors u and v, u · v = Σ_{j=1}^n u_j v_j. We also use the standard ℓ_p norms ‖·‖_p over R^n with p = 1, 2, ∞. We often make use of the fact that u · u = ‖u‖², where as usual we omit the norm subscript for p = 2.
Our main goal is to encode effectively groups of instances in terms of a set of dictionary codewords D = {d_j}_{j=1}^{|D|}. For example, if instances are image patches, each group may be the set of patches in a particular image, and each codeword may represent some kind of average patch. The m-th group is denoted G_m, where G_m = {x_{m,i}}_{i=1}^{|G_m|} and each x_{m,i} ∈ R^n is an instance. When discussing operations on a single group, we use G for the group in discussion and denote by x_i its i-th instance.
Given D and G, our first subgoal, encoding, is to minimize a tradeoff between the reconstruction error for G in terms of D and a suitable mixed norm for the matrix of reconstruction weights that express each x_i as a positive linear combination of d_j ∈ D. The tradeoff between accurate reconstruction and compact encoding is governed by a regularization parameter λ.
Our second subgoal, learning, is to estimate a good dictionary D given a set of training groups {G_m}_{m=1}^n. We achieve these goals by alternating between (i) fixing the dictionary to find reconstruction weights that minimize the sum of encoding objectives for all groups, and (ii) fixing the
reconstruction weights for all groups to find the dictionary that minimizes a tradeoff between the
sum of group encoding objectives and the mixed norm of the dictionary.
3 Group Coding
To encode jointly all the instances in a group G with dictionary D, we solve the following convex
optimization problem:
A* = argmin_A Q(A, G, D) , where

Q(A, G, D) = (1/2) Σ_{i∈G} ‖x_i − Σ_{j=1}^{|D|} α_j^i d_j‖² + λ Σ_{j=1}^{|D|} ‖α_j‖_p    (1)

and α_j^i ≥ 0 for all i, j.

The reconstruction matrix A = {α_j}_{j=1}^{|D|} consists of non-negative vectors α_j = (α_j^1, . . . , α_j^{|G|}) specifying the contribution of d_j to each instance. The second term of the objective weighs the mixed ℓ1/ℓ_p norm of A, which measures reconstruction complexity, with the regularization parameter λ that balances reconstruction quality (the first term) and reconstruction complexity.
The problem of Eq. (1) can be solved by coordinate descent. Leaving all indices intact except for
index r, omitting fixed arguments of the objective, and denoting by c1 and c2 terms which do not
depend on α_r, we obtain the following reduced objective:

Q(α_r) = (1/2) Σ_{i∈G} ‖x_i − Σ_{j≠r} α_j^i d_j − α_r^i d_r‖² + λ ‖α_r‖_p + c_1
       = Σ_{i∈G} [ Σ_{j≠r} α_j^i α_r^i (d_j · d_r) − α_r^i (x_i · d_r) + (1/2)(α_r^i)² ‖d_r‖² ] + λ ‖α_r‖_p + c_2 .    (2)

We next show how to find the optimum α_r for p = 1 and p = 2. Let Q̃ be just the reconstruction term of the objective. Its partial derivatives with respect to each α_r^i are

∂Q̃/∂α_r^i = Σ_{j≠r} α_j^i (d_j · d_r) − x_i · d_r + α_r^i ‖d_r‖² .    (3)
Let us make the following abbreviation for a given index r:

μ_i = x_i · d_r − Σ_{j≠r} α_j^i (d_j · d_r) .    (4)

It is clear that if μ_i ≤ 0 then the optimum for α_r^i is zero. In the derivation below we therefore employ μ_i^+ = [μ_i]_+ where [z]_+ = max{0, z}. Next we derive the optimal solution for each of the norms we consider, starting with p = 1. For p = 1 the objective function is separable and we get the
following sub-gradient condition for optimality:

0 ∈ −μ_i^+ + α_r^i ‖d_r‖² + λ ∂|α_r^i| ,   with ∂|α_r^i| ⊆ [0, 1] for α_r^i ≥ 0 .    (5)

Since α_r^i ≥ 0, the above sub-gradient condition for optimality implies that α_r^i = 0 when μ_i^+ ≤ λ, and otherwise α_r^i = (μ_i^+ − λ)/‖d_r‖².
The objective function is not separable when p = 2. In this case we need to examine the entire set of values {μ_i^+}. We denote by μ^+ the vector whose i-th value is μ_i^+. Assume for now that the optimal solution has a non-zero norm, ‖α_r‖₂ > 0. In this case, the gradient of Q(α_r) with an ℓ2 regularization term is

‖d_r‖² α_r − μ^+ + λ α_r / ‖α_r‖ .

At the optimum this vector must be zero, so after rearranging terms we obtain

α_r = (‖d_r‖² + λ/‖α_r‖)^{-1} μ^+ .    (6)

Therefore, the vector α_r is in the same direction as μ^+, which means that we can simply write α_r = s μ^+ where s is a non-negative scalar. We thus can rewrite Eq. (6) solely as a function of the scaling parameter s:

s μ^+ = (‖d_r‖² + λ/(s‖μ^+‖))^{-1} μ^+ ,

which implies that

s = (1 − λ/‖μ^+‖) / ‖d_r‖² .    (7)

We now revisit the assumption that the norm of the optimal solution is greater than zero. Since s cannot be negative, the above expression also provides the condition for obtaining a zero vector for α_r. Namely, the term 1 − λ/‖μ^+‖ must be positive; thus, we get that α_r = 0 if ‖μ^+‖ ≤ λ and otherwise α_r = s μ^+ where s is defined in Eq. (7).
Once the optimal group reconstruction matrix A is found, we compress the matrix into a single vector. This vector is of fixed dimension and does not depend on the number of instances that constitute the group. To do so, we simply take the p-norm of each α_j, thus yielding a |D|-dimensional vector. Since we use sparsity-promoting mixed norms, in particular the ℓ1/ℓ2 mixed norm, the resulting vector is likely to be sparse, as we show experimentally in Section 6.
Since visual descriptors and dictionary elements are only accessed through inner products in the above method, it could easily be generalized to work with Mercer kernels instead.
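To make the updates above concrete, here is a minimal numpy sketch of the coordinate-descent encoder for the p = 2 case, together with the p-norm pooling step just described. It is an illustration under our own naming (encode_group, pool_group, a fixed iteration count in place of a convergence test), not the authors' implementation.

import numpy as np

def encode_group(X, D, lam, n_iters=50):
    """X: |G| x n (one instance per row); D: |D| x n (one codeword per row).
       Returns A of shape |D| x |G| with A[j, i] = alpha_j^i."""
    G = D @ D.T            # codeword Gram matrix, [G]_jr = d_j . d_r
    B = D @ X.T            # [B]_ri = x_i . d_r
    A = np.zeros((D.shape[0], X.shape[0]))
    for _ in range(n_iters):
        for r in range(D.shape[0]):
            # mu_i = x_i . d_r - sum_{j != r} alpha_j^i (d_j . d_r), Eq. (4)
            mu = B[r] - G[r] @ A + G[r, r] * A[r]
            mu_plus = np.maximum(mu, 0.0)
            norm = np.linalg.norm(mu_plus)
            if norm <= lam:
                A[r] = 0.0                          # whole row switched off
            else:
                s = (1.0 - lam / norm) / G[r, r]    # Eq. (7)
                A[r] = s * mu_plus                  # Eq. (6)
    return A

def pool_group(A):
    # Compress the group into one |D|-dimensional vector by taking the
    # l2 norm of each alpha_j, as described above.
    return np.linalg.norm(A, axis=1)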
4 Dictionary Learning
Now that we know how to achieve optimal reconstruction for a given dictionary, we examine how to learn a good dictionary, that is, a dictionary that balances reconstruction error, reconstruction complexity, and overall complexity relative to the given training set. In particular, we seek a learning method that facilitates both the induction of new dictionary words and the removal of dictionary words with low predictive power. To achieve this goal, we apply ℓ1/ℓ2 regularization, controlled by a new hyperparameter γ, to dictionary words. For this approach to work, we assume that instances have been mean-subtracted so that the zero vector 0 is the (uninformative) mean of the data and regularization towards 0 is equivalent to removing words that do not contribute much to a compact representation of groups.
Let G = {G_1, . . . , G_n} be a set of groups and A = {A_1, . . . , A_n} the corresponding reconstruction coefficients relative to dictionary D. Then, the following objective meets the above requirements:

Q(A, D) = Σ_{m=1}^n Q(A_m, G_m, D) + γ Σ_{k=1}^{|D|} ‖d_k‖_p   s.t. α_{m,j}^i ≥ 0 ∀ i, j, m ,    (8)

where the single group objective Q(A_m, G_m, D) is as in Eq. (1).
In our application we set p = 2 as the norm penalty of the dictionary words. For fixed A, the objective above is convex in D. Moreover, the same coordinate descent technique described above for finding the optimum reconstruction weights can be used again here after simple algebraic manipulations. Define the following auxiliary variables:

v_r = Σ_m Σ_i α_{m,r}^i x_{m,i}   and   Φ_{j,k} = Σ_m Σ_i α_{m,j}^i α_{m,k}^i .    (9)

Then, we can express d_r compactly as follows. As before, assume that ‖d_r‖ > 0. Calculating the gradient with respect to each d_r and equating it to zero, we obtain

Σ_m Σ_{i∈G_m} [ Σ_{j≠r} α_{m,j}^i α_{m,r}^i d_j + (α_{m,r}^i)² d_r − α_{m,r}^i x_{m,i} ] + γ d_r/‖d_r‖ = 0 .
Swapping the sums over m and i with the sum over j, using the auxiliary variables, and noting that d_j depends neither on m nor on i, we obtain

Σ_{j≠r} Φ_{j,r} d_j + Φ_{r,r} d_r − v_r + γ d_r/‖d_r‖ = 0 .    (10)

Similarly to the way we solved for α_r, we now define the vector u_r = v_r − Σ_{j≠r} Φ_{j,r} d_j to get the following iterate for d_r:

d_r = Φ_{r,r}^{-1} [1 − γ/‖u_r‖]_+ u_r ,    (11)

where, as above, we incorporated the case d_r = 0 by applying the operator [·]_+ to the term 1 − γ/‖u_r‖. The form of the solution implies that we can eliminate d_r, as it becomes 0, whenever the norm of the residual vector u_r is smaller than γ. Thus, the dictionary learning procedure naturally facilitates the ability to remove dictionary words whose predictive power falls below the regularization parameter.
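A matching sketch of the dictionary update of Eqs. (9)-(11), under the same assumptions and naming conventions as the encoder sketch above; again this is our own illustration rather than the authors' code.

import numpy as np

def update_dictionary(D, As, Xs, gamma, n_iters=10):
    """As: list of |D| x |G_m| reconstruction matrices;
       Xs: list of |G_m| x n instance matrices."""
    # Auxiliary statistics of Eq. (9).
    V = sum(A @ X for A, X in zip(As, Xs))          # |D| x n, rows v_r
    Phi = sum(A @ A.T for A in As)                  # |D| x |D|
    for _ in range(n_iters):
        for r in range(D.shape[0]):
            # u_r = v_r - sum_{j != r} Phi[j, r] d_j
            u = V[r] - Phi[:, r] @ D + Phi[r, r] * D[r]
            norm = np.linalg.norm(u)
            # Eq. (11): d_r is dropped when ||u_r|| <= gamma.
            if norm <= gamma or Phi[r, r] == 0.0:
                D[r] = 0.0
            else:
                D[r] = (1.0 - gamma / norm) / Phi[r, r] * u
    return D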
5 Experimental Setting
We compare our approach to image coding with previous sparse coding methods by measuring their
impact on classification performance on the PASCAL VOC (Visual Object Classes) 2007 dataset [4].
The VOC datasets contain images from 20 classes, including people, animals (bird), vehicles (aeroplane), and indoor objects (chair), and are considered natural, difficult images for classification.
There are around 2500 training images, 2500 validation images and 5000 test images in total.
For each coding technique under consideration, we explore a range of values for the hyperparameters
? and ?. In the past, many features have been used for VOC classification, with bag-of-words
histograms of local descriptors like SIFT [6] being most popular. In our experiments, we extract
local descriptors based on a regular grid for each image. The grid points are located at every seventh
pixel horizontally and vertically, which produces an average of 3234 descriptors per image. We
used a custom local descriptor that collects Gabor wavelet responses at different orientations, spatial
scales, and spatial offsets from the interest point. Four orientations (0°, 45°, 90°, 135°) and 27
(scale, offset) combinations are used, for a total of 108 components. The 27 (scale, offset) pairs were
chosen by optimizing a previous image recognition task, unrelated to this paper, using a genetic
algorithm. Tola et al. [15] independently described a descriptor that similarly uses responses at
different orientations, scales, and offsets (see their Figure 2). Overall, this descriptor is generally
comparable to SIFT and results in similar performance.
To build an image feature vector from the descriptors, we thus investigate the following methods:
1. Build a bag-of-words histogram over hierarchical k-means codewords by looking up each
descriptor in a hierarchical k-means tree [11]. We use branching factors of 6 to 13 and a
depth of 3 for a total of between 216 and 2197 codewords. When used with multiple feature
types, this method results in very good classification performance on the VOC task.
2. Jointly train a dictionary and encode each descriptor using an ℓ1 sparse coding approach with γ = 0, which was studied previously [5, 7, 9].
3. Jointly train a dictionary and encode sets of descriptors where each set corresponds to a single image, using ℓ1/ℓ2 group sparse coding, varying both λ and γ.
4. Jointly train a dictionary and encode sets of descriptors where each set corresponds to all descriptors of all the images of a single class, using ℓ1/ℓ2 sparse coding, varying both λ and γ. Then, use ℓ1/ℓ2 sparse coding to encode the descriptors in individual images and obtain a single α vector per image.
As explained before, we normalized all descriptors to have zero mean so that regularizing dictionary
words towards the zero vector implies dictionary sparsity.
In all cases, the initial dictionary used during training was obtained from the same hierarchical kmeans tree, with a branching factor of 10 and depth 4 rather than 3 as used in the baseline method.
This scheme yielded an initial dictionary of size 7873.
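As an aside on the baseline of option 1 above, the following sketch (our own code) shows the bag-of-words counting step; a flat nearest-neighbor search stands in for the hierarchical k-means tree lookup.

import numpy as np

def vq_histogram(descriptors, codewords):
    """descriptors: (n, d) array for one image; codewords: (|D|, d) array."""
    # Squared distances between every descriptor and every codeword.
    d2 = ((descriptors[:, None, :] - codewords[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    return np.bincount(nearest, minlength=len(codewords))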
[Figure: mean average precision (0.15–0.4) versus dictionary size (500–2000) for HKMeans; ℓ1 (γ = 0, λ varied); ℓ1/ℓ2 grouped by image with γ = 0 and λ varied, or λ = 6.8e−5 and γ varied; and ℓ1/ℓ2 grouped by class with λ (and optionally γ) varied.]
Figure 1: Mean average precision on the 2007 PASCAL VOC database as a function of the size of the dictionary obtained by both ℓ1 and ℓ1/ℓ2 regularization approaches when varying λ or γ. We show results where descriptors are grouped either by image or by class. The baseline system using hierarchical k-means is also shown.
To evaluate the impact of different coding methods on an important end-to-end task, image classification, we selected the VOC 2007 training set for classifier training, the VOC 2007 validation set
for hyperparameter selection, and the VOC 2007 test set for evaluation. After the datasets are
encoded with each of the methods being evaluated, a one-versus-all linear SVM is trained on the
encoded training set for each of the 20 classes, and the best SVM hyperparameter C is chosen on
the validation set. Class average precisions on the encoded test set are then averaged across the 20
classes to produce the mean average precision shown in our graphs.
6 Results and Discussion
In Figure 1 we compare the mean average precisions of the competing approaches as encoding hyperparameters are varied to control the overall dictionary size. For the ℓ1 approach, different dictionary sizes were obtained by tuning λ while setting γ = 0. For the ℓ1/ℓ2 approach, since it was not possible to compare all possible combinations of λ and γ, we first fixed γ to be zero, so that it could be comparable to the standard ℓ1 approach with the same setting. Then we fixed λ to a value which proved to yield good results and varied γ. As can be seen in Figure 1, when the dictionary is allowed to be very large, the pure ℓ1 approach yields the best performance. On the other hand, when the size of the dictionary matters, all the approaches based on ℓ1/ℓ2 regularization performed better than the ℓ1 counterpart. Even hierarchical k-means performed better than the pure ℓ1 in that case. The version of ℓ1/ℓ2 in which we allowed γ to vary provided the best tradeoff between dictionary size and classification performance when descriptors were grouped per image, which was to be expected as γ directly promotes sparse dictionaries. More interestingly, when grouping descriptors per class instead of per image, we get even better performance for small dictionary sizes by varying λ.
In Figure 2 we compare the mean average precisions of ℓ1 and ℓ1/ℓ2 regularization as the average image size varies.
[Figure: mean average precision (0.15–0.4) versus average encoded image size (500–2000) for HKMeans and the ℓ1 and ℓ1/ℓ2 variants of Figure 1.]
Figure 2: Mean average precision on the 2007 PASCAL VOC database as a function of the average size of each image as encoded using the trained dictionary obtained by both ℓ1 and ℓ1/ℓ2 regularization approaches when varying λ and γ. We show results where descriptors are grouped either by image or by class. The baseline system using hierarchical k-means is also shown.
[Figure: two matrices of dictionary-word usage counts (one row per dictionary word), left for ℓ1 coding and right for mixed ℓ1/ℓ2 coding of the same image.]
Figure 3: Comparison of the dictionary words used to reconstruct the same image. A pure ℓ1 coding was used on the left, while a mixed ℓ1/ℓ2 encoding was used on the right plot. Each row represents the number of times each dictionary word was used in the reconstruction of the image.
When image size is constrained, which is often the case in large-scale applications, all the ℓ1/ℓ2 regularization choices yield better performance than ℓ1 regularization. Once again, ℓ1 regularization performed even worse than hierarchical k-means for small image sizes.
Figure 3 compares the usage of dictionary words to encode the same image, either using ℓ1 (on the left) or ℓ1/ℓ2 (on the right) regularization. Each graph shows the number of times a dictionary word (a row in the plot) was used in the reconstruction of the image. Clearly, ℓ1 regularization yields an overall sparser representation in terms of the total number of dictionary coefficients that are used. However, almost all of the resulting dictionary vectors are non-zero and used at least once in the coding process. As expected, with ℓ1/ℓ2 regularization, a dictionary word is either always used or never used, yielding a much more compact representation in terms of the total number of dictionary words that are used.
Overall, mixed-norm regularization yields better performance when the problem to solve includes
resource constraints, either time (a smaller dictionary yields faster image encoding) or space (one
can store or convey more images when they take less space). Mixed norms might thus be a good fit when
a tradeoff between pure performance and resources is needed, as is often the case for large-scale
applications or online settings.
Finally, grouping descriptors per class instead of per image during dictionary learning promotes the
use of the same dictionary words for all images of the same class, hence yielding some form of weak
discrimination which appears to help under space or time constraints.
Acknowledgments
We would like to thank John Duchi for numerous discussions and suggestions.
References
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison Wesley, Harlow, England, 1999.
[2] J. Duchi and Y. Singer. Boosting with structural sparsity. In International Conference on Machine Learning (ICML), 2009.
[3] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing, 15(12):3736–3745, 2006.
[4] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascal-network.org/challenges/VOC/voc2007/workshop/index.html.
[5] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems (NIPS), 2007.
[6] D. G. Lowe. Object recognition from local scale-invariant features. In International Conference on Computer Vision (ICCV), pages 1150–1157, 1999.
[7] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In International Conference on Machine Learning (ICML), 2009.
[8] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Transactions on Image Processing, 17(1), 2008.
[9] J. Mairal, M. Leordeanu, F. Bach, M. Hebert, and J. Ponce. Discriminative sparse image models for class-specific edge detection and image interpretation. In European Conference on Computer Vision (ECCV), 2008.
[10] S. Negahban and M. Wainwright. Phase transitions for high-dimensional joint support recovery. In Advances in Neural Information Processing Systems 22, 2008.
[11] D. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2006.
[12] G. Obozinski, B. Taskar, and M. Jordan. Joint covariate selection for grouped classification. Technical Report 743, Dept. of Statistics, University of California, Berkeley, 2007.
[13] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37, 1997.
[14] P. Quelhas, F. Monay, J. M. Odobez, D. Gatica-Perez, T. Tuytelaars, and L. J. Van Gool. Modeling scenes with local descriptors and latent aspects. In International Conference on Computer Vision (ICCV), 2005.
[15] E. Tola, V. Lepetit, and P. Fua. A fast local descriptor for dense matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
[16] J. Yang, K. Yu, Y. Gong, and T. Huang. Linear spatial pyramid matching using sparse coding for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
Learning Non-Linear Combinations of Kernels
Corinna Cortes
Google Research
76 Ninth Ave
New York, NY 10011
[email protected]
Mehryar Mohri
Courant Institute and Google
251 Mercer Street
New York, NY 10012
[email protected]
Afshin Rostamizadeh
Courant Institute and Google
251 Mercer Street
New York, NY 10012
[email protected]
Abstract
This paper studies the general problem of learning kernels based on a polynomial
combination of base kernels. We analyze this problem in the case of regression
and the kernel ridge regression algorithm. We examine the corresponding learning
kernel optimization problem, show how that minimax problem can be reduced to a
simpler minimization problem, and prove that the global solution of this problem
always lies on the boundary. We give a projection-based gradient descent algorithm for solving the optimization problem, shown empirically to converge in few
iterations. Finally, we report the results of extensive experiments with this algorithm using several publicly available datasets demonstrating the effectiveness of
our technique.
1 Introduction
Learning algorithms based on kernels have been used with much success in a variety of tasks [17,19].
Classification algorithms such as support vector machines (SVMs) [6, 10], regression algorithms,
e.g., kernel ridge regression and support vector regression (SVR) [16, 22], and general dimensionality reduction algorithms such as kernel PCA (KPCA) [18] all benefit from kernel methods. Positive
definite symmetric (PDS) kernel functions implicitly specify an inner product in a high-dimension
Hilbert space where large-margin solutions are sought. So long as the kernel function used is PDS,
convergence of the training algorithm is guaranteed.
However, in the typical use of these kernel method algorithms, the choice of the PDS kernel, which
is crucial to improved performance, is left to the user. A less demanding alternative is to require
the user to instead specify a family of kernels and to use the training data to select the most suitable
kernel out of that family. This is commonly referred to as the problem of learning kernels.
There is a large recent body of literature addressing various aspects of this problem, including deriving efficient solutions to the optimization problems it generates and providing a better theoretical
analysis of the problem both in classification and regression [1, 8, 9, 11, 13, 15, 21]. With the exception of a few publications considering infinite-dimensional kernel families such as hyperkernels [14]
or general convex classes of kernels [2], the great majority of analyses and algorithmic results focus
on learning finite linear combinations of base kernels as originally considered by [12]. However,
despite the substantial progress made in the theoretical understanding and the design of efficient
algorithms for the problem of learning such linear combinations of kernels, no method seems to reliably give improvements over baseline methods. For example, the learned linear combination does
not consistently outperform either the uniform combination of base kernels or simply the best single
base kernel (see, for example, UCI dataset experiments in [9, 12], see also NIPS 2008 workshop).
This suggests exploring other non-linear families of kernels to obtain consistent and significant
performance improvements.
Non-linear combinations of kernels have been recently considered by [23]. However, here too,
experimental results have not demonstrated a consistent performance improvement for the general
1
learning task. Another method, hierarchical multiple learning [3], considers learning a linear combination of an exponential number of linear kernels, which can be efficiently represented as a product
of sums. Thus, this method can also be classified as learning a non-linear combination of kernels.
However, in [3] the base kernels are restricted to concatenation kernels, where the base kernels
apply to disjoint subspaces. For this approach the authors provide an effective and efficient algorithm and some performance improvement is actually observed for regression problems in very high
dimensions.
This paper studies the general problem of learning kernels based on a polynomial combination of
base kernels. We analyze that problem in the case of regression using the kernel ridge regression
(KRR) algorithm. We show how to simplify its optimization problem from a minimax problem
to a simpler minimization problem and prove that the global solution of the optimization problem
always lies on the boundary. We give a projection-based gradient descent algorithm for solving this
minimization problem that is shown empirically to converge in few iterations. Furthermore, we give
a necessary and sufficient condition for this algorithm to reach a global optimum. Finally, we report
the results of extensive experiments with this algorithm using several publicly available datasets
demonstrating the effectiveness of our technique.
The paper is structured as follows. In Section 2, we introduce the non-linear family of kernels
considered. Section 3 discusses the learning problem, formulates the optimization problem, and
presents our solution. In Section 4, we study the performance of our algorithm for learning nonlinear combinations of kernels in regression (NKRR) on several publicly available datasets.
2 Kernel Family
This section introduces and discusses the family of kernels we consider for our learning kernel
problem. Let K1 , . . . , Kp be a finite set of kernels that we combine to define more complex kernels.
We refer to these kernels as base kernels. In much of the previous work on learning kernels, the
family of kernels considered is that of linear or convex combinations of some base kernels. Here,
we consider polynomial combinations of higher degree d ≥ 1 of the base kernels with non-negative coefficients of the form:

K_μ = Σ_{0 ≤ k_1+···+k_p ≤ d, k_i ≥ 0, i ∈ [0, p]} μ_{k_1···k_p} K_1^{k_1} ··· K_p^{k_p} ,   μ_{k_1···k_p} ≥ 0 .    (1)
Any kernel function K_μ of this form is PDS since products and sums of PDS kernels are PDS [4]. Note that K_μ is in fact a linear combination of the PDS kernels K_1^{k_1} ··· K_p^{k_p}. However, the number of coefficients μ_{k_1···k_p} is in O(p^d), which may be too large for a reliable estimation from a sample of size m. Instead, we can assume that for some subset I of all p-tuples (k_1, . . . , k_p), μ_{k_1···k_p} can be written as a product of non-negative coefficients μ_1, . . . , μ_p: μ_{k_1···k_p} = μ_1^{k_1} ··· μ_p^{k_p}. Then, the general form of the polynomial combinations we consider becomes

K = Σ_{(k_1,...,k_p)∈I} μ_1^{k_1} ··· μ_p^{k_p} K_1^{k_1} ··· K_p^{k_p} + Σ_{(k_1,...,k_p)∈J} μ_{k_1···k_p} K_1^{k_1} ··· K_p^{k_p} ,    (2)
where J denotes the complement of the subset I. The total number of free parameters is then
reduced to p+|J|. The choice of the set I and its size depends on the sample size m and possible
prior knowledge about relevant kernel combinations. The second sum of equation (2) defining our
general family of kernels represents a linear combination of PDS kernels. In the following, we
focus on kernels that have the form of the first sum and that are thus non-linear in the parameters
μ_1, . . . , μ_p. More specifically, we consider kernels K_μ defined by

K_μ = Σ_{k_1+···+k_p=d} μ_1^{k_1} ··· μ_p^{k_p} K_1^{k_1} ··· K_p^{k_p} ,    (3)

where μ = (μ_1, . . . , μ_p)^T ∈ R^p. For ease of presentation, our analysis is given for the case d = 2, where the quadratic kernel can be given the following simpler expression:

K_μ = Σ_{k,l=1}^p μ_k μ_l K_k K_l .    (4)

But the extension to higher-degree polynomials is straightforward, and our experiments include results for degrees d up to 4.
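As a concrete illustration: since the kernel products in Eq. (4) are pointwise, the Gram matrix of K_μ on a sample is the elementwise (Hadamard) square of the weighted sum of the base Gram matrices. A minimal numpy sketch, with our own naming:

import numpy as np

def quadratic_gram(base_grams, mu):
    """base_grams: array of shape (p, m, m); mu: non-negative weights (p,)."""
    K_bar = np.tensordot(mu, base_grams, axes=1)   # sum_k mu_k K_k
    return K_bar * K_bar                           # Hadamard square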
3 Algorithm for Learning Non-Linear Kernel Combinations
3.1 Optimization Problem
We consider a standard regression problem where the learner receives a training sample of size m, S = ((x_1, y_1), . . . , (x_m, y_m)) ∈ (X × Y)^m, where X is the input space and Y ⊆ R the label space. The family of hypotheses H_μ out of which the learner selects a hypothesis is the reproducing kernel Hilbert space (RKHS) associated to a PDS kernel function K_μ: X × X → R as defined in the previous section. Unlike standard kernel-based regression algorithms, however, here both the parameter vector μ defining the kernel K_μ and the hypothesis are learned using the training sample S.
The learning kernel algorithm we consider is derived from kernel ridge regression (KRR). Let y = [y_1, . . . , y_m]^T ∈ R^m denote the vector of training labels and let K_μ denote the Gram matrix of the kernel K_μ for the sample S: [K_μ]_{i,j} = K_μ(x_i, x_j), for all i, j ∈ [1, m]. The standard KRR dual optimization algorithm for a fixed kernel matrix K_μ is given in terms of the Lagrange multipliers α ∈ R^m by [16]:

max_{α∈R^m} −α^T (K_μ + λI) α + 2 α^T y .    (5)
The related problem of learning the kernel K_μ concomitantly can be formulated as the following min-max optimization problem [9]:

min_{μ∈M} max_{α∈R^m} −α^T (K_μ + λI) α + 2 α^T y ,    (6)

where M is a positive, bounded, and convex set. The positivity of μ ensures that K_μ is positive semi-definite (PSD) and its boundedness forms a regularization controlling the norm of μ.¹ Two natural choices for the set M are the norm-1 and norm-2 bounded sets,

M₁ = {μ | μ ≥ 0 ∧ ‖μ − μ_0‖₁ ≤ Λ}    (7)
M₂ = {μ | μ ≥ 0 ∧ ‖μ − μ_0‖₂ ≤ Λ} .    (8)
These definitions include an offset parameter μ_0 for the weights μ. Some natural choices for μ_0 are μ_0 = 0, or μ_0 with ‖μ_0‖ = 1. Note that here, since the objective function is not linear in μ, the norm-1-type regularization may not lead to a sparse solution.
3.2 Algorithm Formulation
For learning linear combinations of kernels, a typical technique consists of applying the minimax
theorem to permute the min and max operators, which can lead to optimization problems computationally more efficient to solve [8, 12]. However, in the non-linear case we are studying, this
technique is unfortunately not applicable.
Instead, our method for learning non-linear kernels and solving the min-max problem in equation (6)
consists of first directly solving the inner maximization problem. In the case of KRR, for any fixed μ the optimum is given by

α = (K_μ + λI)^{-1} y .    (9)

Plugging the optimal expression of α into the min-max optimization yields the following equivalent minimization in terms of μ only:

min_{μ∈M} F(μ) = y^T (K_μ + λI)^{-1} y .    (10)
We refer to this optimization as the NKRR problem. Although the original min-max problem has been reduced to a simpler minimization problem, the function F is not convex in general, as illustrated by Figure 1. For small values of μ, concave regions are observed. Thus, standard interior-point or gradient methods are not guaranteed to be successful at finding a global optimum.
In the following, we give an analysis which shows that under certain conditions it is however possible to guarantee the convergence of a gradient-descent type algorithm to a global minimum.
Algorithm 1 illustrates a general gradient descent algorithm for the norm-2 bounded setting which projects μ back to the feasible set M₂ after each gradient step (projecting to M₁ is very similar).
¹ To clarify the difference between similar acronyms, a PDS function corresponds to a PSD matrix [4].
[Figure: three surface plots of F(μ₁, μ₂) over [0, 1]²; left to right the vertical scale drops from roughly 195–210 to 20–21 to 2.06–2.09.]
Figure 1: Example plots for F defined over two linear base kernels generated from the first two features of the sonar dataset. From left to right, λ = 1, 10, 100. For larger values of λ it is clear that there are in fact concave regions of the function near 0.
Algorithm 1 Projection-based Gradient Descent Algorithm
Input: μ_init ∈ M₂, η ∈ [0, 1], ε > 0, K_k, k ∈ [1, p]
  μ′ ← μ_init
  repeat
    μ ← μ′
    μ′ ← μ − η ∇F(μ)
    ∀k, μ′_k ← max(0, μ′_k)
    normalize μ′ s.t. ‖μ′ − μ_0‖ = Λ
  until ‖μ′ − μ‖ < ε
In Algorithm 1 we have fixed the step size η; however, it can be adjusted at each iteration via a line-search. Furthermore, as shown later, the thresholding step that forces μ′ to be positive is unnecessary since ∇F is never positive.
Note that Algorithm 1 is simpler than the wrapper method proposed by [20]. Because of the closed
form expression (10), we do not alternate between solving for the dual variables and performing a
gradient step in the kernel parameters. We only need to optimize with respect to the kernel parameters.
3.3 Algorithm Properties
We first explicitly calculate the gradient of the objective function for the optimization problem (10).
In what follows, ◦ denotes the Hadamard (pointwise) product between matrices.
Proposition 1. For any k ∈ [1, p], the partial derivative of F: μ ↦ y^T (K_μ + λI)^{-1} y with respect to μ_k is given by

∂F/∂μ_k = −2 α^T U_k α ,    (11)

where U_k = Σ_{r=1}^p (μ_r K_r) ◦ K_k.

Proof. In view of the identity ∂/∂M Tr(y^T M^{-1} y) = −M^{-1} y y^T M^{-1}, we can write:

∂F/∂μ_k = Tr[ (∂ y^T (K_μ + λI)^{-1} y / ∂(K_μ + λI)) · (∂(K_μ + λI)/∂μ_k) ]
        = −Tr[ (K_μ + λI)^{-1} y y^T (K_μ + λI)^{-1} (∂(K_μ + λI)/∂μ_k) ]
        = −Tr[ (K_μ + λI)^{-1} y y^T (K_μ + λI)^{-1} · 2 Σ_{r=1}^p (μ_r K_r) ◦ K_k ]
        = −2 y^T (K_μ + λI)^{-1} ( Σ_{r=1}^p (μ_r K_r) ◦ K_k ) (K_μ + λI)^{-1} y = −2 α^T U_k α .
Matrix U_k just defined in Proposition 1 is always PSD, thus ∂F/∂μ_k ≤ 0 for all k ∈ [1, p] and ∇F ≤ 0. As already mentioned, this fact obliterates the thresholding step in Algorithm 1. We now provide guarantees for convergence to a global optimum. We shall assume that λ is strictly positive: λ > 0.
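Putting Eq. (9), the objective of Eq. (10), the gradient of Proposition 1, and the projection step of Algorithm 1 together gives the following minimal numpy sketch. It is our own illustration under the stated conventions (precomputed base Gram matrices, the quadratic kernel of Eq. (4), norm-2 projection), not the authors' code.

import numpy as np

def nkrr_objective_grad(base_grams, mu, y, lam):
    K_bar = np.tensordot(mu, base_grams, axes=1)
    K_mu = K_bar * K_bar                           # Eq. (4)
    alpha = np.linalg.solve(K_mu + lam * np.eye(len(y)), y)   # Eq. (9)
    # dF/dmu_k = -2 alpha^T U_k alpha with U_k = K_bar o K_k, Eq. (11)
    grad = np.array([-2.0 * alpha @ ((K_bar * Kk) @ alpha)
                     for Kk in base_grams])
    return y @ alpha, grad                         # F(mu) = y^T alpha

def learn_mu(base_grams, y, lam, mu0, Lam, eta=1.0, eps=1e-4, max_iter=100):
    mu = mu0.copy()
    for _ in range(max_iter):
        _, grad = nkrr_objective_grad(base_grams, mu, y, lam)
        mu_new = np.maximum(mu - eta * grad, 0.0)  # gradient step + clip
        # Project back onto the boundary ||mu - mu0|| = Lam.
        delta = mu_new - mu0
        norm = np.linalg.norm(delta)
        if norm > 0:
            mu_new = mu0 + Lam * delta / norm
        if np.linalg.norm(mu_new - mu) < eps:
            return mu_new
        mu = mu_new
    return mu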
Proposition 2. Any stationary point μ* of the function F: μ ↦ y^T (K_μ + λI)^{-1} y necessarily maximizes F:

F(μ*) = max_μ F(μ) = ‖y‖² / λ .    (12)

Proof. In view of the expression of the gradient given by Proposition 1, at any point μ*,

μ* · ∇F(μ*) = −α^T ( Σ_{k=1}^p μ*_k U_k ) α = −α^T K_{μ*} α .    (13)

By definition, if μ* is a stationary point, ∇F(μ*) = 0, which implies μ* · ∇F(μ*) = 0. Thus, α^T K_{μ*} α = 0, which implies K_{μ*} α = 0, that is

K_{μ*} (K_{μ*} + λI)^{-1} y = 0  ⇔  (K_{μ*} + λI − λI)(K_{μ*} + λI)^{-1} y = 0    (14)
⇔  y − λ (K_{μ*} + λI)^{-1} y = 0    (15)
⇔  (K_{μ*} + λI)^{-1} y = y / λ .    (16)

Thus, for any such stationary point μ*, F(μ*) = y^T (K_{μ*} + λI)^{-1} y = y^T y / λ, which is clearly a maximum.
We next show that there cannot be an interior stationary point, and thus any local minimum strictly
within the feasible set, unless the function is constant.
Proposition 3. If any point μ* > 0 is a stationary point of F: μ ↦ y^T (K_μ + λI)^{-1} y, then the function is necessarily constant.

Proof. Assume that μ* > 0 is a stationary point; then, by Proposition 2, F(μ*) = y^T (K_{μ*} + λI)^{-1} y = y^T y / λ, which implies that y is an eigenvector of (K_{μ*} + λI)^{-1} with eigenvalue λ^{-1}. Equivalently, y is an eigenvector of K_{μ*} + λI with eigenvalue λ, which is equivalent to y ∈ null(K_{μ*}). Thus,

y^T K_{μ*} y = Σ_{k,l=1}^p μ*_k μ*_l Σ_{r,s=1}^m y_r y_s K_k(x_r, x_s) K_l(x_r, x_s) = 0 ,    (17)

where we denote the inner sum by (*). Since the product of PDS functions is also PDS, (*) must be non-negative. Furthermore, since by assumption μ*_i > 0 for all i ∈ [1, p], it must be the case that each term (*) is equal to zero. Thus, equation (17) is equal to zero for all μ and the function F is equal to the constant ‖y‖²/λ.
The previous propositions are sufficient to show that the gradient descent algorithm will not become
stuck at a local minimum while searching the interior of a convex set M and, furthermore, they
indicate that the optimum is found at the boundary.
The following proposition gives a necessary and sufficient condition for the convexity of F on a convex region C. If the boundary region defined by ‖μ − μ_0‖ = Λ is contained in this convex region, then Algorithm 1 is guaranteed to converge to a global optimum. Let u ∈ R^p represent an arbitrary direction of μ in C. We simplify the analysis of convexity in the following derivation by separating the terms that depend on K_μ and those depending on K_u, which arise when showing the positive semi-definiteness of the Hessian, i.e. u^T (∇²F) u ≥ 0. We denote by ⊗ the Kronecker product of two matrices.

Proposition 4. The function F: μ ↦ y^T (K_μ + λI)^{-1} y is convex over the convex set C iff the following condition holds for all μ ∈ C and all u:

⟨M, N − 1̃⟩_F ≥ 0 ,    (18)
where M = [1 ⊗ vec(αα^T)^T] ◦ (K_u ⊗ K_u), N = 4 [1 ⊗ vec(V)^T] ◦ (K_μ ⊗ K_μ), and 1̃ is the matrix with zero-one entries constructed to select the terms [M]_{ijkl} where i = k and j = l, i.e. it is non-zero only in the (i, j)-th coordinate of the (i, j)-th m × m block.

Proof. For any u ∈ R^p the expression of the Hessian of F at the point μ ∈ C can be derived from that of its gradient and shown to be

u^T (∇²F) u = 4 α^T (K_μ ◦ K_u) V (K_μ ◦ K_u) α − α^T (K_u ◦ K_u) α .    (19)

Expanding each term, we obtain:

α^T (K_μ ◦ K_u) V (K_μ ◦ K_u) α = Σ_{i,j=1}^m α_i α_j Σ_{k,l=1}^m [K_μ]_{ik} [K_u]_{ik} [V]_{kl} [K_μ]_{lj} [K_u]_{lj}    (20)
= Σ_{i,j,k,l=1}^m (α_i α_j [K_u]_{ik} [K_u]_{lj}) ([V]_{kl} [K_μ]_{ik} [K_μ]_{lj})    (21)

and α^T (K_u ◦ K_u) α = Σ_{i,j=1}^m α_i α_j [K_u]_{ij} [K_u]_{ij}. Let 1 ∈ R^{m²} define the column vector of all ones and let vec(A) denote the vectorization of a matrix A by stacking its columns. Let the matrices M and N be defined as in the statement of the proposition. Then, [M]_{ijkl} = α_i α_j [K_u]_{ik} [K_u]_{lj} and [N]_{ijkl} = [V]_{kl} [K_μ]_{ik} [K_μ]_{lj}. Then, in view of the definition of 1̃, the terms of equation (19) can be represented with the Frobenius inner product,

u^T (∇²F) u = ⟨M, N⟩_F − ⟨M, 1̃⟩_F = ⟨M, N − 1̃⟩_F .

Data        m    p   lin. ℓ1    lin. base  lin. ℓ2    quad. base  quad. ℓ1   quad. ℓ2
Parkinsons  194  21  .70 ± .04  .70 ± .03  .70 ± .03  .65 ± .03   .66 ± .03  .64 ± .03
Iono        351  34  .81 ± .04  .82 ± .03  .81 ± .03  .62 ± .05   .62 ± .05  .60 ± .05
Sonar       208  60  .92 ± .03  .90 ± .02  .90 ± .04  .84 ± .03   .80 ± .04  .80 ± .04
Breast      683  9   .71 ± .02  .70 ± .02  .70 ± .02  .70 ± .02   .70 ± .01  .70 ± .01

Table 1: The square root of the mean squared error is reported for each method and several datasets.
For any $\mu \in \mathbb{R}^p$, let $K_\mu = \sum_i \mu_i K_i$ and let $\mathbf{V} = (K_\mu + \lambda I)^{-1}$. We now show that the condition of Proposition 4 is satisfied for convex regions for which $\mu$, and therefore $\Lambda$, is sufficiently large, in the case where $K_u$ and $K_\mu$ are diagonal. In that case, $\mathbf{M}$, $\mathbf{N}$ and $\mathbf{V}$ are diagonal as well and the condition of Proposition 4 can be rewritten as follows:
$$\sum_{i,j} [K_u]_{ii}[K_u]_{jj}\,\alpha_i \alpha_j\,\big(4[K_\mu]_{ii}[K_\mu]_{jj}\mathbf{V}_{ij} - \mathbf{1}_{i=j}\big) \ge 0. \qquad (22)$$
Using the fact that $\mathbf{V}$ is diagonal, this inequality can be further simplified:
$$\sum_{i=1}^m [K_u]_{ii}^2\,\alpha_i^2\,\big(4[K_\mu]_{ii}^2 \mathbf{V}_{ii} - 1\big) \ge 0. \qquad (23)$$
A sufficient condition for this inequality to hold is that each term $(4[K_\mu]_{ii}^2 \mathbf{V}_{ii} - 1)$ be non-negative, or equivalently that $4K_\mu^2 \mathbf{V} - I \succeq 0$, that is $K_\mu \succeq \frac{\lambda}{3} I$. Therefore, it suffices to select $\mu$ such that $\min_i \sum_{k=1}^p \mu_k [K_k]_{ii} \ge \lambda/3$.
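For reference, here is a minimal sketch (ours, not the paper's code) of the projection-based gradient descent discussed above, written for the simpler linear combination $K_\mu = \sum_k \mu_k K_k$; the quadratic case would substitute the corresponding gradient from Proposition 1. The step-size rule mirrors the one described in Section 4 below.

```python
import numpy as np

def project(mu, mu0, Lam):
    """Project onto the feasible set {mu >= 0} intersected with ||mu - mu0|| <= Lam."""
    mu = np.maximum(mu, 0.0)
    d = mu - mu0
    norm = np.linalg.norm(d)
    if norm > Lam:
        mu = np.maximum(mu0 + Lam * d / norm, 0.0)
    return mu

def learn_weights(Ks, y, lam=0.1, Lam=1.0, eta=1.0, iters=25):
    """Projected gradient descent on F(mu) = y^T (K_mu + lam I)^{-1} y
    for a linear combination K_mu = sum_k mu_k K_k."""
    m = len(y)
    mu0 = np.ones(len(Ks))
    mu, prev_step = mu0.copy(), np.inf
    for _ in range(iters):
        K = sum(w * Kk for w, Kk in zip(mu, Ks))
        alpha = np.linalg.solve(K + lam * np.eye(m), y)
        grad = np.array([-alpha @ Kk @ alpha for Kk in Ks])
        mu_new = project(mu - eta * grad, mu0, Lam)
        step = np.linalg.norm(mu_new - mu)
        if step > prev_step:   # step grew: shrink eta by 0.8 as in Section 4
            eta *= 0.8
        mu, prev_step = mu_new, step
    return mu
```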
4 Empirical Results
To test the advantage of learning non-linear kernel combinations, we carried out a number of experiments on publicly available datasets. The datasets are chosen to demonstrate the effectiveness
of the algorithm under a number of conditions. For general performance improvement, we chose a
number of UCI datasets frequently used in kernel learning experiments, e.g., [7,12,15]. For learning
with thousands of kernels, we chose the sentiment analysis dataset of Blitzer et al. [5]. Finally, for
learning with higher-order polynomials, we selected datasets with large number of examples such as
kin-8nm from the Delve repository. The experiments were run on a 2.33 GHz Intel Xeon Processor
with 2GB of RAM.
[Figure 2: two panels, "Kitchen" and "Electronics", plotting RMSE against the number of bigrams for the baseline, L1-regularized, and L2-regularized kernels.]
Figure 2: The performance of baseline and learned quadratic kernels (plus or minus one standard deviation) versus the number of bigrams (and kernels) used.
4.1 UCI Datasets
We first analyzed the performance of the kernels learned as quadratic combinations. For each
dataset, features were scaled to lie in the interval [0, 1]. Then, both labels and features were centered.
In the case of classification datasets, the labels were set to ±1 and the RMSE was reported. We associated a base kernel with each feature, which computes the product of this feature between different
examples. We compared both linear and quadratic combinations, each with a baseline (uniform),
norm-1-regularized and norm-2-regularized weighting using $\mu_0 = \mathbf{1}$ corresponding to the weights of the baseline kernel. The parameters $\lambda$ and $\Lambda$ were selected via 10-fold cross-validation and the error
reported was based on 30 random 50/50 splits of the entire dataset into training and test sets. For the
gradient descent algorithm, we started with $\eta = 1$ and reduced it by a factor of 0.8 if the step was found to be too large, i.e., if the difference $\|\mu' - \mu\|$ increased. Convergence was typically obtained in fewer than 25 steps, each requiring a fraction of a second ($\approx 0.05$ seconds).
The results, which are presented in Table 1, are in line with previous ones reported for learning
kernels on these datasets [7,8,12,15]. They indicate that learning quadratic combination kernels can
sometimes offer improvements and that it clearly does not degrade with respect to the performance
of the baseline kernel. The learned quadratic combination performs well, particularly on tasks where
the number of features was large compared to the number of points. This suggests that the learned
kernel is better regularized than the plain quadratic kernel and can be advantageous in scenarios where over-fitting is an issue.
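The setup above is compact to express in code. The following sketch (our own illustration; names are hypothetical) builds the per-feature product base kernels and the uniformly weighted linear and quadratic baseline combinations:

```python
import numpy as np

def base_kernels(X):
    """One base kernel per feature: K_k(x, x') = x_k * x'_k."""
    return [np.outer(X[:, k], X[:, k]) for k in range(X.shape[1])]

def combination_kernels(Ks, mu):
    K_lin = sum(w * Kk for w, Kk in zip(mu, Ks))
    # Elementwise square = sum_{k,l} mu_k mu_l (K_k o K_l): the quadratic kernel.
    return K_lin, K_lin * K_lin

rng = np.random.default_rng(0)
X = rng.random((100, 9))            # features scaled to [0, 1]
Xc = X - X.mean(axis=0)             # centered, as in the protocol above
K_lin, K_quad = combination_kernels(base_kernels(Xc), np.ones(X.shape[1]))
```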
4.2 Text Based Dataset
We next analyzed a text-based task where features are frequent word n-grams. Each base kernel
computes the product between the counts of a particular n-gram for the given pair of points. Such
kernels have a direct connection to count-based rational kernels, as described in [8]. We used the
sentiment analysis dataset of Blitzer et al. [5]. This dataset contains text-based user reviews found
for products on amazon.com. Each text review is associated with a 0-5 star rating. The product reviews fall into two categories: electronics and kitchen-wares, each with 2,000 data-points. The data
was not centered in this case since we wished to preserve the sparsity, which offers the advantage of
significantly more efficient computations. A constant feature was included to act as an offset.
For each domain, the parameters ? and ? were chosen via 10-fold cross validation on 1,000 points.
Once these parameters were fixed, the performance of each algorithm was evaluated using 20 random 50/50 splits of the entire 2,000 points into training and test sets. We used the performance of
the uniformly weighted quadratic combination kernel as a baseline, and showed the improvement
when learning the kernel with norm-1 or norm-2 regularization using $\mu_0 = \mathbf{1}$ corresponding to the
weights of the baseline kernel. As shown by Figure 2, the learned kernels significantly improved
over the baseline quadratic kernel in both the kitchen and electronics categories. For this case too,
the number of features was large in comparison with the number of points. Using 900 training points
and about 3,600 bigrams, and thus kernels, each iteration of the algorithm took approximately 25
seconds to compute with our Matlab implementation. When using norm-2 regularization, the algorithm generally converges in under 30 iterations, while norm-1 regularization requires even fewer iterations, typically fewer than 5.

[Figure 3: MSE on the kin-8nm dataset versus the training-data subsampling factor, for polynomial kernels of degrees 1 to 4, comparing standard KRR (solid lines) with norm-2 regularized kernel learning (dashed lines).]
Figure 3: Performance on the kin-8nm dataset. For all polynomials, we compared unweighted, standard KRR (solid lines) with norm-2 regularized kernel learning (dashed lines). For 4th-degree polynomials we observed a clear performance improvement, especially for a medium amount of training data (subsampling factor of 10-50). Standard deviations were typically on the order of 0.005, so the results were statistically significant.
4.3 Higher-order Polynomials
We finally investigated the performance of higher-order non-linear combinations. For these experiments, we used the kin-8nm dataset from the Delve repository. This dataset has 20,000 examples
with 8 input features. Here too, we used polynomial kernels over the features, but this time we
experimented with polynomials with degrees as high as 4. Again, we made the assumption that all
coefficients of $\mu$ are in the form of products of the $\mu_i$'s (see Section 2); thus only 8 kernel parameters
needed to be estimated.
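Under this product-form assumption, a degree-$r$ combination collapses to an elementwise power of the weighted sum, which is why only the 8 base parameters need to be estimated. A sketch (ours):

```python
import numpy as np

def poly_combination(Ks, mu, degree):
    """Combination kernel whose coefficients are products of the mu_i's:
    the sum over tuples (k_1,...,k_r) of mu_{k_1}...mu_{k_r} K_{k_1} o ... o K_{k_r}
    factorizes as the elementwise degree-th power of sum_k mu_k K_k."""
    K_lin = sum(w * Kk for w, Kk in zip(mu, Ks))
    return K_lin ** degree     # Hadamard (elementwise) power
```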
We split the data into 10,000 examples for training and 10,000 examples for testing, and, to investigate the effect of the sample size on learning kernels, subsampled the training data so that only a
fraction from 1 to 100 was used. The parameters $\lambda$ and $\Lambda$ were determined by 10-fold cross-validation on the training data, and results are reported on the test data; see Figure 3. We used norm-2 regularization with $\mu_0 = \mathbf{1}$ and compare our results with those of uniformly weighted KRR.
For lower degree polynomials, the performance was essentially the same, but for 4th degree polynomials we observed a significant performance improvement of learning kernels over the uniformly
weighted KRR, especially for a medium amount of training data (subsampling factor of 10-50). For
the sake of readability, the standard deviations are not indicated in the plot. They were typically in
the order of 0.005, so the results were statistically significant. This result corroborates the finding
on the UCI datasets that learning kernels is better regularized than plain unweighted KRR and can be advantageous in scenarios where overfitting is an issue.
5 Conclusion
We presented an analysis of the problem of learning polynomial combinations of kernels in regression. This extends learning kernel ideas and helps explore kernel combinations leading to better
performance. We proved that the global solution of the optimization problem always lies on the
boundary and gave a simple projection-based gradient descent algorithm shown empirically to converge in few iterations. We also gave a necessary and sufficient condition for that algorithm to
converge to a global optimum. Finally, we reported the results of several experiments on publicly
available datasets demonstrating the benefits of learning polynomial combinations of kernels. We are
well aware that this constitutes only a preliminary study and that the optimization problem and its solution deserve a more thorough analysis. We hope that the performance improvements
reported will further motivate such analyses.
References
[1] A. Argyriou, R. Hauser, C. Micchelli, and M. Pontil. A DC-programming algorithm for kernel
selection. In International Conference on Machine Learning, 2006.
[2] A. Argyriou, C. Micchelli, and M. Pontil. Learning convex combinations of continuously
parameterized basic kernels. In Conference on Learning Theory, 2005.
[3] F. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In Advances
in Neural Information Processing Systems, 2008.
[4] C. Berg, J. P. R. Christensen, and P. Ressel. Harmonic Analysis on Semigroups. Springer-Verlag: Berlin-New York, 1984.
[5] J. Blitzer, M. Dredze, and F. Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In Association for Computational Linguistics,
2007.
[6] B. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In
Conference on Learning Theory, 1992.
[7] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee. Choosing multiple parameters for
support vector machines. Machine Learning, 46(1-3), 2002.
[8] C. Cortes, M. Mohri, and A. Rostamizadeh. Learning sequence kernels. In Machine Learning
for Signal Processing, 2008.
[9] C. Cortes, M. Mohri, and A. Rostamizadeh. L2 regularization for learning kernels. In Uncertainty in Artificial Intelligence, 2009.
[10] C. Cortes and V. Vapnik. Support-Vector Networks. Machine Learning, 20(3), 1995.
[11] T. Jebara. Multi-task feature and kernel selection for SVMs. In International Conference on
Machine Learning, 2004.
[12] G. Lanckriet, N. Cristianini, P. Bartlett, L. E. Ghaoui, and M. Jordan. Learning the kernel
matrix with semidefinite programming. Journal of Machine Learning Research, 5, 2004.
[13] C. Micchelli and M. Pontil. Learning the kernel function via regularization. Journal of Machine
Learning Research, 6, 2005.
[14] C. S. Ong, A. Smola, and R. Williamson. Learning the kernel with hyperkernels. Journal of
Machine Learning Research, 6, 2005.
[15] A. Rakotomamonjy, F. Bach, Y. Grandvalet, and S. Canu. Simplemkl. Journal of Machine
Learning Research, 9, 2008.
[16] C. Saunders, A. Gammerman, and V. Vovk. Ridge Regression Learning Algorithm in Dual
Variables. In International Conference on Machine Learning, 1998.
[17] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press: Cambridge, MA, 2002.
[18] B. Schölkopf, A. Smola, and K. Müller. Nonlinear component analysis as a kernel eigenvalue
problem. Neural computation, 10(5), 1998.
[19] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[20] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning.
Journal of Machine Learning Research, 7, 2006.
[21] N. Srebro and S. Ben-David. Learning bounds for support vector machines with learned kernels. In Conference on Learning Theory, 2006.
[22] V. N. Vapnik. Statistical Learning Theory. Wiley-Interscience, New York, 1998.
[23] M. Varma and B. R. Babu. More generality in efficient multiple kernel learning. In International Conference on Machine Learning, 2009.
Asymptotically Optimal Regularization
in Smooth Parametric Models
Percy Liang
University of California, Berkeley
Francis Bach
INRIA - École Normale Supérieure, France
[email protected]
[email protected]
Guillaume Bouchard
Xerox Research Centre Europe, France
Michael I. Jordan
University of California, Berkeley
[email protected]
[email protected]
Abstract
Many types of regularization schemes have been employed in statistical learning,
each motivated by some assumption about the problem domain. In this paper,
we present a unified asymptotic analysis of smooth regularizers, which allows us
to see how the validity of these assumptions impacts the success of a particular
regularizer. In addition, our analysis motivates an algorithm for optimizing regularization parameters, which in turn can be analyzed within our framework. We
apply our analysis to several examples, including hybrid generative-discriminative
learning and multi-task learning.
1
Introduction
Many problems in machine learning and statistics involve the estimation of parameters from finite
data. Although empirical risk minimization has favorable limiting properties, it is well known that
this procedure can overfit on finite data. Hence, various forms of regularization have been employed
to control this overfitting. Regularizers are usually chosen based on assumptions about the problem
domain at hand. For example, in classification, we might use L2 regularization if we expect the data
to be separable with a large margin. We might regularize with a generative model if we think it is
roughly well-specified [7, 20, 15, 17]. In multi-task learning, we might penalize deviation between
parameters across tasks if we believe the tasks to be similar [3, 12, 2, 13].
In each case, we would like (1) a procedure for choosing the parameters of the regularizer (for example, its strength) and (2) an analysis that shows the amount by which regularization reduces expected
risk, expressed as a function of the compatibility between the regularizer and the problem domain.
In this paper, we address these two points by developing an asymptotic analysis of smooth regularizers for parametric problems. The key idea is to derive a second-order Taylor approximation of the
expected risk, yielding a simple and interpretable quadratic form which can be directly minimized
with respect to the regularization parameters. We first develop the general theory (Section 2) and
then apply it to some examples of common regularizers used in practice (Section 3).
2
General theory
We use uppercase letters (e.g., $L$, $R$, $Z$) to denote random variables and script letters (e.g., $\mathcal{L}$, $\mathcal{R}$, $\mathcal{I}$) to denote constant limits of random variables. For a $\theta$-parametrized differentiable function $\theta \mapsto f(\theta; \lambda)$, let $\dot f$, $\ddot f$, and $\dddot f$ denote the first, second and third derivatives of $f$ with respect to $\theta$, and let $\nabla f(\theta; \lambda)$ denote the derivative with respect to $\lambda$. Let $X_n = O_p(n^{-\alpha})$ denote a sequence of random variables for which $n^\alpha X_n$ is bounded in probability. Let $X_n \xrightarrow{P} \mathcal{X}$ denote convergence in probability. For a vector $v$, let $v^\otimes = vv^\top$. Expectation and variance operators are denoted as $\mathbf{E}[\cdot]$ and $\mathbf{V}[\cdot]$, respectively.
2.1
Setup
We are given a loss function $\ell(z; \theta)$ parametrized by $\theta \in \mathbb{R}^d$ (e.g., $\ell((x, y); \theta) = \frac12 (y - x^\top \theta)^2$ for linear regression). Our goal is to minimize the expected risk,
$$\theta^* \stackrel{\mathrm{def}}{=} \operatorname*{argmin}_{\theta \in \mathbb{R}^d} L(\theta), \qquad L(\theta) \stackrel{\mathrm{def}}{=} \mathbf{E}_{Z \sim p^*}[\ell(Z; \theta)], \qquad (1)$$
which averages the loss over some true data generating distribution $p^*(Z)$. We do not have access to $p^*$, but instead receive a sample of $n$ i.i.d. data points $Z_1, \dots, Z_n$ drawn from $p^*$. The standard unregularized estimator minimizes the empirical risk:
$$\hat\theta_n^0 \stackrel{\mathrm{def}}{=} \operatorname*{argmin}_{\theta \in \mathbb{R}^d} L_n(\theta), \qquad L_n(\theta) \stackrel{\mathrm{def}}{=} \frac{1}{n}\sum_{i=1}^n \ell(Z_i, \theta). \qquad (2)$$
Although $\hat\theta_n^0$ is consistent (that is, it converges in probability to $\theta^*$) under relatively weak conditions, it is well known that regularization can improve performance substantially for finite $n$. Let $R_n(\theta, \lambda)$ be a (possibly data-dependent) regularization function, where $\lambda \in \mathbb{R}^b$ are the regularization parameters. For linear regression, we might use squared regularization ($R_n(\theta, \lambda) = \frac{\lambda}{2n}\|\theta\|^2$), where $\lambda \in \mathbb{R}$ determines the strength. Define the regularized estimator as follows:
$$\hat\theta_n^\lambda \stackrel{\mathrm{def}}{=} \operatorname*{argmin}_{\theta \in \mathbb{R}^d} L_n(\theta) + R_n(\theta, \lambda). \qquad (3)$$
The goal of this paper is to choose good values of $\lambda$ and analyze the subsequent impact on performance. Specifically, we wish to minimize the relative risk:
$$\mathcal{L}_n(\lambda) \stackrel{\mathrm{def}}{=} \mathbf{E}_{Z_1,\dots,Z_n \sim p^*}\big[L(\hat\theta_n^\lambda) - L(\hat\theta_n^0)\big], \qquad (4)$$
which is the difference in risk (averaged over the training data) between the regularized and unregularized estimators; $\mathcal{L}_n(\lambda) < 0$ is desirable. Clearly, $\operatorname{argmin}_\lambda \mathcal{L}_n(\lambda)$ is the optimal regularization parameter. However, it is difficult to get a handle on $\mathcal{L}_n(\lambda)$. Therefore, the main focus of this work is on deriving an asymptotic expansion for $\mathcal{L}_n(\lambda)$. In this paper, we make the following assumptions:¹
Assumption 1 (Compact support). The true distribution $p^*(Z)$ has compact support.
Assumption 2 (Smooth loss). The loss function $\ell(z, \theta)$ is thrice-differentiable with respect to $\theta$. Furthermore, assume the expected Hessian of the loss function is positive definite ($\ddot{\mathcal{L}}(\theta^*) \succ 0$).²
Assumption 3 (Smooth regularizer). The regularizer $R_n(\theta, \lambda)$ is thrice-differentiable with respect to $\theta$ and differentiable with respect to $\lambda$. Assume $R_n(\theta, 0) \equiv 0$ and $R_n(\theta, \lambda) \xrightarrow{P} 0$ as $n \to \infty$.
2.2
Rate of regularization strength
Let us establish some basic properties that the regularizer $R_n(\theta, \lambda)$ should satisfy. First, a desirable property is consistency ($\hat\theta_n^\lambda \xrightarrow{P} \theta^*$), i.e., convergence to the parameters that achieve the minimum possible risk in our hypothesis class. To achieve this, it suffices (and in general also necessitates) that (1) the loss class satisfies standard uniform convergence properties [22] and (2) the regularizer has a vanishing impact in the limit of infinite data ($R_n(\theta, \lambda) \xrightarrow{P} 0$). These two properties can be verified given our assumptions.
The next question is at what rate $R_n(\theta, \lambda)$ should converge to 0. As we show in [16], $R_n(\theta, \lambda) = O_p(n^{-1})$ is the rate that minimizes the relative risk $\mathcal{L}_n$. With this rate, it is natural to consider the regularizer as a prior $p(\theta \mid \lambda) \propto \exp\{-R_n(\theta, \lambda)\}$ (and $-\ell(z, \theta)$ as the log-likelihood), in which case $\hat\theta_n^\lambda$ is the maximum a posteriori (MAP) estimate.
¹While we do not explicitly assume convexity of $\ell$ and $R_n$, the local nature of our analysis means that we are essentially working under strong convexity.
²This assumption can be weakened. If $\ddot{\mathcal{L}} \not\succ 0$, the parameters can only be estimated up to the row space of $\ddot{\mathcal{L}}$. But since we are interested in the parameters $\theta$ only in terms of $L(\theta)$, this particular non-identifiability of the parameters is irrelevant.
2.3
Asymptotic expansion
Our main result is the following theorem, which provides a simple interpretable asymptotic expression for the relative risk, characterizing the impact of regularization (see [16] for proof):
Theorem 1. Assume $R_n(\theta^*, \lambda) = O_p(n^{-1})$. The relative risk admits the following asymptotic expansion:
$$\mathcal{L}_n(\lambda) = \mathcal{L}(\lambda)\, n^{-2} + O_p(n^{-\frac52}) \qquad (5)$$
in terms of the asymptotic relative risk:
$$\mathcal{L}(\lambda) \stackrel{\mathrm{def}}{=} \tfrac12 \operatorname{tr}\{\dot{\mathcal{R}}(\lambda)^\otimes \ddot{\mathcal{L}}^{-1}\} - \operatorname{tr}\{\mathcal{I}_{\ell\ell}\, \ddot{\mathcal{L}}^{-1} \ddot{\mathcal{R}}(\lambda)\, \ddot{\mathcal{L}}^{-1}\} - 2\mathcal{B}^\top \dot{\mathcal{R}}(\lambda) + \operatorname{tr}\{\mathcal{I}_{\ell r}(\lambda)\, \ddot{\mathcal{L}}^{-1}\}, \qquad (6)$$
where $\ddot{\mathcal{L}} \stackrel{\mathrm{def}}{=} \mathbf{E}[\ddot\ell(Z; \theta^*)]$, $\mathcal{R}(\lambda) \stackrel{\mathrm{def}}{=} \lim_{n\to\infty} nR_n(\theta^*, \lambda)$ (derivatives thereof are defined analogously), $\mathcal{I}_{\ell\ell} \stackrel{\mathrm{def}}{=} \mathbf{E}[\dot\ell(Z; \theta^*)^\otimes]$, $\mathcal{I}_{\ell r}(\lambda) \stackrel{\mathrm{def}}{=} \lim_{n\to\infty} n\mathbf{E}[\dot L_n \dot R_n(\lambda)^\top]$, and $\mathcal{B} \stackrel{\mathrm{def}}{=} \lim_{n\to\infty} n\mathbf{E}[\hat\theta_n^0 - \theta^*]$.
The most important equation of this paper is (6), which captures the lowest-order terms of the relative risk defined in (4).
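Theorem 1 is easy to sanity-check by simulation. The sketch below (our own Monte Carlo experiment, not from the paper) uses the normal-means model with quadratic regularization, for which the regularized estimator has the closed form $\hat\theta_n^\lambda = \bar X/(1 + \lambda/n)$ and, anticipating Section 3.1, $\mathcal{L}(\lambda) = \frac12\lambda^2\|\theta^*\|^2 - \lambda d$; the simulated $n^2 \mathcal{L}_n(\lambda)$ should roughly match this value:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, trials, lam = 5, 100, 20000, 0.5
theta_star = rng.standard_normal(d)

X = rng.standard_normal((trials, n, d)) + theta_star
xbar = X.mean(axis=1)
# theta_hat^0 = xbar; theta_hat^lam = xbar / (1 + lam/n) for R_n = lam/(2n) ||theta||^2.
err0 = ((xbar - theta_star) ** 2).sum(axis=1)
errl = ((xbar / (1 + lam / n) - theta_star) ** 2).sum(axis=1)
rel_risk = 0.5 * (errl - err0).mean()       # Monte Carlo estimate of L_n(lam)

L_asym = 0.5 * lam**2 * (theta_star @ theta_star) - lam * d
print(n**2 * rel_risk, L_asym)              # the two should roughly agree
```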
Interpretation. The significance of Theorem 1 is in identifying the three problem-dependent contributions to the asymptotic relative risk:
Squared bias of the regularizer $\operatorname{tr}\{\dot{\mathcal{R}}(\lambda)^\otimes \ddot{\mathcal{L}}^{-1}\}$: $\dot{\mathcal{R}}(\lambda)$ is the gradient of the regularizer at the limiting parameters $\theta^*$; the squared regularizer bias is the squared norm of $\dot{\mathcal{R}}(\lambda)$ with respect to the Mahalanobis metric given by $\ddot{\mathcal{L}}$. Note that the squared regularizer bias is always positive: it always increases the risk by an amount which depends on how "wrong" the regularizer is.
Variance reduction provided by the regularizer $\operatorname{tr}\{\mathcal{I}_{\ell\ell}\,\ddot{\mathcal{L}}^{-1}\ddot{\mathcal{R}}(\lambda)\,\ddot{\mathcal{L}}^{-1}\}$: The key quantity is $\ddot{\mathcal{R}}(\lambda)$, the Hessian of the regularizer, whose impact on the relative risk is channeled through $\ddot{\mathcal{L}}^{-1}$ and $\mathcal{I}_{\ell\ell}$. For convex regularizers, $\ddot{\mathcal{R}}(\lambda) \succeq 0$, so we always improve the stability of the estimate by regularizing. Furthermore, if the loss is the negative log-likelihood and our model is well-specified (that is, $p^*(z) = \exp\{-\ell(z; \theta^*)\}$), then $\mathcal{I}_{\ell\ell} = \ddot{\mathcal{L}}$ by the first Bartlett identity [4], and the variance reduction term simplifies to $\operatorname{tr}\{\ddot{\mathcal{R}}(\lambda)\,\ddot{\mathcal{L}}^{-1}\}$.
Alignment between regularizer bias and unregularized estimator bias $2\mathcal{B}^\top\dot{\mathcal{R}}(\lambda) - \operatorname{tr}\{\mathcal{I}_{\ell r}(\lambda)\,\ddot{\mathcal{L}}^{-1}\}$: The alignment has two parts, the first of which is nonzero only for non-linear models and the second of which is nonzero only when the regularizer depends on the training data. The unregularized estimator errs in direction $\mathcal{B}$; we can reduce the risk if the regularizer bias $\dot{\mathcal{R}}(\lambda)$ helps correct for the estimator bias ($\mathcal{B}^\top\dot{\mathcal{R}}(\lambda) > 0$). The second part carries the same intuition: the risk is reduced when the random regularizer compensates for the loss ($\operatorname{tr}\{\mathcal{I}_{\ell r}(\lambda)\,\ddot{\mathcal{L}}^{-1}\} < 0$).
2.4
Oracle regularizer
The principal advantage of having a simple expression for $\mathcal{L}(\lambda)$ is that we can minimize it with respect to $\lambda$. Let $\lambda^* \stackrel{\mathrm{def}}{=} \operatorname{argmin}_\lambda \mathcal{L}(\lambda)$ and call $\hat\theta_n^{\lambda^*}$ the oracle estimator. We have a closed form for $\lambda^*$ in the important special case that the regularization parameter $\lambda$ is the strength of the regularizer:
Corollary 1 (Oracle regularization strength). If $R_n(\theta, \lambda) = \frac{\lambda}{n} r(\theta)$ for some $r(\theta)$, then
$$\lambda^* = \operatorname*{argmin}_\lambda \mathcal{L}(\lambda) = \frac{\operatorname{tr}\{\mathcal{I}_{\ell\ell}\,\ddot{\mathcal{L}}^{-1}\ddot r\,\ddot{\mathcal{L}}^{-1}\} + 2\mathcal{B}^\top \dot r}{\dot r^\top \ddot{\mathcal{L}}^{-1} \dot r} \stackrel{\mathrm{def}}{=} \frac{C_1}{C_2}, \qquad \mathcal{L}(\lambda^*) = -\frac{C_1^2}{2C_2}. \qquad (7)$$
Proof. (6) is a quadratic in $\lambda$; solve by differentiation. Compute $\mathcal{L}(\lambda^*)$ by substitution.
In general, $\lambda^*$ will depend on $\theta^*$ and hence is not computable from data; Section 2.5 will remedy this. Nevertheless, the oracle regularizer provides an upper bound on performance and some insight into the relevant quantities that make a regularizer useful.
Note $\mathcal{L}(\lambda^*) \le 0$, since optimizing $\lambda$ must be no worse than not regularizing since $\mathcal{L}(0) = 0$. But what might be surprising at first is that the oracle regularization parameter $\lambda^*$ can be negative (corresponding to "anti-regularization"). But if $\frac{\partial\mathcal{L}(\lambda)}{\partial\lambda}\big|_{\lambda=0} = -C_1 < 0$, then (positive) regularization helps ($\lambda^* > 0$ and $\mathcal{L}(\lambda) < 0$ for $0 < \lambda < 2\lambda^*$).

Estimator        Notation                                                       Relative risk
Unregularized    $\hat\theta_n^0$                                               $0$
Oracle           $\hat\theta_n^{\lambda^*}$                                     $\mathcal{L}(\lambda^*)$
Plugin           $\hat\theta_n^{\hat\lambda_n} = \hat\theta_n^{\bullet 1}$      $\mathcal{L}^\bullet(1)$
OraclePlugin     $\hat\theta_n^{\bullet\lambda^{\bullet *}}$                    $\mathcal{L}^\bullet(\lambda^{\bullet *})$

Table 1: Notation for the various estimators and their relative risks.
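Corollary 1 is directly computable once the model-specific quantities are available. A small sketch (ours, purely illustrative), checked on a toy isotropic-Gaussian case where $\ddot{\mathcal{L}} = \mathcal{I}_{\ell\ell} = I$, $\mathcal{B} = 0$, $\dot r = \theta^*$, and $\ddot r = I$:

```python
import numpy as np

def oracle_strength(L_dd, I_ll, B, r_dot, r_ddot):
    """lambda* = C1/C2 and L(lambda*) = -C1^2/(2 C2) from Corollary 1
    (deterministic r, so there is no I_lr term)."""
    Linv = np.linalg.inv(L_dd)
    C1 = np.trace(I_ll @ Linv @ r_ddot @ Linv) + 2 * B @ r_dot
    C2 = r_dot @ Linv @ r_dot
    return C1 / C2, -C1**2 / (2 * C2)

theta_star = np.array([1.0, -2.0, 0.5])
d = len(theta_star)
lam_star, risk = oracle_strength(np.eye(d), np.eye(d), np.zeros(d),
                                 theta_star, np.eye(d))
print(lam_star, d / (theta_star @ theta_star))  # both equal d / ||theta*||^2
```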
2.5
Plugin regularizer
While the oracle regularizer $R_n(\theta, \lambda^*)$ given by (7) is asymptotically optimal, $\lambda^*$ depends on the unknown $\theta^*$, so $\hat\theta_n^{\lambda^*}$ is actually not implementable. In this section, we develop the plugin regularizer as a way to avoid this dependence. The key idea is to substitute $\lambda^*$ with an estimate $\hat\lambda_n \stackrel{\mathrm{def}}{=} \lambda^* + \Delta_n$ where $\Delta_n = O_p(n^{-\frac12})$. We then use the plugin estimator $\hat\theta_n^{\hat\lambda_n} = \operatorname{argmin}_\theta L_n(\theta) + R_n(\theta, \hat\lambda_n)$.
How well does this plugin estimator work, that is, what is its relative risk $\mathbf{E}[L(\hat\theta_n^{\hat\lambda_n}) - L(\hat\theta_n^0)]$? We cannot simply write $\mathcal{L}_n(\hat\lambda_n)$ and apply Theorem 1 because $\mathcal{L}_n(\cdot)$ can only be applied to non-random arguments. However, we can still leverage existing machinery by defining a new plugin regularizer $R_n^\bullet(\theta, \lambda^\bullet) \stackrel{\mathrm{def}}{=} \lambda^\bullet R_n(\theta, \hat\lambda_n)$ with regularization parameter $\lambda^\bullet \in \mathbb{R}$. Henceforth, the superscript $\bullet$ will denote quantities concerning the plugin regularizer. The corresponding estimator $\hat\theta_n^{\bullet\lambda^\bullet} \stackrel{\mathrm{def}}{=} \operatorname{argmin}_\theta L_n(\theta) + R_n^\bullet(\theta, \lambda^\bullet)$ has relative risk $\mathcal{L}_n^\bullet(\lambda^\bullet) \stackrel{\mathrm{def}}{=} \mathbf{E}[L(\hat\theta_n^{\bullet\lambda^\bullet}) - L(\hat\theta_n^{\bullet 0})]$. The key identity is $\hat\theta_n^{\hat\lambda_n} = \hat\theta_n^{\bullet 1}$, which means the asymptotic risk of the plugin estimator $\hat\theta_n^{\hat\lambda_n}$ is simply $\mathcal{L}^\bullet(1)$.
We could try to squeeze more out of the plugin regularizer by further optimizing $\lambda^\bullet$ according to $\lambda^{\bullet *} \stackrel{\mathrm{def}}{=} \operatorname{argmin}_{\lambda^\bullet} \mathcal{L}^\bullet(\lambda^\bullet)$ and use the oracle plugin estimator $\hat\theta_n^{\bullet\lambda^{\bullet *}}$ rather than just using $\lambda^\bullet = 1$. In general, this is not useful since $\lambda^{\bullet *}$ might depend on $\theta^*$, and the whole point of plugin is to remove this dependence. However, in a fortuitous turn of events, for some linear models (Sections 3.1 and 3.4), $\lambda^{\bullet *}$ is in fact independent of $\theta^*$, and so $\hat\theta_n^{\bullet\lambda^{\bullet *}}$ is actually implementable. Table 1 summarizes all the estimators we have discussed.
The following theorem relates the risks of all estimators we have considered (see [16] for the proof):
Theorem 2 (Relative risk of plugin). The relative risk of the plugin estimator is $\mathcal{L}^\bullet(1) = \mathcal{L}(\lambda^*) + \mathcal{E}$, where $\mathcal{E} \stackrel{\mathrm{def}}{=} \lim_{n\to\infty} n\mathbf{E}[\operatorname{tr}\{\dot L_n (\nabla\dot R_n(\lambda^*)\Delta_n)^\top \ddot{\mathcal{L}}^{-1}\}]$. If $R_n(\lambda)$ is linear in $\lambda$, then the relative risk of the oracle plugin estimator is $\mathcal{L}^\bullet(\lambda^{\bullet *}) = \mathcal{L}^\bullet(1) + \frac{\mathcal{E}^2}{4\mathcal{L}(\lambda^*)}$ with $\lambda^{\bullet *} = 1 + \frac{\mathcal{E}}{2\mathcal{L}(\lambda^*)}$.
Note that the sign of $\mathcal{E}$ depends on the nature of the error $\Delta_n$, so Plugin could be either better or worse than Oracle. On the other hand, OraclePlugin is always better than Plugin. We can get a simpler expression for $\mathcal{E}$ if we know more about $\Delta_n$ (see [16] for the proof):
Theorem 3. Suppose $\lambda^* = f(\theta^*)$ for some differentiable $f\colon \mathbb{R}^d \to \mathbb{R}^b$. If $\hat\lambda_n = f(\hat\theta_n^0)$, then the results of Theorem 2 hold with $\mathcal{E} = -\operatorname{tr}\{\mathcal{I}_{\ell\ell}\,\ddot{\mathcal{L}}^{-1}\,\nabla\dot{\mathcal{R}}(\lambda^*)\dot f\,\ddot{\mathcal{L}}^{-1}\}$.
3
Examples
In this section, we apply our results from Section 2 to specific problems. Having made all the
asymptotic derivations in the general setting, we now only need to make a few straightforward
calculations to obtain the asymptotic relative risks and regularization parameters for a given problem.
We first explore two classical examples from statistics (Sections 3.1 and 3.2) to get some intuition
for the theory. Then we consider two important examples in machine learning (Sections 3.3 and 3.4).
3.1
Estimation of normal means
Assume that data are generated from a multivariate normal distribution with $d$ independent components ($p^* = \mathcal{N}(\theta^*, I)$). We use the negative log-likelihood as the loss function: $\ell(x; \theta) = \frac12\|x - \theta\|^2$, so the model is well-specified.
In his seminal 1961 paper [14], Stein showed that, surprisingly, the standard empirical risk minimizer $\hat\theta_n^0 = \bar X \stackrel{\mathrm{def}}{=} \frac1n\sum_{i=1}^n X_i$ is beaten by the James-Stein estimator $\hat\theta_n^{\mathrm{JS}} \stackrel{\mathrm{def}}{=} \bar X\big(1 - \frac{d-2}{n\|\bar X\|^2}\big)$ in the sense that $\mathbf{E}[L(\hat\theta_n^{\mathrm{JS}})] < \mathbf{E}[L(\hat\theta_n^0)]$ for all $n$ and $\theta^*$ if $d > 2$. We will show that the James-Stein estimator is essentially equivalent to OraclePlugin with quadratic regularization ($r(\theta) = \frac12\|\theta\|^2$).
First compute $\dot L_n = \theta^* - \bar X$, $\ddot{\mathcal{L}} = I$, $\mathcal{B} = 0$, $\dot r = \theta^*$, and $\ddot r = I$. By (7), the oracle regularization weight is $\lambda^* = \frac{d}{\|\theta^*\|^2}$, which yields a relative risk of $\mathcal{L}(\lambda^*) = -\frac{d^2}{2\|\theta^*\|^2}$.
Now let us derive Plugin (Section 2.5). We have $f(\theta) = \frac{d}{\|\theta\|^2}$ and $\dot f(\theta) = \frac{-2d\theta}{\|\theta\|^4}$. By Theorems 2 and 3, $\mathcal{E} = \frac{2d}{\|\theta^*\|^2}$ and $\mathcal{L}^\bullet(1) = -\frac{d(d-4)}{2\|\theta^*\|^2}$. Note that since $\mathcal{E} > 0$, Plugin is always (asymptotically) worse than Oracle but better than Unregularized if $d > 4$.
To get OraclePlugin, compute $\lambda^{\bullet *} = 1 - \frac2d$ (note that this doesn't depend on $\theta^*$), which results in $R_n^\bullet(\theta) = \frac{d-2}{2n\|\bar X\|^2}\|\theta\|^2$. By Theorem 2, its relative risk is $\mathcal{L}^\bullet(\lambda^{\bullet *}) = -\frac{(d-2)^2}{2\|\theta^*\|^2}$, which offers a small improvement over Plugin (and is superior to Unregularized when $d > 2$).
Note that OraclePlugin and Plugin are adaptive: we regularize more or less depending on whether our preliminary estimate $\bar X$ is small or large, respectively. By simple algebra, OraclePlugin has a closed form $\hat\theta_n^{\bullet\lambda^{\bullet *}} = \bar X\big(1 - \frac{d-2}{n\|\bar X\|^2 + d - 2}\big)$, which differs from James-Stein by a very small amount: $\hat\theta_n^{\bullet\lambda^{\bullet *}} - \hat\theta_n^{\mathrm{JS}} = O_p(n^{-\frac52})$. OraclePlugin has the added benefit that it always shrinks towards zero by an amount between 0 and 1, whereas James-Stein can overshoot. Empirically, we found that OraclePlugin generally had a lower expected risk than James-Stein when $\|\theta^*\|$ is large, but James-Stein was better when $\|\theta^*\| \le 1$.
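These closed forms are straightforward to verify by simulation. The sketch below (our own, not the paper's experiment) compares the unregularized mean, James-Stein, and the OraclePlugin shrinkage on synthetic normal-means data:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, trials = 8, 50, 20000
theta_star = rng.standard_normal(d)

X = rng.standard_normal((trials, n, d)) + theta_star
xbar = X.mean(axis=1)
nrm2 = (xbar ** 2).sum(axis=1)

shrink_js = 1 - (d - 2) / (n * nrm2)           # James-Stein; can overshoot below 0
shrink_op = 1 - (d - 2) / (n * nrm2 + d - 2)   # OraclePlugin; always in [0, 1]

def excess_risk(est):
    # Excess risk 0.5 E||theta_hat - theta*||^2 (the constant d/2 cancels).
    return 0.5 * ((est - theta_star) ** 2).sum(axis=1).mean()

print(excess_risk(xbar),
      excess_risk(xbar * shrink_js[:, None]),
      excess_risk(xbar * shrink_op[:, None]))
```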
3.2
Binomial estimation
Consider the estimation of $\theta$, the log-odds of a coin coming up heads. We use the negative log-likelihood loss $\ell(x; \theta) = -x\theta + \log(1 + e^\theta)$, where $x \in \{0, 1\}$ is the outcome of the coin. This
example serves to provide intuition for the bias B appearing in (6), which is typically ignored in
first-order asymptotics or is zero (for linear models).
Consider a regularizer $r(\theta) = \frac12(\theta + 2\log(1 + e^{-\theta}))$, which corresponds to a $\mathrm{Beta}(\frac\lambda2, \frac\lambda2)$ prior.
Choosing ? has been studied extensively in statistics. Some common choices are the Haldane prior
(? = 0), the reference (Jeffreys) prior (? = 1), the uniform prior (? = 2), and Laplace smoothing
(? = 4). We will choose ? to minimize expected risk adaptively based on data.
Define $\mu \stackrel{\mathrm{def}}{=} \frac{1}{1+e^{-\theta^*}}$, $v \stackrel{\mathrm{def}}{=} \mu(1-\mu)$, and $b \stackrel{\mathrm{def}}{=} \mu - \frac12$. Then compute $\ddot{\mathcal{L}} = v$, $\dddot{\mathcal{L}} = -2vb$, $\dot r = b$, $\ddot r = v$, $\mathcal{B} = v^{-1}b$. Oracle corresponds to $\lambda^* = 2 + \frac{v}{b^2}$. Note that $\lambda^* > 0$, so again (positive) regularization always helps.
We can compute the difference between Oracle and Plugin: $\mathcal{E} = 2 - \frac{2v}{b^2}$. If $|b| > \frac{\sqrt2}{4}$, $\mathcal{E} > 0$, which means that Plugin is worse; otherwise Plugin is actually better. Even when Plugin is worse than Oracle, Plugin is still better than Unregularized, which can be verified by checking that $\mathcal{L}^\bullet(1) = -\frac52 vb^{-2} - 2v^{-1}b^2 < 0$ for all $\theta^*$.
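Operationally, the plugin rule is an adaptive pseudocount: estimate $\mu$ from data, map it through $f = 2 + v/b^2$, and smooth with the resulting $\mathrm{Beta}(\hat\lambda/2, \hat\lambda/2)$ prior. A sketch (ours; the clipping guards are our additions for numerical safety, and we report the smoothed probability for illustration):

```python
import numpy as np

def plugin_beta_smoothing(x):
    """Adaptive Beta(lam/2, lam/2) smoothing with lam = 2 + v/b^2
    evaluated at the empirical mean (the plugin rule of Section 2.5)."""
    x = np.asarray(x, dtype=float)
    mu = np.clip(x.mean(), 1e-6, 1 - 1e-6)
    v, b = mu * (1 - mu), mu - 0.5
    lam = 2.0 + v / max(b * b, 1e-12)   # b ~ 0 gives a very large lam
    smoothed = (x.sum() + lam / 2) / (len(x) + lam)
    return lam, smoothed

x = (np.random.default_rng(0).random(30) < 0.7).astype(float)
print(plugin_beta_smoothing(x))
```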
3.3
Hybrid generative-discriminative learning
In prediction tasks, we wish to learn a mapping from some input x ? X to an output y ? Y. A
common approach is to use probabilistic models defined by exponential families, which is defined
by a vector of sufficient statistics (features) ?(x, y) ? Rd and an accompanying vector of parameters
? ? Rd . These features can be used to define a generative model (8) or a discriminative model (9):
$$p_\theta(x, y) = \exp\{\phi(x, y)^\top\theta - A(\theta)\}, \qquad A(\theta) = \log \int_{\mathcal{X}}\!\int_{\mathcal{Y}} \exp\{\phi(x, y)^\top\theta\}\,dy\,dx, \qquad (8)$$
$$p_\theta(y \mid x) = \exp\{\phi(x, y)^\top\theta - A(\theta; x)\}, \qquad A(\theta; x) = \log \int_{\mathcal{Y}} \exp\{\phi(x, y)^\top\theta\}\,dy. \qquad (9)$$
Misspecification   $\operatorname{tr}\{\mathcal{I}_{\ell\ell} v_x^{-1} v v_x^{-1}\}$   $2\mathcal{B}^\top(\mu - \mu_{xy})$   $\operatorname{tr}\{(\mu - \mu_{xy})^\otimes v_x^{-1}\}$   $\lambda^*$   $\mathcal{L}(\lambda^*)$
0%                 5        0         0           $\infty$   -0.65
5%                 5.38     -0.073    0.00098     310        -48
50%                13.8     -1.0      0.034       230        -808

Table 2: The oracle regularizer for the hybrid generative-discriminative estimator. As misspecification increases, we regularize less, but the relative risk is reduced more (due to more variance reduction).
Given these definitions, we can either use a generative estimator $\hat\theta_n^{\mathrm{gen}} \stackrel{\mathrm{def}}{=} \operatorname{argmin}_\theta G_n(\theta)$, where $G_n(\theta) \stackrel{\mathrm{def}}{=} -\frac1n\sum_{i=1}^n \log p_\theta(x_i, y_i)$, or a discriminative estimator $\hat\theta_n^{\mathrm{dis}} \stackrel{\mathrm{def}}{=} \operatorname{argmin}_\theta D_n(\theta)$, where $D_n(\theta) \stackrel{\mathrm{def}}{=} -\frac1n\sum_{i=1}^n \log p_\theta(y_i \mid x_i)$.
There has been a flurry of work on combining generative and discriminative learning [7, 20, 15, 18, 17]. [17] showed that if the generative model is well-specified ($p^*(x, y) = p_{\theta^*}(x, y)$), then the generative estimator is better in the sense that $L(\hat\theta_n^{\mathrm{gen}}) \le L(\hat\theta_n^{\mathrm{dis}}) - \frac{c}{n} + O_p(n^{-\frac32})$ for some $c \ge 0$; if the model is misspecified, the discriminative estimator is asymptotically better. To create a hybrid estimator, let us treat the discriminative and generative objectives as the empirical risk and the regularizer, respectively, so $\ell((x, y); \theta) = -\log p_\theta(y \mid x)$, so that $L_n = D_n$ and $R_n(\theta, \lambda) = \frac{\lambda}{n} G_n(\theta)$. As $n \to \infty$, the discriminative objective dominates as desired. Our approach generalizes the analysis of [6], which applies only to unbiased estimators for conditionally well-specified models.
By moment-generating properties of the exponential family, we arrive at the following quantities (write $\phi$ for $\phi(X, Y)$): $\ddot{\mathcal{L}} = v_x \stackrel{\mathrm{def}}{=} \mathbf{E}_{p^*(X)}[\mathbf{V}_{p_{\theta^*}(Y|X)}[\phi \mid X]]$, $\dot{\mathcal{R}}(\lambda) = \lambda(\mu - \mu_{xy}) \stackrel{\mathrm{def}}{=} \lambda(\mathbf{E}_{p_{\theta^*}(X,Y)}[\phi] - \mathbf{E}_{p^*(X,Y)}[\phi])$, and $\ddot{\mathcal{R}}(\lambda) = \lambda v \stackrel{\mathrm{def}}{=} \lambda\mathbf{V}_{p_{\theta^*}(X,Y)}[\phi]$. The oracle regularization parameter is then
$$\lambda^* = \frac{\operatorname{tr}\{\mathcal{I}_{\ell\ell}\, v_x^{-1} v v_x^{-1}\} + 2\mathcal{B}^\top(\mu - \mu_{xy}) - \operatorname{tr}\{\mathcal{I}_{\ell r}\, v_x^{-1}\}}{\operatorname{tr}\{(\mu - \mu_{xy})^\otimes v_x^{-1}\}}. \qquad (10)$$
The sign and magnitude of $\lambda^*$ provide insight into how generative regularization improves prediction as a function of the model and problem: specifically, a large positive $\lambda^*$ suggests regularization is helpful. To simplify, assume that the discriminative model is well-specified, that is, $p^*(y \mid x) = p_{\theta^*}(y \mid x)$ (note that the generative model could still be misspecified). In this case, $\mathcal{I}_{\ell\ell} = \ddot{\mathcal{L}}$, $\mathcal{I}_{\ell r} = v_x$, and so the numerator reduces to $\operatorname{tr}\{(v - v_x)v_x^{-1}\} + 2\mathcal{B}^\top(\mu - \mu_{xy})$.
Since $v \succeq v_x$ (the key fact used in [17]), the variance reduction (plus the random alignment term from $\mathcal{I}_{\ell r}$) is always non-negative, with magnitude equal to the fraction of missing information provided by the generative model. There is still the non-random alignment term $2\mathcal{B}^\top(\mu - \mu_{xy})$, whose sign depends on the problem. Finally, the denominator (always positive) affects the optimal magnitude of the regularization. If the generative model is almost well-specified, $\mu$ will be close to $\mu_{xy}$, and the regularizer should be trusted more (large $\lambda^*$). Since our analysis is local, misspecification (how much $p_{\theta^*}(x, y)$ deviates from $p^*(x, y)$) is measured by a Mahalanobis distance between $\mu$ and $\mu_{xy}$, rather than something more stringent and global like KL-divergence.
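For intuition, the hybrid criterion is simply the discriminative loss plus a generatively motivated penalty. Below is a minimal, unoptimized sketch (ours; all function names are hypothetical) for a finite input space $\mathcal{X} = \{0,1\}^k$, where the log-partition function $A(\theta)$ can be computed by enumerating all configurations:

```python
import numpy as np
from itertools import product

k = 5
X_ALL = np.array(list(product([0, 1], repeat=k)), dtype=float)  # all 2^k inputs

def phi(x, y):
    # Features (I[y=0] x^T, I[y=1] x^T)^T, as in the empirical example below.
    return np.concatenate([(1 - y) * x, y * x])

def hybrid_objective(theta, X, Y, lam):
    """L_n(theta) + R_n(theta, lam) = D_n(theta) + (lam/n) G_n(theta).
    X: n x k binary inputs; Y: integer array of 0/1 labels."""
    n = len(X)
    s = np.array([[phi(x, y) @ theta for y in (0, 1)] for x in X])
    # D_n(theta) = -(1/n) sum_i log p_theta(y_i | x_i).
    D = -np.mean(s[np.arange(n), Y] - np.logaddexp(s[:, 0], s[:, 1]))
    # G_n(theta) = -(1/n) sum_i log p_theta(x_i, y_i), with the global
    # log-partition A(theta) obtained by enumerating all (x, y) pairs.
    joint = np.array([[phi(x, y) @ theta for y in (0, 1)] for x in X_ALL])
    A = np.logaddexp.reduce(joint.ravel())
    G = -np.mean(s[np.arange(n), Y] - A)
    return D + (lam / n) * G
```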
An empirical example. To provide some concrete intuition, we investigated the oracle regularizer for a synthetic binary classification problem of predicting $y \in \{0, 1\}$ from $x \in \{0, 1\}^k$. Using features $\phi(x, y) = (\mathbf{I}[y = 0]x^\top, \mathbf{I}[y = 1]x^\top)^\top$ defines the corresponding generative (Naive Bayes) and discriminative (logistic regression) estimators. We set $k = 5$, $\theta^* = (\frac1{10}, \dots, \frac1{10}, \frac3{10}, \dots, \frac3{10})^\top$, and $p^*(x, y) = (1 - \epsilon)\,p_{\theta^*}(x, y) + \epsilon\, p_{\theta^*}(y)\, p_{\theta^*}(x_1 \mid y)\, \mathbf{I}[x_1 = \dots = x_k]$. The amount of misspecification is controlled by $0 \le \epsilon \le 1$, the fraction of examples whose features are perfectly correlated.
Table 2 shows how the oracle regularizer changes with $\epsilon$. As $\epsilon$ increases, $\lambda^*$ decreases (we regularize less) as expected. But perhaps surprisingly, the relative risk is reduced with more misspecification; this is due to the fact that the variance reduction term increases and has a quadratic effect on $\mathcal{L}(\lambda^*)$.
Figure 1(a) shows the relative risk $\mathcal{L}_n(\lambda)$ for various values of $\lambda$. The vertical line corresponds to $\lambda^*$, which was computed numerically by sampling. Note that the minimum of the curves ($\operatorname{argmin}_\lambda \mathcal{L}_n(\lambda)$), the desired quantity, is quite close to $\lambda^*$ and approaches $\lambda^*$ as $n$ increases, which empirically justifies our asymptotic approximations.
Unlabeled data. One of the main advantages of having a generative model is that we can leverage unlabeled examples by marginalizing out their hidden outputs. Specifically, suppose we have $m$ i.i.d. unlabeled examples $X_{n+1}, \dots, X_{n+m} \sim p^*(x)$, with $m \to \infty$ as $n \to \infty$. Define the unlabeled regularizer as $R_n(\theta, \lambda) = -\frac{\lambda}{nm}\sum_{i=1}^m \log p_\theta(X_{n+i})$.
We can compute $\dot{\mathcal{R}} = \mu - \mu_{xy}$ using the stationarity conditions of the loss function at $\theta^*$. Also, $\ddot{\mathcal{R}} = v - v_x$, and $\mathcal{I}_{\ell r} = 0$ (the regularizer doesn't depend on the labeled data). If the model is conditionally well-specified, we can verify that the oracle regularization parameter $\lambda^*$ is the same as if we had regularized with $G_n$. This equivalence suggests that the dominant concern asymptotically is developing an adequate generative model with small bias, and not exactly how it is used in learning.
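Continuing the earlier sketch, the unlabeled regularizer only requires the marginal log-likelihood, again computable here by enumeration (ours, illustrative; reuses `phi` and `X_ALL` from the previous block):

```python
def unlabeled_regularizer(theta, X_unlab, lam, n):
    """R_n(theta, lam) = -(lam / (n m)) sum_i log p_theta(x_i), where
    log p_theta(x) = logsumexp_y phi(x, y)^T theta - A(theta)."""
    s = np.array([[phi(x, y) @ theta for y in (0, 1)] for x in X_unlab])
    joint = np.array([[phi(x, y) @ theta for y in (0, 1)] for x in X_ALL])
    A = np.logaddexp.reduce(joint.ravel())
    log_px = np.logaddexp(s[:, 0], s[:, 1]) - A
    return -(lam / (n * len(X_unlab))) * log_px.sum()
```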
3.4
Multi-task regression
The intuition behind multi-task learning is to share statistical strength between tasks [3, 12, 2, 13].
Suppose we have $K$ regression tasks. For each task $k = 1, \dots, K$, we generate each data point $i = 1, \dots, n$ independently as follows: $X_i^k \sim p^*(X_i^k)$ and $Y_i^k \sim \mathcal{N}(X_i^{k\top}\theta_*^k, 1)$. We can treat this as a single-task problem by concatenating the vectors for all the tasks: $X_i = (X_i^{1\top}, \dots, X_i^{K\top})^\top \in \mathbb{R}^{Kd}$, $Y = (Y^1, \dots, Y^K)^\top \in \mathbb{R}^K$, and $\theta = (\theta^{1\top}, \dots, \theta^{K\top})^\top \in \mathbb{R}^{Kd}$. It will also be useful to represent $\theta \in \mathbb{R}^{Kd}$ by the matrix $\Theta = (\theta^1, \dots, \theta^K) \in \mathbb{R}^{d \times K}$. The loss function is $\ell((x, y), \theta) = \frac12\sum_{k=1}^K (y^k - x^{k\top}\theta^k)^2$. Assume the model is conditionally well-specified.
We would like to be flexible in case some tasks are more related than others, so let us define a positive definite matrix $\Lambda \in \mathbb{R}^{K \times K}$ of inter-task affinities and use the quadratic regularizer: $r(\theta, \Lambda) = \frac12\theta^\top(\Lambda \otimes I_d)\theta$. For simplicity, assume $\mathbf{E}[X_i^{k\otimes}] = I_d$, which implies that $\mathcal{I}_{\ell\ell} = \ddot{\mathcal{L}} = I_{Kd}$.
Most of the computations that follow parallel those of Section 3.1, only extended to matrices. Substituting the relevant quantities into (6) yields the relative risk: $\mathcal{L}(\Lambda) = \frac12\operatorname{tr}\{\Lambda^2\Theta_*^\top\Theta_*\} - d\operatorname{tr}\{\Lambda\}$. Optimizing with respect to $\Lambda$ produces the oracle regularization parameter $\Lambda^* = d(\Theta_*^\top\Theta_*)^{-1}$ and its associated relative risk $\mathcal{L}(\Lambda^*) = -\frac12 d^2\operatorname{tr}\{(\Theta_*^\top\Theta_*)^{-1}\}$.
To analyze Plugin, first compute $\dot f = -d(\Theta_*^\top\Theta_*)^{-1}(2\Theta_*^\top(\cdot))(\Theta_*^\top\Theta_*)^{-1}$; we find that Plugin increases the asymptotic risk by $\mathcal{E} = 2d\operatorname{tr}\{(\Theta_*^\top\Theta_*)^{-1}\}$. However, the relative risk of Plugin is still favorable when $d > 4$, as $\mathcal{L}^\bullet(1) = -\frac12 d(d - 4)\operatorname{tr}\{(\Theta_*^\top\Theta_*)^{-1}\} < 0$ for $d > 4$.
We can do slightly better using OraclePlugin ($\lambda^{\bullet *} = 1 - \frac2d$), which results in a relative risk of $\mathcal{L}^\bullet(\lambda^{\bullet *}) = -\frac12(d - 2)^2\operatorname{tr}\{(\Theta_*^\top\Theta_*)^{-1}\}$. For comparison, if we had solved the $K$ regression tasks completely independently with $K$ independent regularization parameters, our relative risk would have been $-\frac12(d - 2)^2\sum_{k=1}^K \|\theta_*^k\|^{-2}$ (following similar but simpler computations).
We now compare joint versus independent regularization. Let $A = \Theta_*^\top\Theta_*$ with eigendecomposition $A = UDU^\top$. The difference in relative risks between joint and independent regularization is $\Delta = -\frac12(d - 2)^2\big(\sum_k D_{kk}^{-1} - \sum_k A_{kk}^{-1}\big)$ ($\Delta < 0$ means joint regularization is better). The gap between joint and independent regularization is large when the tasks are non-trivial but similar (the $\theta_*^k$'s are close, but $\|\theta_*^k\|$ is large). In that case, $D_{kk}^{-1}$ is quite large for $k > 1$, but all the $A_{kk}^{-1}$'s are small.
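The joint-versus-independent comparison above is a few lines of linear algebra. The sketch below (ours) computes the oracle affinity matrix and both OraclePlugin-style relative risks for a bundle of similar, non-trivial tasks:

```python
import numpy as np

def multitask_risks(Theta, d):
    """Oracle Lambda* = d (Theta^T Theta)^{-1} and the relative risks of
    joint vs. independent regularization (Section 3.4)."""
    A = Theta.T @ Theta                     # K x K matrix of task affinities
    Lam_star = d * np.linalg.inv(A)
    joint = -0.5 * (d - 2) ** 2 * np.trace(np.linalg.inv(A))
    indep = -0.5 * (d - 2) ** 2 * np.sum(1.0 / np.diag(A))
    return Lam_star, joint, indep

rng = np.random.default_rng(0)
d, K = 10, 3
base = 3.0 * rng.standard_normal(d)
# Similar, non-trivial tasks: large norms, nearly parallel directions.
Theta = np.stack([base + 0.1 * rng.standard_normal(d) for _ in range(K)], axis=1)
Lam_star, joint, indep = multitask_risks(Theta, d)
print(joint, indep)    # joint is much more negative: similar tasks help
```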
MHC-I binding prediction We evaluated our multitask regularization method on the IEDB
MHC-I peptide binding dataset created by [19] and used by [13]. The goal here is to predict the
binding affinity (represented by log IC50 ) of a MHC-I molecule given its amino-acid sequence (represented by a vector of binary features, reduced to a 20-dimensional real vector using SVD). We
created five regression tasks corresponding to the five most common MHC-I molecules.
We compared four estimators: Unregularized, DiagCV ($\Lambda = cI$), UniformCV (using the same task-affinity for all pairs of tasks, with $\Lambda = c(\mathbf{1}\mathbf{1}^\top + 10^{-5}I)$), and PluginCV ($\Lambda = cd(\hat\Theta_n^\top\hat\Theta_n)^{-1}$), where $c$ was chosen by cross-validation.³ Figure 1 shows the results averaged over
³We performed three-fold cross-validation to select $c$ from 21 candidates in $[10^{-5}, 10^5]$.
[Figure 1: panel (a) plots the relative risk $\mathcal{L}_n(\lambda)$ against the regularization strength for $n = 75, 100, 150$, with the curve minima and the oracle value marked; panel (b) plots test risk against the number of training points $n$ for the four multi-task estimators.]
Figure 1: (a) Relative risk ($\mathcal{L}_n(\lambda)$) of the hybrid generative/discriminative estimator for various $\lambda$; the $\lambda$ attaining the minimum of $\mathcal{L}_n(\lambda)$ is close to the oracle $\lambda^*$ (the vertical line). (b) On the MHC-I binding prediction task, test risk for the four multi-task estimators; PluginCV (estimating all pairwise task affinities using Plugin and cross-validating the strength) works best.
30 independent train/test splits. Multi-task regularization actually performs worse than independent learning (DiagCV) if we assume all tasks are equally related (UniformCV). By learning the full matrix of task affinities (PluginCV), we obtain the best results. Note that setting the $O(K^2)$ entries of $\Lambda$ via cross-validation is not computationally feasible, though other approaches are possible [13].
4
Related work and discussion
The subject of choosing regularization parameters has received much attention. Much of the learning theory literature focuses on risk bounds, which approximate the expected risk ($L(\hat\theta_n^\lambda)$) with upper bounds. Our analysis provides a different type of approximation, one that is exact in the first few terms of the expansion. Though we cannot make a precise statement about the risk for any given $n$, exact control over the first few terms offers other advantages, e.g., the ability to compare estimators.
To elaborate further, risk bounds are generally based on the complexity of the hypothesis class, whereas our analysis is based on the variance of the estimator. Vanilla uniform convergence bounds yield worst-case analyses, whereas our asymptotic analysis is tailored to a particular problem ($p^*$ and $\theta^*$) and algorithm (estimator). Localization techniques [5], regret analyses [9], and stability-based bounds [8] all allow for some degree of problem- and algorithm-dependence. As bounds, however, they necessarily have some looseness, whereas our analysis provides exact constants, at least the ones associated with the lowest-order terms.
Asymptotics has a rich tradition in statistics. In fact, our methodology of performing a Taylor
expansion of the risk is reminiscent of AIC [1]. However, our aim is different: AIC is intended
for model selection, whereas we are interested in optimizing regularization parameters. The Stein
unbiased risk estimate (SURE) is another method of estimating the expected risk for linear models
[21], with generalizations to non-linear models [11].
In practice, cross-validation procedures [10] are quite effective. However, they are only feasible
when the number of hyperparameters is very small, whereas our approach can optimize many hyperparameters. Section 3.4 showed that combining the two approaches can be effective.
To conclude, we have developed a general asymptotic framework for analyzing regularization, along
with an efficient procedure for choosing regularization parameters. Although we are so far restricted
to parametric problems with smooth losses and regularizers, we think that these tools provide a
complementary perspective on analyzing learning algorithms to that of risk bounds, deepening our
understanding of regularization.
References
[1] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19:716-723, 1974.
[2] A. Argyriou, T. Evgeniou, and M. Pontil. Multi-task feature learning. In Advances in Neural Information Processing Systems (NIPS), pages 41-48, 2007.
[3] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. Journal of Machine Learning Research, 4:83-99, 2003.
[4] M. S. Bartlett. Approximate confidence intervals. II. More than one unknown parameter. Biometrika, 40:306-317, 1953.
[5] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Annals of Statistics, 33(4):1497-1537, 2005.
[6] G. Bouchard. Bias-variance tradeoff in hybrid generative-discriminative models. In Sixth International Conference on Machine Learning and Applications (ICMLA), pages 124-129, 2007.
[7] G. Bouchard and B. Triggs. The trade-off between generative and discriminative classifiers. In International Conference on Computational Statistics, pages 721-728, 2004.
[8] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499-526, 2002.
[9] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[10] P. Craven and G. Wahba. Smoothing noisy data with spline functions: estimating the correct degree of smoothing by the method of generalized cross-validation. Numerische Mathematik, 31(4):377-403, 1978.
[11] Y. C. Eldar. Generalized SURE for exponential families: Applications to regularization. IEEE Transactions on Signal Processing, 57(2):471-481, 2009.
[12] T. Evgeniou, C. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615-637, 2005.
[13] L. Jacob, F. Bach, and J. Vert. Clustered multi-task learning: A convex formulation. In Advances in Neural Information Processing Systems (NIPS), pages 745-752, 2009.
[14] W. James and C. Stein. Estimation with quadratic loss. In Fourth Berkeley Symposium in Mathematics, Statistics, and Probability, pages 361-380, 1961.
[15] J. A. Lasserre, C. M. Bishop, and T. P. Minka. Principled hybrids of generative and discriminative models. In Computer Vision and Pattern Recognition (CVPR), pages 87-94, 2006.
[16] P. Liang, F. Bach, G. Bouchard, and M. I. Jordan. Asymptotically optimal regularization in smooth parametric models. Technical report, ArXiv, 2010.
[17] P. Liang and M. I. Jordan. An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators. In International Conference on Machine Learning (ICML), 2008.
[18] A. McCallum, C. Pal, G. Druck, and X. Wang. Multi-conditional learning: Generative/discriminative training for clustering and classification. In Association for the Advancement of Artificial Intelligence (AAAI), 2006.
[19] B. Peters, H. Bui, S. Frankild, M. Nielson, C. Lundegaard, E. Kostem, D. Basch, K. Lamberth, M. Harndahl, W. Fleri, S. S. Wilson, J. Sidney, O. Lund, S. Buus, and A. Sette. A community resource benchmarking predictions of peptide binding to MHC-I molecules. PLoS Computational Biology, 2, 2006.
[20] R. Raina, Y. Shen, A. Ng, and A. McCallum. Classification with hybrid generative/discriminative models. In Advances in Neural Information Processing Systems (NIPS), 2004.
[21] C. M. Stein. Estimation of the mean of a multivariate normal distribution. Annals of Statistics, 9(6):1135-1151, 1981.
[22] A. W. van der Vaart. Asymptotic Statistics. Cambridge University Press, 1998.
| 3693 |@word multitask:2 norm:1 triggs:1 d2:2 jacob:1 elisseeff:1 tr:21 fortuitous:1 carry:1 moment:1 reduction:5 substitution:1 ecole:1 existing:1 com:1 surprising:1 must:1 reminiscent:1 subsequent:1 dydx:1 xrce:1 remove:1 interpretable:2 n0:4 stationary:1 generative:24 intelligence:1 advancement:1 xk:1 mccallum:2 vanishing:1 provides:5 simpler:2 five:2 dn:3 c2:2 along:1 beta:1 symposium:1 sidney:1 pairwise:1 inter:1 expected:9 roughly:1 multi:9 provided:2 estimating:3 bounded:1 notation:2 lowest:2 what:3 argmin:12 substantially:1 minimizes:2 bakker:1 developed:1 unified:1 differentiation:1 nj:1 ikd:1 berkeley:5 tie:1 exactly:1 biometrika:1 k2:7 wrong:1 classifier:1 control:3 positive:7 local:3 treat:2 limit:2 plugin:14 id:2 analyzing:2 lugosi:1 inria:1 might:6 plus:1 weakened:1 studied:1 equivalence:1 suggests:2 iag:2 averaged:2 practice:2 regret:1 definite:2 differs:1 procedure:4 pontil:2 asymptotics:2 empirical:5 mhc:5 vert:1 confidence:1 get:4 cannot:2 close:4 unlabeled:4 operator:1 selection:1 risk:56 seminal:1 optimize:1 equivalent:1 map:1 missing:1 straightforward:1 attention:1 independently:2 convex:2 shen:1 numerische:1 simplicity:1 identifying:1 estimator:35 insight:2 deriving:1 regularize:4 his:1 stability:2 handle:1 laplace:1 limiting:1 annals:2 suppose:3 exact:3 akaike:1 hypothesis:2 recognition:1 labeled:1 ep:3 solved:1 capture:1 worst:1 wang:1 buus:1 plo:1 decrease:1 trade:1 principled:1 intuition:5 convexity:2 complexity:2 flurry:1 overshoot:1 depend:5 ic50:1 localization:1 completely:1 necessitates:1 exi:1 joint:4 various:4 represented:2 regularizer:38 derivation:1 train:1 effective:2 artificial:1 choosing:4 outcome:1 whose:3 quite:3 solve:1 cvpr:1 loglikelihood:1 otherwise:1 compensates:1 ability:1 statistic:10 vaart:1 think:2 noisy:1 superscript:1 sequence:2 differentiable:5 advantage:3 coming:1 fr:1 relevant:2 combining:2 achieve:2 squeeze:1 convergence:4 rademacher:1 produce:1 generating:2 converges:1 help:3 derive:2 develop:2 measured:1 op:7 received:1 strong:1 c:2 implies:1 direction:1 correct:2 vx:11 stringent:1 suffices:1 generalization:2 clustered:1 preliminary:1 hold:1 accompanying:1 considered:1 normal:3 exp:6 mapping:1 predict:1 substituting:1 estimation:6 favorable:2 peptide:2 create:1 tool:1 trusted:1 minimization:1 clearly:1 always:9 aim:1 normale:1 rather:2 avoid:1 pn:2 icmla:1 wilson:1 corollary:1 focus:2 improvement:1 likelihood:3 tradition:1 sense:2 posteriori:1 helpful:1 dependent:2 typically:1 hidden:1 france:2 interested:2 compatibility:1 classification:4 flexible:1 eldar:1 denoted:1 smoothing:3 special:1 haldane:1 equal:1 evgeniou:2 having:3 ng:1 sampling:1 biology:1 look:1 icml:1 minimized:1 others:1 spline:1 simplify:1 report:1 few:3 divergence:1 intended:1 n1:1 alignment:4 analyzed:1 akk:1 yielding:1 uppercase:1 behind:1 regularizers:6 xy:10 machinery:1 taylor:2 desired:2 iedb:1 gn:4 zn:1 deviation:1 entry:1 uniform:4 pal:1 synthetic:1 adaptively:1 international:3 probabilistic:1 xi1:1 off:1 michael:1 concrete:1 druck:1 squared:5 aaai:1 again:1 nm:1 deepening:1 choose:2 possibly:1 cesa:1 henceforth:1 worse:6 derivative:3 channeled:1 attaining:1 c12:1 b2:2 satisfy:1 explicitly:1 depends:5 script:1 try:1 tion:1 closed:2 performed:1 analyze:2 francis:2 sup:1 bayes:1 parallel:1 bouchard:5 regulariza:1 identifiability:1 contribution:1 minimize:4 variance:8 acid:1 yield:3 vp:2 weak:1 identification:1 bayesian:1 definition:1 sixth:1 james:7 thereof:1 e2:1 ndis:2 proof:4 associated:2 minka:1 dataset:1 lim:1 improves:1 niform:2 actually:4 follow:1 
methodology:1 formulation:1 evaluated:1 shrink:1 though:2 furthermore:2 just:1 overfit:1 hand:2 working:1 defines:1 logistic:1 perhaps:1 believe:1 effect:1 validity:1 verify:1 true:2 remedy:1 unbiased:2 regularization:44 hence:2 nonzero:2 conditionally:3 mahalanobis:2 numerator:1 game:1 generalized:2 performs:1 percy:1 regularizing:2 misspecified:2 common:4 superior:1 empirically:2 nrn:1 discussed:1 interpretation:1 association:1 numerically:1 cambridge:2 cv:10 rd:8 vanilla:1 erieure:1 consistency:1 automatic:1 heskes:1 mathematics:1 centre:1 thrice:2 had:3 access:1 europe:1 j:2 something:1 multivariate:2 dominant:1 showed:3 perspective:1 optimizing:5 irrelevant:1 rkd:3 binary:2 success:1 errs:1 der:1 minimum:4 employed:2 converge:1 signal:1 ii:1 relates:1 full:1 desirable:2 multiple:1 reduces:2 smooth:7 technical:1 calculation:1 bach:4 offer:2 cross:6 concerning:1 e1:1 equally:1 controlled:1 impact:5 prediction:6 regression:7 basic:1 denominator:1 essentially:2 expectation:1 metric:1 vision:1 arxiv:1 represent:1 tailored:1 kernel:1 penalize:1 receive:1 addition:1 c1:2 whereas:6 nielson:1 interval:1 limn:4 sure:2 subject:1 validating:1 jordan:4 call:1 odds:1 leverage:2 split:1 affect:1 zi:1 perfectly:1 wahba:1 reduce:1 idea:2 simplifies:1 computable:1 tradeoff:1 whether:1 motivated:1 expression:3 bartlett:3 peter:1 hessian:2 adequate:1 ignored:1 useful:3 generally:2 yik:1 involve:1 amount:5 stein:6 extensively:1 reduced:4 generate:1 sign:3 estimated:1 rb:2 write:2 key:5 four:2 nevertheless:1 drawn:1 k4:1 verified:2 asymptotically:5 fraction:2 letter:2 fourth:1 arrive:1 family:3 almost:1 summarizes:1 dy:1 vb:2 def:28 bound:8 aic:2 fold:1 quadratic:6 oracle:17 strength:7 bousquet:2 dkk:2 argument:1 performing:1 separable:1 relatively:1 developing:2 according:1 xerox:2 craven:1 across:1 slightly:1 jeffreys:1 restricted:1 unregularized:5 ln:18 equation:1 computationally:1 mathematik:1 resource:1 turn:2 know:1 serf:1 generalizes:1 apply:4 appearing:1 coin:2 substitute:1 binomial:1 clustering:2 cally:1 establish:1 classical:1 micchelli:1 objective:2 question:1 quantity:5 added:1 parametric:4 dependence:3 gradient:1 affinity:5 distance:1 parametrized:2 trivial:1 kk:1 liang:3 setup:1 difficult:1 nc:1 statement:1 xik:4 dtr:2 negative:5 motivates:1 unknown:2 pliang:1 looseness:1 upper:2 vertical:2 bianchi:1 finite:3 implementable:2 racle:15 anti:1 defining:1 extended:1 head:1 misspecification:5 precise:1 rn:23 community:1 pair:1 specified:9 kl:1 z1:1 california:2 tein:4 nip:3 address:1 usually:1 pattern:1 lund:1 including:1 event:1 natural:1 hybrid:8 regularized:3 predicting:1 raina:1 scheme:1 improve:2 ne:3 created:2 naive:1 deviate:1 prior:5 literature:1 l2:1 checking:1 understanding:1 marginalizing:1 asymptotic:17 relative:29 loss:14 expect:1 versus:1 validation:5 eigendecomposition:1 degree:2 sufficient:1 consistent:1 xp:1 share:1 cd:1 row:1 surprisingly:2 bias:10 allow:1 vv:1 pseudolikelihood:1 characterizing:1 benefit:1 van:1 curve:1 xn:5 rich:1 doesn:2 made:1 adaptive:1 far:1 transaction:2 approximate:2 compact:2 bui:1 global:1 overfitting:1 conclude:1 discriminative:18 xi:2 table:4 lasserre:1 nature:2 learn:1 molecule:3 expansion:5 du:1 investigated:1 necessarily:1 domain:3 diag:1 significance:1 main:3 pk:2 whole:1 hyperparameters:2 complementary:1 amino:1 x1:2 en:1 benchmarking:1 elaborate:1 wish:2 exponential:3 concatenating:1 candidate:1 third:1 theorem:10 rk:2 specific:1 bishop:1 gating:1 admits:1 beaten:1 dominates:1 concern:1 mendelson:1 ci:1 magnitude:3 justifies:1 margin:1 gap:1 
simply:2 explore:1 ez:2 expressed:1 kxk:1 applies:1 binding:5 corresponds:3 minimizer:1 determines:1 satisfies:1 conditional:1 goal:3 identity:2 towards:1 feasible:2 change:1 specifically:3 infinite:1 principal:1 svd:1 select:1 guillaume:2 support:2 reg:1 argyriou:1 correlated:1 |
Lattice Regression
Maya R. Gupta
Department of Electrical Engineering
University of Washington
Seattle, WA 98195
[email protected]
Eric K. Garcia
Department of Electrical Engineering
University of Washington
Seattle, WA 98195
[email protected]
Abstract
We present a new empirical risk minimization framework for approximating functions from training samples for low-dimensional regression applications where a
lattice (look-up table) is stored and interpolated at run-time for an efficient implementation. Rather than evaluating a fitted function at the lattice nodes without
regard to the fact that samples will be interpolated, the proposed lattice regression
approach estimates the lattice to minimize the interpolation error on the given
training samples. Experiments show that lattice regression can reduce mean test
error by as much as 25% compared to Gaussian process regression (GPR) for digital color management of printers, an application for which linearly interpolating
a look-up table is standard. Simulations confirm that lattice regression performs
consistently better than the naive approach to learning the lattice. Surprisingly,
in some cases the proposed method, although motivated by computational efficiency, performs better than directly applying GPR with no lattice at all.
1 Introduction
In high-throughput regression problems, the cost of evaluating test samples is just as important as the
accuracy of the regression and most non-parametric regression techniques do not produce models
that admit efficient implementation, particularly in hardware. For example, kernel-based methods
such as Gaussian process regression [1] and support vector regression require kernel computations
between each test sample and a subset of training examples, and local smoothing techniques such as
weighted nearest neighbors [2] require a search for the nearest neighbors.
For functions with a known and bounded domain, a standard efficient approach to regression is to
store a regular lattice of function values spanning the domain, then interpolate each test sample
from the lattice vertices that surround it. Evaluating the lattice is independent of the size of any
original training set, but exponential in the dimension of the input space, making it best-suited to low-dimensional applications. In digital color management, where real-time performance often requires millions of evaluations every second, the interpolated look-up table (LUT) approach is
the most popular implementation of the transformations needed to convert colors between devices,
and has been standardized by the International Color Consortium (ICC) with a specification called
an ICC profile [3].
For applications where one begins with training data and must learn the lattice, the standard approach
is to first estimate a function that fits the training data, then evaluate the estimated function at the
lattice points. However, this is suboptimal because the effect of interpolation from the lattice nodes
is not considered when estimating the function. This begs the question: can we instead learn lattice
outputs that accurately reproduce the training data upon interpolation?
Iterative post-processing solutions that update a given lattice to reduce the post-interpolation error
have been proposed by researchers in geospatial analysis [4] and digital color management [5]. In
this paper, we propose a solution that we term lattice regression, that jointly estimates all of the
lattice outputs by minimizing the regularized interpolation error on the training data. Experiments
with randomly-generated functions, geospatial data, and two color management tasks show that
lattice regression consistently reduces error over the standard approach of evaluating a fitted function
at the lattice points, in some cases by as much as 25%. More surprisingly, the proposed method can
perform better than evaluating test points by Gaussian process regression using no lattice at all.
2 Lattice Regression
The motivation behind the proposed lattice regression is to jointly choose outputs for lattice nodes
that interpolate the training data accurately. The key to this estimation is that the linear interpolation
operation can be directly inverted to solve for the node outputs that minimize the squared error of
the training data. However, unless there is ample training data, the solution will not necessarily
be unique. Also, to decrease estimation variance it may be beneficial to avoid fitting the training
data exactly. For these reasons, we add two forms of regularization to the minimization of the
interpolation error. In total, the proposed form of lattice regression trades off three terms: empirical
risk, Laplacian regularization, and a global bias. We detail these terms in the following subsections.
2.1 Empirical Risk
We assume that our data is drawn from the bounded input space $\mathcal{D} \subset \mathbb{R}^d$ and the output space $\mathbb{R}^p$; collect the training inputs $x_i \in \mathcal{D}$ in the $d \times n$ matrix $X = [x_1, \ldots, x_n]$ and the training outputs $y_i \in \mathbb{R}^p$ in the $p \times n$ matrix $Y = [y_1, \ldots, y_n]$. Consider a lattice consisting of $m$ nodes, where $m = \prod_{j=1}^{d} m_j$ and $m_j$ is the number of nodes along dimension $j$. Each node consists of an input-output pair $(a_i \in \mathbb{R}^d, b_i \in \mathbb{R}^p)$, and the inputs $\{a_i\}$ form a grid that contains $\mathcal{D}$ within its convex hull. Let $A$ be the $d \times m$ matrix $A = [a_1, \ldots, a_m]$ and $B$ be the $p \times m$ matrix $B = [b_1, \ldots, b_m]$.
For any $x \in \mathcal{D}$, there are $q = 2^d$ nodes in the lattice that form a cell (hyper-rectangle) containing $x$ from which an output will be interpolated; denote the indices of these nodes by $c_1(x), \ldots, c_q(x)$. For our purposes, we restrict the interpolation to be a linear combination $\{w_1(x), \ldots, w_q(x)\}$ of the surrounding node outputs $\{b_{c_1(x)}, \ldots, b_{c_q(x)}\}$, i.e. $\hat{f}(x) = \sum_i w_i(x)\, b_{c_i(x)}$. There are many interpolation methods that correspond to distinct weightings (for instance, in three dimensions: trilinear, pyramidal, or tetrahedral interpolation [6]). Additionally, one might consider a higher-order interpolation technique such as tricubic interpolation, which expands the linear weighting to the nodes directly adjacent to this cell. In our experiments we investigate only the case of d-linear interpolation (e.g. bilinear/trilinear interpolation) because it is arguably the most popular variant of linear interpolation, can be implemented efficiently, and has the theoretical support of being the maximum entropy solution to the underdetermined linear interpolation equations [7].
Given the weights $\{w_1(x), \ldots, w_q(x)\}$ corresponding to an interpolation of $x$, let $W(x)$ be the $m \times 1$ sparse vector with $c_j(x)$th entry $w_j(x)$ for $j = 1, \ldots, 2^d$ and zeros elsewhere. Further, for training inputs $\{x_1, \ldots, x_n\}$, let $W$ be the $m \times n$ matrix $W = [W(x_1), \ldots, W(x_n)]$. The lattice outputs $B^*$ that minimize the total squared-$\ell_2$ distortion between the lattice-interpolated training outputs $BW$ and the given training outputs $Y$ are
$$B^* = \arg\min_B \; \mathrm{tr}\big( (BW - Y)(BW - Y)^T \big). \qquad (1)$$
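As a concrete sketch (ours, not the authors' code) of how the d-linear weights and the sparse matrix $W$ can be computed: the names `dlinear_weights` and `build_W` are hypothetical, and inputs are assumed to be rescaled so that the lattice nodes sit at integer coordinates.

```python
import numpy as np
from itertools import product
from scipy import sparse

def dlinear_weights(x, grid_shape):
    """d-linear interpolation of point x on a lattice whose nodes sit at
    integer coordinates 0..m_j-1 along each dimension j."""
    d = len(grid_shape)
    base = np.minimum(np.floor(x).astype(int), np.array(grid_shape) - 2)
    frac = x - base                              # position within the enclosing cell
    idx, wts = [], []
    for corner in product((0, 1), repeat=d):     # the q = 2^d cell corners
        node = base + np.array(corner)
        w = np.prod(np.where(corner, frac, 1.0 - frac))
        idx.append(np.ravel_multi_index(tuple(node), grid_shape))
        wts.append(w)
    return np.array(idx), np.array(wts)

def build_W(X, grid_shape):
    """Assemble the sparse m x n matrix W = [W(x_1), ..., W(x_n)]."""
    m, n = int(np.prod(grid_shape)), X.shape[1]
    rows, cols, vals = [], [], []
    for j in range(n):
        idx, wts = dlinear_weights(X[:, j], grid_shape)
        rows.extend(idx); cols.extend([j] * len(idx)); vals.extend(wts)
    return sparse.csc_matrix((vals, (rows, cols)), shape=(m, n))
```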
2.2 Laplacian Regularization
Alone, the empirical risk term is likely to pose an underdetermined problem and overfit to the training data. As a form of regularization, we propose to penalize the average squared difference of the output on adjacent lattice nodes using Laplacian regularization. A somewhat natural regularization for a function defined on a lattice, its inclusion guarantees¹ a unique solution to (1).
The graph Laplacian [8] of the lattice is fully defined by the $m \times m$ lattice adjacency matrix $E$, where $E_{ij} = 1$ for nodes directly adjacent to one another and 0 otherwise. Given $E$, a normalized version of the Laplacian can be defined as $L = 2(\mathrm{diag}(\mathbf{1}^T E) - E)/(\mathbf{1}^T E \mathbf{1})$, where $\mathbf{1}$ is the $m \times 1$ all-ones vector. The average squared error between adjacent lattice outputs can be compactly represented as
$$\mathrm{tr}\big(BLB^T\big) = \sum_{k=1}^{p} \frac{1}{\sum_{ij} E_{ij}} \sum_{\{i,j \mid E_{ij}=1\}} (B_{ki} - B_{kj})^2.$$

¹For large enough values of the mixing parameter $\alpha$.
Thus, inclusion of this term penalizes first-order differences of the function at the scale of the lattice.
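For illustration, a minimal sketch of constructing the adjacency matrix $E$ and the normalized Laplacian $L$ for a regular lattice; the node ordering (C order, matching `np.ravel_multi_index`) and the function name are our choices.

```python
import numpy as np
from scipy import sparse

def lattice_laplacian(grid_shape):
    """Build E (nodes differing by one step along one dimension are adjacent)
    and the normalized Laplacian L = 2(diag(1^T E) - E) / (1^T E 1)."""
    m = int(np.prod(grid_shape))
    coords = np.array(list(np.ndindex(*grid_shape)))   # m x d node coordinates
    rows, cols = [], []
    for dim, size in enumerate(grid_shape):
        # pair each node with its neighbor one step ahead along `dim`
        i = np.flatnonzero(coords[:, dim] < size - 1)
        nxt = coords[i].copy(); nxt[:, dim] += 1
        j = np.ravel_multi_index(nxt.T, grid_shape)
        rows.extend(i); cols.extend(j)
        rows.extend(j); cols.extend(i)                 # symmetric adjacency
    E = sparse.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(m, m))
    deg = np.asarray(E.sum(axis=1)).ravel()
    L = 2.0 * (sparse.diags(deg) - E) / deg.sum()
    return E, L
```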
2.3 Global Bias
Alone, the Laplacian regularization of Section 2.2 rewards smooth transitions between adjacent
lattice outputs but only enforces regularity at the resolution of the nodes, and there is no incentive
in either the empirical risk or Laplacian regularization term to extrapolate the estimated function
beyond the boundary of the cells that contain training samples. When the training data samples
do not span all of the grid cells, the lattice node outputs reconstruct a clipped function. In order
to endow the algorithm with an improved ability to extrapolate and regularize towards trends in
the data, we also include a global bias term in the lattice regression optimization. The global bias term penalizes the divergence of lattice node outputs from some global function $\tilde{f} : \mathbb{R}^d \to \mathbb{R}^p$ that approximates the training data, and this can be learned using any regression technique.
Given $\tilde{f}$, we bias the lattice regression nodes towards $\tilde{f}$'s predictions for the lattice nodes by minimizing the average squared deviation:
$$\frac{1}{m}\, \mathrm{tr}\big( (B - \tilde{f}(A))(B - \tilde{f}(A))^T \big).$$
We hypothesized that the lattice regression performance would be better if $\tilde{f}$ was itself a good regression of the training data. Surprisingly, experiments comparing an accurate $\tilde{f}$, an inaccurate $\tilde{f}$, and no bias at all showed little difference in most cases (see Section 3 for details).
2.4 Lattice Regression Objective Function
Combined, the empirical risk minimization, Laplacian regularization, and global bias form the proposed lattice regression objective. In order to adapt an appropriate mixture of these terms, the regularization parameters $\alpha$ and $\beta$ trade off the first-order smoothness and the divergence from the bias function, relative to the empirical risk. The combined objective solves for the lattice node outputs $B^*$ that minimize
$$\arg\min_B \; \mathrm{tr}\Big( \frac{1}{n}(BW - Y)(BW - Y)^T + \alpha\, BLB^T + \frac{\beta}{m}(B - \tilde{f}(A))(B - \tilde{f}(A))^T \Big),$$
which has the closed form solution
$$B^* = \Big( \frac{1}{n} Y W^T + \frac{\beta}{m} \tilde{f}(A) \Big) \Big( \frac{1}{n} W W^T + \alpha L + \frac{\beta}{m} I \Big)^{-1}, \qquad (2)$$
where $I$ is the $m \times m$ identity matrix.
that is inverted in (2) is sparse (it contains no more than 3d nonzero entries per row2 ), (2) can be
solved using sparse Cholesky factorization [9]. On a 64bit 2.6GHz processor using the Matlab
command mldivide, we found that we could compute solutions for lattices that contained on
the order of 104 nodes (a standard size for digital color management profiling [6]) in < 20s using
< 1GB of memory but could not compute solutions for lattices that contained on the order of 105
nodes.
²For a given row, the only possible non-zero entries of $WW^T$ correspond to nodes that are adjacent in one or more dimensions, and these non-zero entries overlap with those of $L$.
3 Experiments
The effectiveness of the proposed method was analyzed with simulations on randomly-generated
functions and tested on a real-data geospatial regression problem as well as two real-data color
management tasks. For all experiments, we compared the proposed method to Gaussian process
regression (GPR) applied directly to the final test points (no lattice), and to estimating test points
by interpolating a lattice where the lattice nodes are learned by the same GPR. For the color management task, we also compared a state-of-the-art regression method used previously for this application: local ridge regression using the enclosing k-NN neighborhood [10]. In all experiments we
evaluated the performance of lattice regression using three different global biases: 1) an "accurate" bias $\tilde{f}$ was learned by GPR on the training samples; 2) an "inaccurate" bias $\tilde{f}$ was learned by a global d-linear interpolation³; and 3) the no bias case, where the $\beta$ term in (2) is fixed at zero.
To implement GPR, we used the MATLAB code provided by Rasmussen and Williams at http://www.GaussianProcess.org/gpml. The covariance function was set as the sum of a squared-exponential with an independent Gaussian noise contribution, and all data were demeaned by the mean of the training outputs before applying GPR. The hyperparameters for GPR were set by maximizing the marginal likelihood of the training data (for details, see Rasmussen and Williams [1]). To mitigate the problem of choosing a poor local maximum, gradient descent was performed from 20 random starting log-hyperparameter values drawn uniformly from $[-10, 10]^3$, and the maximal solution was chosen. The parameters for all other algorithms were set by minimizing the 10-fold cross-validation error using the Nelder-Mead simplex method, bounded to values in the range $[10^{-3}, 10^3]$. The starting point for this search was set at the default parameter setting for each algorithm: $\lambda = 1$ for local ridge regression⁴ and $\alpha = 1$, $\beta = 1$ for lattice regression. Experiments on the simulated dataset comparing this approach to the standard cross-validation over a grid of values $[10^{-3}, 10^{-2}, \ldots, 10^3] \times [10^{-3}, 10^{-2}, \ldots, 10^3]$ showed no difference in performance, and the former was nearly 50% faster.
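The GPR baseline above uses the GPML MATLAB toolbox; purely as an illustration of the stated setup (squared-exponential covariance plus independent noise, demeaned outputs, marginal-likelihood optimization with 20 random restarts), a rough scikit-learn equivalent could be sketched as:

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_gpr(X_train, y_train):
    """Squared-exponential covariance plus independent Gaussian noise;
    hyperparameters chosen by marginal likelihood with random restarts.
    normalize_y demeans (and rescales) the outputs, approximating the
    demeaning step described above."""
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True,
                                   n_restarts_optimizer=20)
    return gpr.fit(X_train, y_train)
```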
3.1 Simulated Data
We analyzed the proposed method with simulations on randomly-generated piecewise-polynomial functions $f : \mathbb{R}^d \to \mathbb{R}$ formed from splines. These functions are smooth but have features that occur at different length-scales; two-dimensional examples are shown in Fig. 1. To construct each function, we first drew ten iid random points $\{s_i\}$ from the uniform distribution on $[0,1]^d$, and ten iid random points $\{t_i\}$ from the uniform distribution on $[0,1]$. Then for each of the $d$ dimensions we first fit a one-dimensional spline $\tilde{g}_k : \mathbb{R} \to \mathbb{R}$ to the pairs $\{((s_i)_k, t_i)\}$, where $(s_i)_k$ denotes the $k$th component of $s_i$. We then combined the $d$ one-dimensional splines to form the $d$-dimensional function $\tilde{g}(x) = \sum_{k=1}^{d} \tilde{g}_k((x)_k)$, which was then scaled and shifted to have range spanning $[0, 100]$:
$$f(x) = 100\, \frac{\tilde{g}(x) - \min_{z \in [0,1]^d} \tilde{g}(z)}{\max_{z \in [0,1]^d} \tilde{g}(z)}.$$
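A sketch of generating such random test functions follows; the choice of cubic splines and the empirical min-max rescaling are our assumptions, since the spline order is not specified above.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def random_test_function(d, rng):
    """Sum of d one-dimensional splines, empirically rescaled to span [0, 100]."""
    s = rng.uniform(size=(10, d))          # ten random points in [0,1]^d
    t = rng.uniform(size=10)               # ten random targets in [0,1]
    splines = []
    for k in range(d):
        order = np.argsort(s[:, k])        # CubicSpline needs sorted abscissae
        splines.append(CubicSpline(s[order, k], t[order]))
    def g(x):                              # x: (n, d) array of query points
        return sum(spl(x[:, k]) for k, spl in enumerate(splines))
    z = rng.uniform(size=(100000, d))      # estimate the range over [0,1]^d
    gz = g(z)
    lo, hi = gz.min(), gz.max()
    return lambda x: 100.0 * (g(x) - lo) / (hi - lo)

# usage: f = random_test_function(2, np.random.default_rng(0))
```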
Figure 1: Example random piecewise-polynomial functions created by the sum of one-dimensional
splines fit to ten uniformly drawn points in each dimension.
³We considered the very coarse $m = 2^d$ lattice formed by the corner vertices of the original lattice and solved (1) for this one-cell lattice, using the result to interpolate the full set of lattice nodes, forming $\tilde{f}(A)$.
⁴Note that no locality parameter is needed for this local ridge regression as the neighborhood size is automatically determined by enclosing k-NN [10].
For input dimensions $d \in \{2, 3\}$, a set of 100 functions $\{f_1, \ldots, f_{100}\}$ were randomly generated as described above, and a set of $n \in \{50, 1000\}$ randomly chosen training inputs $\{x_1, \ldots, x_n\}$ were fit by each regression method. A set of $m = 10{,}000$ randomly chosen test inputs $\{z_1, \ldots, z_m\}$ were used to evaluate the accuracy of each regression method in fitting these functions. For the $r$th randomly-generated function $f_r$, denote the estimate of the $j$th test sample by a regression method as $(\hat{y}_j)_r$. For each of the 100 functions and each regression method we computed the root mean-squared errors (RMSE), where the mean is over the $m = 10{,}000$ test samples:
$$e_r = \Big( \frac{1}{m} \sum_{j=1}^{m} \big( f_r(z_j) - (\hat{y}_j)_r \big)^2 \Big)^{1/2}.$$
The mean and statistical significance (as judged by a one-sided Wilcoxon with $p = 0.05$) of $\{e_r\}$ for $r = 1, \ldots, 100$ is shown in Fig. 2 for lattice resolutions of 5, 9 and 17 nodes per dimension.
[Figure 2 plots: four panels of Error vs. Lattice Nodes Per Dimension (5, 9, 17): (a) d = 2, n = 50; (b) d = 2, n = 1000; (c) d = 3, n = 50; (d) d = 3, n = 1000. Legend: GPR direct; GPR lattice; LR GPR bias; LR d-linear bias; LR no bias. A ranking by statistical significance appears above each panel.]
Figure 2: Shown is the average RMSE of the estimates given by each regression method on the simulated dataset. As summarized in the legend, shown is GPR applied directly to the test samples (dotted line), and the bars are (from left to right) GPR applied to the nodes of a lattice which is then used to interpolate the test samples, lattice regression with a GPR bias, lattice regression with a d-linear regression bias, and lattice regression with no bias. The statistical significance corresponding to each group is shown as a hierarchy above each plot: method A is shown as stacked above method B if A performed statistically significantly better than B.
In interpreting the results of Fig. 2, it is important to note that the statistical significance test compares the ordering of relative errors between each pair of methods across the random functions. That is, it indicates whether one method consistently outperforms another in RMSE when fitting the randomly drawn functions.
Consistently across the random functions, and in all 12 experiments, lattice regression with a GPR
bias performs better than applying GPR to the nodes of the lattice. At coarser lattice resolutions, the
choice of bias function does not appear to be as important: in 7 of the 12 experiments (all at the low
end of grid resolution) lattice regression using no bias does as well or better than that using a GPR
bias.
Interestingly, in 3 of the 12 experiments, lattice regression with a GPR bias achieves statistically
significantly lower errors (albeit by a marginal average amount) than applying GPR directly to the
random functions. This surprising behavior is also demonstrated on the real-world datasets in the following sections and is likely due to large extrapolations made by GPR; in contrast, interpolation from the lattice regularizes the estimate, which reduces the overall error in these cases.
3.2 Geospatial Interpolation
Interpolation from a lattice is a common representation for storing geospatial data (measurements
tied to geographic coordinates) such as elevation, rainfall, forest cover, wind speed, etc. As a cursory
investigation of the proposed technique in this domain, we tested it on the Spatial Interpolation Comparison 97 (SIC97) dataset [11] from the Journal of Geographic Information and Decision Analysis.
This dataset is composed of 467 rainfall measurements made at distinct locations across Switzerland. Of these, 100 randomly chosen sites were designated as training to predict the rainfall at
the remaining 367 sites. The RMSE of the predictions made by GPR and variants of the proposed
method are presented in Fig. 3. Additionally, the statistical significance (as judged by a one-sided
Wilcoxon with p = 0.05) of the differences in squared error on the 367 test samples was computed
for each pair of techniques. In contrast to the previous section in which significance was computed
on the RMSE across 100 randomly drawn functions, significance in this section indicates that one
technique produced consistently lower squared error across the individual test samples.
[Figure 3 plot: RMSE vs. Lattice Nodes Per Dimension (5, 9, 17, 33, 65). Legend: GPR direct; GPR lattice; LR GPR bias; LR d-linear bias; LR no bias. A ranking by statistical significance appears above the plot.]
Figure 3: Shown is the RMSE of the estimates given by each method for the SIC97 test samples.
The hierarchy of statistical significance is presented as in Fig. 2.
Compared with GPR applied to a lattice, lattice regression with a GPR bias again produces a lower
RMSE on all five lattice resolutions. However, for four of the five lattice resolutions, there is no
performance improvement as judged by the statistical significance of the individual test errors. In
comparing the effectiveness of the bias term, we see that on four of five lattice resolutions, using no
bias and using the d-linear bias produce consistently lower errors than both the GPR bias and GPR
applied to a lattice.
Additionally, for finer lattice resolutions (≥ 17 nodes per dimension) lattice regression either outperforms or is not significantly worse than GPR applied directly to the test points. Inspection of the maximal errors confirms the behavior posited in the previous section: that interpolation from the lattice imposes a helpful regularization. The range of values produced by applying GPR directly lies within [1, 552], while those produced by lattice regression (regardless of bias) lie in the range [3, 521]; the actual values at the test samples lie in the range [0, 517].
3.3 Color Management Experiments with Printers
Digital color management allows for a consistent representation of color information among diverse
digital imaging devices such as cameras, displays, and printers; it is a necessary part of many professional imaging workflows and popular among semi-professionals as well. An important component
of any color management system is the characterization of the mapping between the native color space of a device (RGB for many digital displays and consumer printers) and a device-independent space such as CIE L*a*b* (abbreviated herein as Lab), in which distance approximates perceptual notions of color dissimilarity [12].
For nonlinear devices such as printers, the color mapping is commonly estimated empirically by
printing a page of color patches for a set of input RGB values and measuring the printed colors
with a spectrophotometer. From these training pairs of (Lab, RGB) colors, one estimates the inverse mapping $f : \text{Lab} \to \text{RGB}$ that specifies what RGB inputs to send to the printer in order to reproduce a desired Lab color. See Fig. 4 for an illustration of a color-managed system. Estimating $f$ is
challenging for a number of reasons: 1) f is often highly nonlinear; 2) although it can be expected
to be smooth over regions of the colorspace, it is affected by changes in the underlying printing
mechanisms [13] that can introduce discontinuities; and 3) device instabilities and measurement
error introduce noise into the training data. Furthermore, millions of pixels must be processed in
approximately real-time for every image without adding undue costs for hardware, which explains
the popularity of using a lattice representation for color management in both hardware and software
imaging systems.
[Figure 4 diagram: desired (L, a, b) input → Learned Device Characterization → (R, G, B) → per-channel 1D LUTs → (R', G', B') → Printer → resulting measured color.]
Figure 4: A color-managed printer system. For evaluation, errors are measured between the desired $(L, a, b)$ and the resulting $(\hat{L}, \hat{a}, \hat{b})$ for a given device characterization.
The proposed lattice regression was tested on an HP Photosmart D7260 ink jet printer and a Samsung
CLP-300 laser printer. As a baseline, we compared to a state-of-the-art color regression technique
used previously in this application [10]: local ridge regression (LRR) using the enclosing k-NN
neighborhood. Training samples were created by printing the Gretag MacBeth TC9.18 RGB image,
which has 918 color patches that span the RGB colorspace. We then measured the printed color
patches with an X-Rite iSis spectrophotometer using D50 illuminant at a 2° observer angle and UV
filter. As shown in Fig. 4 and as is standard practice for this application, the data for each printer
is first gray-balanced using 1D calibration look-up-tables (1D LUTs) for each color channel (see
[10, 13] for details). We use the same 1D LUTs for all the methods compared in the experiment and
these were learned for each printer using direct GPR on the training data.
We tested each method's accuracy on reproducing 918 new randomly-chosen in-gamut⁵ test Lab
colors. The test errors for the regression methods on the two printers are reported in Tables 1 and 2. As
is common in color management, we report $\Delta E_{76}$ error, which is the Euclidean distance between
the desired test Lab color and the Lab color that results from printing the estimated RGB output of
the regression (see Fig. 4).
For both printers, the lattice regression methods performed best in terms of mean, median and 95
%-ile error. Additionally, according to a one-sided Wilcoxon test of statistical significance with
⁵We drew 918 samples iid uniformly over the RGB cube, printed these, and measured the resulting Lab values; these Lab values were used as test samples. This is a standard approach to assuring that the test samples are Lab colors that are in the achievable color gamut of the printer [10].
Table 1: Samsung CLP-300 laser printer

                                               Euclidean Lab Error
                                               Mean   Median   95 %-ile   Max
Local Ridge Regression (to fit lattice nodes)  4.59   4.10     9.80       14.59
GPR (direct)                                   4.54   4.22     9.33       17.36
GPR (to fit lattice nodes)                     4.54   4.17     9.62       15.95
Lattice Regression (GPR bias)                  4.31   3.95     9.08       15.11
Lattice Regression (Trilinear bias)            4.14   3.75     8.39       15.59
Lattice Regression (no bias)                   4.08   3.72     8.00       17.45
Table 2: HP Photosmart D7260 inkjet printer

                                               Euclidean Lab Error
                                               Mean   Median   95 %-ile   Max
Local Ridge Regression (to fit lattice nodes)  3.34   2.84     7.70       14.77
GPR (direct)                                   2.79   2.45     6.36       11.08
GPR (to fit lattice nodes)                     2.76   2.36     6.36       11.79
Lattice Regression (GPR bias)                  2.53   2.17     5.96       10.25
Lattice Regression (Trilinear bias)            2.34   1.84     5.89       12.48
Lattice Regression (no bias)                   2.07   1.75     4.89       10.51
The bold face indicates that the individual errors are statistically significantly lower than the
others as judged by a one-sided Wilcoxon significance test (p=0.05). Multiple bold lines indicate that there was no statistically significant difference in the bolded errors.
p = 0.05, all of the lattice regressions (regardless of the choice of bias) were statistically significantly better than the other methods for both printers; on the Samsung, there was no significant difference between the choices of bias, and on the HP, using no bias produced consistently
lower errors. These results are surprising for three reasons. First, the two printers have rather different nonlinearities because the underlying physical mechanisms differ substantially (one is a laser
printer and the other is an inkjet printer), so it is a nod towards the generality of the lattice regression that it performs best in both cases. Second, the lattice is used for computational efficiency, and we were surprised to see it perform better than estimating the test samples directly with the function learned by GPR (no lattice). Third, we hypothesized (incorrectly) that better
performance would result from using the more accurate global bias term formed by GPR than using
the very coarse fit provided by the global trilinear bias or no bias at all.
4 Conclusions
In this paper we noted that low-dimensional functions can be efficiently implemented as interpolation from a regular lattice and we argued that an optimal approach to learning this structure from
data should take into account the effect of this interpolation. We showed that, in fact, one can directly estimate the lattice nodes to minimize the empirical interpolated training error and added two
regularization terms to attain smoothness and extrapolation. It should be noted that, in the experiments, extrapolation beyond the training data was not directly tested: test samples for the simulated
and real-data experiments were drawn mainly from within the interior of the training data.
Real-data experiments showed that mean error on a practical digital color management problem
could be reduced by 25% using the proposed lattice regression, and that the improvement was statistically significant. Simulations also showed that lattice regression was statistically significantly
better than the standard approach of first fitting a function then evaluating it at the lattice points.
Surprisingly, although the lattice architecture is motivated by computational efficiency, both our
simulated and real-data experiments showed that the proposed lattice regression can work better
than state-of-the-art regression of test samples without a lattice.
References
[1] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning), The MIT Press, 2005.
[2] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning, Springer-Verlag, New York, 2001.
[3] D. Wallner, Building ICC Profiles - the Mechanics and Engineering, chapter 4: ICC Profile Processing Models, pp. 150-167, International Color Consortium, 2000.
[4] W. R. Tobler, "Lattice tuning," Geographical Analysis, vol. 11, no. 1, pp. 36-44, 1979.
[5] R. Bala, "Iterative technique for refining color correction look-up tables," United States Patent 5,649,072, 1997.
[6] R. Bala and R. V. Klassen, Digital Color Handbook, chapter 11: Efficient Color Transformation Implementation, CRC Press, 2003.
[7] M. R. Gupta, R. M. Gray, and R. A. Olshen, "Nonparametric supervised learning by linear interpolation with maximum entropy," IEEE Trans. on Pattern Analysis and Machine Intelligence (PAMI), vol. 28, no. 5, pp. 766-781, 2006.
[8] F. Chung, Spectral Graph Theory, Number 92 in Regional Conference Series in Mathematics, American Mathematical Society, 1997.
[9] T. A. Davis, Direct Methods for Sparse Linear Systems, SIAM, Philadelphia, September 2006.
[10] M. R. Gupta, E. K. Garcia, and E. M. Chin, "Adaptive local linear regression with application to printer color management," IEEE Trans. on Image Processing, vol. 17, no. 6, pp. 936-945, 2008.
[11] G. Dubois, "Spatial interpolation comparison 1997: Foreword and introduction," Special Issue of the Journal of Geographic Information and Decision Analysis, vol. 2, pp. 1-10, 1998.
[12] G. Sharma, Digital Color Handbook, chapter 1: Color Fundamentals for Digital Imaging, pp. 1-114, CRC Press, 2003.
[13] R. Bala, Digital Color Handbook, chapter 5: Device Characterization, pp. 269-384, CRC Press, 2003.
Noise Characterization, Modeling, and Reduction for
In Vivo Neural Recording
Zhi Yang¹, Qi Zhao², Edward Keefer³,⁴, and Wentai Liu¹
¹University of California at Santa Cruz, ²California Institute of Technology
³UT Southwestern Medical Center, ⁴Plexon Inc
[email protected]
Abstract
Studying signal and noise properties of recorded neural data is critical in developing more efficient algorithms to recover the encoded information. Important
issues exist in this research including the variant spectrum spans of neural spikes
that make it difficult to choose a globally optimal bandpass filter. Also, multiple
sources produce aggregated noise that deviates from the conventional white Gaussian noise. In this work, the spectrum variability of spikes is addressed, based on which the concept of an adaptive bandpass filter that fits the spectrum of individual spikes is proposed. Multiple noise sources have been studied through analytical
models as well as empirical measurements. The dominant noise source is identified as neuron noise followed by interface noise of the electrode. This suggests
that major efforts to reduce noise from electronics are not well spent. The measured noise from in vivo experiments shows a family of $1/f^x$ spectra that can
be reduced using noise shaping techniques. In summary, the methods of adaptive
bandpass filtering and noise shaping together result in several dB signal-to-noise
ratio (SNR) enhancement.
1 Introduction
Neurons in the brain communicate through the firing of action potentials. This process induces brief
?voltage? spikes in the surrounding environment that can be recorded by electrodes. The recorded
neural signal may come from single, or multiple neurons. While single neurons only require a
detection algorithm to identify the firings, multiple neurons require the separation of superimposed
activities to obtain individual neuron firings. This procedure, also known as spike sorting, is more
complex than what is required for single neurons.
Spike sorting has acquired general attention. Many algorithms have been reported in the literature [1-7], with each claiming an improved performance based on different data. Comparisons
among different algorithms can be subjective and difficult. For example, benchmarks of in vivo
recordings that thoroughly evaluate the performance of algorithms are unavailable. Also, synthesized sequences with benchmarks obtained through neuron models [8], isolated single neuron
recordings [2], or simultaneous intra- and extracellular recordings [9] lack the in vivo recording
environment. As a result, synthesized data provide useful but limited feedback on algorithms.
This paper discusses a noise study, based on which SNR enhancement techniques are proposed. These techniques are applicable to an unspecified spike sorting algorithm. Specifically, a procedure for online estimation of both individual spike and noise spectra is first applied. Based on the estimation, a bandpass filter that fits the spectrum of the underlying spike is selected. This maximally reduces the broad band noise without sacrificing signal integrity. In addition, a comprehensive study of multiple noise sources is performed through a lumped circuit model as well as measurements. Experiments suggest that the dominant noise is not from recording electronics,
Figure 1: Block diagram of the proposed noise reduction procedures.
thus de-emphasize the importance of low noise hardware design. More importantly, the measured
noise generally shows a family of $1/f^x$ spectra, which can be reduced by using noise shaping
techniques [10, 11]. Figure 1 shows the proposed noise reduction procedures.
The rest of this paper is organized as follows. Section 2 focuses on noise sources. Section 3 gives
a Wiener kernel based adaptive bandpass filter. Section 4 describes a noise shaping technique that
uses fractional order differentiation. Section 5 reports experiment results. Section 6 gives concluding
remarks.
2 Noise Spectrum and Noise Model
Recorded neural spikes are superimposed with noise that exhibits non-Gaussian characteristics and can be approximated as $1/f^x$ noise. The frequency dependency of the noise is contributed by multiple sources. Identified noise sources include $1/f^\alpha$-neuron noise [12-14] (the notations $1/f^x$ and $1/f^\alpha$ represent the frequency dependencies of the total noise and the neuron noise, respectively), electrode-electrolyte interface noise [15], tissue thermal noise, and electronic noise, which are illustrated in Figure 2 using a lumped circuit model. Except electrolyte bulk noise ($4kTR_b$ in Figure 2), which has a flattened spectrum, the rest show frequency dependency. Specifically, $1/f^\alpha$-neuron noise is induced by distant neurons [12-14]. Numeric simulations based on simplified neuron models [12] suggest that $\alpha$ can vary over a wide range depending on the parameters. For the electrode-electrolyte interface noise, of the non-faradaic type in particular, an effective resistance ($R_{ee}$) is defined for modeling purposes. $R_{ee}$ generates noise that is attenuated quadratically with frequency in the high frequency region by the interface capacitance ($C_{ee}$). Electronic noise consists of two major components: thermal noise ($\propto kT/g_m$ [16]) and flicker noise (or $1/f$ noise [16]). Flicker noise dominates the lower frequency range and is fabrication-process dependent. Next, we address the noise model that will later be used to develop the noise removal techniques in Section 3 and Section 4, and that is verified by the experiment results in Section 5.
2.1 $1/f^\alpha$-Neuron Noise
Background spiking activities of the vast distant neurons (e.g. spikes, synaptic releases [17-19]) overlap the spectrum of the recorded spike signal. They usually have small magnitudes and are noisily aggregated. Analytically, the background activities are described as
$$V_{neu} = \sum_i \sum_k v_{i.neu}(t - t_{i,k}), \qquad (1)$$
where $V_{neu}$ represents the superimposed background activities of distant neurons; $i$ and $t_{i,k}$ represent the object identification and its activation time, respectively; and $v_{i.neu}$ is the spiking activity template of the $i$th object. Based on Eq. 1, the power spectrum of $V_{neu}$ is
$$P\{V_{neu}\} = \sum_i |X_i(f)|^2 f_i \sum_k \big\langle e^{2\pi j f\, (t_{i,k_1+k} - t_{i,k_1})} \big\rangle, \qquad (2)$$
where $\langle \cdot \rangle$ represents the average over the ensemble and over $k_1$, $P\{\cdot\}$ is the spectrum operation, $X_i(f)$ is the Fourier transform of $v_{i.neu}$, and $f_i$ is the frequency of spiking activity $v_{i.neu}$ (the number of activations divided by a period of time). The spectrum of a delta function spike pulse train, $\sum_k \langle e^{2\pi j f (t_{k_1+k} - t_{k_1})} \rangle$, according to [12], features a lower frequency and exhibits a $1/f^\alpha$ frequency dependency. As this term multiplies $|X_i(f)|^2$, the unresolved spiking activities of distant neurons contribute a spectrum of $1/f^x$ within the signal spectrum.

Figure 2: Noise illustration for extracellular spikes.
2.2 Electrode Noise
Assume the electrode-electrolyte interface is the non-faradaic type, where charges such as electrons and ions cannot pass across the interface. In a typical in vivo recording environment that involves several different ionic particles, e.g. Na+, K+, ..., the current flux of any $i$th charged particle, $J_i(x)$, at location $x$ assuming spatial concentration $n_i(x)$ is described by the Nernst equation
$$J_i(x) = -D_i \nabla n_i(x) + n_i(x)\nu - \frac{z_i q}{kT} D_i n_i \nabla \phi(x), \qquad (3)$$
where $D_i$ is the diffusion coefficient, $\phi$ the electrical potential, $z_i$ the charge of the particle, $q$ the charge of one electron, $k$ the Boltzmann constant, $T$ the temperature, and $\nu$ the convection coefficient. In a steady state, $J_i(x)$ is zero with the boundary condition of maintaining about a 1 V voltage drop from metal to electrolyte. In such a case, the electrode interface can be modeled as a lumped resistor $R_{ee}$ in parallel with a lumped capacitor $C_{ee}$. This naturally forms a lowpass filter for the interface noise. As a result, the induced noise from $R_{ee}$ at the input of the amplifier is
As a result, the induced noise from Ree at the input of the amplifier is
4kT
1
4kT
Ne.e =
(Ree ||j?Cee ||(Rb + j?Ci ))2 =
|
|2 . (4)
1
Ree
Ree 1/Ree + j?Cee + 1/(Rb + j?C
)
i
Referring to the hypothesis that the amplifier input capacitance ($C_i$) is sufficiently small, introducing negligible waveform distortion, the integrated noise from the electrode interface satisfies
$$\int_{f_{c1}}^{f_{c2}} N_{e.e}\, df \approx \int_{f_{c1}}^{f_{c2}} \frac{4kT R_{ee}}{|1 + 2\pi j f R_{ee} C_{ee}|^2}\, df = \frac{2kT}{\pi C_{ee}} \tan^{-1}\!\big(2\pi R_{ee} C_{ee} f\big)\Big|_{f=f_{c1}}^{f=f_{c2}} < \frac{kT}{C_{ee}}. \qquad (5)$$
Equation 5 suggests reducing the electrode interface noise by increasing the double layer capacitance ($C_{ee}$). Without increasing the size of electrodes, carbon-nanotube (CNT) coating [20] can dramatically increase the electrode surface area, thus reducing the interface noise. Section 5 will compare conventional electrodes and CNT coated electrodes from a noise point of view.
In regions away from the interface boundary, $\nabla n_i(x) = 0$ results in a flattened noise spectrum. Here we use a lumped bulk resistance $R_b$ in series with the double-layer interface for modeling noise:
$$N_{e.b} = 4kT R_b = 4kT\, \xi\, \frac{\rho_{tissue}}{\pi r_s}, \qquad (6)$$
where $R_b$ is the bulk resistance, $\rho_{tissue}$ is the electrolyte resistivity, $r_s$ is the radius of the electrode, and $\xi$ is a constant that relates to the electrode geometry. As given in [21], $\xi \approx 0.5$ for a plate electrode.
2.3 Electronic Noise
Noise generated by electronics can be predicted by circuit design tools and validated through measurements. At the frequency of interest, there are two major components: thermal noise of transistors and flicker noise,
$$N_{electronic} = N_{c.thermal} + N_{c.flicker} = \gamma\, \frac{4kT}{g_m} + \frac{K}{C_{ox} W L} \cdot \frac{1}{f}, \qquad (7)$$
where $N_{c.thermal}$ is the circuit thermal noise, $N_{c.flicker}$ the flicker noise, $g_m$ the transconductance of the amplifier ($\partial i_{out}/\partial v_{in}$), $\gamma$ a circuit-architecture-dependent constant on the order of $O(1)$, $K$ a process-dependent constant on the order of $10^{-25}\,\mathrm{V^2 F}$ [16], $C_{ox}$ the transistor gate capacitance density, and $W$ and $L$ the transistor width and length, respectively.
Given a design schematic, circuit thermal noise can be reduced by increasing the transconductance ($g_m$), which is to first order linear in the bias current and thus the power consumption. Flicker noise can be reduced using design techniques such as large-size input transistors and chopper modulation [22]. Using advanced semiconductor technologies, trading power and area against noise [16], and elegant design techniques like chopper modulation and current feedback [23], a state-of-the-art low noise neural amplifier can provide less than 2 µV total noise [24]. Such design costs can be necessary and useful if electronic noise contributes significantly to the total noise. Otherwise, the over-designed noise specification may be used to trade off other specifications and potentially result in overall improved performance of the system. Section 5 will present experiments evaluating the noise contributions from different sources, which show that electronics are not the dominant noise source in our experiments.
2.4 Total Noise
The noise sources as shown in Figure 2 include unresolved neuron activities ($N_{neu}$), electrode-electrolyte interface noise ($N_{e.e}$), thermal noise from the electrolyte bulk ($N_{e.b}$) and active circuitry ($N_{c.thermal}$), and flicker noise ($N_{c.flicker}$). The noise spectrum is empirically fitted by
$$N(f) = N_{neu} + N_{e.e} + N_{e.b} + N_{c.thermal} + N_{c.flicker} \approx \frac{N_1}{f^x} + N_0, \qquad (8)$$
where $N_1/f^x$ and $N_0$ represent the frequency-dependent and flat terms, respectively. Equation 8 describes a combination of both colored noise ($1/f^x$) and broad band noise, which can be reduced by using noise removal techniques. Section 3 presents an adaptive filtering scheme used to optimally attenuate the broad band noise. Section 4 presents a noise shaping technique used to improve the differentiation between signals and noise within the passband.
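As an illustration, the empirical fit of Eq. 8 to a measured spectrum can be done with nonlinear least squares; the initial guess below is a heuristic of ours.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_noise_model(f, N):
    """Fit N(f) ~ N1 / f^x + N0 to a measured noise spectrum (Eq. 8).
    f: frequencies (Hz, > 0); N: power spectral density estimates."""
    model = lambda f, N1, x, N0: N1 / f**x + N0
    p0 = (N[0] * f[0], 1.5, N[-1])       # crude initial guess (our heuristic)
    (N1, x, N0), _ = curve_fit(model, f, N, p0=p0, maxfev=10000)
    return N1, x, N0
```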
3 Adaptive Bandpass Filtering
SNR is calculated by integrating both the signal and noise spectra. Intuitively, a passband that is either too narrow or too wide introduces signal distortion or unwanted noise. Figure 5(b) plots detected spikes of different widths from one single electrode and shows the difficulty of optimally sizing the passband. While a passband that only fits one spike template may introduce waveform distortion to
spikes of other templates, a passband that covers every template will introduce more noise to spikes
of every template. A possible solution is to adaptively assign a passband to each spike waveform
such that each span will be just wide enough to cover the underlying waveform. This section presents
the steps used in order to achieve this solution and includes spike detection, spectrum estimation,
and filter generation.
3.1 Spike Detection
In this work, spike detection is performed using a nonlinear energy operator (NEO) [25] that captures instantaneous high-frequency and high-magnitude activities. For a discrete time signal $x_i$, $i = 1, 2, 3, \ldots$, NEO outputs
$$\psi(x_i) = x_i^2 - x_{i+1}\, x_{i-1}. \qquad (9)$$
The usefulness of NEO for spike detection can be explored by taking the expectation of Eq. 9:
$$\overline{\psi(x_i)} = R_x(0) - R_x(2\Delta T) = \int P(f, i)\,\big(1 - \cos 4\pi f \Delta T\big)\, df, \qquad (10)$$
where $R_x$ is the autocorrelation function, $\Delta T$ is the sampling interval, and $P(f, i)$ is the estimated power spectral density with the window centered at sample $x_i$. When the frequency of interest is much lower than the sampling frequency, $1 - \cos 2\pi f \tau$ is approximately $2\pi^2 f^2 \tau^2$. This emphasizes the high frequency power spectrum. Because spikes are high frequency activities by definition, NEO outputs a larger score when spikes are present. An example of NEO based spike detection is shown in Figure 3, where NEO improves the separation between spikes and the background activity.
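A minimal sketch of NEO-based detection follows; the threshold rule (a multiple of the mean NEO score) and the refractory handling are our assumptions, as the text does not specify them.

```python
import numpy as np

def neo(x):
    """Nonlinear energy operator, Eq. (9): psi[i] = x[i]^2 - x[i+1]*x[i-1]."""
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1]**2 - x[2:] * x[:-2]
    return psi

def detect_spikes(x, k=8.0, refractory=30):
    """Threshold the NEO output at k times its mean (a common heuristic,
    assumed here) and enforce a refractory gap between detections."""
    psi = neo(np.asarray(x, dtype=float))
    thr = k * psi.mean()
    spikes, last = [], -refractory
    for i in np.flatnonzero(psi > thr):
        if i - last >= refractory:
            spikes.append(i); last = i
    return np.array(spikes)
```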
[Figure 3 plots: panels (a) and (b), amplitude vs. sample index; see the caption below.]
Figure 3: Spike sequence and its corresponding NEO output. (a) Raw sequence of one channel. (b)
The corresponding NEO output of the raw sequence in (a).
3.2 Corner Frequency Estimation
Spectrum estimation of individual spikes is performed to select a corresponding bandpass filter that balances spectrum distortion and noise. Given its ability to separate bandlimited signals from broad band noise, a Wiener filter [26] is used here to size the signal passband. In the frequency domain, denoting $P_{XX}$ and $P_{NN}$ as the signal and noise spectra, the Wiener filter is
$$W(f) = \frac{P_{XX}(f)}{P_{XX}(f) + P_{NN}(f)} = \frac{SNR(f)}{SNR(f) + 1}. \qquad (11)$$
Implementing a precise Wiener filter for each detected spike requires considerable computation, as well as a reliable estimation of the signal spectrum. In this work, we are instead interested in using the one of a series of prepared bandpass filters $H_i$ ($i = 1, 2, \ldots, n$) that best matches the solved "optimal" Wiener filter:
$$\arg\min_i \int |H_i(f) - W(f)|^2\, df, \qquad (12)$$
subject to $\int [H_i(f) - W(f)]\, df = 0$.
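A sketch of the selection in Eqs. 11-12, given a bank of precomputed filter magnitude responses on a shared frequency grid; the soft handling of the equal-area constraint is our pragmatic relaxation, since an exact zero of the integral may not be attainable with a fixed filter bank.

```python
import numpy as np

def select_filter(H_bank, Pxx, Pnn, df):
    """Choose the prepared filter H_i closest to the Wiener filter W(f).
    H_bank: (n_filters, n_freqs) magnitude responses on a shared grid;
    Pxx, Pnn: estimated signal/noise power spectra on the same grid."""
    Wf = Pxx / (Pxx + Pnn)                           # Eq. (11)
    cost = np.sum((H_bank - Wf) ** 2, axis=1) * df   # Eq. (12) objective
    # Relaxed equal-area constraint: restrict the search to the filters
    # whose passband area best matches that of W(f).
    gap = np.abs(np.sum(H_bank - Wf, axis=1)) * df
    ok = gap <= gap.min() * (1 + 1e-6) + 1e-12
    return int(np.argmin(np.where(ok, cost, np.inf)))
```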
4 Noise Shaping
The adaptive scheme presented in Section 3 tacitly assigns a matched frequency mask to individual spikes and balances noise and spectrum integrity. The remaining noise exhibits a $1/f^x$ frequency dependency according to Section 2. In this section, we focus on noise shaping techniques to further distinguish signal from noise.
The fundamentals of noise shaping are straightforward. Instead of equally amplifying the spectrum, a noise shaping filter allocates more weight to high SNR regions while reducing weight at low SNR regions. This results in an increased ratio of the integrated signal power over the noise power. In general, there are a variety of noise shaping filters that can improve the integrated SNR [10]. In this work, we use a series of fractional derivative operations for noise shaping,
$$D(h(x)) = \frac{d^p h(x)}{dx^p}, \qquad (13)$$
where $h(x)$ is a general function and $p$ is a positive number (integer or non-integer) that adjusts the degree of noise shaping; the larger the $p$, the more emphasis on the high frequency spectrum. In the Z domain, the realization of the fractional derivative operation can be done using the binomial series [27]:
$$H(z) = (1 - z^{-1})^p = \sum_{n=0}^{\infty} h(n)\, z^{-n} = 1 - p z^{-1} + \sum_{n=2}^{\infty} (-1)^n\, \frac{p(p-1)\cdots(p-n+1)}{n!}\, z^{-n}, \qquad (14)$$
where $h(n)$ are the fractional derivative filter coefficients, which converge to zero.
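The coefficients $h(n)$ of Eq. 14 follow a simple recursion, sketched below; the truncation length is our choice (the coefficients converge to zero, so a finite number of taps is a reasonable approximation).

```python
import numpy as np
from scipy.signal import lfilter

def frac_diff_coeffs(p, n_taps=64):
    """Coefficients h(n) of H(z) = (1 - z^{-1})^p via the recursion
    h(0) = 1, h(n) = h(n-1) * (n - 1 - p) / n   (equivalent to Eq. 14)."""
    h = np.empty(n_taps)
    h[0] = 1.0
    for n in range(1, n_taps):
        h[n] = h[n - 1] * (n - 1 - p) / n
    return h

def noise_shape(x, p, n_taps=64):
    """Apply the truncated fractional differentiator as a causal FIR filter."""
    return lfilter(frac_diff_coeffs(p, n_taps), [1.0], x)
```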
The SNR gain from applying a fractional derivative filter $H(f)$ is
$$SNR_{gain} = 10\log \frac{\int I_{spike}(f)\,|H(f)|^2\, df}{\int I_{noise}(f)\,|H(f)|^2\, df} - 10\log \frac{\int I_{spike}(f)\, df}{\int I_{noise}(f)\, df}, \qquad (15)$$
[Figure 4 plots: (a) Recording (mV) vs. time (minutes); (b) noise power ((µV)²) vs. time (minutes), with legend "Noise measured using conventional electrode" and "Noise measured using CNT coated electrode"; (c), (d) power/frequency (dB/Hz) vs. frequency (kHz) at 0, 15, 30, and 45 minutes. See the caption below.]
Figure 4: In vivo recording for identifying noise sources. (a) 5-minute recording segment capturing the decay of background activities. (b) Traces of the estimated noise vs. time are plotted; the black curve represents the noise recorded from a custom tungsten electrode, and the red curve represents the noise recorded from a CNT coated electrode of the same size. (c), (d) Noise power spectra estimated at 0, 15, 30, 45 minutes after the drug injection. In (c) a conventional tungsten electrode is used. In (d), a CNT coated tungsten electrode of equal size is used for comparison.
where $I_{spike}(f)$ and $I_{noise}(f)$ are the power spectra of spike and noise, respectively. Numeric values of the SNR gain depend on both the data and $p$ (the degree of noise shaping). In our experiments, we empirically choose $p$ in the range of 0.5 to 2.5, where numerically calculated SNR gains using Eq. 15 on in vivo recordings are typically more than 3 dB, which is consistent with [10].
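Eq. 15 can be evaluated numerically on estimated spectra, e.g. as in the following sketch (our naming; `H_mag` is the shaping filter's magnitude response on the same frequency grid as the spectra).

```python
import numpy as np

def snr_gain_db(I_spike, I_noise, H_mag):
    """Numerically evaluate Eq. (15) on a common frequency grid:
    the shaped spike/noise power ratio minus the unshaped one, in dB."""
    shaped = np.sum(I_spike * H_mag**2) / np.sum(I_noise * H_mag**2)
    raw = np.sum(I_spike) / np.sum(I_noise)
    return 10.0 * np.log10(shaped) - 10.0 * np.log10(raw)
```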
5 Experiment
To verify the noise analysis presented in Section 2, an in vivo experiment is performed that uses two sharp tungsten electrodes separated by 125 µm to record the hippocampus neuronal activities of a rat. One of the electrodes is coated with carbon-nanotube (CNT), while the other is uncoated. After the electrodes have been placed, a euthanizing drug is injected. The recordings from the two electrodes start 5 seconds after the drug injection and last until the time of death. The noise analysis results are summarized and presented in Figure 4. In Figure 4(a), a 5-minute segment that captures the decaying of background activities is plotted. In Figure 4(b), the estimated noise from 600 Hz to 6 kHz for both recording sites is plotted, where the noise dramatically reduces (> 80%) after the drug takes effect. Initially, the CNT electrode records comparatively larger noise (697 µV²) than the uncoated electrode (610 µV²). After a few minutes, the background noise recorded by the CNT electrode quickly reduces, eventually reaching 37 µV², about 1/3 of the noise recorded by its counterpart (112 µV²), suggesting that the noise floor when using the uncoated tungsten electrode (112 µV²) is set by the electrode. From these two plots, we can estimate that the neuron noise is around 500-600 µV² and the electrode interface noise is ∼80 µV², while the sum of electronic noise and electrolyte bulk noise is less than 37 µV² (only ∼5% of the total noise). Figure 4(c) displays the $1/f^x$ noise spectrum recorded from the uncoated tungsten electrode ($x$ = 1.8, 1.4, 1.0, 0.9, estimated at 0, 15, 30, 45 minutes after drug injection). Figure 4(d) displays the $1/f^x$ noise spectrum recorded from the CNT coated electrode ($x$ = 2.1, 1.3, 0.9, 0.8, estimated at 0, 15, 30, 45 minutes after drug injection).
Table 1: Statistics of the $1/f^x$ noise spectrum from in vivo preparations.
1/f^x                  x < 1    1 <= x < 1.5    1.5 <= x < 2    x >= 2
Number of Recordings   5        38              23              11

[Figure 5 plots: panels (a)-(d); see the caption below.]
Figure 5: In vivo experiment evaluating the proposed adaptive bandpass filter. (a) Detected spikes are aligned and superimposed. (b) Example waveforms with distinct widths are plotted. (c) Feature extraction results using PCA with a global bandpass filter (400 Hz to 5 kHz) are displayed. (d) Feature extraction results using PCA with adaptive bandpass filters are displayed, showing much improved cluster isolation compared to (c).
In the second experiment, 77 recordings of in vivo preparations are used to explore the stochastic distribution of the 1/f^x noise spectrum. The mean of x is 1.5 with a standard deviation of 0.5 (i.e., 1/f^{1.5±0.5}). The results are summarized in Table 1.
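A sketch of how the exponent x could be estimated per recording follows; the paper does not state its fitting procedure, so the log-log least-squares fit and the 600 Hz to 6 kHz band below are our assumptions.

```python
import numpy as np

def fit_one_over_f_exponent(freqs, psd, f_lo=600.0, f_hi=6000.0):
    """Estimate x in a 1/f^x noise model by a least-squares line fit of
    log10(PSD) against log10(f) over the analysis band.
    The fitting method and band edges are assumptions, not the paper's."""
    band = (freqs >= f_lo) & (freqs <= f_hi)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), deg=1)
    return -slope  # PSD ~ f^(-x)  =>  x = -slope
```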
The third experiment uses an in vivo recording from a behaving cat. This recording is used to compare the feature extraction results produced by a conventional global bandpass filter with those of the proposed adaptive bandpass filter, discussed in Section 3. In Figure 5(a), detected spikes are superimposed and a "thick waveform bundle" is observed. In Figure 5(b), example waveforms from Figure 5(a) that have different widths are shown. Clearly, these waveforms have noticeably different spectral spans. In Figure 5(c), feature extraction results using PCA (a widely used feature extraction algorithm in spike sorting applications) with a global bandpass filter are displayed. As a comparison, feature extraction results using PCA with adaptive bandpass filters are displayed in Figure 5(d), where multiple clusters are differentiable in the feature space.
In the fourth experiment, earth mover's distance (EMD), a cross-bin similarity measure that is robust to waveform misalignment [28], is applied to synthesized data to evaluate the spike waveform separation before and after noise shaping. Let $V_A(i)$, $i = 1, 2, \dots$ and $V_B(i)$, $i = 1, 2, \dots$ be the spike waveform bundles from candidate neurons A and B. To estimate the spike variation of a candidate neuron, two waveforms are randomly picked from the same waveform bundle, and the distance between them is calculated using EMD. After repeating the procedure many times, the results are plotted as the black (waveforms from $V_A$) and blue (waveforms from $V_B$) traces in Figure 6. The x-axis indexes the trial and the y-axis is the EMD score. The black/blue traces describe the intra-cluster waveform variations of the two neurons under testing. To estimate the separation between candidate neurons A and B, we randomly pick two waveforms, one from $V_A$ and the other from $V_B$, and compute the EMD between them. This procedure is repeated many times and the EMD vs. trial index is plotted as the red curve in Figure 6. Four pairs of candidate neurons are tested and shown in Figure 6(a)-(d). It can be observed from Figure 6 that the red curves are not well differentiated from the black/blue ones, which indicates that the candidate neurons are not well separated. In Figure 6(e)-(h), we apply a similar procedure to the same four pairs of candidate neurons. The only difference from the plots shown in Figure 6(a)-(d) is that the waveforms after noise shaping are used rather than their original counterparts. In Figure 6(e)-(h), the red curves separate from the black/blue traces, suggesting that the noise shaping filter improves waveform differentiation.
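The following sketch illustrates this Monte Carlo procedure. The paper uses Rubner's EMD [28] on spike waveforms; here we substitute SciPy's 1-D Wasserstein distance as a simple stand-in, and all function and variable names are ours.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def emd_1d(w1, w2):
    """1-D earth mover's distance between two waveforms, treating each
    (shifted to be nonnegative and normalized) as a mass distribution over
    sample positions. A simple stand-in for Rubner's EMD [28]."""
    idx = np.arange(len(w1))
    m1 = w1 - w1.min()
    m2 = w2 - w2.min()
    return wasserstein_distance(idx, idx, m1 / m1.sum(), m2 / m2.sum())

def separation_traces(VA, VB, trials=300, rng=np.random.default_rng(0)):
    """Replicates the fourth experiment's procedure: intra-cluster EMDs for
    neurons A and B, and inter-cluster EMDs between A and B.
    `VA`, `VB` are arrays of shape (n_waveforms, n_samples)."""
    intra_A = [emd_1d(*VA[rng.choice(len(VA), 2, replace=False)]) for _ in range(trials)]
    intra_B = [emd_1d(*VB[rng.choice(len(VB), 2, replace=False)]) for _ in range(trials)]
    inter = [emd_1d(VA[rng.integers(len(VA))], VB[rng.integers(len(VB))]) for _ in range(trials)]
    return intra_A, intra_B, inter
```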
In the fifth experiment, we apply noise shaping filters of different orders together with the same feature extraction algorithm and evaluate the feature extraction results. The noise shaping technique is developed as a general tool that can be incorporated into an unspecified feature extraction algorithm; here, we use PCA as an example. In Figure 7, the 8 figures in each row are results for the same sequence. Figures from left to right display the feature extraction results with different orders of noise shaping, from 0 (no noise shaping) to 3.5, stepped by 0.5. All the tests are obtained after adaptive band-
[Figure 6 panels (a)-(h): EMD score plotted against trial index (0 to 300); legend in each panel: IntraCluster1 Distance, IntraCluster2 Distance, InterCluster1to2 Distance.]
Figure 6: EMD vs. trial index. Black and blue traces: EMDs for intra-cluster waveforms; red trace: EMDs for inter-cluster waveforms. (a)-(d) and (e)-(h) are results of 4 different pairs of neurons before and after noise shaping, respectively. Traces in (a)-(d) and (e)-(h) have one-to-one correspondence. Noise level increases from (a) to (d).
[Figure 7 panels (a)-(p): feature-space scatter plots with both axes spanning 0 to 1.]
Figure 7: Feature extraction results using PCA with different orders of noise shaping. Each row represents a different sequence. Each column represents a different order of noise shaping (the order $p$ in $\frac{d^p f(x)}{dx^p}$), sweeping from 0 (without noise shaping) to 3.5, stepped by 0.5. (a)-(h) are results of a synthesized sequence; (i)-(p) are results of an in vivo preparation. Clearly, (f) is better than (a), and (m) is better than (i).
pass filtering. The first sequence, (a)-(h), is a synthesized one from a public database [2]; the second sequence is recorded from an in vivo preparation. For both sequences, an increased number of isolated clusters can be obtained by appropriately choosing the order of the noise shaping filter.
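For readers who wish to reproduce the noise shaping step, a minimal sketch of a truncated fractional differencing filter $(1 - z^{-1})^p$ follows, using the binomial-coefficient recursion from fractional differencing [27]. The tap count and the FIR convolution are our assumptions; the paper does not specify its implementation.

```python
import numpy as np

def fractional_diff_coeffs(p, n_taps=64):
    """Truncated FIR coefficients of (1 - z^-1)^p via the recursion
    b[0] = 1, b[k] = b[k-1] * (k - 1 - p) / k  (fractional differencing [27]).
    Sanity check: k=1 gives -p, matching the binomial expansion."""
    b = np.empty(n_taps)
    b[0] = 1.0
    for k in range(1, n_taps):
        b[k] = b[k - 1] * (k - 1 - p) / k
    return b

def noise_shape(waveform, p, n_taps=64):
    """Apply p-th order noise shaping to a (bandpass-filtered) waveform."""
    return np.convolve(waveform, fractional_diff_coeffs(p, n_taps), mode="same")
```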
6 Conclusion
In this paper, a study of multiple noise sources for in vivo neural recording is carried out. The dominant noise source is identified to be neuron noise, followed by the interface noise of the electrode. Overall, the noise exhibits a family of 1/f^x spectra. The concept of an adaptive bandpass filter is proposed to reduce noise, because it maintains the integrity of the signal spectrum while maximally reducing the broadband noise. To reduce the noise within the signal passband and improve waveform separation, a series of fractional-order-differentiator based noise shaping filters is proposed. The proposed noise removal techniques are generally applicable to an unspecified spike sorting algorithm. Experimental results from in vivo preparations, synthesized sequences, and comparative recordings using both conventional and CNT-coated electrodes are reported, which verify the noise model and demonstrate the usefulness of the proposed noise removal techniques.
References
[1] Lewicki MS. A review of methods for spike sorting: the detection and classification of neural action potentials. Network Comput Neural Syst. 1998;9:53-78.
[2] Quian Quiroga R, Nadasdy Z, Ben-Shaul Y. Unsupervised spike detection and sorting with wavelets and superparamagnetic clustering. Neural Computation. 2004 Aug;16(8):1661-1687.
[3] Bar-Hillel A, Spiro A, Stark E. Spike sorting: Bayesian clustering of non-stationary data. Advances in Neural Information Processing Systems 17. 2005;p. 105-112.
[4] Zumsteg ZS, Kemere C, O'Driscoll S, Santhanam G, Ahmed RE, Shenoy KV, et al. Power feasibility of implantable digital spike sorting circuits for neural prosthetic systems. IEEE Trans Neural Syst Rehabil Eng. 2005 Sep;13(3):272-279.
[5] Vargas-Irwin C, Donoghue JP. Automated spike sorting using density grid contour clustering and subtractive waveform decomposition. J Neurosci Methods. 2007;164(1).
[6] Yang Z, Zhao Q, Liu W. Neural signal classification using a simplified feature set with nonparametric clustering. Neurocomputing. doi:10.1016/j.neucom.2009.07.013, in press.
[7] Gasthaus J, Wood F, Görür D, Teh YW. Dependent Dirichlet process spike sorting. Advances in Neural Information Processing Systems 21. 2009;p. 497-504.
[8] Smith LS, Mtetwa N. A tool for synthesizing spike trains with realistic interference. J Neurosci Methods. 2007 Jan;159(1):170-180.
[9] Harris KD, Henze DA, Csicsvari J, Hirase H, Buzsaki G. Accuracy of tetrode spike separation as determined by simultaneous intracellular and extracellular measurements. J Neurophysiol. 2000;84:401-414.
[10] Yang Z, Zhao Q, Liu W. Spike feature extraction using informative samples. Advances in Neural Information Processing Systems 21. 2009;p. 1865-1872.
[11] Yang Z, Zhao Q, Liu W. Improving spike separation using waveform derivatives. Journal of Neural Engineering. 2009 Aug;6(4). doi:10.1088/1741-2560/6/4/046006.
[12] Davidsen J, Schuster HG. Simple model for 1/f^α noise. Phys Rev E. 2002;65:026120.
[13] Yu Y, Romero R, Lee TS. Preference of sensory neural coding for 1/f signals. Phys Rev Lett. 2005;94:108103.
[14] Bedard C, Kroger H, Destexhe A. Does the 1/f frequency scaling of brain signals reflect self-organized critical states? Phys Rev Lett. 2006;97:118102.
[15] Hassibi A, Navid R, Dutton RW, Lee TH. Comprehensive study of noise processes in electrode electrolyte interfaces. J Appl Phys. 2004 July;96(2):1074-1082.
[16] Razavi B. Design of Analog CMOS Integrated Circuits. Boston, MA: McGraw-Hill; 2001.
[17] Keener J, Sneyd J. Mathematical Physiology. New York: Springer Verlag; 1998.
[18] Manwani A, Steinmetz PN, Koch C. Channel noise in excitable neural membranes. Advances in Neural Information Processing Systems 12. 2000;p. 142-149.
[19] Fall C, Marland E, Wagner J, Tyson J. Computational Cell Biology. New York: Springer Verlag; 2002.
[20] Keefer EW, Botterman BR, Romero MI, Rossi AF, Gross GW. Carbon nanotube-coated electrodes improve brain readouts. Nat Nanotech. 2008;3:434-439.
[21] Wiley JD, Webster JG. Analysis and control of the current distribution under circular dispersive electrodes. IEEE Trans Biomed Eng. 1982;29:381-385.
[22] Denison T, Consoer K, Kelly A, Hachenburg A, Santa W. A 2.2 µW 94 nV/√Hz chopper-stabilized instrumentation amplifier for EEG detection in chronic implants. IEEE ISSCC Dig Tech Papers. 2007 Feb;8(6).
[23] Ferrari G, Gozzini F, Sampietro M. A current-sensitive front-end amplifier for nano-biosensors with a 2 MHz BW. IEEE ISSCC Dig Tech Papers. 2007 Feb;8(7).
[24] Harrison RR. The design of integrated circuits to observe brain activity. Proc IEEE. 2008 July;96:1203-1216.
[25] Kaiser JF. On a simple algorithm to calculate the energy of a signal. In Proc IEEE Int Conf Acoustics, Speech and Signal Processing. 1990;p. 381-384.
[26] Vaseghi SV. Advanced Digital Signal Processing and Noise Reduction. Wiley-Teubner; 1996.
[27] Hosking J. Fractional differencing. Biometrika. 1981 Jan;68:165-176.
[28] Rubner Y. Perceptual metrics for image database navigation. Ph.D. dissertation, Stanford University; 1999.
Boosting with Spatial Regularization
Zhen James Xiang¹, Yongxin Taylor Xi¹, Uri Hasson², Peter J. Ramadge¹
1: Department of Electrical Engineering, Princeton University, Princeton NJ, USA
2: Department of Psychology, and Neuroscience Institute, Princeton University, Princeton NJ, USA
{zxiang, yxi, hasson, ramadge}@princeton.edu
Abstract
By adding a spatial regularization kernel to a standard loss function formulation of the boosting problem, we develop a framework for spatially informed boosting. From this regularized loss framework we derive an efficient boosting algorithm that uses additional weights/priors on the base classifiers. We prove that the proposed algorithm exhibits a "grouping effect", which encourages the selection of all spatially local, discriminative base classifiers. The algorithm's primary advantage is in applications where the trained classifier is used to identify the spatial pattern of discriminative information, e.g. the voxel selection problem in fMRI. We demonstrate the algorithm's performance on various data sets.
1 Introduction
When applying off-the-shelf machine learning algorithms to data with spatial dimensions (images,
geo-spatial data, fMRI, etc) a central question arises: how to incorporate prior information on the
spatial characteristics of the data? For example, if we feed a boosting or SVM algorithm with
individual image voxels as features, the voxel spatial information is ignored. Indeed, if we randomly
shuffled the voxels, the algorithm would not notice any difference. Yet in many cases the spatial
arrangement of the voxels together with prior information about expected spatial characteristics of
the data may be very helpful. We are particularly interested in the situation when the trained classifier
is used to identify relevant spatial regions. To make this more concrete, consider the problem of
training a classifier to distinguish two different brain states based on fMRI responses. Successful
classification suggests that the voxels used are important in discriminating between the two classes.
Hence we could use a successful classifier to learn a set of discriminative voxels. We expect that
these voxels will be spatially compact and clustered. How can this prior knowledge be incorporated
into the training of the classifier? In summary, our primary objective is improving the ability of
the trained classifier to usefully identify the spatial pattern of discriminative information. However,
incorporating spatial information into boosting may also improve classification accuracy.
Our key contribution is the development of a framework for spatially regularized boosting. We
do this by adding a spatial regularization kernel to the standard loss minimization formulation of
boosting. We then design an associated boosting algorithm by using coordinate descent on the
regularized loss. We show that the algorithm minimizes the regularized loss function and has a
natural interpretation of boosting with additional adaptive priors/weights on both spatial locations
and training examples. We also show that it exhibits a natural grouping effect on nearby spatial
locations with similar discriminative power.
We believe our contributions are fundamental and relevant to a variety of applications where base
classifiers are attributed with a known auxiliary variable and prior information is known about this
auxiliary variable. However, since our study is motivated by the particular problem of voxel selection
in fMRI analysis, we briefly review the state of the art in this domain so as to put our contribution
into a concrete context.
Briefly, the fMRI voxel selection problem is to use the fMRI signal to identify a subset of voxels
that are key in discriminating between two stimuli. One expects such voxels to be spatially compact
and clustered. Traditionally this is done by thresholding a statistical univariate test score on each
voxel [1]. Spatial smoothing prior to this analysis is commonly employed to integrate activity from
neighboring voxels. An extreme case is hypothesis testings on clusters of voxels rather than on
voxels themselves [2]. The problem with these methods is that they greatly sacrifice the spatial
resolution of the results and averaging could hide fine patterns in data. An alternative is to spatially
average the univariate test scores, e.g. thresholding in some transformed domain (e.g. wavelet
domain) [3, 4]. However, this also compromises the spatial accuracy of the result because one
selects discriminating wavelet components, not voxels. A more promising spatially aware approach
selects voxels with tree-based spatial regularization of a univariate statistic [5, 6]. This can achieve
both spatial precision and smoothness but uses a complex regularization method. Our proposed
method also selects single voxels with the help of spatial regularization but operates in a multivariate
classifier framework using a simpler form of regularization.
Recent research has suggested that multivariate analysis has potential advantages over univariate
tests [7, 8], e.g. it brings in machine learning algorithms (such as boosting, SVM, etc.) and therefore might capture more intricate activation patterns involving multiple voxels. To ensure spatial
clustering of selected voxels, one can run a searchlight (a spherical mask) [9] to pre-select clustered
informative features. In each searchlight location, a multivariate analysis is performed to see whether
the masked area contains informative data. One can then train a classifier on the pre-selected voxels.
A variant of this two-stage framework is to train classifiers on a few predefined masks, and then
aggregate these classifiers by boosting [10, 11]. This is faster but assumes detailed prior knowledge
to select the predefined masks. Unlike two-stage approaches, [12] directly uses AdaBoost to train classifiers with "rich features" (features involving the values of several adjacent voxels) to capture spatial structure in the data. Although exhibiting superior performance, this method selects "rich features" rather than individual discriminating voxels. Moreover, there is no control on the spatial smoothness of the results. Our method is similar to [12] in that we combine the feature selection and classification into one boosting process, but our algorithm operates on single voxels and uses simple spatial regularization to incorporate spatial information.
The remainder of the paper is organized as follows. After introducing notation in §2, we formulate our spatial regularization approach in §3 and derive an associated spatially regularized boosting algorithm in §4. We prove an interesting property of the algorithm in §5 that guarantees the simultaneous selection of equivalent locations that are spatially close. In §6, we test the algorithm on face gender detection, OCR image classification, and fMRI experiments.
2 Boosting Preliminaries
In a supervised learning setting, we are given $m$ training instances $\mathcal{X} = \{x_i \in \mathbb{R}^n, i = 1, \dots, m\}$ and corresponding binary labels $\mathcal{Y} = \{y_i = \pm 1, i = 1, \dots, m\}$. Using the training instances $\mathcal{X}$, we select a pool of base classifiers $\mathcal{H} = \{h_j : \mathbb{R}^n \to \{-1, +1\},\ j = 1, \dots, p\}$. Our objective is to train a composite binary classifier of the form $h_\alpha(x_i) = \mathrm{sgn}(\sum_{j=1}^{p} \alpha_j h_j(x_i))$. We can further assume that $h_j \in \mathcal{H} \Rightarrow -h_j \in \mathcal{H}$; thus all values in $\alpha$ can be assumed to be nonnegative. Boosting is a technique for constructing from $\mathcal{X}$, $\mathcal{Y}$ and $\mathcal{H}$ the weight $\alpha$ of a composite classifier to best predict the labels. This can be done by seeking $\alpha$ to minimize a loss function of the form:
$$L(\mathcal{X}, \mathcal{Y}, \alpha) = \sum_{i=1}^{m} l(y_i, h_\alpha(x_i)). \qquad (1)$$
Various boosting algorithms can be derived as iterative greedy coordinate descent procedures to minimize (1) [13]. In particular, AdaBoost [14] is of this form with $l(y_i, h_\alpha(x_i)) = e^{-y_i h_\alpha(x_i)}$. The result of a conventional boosting algorithm is determined by the $m \times p$ matrix $M = [y_i h_j(x_i)]$ [15]. Under a component permutation $\tilde{x}_i = P x_i$, the base classifiers become $\tilde{h}_j = h_j \circ P^{-1}$; so $\tilde{M} = [y_i \tilde{h}_j(\tilde{x}_i)] = [y_i h_j(x_i)] = M$. Hence training on $\{P x_i, y_i\}$ or $\{x_i, y_i\}$ yields the same $\alpha$, i.e., the arrangement of the components can be arbitrary as long as it is consistent.
The weights $\alpha$ of a composite classifier not only indicate how to construct the classifier, but also the relative reliance of the classifier on each of the $n$ instance components. To see this, assume each
$h_j$ depends on only a single component of $x \in \mathbb{R}^n$, i.e., for some standard basis vector $e_k$ and function $g_j : \mathbb{R} \to \{-1, +1\}$, $h_j(x) = g_j(e_k^T x)$ (the base classifiers are decision stumps). To make the association between base classifiers and components explicit, let $s$ be the function with $s(j) = k$ if $h_j(x) = g_j(e_k^T x)$, and let $Q = [q_{kj}]$ be the $n \times p$ matrix with $q_{kj} = 1_{[s(j)=k]}$. Then the vector $\rho = Q\alpha$ indicates the relative importance the classifier assigns to each instance component. Although we used decision stumps above for simplicity, more complex base classifiers such as decision trees could be used with proper modification of the mapping from $\alpha$ to $\rho$. We call $\rho$ the component importance map. Suppose the instance components reflect spatial structure in the data, e.g. the components are samples along an interval or pixels in an image. Then the component importance map indicates the spatial distribution of weights that the classifier employs. Presumably a good classifier distributes the weights in accordance with the discriminative power of the components; in which case, the map indicates how discriminative information is spatially distributed. It is in this aspect of the classifier that we are particularly interested. Now as shown above, conventional boosting ignores spatial information. Our objective, pursued in the next sections, is to incorporate prior information on spatial structure, e.g. a prior on the component importance map, into the boosting problem.
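As an illustration (ours, not code from the paper), the component importance map $\rho = Q\alpha$ for decision stumps can be accumulated without forming $Q$ explicitly:

```python
import numpy as np

def component_importance_map(alpha, stump_components, n):
    """Compute rho = Q @ alpha for decision-stump base classifiers.
    `stump_components[j]` is s(j), the index of the single input component
    that base classifier j thresholds. Variable names are ours."""
    rho = np.zeros(n)
    for j, a in enumerate(alpha):
        rho[stump_components[j]] += a
    return rho
```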
3 Adding Spatial Regularization
To incorporate spatial information we add spatial regularization of the form $\rho^T K \rho$ to the loss (1), where the kernel $K \in \mathbb{R}^{n \times n}_{++}$ is positive definite. For concreteness, we employ the exponential loss $l(y_i, h_\alpha(x_i)) = e^{-y_i h_\alpha(x_i)}$. Thus the regularized loss is:
$$L^{exp}_{reg}(\mathcal{X}, \mathcal{Y}, \alpha) = \sum_{i=1}^{m} \exp\Big(-y_i \sum_{j=1}^{p} \alpha_j h_j(x_i)\Big) + \lambda \rho^T K \rho \qquad (2)$$
$$= \sum_{i=1}^{m} \exp\Big(-y_i \sum_{j=1}^{p} \alpha_j h_j(x_i)\Big) + \lambda \alpha^T Q^T K Q \alpha. \qquad (3)$$
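For concreteness, here is a small sketch (ours) of evaluating (3), using the matrix $M = [y_i h_j(x_i)]$ from §2:

```python
import numpy as np

def regularized_loss(alpha, M, K, Q, lam):
    """Evaluate Eq. (3): exponential loss plus the spatial penalty.
    `M` is the m x p matrix [y_i h_j(x_i)] and `Q` the n x p indicator
    matrix mapping base classifiers to components, both as defined above."""
    exp_loss = np.exp(-(M @ alpha)).sum()
    rho = Q @ alpha
    return exp_loss + lam * rho @ (K @ rho)
```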
The term $\rho^T K \rho$ imposes a spatial smoothness constraint on $\rho$. To see this, consider the eigendecomposition $K = U \Lambda U^T$, where the columns $\{u_j\}$ of $U$ are the orthonormal eigenvectors, $\lambda_j$ is the eigenvalue of $u_j$ and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_n)$. Then the regularizing term can be rewritten as $\lambda \|\Lambda^{1/2} U^T \rho\|_2^2$, where $U^T \rho$ is the "spectrum" of $\rho$ under the orthogonal transformation $U^T$. Rather than standard Tikhonov regularization with $\|\rho\|_2^2 = \|U^T \rho\|_2^2$, we penalize the variation in direction $u_j$ proportional to the eigenvalue $\lambda_j$. By doing so we are encouraging $\rho$ to be close to the eigenvectors $u_j$ with small eigenvalues. This encodes our prior spatial knowledge.

Figure 1: Each graph is the eigenimage of size $d \times d$ corresponding to an eigenvector of $K = \eta I - G$.
As an example, consider the kernel $K = \eta I - G$, where $G$ is a Gaussian kernel matrix:
$$G_{ij} = e^{-\frac{1}{2} \|v_i - v_j\|_2^2 / r^2}, \qquad (4)$$
with $v_j$ the spatial location of component $j$, $\|v_i - v_j\|_2$ the Euclidean distance (other distances can also be used) between components $i$ and $j$, and $r$ the radius parameter of the Gaussian kernel. For the 2D case, $i = (i_1, i_2)$ ranges over $(1,1), (1,2), \dots, (d,d)$, and $j = (j_1, j_2)$ ranges over the same coordinates, so $G$ is a $d^2 \times d^2$ matrix. We plot the 6 eigenimages of $K$ with smallest eigenvalues in Figure 1. The regularization imposes a spatial smoothness constraint by encouraging $\rho$ to give more weight to the eigenimages with smaller eigenvalues, e.g. the patterns shown in Figure 1.
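A sketch of constructing this kernel for a $d \times d$ image grid follows. The choice $\eta = \max_j \sum_i G_{ij}$ below is taken from the experiments in §6, and the variable names are ours:

```python
import numpy as np

def spatial_kernel_2d(d, r):
    """Build K = eta*I - G for a d x d pixel grid, with G the Gaussian
    kernel of Eq. (4) and eta = max_j sum_i G_ij (the choice used in the
    paper's experiments), which keeps K diagonally dominant and hence PSD."""
    ii, jj = np.meshgrid(np.arange(d), np.arange(d))
    v = np.column_stack([ii.ravel(), jj.ravel()])        # d^2 grid locations
    sq = ((v[:, None, :] - v[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    G = np.exp(-0.5 * sq / r**2)
    eta = G.sum(axis=0).max()
    return eta * np.eye(d * d) - G, G
```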
4 A Spatially Regularized Boosting Algorithm
We now derive a spatially regularized boosting algorithm (abbreviated as SRB) using coordinate descent on (3). In particular, in each iteration we choose a coordinate of $\alpha$ with the largest negative gradient and increase the weight of that coordinate by a step size $\epsilon$. This results in an algorithm similar to AdaBoost, but with additional consideration of spatial location.
To begin, we take the negative partial derivative of (3) w.r.t. $\alpha_{j'}$:
$$-\frac{\partial}{\partial \alpha_{j'}} L^{exp}_{reg}(\mathcal{X}, \mathcal{Y}, \alpha) = \sum_{i=1}^{m} y_i h_{j'}(x_i) \exp\Big(-y_i \sum_{j=1}^{p} \alpha_j h_j(x_i)\Big) - 2 e_{j'}^T \lambda Q^T K Q \alpha.$$
Here $e_{j'}$ is the $j'$-th standard basis vector, so $e_{j'}^T \lambda Q^T K Q \alpha$ is the $j'$-th element of $\lambda Q^T K Q \alpha$. By the definition of $Q$, $(e_{j'}^T Q^T) \lambda K Q \alpha$ is the $s(j')$-th element of $\lambda K Q \alpha$. Therefore if we define $\tau = -2\lambda K \rho$, and $w_i = \exp(-y_i \sum_{j=1}^{p} \alpha_j h_j(x_i))$ $(1 \le i \le m)$ to be the unnormalized weight on training instance $x_i$, then the partial derivative above can be written as:
$$-\frac{\partial}{\partial \alpha_{j'}} L^{exp}_{reg}(\mathcal{X}, \mathcal{Y}, \alpha) = \sum_{i=1}^{m} y_i h_{j'}(x_i) w_i + \tau_{s(j')}.$$
The term $\sum_{i=1}^{m} y_i h_{j'}(x_i) w_i$ is the weighted performance of base classifier $h_{j'}$ on the training examples. Normally, we choose $h_{j'}$ to maximize this term; this corresponds to choosing the best base classifier under the current weight distribution. However, here we have an additional term: the performance of base classifier $h_{j'}$ is enhanced by a weight $\tau_{s(j')}$ on its corresponding component $s(j')$. We call $\tau$ the spatial compensation weight. To proceed, we choose a base classifier $h_{j'}$ to maximize the sum of these two terms and then increase the weight of that base classifier by a step size $\epsilon$. This gives Algorithm 1 shown in Figure 2. The key differences from AdaBoost are: (a) the new algorithm maintains a new set of "spatial compensation weights" $\tau$; (b) the weights on training examples $w_i$ are not normalized at the end of each iteration.
Algorithm 1 The SRB algorithm
1: $w_i \leftarrow 1$, $1 \le i \le m$
2: $\alpha \leftarrow 0$
3: for $t = 1$ to $T$ do
4:   $\rho \leftarrow Q\alpha$
5:   $\tau \leftarrow -2\lambda K \rho$
6:   find the "best" base classifier in the following sense: $j' \leftarrow \arg\max_j \pi(h_j, w) + \tau_{s(j)}$
7:   choose a step size $\epsilon$; $\alpha_{j'} \leftarrow \alpha_{j'} + \epsilon$
8:   adjust weights, for $1 \le i \le m$: $w_i \leftarrow w_i e^{\epsilon}$ if $y_i h_{j'}(x_i) = -1$; $w_i \leftarrow w_i e^{-\epsilon}$ if $y_i h_{j'}(x_i) = 1$
9: end for
10: Output result: $h_\alpha(x) = \sum_{j=1}^{p} \alpha_j h_j(x)$

In both algorithms, $\pi(h_j, w)$ is defined to be
$$\pi(h_j, w) = \sum_{i=1}^{m} y_i h_j(x_i) w_i,$$
which is a performance measure of classifier $h_j$ under weight distribution $w$ on the training examples.

Algorithm 2 SRB algorithm with backward steps
1: $w_i \leftarrow 1$, $1 \le i \le m$
2: $\alpha \leftarrow 0$
3: for $t = 1$ to $T$ do
4:   $\rho \leftarrow Q\alpha$
5:   $\tau \leftarrow -2\lambda K \rho$
6:   find the "best" base classifier in the following sense: $j' \leftarrow \arg\max_j \pi(h_j, w) + \tau_{s(j)}$
7:   choose a step size $\epsilon_1$; $\alpha_{j'} \leftarrow \alpha_{j'} + \epsilon_1$
8:   adjust weights: $w_i \leftarrow w_i e^{\epsilon_1}$ if $y_i h_{j'}(x_i) = -1$; $w_i \leftarrow w_i e^{-\epsilon_1}$ if $y_i h_{j'}(x_i) = 1$
9:   find the "worst" active classifier in the following sense: $j'' \leftarrow \arg\min_{j: \alpha_j > 0} \pi(h_j, w) + \tau_{s(j)}$
10:  $\alpha_{j''} \leftarrow \alpha_{j''} - \epsilon_2/2$
11:  adjust weights again, for $1 \le i \le m$: $w_i \leftarrow w_i e^{-\epsilon_2/2}$ if $y_i h_{j''}(x_i) = -1$; $w_i \leftarrow w_i e^{\epsilon_2/2}$ if $y_i h_{j''}(x_i) = 1$
12: end for
13: Output result: $h_\alpha(x) = \sum_{j=1}^{p} \alpha_j h_j(x)$

Figure 2: The SRB (spatially regularized boosting) algorithms.
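To make Algorithm 1 concrete, here is a minimal sketch with decision-stump base classifiers and a fixed step size $\epsilon$. It is a simplification (no backward steps, no greedy step-size selection), and all names are ours, not the paper's:

```python
import numpy as np

def srb_fit(X, y, K, stump_components, stump_fns, lam=0.1, eps=0.1, T=100):
    """A minimal sketch of SRB (Algorithm 1) with a fixed step size.
    `stump_fns[j](X)` returns the +/-1 predictions of base classifier j;
    `stump_components[j]` is its component index s(j)."""
    m, n = X.shape
    p = len(stump_fns)
    H = np.column_stack([h(X) for h in stump_fns])     # m x p matrix of h_j(x_i)
    M = y[:, None] * H                                 # m x p matrix [y_i h_j(x_i)]
    s = np.asarray(stump_components)
    alpha, w = np.zeros(p), np.ones(m)
    for _ in range(T):
        rho = np.bincount(s, weights=alpha, minlength=n)  # rho = Q alpha
        tau = -2.0 * lam * (K @ rho)                      # spatial compensation weights
        scores = w @ M + tau[s]                           # pi(h_j, w) + tau_{s(j)}
        j = int(np.argmax(scores))
        alpha[j] += eps
        w *= np.exp(-eps * M[:, j])                       # unnormalized reweighting
    return alpha
```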
To elucidate the effect of the compensation weights, consider the kernel $K = \eta I - G$, with $G$ defined in (4). In this case, $\tau = 2\lambda(\bar{\rho} - \eta\rho)$, where $\bar{\rho} = G\rho$ is the Gaussian smoothing of $\rho$. Therefore, a component receives a high compensation weight $\tau_k = 2\lambda(\bar{\rho}_k - \eta\rho_k)$ if some neighboring spatial locations have already been selected (i.e., made "active") by the composite classifier. On the other hand, the weight of a component is reduced (proportional to the magnitude of parameter $\eta$) if it is already "active", i.e., $\rho_k > 0$. So the algorithm encourages the selection of base classifiers associated with "inactive" locations that are close to "active" locations.
We can enhance the algorithm by including a backward step each iteration: $\alpha_{j''} \leftarrow \alpha_{j''} - \epsilon'$, where
$$j'' = \arg\min_{1 \le j \le p,\ \alpha_j > 0} \Big\{ \sum_{i=1}^{m} y_i h_j(x_i) w_i + \tau_{s(j)} \Big\}. \qquad (5)$$
This helps remove prematurely selected base classifiers [16, 17]. This is Algorithm 2 in Figure 2.
Spatial regularization brings no significant computational overhead: compared to AdaBoost, SRB has the additional steps 4 and 5, which can be computed in time O(n) per iteration. The adaptive weight $\tau$ incurs no additional complexity for step 6 in our current implementation.
We now briefly discuss the choice of step size $\epsilon$ in Algorithm 1 ($\epsilon_1$ and $\epsilon_2$ in Algorithm 2 can be chosen similarly). $\epsilon$ could be a fixed (small) step size at each iteration. This is not greedy but may necessitate a large number of iterations. Alternatively, one can be greedy and select $\epsilon$ to minimize the value of the loss function (3) after the change $\alpha_{j'} \leftarrow \alpha_{j'} + \epsilon$:
$$W_- e^{\epsilon} + W_+ e^{-\epsilon} + \lambda(\rho + \epsilon e_{k'})^T K (\rho + \epsilon e_{k'}), \qquad (6)$$
where $W_- = \sum_{i: y_i h_{j'}(x_i) = -1} \exp(-y_i h_\alpha(x_i))$, $W_+ = \sum_{i: y_i h_{j'}(x_i) = 1} \exp(-y_i h_\alpha(x_i))$ and $k' = s(j')$. Setting the derivative of (6) to 0 yields:
$$W_- e^{\epsilon} - W_+ e^{-\epsilon} - \tau_{k'} + 2\lambda\epsilon K_{k'k'} = 0. \qquad (7)$$
Using $e^{\pm\epsilon} \approx 1 \pm \epsilon$ gives the solution $\hat{\epsilon} = \frac{W_+ - W_- + \tau_{k'}}{W_+ + W_- + 2\lambda K_{k'k'}}$, which can be used as a step size. However, for the following slightly more conservative step size we can prove algorithm convergence:
$$\hat{\epsilon} = \min\Big\{ 3\,\frac{W_+ - W_-}{W_+ + 1.36 W_-},\ \frac{W_+ - W_- + \tau_{k'}}{W_+ + W_- + 2\lambda K_{k'k'}},\ 1 \Big\}. \qquad (8)$$
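A one-line translation of (8), as reconstructed above (function and argument names are ours):

```python
def conservative_step(W_plus, W_minus, tau_k, lam, K_kk):
    """The convergent step size of Eq. (8)."""
    a = 3.0 * (W_plus - W_minus) / (W_plus + 1.36 * W_minus)
    b = (W_plus - W_minus + tau_k) / (W_plus + W_minus + 2.0 * lam * K_kk)
    return min(a, b, 1.0)
```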
Theorem 1. The step size (8) ensures convergence of Algorithm 1.

Proof. (6) is convex, so its minimum point $\epsilon^*$ is the unique solution of (7): $f_1(\epsilon^*) + f_2(\epsilon^*) = 0$, where $f_1(\epsilon) = W_- e^{\epsilon} - W_+ e^{-\epsilon}$ and $f_2(\epsilon) = 2\lambda K_{k'k'}\epsilon - \tau_{k'}$. We have the inequality chain:
$$f_1(\hat{\epsilon}) + f_2(\hat{\epsilon}) \le g_1(\hat{\epsilon}) + f_2(\hat{\epsilon}) \le g_1(\tilde{\epsilon}) + f_2(\tilde{\epsilon}) = 0 = f_1(\epsilon^*) + f_2(\epsilon^*), \qquad (9)$$
where $g_1(\epsilon) = W_-(1+\epsilon) - W_+(1-\epsilon)$ and $\tilde{\epsilon}$ is the linearized solution $\frac{W_+ - W_- + \tau_{k'}}{W_+ + W_- + 2\lambda K_{k'k'}}$. So $\hat{\epsilon}$ is on the descending slope of (6), which is a sufficient condition for $\hat{\epsilon}$ to reduce the objective (6). Since the objective (3) is nonnegative and each iteration of the algorithm reduces (3), the algorithm converges. The second inequality in (9) uses monotonicity while the first inequality in (9) uses the following lemma, proved in the supplementary material:

Lemma: If $0 < \epsilon \le \min\{3\frac{W_+ - W_-}{W_+ + 1.36 W_-}, 1\}$, then $f_1(\epsilon) \le g_1(\epsilon) \le 0$.
5 The Grouping Effect: Asymptotic Analysis
Recall our objective of using the component importance map of the trained classifier to ascertain the spatial distribution of informative components in the data. Ideally, we would like $\rho$ to faithfully represent this information. In general, however, a boosting algorithm will select a sufficient but incomplete collection of base classifiers (and hence components) to accomplish the classification. For example, after selecting one base classifier $h_j$, AdaBoost will adjust the weights of the training examples to make the weighted training error of $h_j$ exactly $\frac{1}{2}$ (totally uninformative), thus preventing the selection of any classifiers similar to $h_j$ in the next iteration. In fact, for AdaBoost we can prove that in the optimal solution $\alpha^*$, we can transfer coefficient weight between any two equivalent base classifiers without impacting optimality. So minimizing the loss function (1) does not require any particular distribution among the $\alpha$ coefficients of identical components. This is the content of the following proposition.
Proposition 1. Assume $h_{j_1}$ and $h_{j_2}$, $j_1 < j_2$, are base classifiers with $s(j_1) \ne s(j_2)$, and $h_{j_1}(x_i) = h_{j_2}(x_i)$ for all $x_i \in \mathcal{X}$. If $\alpha^*$ minimizes the loss function (1), then for any $\delta \in [0, \min\{\alpha^*_{j_1}, \alpha^*_{j_2}\}]$, $\tilde{\alpha}$ also minimizes loss function (1), where $\tilde{\alpha} = \alpha^* - \delta e_{j_1} + \delta e_{j_2}$ and $e_j$ denotes the $j$-th standard basis vector in $\mathbb{R}^p$.

Proof. $h_{j_1}(x_i) = h_{j_2}(x_i)$ implies that $h_{\tilde{\alpha}}(x_i) = h_{\alpha^*}(x_i)$ for all $x_i \in \mathcal{X}$.
What is desirable is a "grouping effect", in which components with similar behavior under $\mathcal{H}$ receive similar $\rho$ weights. We will prove that asymptotically, SRB exhibits a "grouping effect". In particular, for the kernel $K = \eta I - G$, with $G$ defined in (4), we will look at the minimizer $\rho^* = Q\alpha^*$ of the loss function (2), and in the spirit of [18], establish a bound on the difference $|\rho^*_{k_1} - \rho^*_{k_2}|$ of the coefficients on two similar components.

To proceed, let $\alpha^*$ minimize (3), with $\rho^* = Q\alpha^*$, $\tau^* = -2\lambda K \rho^*$, and corresponding training instance weights $w^*$. Let $\mathcal{H}_k$ denote the subset of base classifiers acting on component $k$, i.e., $\mathcal{H}_k = \{h_j \in \mathcal{H} : s(j) = k\}$. The following lemma is proved in the supplementary material:

Lemma: For any $k$, $1 \le k \le n$, $-\tau^*_k \ge \max_{h_j \in \mathcal{H}_k} \sum_{i=1}^{m} y_i h_j(x_i) w^*_i$, with equality if $\rho^*_k > 0$.

Assuming $K = \eta I - G$, with $G$ defined in (4), we have the following result:

Theorem 2. Let $\bar{\rho}^* = G\rho^*$ be the smoothed version of the vector $\rho^*$. Then for any $k_1$ and $k_2$:
$$|\rho^*_{k_1} - \rho^*_{k_2}| \le \frac{1}{\eta}|\bar{\rho}^*_{k_1} - \bar{\rho}^*_{k_2}| + \frac{1}{2\lambda\eta}\, d(k_1, k_2), \qquad (10)$$
where $d(k_1, k_2) = \big| \max_{h_j \in \mathcal{H}_{k_1}} \sum_{i=1}^{m} y_i h_j(x_i) w^*_i - \max_{h_j \in \mathcal{H}_{k_2}} \sum_{i=1}^{m} y_i h_j(x_i) w^*_i \big|$.
Proof. We prove the following three cases separately:

(1). $\rho^*_{k_1}$ and $\rho^*_{k_2}$ are both positive. In this case, using the lemma (with equality) on $\rho^*_{k_1}$ and $\rho^*_{k_2}$ yields: $|(2\lambda\bar{\rho}^*_{k_1} - 2\lambda\eta\rho^*_{k_1}) - (2\lambda\bar{\rho}^*_{k_2} - 2\lambda\eta\rho^*_{k_2})| = |\tau^*_{k_1} - \tau^*_{k_2}| = d(k_1, k_2)$. We can then use the triangle inequality on the LHS to obtain the result.

(2). One of $\rho^*_{k_1}$ and $\rho^*_{k_2}$ is zero, the other is positive. WLOG assume $\rho^*_{k_1} = 0$. Then $-\tau^*_{k_1} \ge \max_{h_j \in \mathcal{H}_{k_1}} \sum_{i=1}^{m} y_i h_j(x_i) w^*_i$ and $-\tau^*_{k_2} = \max_{h_j \in \mathcal{H}_{k_2}} \sum_{i=1}^{m} y_i h_j(x_i) w^*_i$. This gives:
$$\tau^*_{k_1} - \tau^*_{k_2} \le \max_{h_j \in \mathcal{H}_{k_2}} \sum_{i=1}^{m} y_i h_j(x_i) w^*_i - \max_{h_j \in \mathcal{H}_{k_1}} \sum_{i=1}^{m} y_i h_j(x_i) w^*_i \le d(k_1, k_2).$$
Substituting the definition of $\tau$, $\tau = -2\lambda K\rho = 2\lambda G\rho - 2\lambda\eta\rho = 2\lambda\bar{\rho} - 2\lambda\eta\rho$, yields $(2\lambda\bar{\rho}^*_{k_1} - 2\lambda\eta \cdot 0) - (2\lambda\bar{\rho}^*_{k_2} - 2\lambda\eta\rho^*_{k_2}) \le d(k_1, k_2)$. Therefore $2\lambda\eta\rho^*_{k_2} \le (2\lambda\bar{\rho}^*_{k_2} - 2\lambda\bar{\rho}^*_{k_1}) + d(k_1, k_2)$. Using the triangle inequality on the right hand side of the previous expression yields the result.

(3). $\rho^*_{k_1} = \rho^*_{k_2} = 0$. In this case, the inequality is obvious.
The theorem upper bounds the difference in the importance coefficients of two components by the sum of two terms. The first, $\frac{1}{\eta}|\bar{\rho}^*_{k_1} - \bar{\rho}^*_{k_2}|$, takes into account the importance weights of nearby locations. This term is small when the two locations are spatially close, or when they lie in two neighborhoods that contain a similar amount of important voxels. The second term reflects the dissimilarity between two voxels: it measures the difference in the weighted performances of each location's best base classifier. Clearly, $d(k_1, k_2) = 0$ when components $k_1$ and $k_2$ are identical under $\mathcal{H}$ over the training instances. More generally, we can sort all the training examples by the activation level on a single component. If sorting on locations $k_1$ and $k_2$ yields the same results, then $d(k_1, k_2) = 0$.
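For completeness, $d(k_1, k_2)$ is cheap to compute from the matrix $M = [y_i h_j(x_i)]$; a sketch (ours):

```python
import numpy as np

def component_dissimilarity(M, w, s, k1, k2):
    """d(k1, k2) from Theorem 2: the absolute difference between the best
    weighted performances over the base classifiers acting on components
    k1 and k2. `M` is the m x p matrix [y_i h_j(x_i)]; `s[j]` is s(j)."""
    s = np.asarray(s)
    scores = w @ M                     # weighted performance of every h_j
    best = lambda k: scores[s == k].max()
    return abs(best(k1) - best(k2))
```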
6 Experiments
The first experiment is gender classification using features located on 58 annotated landmark points in the IMM face data set [19] (Figure 3(a)). For each point we extract the first 3 principal components of a 15×15 window as features. We randomly choose 7 males and 7 females and run leave-one-out 7-fold cross-validation for 100 trials. AdaBoost yields an average classification accuracy of $\mu = 78.8\%$ with a standard deviation of $\sigma = 19.9\%$. SRB ($\lambda = 0.1$, $r = 10$ pixel-lengths) achieves $\mu = 80.5\%$ and $\sigma = 18.7\%$. The component importance map $\rho$ of SRB reveals both eyes as discriminating areas and demonstrates the grouping effect. (All experiments in this section use $\eta = \max_j(\sum_i G_{ij})$. By (10), a larger $\eta$ will make the grouping effect more dominant.) The $\rho$ for AdaBoost is less smooth and less interpretable, with the most important component on the left chin (Figure 3(b,c)).
Figure 3: Experiment 1. (a): an example showing annotated points; (b-c): the average component importance map $\rho$ (indicated by the sizes of the circles) after running (b) AdaBoost and (c) SRB for 50 iterations.

Figure 4: Experiment 2. (a-d): example images; (e): example training image with noise; (f): ground truth of discriminative pixels; (g-h): pixels selected by (g) AdaBoost and (h) SRB.
The second experiment is a binary image classification task. Each image contains the handwritten digits 1, 1, 0, 3 and a random digit, all in fixed locations; digits 0 and 1 are swapped between the classes (Figure 4(a-d)). The handwritten digit images are from the OCR digits data set [20]. To obtain the training/testing instances we add noise to the images (Figure 4(e)). We test the ability of several algorithms to: (a) find the discriminating pixels, and (b) if a classification algorithm, accurately classify the classes. The quality of pixel selection is measured by a precision-recall curve, with ground truth pixels (Figure 4(f)) selected by a t-test on the two classes of noiseless images. This curve is plotted for the following methods: (1) SRB ($\lambda = 0.5$, $r = \sqrt{12}$ pixel-lengths); (2) AdaBoost; (3) thresholding the univariate t-test score; (4) thresholding the first one or two principal component(s); (5) thresholding the pixel coefficients in an LDA model with diagonal covariance (Gaussian naive Bayes classifier); (6) the level-set method [6] on a Z-statistics map. We plot the precision-recall curve by varying the number of iterations (for (1), (2)) or the value of the threshold (for (3)-(6)). We also tried all methods with Gaussian spatial pre-smoothing as a preprocessing step. The classification accuracies are measured for methods (1), (2) and (5) on separate test data.

The results, averaged over 100 noise realizations, are plotted in Figure 5. SRB showed no loss of classification accuracy nor convergence speed (usually converging within 100 iterations), and achieved the best pixel selection among all methods. It is better than Gaussian naive Bayes and the PCA methods, even when the noise matches the i.i.d. Gaussian assumption of these methods (Figure 5(a,d)). In all cases, local spatial averaging deteriorates the classification performance of boosting.
to discriminate two types of scenes (faces and objects) based on the fMRI responses. Each fMRI
responses is a single TR scan of the brain volume. We divide the data (14 subjects, 26 face and
18 object fMRI responses) into 10 cross validation groups and average the classification accuracies.
SRB (? = 0.1, r = 5 voxel-length) trained for 100 iterations yields accuracy ? = 73.3% with
? = 9.3% across 14 subjects. AdaBoost yields ? = 75.5% with ? = 4.9%. To make sure this
is significant, we repeated the training with shuffled labels. After shuffling, ? = 49.7%, with
? = 4.6%, which is effectively chance. We note that spatially regularized boosting yields a more
clustered and interpretable selection of voxels. The result for one subject (Figure 6) shows that
standard boosting (AdaBoost) selects voxels scattered in the brain, while SRB selects clustered
voxels and nicely highlights the relevant FFA area [21] and posterior central sulcus [22, 23].
7 Conclusions
The proposed SRB algorithm is applicable to a variety of situations in which one needs to boost
the performance of base classifiers with spatial structure. The mechanism of the algorithm has a
[Figure 5 panels: (a)-(c) plot classification accuracy on test images against iterations of boosting (0 to 200); (d)-(f) plot precision against recall (both 0 to 1). Legends list: Spatial Regularized Boosting (with/without smoothing), AdaBoost (with/without smoothing), Gaussian naive Bayes (with/without smoothing), univariate test (with/without smoothing), PCA (first PC; first two PCs; with smoothing, first PC), and level-set (with/without smoothing).]
Figure 5: Experiment 2. (a-c): test classification accuracy with (a) i.i.d. Gaussian noise, (b) Poisson noise, (c) spatially correlated Gaussian noise; (b,c) share the legend of (a). (d-f): pixel selection performance with (d) i.i.d. Gaussian noise, (e) Poisson noise, (f) spatially correlated Gaussian noise; (e,f) share the legend of (d).
Figure 6: Experiment 3: an example of the sets of voxels selected by (a) a univariate t-test, (b) AdaBoost, and (c) SRB.
natural interpretation: in each iteration, the algorithm selects a base classifier with the best performance evaluated under two sets of weights: weights on training examples (as in AdaBoost) and weights on locations. The additional set of location weights encourages or discourages the selection of certain base classifiers based on the spatial locations of base classifiers that have already been selected. Computationally, SRB is as efficient as AdaBoost. We demonstrated the effectiveness of the algorithm both by providing a theoretical analysis of the "grouping effect" and by experiments on three data sets. The grouping effect is clearly demonstrated in the face gender detection experiment. In the OCR classification experiment, the algorithm shows superior performance in pixel selection accuracy without loss of classification accuracy; it matches the performance of the state-of-the-art set estimation methods [6] that use a more complex spatial regularization and cycle spinning technique. In the fMRI experiment, the algorithm yields a clustered selection of voxels in positions relevant to the task. An alternative approach, being explored, is to combine searchlight [9] with a strong learning algorithm (e.g. SVM) to integrate spatial locality and accurate classification.
8 Acknowledgments
The authors thank Princeton University's J. Insley Blair Pyne Fund for seed research funding.
References
[1] K.J. Friston, J. Ashburner, J. Heather, et al. Statistical parametric mapping. Neuroscience Databases: A Practical Guide, page 237, 2003.
[2] R. Heller, D. Stanley, D. Yekutieli, N. Rubin, and Y. Benjamini. Cluster-based analysis of fMRI data. NeuroImage, 33(2):599-608, 2006.
[3] D. Van De Ville, T. Blu, and M. Unser. Integrated wavelet processing and spatial statistical testing of fMRI data. NeuroImage, 23(4):1472-1485, 2004.
[4] D. Van De Ville, M.L. Seghier, F. Lazeyras, T. Blu, and M. Unser. WSPM: Wavelet-based statistical parametric mapping. NeuroImage, 37(4):1205-1217, 2007.
[5] Z. Harmany, R. Willett, A. Singh, and R. Nowak. Controlling the error in fMRI: Hypothesis testing or set estimation? In Biomedical Imaging, 5th IEEE International Symposium on, pages 552-555, 2008.
[6] R.M. Willett and R.D. Nowak. Minimax optimal level-set estimation. IEEE Transactions on Image Processing, 16(12):2965-2979, 2007.
[7] J.V. Haxby, M.I. Gobbini, M.L. Furey, A. Ishai, J.L. Schouten, and P. Pietrini. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539):2425-2430, 2001.
[8] K.A. Norman, S.M. Polyn, G.J. Detre, and J.V. Haxby. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences, 10(9):424-430, 2006.
[9] N. Kriegeskorte, R. Goebel, and P. Bandettini. Information-based functional brain mapping. Proceedings of the National Academy of Sciences, 103(10):3863-3868, 2006.
[10] V. Koltchinskii, M. Martínez-Ramón, and S. Posse. Optimal aggregation of classifiers and boosting maps in functional magnetic resonance imaging. Advances in Neural Information Processing Systems, 17:705-712, 2005.
[11] M. Martínez-Ramón, V. Koltchinskii, G.L. Heileman, and S. Posse. fMRI pattern classification using neuroanatomically constrained boosting. NeuroImage, 31(3):1129-1141, 2006.
[12] M.K. Carroll, K.A. Norman, J.V. Haxby, and R.E. Schapire. Exploiting spatial information to improve fMRI pattern classification. In 12th Annual Meeting of the Organization for Human Brain Mapping, Florence, Italy, 2006.
[13] J.H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5):1189-1232, 2001.
[14] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In European Conference on Computational Learning Theory, pages 23-37, 1995.
[15] C. Rudin, I. Daubechies, and R.E. Schapire. The dynamics of AdaBoost: Cyclic behavior and convergence of margins. Journal of Machine Learning Research, 5(2):1557, 2005.
[16] Z.J. Xiang and P.J. Ramadge. Sparse boosting. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2009.
[17] T. Zhang. Adaptive forward-backward greedy algorithm for sparse learning with linear models. In Proc. Neural Information Processing Systems, 2008.
[18] H. Zou and T. Hastie. Regression shrinkage and selection via the elastic net, with applications to microarrays. JR Statist. Soc. B, 2004.
[19] M.M. Nordstrøm, M. Larsen, J. Sierakowski, and M.B. Stegmann. The IMM face database: an annotated dataset of 240 face images. Technical report, DTU Informatics, Building 321, 2004.
[20] A. Asuncion and D.J. Newman. UCI machine learning repository, 2007.
[21] N. Kanwisher, J. McDermott, and M.M. Chun. The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17(11):4302-4311, 1997.
[22] U. Hasson, M. Harel, I. Levy, and R. Malach. Large-scale mirror-symmetry organization of human occipito-temporal object areas. Neuron, 37(6):1027-1041, 2003.
[23] U. Hasson, Y. Nir, I. Levy, G. Fuhrmann, and R. Malach. Intersubject synchronization of cortical activity during natural vision. Science, 303(5664):1634-1640, 2004.
2,975 | 3,697 | Thresholding Procedures for High Dimensional
Variable Selection and Statistical Estimation
Shuheng Zhou
Seminar für Statistik
ETH Zürich
CH-8092, Switzerland
Abstract
Given $n$ noisy samples with $p$ dimensions, where $n \ll p$, we show that the multi-step thresholding procedure can accurately estimate a sparse vector $\beta \in \mathbb{R}^p$ in a linear model, under the restricted eigenvalue conditions (Bickel-Ritov-Tsybakov 09). Thus our conditions for model selection consistency are considerably weaker than what has been achieved in previous works. More importantly, this method allows very significant values of $s$, which is the number of non-zero elements in the true parameter. For example, it works for cases where the ordinary Lasso would have failed. Finally, we show that if $X$ obeys a uniform uncertainty principle and if the true parameter is sufficiently sparse, the Gauss-Dantzig selector (Candès-Tao 07) achieves the $\ell_2$ loss within a logarithmic factor of the ideal mean square error one would achieve with an oracle which would supply perfect information about which coordinates are non-zero and which are above the noise level, while selecting a sufficiently sparse model.

1 Introduction
In a typical high dimensional setting, the number of variables $p$ is much larger than the number of observations $n$. This challenging setting appears in linear regression, signal recovery, covariance selection in graphical modeling, and sparse approximations. In this paper, we consider recovering $\beta \in \mathbb{R}^p$ in the following linear model:
$$Y = X\beta + \epsilon, \qquad (1.1)$$
where $X$ is an $n \times p$ design matrix, $Y$ is a vector of noisy observations and $\epsilon$ is the noise term. We assume throughout this paper that $p \gg n$ (i.e. high-dimensional), $\epsilon \sim N(0, \sigma^2 I_n)$, and the columns of $X$ are normalized to have $\ell_2$ norm $\sqrt{n}$. Given such a linear model, two key tasks are to identify the relevant set of variables and to estimate $\beta$ with bounded $\ell_2$ loss.

In particular, recovery of the sparsity pattern $S = \operatorname{supp}(\beta) := \{j : \beta_j \neq 0\}$, also known as variable (model) selection, refers to the task of correctly identifying the support set (or a subset of "significant" coefficients in $\beta$) based on the noisy observations. Even in the noiseless case, recovering $\beta$ (or its support) from $(X, Y)$ seems impossible when $n \ll p$. However, a line of recent research shows that it becomes possible when $\beta$ is also sparse: when it has a relatively small number of nonzero coefficients and when the design matrix $X$ is also sufficiently nice, which we elaborate below. One important stream of research, which we also adopt here, requires computational feasibility for the estimation methods, among which the Lasso and the Dantzig selector are both well studied and shown with provable nice statistical properties; see for example [11, 9, 19, 21, 5, 18, 12, 2]. For a chosen penalization parameter $\lambda_n \geq 0$, regularized estimation with the $\ell_1$-norm penalty, also known
as the Lasso [16] or Basis Pursuit [6], refers to the following convex optimization problem:
$$\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \frac{1}{2n}\|Y - X\beta\|_2^2 + \lambda_n\|\beta\|_1, \qquad (1.2)$$
where the scaling factor $1/(2n)$ is chosen by convenience; the Dantzig selector [5] is defined as
$$(DS)\quad \arg\min_{\hat{\beta} \in \mathbb{R}^p} \|\hat{\beta}\|_1 \quad \text{subject to} \quad \Big\|\frac{1}{n}X^T(Y - X\hat{\beta})\Big\|_\infty \leq \lambda_n. \qquad (1.3)$$
Our goal in this work is to recover S as accurately as possible: we wish to obtain ?b such that
b \ S| (and sometimes |S? supp(?)|
b also) is small, with high probability, while at the same
| supp(?)
time k?b??k22 is bounded within logarithmic factor of the ideal mean square error one would achieve
with an oracle which would supply perfect information about which coordinates are non-zero and
which are above the noise level (hence achieving the oracle inequality as studied in [7, 5]); We deem
the bound on ?2 -loss as a natural criteria for evaluating a sparse model when it is not exactly S. Let
s = |S|. Given T ? {1, . . . , p}, let us define XT as the n ? |T | submatrix obtained by extracting
columns of X indexed by T ; similarly, let ?T ? R|T | , be a subvector of ? ? Rp confined to T .
Formally, we study a Multi-step Procedure: First we obtain an
pinitial estimator ?init using the Lasso
as in (1.2) or the Dantzig selector as in (1.3), with ?n = ?(? 2 log p/n).
1. We then threshold the estimator ?init with t0 , with the general goal such that, we get a
set I1 with cardinality at most 2s; in general, we also have |I1 ? S| ? 2s, where I1 =
{j ? {1, . . . , p} : ?j,init ? t0 } for some t0 to be specified. Set I = I1 .
2. We then feed (Y, XI ) to either the Lasso estimator as in (1.2) or the ordinary least squares
b where we set ?bI = (X T XI )?1 X T Y and ?bI c = 0.
(OLS) estimator to obtain ?,
I
I
p
3. We then possibly threshold ?bI1 with t1 = 4?n |I1 | (to be specified), to obtain I2 , repeat
b
step 2 with I = I2 to obtain ?bI and set all other coordinates to zero; return ?.
Our algorithm is constructive in that it does not rely on the unknown parameters s, ?min :=
minj?S |?j | or those that characterize the incoherence conditions on X; instead, our choice of ?n
and thresholding parameters only depends on ?, n, and p. In our experiments, we apply only the
first two steps, which we refer to as a two-step procedure; In particular, the Gauss-Dantzig selector
is a two-step procedure with the Dantzig selector as ?init [5]. In theory, we apply the third step only
when ?min is sufficiently large and when we wish to get a ?sparser? model I.
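To make the two-step variant concrete, here is a minimal numpy sketch. The Lasso initializer is implemented as plain iterative soft-thresholding (ISTA) purely for self-containment; any Lasso or Dantzig solver can be substituted, and the default threshold $t_0 = 4\lambda_n$ is an illustrative choice within the range the theory allows, not a prescription from the paper.

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Minimize (1/2n)||y - X beta||_2^2 + lam * ||beta||_1 by iterative
    soft thresholding (a basic proximal-gradient Lasso solver)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n        # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

def two_step(X, y, sigma, t0=None):
    """Two-step procedure: Lasso init, hard-threshold at t0, OLS refit on I."""
    n, p = X.shape
    lam = sigma * np.sqrt(2.0 * np.log(p) / n)   # lambda_n = Theta(sigma sqrt(2 log p / n))
    beta_init = ista_lasso(X, y, lam)
    t0 = 4.0 * lam if t0 is None else t0         # illustrative default threshold
    I = np.flatnonzero(np.abs(beta_init) >= t0)
    beta = np.zeros(p)
    if I.size:
        beta[I], *_ = np.linalg.lstsq(X[:, I], y, rcond=None)  # OLS on X_I
    return beta, I
```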
More definitions. For a matrix $A$, let $\Lambda_{\min}(A)$ and $\Lambda_{\max}(A)$ denote the smallest and the largest eigenvalues respectively. We refer to a vector $\beta \in \mathbb{R}^p$ with at most $s$ non-zero entries, where $s \leq p$, as an $s$-sparse vector. Throughout this paper, we assume that $n \geq 2s$ and
$$\Lambda_{\min}(2s) = \min_{\theta \neq 0;\ 2s\text{-sparse}} \|X\theta\|_2^2/(n\|\theta\|_2^2) > 0. \qquad (1.4)$$
It is clear that $n \geq 2s$ is necessary, as any submatrix with more than $n$ columns must be singular. In general, we also assume $\Lambda_{\max}(s) = \max_{\theta \neq 0;\ s\text{-sparse}} \|X\theta\|_2^2/(n\|\theta\|_2^2) < \infty$. As defined in [4], the $s$-restricted isometry constant $\delta_s$ of $X$ is the smallest quantity such that
$$(1 - \delta_s)\|c\|_2^2 \leq \|X_T c\|_2^2/n \leq (1 + \delta_s)\|c\|_2^2,$$
for all $T \subseteq \{1, \ldots, p\}$ with $|T| \leq s$ and coefficient sequences $(c_j)_{j \in T}$. It is clear that $\delta_s$ is non-decreasing in $s$ and $1 - \delta_s \leq \Lambda_{\min}(s) \leq \Lambda_{\max}(s) \leq 1 + \delta_s$. Hence $\delta_{2s} < 1$ implies (1.4). Occasionally, we use $\beta_T \in \mathbb{R}^{|T|}$, where $T \subseteq \{1, \ldots, p\}$, to also represent its 0-extended version $\beta' \in \mathbb{R}^p$ such that $\beta'_{T^c} = 0$ and $\beta'_T = \beta_T$; for example in (1.5) below.
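The quantities $\Lambda_{\min}(s)$, $\Lambda_{\max}(s)$ and $\delta_s$ are combinatorial (minima over all $\binom{p}{s}$ supports), so they cannot be computed exactly for realistic $p$. The helper below (a hypothetical name, not from the paper) only samples random supports, which gives an optimistic estimate: it can over-estimate $\Lambda_{\min}(s)$ and under-estimate $\Lambda_{\max}(s)$. It is meant as a sanity check on simulated designs, not a certificate.

```python
import numpy as np

def sparse_eigenvalue_range(X, s, n_trials=200, seed=0):
    """Sample random size-s supports T and return the smallest / largest
    eigenvalues of X_T^T X_T / n seen, as rough proxies for
    Lambda_min(s) and Lambda_max(s)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    lo, hi = np.inf, 0.0
    for _ in range(n_trials):
        T = rng.choice(p, size=s, replace=False)
        w = np.linalg.eigvalsh(X[:, T].T @ X[:, T] / n)  # ascending order
        lo, hi = min(lo, w[0]), max(hi, w[-1])
    return lo, hi
```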
Oracle inequalities. The following idea has been explained in [5]; we hence describe it here only briefly. Note that due to different normalization of columns of $X$, our expressions are slightly different from those in [5]. Consider the least squares estimator $\hat{\beta}_I = (X_I^T X_I)^{-1}X_I^T Y$, where $|I| \leq s$, and consider the ideal least-squares estimator $\beta^*$,
$$\beta^* = \arg\min_{I \subseteq \{1,\ldots,p\},\ |I| \leq s} \mathbb{E}\,\|\beta - \hat{\beta}_I\|_2^2, \qquad (1.5)$$
which minimizes the expected mean squared error. It follows from [5] that for $\Lambda_{\max}(s) < \infty$,
$$\mathbb{E}\,\|\beta - \beta^*\|_2^2 \geq \min(1, 1/\Lambda_{\max}(s))\sum_{i=1}^{p}\min(\beta_i^2, \sigma^2/n). \qquad (1.6)$$
Now we check if, for $\Lambda_{\max}(s) < \infty$, it holds with high probability that
$$\|\hat{\beta} - \beta\|_2^2 = O(\log p)\sum_{i=1}^{p}\min(\beta_i^2, \sigma^2/n), \qquad (1.7)$$
so that $\|\hat{\beta} - \beta\|_2^2 = O(\log p)\max(1, \Lambda_{\max}(s))\,\mathbb{E}\,\|\beta - \beta^*\|_2^2$ in view of (1.6). These bounds are meaningful since
$$\sum_{i=1}^{p}\min(\beta_i^2, \sigma^2/n) = \min_{I \subseteq \{1,\ldots,p\}}\Big(\|\beta - \beta_I\|_2^2 + \frac{|I|\sigma^2}{n}\Big) \qquad (1.8)$$
represents the ideal squared bias and variance. We elaborate on conditions on the design, under which we accomplish these goals using the multi-step procedures, in the rest of this section. We now define a constant $\lambda_{\sigma,a,p}$ for each $a \geq 0$, by which we bound the maximum correlation between the noise and covariates of $X$; we only apply it to $X$ with column $\ell_2$ norm bounded by $\sqrt{n}$. Let
$$\mathcal{T}_a := \Big\{\epsilon : \Big\|\frac{X^T\epsilon}{n}\Big\|_\infty \leq \lambda_{\sigma,a,p}\Big\}, \quad \text{where } \lambda_{\sigma,a,p} = \sigma\sqrt{1+a}\sqrt{\frac{2\log p}{n}}, \text{ hence} \qquad (1.9)$$
$$P(\mathcal{T}_a) \geq 1 - (\sqrt{\pi\log p}\,p^a)^{-1}, \quad \text{for } a \geq 0; \text{ see [5]}. \qquad (1.10)$$
Variable selection. Our first result in Theorem 1.1 shows that consistent variable selection is possible under the Restricted Eigenvalue conditions, as formalized in [2]. Similar conditions have been used by [10] and [17].

Assumption 1.1 (Restricted Eigenvalue assumption $RE(s, k_0, X)$ [2]) For some integer $1 \leq s \leq p$ and a positive number $k_0$, the following holds:
$$\frac{1}{K(s, k_0, X)} := \min_{\substack{J_0 \subseteq \{1,\ldots,p\},\\ |J_0| \leq s}}\ \min_{\substack{\theta \neq 0,\\ \|\theta_{J_0^c}\|_1 \leq k_0\|\theta_{J_0}\|_1}} \frac{\|X\theta\|_2}{\sqrt{n}\,\|\theta_{J_0}\|_2} > 0. \qquad (1.11)$$
If $RE(s, k_0, X)$ is satisfied with $k_0 \geq 1$, then the square submatrices of size $\leq 2s$ of $X^T X$ are necessarily positive definite (see [2]) and hence (1.4) must hold. We do not impose any extra constraint on $s$ besides what is allowed in order for (1.11) to hold. Note that when $s > n/2$, it is impossible for the restricted eigenvalue assumption to hold, as $X_I$ for any $I$ such that $|I| = 2s$ becomes singular in this case. Hence our algorithm is especially relevant if one would like to estimate a parameter $\beta$ such that $s$ is very close to $n$; see Section 4 for such examples. Let $\beta_{\min} := \min_{j \in S}|\beta_j|$.
Theorem 1.1 (Variable selection under Assumption 1.1) Suppose that the $RE(s, k_0, X)$ condition holds, where $k_0 = 1$ for the DS and $= 3$ for the Lasso. Suppose $\lambda_n \geq B\lambda_{\sigma,a,p}$ for $\lambda_{\sigma,a,p}$ as in (1.9), where $B \geq 1$ for the DS and $\geq 2$ for the Lasso. Let $B_2 = \frac{1}{B\Lambda_{\min}(2s)}$. Let $s \geq K^4(s, k_0, X)$ and
$$\beta_{\min} \geq 4\sqrt{2}\max(K(s, k_0, X), 1)\lambda_n\sqrt{s} + \max\big(4K^2(s, k_0, X),\ 2B_2\big)\lambda_n\sqrt{s}.$$
Then with probability at least $P(\mathcal{T}_a)$, the multi-step procedure returns $\hat{\beta}$ such that $S \subseteq I := \operatorname{supp}(\hat{\beta})$, where $|I \setminus S| < B_2^2/16$ and
$$\|\hat{\beta} - \beta\|_2^2 \leq \frac{16\lambda_{\sigma,a,p}^2|I|}{\Lambda_{\min}^2(|I|)} \leq \frac{32(1+a)(\log p)\,s\sigma^2(1 + B_2^2/16)}{n\,\Lambda_{\min}^2(2s)},$$
which satisfies (1.7) and (1.8) given that $\beta_{\min} \geq \sigma/\sqrt{n}$ and $\sum_{i=1}^{p}\min(\beta_i^2, \sigma^2/n) = s\sigma^2/n$.
Our analysis builds upon the rate of convergence bounds for $\beta_{\text{init}}$ derived in [2]. The first implication of this work, and also one of the motivations for analyzing the thresholding methods, is: under Assumption 1.1, one can obtain consistent variable selection for very significant values of $s$, if only a few extra variables are allowed to be included in the estimator $\hat{\beta}$. In our simulations, we recover the exact support set $S$ with very high probability using a two-step procedure. Note that we did not optimize the lower bound on $s$, as we focus on cases where the support $S$ is large.
Thresholding that achieves the oracle inequalities. The natural question upon obtaining Theorem 1.1 is: is there a good thresholding rule that enables us to obtain a sufficiently sparse estimator $\hat{\beta}$ when some components of $\beta_S$ (and hence $\beta_{\min}$) are well below $\sigma/\sqrt{n}$, which also satisfies the oracle inequality as in (1.7)? Before we answer this question, we define $s_0$ as the smallest integer such that
$$\sum_{i=1}^{p}\min(\beta_i^2, \lambda^2\sigma^2) \leq s_0\lambda^2\sigma^2, \quad \text{where } \lambda = \sqrt{2\log p/n}, \qquad (1.12)$$
and the $(s, s')$-restricted orthogonality constant [4] $\theta_{s,s'}$ as the smallest quantity such that
$$|\langle X_T c, X_{T'}c'\rangle/n| \leq \theta_{s,s'}\|c\|_2\|c'\|_2 \qquad (1.13)$$
holds for all disjoint sets $T, T' \subseteq \{1, \ldots, p\}$ of cardinality $|T| \leq s$ and $|T'| < s'$, where $s + s' \leq p$. Note that $\theta_{s,s'}$ is non-decreasing in $s, s'$, and a small value of $\theta_{s,s'}$ indicates that disjoint subsets of covariates in $X_T$ and $X_{T'}$ span nearly orthogonal subspaces.
Theorem 1.2 says that under a uniform uncertainty principle (UUP), thresholding of an initial Dantzig selector $\beta_{\text{init}}$ at the level of $\Theta(\sigma\sqrt{2\log p/n})$ indeed identifies a sparse model $I$ of cardinality at most $2s_0$ such that the $\ell_2^2$-loss for its corresponding least-squares estimator is indeed bounded within $O(\log p)$ of the ideal mean square error as in (1.5), when $\beta$ is as sparse as required by the Dantzig selector to achieve such an oracle inequality [5]. This is accomplished without any knowledge of the significant coordinates of $\beta$ and without being able to observe parameter values.

Assumption 1.2 (A Uniform Uncertainty Principle) [5] For some integer $1 \leq s < n/3$, assume $\delta_{2s} + \theta_{s,2s} < 1$, which implies that $\Lambda_{\min}(2s) > \theta_{s,2s}$ given that $1 - \delta_{2s} \leq \Lambda_{\min}(2s)$.

Theorem 1.2 Choose $\tau, a > 0$ and set $\lambda_n = \lambda_{p,\tau}\sigma$, where $\lambda_{p,\tau} := (\sqrt{1+a} + \tau^{-1})\sqrt{2\log p/n}$, in (1.3). Suppose $\beta$ is $s$-sparse with $\delta_{2s} + \theta_{s,2s} < 1 - \tau$. Let the threshold $t_0$ be chosen from the range $(C_1\lambda_{p,\tau}\sigma, C_4\lambda_{p,\tau}\sigma]$ for some constants $C_1, C_4$ to be defined. Then with probability at least $1 - (\sqrt{\pi\log p}\,p^a)^{-1}$, the Gauss-Dantzig selector $\hat{\beta}$ selects a model $I := \operatorname{supp}(\hat{\beta})$ such that $|I| \leq 2s_0$,
$$|I \setminus S| \leq s_0 \leq s, \quad \text{and} \quad \|\hat{\beta} - \beta\|_2^2 \leq 2C_3^2\log p\Big(\sigma^2/n + \sum_{i=1}^{p}\min(\beta_i^2, \sigma^2/n)\Big), \qquad (1.14)$$
where $C_3$ depends on $a, \tau, \delta_{2s}, \theta_{s,2s}$ and $C_4$; see (3.3).

Our analysis builds upon [5]. Note that allowing $t_0$ to be chosen from a range (as wide as one would like, at the cost of increasing the constant $C_3$ in (1.14)) saves us from having to estimate $C_1$, which indeed depends on $\delta_{2s}$ and $\theta_{s,2s}$. Assumption 1.2 implies that Assumption 1.1 holds for $k_0 = 1$ with $K(s, k_0, X) = \sqrt{\Lambda_{\min}(2s)}/(\Lambda_{\min}(2s) - \theta_{s,2s}) \leq \sqrt{\Lambda_{\min}(2s)}/(1 - \delta_{2s} - \theta_{s,2s})$ (see [2]); it is an open question whether we can derive the same result under Assumption 1.1.
Previous work. Finally, we briefly review related work on multi-step procedures and the role of sparsity for high-dimensional statistical inference. Before this work, the hard thresholding idea had been shown in [5] (via the Gauss-Dantzig selector) as a method to correct the bias of the initial Dantzig selector. The empirical success of the Gauss-Dantzig selector in terms of improving the statistical accuracy is strongly evident in their experimental results. Our theoretical analysis of the oracle inequalities, which hold for the Gauss-Dantzig selector under a uniform uncertainty principle, is exactly inspired by their theoretical analysis of the initial Dantzig selector under the same conditions. For the Lasso, [12] has also shown in theoretical analysis that thresholding is effective in obtaining a two-step estimator $\hat{\beta}$ that is consistent in its support with $\beta$; however, the choice of threshold level depends on the unknown value $\beta_{\min}$ (which needs to be sufficiently large) and $s$, and their theory does not directly yield (or imply) an algorithm for finding such parameters. Further, as pointed out by [2], a weakening of their condition is still sufficient for Assumption 1.1 to hold.

The sparse recovery problem under arbitrary noise is also well studied; see [3, 15, 14]. Although, as argued in [3, 14], the best accuracy under arbitrary noise has essentially been achieved in both works, their bounds are worse than that in [5] (hence the present paper) under the stochastic noise discussed in the present paper; see more discussions in [5]. Moreover, the greedy algorithms in [15, 14] require $s$ to be part of their input, while the iterative algorithms in the present paper have no such requirement, and hence adapt well to the unknown level of sparsity $s$. A more general framework for multi-step variable selection was studied by [20]. They control the probability of false positives at the price of false negatives, similar to what we aim for in the present paper. Unfortunately, their analysis is constrained to the case where $s$ is a constant. Finally, under a restricted eigenvalue condition slightly stronger than Assumption 1.1, [22] requires $s = O(\sqrt{n/\log p})$ in order to achieve variable selection consistency using the adaptive Lasso [23] as the second-step procedure.
Organization of the paper. We prove Theorem 1.1 essentially in Section 2. A thresholding framework for the general setting is described in Section 3, which also sketches the proof of Theorem 1.2. Section 4 briefly discusses the relationship between linear sparsity and random design matrices. Section 5 includes simulation results showing that our two-step procedure is consistent with our theoretical analysis on variable selection.
2 Thresholding procedure when $\beta_{\min}$ is large

We use a penalization parameter $\lambda_n = B\lambda_{\sigma,a,p}$ and assume $\beta_{\min} > C\lambda_n\sqrt{s}$ for some constants $B, C$ throughout this section; we first specify the thresholding parameters in this case. We then show in Theorem 2.1 that our algorithm works under any conditions so long as the rate of convergence of the initial estimator obeys the bounds in (2.2). Theorem 1.1 is a corollary of Theorem 2.1 under Assumption 1.1, given the rate of convergence bounds for $\beta_{\text{init}}$ following derivations in [2].

The Iterative Procedure. We obtain an initial estimator $\beta_{\text{init}}$ using the Lasso or the Dantzig selector. Let $\hat{S}_0 = \{j : |\beta_{j,\text{init}}| > 4\lambda_n\}$ and $\hat{\beta}^{(0)} := \beta_{\text{init}}$; iterate through the following steps twice, for $i = 0, 1$: (a) set $t_i = 4\lambda_n\sqrt{|\hat{S}_i|}$; (b) threshold $\hat{\beta}^{(i)}$ with $t_i$ to obtain $I := \hat{S}_{i+1}$, where
$$\hat{S}_{i+1} = \Big\{j \in \hat{S}_i : |\hat{\beta}_j^{(i)}| \geq 4\lambda_n\sqrt{|\hat{S}_i|}\Big\}, \quad \text{and compute} \quad \hat{\beta}_I^{(i+1)} = (X_I^T X_I)^{-1}X_I^T Y. \qquad (2.1)$$
Return the final set of variables in $\hat{S}_2$ and output $\hat{\beta}$ such that $\hat{\beta}_{\hat{S}_2} = \hat{\beta}^{(2)}_{\hat{S}_2}$ and $\hat{\beta}_j = 0,\ \forall j \in \hat{S}_2^c$.
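A sketch of this Iterative Procedure, reusing the `ista_lasso` helper from the earlier snippet; the loop makes the thresholds $t_i = 4\lambda_n\sqrt{|\hat{S}_i|}$ data-driven exactly as in (2.1). As before, the Lasso solver and the constant $B$ are illustrative placeholders.

```python
import numpy as np

def iterative_procedure(X, y, sigma, B=1.0):
    """Iterative Procedure of Section 2: threshold beta_init at 4*lam_n,
    then twice (a) t_i = 4*lam_n*sqrt(|S_i|), (b) OLS refit as in (2.1)."""
    n, p = X.shape
    lam = B * sigma * np.sqrt(2.0 * np.log(p) / n)
    beta = ista_lasso(X, y, lam)                   # initial estimator
    S = np.flatnonzero(np.abs(beta) > 4.0 * lam)   # S_hat_0
    for _ in range(2):                             # i = 0, 1
        t = 4.0 * lam * np.sqrt(S.size)
        S = S[np.abs(beta[S]) >= t]                # S_hat_{i+1}
        beta = np.zeros(p)
        if S.size:
            beta[S], *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
    return beta, S
```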
Theorem 2.1 Let $\lambda_n \geq B\lambda_{\sigma,a,p}$, where $B \geq 1$ is a constant suitably chosen such that the initial estimator $\beta_{\text{init}}$ satisfies on $\mathcal{T}_a$, for $\upsilon_{\text{init}} = \beta_{\text{init}} - \beta$ and some constants $B_0, B_1$,
$$\|\upsilon_{\text{init},S}\|_2 \leq B_0\lambda_n\sqrt{s} \quad \text{and} \quad \|\upsilon_{\text{init},S^c}\|_1 \leq B_1\lambda_n s. \qquad (2.2)$$
Suppose
$$\beta_{\min} \geq \max\Big(\sqrt{2}B_1,\ 2\sqrt{2}\big(2 + \max(B_0, 2B_2)\big)\Big)\lambda_n\sqrt{s}, \qquad (2.3)$$
where $B_2 = 1/(B\Lambda_{\min}(2s))$. Then for $s \geq B_1^2/16$, it holds on $\mathcal{T}_a$ that $|\hat{S}_i| \leq 2s,\ \forall i = 1, 2$, and
$$\|\hat{\beta}^{(i)} - \beta\|_2 \leq \lambda_{\sigma,a,p}\sqrt{|\hat{S}_i|}/\Lambda_{\min}(|\hat{S}_i|) \leq \lambda_n B_2\sqrt{2s}, \quad \forall i = 1, 2, \qquad (2.4)$$
where the $\hat{\beta}^{(i)}$ are the OLS estimators based on $I = \hat{S}_i$. Finally, the Iterative Procedure includes the correct set of variables in $\hat{S}_2$, such that $S \subseteq \hat{S}_2 \subseteq \hat{S}_1$ and
$$|\hat{S}_2 \setminus S| = |\operatorname{supp}(\hat{\beta}) \setminus S| \leq \frac{1}{16B^2\Lambda_{\min}^2(|\hat{S}_1|)} \leq \frac{B_2^2}{16}. \qquad (2.5)$$
Remark 2.2 Without the knowledge of $\sigma$, one could use $\hat{\sigma} \geq \sigma$ in $\lambda_n$; this will put a stronger requirement on $\beta_{\min}$, but all conclusions of Theorem 2.1 hold. We also note that in order to obtain $\hat{S}_1$ such that $|\hat{S}_1| \leq 2s$ and $\hat{S}_1 \supseteq S$, we only need to threshold $\beta_{\text{init}}$ at $t_0 = B_1\lambda_n\sqrt{s}$ (see Section 3 and Lemma 3.2 for an example); instead of having to estimate $B_1$, we use $t_0 = \Theta(\lambda_n\sqrt{s})$ to threshold.
3 A thresholding framework for the general setting

In this section, we wish to derive a meaningful criterion for consistency in variable selection when $\beta_{\min}$ is well below the noise level. Suppose that we are given an initial estimator $\beta_{\text{init}}$ that achieves the rate of convergence bound as in (1.14), which adapts nearly ideally to the uncertainty in the support set $S$ and the "significant" set. We show that although we cannot guarantee the presence of variables indexed by $\{j : |\beta_j| < \sigma\sqrt{2\log p/n}\}$ in the final set $I$ (cf. (3.7)) due to their lack of strength, we wish to include the significant variables from $S$ in $I$ such that the OLS estimator based on $I$ achieves this almost ideal rate of convergence as $\beta_{\text{init}}$ does, even though some variables from $S$ are missing in $I$. Here we pay a price for the missing variables in order to obtain a sparse model $I$. Toward this goal, we analyze the following algorithm under Assumption 1.2.

The General Two-step Procedure: Assume $\delta_{2s} + \theta_{s,2s} < 1 - \tau$, where $\tau > 0$.

1. First we obtain an initial estimator $\beta_{\text{init}}$ using the Dantzig selector with $\lambda_{p,\tau} := (\sqrt{1+a} + \tau^{-1})\sqrt{2\log p/n}$, where $\tau, a \geq 0$; we then threshold $\beta_{\text{init}}$ with $t_0$, chosen from the range $(C_1\lambda_{p,\tau}\sigma, C_4\lambda_{p,\tau}\sigma]$, to obtain a set $I$ of cardinality at most $2s$ (we prove a stronger result in Lemma 3.2), where
$$I := \{j \in \{1, \ldots, p\} : |\beta_{j,\text{init}}| > t_0\}, \quad \text{for } C_1 \text{ as defined in (3.3)}. \qquad (3.1)$$

2. In the second step, given the set $I$ of cardinality at most $2s$ obtained via (3.1), we run the OLS regression to obtain $\hat{\beta}_I = (X_I^T X_I)^{-1}X_I^T Y$ and set $\hat{\beta}_j = 0,\ \forall j \notin I$.
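The Dantzig-selector step can be written as a linear program. A sketch using `scipy.optimize.linprog` follows (SciPy is an assumed dependency; production code would use a specialized solver). With stacked variables $(\beta, u)$ and $-u \leq \beta \leq u$, minimizing $\sum_j u_j$ under $\|X^T(Y - X\beta)/n\|_\infty \leq \lambda$ recovers (1.3).

```python
import numpy as np
from scipy.optimize import linprog   # assumed available

def dantzig_selector(X, y, lam):
    """min ||beta||_1  s.t.  ||X^T (y - X beta) / n||_inf <= lam,
    as an LP in the stacked variable (beta, u) with |beta_j| <= u_j."""
    n, p = X.shape
    G = X.T @ X / n
    b = X.T @ y / n
    I, Z = np.eye(p), np.zeros((p, p))
    A_ub = np.block([[ G, Z],     #  G beta    <=  b + lam
                     [-G, Z],     # -G beta    <=  lam - b
                     [ I, -I],    #  beta - u  <=  0
                     [-I, -I]])   # -beta - u  <=  0
    b_ub = np.concatenate([b + lam, lam - b, np.zeros(2 * p)])
    c = np.concatenate([np.zeros(p), np.ones(p)])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * p))
    return res.x[:p]
```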
Theorem 2 in [5] has shown that the Dantzig selector achieves nearly the ideal level of MSE.

Proposition 3.1 [5] Let $Y = X\beta + \epsilon$, for $\epsilon$ i.i.d. $N(0, \sigma^2)$ and $\|X_j\|_2^2 = n$. Choose $\tau, a > 0$ and set $\lambda_n = \lambda_{p,\tau}\sigma := (\sqrt{1+a} + \tau^{-1})\sigma\sqrt{2\log p/n}$ in (1.3). Then if $\beta$ is $s$-sparse with $\delta_{2s} + \theta_{s,2s} < 1 - \tau$, the Dantzig selector obeys, with probability at least $1 - (\sqrt{\pi\log p}\,p^a)^{-1}$,
$$\|\hat{\beta} - \beta\|_2^2 \leq 2C_2^2(\sqrt{1+a} + \tau^{-1})^2\log p\Big(\sigma^2/n + \sum_{i=1}^{p}\min(\beta_i^2, \sigma^2/n)\Big).$$
From this point on we let $\delta := \delta_{2s}$ and $\theta := \theta_{s,2s}$. The analysis in [5] (Theorem 2) and the current paper yields the following constants, where $C_3$ has not been optimized:
$$C_2 = 2C_0' + \frac{1+\delta}{1-\delta-\theta}, \quad \text{where } C_0' = \frac{C_0}{1-\delta-\theta} + \frac{\theta(1+\delta)}{(1-\delta-\theta)^2}, \qquad (3.2)$$
and $C_0 = 2\sqrt{2}\Big(1 + \frac{(1+\delta)^2}{1-\delta^2}\Big) + (1 + 1/\sqrt{2})\frac{1+\delta}{1-\delta-\theta}$. We now define
$$C_1 = C_0' + \frac{1+\theta}{1-\delta-\theta} \quad \text{and} \quad C_3^2 = 3(\sqrt{1+a} + \tau^{-1})^2\big((C_0' + C_4)^2 + 1\big) + \frac{8(1+a)}{\Lambda_{\min}(2s_0)}. \qquad (3.3)$$
We first set up the notation following that in [5]. We order the $\beta_j$'s in decreasing order of magnitude:
$$|\beta_1| \geq |\beta_2| \geq \cdots \geq |\beta_p|. \qquad (3.4)$$
Recall that $s_0$ is the smallest integer such that $\sum_{i=1}^{p}\min(\beta_i^2, \lambda^2\sigma^2) \leq s_0\lambda^2\sigma^2$, where $\lambda = \sqrt{2\log p/n}$. Thus by definition of $s_0$, as essentially shown in [5], $0 \leq s_0 \leq s$ and
$$s_0\lambda^2\sigma^2 \leq \lambda^2\sigma^2 + \sum_{i=1}^{p}\min(\beta_i^2, \lambda^2\sigma^2) \leq 2\log p\Big(\frac{\sigma^2}{n} + \sum_{i=1}^{p}\min\Big(\beta_i^2, \frac{\sigma^2}{n}\Big)\Big), \qquad (3.5)$$
and
$$s_0\lambda^2\sigma^2 \geq \sum_{j=1}^{s_0+1}\min(\beta_j^2, \lambda^2\sigma^2) \geq (s_0 + 1)\min(\beta_{s_0+1}^2, \lambda^2\sigma^2) \quad \text{for } s_0 < p, \qquad (3.6)$$
which implies that $\min(\beta_{s_0+1}^2, \lambda^2\sigma^2) < \lambda^2\sigma^2$, and hence by (3.4),
$$|\beta_j| < \lambda\sigma \quad \text{for all } j > s_0. \qquad (3.7)$$
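Since $s_0$ is defined by a simple scalar inequality, it is directly computable once $\beta$ is known (e.g. in simulations). A two-line helper, with a hypothetical name, might read:

```python
import numpy as np

def s0_of(beta, sigma, n, p):
    """Smallest integer s0 with sum_i min(beta_i^2, lam^2 sigma^2) <= s0 * lam^2 sigma^2,
    for lam = sqrt(2 log p / n), as in (1.12)."""
    lam2s2 = (2.0 * np.log(p) / n) * sigma ** 2
    return int(np.ceil(np.minimum(beta ** 2, lam2s2).sum() / lam2s2))
```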
We now show in Lemma 3.2 that thresholding at the level of $C\lambda\sigma$ in step 1 selects a set $I$ of at most $2s_0$ variables, among which at most $s_0$ are from $S^c$.

Lemma 3.2 Choose $\tau > 0$ such that $\delta_{2s} + \theta_{s,2s} < 1 - \tau$. Let $\beta_{\text{init}}$ be the $\ell_1$-minimizer subject to the constraints, for $\lambda := \sqrt{2\log p/n}$ and $\lambda_{p,\tau} := (\sqrt{1+a} + \tau^{-1})\sqrt{2\log p/n}$,
$$\Big\|\frac{1}{n}X^T(Y - X\beta_{\text{init}})\Big\|_\infty \leq \lambda_{p,\tau}\sigma. \qquad (3.8)$$
Given some constant $C_4 \geq C_1$, for $C_1$ as in (3.3), choose a thresholding parameter $t_0$ so that $C_4\lambda_{p,\tau}\sigma \geq t_0 > C_1\lambda_{p,\tau}\sigma$; set $I = \{j : |\beta_{j,\text{init}}| > t_0\}$. Then with probability at least $P(\mathcal{T}_a)$, as detailed in Proposition 3.1, we have, for $C_0'$ as in (3.2),
$$|I| \leq 2s_0, \quad |I \cup S| \leq s + s_0, \quad \text{and} \qquad (3.9)$$
$$\|\beta_D\|_2 \leq \sqrt{(C_0' + C_4)^2 + 1}\;\lambda_{p,\tau}\sigma\sqrt{s_0}, \quad \text{where } D := \{1, \ldots, p\} \setminus I. \qquad (3.10)$$
Next we show that even if we miss some columns of $X$ in $S$, we can still hope to get the convergence rate required in Theorem 1.2, so long as $\|\beta_D\|_2$ is bounded and $I$ is sufficiently sparse, for example as bounded in Lemma 3.2. We first show in Lemma 3.3 a general result on the rate of convergence of the OLS estimator based on a chosen model $I$, where a subset of relevant variables is missing.

Lemma 3.3 (OLS estimator with missing variables) Let $D := \{1, \ldots, p\} \setminus I$ and $S_R = D \cap S$, so that $I \cap S_R = \emptyset$. Suppose $|I \cup S_R| \leq 2s$. Then on $\mathcal{T}_a$, for the least squares estimator based on $I$, $\hat{\beta}_I = (X_I^T X_I)^{-1}X_I^T Y$, it holds that
$$\|\hat{\beta}_I - \beta\|_2 \leq \frac{\theta_{|I|,|S_R|}\|\beta_D\|_2 + \lambda_{\sigma,a,p}\sqrt{|I|}}{\Lambda_{\min}(|I|)} + \|\beta_D\|_2.$$
Now Theorem 1.2 is an immediate corollary of Lemmas 3.2 and 3.3 in view of (3.5), given that $|S_R| \leq s$, $|I| \leq 2s_0$ and $|I \cup S_R| \leq |I \cup S| \leq s + s_0 \leq 2s$ as in Lemma 3.2 (3.9). Hence it is clear by (3.10) that we cannot cut too many "significant" variables; in particular, for those that are larger than $\lambda\sigma\sqrt{s_0}$, we can cut at most a constant number of them.
4 Linear sparsity and random matrices

A special case of design matrices that satisfy the Restricted Eigenvalue assumptions are random design matrices. This is shown in a large body of work, for example [3, 4, 5, 1, 13], which shows that the uniform uncertainty principle (UUP) holds for "generic" or random design matrices for very significant values of $s$. For example, it is well known that for a random matrix with i.i.d. Gaussian variables (that is, the Gaussian Ensemble, subject to normalization of columns), and for the Bernoulli and Subgaussian Ensembles [1, 13], the UUP holds for $s = O(n/\log(p/n))$; hence the thresholding procedure can recover a sparse model using nearly a constant number of measurements per non-zero component despite the stochastic noise, when $n$ is a non-negligible fraction of $p$. See [5] for other examples of random designs. In our simulations, as shown in Section 5, the exact recovery rate of the sparsity pattern is very high for a few types of random matrices using a two-step procedure, once the number of samples passes a certain threshold. For example, for an i.i.d. Gaussian Ensemble, the threshold for exact recovery is $n = \Theta(s\log(p/n))$, where $\Theta$ hides a very small constant, when $\beta_{\min}$ is sufficiently large; this shows a strong contrast with the ordinary Lasso, for which the probability of success in terms of exact recovery of the sparsity pattern tends to zero when $n < 2s\log(p - s)$ [19]. In an ongoing work, the author is exploring thresholding algorithms for a broader class of random designs that satisfy the Restricted Eigenvalue assumptions.
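For reference, the two random ensembles used in the experiments can be generated as follows; the column normalization to $\ell_2$-norm $\sqrt{n}$ matches the convention of this paper.

```python
import numpy as np

def gaussian_ensemble(n, p, rng):
    """i.i.d. N(0,1) entries, columns rescaled to l2-norm sqrt(n)."""
    X = rng.standard_normal((n, p))
    return X * (np.sqrt(n) / np.linalg.norm(X, axis=0))

def bernoulli_ensemble(n, p, rng):
    """i.i.d. +/-1 entries; columns already have l2-norm sqrt(n)."""
    return rng.choice([-1.0, 1.0], size=(n, p))
```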
[Figure 1 here: four panels of probability-of-success curves, (a) p = 256, (b) p = 512, (c) p = 1024, (d) p = 1024, sample size vs. sparsity.]

Figure 1: (a) Compare the probability of success under s = 8 and 64 for p = 256. The two-step procedure requires many fewer samples than the ordinary Lasso. (b), (c) show the probability of success of the two-step procedure under different levels of sparsity as n increases, for p = 512 and 1024 respectively. (d) The number of samples n increases almost linearly with s for p = 1024.
5 Illustrative experiments

In our implementation, we choose to use the Lasso as the initial estimator. We show in Figure 1 that the two-step procedure indeed recovers a sparse model using a small number of samples per non-zero component in $\beta$ when $X$ is a Gaussian Ensemble. Similar behavior was also observed for the Bernoulli Ensemble in our simulations. We run under three cases of $p = 256, 512, 1024$; for each $p$, we increase the sparsity $s$ by roughly equal steps from $s = 0.2p/\log(0.2p)$ to $p/4$. For each tuple $(p, s, n)$, we first generate a random Gaussian Ensemble of size $n \times p$ as $X$, where $X_{ij} \sim N(0, 1)$, which is then normalized to have column $\ell_2$-norm $\sqrt{n}$. For a given $(p, s, n)$ and $X$, we repeat the following experiment 100 times: 1) Generate a vector $\beta$ of length $p$: within $\beta$, randomly choose $s$ non-zero positions; for each position, we assign a value of $0.9$ or $-0.9$ randomly. 2) Generate a noise vector $\epsilon$ of length $n$ according to $N(0, I_n)$, where $I_n$ is the identity matrix. 3) Compute $Y = X\beta + \epsilon$. $Y$ and $X$ are then fed to the two-step procedure to obtain $\hat{\beta}$. 4) We then compare $\hat{\beta}$ with $\beta$; if all components match in signs, we count this experiment as a success. At the end of the 100 experiments, we compute the percentage of successful runs as the probability of success. We compare with the ordinary Lasso, for which we search over the full path of LARS [8] and always choose the $\hat{\beta}$ that best matches $\beta$ in terms of support. Inside the two-step procedure, we always fix $\lambda_n \approx 0.69\sqrt{2\log p/n}$ and threshold $\beta_{\text{init}}$ at $t_0 = f_t\sqrt{\frac{\log p}{n}\hat{s}}$, where $\hat{s} = |\hat{S}_0|$ for $\hat{S}_0 = \{j : |\beta_{j,\text{init}}| \geq 0.5\lambda_n\}$, and $f_t$ is a constant chosen from the range $[1/6, 1/3]$.
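A condensed replication of this protocol, built on the `two_step` and `gaussian_ensemble` sketches above, is given below. The threshold here uses the simple default of the earlier sketch rather than the tuned $t_0 = f_t\sqrt{\hat{s}\log p/n}$ rule, so the success probabilities will only qualitatively match Figure 1.

```python
import numpy as np

def run_trial(n, p, s, rng):
    """One repetition: Gaussian ensemble X, s random +/-0.9 coefficients,
    N(0, I_n) noise, two-step fit; success = all signs recovered."""
    X = gaussian_ensemble(n, p, rng)
    beta = np.zeros(p)
    S = rng.choice(p, size=s, replace=False)
    beta[S] = rng.choice([-0.9, 0.9], size=s)
    y = X @ beta + rng.standard_normal(n)        # sigma = 1
    beta_hat, _ = two_step(X, y, sigma=1.0)
    return np.array_equal(np.sign(beta_hat), np.sign(beta))

rng = np.random.default_rng(0)
p, s, n = 256, 8, 120
successes = sum(run_trial(n, p, s, rng) for _ in range(100))
print(f"probability of success: {successes / 100:.2f}")
```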
Acknowledgments. This research was supported by the Swiss National Science Foundation (SNF) Grant 20PA21-120050/1. The author thanks Larry Wasserman, Sara van de Geer and Peter Bühlmann for helpful discussions, comments and their kind support throughout this work.
References

[1] R. G. Baraniuk, M. Davenport, R. A. DeVore, and M. B. Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253–263, 2008.
[2] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. The Annals of Statistics, 37(4):1705–1732, 2009.
[3] E. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications in Pure and Applied Mathematics, 59(8):1207–1223, August 2006.
[4] E. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Info. Theory, 51:4203–4215, 2005.
[5] E. Candès and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 35(6):2313–2351, 2007.
[6] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific and Statistical Computing, 20:33–61, 1998.
[7] D. L. Donoho and I. M. Johnstone. Ideal spatial adaptation by wavelet shrinkage. Biometrika, 81:425–455, 1994.
[8] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Annals of Statistics, 32(2):407–499, 2004.
[9] E. Greenshtein and Y. Ritov. Persistency in high dimensional linear predictor-selection and the virtue of over-parametrization. Bernoulli, 10:971–988, 2004.
[10] V. Koltchinskii. Dantzig selector and sparsity oracle inequalities. Bernoulli, 15(3):799–828, 2009.
[11] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34(3):1436–1462, 2006.
[12] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. Annals of Statistics, 37(1):246–270, 2009.
[13] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann. Uniform uncertainty principle for Bernoulli and subgaussian ensembles. Constructive Approximation, 28(3):277–289, 2008.
[14] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26(3):301–321, 2008.
[15] D. Needell and R. Vershynin. Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE Journal of Selected Topics in Signal Processing, to appear, 2009.
[16] R. Tibshirani. Regression shrinkage and selection via the Lasso. J. Roy. Statist. Soc. Ser. B, 58(1):267–288, 1996.
[17] S. A. van de Geer. The deterministic Lasso. The JSM Proceedings, American Statistical Association, 2007.
[18] S. A. van de Geer. High-dimensional generalized linear models and the Lasso. The Annals of Statistics, 36:614–645, 2008.
[19] M. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using l1-constrained quadratic programming. IEEE Trans. Inform. Theory, 2008. To appear; also posted as Technical Report 709, 2006, Department of Statistics, UC Berkeley.
[20] L. Wasserman and K. Roeder. High dimensional variable selection. The Annals of Statistics, 37(5A):2178–2201, 2009.
[21] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2567, 2006.
[22] S. Zhou, S. van de Geer, and P. Bühlmann. Adaptive Lasso for high dimensional regression and Gaussian graphical modeling, 2009. arXiv:0903.2515.
[23] H. Zou. The adaptive Lasso and its oracle properties. Journal of the American Statistical Association, 101:1418–1429, 2006.
Compressed Least-Squares Regression
Odalric-Ambrym Maillard and Rémi Munos
SequeL Project, INRIA Lille - Nord Europe, France
{odalric.maillard, remi.munos}@inria.fr
Abstract
We consider the problem of learning, from $K$ data, a regression function in a linear space of high dimension $N$ using projections onto a random subspace of lower dimension $M$. From any algorithm minimizing the (possibly penalized) empirical risk, we provide bounds on the excess risk of the estimate computed in the projected subspace (compressed domain) in terms of the excess risk of the estimate built in the high-dimensional space (initial domain). We show that solving the problem in the compressed domain instead of the initial domain reduces the estimation error at the price of an increased (but controlled) approximation error. We apply the analysis to Least-Squares (LS) regression and discuss the excess risk and numerical complexity of the resulting "Compressed Least-Squares Regression" (CLSR) in terms of $N$, $K$, and $M$. When we choose $M = O(\sqrt{K})$, we show that CLSR has an estimation error of order $O(\log K/\sqrt{K})$.

1 Problem setting
We consider a regression problem where the observed data $D_K = (\{x_k, y_k\}_{k \leq K})$ (where $x_k \in \mathcal{X}$ and $y_k \in \mathbb{R}$) are assumed to be independently and identically distributed (i.i.d.) from some distribution $P$, where $x_k \sim P_{\mathcal{X}}$ and $y_k = f^*(x_k) + \eta_k(x_k)$, where $f^*$ is the (unknown) target function and $\eta_k$ a centered independent noise of variance $\sigma^2(x_k)$. For a given class of functions $\mathcal{F}$, and $f \in \mathcal{F}$, we define the empirical (quadratic) error
$$L_K(f) \stackrel{\text{def}}{=} \frac{1}{K}\sum_{k=1}^{K}[y_k - f(x_k)]^2,$$
and the generalization (quadratic) error
$$L(f) \stackrel{\text{def}}{=} \mathbb{E}_{(X,Y)\sim P}[(Y - f(X))^2].$$
Our goal is to return a regression function $\hat{f} \in \mathcal{F}$ with lowest possible generalization error $L(\hat{f})$.

Notations: In the sequel we will make use of the following notations about norms: for $h : \mathcal{X} \to \mathbb{R}$, we write $\|h\|_P$ for the $L_2$ norm of $h$ with respect to (w.r.t.) the measure $P$, $\|h\|_{P_K}$ for the $L_2$ norm of $h$ w.r.t. the empirical measure $P_K$, and for $u \in \mathbb{R}^n$, $\|u\|$ denotes by default $\big(\sum_{i=1}^{n}u_i^2\big)^{1/2}$.

The measurable function minimizing the generalization error is $f^*$, but it may be the case that $f^* \notin \mathcal{F}$. For any regression function $\hat{f}$, we define the excess risk
$$L(\hat{f}) - L(f^*) = \|\hat{f} - f^*\|_P^2,$$
which decomposes as the sum of the estimation error $L(\hat{f}) - \inf_{f \in \mathcal{F}}L(f)$ and the approximation error $\inf_{f \in \mathcal{F}}L(f) - L(f^*) = \inf_{f \in \mathcal{F}}\|f - f^*\|_P^2$, which measures the distance between $f^*$ and the function space $\mathcal{F}$.
In this paper we consider a class of linear functions $\mathcal{F}_N$ defined as the span of a set of $N$ functions $\{\varphi_n\}_{1 \leq n \leq N}$ called features. Thus: $\mathcal{F}_N \stackrel{\text{def}}{=} \{f_\alpha = \sum_{n=1}^{N}\alpha_n\varphi_n,\ \alpha \in \mathbb{R}^N\}$.

When the number of data $K$ is larger than the number of features $N$, ordinary Least-Squares Regression (LSR) provides the LS solution $f_{\hat{\alpha}}$, which is the minimizer of the empirical risk $L_K(f)$ in $\mathcal{F}_N$. Note that here $L_K(f_\alpha)$ rewrites as $\frac{1}{K}\|\Phi\alpha - Y\|^2$, where $\Phi$ is the $K \times N$ matrix with elements $(\varphi_n(x_k))_{1 \leq n \leq N, 1 \leq k \leq K}$ and $Y$ the $K$-vector with components $(y_k)_{1 \leq k \leq K}$.

Usual results provide bounds on the estimation error as a function of the capacity of the function space and the number of data. In the case of linear approximation, the capacity measures (such as covering numbers [23] or the pseudo-dimension [16]) depend on the number of features (for example the pseudo-dimension is at most $N + 1$). For example, let $f_{\hat{\alpha}}$ be a LS estimate (minimizer of $L_K$ in $\mathcal{F}_N$); then (a more precise statement will be given in Section 3) the expected estimation error is bounded as:
$$\mathbb{E}\,L(f_{\hat{\alpha}}) - \inf_{f \in \mathcal{F}_N}L(f) \leq c\sigma^2\frac{N\log K}{K}, \qquad (1)$$
where $c$ is a universal constant, $\sigma \stackrel{\text{def}}{=} \sup_{x \in \mathcal{X}}\sigma(x)$, and the expectation is taken with respect to $P$. Now, the excess risk is the sum of this estimation error and the approximation error $\inf_{f \in \mathcal{F}_N}\|f - f^*\|_P$ of the class $\mathcal{F}_N$. Since the latter usually decreases when the number of features $N$ increases [13] (e.g. when $\bigcup_N \mathcal{F}_N$ is dense in $L_2(P)$), we see the usual tradeoff between small estimation error (low $N$) and small approximation error (large $N$).

In this paper we are interested in the setting where $N$ is large so that the approximation error is small. Whenever $N$ is larger than $K$, we face the overfitting problem since there are more parameters than actual data (more variables than constraints), which is illustrated in the bound (1), which provides no information about the generalization ability of any LS estimate. In addition, there are many minimizers (in fact a vector space of the same dimension as the null space of $\Phi^T\Phi$) of the empirical risk. To overcome the problem, several approaches have been proposed in the literature:
• LS solution with minimal norm: The solution is the minimizer of the empirical error with minimal ($\ell_1$ or $\ell_2$)-norm: $\hat{\alpha} = \arg\min_{\Phi\alpha = Y}\|\alpha\|_{1\text{ or }2}$ (or a robust solution $\arg\min_{\|\Phi\alpha - Y\|_2 \leq \varepsilon}\|\alpha\|_1$). The choice of the $\ell_2$-norm yields the ordinary LS solution (a one-line sketch of which appears after this list). The choice of the $\ell_1$-norm has been used for generating sparse solutions (e.g. the Basis Pursuit [10]), and, assuming that the target function admits a sparse decomposition, the field of Compressed Sensing [9, 21] provides sufficient conditions for recovering the exact solution. However, such conditions (e.g. that $\Phi$ possesses a Restricted Isometry Property (RIP)) do not hold in general in this regression setting. On another aspect, solving these problems (both for the $\ell_1$- or $\ell_2$-norm) when $N$ is large is numerically expensive.

• Regularization. The solution is the minimizer of the empirical error plus a penalty term, for example
$$\hat{f} = \arg\min_{f \in \mathcal{F}_N}L_K(f) + \lambda\|f\|_p^p, \quad \text{for } p = 1 \text{ or } 2,$$
where $\lambda$ is a parameter and usual choices for the norm are $\ell_2$ (ridge regression [20]) and $\ell_1$ (LASSO [19]). A close alternative is the Dantzig selector [8, 5], which solves: $\hat{\alpha} = \arg\min_{\|\alpha\|_1 \leq \lambda}\|\Phi^T(Y - \Phi\alpha)\|_\infty$. The numerical complexity and generalization bounds of those methods depend on the sparsity of the target function decomposition in $\mathcal{F}_N$.
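For the $\ell_2$ case, the minimal-norm minimizer is exactly what numpy's `lstsq` returns (the pseudo-inverse solution $\alpha = \Phi^{\dagger}Y$), so the baseline referred to in the first bullet is a one-liner:

```python
import numpy as np

def min_norm_ls(Phi, Y):
    """Among all alpha minimizing ||Phi alpha - Y||_2, lstsq returns the one
    of minimal l2 norm, i.e. the pseudo-inverse solution alpha = Phi^+ Y."""
    alpha, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return alpha
```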
Now, if we possess a sequence of function classes $(\mathcal{F}_N)_{N \geq 1}$ with increasing capacity, we may perform structural risk minimization [22] by solving in each model the empirical risk penalized by a term that depends on the size of the model: $\hat{f}_N = \arg\min_{f \in \mathcal{F}_N, N \geq 1}L_K(f) + \text{pen}(N, K)$, where the penalty term measures the capacity of the function space.

In this paper we follow another approach: instead of searching in the large space $\mathcal{F}_N$ (where $N > K$) for a solution that minimizes the empirical error plus a penalty term, we simply search for the empirical error minimizer in a (randomly generated) lower-dimensional subspace $\mathcal{G}_M \subset \mathcal{F}_N$ (where $M < K$).
Our contribution: We consider a set of $M$ random linear combinations of the initial $N$ features and perform our favorite LS regression algorithm (possibly regularized) using those "compressed features". This is equivalent to projecting the $K$ points $\{\varphi(x_k) \in \mathbb{R}^N, k = 1..K\}$ from the initial domain (of size $N$) onto a random subspace of dimension $M$, and then performing the regression in the "compressed domain" (i.e. the span of the compressed features). This is made possible because random projections approximately preserve inner products between vectors (by a variant of the Johnson-Lindenstrauss Lemma, stated in Proposition 1).

Our main result is a bound on the excess risk of a linear estimator built in the compressed domain in terms of the excess risk of the linear estimator built in the initial domain (Section 2). We further detail the case of ordinary Least-Squares Regression (Section 3) and discuss, in terms of $M$, $N$, $K$, the different tradeoffs concerning the excess risk (reduced estimation error in the compressed domain versus increased approximation error introduced by the random projection) and the numerical complexity (reduced complexity of solving the LSR in the compressed domain versus the additional load of performing the projection).

As a consequence, we show that by choosing $M = O(\sqrt{K})$ projections we define a Compressed Least-Squares Regression which uses $O(NK^{3/2})$ elementary operations to compute a regression function with estimation error (relative to the initial function space $\mathcal{F}_N$) of order $\log K/\sqrt{K}$, up to a multiplicative factor which depends on the best approximation of $f^*$ in $\mathcal{F}_N$. This is competitive with the best methods, up to our knowledge.
with the best methods, up to our knowledge.
Related works: Using dimension reduction and random projections in various learning areas has
received considerable interest over the past few years. In [7], the authors use a SVM algorithm in a
compressed space for the purpose of classification and show that their resulting algorithm has good
generalization properties. In [25], the authors consider a notion of compressed linear regression.
For data Y = X? + ?, where ? is the target and ? a standard noise, they use compression of the
set of data, thus considering AY = AX? + A?, where A has a Restricted Isometric Property.
They provide an analysis of the LASSO estimator built from these compressed data, and discuss a
property called sparsistency, i.e. the number of random projections needed to recover ? (with high
probability) when it is sparse. These works differ from our approach in the fact that we do not
consider a compressed (input and/or output) data space but a compressed feature space instead.
In [11], the authors discuss how compressed measurements may be useful to solve many detection,
classification and estimation problems without having to reconstruct the signal ever. Interestingly,
they make no assumption about the signal being sparse, like in our work. In [6, 17], the authors
show how to map a kernel k(x, y) = ?(x) ? ?(y) into a low-dimensional space, while still approximately preserving the inner products. Thus they build a low-dimensional feature space specific for
(translation invariant) kernels.
2 Linear regression in the compressed domain

We recall that the initial set of features is $\{\varphi_n : \mathcal{X} \to \mathbb{R}, 1 \leq n \leq N\}$ and the initial domain $\mathcal{F}_N = \{f_\alpha = \sum_{n=1}^{N}\alpha_n\varphi_n,\ \alpha \in \mathbb{R}^N\}$ is the span of those features. We write $\varphi(x)$ for the $N$-vector with components $(\varphi_n(x))_{n \leq N}$. Let us now define the random projection. Let $A$ be an $M \times N$ matrix of i.i.d. elements drawn from some distribution $\rho$. Examples of distributions are:

• Gaussian random variables $\mathcal{N}(0, 1/M)$,
• Bernoulli distributions, i.e. taking values $\pm 1/\sqrt{M}$ with equal probability $1/2$,
• distributions taking values $\pm\sqrt{3/M}$ with probability $1/6$ and $0$ with probability $2/3$.

The following result (proof in the supplementary material) states the property that inner products are approximately preserved through random projections (this is a simple consequence of the Johnson-Lindenstrauss Lemma):

Proposition 1 Let $(u_k)_{1 \leq k \leq K}$ and $v$ be vectors of $\mathbb{R}^N$. Let $A$ be an $M \times N$ matrix of i.i.d. elements drawn from one of the previously defined distributions. For any $\varepsilon > 0$, $\delta > 0$, for $M \geq \frac{1}{\varepsilon^2/4 - \varepsilon^3/6}\log\frac{4K}{\delta}$, we have, with probability at least $1 - \delta$, for all $k \leq K$,
$$|Au_k \cdot Av - u_k \cdot v| \leq \varepsilon\|u_k\|\,\|v\|.$$
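The three distributions above, and an empirical check of Proposition 1, fit in a few lines; the distortion printed at the end should be small compared with $1$ for $M$ of order $\varepsilon^{-2}\log K$.

```python
import numpy as np

def random_projection(M, N, kind, rng):
    """M x N projection matrix drawn from one of the three distributions."""
    if kind == "gaussian":
        return rng.standard_normal((M, N)) / np.sqrt(M)
    if kind == "bernoulli":
        return rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)
    if kind == "sparse":   # +/- sqrt(3/M) w.p. 1/6 each, 0 w.p. 2/3
        vals = np.sqrt(3.0 / M) * np.array([-1.0, 0.0, 1.0])
        return rng.choice(vals, size=(M, N), p=[1 / 6, 2 / 3, 1 / 6])
    raise ValueError(kind)

rng = np.random.default_rng(0)
N, M, K = 1000, 200, 50
U, v = rng.standard_normal((K, N)), rng.standard_normal(N)
A = random_projection(M, N, "gaussian", rng)
distortion = np.abs((U @ A.T) @ (A @ v) - U @ v) \
             / (np.linalg.norm(U, axis=1) * np.linalg.norm(v))
print("max relative inner-product distortion:", distortion.max())
```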
We now introduce the set of $M$ compressed features $(\psi_m)_{1 \leq m \leq M}$ such that $\psi_m(x) \stackrel{\text{def}}{=} \sum_{n=1}^{N}A_{m,n}\varphi_n(x)$. We also write $\psi(x)$ for the $M$-vector with components $(\psi_m(x))_{m \leq M}$; thus $\psi(x) = A\varphi(x)$. We define the compressed domain $\mathcal{G}_M = \{g_\beta = \sum_{m=1}^{M}\beta_m\psi_m,\ \beta \in \mathbb{R}^M\}$, the span of the compressed features (a vector space of dimension at most $M$). Note that each $\psi_m \in \mathcal{F}_N$, thus $\mathcal{G}_M$ is a subspace of $\mathcal{F}_N$.
2.1 Approximation error

We now compare the approximation error assessed in the compressed domain $\mathcal{G}_M$ versus in the initial space $\mathcal{F}_N$. This applies to the linear algorithms mentioned in the introduction, such as ordinary LS regression (analyzed in detail in Section 3), but also its penalized versions, e.g. LASSO and ridge regression. Define $\alpha^+ = \arg\min_{\alpha \in \mathbb{R}^N}L(f_\alpha) - L(f^*)$, the parameter of the best regression function in $\mathcal{F}_N$.

Theorem 1 For any $\delta > 0$, any $M \geq 15\log(8K/\delta)$, let $A$ be a random $M \times N$ matrix defined as in Proposition 1, and let $\mathcal{G}_M$ be the compressed domain resulting from this choice of $A$. Then with probability at least $1 - \delta$,
$$\inf_{g \in \mathcal{G}_M}\|g - f^*\|_P^2 \leq \frac{8\log(8K/\delta)}{M}\|\alpha^+\|^2\Big(\mathbb{E}\,\|\varphi(X)\|^2 + 2\sup_{x \in \mathcal{X}}\|\varphi(x)\|^2\sqrt{\frac{\log 4/\delta}{2K}}\Big) + \inf_{f \in \mathcal{F}_N}\|f - f^*\|_P^2. \qquad (2)$$
This theorem shows the tradeoff in terms of estimation and approximation errors for an estimator $\hat{g}$ obtained in the compressed domain compared to an estimator $\hat{f}$ obtained in the initial domain:

• Bounds on the estimation error of $\hat{g}$ in $\mathcal{G}_M$ are usually smaller than those of $\hat{f}$ in $\mathcal{F}_N$ when $M < N$ (since the capacity of $\mathcal{F}_N$ is larger than that of $\mathcal{G}_M$).
• Theorem 1 says that the approximation error assessed in $\mathcal{G}_M$ increases by at most $O\big(\frac{\log(K/\delta)}{M}\big)\|\alpha^+\|^2\,\mathbb{E}\,\|\varphi(X)\|^2$ compared to that in $\mathcal{F}_N$.

Proof: Let us write $f^+ \stackrel{\text{def}}{=} f_{\alpha^+} = \arg\min_{f \in \mathcal{F}_N}\|f - f^*\|_P$ and $g^+ \stackrel{\text{def}}{=} g_{A\alpha^+}$. The approximation error assessed in the compressed domain $\mathcal{G}_M$ is bounded as
$$\inf_{g \in \mathcal{G}_M}\|g - f^*\|_P^2 \leq \|g^+ - f^*\|_P^2 = \|g^+ - f^+\|_P^2 + \|f^+ - f^*\|_P^2, \qquad (3)$$
since $f^+$ is the orthogonal projection of $f^*$ on $\mathcal{F}_N$ and $g^+$ belongs to $\mathcal{F}_N$. We now bound $\|g^+ - f^+\|_P^2$ using concentration inequalities. Define $Z(x) \stackrel{\text{def}}{=} A\alpha^+ \cdot A\varphi(x) - \alpha^+\cdot\varphi(x)$ and $\varepsilon^2 \stackrel{\text{def}}{=} \frac{8}{M}\log(8K/\delta)$. For $M \geq 15\log(8K/\delta)$ we have $\varepsilon < 3/4$, thus $M \geq \frac{\log(8K/\delta)}{\varepsilon^2/4 - \varepsilon^3/6}$. Proposition 1 applies and says that on an event $\mathcal{E}$ of probability at least $1 - \delta/2$ we have, for all $k \leq K$,
$$|Z(x_k)| \leq \varepsilon\|\alpha^+\|\,\|\varphi(x_k)\| \leq \varepsilon\|\alpha^+\|\sup_{x \in \mathcal{X}}\|\varphi(x)\| \stackrel{\text{def}}{=} C. \qquad (4)$$
On the event $\mathcal{E}$, we have with probability at least $1 - \delta'$,
$$\|g^+ - f^+\|_P^2 = \mathbb{E}_{X \sim P_{\mathcal{X}}}|Z(X)|^2 \leq \frac{1}{K}\sum_{k=1}^{K}|Z(x_k)|^2 + C^2\sqrt{\frac{\log(2/\delta')}{2K}}$$
$$\leq \varepsilon^2\|\alpha^+\|^2\frac{1}{K}\sum_{k=1}^{K}\|\varphi(x_k)\|^2 + \varepsilon^2\|\alpha^+\|^2\sup_{x \in \mathcal{X}}\|\varphi(x)\|^2\sqrt{\frac{\log(2/\delta')}{2K}}$$
$$\leq \varepsilon^2\|\alpha^+\|^2\Big(\mathbb{E}\,\|\varphi(X)\|^2 + 2\sup_{x \in \mathcal{X}}\|\varphi(x)\|^2\sqrt{\frac{\log(2/\delta')}{2K}}\Big),$$
where we applied Chernoff-Hoeffding's inequality twice. Combining with (3), unconditioning, and setting $\delta' = \delta/2$, we obtain (2) with probability at least $(1 - \delta/2)(1 - \delta') \geq 1 - \delta$.
2.2 Computational issues

We now discuss the relative computational costs of a given algorithm applied either in the initial or in the compressed domain. Let us write $\mathrm{Cx}(D_K, \mathcal{F}_N, P)$ for the complexity (e.g. number of elementary operations) of an algorithm $\mathcal{A}$ to compute the regression function $\hat{f}$ when provided with the data $D_K$ and function space $\mathcal{F}_N$.

We report in the table below, both for the initial and the compressed versions of the algorithm $\mathcal{A}$, the order of complexity for (i) the cost of building the feature matrix, (ii) the cost of computing the estimator, and (iii) the cost of making one prediction (i.e. computing $\hat{f}(x)$ for any $x$):

                                      Initial domain                 Compressed domain
Construction of the feature matrix    $NK$                           $NKM$
Computing the regression function     $\mathrm{Cx}(D_K, \mathcal{F}_N, P)$   $\mathrm{Cx}(D_K, \mathcal{G}_M, P)$
Making one prediction                 $N$                            $NM$

Note that the values mentioned for the compressed domain are upper bounds on the real complexity and do not take into account the possible sparsity of the projection matrix $A$ (which would speed up matrix computations; see e.g. [2, 1]).
3 Compressed Least-Squares Regression

We now analyze the specific case of Least-Squares Regression.

3.1 Excess risk of ordinary Least-Squares regression

In order to bound the estimation error, we follow the approach of [13], which truncates (at the level $\pm L$, where $L$ is a bound, assumed to be known, on $\|f^*\|_\infty$) the prediction of the LS regression function. The ordinary LS regression provides the regression function $f_{\hat\theta}$ where
$$\hat\theta = \operatorname*{argmin}_{\theta \in \operatorname*{argmin}_{\theta' \in \mathbb{R}^N} \|Y - \Phi\theta'\|} \|\theta\|.$$
Note that $\Phi^T\Phi\hat\theta = \Phi^T Y$, hence $\hat\theta = \Phi^\dagger Y \in \mathbb{R}^N$, where $\Phi^\dagger$ is the Penrose pseudo-inverse of $\Phi$.¹ The truncated predictor is then $\hat f_L(x) \stackrel{\rm def}{=} T_L[f_{\hat\theta}(x)]$, where
$$T_L(u) \stackrel{\rm def}{=} \begin{cases} u & \text{if } |u| \leq L, \\ L\,\mathrm{sign}(u) & \text{otherwise.} \end{cases}$$
Truncation after the computation of the parameter $\hat\theta \in \mathbb{R}^N$, which is the solution of an unconstrained optimization problem, is easier than solving an optimization problem under the constraint that $\|\theta\|$ is small (which is the approach followed in [23]) and allows for consistency results and prediction bounds. Indeed, the excess risk of $\hat f_L$ is bounded as
$$\mathbb{E}(\|\hat f - f^*\|_P^2) \leq c_0 \max\{\sigma^2, L^2\} \frac{1 + \log K}{K}\, N + 8 \inf_{f \in \mathcal{F}_N} \|f - f^*\|_P^2 \qquad (5)$$
where a bound on $c_0$ is 9216 (see [13]). We have a simpler bound when we consider the expectation $\mathbb{E}_Y$ conditionally on the input data:
$$\mathbb{E}_Y(\|\hat f - f^*\|_{P_K}^2) \leq \sigma^2 \frac{N}{K} + \inf_{f \in \mathcal{F}_N} \|f - f^*\|_{P_K}^2 \qquad (6)$$
Remark: Note that because we use the quadratic loss function, by following the analysis in [3], or by deriving tight bounds on the Rademacher complexity [14] and following Theorem 5.2 of Koltchinskii's Saint Flour course, it is actually possible to state assumptions under which we can remove the $\log K$ term in (5). We will not further detail such bounds since our motivation here is not to provide the tightest possible bounds, but rather to show how the excess risk bound for LS regression in the initial domain extends to the compressed domain.

¹ In the full rank case, $\Phi^\dagger = (\Phi^T\Phi)^{-1}\Phi^T$ when $K \geq N$ and $\Phi^\dagger = \Phi^T(\Phi\Phi^T)^{-1}$ when $K \leq N$.
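As a concrete illustration, the following sketch (an assumption-laden toy implementation, not the authors' code) computes the truncated LS estimate $\hat f_L$ via the pseudo-inverse; the feature map, the bound $L$, and the synthetic data are made up for the example.

    import numpy as np

    def truncated_ls(Phi, Y, L):
        """Ordinary LS via the Penrose pseudo-inverse, with predictions
        truncated at level +/- L (the operator T_L from Section 3.1)."""
        theta_hat = np.linalg.pinv(Phi) @ Y           # theta_hat = Phi^dagger Y
        def predict(phi_x):
            return np.clip(phi_x @ theta_hat, -L, L)  # T_L[f_theta(x)]
        return theta_hat, predict

    # Toy example (all values hypothetical): K samples, N features.
    rng = np.random.default_rng(0)
    K, N, L = 50, 20, 2.0
    X = rng.uniform(-1, 1, size=K)
    Phi = np.column_stack([np.cos(np.pi * i * X) for i in range(N)])  # a feature map
    Y = np.sin(2 * X) + 0.1 * rng.standard_normal(K)                  # noisy targets
    theta_hat, predict = truncated_ls(Phi, Y, L)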
3.2 Compressed Least-Squares Regression (CLSR)

CLSR is defined as the ordinary LSR in the compressed domain. Let $\hat\beta = \Psi^\dagger Y \in \mathbb{R}^M$, where $\Psi$ is the $K \times M$ matrix with elements $(\psi_m(x_k))_{1 \leq m \leq M,\, 1 \leq k \leq K}$. The CLSR estimate is defined as $\hat g_L(x) \stackrel{\rm def}{=} T_L[g_{\hat\beta}(x)]$. From Theorem 1, (5) and (6), we deduce the following excess risk bounds for the CLSR estimate:

Corollary 1 For any $\delta > 0$, set
$$M = \sqrt{8}\,\frac{\|\alpha^+\|\sqrt{\mathbb{E}\|\phi(X)\|^2}}{\max(\sigma, L)}\sqrt{\frac{K\log(8K/\delta)}{c_0(1+\log K)}}.$$
Then whenever $M \geq 15\log(8K/\delta)$, with probability at least $1-\delta$, the expected excess risk of the CLSR estimate is bounded as
$$\mathbb{E}(\|\hat g_L - f^*\|_P^2) \leq 16\sqrt{c_0}\max\{\sigma, L\}\,\|\alpha^+\|\sqrt{\mathbb{E}\|\phi(X)\|^2}\,\sqrt{\frac{(1+\log K)\log(8K/\delta)}{K}}$$
$$\times \Big(1 + \frac{\sup_x\|\phi(x)\|^2}{\mathbb{E}\|\phi(X)\|^2}\sqrt{\frac{\log 4/\delta}{2K}}\Big) + 8\inf_{f\in\mathcal{F}_N}\|f - f^*\|_P^2. \qquad (7)$$
Now set $M = \frac{\|\alpha^+\|\sqrt{\mathbb{E}\|\phi(X)\|^2}}{\sigma}\sqrt{8K\log(8K/\delta)}$. Assume $N > K$ and that the features $(\phi_k)_{1\leq k\leq K}$ are linearly independent. Then whenever $M \geq 15\log(8K/\delta)$, with probability at least $1-\delta$, the expected excess risk of the CLSR estimate conditionally on the input samples is upper bounded as
$$\mathbb{E}_Y(\|\hat g_L - f^*\|_{P_K}^2) \leq 4\sigma\|\alpha^+\|\sqrt{\mathbb{E}\|\phi(X)\|^2}\,\sqrt{\frac{2\log(8K/\delta)}{K}}\,\Big(1 + \frac{\sup_x\|\phi(x)\|^2}{\mathbb{E}\|\phi(X)\|^2}\sqrt{\frac{\log 4/\delta}{2K}}\Big).$$
Proof: Whenever $M \geq 15\log(8K/\delta)$ we deduce from Theorem 1 and (5) that the excess risk of $\hat g_L$ is bounded as
$$\mathbb{E}(\|\hat g_L - f^*\|_P^2) \leq c_0\max\{\sigma^2, L^2\}\frac{1+\log K}{K}M$$
$$+\ 8\Big[\frac{8\log(8K/\delta)}{M}\|\alpha^+\|^2\Big(\mathbb{E}\|\phi(X)\|^2 + 2\sup_x\|\phi(x)\|^2\sqrt{\frac{\log 4/\delta}{2K}}\Big) + \inf_{f\in\mathcal{F}_N}\|f - f^*\|_P^2\Big].$$
By optimizing on $M$, we deduce (7). Similarly, using (6) we deduce the following bound on $\mathbb{E}_Y(\|\hat g_L - f^*\|_{P_K}^2)$:
$$\sigma^2\frac{M}{K} + \frac{8}{M}\log(8K/\delta)\,\|\alpha^+\|^2\Big(\mathbb{E}\|\phi(X)\|^2 + 2\sup_x\|\phi(x)\|^2\sqrt{\frac{\log 4/\delta}{2K}}\Big) + \inf_{f\in\mathcal{F}_N}\|f - f^*\|_{P_K}^2.$$
By optimizing on $M$ and noticing that $\inf_{f\in\mathcal{F}_N}\|f - f^*\|_{P_K}^2 = 0$ whenever $N > K$ and the features $(\phi_k)_{1\leq k\leq K}$ are linearly independent, we deduce the second result.
Remark 1 Note that the second term in the parenthesis of (7) is negligible whenever $K \gg \log(1/\delta)$. Thus we have the expected excess risk
$$\mathbb{E}(\|\hat g_L - f^*\|_P^2) = O\Big(\|\alpha^+\|\sqrt{\mathbb{E}\|\phi(X)\|^2}\,\sqrt{\frac{\log(K/\delta)}{K}}\Big) + \inf_{f\in\mathcal{F}_N}\|f - f^*\|_P^2. \qquad (8)$$
The choice of $M$ in the previous corollary depends on $\|\alpha^+\|$ and $\mathbb{E}\|\phi(X)\|$, which are a priori unknown (since $f^*$ and $P_X$ are unknown). If we set $M$ independently of $\|\alpha^+\|$, then an additional multiplicative factor of $\|\alpha^+\|$ appears in the bound, and if we replace $\mathbb{E}\|\phi(X)\|$ by its bound $\sup_x\|\phi(x)\|$ (which is known) then this latter factor will appear instead of the former in the bound.
Complexity of CLSR: The complexity of LSR for computing the regression function in the compressed domain only depends on $M$ and $K$, and is (see e.g. [4]) $\mathrm{Cx}(\mathcal{D}_K, \mathcal{G}_M, P) = O(K^2 M)$, which is of order $O(K^{5/2})$ when we choose the optimized number of projections $M = O(\sqrt{K})$. However, the leading term when using CLSR is the cost of building the $\Psi$ matrix: $O(NK^{3/2})$.
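To make the pipeline concrete, here is a minimal sketch of CLSR on synthetic data, assuming a dense Gaussian random projection $A$ with entries scaled by $1/\sqrt{M}$; the feature map, sizes, and data are illustrative assumptions, not prescriptions from the analysis.

    import numpy as np

    def clsr(Phi, Y, M, L, rng):
        """Compressed Least-Squares Regression: project the N features down to
        M random combinations, solve ordinary LS there, truncate at +/- L."""
        K, N = Phi.shape
        A = rng.standard_normal((M, N)) / np.sqrt(M)  # random projection (assumed Gaussian)
        Psi = Phi @ A.T                               # K x M compressed feature matrix, O(NKM)
        beta_hat = np.linalg.pinv(Psi) @ Y            # LS in the compressed domain
        def predict(phi_x):
            return np.clip((phi_x @ A.T) @ beta_hat, -L, L)
        return predict

    rng = np.random.default_rng(0)
    K, N, L = 100, 500, 2.0                           # N > K: the regime of interest
    X = rng.uniform(-1, 1, size=K)
    Phi = np.column_stack([np.cos(np.pi * i * X) for i in range(N)])
    Y = np.sin(2 * X) + 0.1 * rng.standard_normal(K)
    M = int(np.sqrt(K))                               # M = O(sqrt(K)) as in the corollary
    predict = clsr(Phi, Y, M, L, rng)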
4 Discussion

4.1 The factor $\|\alpha^+\|\sqrt{\mathbb{E}\|\phi(X)\|^2}$

In light of Corollary 1, the important factor which will determine whether CLSR provides low generalization error or not is $\|\alpha^+\|\sqrt{\mathbb{E}\|\phi(X)\|^2}$. This factor indicates that a good set of features (for CLSR) should be such that the norm of those features, as well as the norm of the parameter $\alpha^+$ of the projection of $f^*$ onto the span of those features, should be small. A natural question is whether this product can be made small for appropriate choices of features. We now provide two specific cases for which this is actually the case: (1) when the features are rescaled orthonormal basis functions, and (2) when the features are specific wavelet functions. In both cases, we relate the bound to an assumption of regularity on the function $f^*$, and show that the dependency w.r.t. $N$ decreases when the regularity increases, and may even vanish.
Rescaled Orthonormal Features: Consider a set of orthonormal functions $(\bar\phi_i)_{i\geq 1}$ w.r.t. a measure $\mu$, i.e. $\langle\bar\phi_i, \bar\phi_j\rangle_\mu = \delta_{i,j}$. In addition we assume that the law of the input data is dominated by $\mu$, i.e. $P_X \leq C\mu$ where $C$ is a constant. For instance, this is the case when the set $\mathcal{X}$ is compact, $\mu$ is the uniform measure and $P_X$ has bounded density.

We define the set of $N$ features as $\phi_i \stackrel{\rm def}{=} c_i\bar\phi_i$, where $c_i > 0$, for $i \in \{1, \dots, N\}$. Then any $f \in \mathcal{F}_N$ decomposes as $f = \sum_{i=1}^N \langle f, \bar\phi_i\rangle\bar\phi_i = \sum_{i=1}^N \frac{b_i}{c_i}\phi_i$, where $b_i \stackrel{\rm def}{=} \langle f, \bar\phi_i\rangle$. Thus we have $\|\alpha\|^2 = \sum_{i=1}^N (\frac{b_i}{c_i})^2$ and $\mathbb{E}\|\phi\|^2 = \sum_{i=1}^N c_i^2\int_{\mathcal{X}}\bar\phi_i(x)^2\, dP_X(x) \leq C\sum_{i=1}^N c_i^2$. Thus $\|\alpha^+\|^2\,\mathbb{E}\|\phi\|^2 \leq C\sum_{i=1}^N(\frac{b_i}{c_i})^2\sum_{i=1}^N c_i^2$.

Now, linear approximation theory (Jackson-type theorems) tells us that, assuming a function $f^* \in L_2(\mu)$ is smooth, it may be decomposed onto the span of the first $N$ functions $(\bar\phi_i)_{i\in\{1,\dots,N\}}$ with decreasing coefficients $|b_i| \leq i^{-\lambda}$ for some $\lambda \geq 0$ that depends on the smoothness of $f^*$. For example, the class of functions with bounded total variation may be decomposed with the Fourier basis (in dimension 1) with coefficients $|b_i| \leq \|f\|_V/(2\pi i)$. Thus here $\lambda = 1$. Other classes (such as Sobolev spaces) lead to larger values of $\lambda$ related to the order of differentiability.

By choosing $c_i = i^{-\lambda/2}$, we have $\|\alpha^+\|\sqrt{\mathbb{E}\|\phi\|^2} \leq \sqrt{C}\sum_{i=1}^N i^{-\lambda}$. Thus if $\lambda > 1$, then this term is bounded by a constant that does not depend on $N$. If $\lambda = 1$ then it is bounded by $O(\log N)$, and if $0 < \lambda < 1$, then it is bounded by $O(N^{1-\lambda})$.
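The following toy computation (illustrative only; the decay exponents and the normalization $C = 1$ are assumptions) checks the three regimes numerically by evaluating $\sqrt{C}\sum_{i=1}^N i^{-\lambda}$ for growing $N$.

    import numpy as np

    def rescaled_feature_factor(N, lam, C=1.0):
        """Upper bound sqrt(C) * sum_{i=1}^N i^{-lambda} on the key factor
        ||alpha+|| * sqrt(E ||phi(X)||^2) for rescaled orthonormal features."""
        i = np.arange(1, N + 1)
        return np.sqrt(C) * np.sum(i ** (-lam))

    for lam in (2.0, 1.0, 0.5):          # lambda > 1, = 1, < 1
        vals = [rescaled_feature_factor(N, lam) for N in (10, 100, 1000)]
        print(lam, [round(v, 2) for v in vals])
    # lambda=2.0 stays O(1); lambda=1.0 grows like log N; lambda=0.5 grows like sqrt(N).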
However, any orthonormal basis, even rescaled, would not necessarily yield a small $\|\alpha^+\|\sqrt{\mathbb{E}\|\phi\|^2}$ term (this is all the more true when the dimension of $\mathcal{X}$ is large). The desired property that the coefficients $(\alpha^+)_i$ of the decomposition of $f^*$ rapidly decrease to 0 indicates that hierarchical bases, such as wavelets, that would decompose the function at different scales, may be interesting.

Wavelets: Consider an infinite family of wavelets in $[0,1]$: $(\psi^0_n) = (\psi^0_{h,l})$ (indexed by $n \geq 1$ or equivalently by the scale $h \geq 0$ and translation $0 \leq l \leq 2^h - 1$) where $\psi^0_{h,l}(x) = 2^{h/2}\psi^0(2^h x - l)$ and $\psi^0$ is the mother wavelet. Then consider $N = 2^H$ features $(\phi_{h,l})_{1\leq h\leq H}$ defined as the rescaled wavelets $\phi_{h,l} \stackrel{\rm def}{=} c_h 2^{-h/2}\psi^0_{h,l}$, where $c_h > 0$ are some coefficients. Assume the mother wavelet is $C^p$ (for $p \geq 1$), has at least $p$ vanishing moments, and that for all $h \geq 0$, $\sup_x \sum_l \psi^0(2^h x - l)^2 \leq 1$. Then the following result (proof in the supplementary material) provides a bound on $\sup_{x\in\mathcal{X}}\|\phi(x)\|^2$ (thus on $\mathbb{E}\|\phi(X)\|^2$) by a constant independent of $N$:

Proposition 2 Assume that $f^*$ is $(L,\gamma)$-Lipschitz (i.e. for all $v \in \mathcal{X}$ there exists a polynomial $p_v$ of degree $\lfloor\gamma\rfloor$ such that for all $u \in \mathcal{X}$, $|f(u) - p_v(u)| \leq L|u - v|^\gamma$) with $1/2 < \gamma \leq p$. Then, setting $c_h = 2^{h(1-2\gamma)/4}$, we have $\|\alpha^+\|\sup_x\|\phi(x)\| \leq \frac{L\sqrt{2}}{1 - 2^{1/2-\gamma}}\int_0^1|\psi^0|$, which is independent of $N$.

Notice that the Haar wavelet has $p = 1$ vanishing moment but is not $C^1$, thus the Proposition does not apply directly. However, direct computations show that if $f^*$ is $L$-Lipschitz (i.e. $\gamma = 1$) then $|\alpha^0_{h,l}| \leq L\,2^{-3h/2-2}$, and thus $\|\alpha^+\|\sup_x\|\phi(x)\| \leq \frac{L}{4(1-2^{-1/2})}$ with $c_h = 2^{-h/4}$.
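As an illustration, the sketch below builds the rescaled Haar features $\phi_{h,l} = c_h 2^{-h/2}\psi^0_{h,l}$ with $c_h = 2^{-h/4}$ on $[0,1]$; the depth $H$ and the evaluation grid are arbitrary choices for the example, and the indexing yields $2^H - 1$ features under this convention.

    import numpy as np

    def haar_mother(t):
        """Haar mother wavelet psi^0 on [0, 1)."""
        return np.where((t >= 0) & (t < 0.5), 1.0,
                        np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

    def rescaled_haar_features(x, H):
        """phi_{h,l}(x) = c_h * 2^{-h/2} * 2^{h/2} psi^0(2^h x - l) = c_h psi^0(2^h x - l),
        with c_h = 2^{-h/4}, for h = 0..H-1 and l = 0..2^h - 1."""
        feats = []
        for h in range(H):
            c_h = 2.0 ** (-h / 4.0)
            for l in range(2 ** h):
                feats.append(c_h * haar_mother((2.0 ** h) * x - l))
        return np.column_stack(feats)   # one column per (h, l) feature

    x = np.linspace(0, 1, 200, endpoint=False)
    Phi = rescaled_haar_features(x, H=5)
    print(Phi.shape, np.abs(Phi).max())  # feature norms stay bounded as H grows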
4.2 Comparison with other methods

In the case when the factor $\|\alpha^+\|\sqrt{\mathbb{E}\|\phi(X)\|^2}$ does not depend on $N$ (such as in the previous example), the bound (8) on the excess risk of CLSR states that the estimation error (assessed in terms of $\mathcal{F}_N$) of CLSR is $O(\log K/\sqrt{K})$. It is clear that whenever $N > \sqrt{K}$ (which is the case of interest here), this is better than the ordinary LSR in the initial domain, whose estimation error is $O(N\log K/K)$.

It is difficult to compare this result with LASSO (or the Dantzig selector, which has similar properties [5]), for which an important aspect is to design sparse regression functions or to recover a solution assumed to be sparse. From [12, 15, 24] one deduces that, under some assumptions, the estimation error of LASSO is of order $\frac{S\log N}{K}$ where $S$ is the sparsity (number of non-zero coefficients) of the best regressor $f^+$ in $\mathcal{F}_N$. If $S < \sqrt{K}$ then LASSO is more interesting than CLSR in terms of excess risk. Otherwise CLSR may be an interesting alternative, although this method does not make any assumption about the sparsity of $f^+$ and its goal is not to recover a possibly sparse $f^+$ but only to make good predictions. However, in some sense our method finds a sparse solution, in the fact that the regression function $\hat g_L$ lies in a space $\mathcal{G}_M$ of small dimension $M \ll N$ and can thus be expressed using only $M$ coefficients.

Now, in terms of numerical complexity, CLSR requires $O(NK^{3/2})$ operations to build the matrix and compute the regression function, whereas according to [18], the (heuristic) complexity of the LASSO algorithm is $O(NK^2)$ in the best cases (assuming that the number of steps required for convergence is $O(K)$, which is not proved theoretically). Thus CLSR seems to be a good and simple competitor to LASSO.
5 Conclusion

We considered the case when the number of features $N$ is larger than the number of data $K$. The result stated in Theorem 1 enables us to analyze the excess risk of any linear regression algorithm (LS or its penalized versions) performed in the compressed domain $\mathcal{G}_M$ versus in the initial space $\mathcal{F}_N$. In the compressed domain the estimation error is reduced, but an additional (controlled) approximation error (when compared to the best regressor in $\mathcal{F}_N$) comes into the picture. In the case of LS regression, when the term $\|\alpha^+\|\sqrt{\mathbb{E}\|\phi(X)\|^2}$ has a mild dependency on $N$, then by choosing a random subspace of dimension $M = O(\sqrt{K})$, CLSR has an estimation error (assessed in terms of $\mathcal{F}_N$) bounded by $O(\log K/\sqrt{K})$ and has numerical complexity $O(NK^{3/2})$.

In short, CLSR provides an alternative to usual penalization techniques where one first selects a random subspace of lower dimension and then performs empirical risk minimization in this subspace. Further work needs to be done to provide additional settings (when the space $\mathcal{X}$ is of dimension $> 1$) for which the term $\|\alpha^+\|\sqrt{\mathbb{E}\|\phi(X)\|^2}$ is small.

Acknowledgements: The authors wish to thank Laurent Jacques for numerous comments and Alessandro Lazaric and Mohammad Ghavamzadeh for exciting discussions. This work has been supported by the French National Research Agency (ANR) through the COSINUS program (project EXPLO-RA, ANR-08-COSI-004).
References
[1] Dimitris Achlioptas. Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences, 66(4):671–687, June 2003.
[2] Nir Ailon and Bernard Chazelle. Approximate nearest neighbors and the fast Johnson-Lindenstrauss transform. In STOC '06: Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, pages 557–563, New York, NY, USA, 2006. ACM.
[3] Jean-Yves Audibert and Olivier Catoni. Risk bounds in linear regression through PAC-Bayesian truncation. Technical Report HAL: hal-00360268, 2009.
[4] David Bau III and Lloyd N. Trefethen. Numerical linear algebra. Philadelphia: Society for Industrial and Applied Mathematics, 1997.
[5] Peter J. Bickel, Ya'acov Ritov, and Alexandre B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. To appear in Annals of Statistics, 2008.
[6] Avrim Blum. Random projection, margins, kernels, and feature-selection. Subspace, Latent Structure and Feature Selection, pages 52–68, 2006.
[7] Robert Calderbank, Sina Jafarpour, and Robert Schapire. Compressed learning: Universal sparse dimensionality reduction and learning in the measurement domain. Technical Report, 2009.
[8] Emmanuel Candes and Terence Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 35:2313, 2007.
[9] Emmanuel J. Candes and Justin K. Romberg. Signal recovery from random projections. Volume 5674, pages 76–86. SPIE, 2005.
[10] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 20:33–61, 1998.
[11] Mark A. Davenport, Michael B. Wakin, and Richard G. Baraniuk. Detection and estimation with compressive measurements. Technical Report TREE 0610, Department of Electrical and Computer Engineering, Rice University, 2006.
[12] E. Greenshtein and Y. Ritov. Persistency in high dimensional linear predictor-selection and the virtue of over-parametrization. Bernoulli, 10:971–988, 2004.
[13] L. Györfi, M. Kohler, A. Krzyżak, and H. Walk. A distribution-free theory of nonparametric regression. Springer-Verlag, 2002.
[14] Sham M. Kakade, Karthik Sridharan, and Ambuj Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Daphne Koller, Dale Schuurmans, Yoshua Bengio, and Leon Bottou, editors, Neural Information Processing Systems, pages 793–800. MIT Press, 2008.
[15] Yuval Nardi and Alessandro Rinaldo. On the asymptotic properties of the group Lasso estimator for linear models. Electron. J. Statist., 2:605–633, 2008.
[16] D. Pollard. Convergence of Stochastic Processes. Springer-Verlag, New York, 1984.
[17] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. Neural Information Processing Systems, 2007.
[18] Saharon Rosset and Ji Zhu. Piecewise linear regularized solution paths. Annals of Statistics, 35:1012, 2007.
[19] Robert Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58:267–288, 1994.
[20] A. N. Tikhonov. Solution of incorrectly formulated problems and the regularization method. Soviet Math Dokl 4, pages 1035–1038, 1963.
[21] Yaakov Tsaig and David L. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52:1289–1306, 2006.
[22] Vladimir N. Vapnik. The nature of statistical learning theory. Springer-Verlag New York, Inc., New York, NY, USA, 1995.
[23] Tong Zhang. Covering number bounds of certain regularized linear function classes. Journal of Machine Learning Research, 2:527–550, 2002.
[24] Tong Zhang. Some sharp performance bounds for least squares regression with L1 regularization. To appear in Annals of Statistics, 2009.
[25] Shuheng Zhou, John D. Lafferty, and Larry A. Wasserman. Compressed regression. In John C. Platt, Daphne Koller, Yoram Singer, and Sam T. Roweis, editors, Neural Information Processing Systems. MIT Press, 2007.
2,977 | 3,699 | Nonlinear directed acyclic structure learning
with weakly additive noise models
Peter Spirtes
Arthur Gretton
Robert E. Tillman
Carnegie Mellon University Carnegie Mellon University, Carnegie Mellon University
Pittsburgh, PA
MPI for Biological Cybernetics
Pittsburgh, PA
[email protected]
Pittsburgh, PA
[email protected]
[email protected]
Abstract
The recently proposed additive noise model has advantages over previous
directed structure learning approaches since it (i) does not assume linearity
or Gaussianity and (ii) can discover a unique DAG rather than its Markov
equivalence class. However, for certain distributions, e.g. linear Gaussians,
the additive noise model is invertible and thus not useful for structure
learning, and it was originally proposed for the two variable case with a
multivariate extension which requires enumerating all possible DAGs. We
introduce weakly additive noise models, which extends this framework to
cases where the additive noise model is invertible and when additive noise
is not present. We then provide an algorithm that learns an equivalence
class for such models from data, by combining a PC style search using recent
advances in kernel measures of conditional dependence with local searches
for additive noise models in substructures of the Markov equivalence class.
This results in a more computationally efficient approach that is useful for
arbitrary distributions even when additive noise models are invertible.
1 Introduction
Learning probabilistic graphical models from data serves two primary purposes: (i) finding compact representations of probability distributions to make inference efficient and (ii)
modeling unknown data generating mechanisms and predicting causal relationships. Until
recently, most constraint-based and score-based algorithms for learning directed graphical
models from continuous data required assuming relationships between variables are linear
with Gaussian noise. While this assumption may be appropriate in many contexts, there are
well known contexts, such as fMRI images, where variables have nonlinear dependencies and
data do not tend towards Gaussianity. A second major limitation of the traditional algorithms is they cannot identify a unique structure; they reduce the set of possible structures
to an equivalence class which entail the same Markov properties. The recently proposed additive noise model [1] for structure learning addresses both limitations; by taking advantage
of observed nonlinearity and non-Gaussianity, a unique directed acyclic structure can be
identified in many contexts. However, it too suffers from limitations: (i) for certain distributions, e.g. linear Gaussians, the model is invertible and not useful for structure learning;
(ii) it was originally proposed for two variables with a multivariate extension that requires
enumerating all possible DAGs, which is super-exponential in the number of variables.
In this paper, we address the limitations of the additive noise model. We introduce weakly
additive noise models, which have the advantages of additive noise models, but are still
useful when the additive noise model is invertible and in most cases when additive noise is
not present. Weakly additive noise models allow us to express greater uncertainty about the
1
data generating mechanism, but can still identify a unique structure or a smaller equivalence
class in most cases. We also provide an algorithm for learning an equivalence class for such
models from data that is more computationally efficient in the more than two variables case.
Section 2 reviews the appropriate background; section 3 introduces weakly additive noise
models; section 4 describes our learning algorithm; section 5 discusses some related research;
section 6 presents some experimental results; finally, section 7 offers conclusions.
2 Background
Let G = ⟨V, E⟩ be a directed acyclic graph (DAG), where V denotes the set of vertices and E_ij ∈ E denotes a directed edge Vi → Vj. Vi is a parent of Vj and Vj is a child of Vi. For Vi ∈ V, Pa^G_Vi denotes the parents of Vi and Ch^G_Vi denotes the children of Vi. The degree of Vi is the number of edges with an endpoint at Vi. A v-structure is a triple ⟨Vi, Vj, Vk⟩ ⊆ V such that {Vi, Vk} ⊆ Pa^G_Vj. A v-structure is immoral, or an immorality, if E_ik ∉ E and E_ki ∉ E.

A joint distribution ℙ over variables corresponding to nodes in V is Markov with respect to G if ℙ(V) = ∏_{Vi∈V} ℙ(Vi | Pa^G_Vi). ℙ is faithful to G if every conditional independence true in ℙ is entailed by the above factorization. A partially directed acyclic graph (PDAG) H for G is a mixed graph, i.e. consisting of directed and undirected edges, representing all DAGs Markov equivalent to G, i.e. DAGs entailing exactly the same conditional independencies. If Vi → Vj is a directed edge in H, then all DAGs Markov equivalent to G have this directed edge; if Vi − Vj is an undirected edge in H, then some DAGs that are Markov equivalent to G have the directed edge Vi → Vj while others have the directed edge Vi ← Vj.

The PC algorithm is a well known constraint-based, or conditional independence based, structure learning algorithm. It is an improved greedy version of the SGS [2] and IC [3] algorithms, shown below.
Input: Observed data for variables in V
Output: PDAG G over nodes V
1  G ← the complete undirected graph over the variables in V
2  For {Vi, Vj} ⊆ V, if ∃S ⊆ V\{Vi, Vj} such that Vi ⊥⊥ Vj | S, remove the Vi − Vj edge
3  For {Vi, Vj, Vk} ⊆ V such that Vi − Vj and Vj − Vk remain as edges, but Vi − Vk does not remain, if ∄S ⊆ V\{Vi, Vj, Vk} such that Vi ⊥⊥ Vk | {S ∪ Vj}, orient Vi → Vj ← Vk
4  Orient edges to prevent additional immoralities and cycles using the Meek rules [4]
Algorithm 1: SGS/IC algorithm
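A minimal sketch of the skeleton-and-collider phase is given below, assuming a generic conditional-independence oracle ci_test(i, j, S) (hypothetical; in practice a statistical test such as the kernel test of Section 4); it illustrates steps 1–3 only, not a full PC implementation.

    from itertools import combinations

    def sgs_skeleton(variables, ci_test):
        """Steps 1-2 of Algorithm 1: start complete, remove Vi - Vj whenever
        some S separates them. Returns the skeleton and the separating sets."""
        edges = {frozenset(p) for p in combinations(variables, 2)}
        sepset = {}
        for i, j in combinations(variables, 2):
            rest = [v for v in variables if v not in (i, j)]
            done = False
            for size in range(len(rest) + 1):
                for S in combinations(rest, size):
                    if ci_test(i, j, set(S)):          # Vi independent of Vj given S
                        edges.discard(frozenset((i, j)))
                        sepset[frozenset((i, j))] = set(S)
                        done = True
                        break
                if done:
                    break
        return edges, sepset

    def orient_colliders(edges, sepset, variables):
        """Step 3: orient Vi -> Vj <- Vk when Vj is not in sepset(Vi, Vk)."""
        arrows = set()
        for i, k in combinations(variables, 2):
            if frozenset((i, k)) in edges:
                continue
            for j in variables:
                if (frozenset((i, j)) in edges and frozenset((j, k)) in edges
                        and j not in sepset.get(frozenset((i, k)), set())):
                    arrows.add((i, j))
                    arrows.add((k, j))
        return arrows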
Instead of searching all subsets of V\{Vi, Vj} for an S such that Vi ⊥⊥ Vj | S, PC (i) initially sets S = ∅ for all {Vi, Vj} pairs, (ii) checks to see if any edges can be removed based on the results of conditional independence tests with these S sets, and (iii) iteratively increases the cardinality of S considered until no Vk ∈ V has degree greater than |S|. S is only considered if it is a subset of nodes connected to Vi or Vj at the current iteration. PC learns the correct PDAG in the large sample limit when the Markov, faithfulness, and causal sufficiency (that there are no unmeasured common causes of two or more measured variables) assumptions hold [2]. The partial correlation based Fisher Z-transformation test, which assumes linear Gaussian distributions, is used for conditional independence testing with continuous variables. The statistical advantage of PC is that it limits the number of tests performed, particularly those with large conditioning sets. This also yields a computational advantage since the number of possible tests is exponential in |V|.
The recently proposed additive noise model approach to structure learning [1] assumes only that each variable can be represented as a (possibly nonlinear) function f of its parents plus additive noise ε with some arbitrary distribution, and that the noise components are mutually independent, i.e. ℙ(ε₁, …, εₙ) = ∏_{i=1}^n ℙ(εᵢ). Consider the two variable case where X → Y is the true DAG, X = ε_X, Y = sin(πX) + ε_Y, ε_X ∼ Unif(−1, 1), and ε_Y ∼ Unif(−1, 1). If we regress Y on X (nonparametrically), the forward model (figure 1a), and
regress X on Y, the backward model (figure 1b), we observe that the residual ε̂_Y ⊥⊥ X while ε̂_X is not independent of Y. This provides a criterion for distinguishing X → Y from X ← Y in many cases, but there are counterexamples such as the linear Gaussian case, where the forward model is invertible, so we find ε̂_Y ⊥⊥ X and ε̂_X ⊥⊥ Y. [1, 5] show, however, that whenever f is nonlinear, the forward model is noninvertible, and when f is linear, the forward model is only invertible when ε is Gaussian and in a few other special cases. Another limitation of this approach is that it is not closed under marginalization of intermediary variables when f is nonlinear, e.g. for X → Y → Z with X = ε_X, Y = X³ + ε_Y, Z = Y³ + ε_Z, ε_X ∼ Unif(−1, 1), ε_Y ∼ Unif(−1, 1), and ε_Z ∼ Unif(0, 1), observing only X and Z (figures 1c and 1d) causes us to reject both the forward and backward models. [5] shows this method can be generalized to more variables. To test whether a DAG is compatible with the data, we regress each variable on its parents and test whether the resulting residuals are mutually independent. This procedure is impractical even for a few variables, however, since the number of possible DAGs grows super-exponentially with the number of variables, e.g. there are ≈ 4.2 × 10¹⁸ DAGs with 10 nodes. Since we do not assume linearity or Gaussianity in this framework, a sufficiently powerful nonparametric independence test must be used. Typically, the Hilbert-Schmidt Independence Criterion [6] is used, which we now define.

[Figure 1: Nonparametric regressions with data overlayed for (a) Y regressed on X, (b) X regressed on Y, (c) Z regressed on X, and (d) X regressed on Z.]
Typically, the Hilbert Schmidt Independence Criterion [6] is used, which we now define.
Let X be a random variable with domain X . A Hilbert space HX of functions from X to R
is a reproducing kernel Hilbert space (RKHS) if for some kernel k(?, ?) (the reproducing kernel
for HX ), for every f (?) ? HX and x ? X , the inner product hf (?), k(x, ?)iHX = f (x). We may
treat k(x, ?) as a mapping of x to the feature space HX . For x, x? ? X , hk(x, ?), k(x? , ?)iHX =
k(x, x? ), so we can compute inner products efficiently in this high dimensional space. The
Moore-Aronszajn theorem shows that all symmetric positive definite kernels (most popular
kernels) are reproducing kernels that uniquely define corresponding RKHSs [7]. Let Y be
a random variable with domain Y and l(?, ?) the reproducing kernel for HY . We define the
mean map ?X and cross covariance CXY as follows, using ? to denote the tensor product.
?X = EX [k(x, ?)]
CXY = ([k(x, ?) ? ?X ] ? [l(y, ?) ? ?Y ])
If the kernels are characteristic, e.g. Gaussian and Laplace kernels, the mean map is injective
[8, 9, 10] so distinct probability distributions have different mean maps. The Hilbert Schmidt
Independence Criteria (HSIC) HXY = kCXY k2HS measures the dependence of X and Y ,
where k ? kHS denotes the Hilbert Schmidt norm. [9] shows HXY = 0 if and only if X ?
?Y
for characteristic kernels. For m paired i.i.d. samples, let K and L be Gram matrices for
? = HKH and L
? = HLH
k(?, ?) and l(?, ?), i.e. kij = k(xi , xj ). For H = IN ? N1 1N 1TN , let K
? XY = 1 tr K
?L
? , where tr denotes the trace, is an empirical
be centered Gram matrices. H
m2
estimator for HXY [6]. To determine the threshold of a level-? statistical test, we can use
? XY for multiple random assignments of the
the permutation approach (where we compute H
Y samples to X, and use the 1 ? ? quantile of the resulting empirical distribution over
? XY ), or a Gamma approximation to the null distribution of mH
? XY (see [6] for details).
H
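A compact sketch of the empirical estimator Ĥ_XY with Gaussian kernels follows; the fixed bandwidths and the permutation threshold are common conventions, assumed here rather than taken from the paper.

    import numpy as np

    def gaussian_gram(x, sigma):
        """Gram matrix K with k(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
        diff = x[:, None] - x[None, :]
        d2 = np.square(diff).sum(axis=-1) if diff.ndim > 2 else np.square(diff)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def hsic(x, y, sigma_x=1.0, sigma_y=1.0):
        """Empirical HSIC: (1/m^2) tr(K~ L~) with centered Gram matrices."""
        m = len(x)
        H = np.eye(m) - np.ones((m, m)) / m
        Kc = H @ gaussian_gram(x, sigma_x) @ H
        Lc = H @ gaussian_gram(y, sigma_y) @ H
        return np.trace(Kc @ Lc) / m ** 2

    def hsic_permutation_test(x, y, alpha=0.05, n_perm=1000, rng=None):
        """Level-alpha test: compare HSIC(x, y) to the permutation null.
        Returns (statistic, reject), where reject=True means 'dependent'."""
        rng = rng or np.random.default_rng(0)
        stat = hsic(x, y)
        null = np.array([hsic(x, y[rng.permutation(len(y))]) for _ in range(n_perm)])
        return stat, stat > np.quantile(null, 1 - alpha)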
3 Weakly additive noise models

We now extend the additive noise model framework to account for cases where additive noise models are invertible and cases where additive noise may not be present.
Definition 3.1. θ = ⟨Vi, Pa^G_Vi⟩ is a local additive noise model for a distribution ℙ over V that is Markov to a DAG G = ⟨V, E⟩ if Vi = f(Pa^G_Vi) + ε is an additive noise model.

Definition 3.2. A weakly additive noise model M = ⟨G, Θ⟩ for a distribution ℙ over V is a DAG G = ⟨V, E⟩ and a set of local additive noise models Θ, such that ℙ is Markov to G, θ ∈ Θ if and only if θ is a local additive noise model for ℙ, and for every ⟨Vi, Pa^G_Vi⟩ ∉ Θ there is no Vj ∈ Pa^G_Vi such that there exists some graph G′ (not necessarily related to ℙ) with Vi ∈ Pa^{G′}_Vj and ⟨Vj, Pa^{G′}_Vj⟩ a local additive noise model for ℙ.
When we assume a data generating process has a weakly additive noise model representation, we assume only that there are no cases where X → Y can be written X = f(Y) + ε_X, but not Y = f(X) + ε_Y. In other words, the data cannot appear as though it admits an additive noise model representation, but only in the incorrect direction. This representation is still appropriate when additive noise models are invertible, and when additive noise is not present: such cases only lead to weakly additive noise models which express greater underdetermination of the true data generating process.

We now define the notion of distribution-equivalence for weakly additive noise models.

Definition 3.3. A weakly additive noise model M = ⟨G, Θ⟩ is distribution-equivalent to N = ⟨G′, Θ′⟩ if and only if G and G′ are Markov equivalent and θ ∈ Θ if and only if θ ∈ Θ′.

Distribution-equivalence defines what can be discovered about the true data generating mechanism using observational data. We now define a new structure to partition data generating processes which instantiate distribution-equivalent weakly additive noise models.
Definition 3.4. A weakly additive noise partially directed acyclic graph (WAN-PDAG) for M = ⟨G, Θ⟩ is a mixed graph H = ⟨V, E⟩ such that for {Vi, Vj} ⊆ V,

1. Vi → Vj is a directed edge in H if and only if Vi → Vj is a directed edge in G and in all G′ such that N = ⟨G′, Θ′⟩ is distribution-equivalent to M

2. Vi − Vj is an undirected edge in H if and only if Vi → Vj is a directed edge in G and there exists a G′ and N = ⟨G′, Θ′⟩ distribution-equivalent to M such that Vi ← Vj is a directed edge in G′

We now get the following results.

Lemma 3.1. Let M = ⟨G, Θ⟩ be a weakly additive noise model, ⟨Vi, Pa^G_Vi⟩ ∈ Θ, and N = ⟨G′, Θ′⟩ be distribution-equivalent to M. Then Pa^G_Vi = Pa^{G′}_Vi and Ch^G_Vi = Ch^{G′}_Vi.

Proof. Since M and N are distribution-equivalent, Pa^G_Vi = Pa^{G′}_Vi. Thus, Ch^G_Vi = Ch^{G′}_Vi.

Theorem 3.1. The WAN-PDAG for M = ⟨G, Θ⟩ is constructed by (i) adding all directed and undirected edges in the PDAG instantiated by M, (ii) for each ⟨Vi, Pa^G_Vi⟩ ∈ Θ, directing all Vj ∈ Pa^G_Vi as Vj → Vi and all Vk ∈ Ch^G_Vi as Vi → Vk, and (iii) applying the extended Meek rules [4], treating orientations made using Θ as background knowledge.

Proof. (i) This is correct because of Markov equivalence [2]. (ii) This is correct by lemma 3.1. (iii) These rules are correct and complete [4].
WAN-PDAGs can be used to identify the same information about the data generating mechanism as additive noise models, when additive noise models are identifiable, but they provide a more powerful representation of uncertainty and can be used to discover more information when additive noise models are unidentifiable. The next section describes an efficient algorithm for learning WAN-PDAGs from data.
4 The Kernel PC (kPC) algorithm
We now describe the Kernel PC (kPC) algorithm¹, which consists of two stages: (i) a constraint-based search using the PC algorithm with a nonparametric conditional independence test (the Fisher Z test is inappropriate since we want to allow nonlinearity and non-Gaussianity) to identify the Markov equivalence class, and (ii) a "PC-style" search for noninvertible additive noise models in submodels of the Markov equivalence class.
In the first stage, we use a kernel-based conditional dependence measure similar to HSIC [9] (see also [11, Section 2.2] for a related quantity with a different normalization). For a conditioning variable Z with centered Gram matrix M̃ for a reproducing kernel m(·,·), we define the conditional cross covariance C_ẌŸ|Z = C_ẌŸ − C_ẌZ C_ZZ⁻¹ C_ZŸ, where Ẍ = (X, Z) and Ÿ = (Y, Z). Let H_XY|Z = ‖C_ẌŸ|Z‖²_HS. It follows from [9, Theorem 3] that H_XY|Z = 0 if and only if X ⊥⊥ Y | Z when kernels are characteristic. [9] provides the empirical estimator:
$$\hat{H}_{XY|Z} = \frac{1}{m^2}\,\mathrm{tr}\Big(\tilde K\tilde L - 2\tilde K\tilde M(\tilde M + \epsilon I)^{-2}\tilde M\tilde L + \tilde K\tilde M(\tilde M + \epsilon I)^{-2}\tilde M\tilde L\tilde M(\tilde M + \epsilon I)^{-2}\tilde M\Big)$$
The null distribution of Ĥ_XY|Z is unknown and difficult to derive, so we must use the permutation approach described in section 2. This is not straightforward since permuting X or Y while leaving Z fixed changes the marginal distribution of X given Z or Y given Z. We thus (making analogy to the discrete case) must cluster Z and then permute elements only within clusters for the permutation test, as in [12].
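A direct (naive, O(m³)) sketch of the estimator follows, with the regularizer ε and the within-cluster permutation scheme shown schematically; the clustering of Z (a simple quartile binning) is an assumption for illustration only.

    import numpy as np

    def centered(K):
        m = K.shape[0]
        H = np.eye(m) - np.ones((m, m)) / m
        return H @ K @ H

    def hsic_conditional_naive(K, L, M, eps=1e-3):
        """Naive O(m^3) evaluation of the empirical H_hat_{XY|Z}; K, L, M are
        (uncentered) Gram matrices for X, Y, and the conditioning variable Z."""
        Kc, Lc, Mc = centered(K), centered(L), centered(M)
        m = K.shape[0]
        inv = np.linalg.inv(Mc + eps * np.eye(m))
        R = Mc @ inv @ inv @ Mc                    # M~ (M~ + eps I)^{-2} M~
        return np.trace(Kc @ Lc - 2 * Kc @ R @ Lc + Kc @ R @ Lc @ R) / m ** 2

    def within_cluster_permutation(z, rng):
        """Permute sample indices only within (assumed) clusters of Z."""
        bins = np.digitize(z, np.quantile(z, [0.25, 0.5, 0.75]))
        idx = np.arange(len(z))
        for b in np.unique(bins):
            sel = np.where(bins == b)[0]
            idx[sel] = rng.permutation(sel)
        return idx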
This first stage is not computationally efficient, however, since each evaluation of Ĥ_XY|Z is naively O(m³) and we need to evaluate Ĥ_XY|Z approximately 1000 times for each permutation test. Fortunately, we see from [13, Appendix C] that the eigenspectra of Gram matrices for Gaussian kernels decay very rapidly, so low rank approximations of these matrices can be obtained even when using a very conservative threshold. We implemented the incomplete Cholesky factorization [14], which can be used to obtain an m × p matrix G, where p ≪ m, and an m × m permutation matrix P such that K ≈ PGGᵀPᵀ, where K is an m × m Gram matrix. A clever implementation, after replacing Gram matrices in Ĥ_XY|Z with their incomplete Cholesky factorizations and using an appropriate equivalence to invert GᵀG + εI_p for M̃ instead of GGᵀ + εI_m, results in a straightforward O(mp³) operation. Unfortunately, this is not numerically stable unless a relatively large regularizer ε is chosen or only a small number of columns are used in the incomplete Cholesky factorizations.
A more stable (and faster) approach is to obtain incomplete Cholesky factorizations G_X, G_Y, and G_Z with permutation matrices P_X, P_Y, and P_Z, and then obtain the thin SVDs of HP_XG_X, HP_YG_Y, and HP_ZG_Z, e.g. HPG = USVᵀ, where U is m × p, S is the p × p diagonal matrix of singular values, and V is p × p. Now define diagonal matrices S̃_X, S̃_Y, and S̃_Z and matrices G̃_X, G̃_Y, and G̃_Z as follows:
$$\tilde{s}^X_{ii} = (s^X_{ii})^2 \qquad \tilde{s}^Y_{ii} = (s^Y_{ii})^2 \qquad \tilde{s}^Z_{ii} = \frac{(s^Z_{ii})^2}{(s^Z_{ii})^2 + \epsilon}$$
$$\tilde{G}_X = U^X\tilde{S}^X U^{X\top} \qquad \tilde{G}_Y = U^Y\tilde{S}^Y U^{Y\top} \qquad \tilde{G}_Z = U^Z\tilde{S}^Z U^{Z\top}$$
We can then compute
$$\hat{H}_{XY|Z} = \frac{1}{m^2}\,\mathrm{tr}\big(\tilde{G}_X\tilde{G}_Y - 2\tilde{G}_X\tilde{G}_Z\tilde{G}_Y + \tilde{G}_X\tilde{G}_Z\tilde{G}_Y\tilde{G}_Z\big)$$
stably and efficiently in O(mp³) by choosing an appropriate associative ordering of the matrix multiplications.
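The sketch below implements this low-rank route under stated assumptions: a hand-rolled greedy pivoted (incomplete) Cholesky with a fixed tolerance (neither numpy's built-ins nor the paper's MATLAB code is being reproduced), thin SVDs via numpy, and the trace evaluated through small p × p cross-products so that no m × m product is ever formed.

    import numpy as np

    def incomplete_cholesky(K, tol=1e-6):
        """Greedy pivoted (incomplete) Cholesky of a PSD Gram matrix K:
        returns G (m x p) with K ~= G G^T, stopping when the residual
        trace falls below tol. Rows stay in original order here, so no
        explicit permutation matrix is needed."""
        m = K.shape[0]
        d = np.diag(K).copy()
        G = np.zeros((m, 0))
        while d.sum() > tol and G.shape[1] < m:
            i = int(np.argmax(d))
            g = (K[:, i] - G @ G[i]) / np.sqrt(d[i])
            G = np.column_stack([G, g])
            d = np.maximum(np.diag(K) - (G ** 2).sum(axis=1), 0.0)
        return G

    def tilde_factors(K, eps=None):
        """Center the low-rank factor (equivalent to H @ G), take its thin SVD,
        and return (U, s~): G~ = U diag(s~) U^T, with s~ = s^2, or
        s^2/(s^2 + eps) for the conditioning variable."""
        G = incomplete_cholesky(K)
        Gc = G - G.mean(axis=0)
        U, s, _ = np.linalg.svd(Gc, full_matrices=False)
        s2 = s ** 2
        return U, (s2 / (s2 + eps) if eps is not None else s2)

    def hsic_conditional_lowrank(Kx, Ky, Kz, eps=1e-3):
        """(1/m^2) tr(Gx~ Gy~ - 2 Gx~ Gz~ Gy~ + Gx~ Gz~ Gy~ Gz~)."""
        Ux, sx = tilde_factors(Kx)
        Uy, sy = tilde_factors(Ky)
        Uz, sz = tilde_factors(Kz, eps=eps)
        m = Kx.shape[0]
        B = Ux.T @ Uy                      # p_x x p_y
        C = Ux.T @ Uz                      # p_x x p_z
        D = Uz.T @ Uy                      # p_z x p_y
        T = (C * sz[None, :]) @ D          # represents C Sz~ D
        t1 = np.sum(sx[:, None] * B ** 2 * sy[None, :])   # tr(Gx~ Gy~)
        t2 = np.sum(sx[:, None] * T * B * sy[None, :])    # tr(Gx~ Gz~ Gy~)
        t3 = np.sum(sx[:, None] * T ** 2 * sy[None, :])   # tr(Gx~ Gz~ Gy~ Gz~)
        return (t1 - 2 * t2 + t3) / m ** 2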
Figure 2 shows that this method leads to a significant increase in speed when used with a permutation test for conditional independence, without significantly affecting the empirically observed type I error rate for a level-.05 test.

¹ MATLAB code may be obtained from http://www.andrew.cmu.edu/~rtillman/kpc
[Figure 2: Runtime and Empirical Type I Error Rate — runtime (minutes) and empirical type I error rate vs. sample size for the naive computation and the incomplete Cholesky + SVD computation. Results are over the generation of 20 3-node DAGs for which X ⊥⊥ Y | Z and the generating distribution was Gaussian.]
In the second stage, we look for additive noise models in submodels of the Markov equivalence class because (i) it may be more efficient to do so and require fewer tests, since orientations implied by an additive noise model may imply further orientations, and (ii) we may find more orientations by considering submodels, e.g. if all relations are linear and only one variable has a non-Gaussian noise term. The basic strategy used is a "PC-style" greedy search where we look for undirected edges in the current mixed graph (starting with the PDAG resulting from the first stage) adjacent to the fewest other undirected edges. If these edges can be oriented using additive noise models, we make the implied orientations, apply the extended Meek rules, and then iterate until no more edges can be oriented. Algorithm 2 provides pseudocode. Let G = ⟨V, E⟩ be the resulting PDAG and, for each Vi ∈ V, let U^G_Vi denote the nodes connected to Vi in G by an undirected edge. We get the following results.
Input: PDAG G = ⟨V, E⟩
Output: WAN-PDAG G = ⟨V, E⟩
 1  s ← 1
 2  while max_{Vi∈V} |U^G_Vi| ≥ s do
 3      foreach Vi ∈ V such that |U^G_Vi| = s, or |U^G_Vi| < s and U^G_Vi was updated, do
 4          s′ ← s
 5          while s′ > 0 do
 6              foreach S ⊆ U^G_Vi such that |S| = s′ and, for each Sk ∈ S, orienting Sk → Vi does not create an immorality do
 7                  Nonparametrically regress Vi on Pa^G_Vi ∪ S and compute the residual ε̂^Vi_S
 8                  if ε̂^Vi_S ⊥⊥ S and there is no Vj ∈ S and S′ ⊆ U^G_Vj such that regressing Vj on Pa^G_Vj ∪ S′ ∪ {Vi} results in a residual ε̂^Vj_{S′∪{Vi}} ⊥⊥ S′ ∪ {Vi} then
 9                      foreach Sk ∈ S, orient Sk → Vi, and foreach Ul ∈ U^G_Vi \ S, orient Vi → Ul
10                      Apply the extended Meek rules
11                      foreach Vm ∈ V, update U^G_Vm; set s′ = 1 and break
12                  end
13              end
14              s′ ← s′ − 1
15          end
16      end
17      s ← s + 1
18  end
Algorithm 2: Second Stage of kPC
Lemma 4.1. If an edge is oriented in the second stage of kPC, it is implied by a noninvertible local additive noise model.

Proof. If the condition at line 8 is true, then ⟨Vi, Pa^G_Vi ∪ S⟩ is a noninvertible local additive noise model. All Ul ∈ U^G_Vi \ S must be children of Vi by lemma 3.1.
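For intuition, here is a two-variable sketch of the orientation test used at lines 7–8: fit a nonparametric regression each way (a simple Nadaraya-Watson smoother is assumed purely for illustration) and keep the direction whose residuals pass the HSIC independence test; hsic_permutation_test is the sketch from Section 2.

    import numpy as np

    def nw_regress(x, y, bandwidth=0.2):
        """Nadaraya-Watson kernel regression of y on x; returns fitted values."""
        w = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
        return (w @ y) / w.sum(axis=1)

    def orient_pair(x, y, alpha=0.05):
        """Return '->' if X -> Y is supported (forward residual independent of X,
        backward residual dependent on Y), '<-' for the reverse, None otherwise."""
        res_fwd = y - nw_regress(x, y)     # forward model residual
        res_bwd = x - nw_regress(y, x)     # backward model residual
        _, fwd_dep = hsic_permutation_test(x, res_fwd, alpha)  # True = dependent
        _, bwd_dep = hsic_permutation_test(y, res_bwd, alpha)
        if not fwd_dep and bwd_dep:
            return '->'
        if fwd_dep and not bwd_dep:
            return '<-'
        return None   # invertible or no additive noise model: leave undirected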
[Figure 3: Precision and Recall — precision (top row) and recall (bottom row) vs. sample size for kPC, PC, GES, and LiNGAM under three conditions: linear Gaussian, linear non-Gaussian, and nonlinear non-Gaussian.]
Lemma 4.2. Suppose θ = ⟨Vi, W⟩ is a noninvertible local additive noise model. Then kPC will make all orientations implied by θ.

Proof. Let S̄ = W \ Pa^G_Vi for Pa^G_Vi at the current iteration. kPC must terminate with s > |S̄| since |S̄| ≤ |U^G_Vi|, so S = S̄ at some iteration. Since ⟨Vi, Pa^G_Vi ∪ S̄⟩ is a noninvertible local additive noise model, line 8 is satisfied, so all edges connected to Vi are oriented.
Theorem 4.1. Assume data is generated according to some weakly additive noise model M = ⟨G, Θ⟩. Then kPC will return the WAN-PDAG instantiated by M assuming perfect conditional independence information, Markov, faithfulness, and causal sufficiency.

Proof. The PC algorithm is correct and complete with respect to conditional independence [2]. Orientations made with respect to additive noise models are correct by lemma 4.1, and all such orientations that can be made are made by lemma 4.2. The Meek rules, which are correct and complete [4], are invoked after each orientation made with respect to additive noise models, so they are invoked after all such orientations are made.
5 Related research
kPC is similar in spirit to the PC-LiNGAM structure learning algorithm [15], which assumes
dependencies are linear with either Gaussian or non-Gaussian noise. PC-LiNGAM combines
the PC algorithm with LiNGAM to learn structures referred to as ngDAGs. KCL [11] is
a heuristic search for a mixed graph that uses the same kernel-based dependence measures
as kPC (while not determining significance thresholds via a hypothesis test), but does
not take advantage of additive noise models. [16] provides a more efficient algorithm for
learning additive noise models, by first finding a causal ordering after doing a series of
high dimensional regressions and HSIC independence tests and then pruning the resulting
DAG implied by this ordering. Finally, [17] proposes a two-stage procedure for learning
additive noise models from data that is similar to kPC, but requires the additive noise
model assumptions in the first stage where the Markov equivalence class is identified.
6 Experimental results
To evaluate kPC, we generated 20 random 7-node DAGs using the MCMC algorithm in [18]
and sampled 1000 data points from each DAG under three conditions: linear dependencies
with Gaussian noise, linear dependencies with non-Gaussian noise, and nonlinear dependencies with non-Gaussian noise. We generated non-Gaussian noise using the same procedure
as [19] and used polynomial and trigonometric functions for nonlinear dependencies.
We compared kPC to PC, the score-based GES with the BIC-score [20], and the ICA-based
LiNGAM [19], which assumes linear dependencies and non-Gaussian noise. We applied two
metrics in measuring performance vs sample size: precision, i.e. proportion of directed edges
in the resulting graph that are in the true DAG, and recall, i.e. proportion of directed edges
in the true DAG that are in the resulting graph. Figure 3 reports the results. In the linear
Gaussian case, we see PC shows slightly better performance than kPC in precision, which is
unsurprising since PC assumes linear Gaussian distributions. Only LiNGAM shows better
recall, but worse precision. LiNGAM performs significantly better than the other algorithms
in the linear non-Gaussian case. kPC performs about the same as PC in precision and recall,
which again is unsurprising since previous simulation results have shown that nonlinearity,
but not non-Gaussianity can significantly affect the performance of PC. In the nonlinear
non-Gaussian case, kPC performs slightly better than PC in precision. We note, however,
that in some of these cases the performance of kPC was significantly better.²
We also ran kPC on data from an fMRI experiment that is analyzed in [21] where nonlinear
dependencies can be observed. Figure 4 shows the structure that kPC learned, where each
of the nodes corresponds to a particular brain region. This structure is the same as the one
learned by the (GES-style) iMAGES algorithm in [21] except for the absence of one edge.
However, iMAGES required background knowledge to direct the edges. kPC successfully
found the same directed edges without using any background knowledge. Domain experts
in neuroscience have confirmed the plausibility of the observed relationships.

[Figure 4: Structures learned by kPC and iMAGES. Nodes correspond to the brain regions LIPL, LOCC, LACC, LIFG, LMTG, and I.]
7 Conclusion
We introduced weakly additive noise models, which extend the additive noise model framework to cases such as the linear Gaussian, where the additive noise model is invertible and
thus unidentifiable, as well as cases where additive noise is not present. The weakly additive
noise framework allows us to identify a unique DAG when the additive noise model assumptions hold, and a structure that is at least as specific as a PDAG (possibly still a unique
DAG) when some additive noise assumptions fail. We defined equivalence classes for such
models and introduced the kPC algorithm for learning these equivalence classes from data.
Finally, we found that the algorithm performed well on both synthetic and real data.
Acknowledgements
We thank Dominik Janzing and Bernhard Schölkopf for helpful comments. RET was funded by a grant from the James S. McDonnell Foundation. AG was funded by DARPA IPTO
FA8750-09-1-0141, ONR MURI N000140710747, and ARO MURI W911NF0810242.
² When simulating nonlinear data, we must be careful to ensure that variances do not blow up
and result in data for which no finite sample method can show adequate performance. This has the
unfortunate side effect that the nonlinear data generated may be well approximated using linear
methods. Future research will consider more sophisticated methods for simulating data that is more
appropriate when comparing kPC to linear methods.
References
[1] P. O. Hoyer, D. Janzing, J. M. Mooij, J. Peters, and B. Schölkopf. Nonlinear causal discovery with additive noise models. In Advances in Neural Information Processing Systems 21, 2009.
[2] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. 2nd edition, 2000.
[3] J. Pearl. Causality: Models, Reasoning, and Inference. 2000.
[4] C. Meek. Causal inference and causal explanation with background knowledge. In Proceedings of the 11th Conference on Uncertainty in Artificial Intelligence, 1995.
[5] K. Zhang and A. Hyvärinen. On the identifiability of the post-nonlinear causal model. In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, 2009.
[6] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A kernel statistical test of independence. In Advances in Neural Information Processing Systems 20, 2008.
[7] Nachman Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, 68(3):337–404, 1950.
[8] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample-problem. In Advances in Neural Information Processing Systems 19, 2007.
[9] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In Advances in Neural Information Processing Systems 20, 2008.
[10] B. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Injective Hilbert space embeddings of probability measures. In Proceedings of the 21st Annual Conference on Learning Theory, 2008.
[11] X. Sun, D. Janzing, B. Schölkopf, and K. Fukumizu. A kernel-based causal learning algorithm. In Proceedings of the 24th International Conference on Machine Learning, 2007.
[12] X. Sun. Causal inference from statistical data. PhD thesis, Max Planck Institute for Biological Cybernetics, 2008.
[13] F. R. Bach and M. I. Jordan. Kernel independent component analysis. Journal of Machine Learning Research, 3:1–48, 2002.
[14] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2:243–264, 2001.
[15] P. O. Hoyer, A. Hyvärinen, R. Scheines, P. Spirtes, J. Ramsey, G. Lacerda, and S. Shimizu. Causal discovery of linear acyclic models with arbitrary distributions. In Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, 2008.
[16] J. M. Mooij, D. Janzing, J. Peters, and B. Schölkopf. Regression by dependence minimization and its application to causal inference in additive noise models. In Proceedings of the 26th International Conference on Machine Learning, 2009.
[17] K. Zhang and A. Hyvärinen. Acyclic causality discovery with additive noise: An information-theoretical perspective. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases 2009, 2009.
[18] G. Melançon, I. Dutour, and M. Bousquet-Mélou. Random generation of DAGs for graph drawing. Technical Report INS-R0005, Centre for Mathematics and Computer Sciences, 2000.
[19] S. Shimizu, P. Hoyer, A. Hyvärinen, and A. Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003–2030, 2006.
[20] D. M. Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 3:507–554, 2002.
[21] J. D. Ramsey, S. J. Hanson, C. Hanson, Y. O. Halchenko, R. A. Poldrack, and C. Glymour. Six problems for causal inference from fMRI. NeuroImage, 2009. In press.
2,978 | 37 |
TIME-SEQUENTIAL SELF-ORGANIZATION OF HIERARCHICAL
NEURAL NETWORKS
Ronald H. Silverman
Cornell University Medical College, New York, NY 10021
Andrew S. Noetzel
Polytechnic University, Brooklyn, NY 11201
ABSTRACT
Self-organization of multi-layered networks can be realized
by time-sequential organization of successive neural layers.
Lateral inhibition operating in the surround of firing cells in
each layer provides for unsupervised capture of excitation
patterns presented by the previous layer. By presenting patterns
of increasing complexity, in co-ordination with network self-organization, higher levels of the hierarchy capture concepts
implicit in the pattern set.
INTRODUCTION
A fundamental difficulty in self-organization of
hierarchical, multi-layered, networks of simple neuron-like cells
is the determination of the direction of adjustment of synaptic
link weights between neural layers not directly connected to input
or output patterns.
Several different approaches have been used
to address this problem. One is to provide teaching inputs to the
cells in internal layers of the hierarchy.
Another is use of
back-propagated error signals [1,2] from the uppermost neural layer,
which is fixed to a desired output pattern.
A third is the
"competitive learning" mechanism [3],
in which a Hebbian synaptic
modification rule is used, with mutual inhibition among cells of
each layer preventing them from becoming conditioned to the same
patterns.
The use of explicit teaching inputs is generally felt to be
undesirable because such signals must,
in essence, provide
individual direction to each neuron in internal layers of the
network. This requires extensive control signals, and is somewhat
contrary to the notion of a self-organizing system.
Back-propagation provides direction for link weight
modification of internal layers based on feedback from higher
neural layers. This method allows true self-organization, but at
the cost of specialized neural pathways over which these feedback
signals must travel.
In this report, we describe a simple feed-forward method for
self-organization of hierarchical neural networks. The method is
a variation of the technique of competitive learning.
It calls
for successive neural layers to initiate modification of their
afferent synaptic link weights only after the previous layer has
completed its own self-organization. Additionally, the nature of
the patterns captured can be controlled by providing an organized
© American Institute of Physics 1988
group of pattern sets which would excite the lowermost (input)
layer of the network in concert with training of successive
layers. Such a collection of pattern sets might be viewed as a
"lesson plan."
MODEL
The network is composed of neuron-like cells, organized in
hierarchical layers.
Each cell is excited by variably weighted
afferent connections from the outputs of the previous (lower)
layer. Cells of the lowest layer take on the values of the input
pattern.
The cells themselves are of the McCulloch-Pitts type:
they fire only after their excitation exceeds a threshold, and are
otherwise inactive.
Let S_i(t) ∈ {0,1} be the state of cell i at
time t. Let w_ij, a real number ranging from 0 to 1, be the
weight, or strength, of the synapse connecting cell i to cell j.
Let e_ij be the local excitation of cell i at the synaptic
connection from cell j. The excitation received along each
synaptic connection is integrated locally over time as follows:

    e_ij(t) = e_ij(t-1) + w_ij S_i(t)                    (1)
Synaptic connections may, therefore, be viewed as capacitive.
The total excitation, E_j, is the sum of the local excitations of
cell j:

    E_j(t) = Σ_i e_ij(t)                                 (2)
The use of the time-integrated activity of a synaptic
connection between two neurons,
instead of the more usual
instantaneous classification of neurons as "active" or "inactive",
permits each synapse to provide a statistical measure of the
activity of the input, which is assumed to be inherently
stochastic.
It also embodies the principle of learning based on
locally available information and allows for implementations of
the synapse as a capacitive element.
Over time, the total excitation of individual neurons on a
given layer will increase. When excitation exceeds a threshold, a,
then the neuron fires; otherwise it is inactive:

    S_j(t) = 1 if E_j(t) > a, else 0                     (3)
During a neuron's training phase, a modified Hebbian rule
results in changes in afferent synaptic link weights such that,
upon firing, synapses with integrated activity greater than mean
activity are reinforced, and those with less than mean activity
are weakened. More formally, if S_j(t) = 1 then the synapse
weights are modified by

    Δw_ij = k sign(e_ij - E_j/n) sin(π w_ij)             (4)
Here, n represents the fan-in to a cell, and k is a small,
positive constant. The "sign" function specifies the direction of
change and the "sine" function determines the magnitude of
change.
The sine curve provides the property that intermediate
link weights are subject to larger modifications than weights near
zero or saturation.
This helps provide for stable end-states
after learning.
Another effect of the integration of synaptic activity may be
seen.
A synapse of small weight is allowed to contribute to the
firing of a cell (and hence have its weight incremented) if a
series of patterns presented to the network consistently excite
that synapse.
The sequence of pattern presentations, therefore,
becomes a factor in network self-organization.
Upon firing, the active cell inhibits other cells in its
vicinity (lateral inhibition). This mechanism supports
unsupervised, competitive learning.
By preventing cells in the
neighborhood of an active cell from modifying their afferent
connections in response to a pattern, they are left available for
capture of new patterns.
Suppose there are n cells in a
particular level.
The lateral inhibitory mechanism is specified
as follows:

    If S_j(t) = 1 then e_ik(t) = 0
    for all i, for k = (j-m) mod n to (j+m) mod n        (5)
Here, m specifies the size of a "neighborhood."
A neighborhood
significantly larger than a pattern set will result in a number of
untrained cells. A neighborhood smaller than the pattern set will
tend to cause cells to attempt to capture more than one pattern.
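A minimal simulation sketch of the cell dynamics in equations (1)-(5), written
in Python, might look as follows; the sizes match the first simulation reported
below, the threshold and learning constant are illustrative assumptions, and
the weight update follows the reconstruction of equation (4) above:

    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_out = 16, 25        # 4x4 input fan-in, 25-cell layer
    a, k, m = 3.0, 0.05, 5      # firing threshold, learning constant, neighborhood

    w = rng.uniform(0.0, 0.1, size=(n_in, n_out))   # small random link weights
    e = np.zeros((n_in, n_out))                     # capacitive local excitations

    def step(s_in):
        """One clock cycle: integrate (1)-(2), fire (3), learn (4), inhibit (5)."""
        global e
        e += w * s_in[:, None]                      # (1) e_ij(t) = e_ij(t-1) + w_ij S_i(t)
        E = e.sum(axis=0)                           # (2) E_j = sum over i of e_ij
        fired = np.flatnonzero(E > a)               # (3) fire when E_j exceeds threshold a
        for j in fired:
            mean = E[j] / n_in                      # mean integrated activity at cell j
            w[:, j] += k * np.sign(e[:, j] - mean) * np.sin(np.pi * w[:, j])  # (4)
            np.clip(w[:, j], 0.0, 1.0, out=w[:, j])
            ring = [(j + d) % n_out for d in range(-m, m + 1)]
            e[:, ring] = 0.0                        # (5) lateral inhibition in the surround
        return len(fired)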
Schematic representations of an individual cell and the
network organization are provided in Figures 1 and 2.
It is the pattern generator, or "instructor", that controls
the form that network organization will take. The initial set of
patterns are repeated until the first layer is trained.
Next, a
new pattern set is used to excite the lowermost (trained) level of
the network, and so, induce training in the next layer of the
hierarchy.
Each of the patterns of the new set is composed of
elements (or subpatterns) of the old set.
The structure of
successive pattern sets is such that each set is either a more
complex combination of elements from the previous set (as words
are composed of letters) or a generalization of some concept
implicit in the previous set (such as line orientation).
Network organization, as described above, requires some
exchange of control
signals between the
network and
the
instructor.
The instructor requires information regarding firing
of cells during training in order to switch to new patterns
appropriately.
Obviously, if patterns are switched before any
cells fire, learning will either not take place or will be smeared
over a number of patterns.
If a single pattern excites the
network until one or more cells are fully trained, subsequent
presentation of a non-orthogonal pattern could cause the trained
cell to fire before any naive cell because of its saturated link
weights.
The solution is simply to allow gradual training over
the full complement of the pattern set.
After a few firings, a
new pattern should be provided.
After a layer has been trained,
the instructor provides a control signal to that layer which
permanently fixes the layer's afferent synaptic link weights.
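Under these assumptions, the instructor's control loop might be sketched as
follows; present, converged, and freeze are hypothetical helpers standing in
for the excitation, capture-detection, and weight-fixing mechanisms described
above:

    def train_hierarchy(layers, lesson_plan, firings_per_pattern=3):
        # Hypothetical instructor loop: organize one layer at a time, cycling
        # through its pattern set and switching patterns after a few firings.
        for layer, pattern_set in zip(layers, lesson_plan):
            while not layer.converged():          # e.g., every cell has captured a pattern
                for pattern in pattern_set:
                    fired = 0
                    while fired < firings_per_pattern:
                        fired += layer.present(pattern)   # excite the network; count firings
            layer.freeze()                        # permanently fix afferent link weights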
Fig. 1. Schematic of neuron. Shading of afferent synaptic connections
indicates variations in levels of local time-integrated excitation.

Fig. 2. Schematic of network showing lateral inhibition and forward
excitation. Shading of neurons, indicating degree of training, indicates
time-sequential organization of successive neural layers.
SIMULATIONS
As an example, simulations were run in which a network was
taught to differentiate vertical from horizontal line
orientation. This problem is of interest because it represents a
case in which pattern sets cannot be separated by a single layer
of connections.
This is so because the set of vertical (or
horizontal) lines has activity at all positions within the input
matrix.
Two variations were simulated. In the first simulation, the
input was a 4x4 matrix.
This was completely connected with
unidirectional links to 25 cells. These cells had fixed
inhibitory connections to the nearest five cells on either side
(using a circular arrangement), and excited, using complete
connectivity, a ring of eight cells, with inhibition over the
nearest neighbor on either side.
Initially, all excitatory link weights were small, random
numbers. Each pattern of the initial input consisted of a single
active row or column in the input matrix.
Active elements had,
during any clock cycle, a probability of 0.5 of being "on", while
inactive elements had a 0.05 probability of being "on."
After exposure to the initial pattern set, all cells on the
first layer captured some input pattern, and all eight patterns
had been captured by two or more cells.
The next pattern set consisted of two subsets of four
vertical and four horizontal lines.
The individual lines were
presented until a few firings took place within the trained layer,
and then another line from the same subset was used to excite the
network.
After the upper layer responded with a few firings, and
some training occurred, the other set was used to excite the
network in a similar manner. After five cycles, all cells on the
uppermost layer had become sensitive, in a positionally independent
manner, to lines of a vertical or a horizontal orientation. Due
to lateral inhibition, adjacent cells developed opposite
orientation specificities.
In the second simulation, a 6x6 input matrix was connected to
six cells, which were, in turn, connected to two cells. For this
network, the lateral inhibitory range extended over the entire set
of cells of each layer.
The initial input set consisted of six
patterns, each of which was a pair of either vertical lines or
horizontal lines.
After excitation by this set, each of the six
middle level cells became sensitized to one of the input
patterns. Next, the set of vertical and horizontal patterns were
grouped into two subsets:
vertical lines and horizontal lines.
Individual patterns from one subset were presented until a cell,
of the previously trained layer, fired.
After one of the two
cells on the uppermost layer fired, the procedure was repeated
with the pattern set of opposite orientation.
After 25 cycles,
the two cells on the uppermost layer had developed opposite
orientation specificities.
Each of these cells was shown to be
responsive, in a positionally independent manner, to any single
line of appropriate orientation.
CONCLUSION
Competitive learning mechanisms, when applied sequentially to
successive layers in a hierarchical structure, can capture pattern
elements, at lower levels of the hierarchy, and their
generalizations, or abstractions, at higher levels.
In the above mechanism, learning is externally directed, not
by explicit teaching signals or back-propagation, but by provision
of instruction sets consisting of patterns of increasing
complexity, to be input to the lowermost layer of the network in
concert with successive organization of higher neural layers.
The central difficulty of this method involves the design of
pattern sets - a procedure whose requirements may not be obvious
in all cases.
The method is, however, attractive due to its
simplicity of concept and design, providing for multi-level self-organization without direction by elaborate control signals.
Several research goals suggest themselves: 1) simplification
or elimination of control signals, 2) generalization of rules for
structuring of pattern sets, 3) extension of this learning
principle to recurrent networks,
and 4) gaining a deeper
understanding of the role of time as a factor in network self-organization.
REFERENCES
1. D. E. Rumelhart and G. E. Hinton, Nature 323, 533 (1986).
2. K. A. Fukushima, Biol. Cybern. 55, 5 (1986).
3. D. E. Rumelhart and D. Zipser, Cog. Sci. 9, 75 (1985).
2,979 | 370 | Exploiting Syllable Structure
in a Connectionist Phonology Model
David S. Touretzky Deirdre W. Wheeler
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3890
Abstract
In a previous paper (Touretzky & Wheeler, 1990a) we showed how adding a
clustering operation to a connectionist phonology model produced a parallel processing account of certain "iterative" phenomena. In this paper we show how the
addition of a second structuring primitive, syllabification, greatly increases the
power of the model. We present examples from a non-Indo-European language
that appear to require rule ordering to at least a depth of four. By adding syllabification circuitry to structure the model's perception of the input string, we are
able to handle these examples with only two derivational steps. We conclude that
in phonology, derivation can be largely replaced by structuring.
1 Introduction
In linguistics a grammar is an abstract formal system describing a language. The term
psycho-grammar has been suggested for systems that express the linguistic knowledge
that actually exists in speakers' heads (George, 1989). Psycho-grammars may differ from
grammars as a result of performance demands, limited memory capacity, or other aspects
of mental representations. Psycho-grammars are still somewhat abstract, in that they
are concerned with mental rather than physical phenomena. The term physio-grammar
(George, 1989) refers to the physical representation of grammatical knowledge in neural
structures, such as (perhaps) synapse strengths. Detailed proposals for physio-grammars
do not yet exist; the field of neurolinguistics is insufficiently advanced to support such
proposals at present.
We are developing a theory of phonology that is compatible with gross constraints on
neural processing and cognitive plausibility. Our research, then, is on the construction of
psycho-grammars at the phonological level. We use a connectionist model to demonstrate
the computational feasibility of the psycho-grammar architecture we propose. In this paper
we show how the addition of syllabification as a primitive operation greatly increases the
scope and power of the model at little computational cost.

    M-level --(M-P Rules)--> P-level --(P-F Rules)--> F-level

Figure 1: Structure of the model.
2 Structure of the Model
Our model, shown in Figure 1, has three levels of representation. Following Lakoff (1989),
they are labeled M, P, and F. The M, or morpho-phonemic level, is a sequence of phonemes
constructed by concatenating abstract underlying representations of morphemes. The P, or
phonemic level, is an intermediate representation that is constrained to hold syllabically
well-formed strings. The F, or phonetic level, is the surface level representation: a sequence
of phonetic segments. Derivations are performed by mapping strings from M to P level,
and then from P to F level, under the control of a set of language-specific rules. These
rules alter the mapping in various ways to effect processes such as voicing assimilation and
vowel harmony.
The model has a number of important constraints. Rules at a given level (M-P or P-F)
apply in a single parallel step during the mapping from one level to the next. There
is no iterative rule application. "Iterative" processes are instead handled by a parallel
clustering mechanism described in Touretzky & Wheeler (1990a,1991). The connectionist
implementation uses limited-depth, strictly feed-forward circuitry, so the model has minimal
computational complexity.
Another very important constraint is that only two levels of derivation are provided, M-P
and P-F, so there is no room for the long chains of ordered rules that other phonological
theories permit. However, in standard analyses some languages appear to require long rule
chains. The problem for those who want to eliminate such chains on grounds of cognitive
implausibility 1 is to reformulate existing linguistic analyses to account for the data in some
1 Here we are referring to Goldsmith (1990) and Lakoff (1989), as well as our own work.
    /hro+aht+u/                          /ʌ+k+hrek+ʔ/
    hro htu      vowel deletion          ʌ́khrekʔ       stress
    hró htu      stress                  ʌ́khrekeʔ      epenthesis
    ró htu       initial h-deletion      ʌ́khregeʔ      pre-son. voicing
    ró hdu       pre-son. voicing
    [ró hdu]     "he has disappeared"    [ʌ́khregeʔ]    "I will push it"

Figure 2: Two Mohawk derivations.
other way. This is not always easy to do, especially in our model, which is more tightly
constrained than either the Goldsmith or Lakoff proposals. Such reformulations help us
to see how psycho-grammar diverges from grammar when computational constraints are
taken into consideration.
3 A Problem From Mohawk
In Mohawk, an American Indian language, stress is placed on the penultimate syllable of a
word. Since there are processes in Mohawk that add and delete vowels from words, their
interaction with the stress rule is problematic. Figure 2 shows two Mohawk derivations
in a standard generative account.2 The first example shows us that vowel deletion must
precede stress assignment. The penultimate vowel /a/ in the underlying form does not
appear in the surface form of the word. Instead stress is assigned to the preceding vowel,
/o/, which is the penultimate vowel in the surface form. The second example shows that
stress assignment must precede vowel epenthesis (insertion), because the epenthetic /e/ that
appears in the surface form is not counted when determining the penultimate vowel. Since
the epenthetic /e/ is also the trigger for presonorant voicing in this example, we see that
voicing must be ordered after vowel epenthesis. Together these two examples indicate the
following rule ordering: Vowel deletion < Stress < Epenthesis < Pre-sonorant voicing.
But this is a depth of four, and our model permits only two levels of derivation. We therefore
must produce an alternative account of these four processes that requires fewer derivations.
To do so, we rely on three features of the model: parallel rule application, multi-level
representations, and a structuring primitive: syllabification.
4 Representation of Syllable Structure
Most insertion and deletion operations are syllabically-motivated (Ito, 1989). By adding
a syllabification mechanism to our model, we can replace certain derivational (string-rewriting) steps with more constrained and perhaps cognitively less taxing structuring
steps. Linguists represent syllables as tree structures, as in the left portion of Figure 3. The
nucleus of the syllable is normally a vowel. Any preceding consonants form the onset, and
any following consonants the coda. The combined nucleus and coda make up the rime. In
the middle portion of Figure 3 the syllabic structure of the English word "tokens" (phonetic
transcription [tokənz]) is shown in this hierarchically structured form. The right portion
shows how we encode the same information in our model using a set of onset, nucleus,
2 These examples, derived from Halle & Clements (1983), are cited in Lakoff (1989). We thank
Marianne Mithun (p.c.) for correcting an error in the original data.
                 t   o   k   ə   n   z
    onset:       +       +
    nucleus:         +       +
    coda:                        +   +

Figure 3: Representations for syllable structure. Left: a generic syllable
tree with onset and rime, the rime branching into nucleus and coda. Middle:
the tree for "tokens" [tokənz]. Right: the same information encoded as
onset, nucleus, and coda bits, as above.
    M: hroahtu    --(vowel del., stress: M-P)-->        P: hróhtu
                  --(h-del., pre-son. voicing: P-F)-->  F: róhdu

    M: ʌkhrekʔ    --(epenthesis, stress: M-P)-->        P: ʌ́khrekeʔ
                  --(pre-son. voicing: P-F)-->          F: ʌ́khregeʔ

Figure 4: Our solution to the Mohawk problem.
and coda bits, or ONC bits for short. We have no explicit representation for rimes, but this
could be added if necessary.
In Mohawk, the vowel deletion and epenthesis processes are both syllabically motivated.
Vowel deletion enforces a constraint against branching nuclei. 3 Epenthesis inserts a vowel
to break up a word-final consonant cluster (/k/ followed by glottal stop /ʔ/) that would be an
illegal syllable coda. Our contention is that syllabification operates on the M-level string
by setting the associated ONC bits in such a way that the P-level string will be syllabically
well-formed. The ONC bits share control with the M-P rules of the mapping from M to P
level.
Every M-level segment must have one of its ONC bits set in order to be mapped to P-level.
Thus, the syllabifier can cause a vowel to be deleted at P simply by failing to set its nucleus
bit, as occurs for the /a/ in /hroahtu/ in Figure 4. For the /ʌkhrekʔ/ example, note in Figure 4
that the /k/ has been marked as an onset by the syllabifier and the /ʔ/ as a coda; there is
no intervening nucleus. This automatically triggers an insertion by the M-P map, so that a
vowel will appear between these two segments at P-level. The vowel chosen is the default
or "unmarked" vowel for that particular language; for Mohawk it is /e/. For further details
of the syllabification algorithm, see Touretzky & Wheeler (1990b).
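For a simple CV string, a toy version of ONC-bit assignment might be sketched
as follows in Python; this ignores the language-specific repairs (such as
Mohawk's deletion and epenthesis) that the real syllabifier encodes by
withholding or adding bits:

    def mark_onc(segments, is_vowel):
        # Toy ONC-bit assignment: every vowel is a nucleus, a single
        # immediately preceding consonant is its onset, and any remaining
        # consonants are codas.
        n = len(segments)
        onset, nucleus, coda = [False] * n, [False] * n, [False] * n
        for i, seg in enumerate(segments):
            if is_vowel(seg):
                nucleus[i] = True
                if i > 0 and not is_vowel(segments[i - 1]):
                    onset[i - 1] = True
        for i in range(n):
            if not (onset[i] or nucleus[i]):
                coda[i] = True
        return onset, nucleus, coda

    # e.g. mark_onc(list("tokenz"), lambda s: s in "aeiou") recovers the
    # Figure 3 bit pattern, with "e" standing in for the schwa of [tokənz].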
The left half of Figure 5 shows our formulation of the Mohawk stress rule, which assigns
stress to the penultimate nucleus of a word. Rather than looking directly at the M-level
buffer, the rule looks at the "projection" of the nucleus tier. By this we mean the M-level
substring consisting of those segments whose nucleus bit is set. The # symbol indicates
a word boundary. Since vowels deleted by the syllabifier have no nucleus bit set, and
3 This constraint is not shared by all languages. Furthermore, deletion is only one possible solution;
another would be to insert a consonant or glide, such as /w/, to separate the vowels into different
syllables. Each language makes its own choices about how constraint violations are to be repaired.
    M[nucleus]:   [ ]   [ ]   #            P:   [-son] [+son]
                   |                              |
    P:         [+stress]                   F:  [+voice]

Figure 5: Rules for Mohawk stress (M-P) and presonorant voicing (P-F).
epenthetic vowels that will be inserted by the syllabifier have no nucleus bit at M-level,
insertion and deletion processes can proceed in parallel with stress assignment. At P-level,
all that's left to be done in this example is pre-sonorant voicing, handled by the P-F rule
shown in the right half of the figure.
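A sketch of the Figure 5 stress rule, operating on the projection of the
nucleus tier:

    def penultimate_stress(nucleus_bits):
        # Mohawk stress (Figure 5): on the projection of the nucleus tier, the
        # element standing two before the word boundary # receives [+stress].
        # Vowels deleted by the syllabifier, and epenthetic vowels not yet
        # inserted, carry no nucleus bit, so this can run in parallel with
        # deletion and epenthesis.
        projection = [i for i, b in enumerate(nucleus_bits) if b]
        return projection[-2] if len(projection) >= 2 else None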
5 More Complex Stress Rules
In Mohawk, stress falls on the penultimate syllable regardless of the internal structure of
the syllable. This stress assignment rule is quite simple compared to some other languages.
For example, "quantity sensitive" languages make distinctions among syllable types for
purposes of stress assignment. A syllable consisting of an optional onset and a single, short
vowel in the rime is normally said to be "light," while syllables with codas and/or long
vowels (often represented as double nuclei) are designated "heavy," and typically attract
stress. Thus, for example, in Aguacatec Mayan (Hayes, 1981) stress falls on the rightmost
syllable with a long vowel, otherwise the final syllable.
In order to account for syllable weight distinctions we introduce an additional level of
representation, as illustrated in Figure 6 using C and V to represent consonants and vowels,
respectively. The "mora" bit is activated for all segments that contribute to syllable weight
in the language. In this particular language only vowels are important for determining the
weight of syllables, so the mora bit is activated for all and only the vocalic segments. Once
moras have been identified, universal principles come into play, and bits for "syllable"
and "heavy syllable" are set. The syllable bit is activated for the first of a sequence of
one or more moras; the heavy syllable bit is activated for syllables containing two or more
moras. With this enriched representation, the stress patterns of quantity-sensitive languages
can be straightforwardly generated. To stress the last heavy syllable, we assign [+stress]
to segments on the heavy syllable tier that have word boundaries to their right. (Word
boundaries must be projected down to the heavy syllable tier for this purpose.)
Languages like Yana (Hayes, 1981), in which both long vowels and codas make syllables
heavy, have a slightly different representation at the mora level. In these languages, coda
consonants as well as vocalic segments trigger the activation of the mora bit, as illustrated
in Figure 7. Here again, while specification of what counts as a mora is a language-specific
parameter, once the mora bits are set the syllable and heavy syllable representations follow
from universal principles. The mora bit is activated for any segment which has either the
nucleus or coda bit set, essentially collapsing the nucleus and coda tiers. The Yana stress
rule targets the leftmost heavy syllable in a word, no matter how far it might occur from
the initial word boundary, or the first syllable if none are heavy. The latter case requires a
separate rule with a slightly more complex environment; rules of this form are discussed in
Wheeler & Touretzky (1991).
[Tier diagram over a CV string, with rows for onset, nucleus, coda, mora,
syllable, and heavy syllable; only vocalic segments project mora bits.]

Figure 6: Long vowels make syllables heavy in Aguacatec Mayan.
[Tier diagram as in Figure 6, but with coda consonants as well as vocalic
segments projecting mora bits.]

Figure 7: Long vowels or codas make syllables heavy in Yana.
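A sketch of the tier computation behind Figures 6 and 7; the run-based
grouping of mora bits into syllables is an assumption about how the universal
principles would be implemented:

    def weight_tiers(nucleus, coda, codas_project_moras=False):
        # Mora bits: vocalic segments always project moras; in Yana-style
        # languages (Figure 7) coda consonants project moras as well.
        n = len(nucleus)
        mora = [nucleus[i] or (codas_project_moras and coda[i]) for i in range(n)]
        # Universal principles: a syllable bit on the first of each run of
        # moras; a heavy-syllable bit when the run has two or more moras.
        syllable, heavy = [False] * n, [False] * n
        i = 0
        while i < n:
            if mora[i]:
                j = i
                while j < n and mora[j]:
                    j += 1
                syllable[i] = True
                heavy[i] = (j - i) >= 2
                i = j
            else:
                i += 1
        return mora, syllable, heavy

    def yana_stress(syllable, heavy):
        # Stress the leftmost heavy syllable, otherwise the first syllable.
        for i, h in enumerate(heavy):
            if h:
                return i
        return syllable.index(True) if any(syllable) else None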
6 Discussion
For the linguist, it is interesting to see how structuring operations such as clustering and
syllabification can take some of the pressure off derivation, thereby allowing strict limits to
be maintained on derivational depth. But what is the significance of this work for connectionists? Unlike most other attempts to model phonological processes in neural networks,
we demonstrate the influence computational modeling can have on the development of a
linguistic theory. In designing a system for expressing linguistic processes, there must be
some sort of cost metric to determine which operations are computationally feasible and
which are not. A connectionist implementation provides a natural cost metric: size (depth,
fanout, component count) of the required threshold logic circuitry.
It is doubtful that the structure of our model corresponds to that of some cortical language
area, and we reject any simplistic analogy between threshold logic units and neurons. Using
circuit complexity as a cost metric can be independently justified on grounds of simplicity
and theoretical elegance. If one measures cost in some more abstract way, there is a danger
that computationally expensive mechanisms may lurk beneath the grammar's apparent
simplicity. An example is the local rule ordering proposal of Anderson (1974), in which
explicit rule ordering is eliminated by introducing a much more complex mechanism for
determining, on a case-by-case basis, the order in which rules should apply.
If the mental representation of utterances is fundamentally different from the discrete
symbolic form we've assumed,4 we may be using the wrong cost metric for determining
cognitive plausibility. However, we are constrained, like everyone else, to work within the
computational frameworks that are presently available.
4 For example: if phonetic strings turn out to be represented in the brain as chaotic trajectories in
a high dimensional dynamical system, or something equally exotic.
There remains the question of why structuring should be preferred over derivation. First,
since some mutation processes are sensitive to syllabic structure, this information would
have to be computed even if insertions and deletions weren't handled by the syllabifier.
Second, structuring is a highly constrained operation; it merely annotates an existing string
to reflect constituency relationships, whereas derivations can make arbitrary changes to a
string. We therefore assume that derivations have a higher cognitive cost, despite the fact
that they can be computed fairly efficiently in our model by the mapping matrix described
in Touretzky & Wheeler (1991). Finally, adding extra derivational levels increases the
difficulty of phonological rule induction, a topic of current research.
Acknowledgements
This work was sponsored by a grant from the Hughes Aircraft Corporation, by National
Science Foundation grant EET-8716324, and by the Office of Naval Research under contract
number N00014-86-K-0678.
References
Anderson, S. R. (1974) The Organization of Phonology. New York: Academic Press.
George, A. (1989) How not to become confused about linguistics. In A. George (ed.),
Reflections on Chomsky, 90-110. Oxford, UK: Basil Blackwell.
Goldsmith, J. A. (1990) Autosegmental and Metrical Phonology.
Blackwell.
Oxford, UK: Basil
Halle, M., and Clements, G. N. (1983) Problem Book in Phonology: A Workbook for
Introductory Courses in Linguistics and Modern Phonology. Cambridge, MA: The MIT
Press.
Hayes, B. (1981)A Metrical Theory ofStress Rules. Doctoral dissertation, MIT, Cambridge,
MA.
Ito, J. (1989) A prosodic theory of epenthesis. Natural Language and Linguistic Theory,
7(2), 217-259.
Lakoff, G. (1989) Cognitive phonology. Draft of paper presented at the UC-Berkeley
Workshop on Constraints vs. Rules, May 1989.
Touretzky, D. S., and Wheeler, D. W. (1990a) A computational basis for phonology. In D.
S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, 372-379. San
Mateo, CA: Morgan Kaufmann.
Touretzky, D. S., and Wheeler, D. W. (1990b) Two derivations suffice: the role of syllabification in cognitive phonology. In C. Tenny (ed.), The MIT Parsing Volume, 1989-1990,
21-35. MIT Center for Cognitive Science, Parsing Project Working Papers 3.
Touretzky, D. S., and Wheeler, D. W. (1991) Sequence manipulation using parallel mapping
networks. Neural Computation 3(1):98-109.
Wheeler, D. W., and Touretzky, D. S. (1991) From syllables to stress: a cognitively plausible
model. In K. Deaton, M. Noske, and M. Ziolkowski (eds.), CLS 26-II: Papers from the
Parasession on The Syllable in Phonetics and Phonology, 1990. Chicago Linguistic Society.
2,980 | 3,700 | Reading Tea Leaves: How Humans Interpret Topic Models
Jonathan Chang *
Facebook
1601 S California Ave.
Palo Alto, CA 94304
[email protected]
Jordan Boyd-Graber *
Institute for Advanced Computer Studies
University of Maryland
[email protected]
Sean Gerrish, Chong Wang, David M. Blei
Department of Computer Science
Princeton University
{sgerrish,chongw,blei}@cs.princeton.edu
Abstract
Probabilistic topic models are a popular tool for the unsupervised analysis of text,
providing both a predictive model of future text and a latent topic representation
of the corpus. Practitioners typically assume that the latent space is semantically
meaningful. It is used to check models, summarize the corpus, and guide exploration of its contents. However, whether the latent space is interpretable is in need
of quantitative evaluation. In this paper, we present new quantitative methods for
measuring semantic meaning in inferred topics. We back these measures with
large-scale user studies, showing that they capture aspects of the model that are
undetected by previous measures of model quality based on held-out likelihood.
Surprisingly, topic models which perform better on held-out likelihood may infer
less semantically meaningful topics.
1 Introduction
Probabilistic topic models have become popular tools for the unsupervised analysis of large document
collections [1]. These models posit a set of latent topics, multinomial distributions over words, and
assume that each document can be described as a mixture of these topics. With algorithms for fast
approximate posterior inference, we can use topic models to discover both the topics and an assignment
of topics to documents from a collection of documents. (See Figure 1.)
These modeling assumptions are useful in the sense that, empirically, they lead to good models of
documents. They also anecdotally lead to semantically meaningful decompositions of them: topics
tend to place high probability on words that represent concepts, and documents are represented as
expressions of those concepts. Perusing the inferred topics is effective for model verification and
for ensuring that the model is capturing the practitioner's intuitions about the documents. Moreover,
producing a human-interpretable decomposition of the texts can be a goal in itself, as when browsing
or summarizing a large collection of documents.
In this spirit, much of the literature comparing different topic models presents examples of topics and
examples of document-topic assignments to help understand a model's mechanics. Topics also can
help users discover new content via corpus exploration [2]. The presentation of these topics serves,
either explicitly or implicitly, as a qualitative evaluation of the latent space, but there is no explicit
quantitative evaluation of them. Instead, researchers employ a variety of metrics of model fit, such as
perplexity or held-out likelihood. Such measures are useful for evaluating the predictive model, but
do not address the more exploratory goals of topic modeling.
* Work done while at Princeton University.
(a) Topics:
    TOPIC 1: computer, technology, system, service, site, phone, internet, machine
    TOPIC 2: sell, sale, store, product, business, advertising, market, consumer
    TOPIC 3: play, film, movie, theater, production, star, director, stage

(b) Document Assignments to Topics: seven article titles positioned on the
topic simplex: "Red Light, Green Light: A 2-Tone L.E.D. to Simplify Screens";
"The three big Internet portals begin to distinguish among themselves as
shopping malls"; "Stock Trades: A Better Deal For Investors Isn't Simple";
"Forget the Bootleg, Just Download the Movie Legally"; "The Shape of Cinema,
Transformed At the Click of a Mouse"; "Multiplex Heralded As Linchpin To
Growth"; "A Peaceful Crew Puts Muppets Where Its Mouth Is".
Figure 1: The latent space of a topic model consists of topics, which are distributions over words, and a
distribution over these topics for each document. On the left are three topics from a fifty topic LDA model
trained on articles from the New York Times. On the right is a simplex depicting the distribution over topics
associated with seven documents. The line from each document's title shows the document's position in the
topic space.
In this paper, we present a method for measuring the interpretability of a topic model. We devise
two human evaluation tasks to explicitly evaluate both the quality of the topics inferred by the
model and how well the model assigns topics to documents. The first, word intrusion, measures
how semantically "cohesive" the topics inferred by a model are and tests whether topics correspond
to natural groupings for humans. The second, topic intrusion, measures how well a topic model's
decomposition of a document as a mixture of topics agrees with human associations of topics with a
document. We report the results of a large-scale human study of these tasks, varying both modeling
assumptions and number of topics. We show that these tasks capture aspects of topic models not
measured by existing metrics and, surprisingly, models which achieve better predictive perplexity
often have less interpretable latent spaces.
2 Topic models and their evaluations
Topic models posit that each document is expressed as a mixture of topics. These topic proportions
are drawn once per document, and the topics are shared across the corpus. In this paper we will
consider topic models that make different assumptions about the topic proportions. Probabilistic
Latent Semantic Indexing (pLSI) [3] makes no assumptions about the document topic distribution,
treating it as a distinct parameter for each document. Latent Dirichlet allocation (LDA) [4] and the
correlated topic model (CTM) [5] treat each document's topic assignment as a multinomial random
variable drawn from a symmetric Dirichlet and logistic normal prior, respectively.
While the models make different assumptions, inference algorithms for all of these topic models
build the same type of latent space: a collection of topics for the corpus and a collection of topic
proportions for each of its documents. While this common latent space has been explored for over two
decades, its interpretability remains unmeasured.
Pay no attention to the latent space behind the model
Although we focus on probabilistic topic models, the field began in earnest with latent semantic
analysis (LSA) [6]. LSA, the basis of pLSI's probabilistic formulation, uses linear algebra to decompose a corpus into its constituent themes. Because LSA originated in the psychology community,
early evaluations focused on replicating human performance or judgments using LSA: matching
performance on standardized tests, comparing sense distinctions, and matching intuitions about
synonymy (these results are reviewed in [7]). In information retrieval, where LSA is known as latent
semantic indexing (LSI) [8], it is able to match queries to documents, match experts to areas of
expertise, and even generalize across languages given a parallel corpus [9].
The reticence to look under the hood of these models has persisted even as models have moved
from psychology into computer science with the development of pLSI and LDA. Models either use
measures based on held-out likelihood [4, 5] or an external task that is independent of the topic space
such as sentiment detection [10] or information retrieval [11]. This is true even for models engineered
to have semantically coherent topics [12].
For models that use held-out likelihood, Wallach et al. [13] provide a summary of evaluation
techniques. These metrics borrow tools from the language modeling community to measure how well
the information learned from a corpus applies to unseen documents. These metrics generalize easily
and allow for likelihood-based comparisons of different models or selection of model parameters
such as the number of topics. However, this adaptability comes at a cost: these methods only measure
the probability of observations; the internal representation of the models is ignored.
Griffiths et al. [14] is an important exception to the trend of using external tasks or held-out likelihood.
They showed that the number of topics a word appears in correlates with how many distinct senses
it has and reproduced many of the metrics used in the psychological community based on human
performance. However, this is still not a deep analysis of the structure of the latent space, as it does
not examine the structure of the topics themselves.
We emphasize that not measuring the internal representation of topic models is at odds with their
presentation and development. Most topic modeling papers display qualitative assessments of the
inferred topics or simply assert that topics are semantically meaningful, and practitioners use topics
for model checking during the development process. Hall et al. [15], for example, used latent
topics deemed historically relevant to explore themes in the scientific literature. Even in production
environments, topics are presented as themes: Rexa (http://rexa.info), a scholarly publication search
engine, displays the topics associated with documents. This implicit notion that topics have semantic
meaning for users has even motivated work that attempts to automatically label topics [16]. Our
goal is to measure the success of interpreting topic models across number of topics and modeling
assumptions.
3 Using human judgments to examine the topics
Although there appears to be a longstanding assumption that the latent space discovered by topic
models is meaningful and useful, evaluating such assumptions is difficult because discovering topics
is an unsupervised process. There is no gold-standard list of topics to compare against for every
corpus. Thus, evaluating the latent space of topic models requires us to gather exogenous data.
In this section we propose two tasks that create a formal setting where humans can evaluate the two
components of the latent space of a topic model. The first component is the makeup of the topics. We
develop a task to evaluate whether a topic has human-identifiable semantic coherence. This task is
called word intrusion, as subjects must identify a spurious word inserted into a topic. The second
task tests whether the association between a document and a topic makes sense. We call this task
topic intrusion, as the subject must identify a topic that was not associated with the document by the
model.
3.1 Word intrusion
To measure the coherence of these topics, we develop the word intrusion task; this task involves
evaluating the latent space presented in Figure 1(a). In the word intrusion task, the subject is presented
with six randomly ordered words. The task of the user is to find the word which is out of place or
does not belong with the others, i.e., the intruder. Figure 2 shows how this task is presented to users.
When the set of words minus the intruder makes sense together, then the subject should easily
identify the intruder. For example, most people readily identify apple as the intruding word in the
set {dog, cat, horse, apple, pig, cow} because the remaining words, {dog, cat,
horse, pig, cow} make sense together; they are all animals. For the set {car, teacher,
platypus, agile, blue, Zaire}, which lacks such coherence, identifying the intruder is
difficult. People will typically choose an intruder at random, implying a topic with poor coherence.
In order to construct a set to present to the subject, we first select at random a topic from the model.
We then select the five most probable words from that topic. In addition to these words, an intruder
[Screenshots: the Word Intrusion task (left) and the Topic Intrusion task (right).]
Figure 2: Screenshots of our two human tasks. In the word intrusion task (left), subjects are presented with a set
of words and asked to select the word which does not belong with the others. In the topic intrusion task (right),
users are given a document's title and the first few sentences of the document. The users must select which of
the four groups of words does not belong.
word is selected at random from a pool of words with low probability in the current topic (to reduce
the possibility that the intruder comes from the same semantic group) but high probability in some
other topic (to ensure that the intruder is not rejected outright due solely to rarity). All six words are
then shuffled and presented to the subject.
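A sketch of this construction in Python; the pool cutoffs below are
illustrative assumptions, not the exact thresholds used in the study:

    import random
    import numpy as np

    def make_word_intrusion_item(beta, vocab, k, n_top=5):
        # beta is a K x V array of topic-word probabilities, one row per topic.
        order = np.argsort(-beta[k])                 # words ranked within topic k
        top = order[:n_top].tolist()                 # the five most probable words
        low_in_k = order[len(order) // 2:]           # low probability in topic k
        other_best = np.delete(beta, k, axis=0).max(axis=0)
        cutoff = np.quantile(other_best, 0.9)        # "high in some other topic"
        candidates = [int(wd) for wd in low_in_k if other_best[wd] >= cutoff]
        intruder = random.choice(candidates)
        words = [vocab[wd] for wd in top + [intruder]]
        random.shuffle(words)                        # shuffle before presentation
        return words, vocab[intruder]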
3.2 Topic intrusion
The topic intrusion task tests whether a topic model's decomposition of documents into a mixture of
topics agrees with human judgments of the document's content. This allows for evaluation of the
latent space depicted by Figure 1(b). In this task, subjects are shown the title and a snippet from a
document. Along with the document they are presented with four topics (each topic is represented by
the eight highest-probability words within that topic). Three of those topics are the highest probability
topics assigned to that document. The remaining intruder topic is chosen randomly from the other
low-probability topics in the model.
The subject is instructed to choose the topic which does not belong with the document. As before, if
the topic assignment to documents were relevant and intuitive, we would expect that subjects would
select the topic we randomly added as the topic that did not belong. The formulation of this task
provides a natural way to analyze the quality of document-topic assignments found by the topic
models. Each of the three models we fit explicitly assigns topic weights to each document; this task
determines whether humans make the same association.
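The analogous construction for topic-intrusion items might be sketched as:

    import random
    import numpy as np

    def make_topic_intrusion_item(theta_d, n_shown=3):
        # theta_d is the model's topic-proportion vector for one document.
        order = np.argsort(-theta_d)
        shown = order[:n_shown].tolist()                # three highest-probability topics
        intruder = int(random.choice(order[n_shown:]))  # a random low-probability topic
        topics = shown + [intruder]
        random.shuffle(topics)
        return topics, intruder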
Due to time constraints, subjects do not see the entire document; they only see the title and first
few sentences. While this is less information than is available to the algorithm, humans are good
at extrapolating from limited data, and our corpora (encyclopedia and newspaper) are structured to
provide an overview of the article in the first few sentences. The setup of this task is also meaningful
in situations where one might be tempted to use topics for corpus exploration. If topics are used
to find relevant documents, for example, users will likely be provided with similar views of the
documents (e.g. title and abstract, as in Rexa).
For both the word intrusion and topic intrusion tasks, subjects were instructed to focus on the
meanings of words, not their syntactic usage or orthography. We also presented subjects with the
option of viewing the "correct" answer after they submitted their own response, to make the tasks
more engaging. Here the "correct" answer was determined by the model which generated the data,
presented as if it were the response of another user. At the same time, subjects were encouraged to
base their responses on their own opinions, not to try to match other subjects' (the models') selections.
In small experiments, we have found that this extra information did not bias subjects' responses.
4 Experimental results
To prepare data for human subjects to review, we fit three different topic models on two corpora.
In this section, we describe how we prepared the corpora, fit the models, and created the tasks
described in Section 3. We then present the results of these human trials and compare them to metrics
traditionally used to evaluate topic models.
4.1 Models and corpora
In this work we study three topic models: probabilistic latent semantic indexing (pLSI) [3], latent
Dirichlet allocation (LDA) [4], and the correlated topic model (CTM) [5], which are all mixed
membership models [17]. The number of latent topics, K, is a free parameter in each of the
models; here we explore this with K = 50, 100 and 150. The remaining parameters, β_k (the topic
multinomial distribution for topic k) and θ_d (the topic mixture proportions for document d), are
inferred from data. The three models differ in how these latent parameters are inferred.
pLSI In pLSI, the topic mixture proportions θ_d are a parameter for each document. Thus, pLSI
is not a fully generative model, and the number of parameters grows linearly with the number of
documents. We fit pLSI using the EM algorithm [18] but regularize pLSI's estimates of θ_d using
pseudo-count smoothing, α = 1.
LDA LDA is a fully generative model of documents where the mixture proportions θ_d are treated as
a random variable drawn from a Dirichlet prior distribution. Because the direct computation of the
posterior is intractable, we employ variational inference [4] and set the symmetric Dirichlet prior
parameter, α, to 1.
CTM In LDA, the components of θ_d are nearly independent (i.e., θ_d is statistically neutral). CTM
allows for a richer covariance structure between topic proportions by using a logistic normal prior
over the topic mixture proportions θ_d. For each topic, k, a real η_k is drawn from a normal distribution
and exponentiated. This set of K non-negative numbers is then normalized to yield θ_d. Here, we
train the CTM using variational inference [5].
We train each model on two corpora. For each corpus, we apply a part-of-speech tagger [19] and
remove all tokens tagged as proper nouns (this was for the benefit of the human subjects; success in
early experiments required too much encyclopedic knowledge). Stop words [20] and terms occurring
in fewer than five documents are also removed. The two corpora we use are 1) a collection of
8447 articles from the New York Times from the years 1987 to 2007, with a vocabulary of 8269
unique types and around one million tokens, and 2) a sample of 10000 articles from Wikipedia
(http://www.wikipedia.org) with a vocabulary of 15273 unique types and three million tokens.
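The vocabulary filtering step can be made concrete with a short sketch (our own illustration, not the
authors' code; the tokenizer and stop-word list are assumed to be given):

```python
from collections import Counter

def filter_vocabulary(docs, stop_words, min_df=5):
    """docs: list of token lists (proper nouns already removed).
    Keep terms that are not stop words and occur in at least min_df documents."""
    df = Counter(t for doc in docs for t in set(doc))   # document frequencies
    keep = {t for t, n in df.items() if n >= min_df and t not in stop_words}
    return [[t for t in doc if t in keep] for doc in docs]
```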
4.2 Evaluation using conventional objective measures
There are several metrics commonly used to evaluate topic models in the literature [13]. Many of
these metrics are predictive metrics; that is, they capture the model's ability to predict a test set of
unseen documents after having learned its parameters from a training set. In this work, we set aside
20% of the documents in each corpus as a test set and train on the remaining 80% of documents. We
then compute predictive rank and predictive log likelihood.
To ensure consistency of evaluation across different models, we follow Teh et al.'s [21] approximation
of the predictive likelihood p(w_d | D_train) ≈ p(w_d | θ̂_d), where θ̂_d is a point estimate of the
posterior topic proportions for document d. For pLSI, θ̂_d is the MAP estimate; for LDA and CTM,
θ̂_d is the mean of the variational posterior. With this information, we can ask what words the
model believes will be in the document and compare it with the document's actual composition.
Given document w_d, we first estimate θ̂_d and then, for every word w in the vocabulary, compute

    p(w | θ̂_d) = Σ_z p(w | z) p(z | θ̂_d).

Then we compute the average rank for the terms that actually
appeared in document w_d (we follow the convention that lower rank is better).
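As a concrete illustration of this computation (our own sketch, not code from the paper; all names
are ours), the following scores one held-out document under a point estimate of its topic proportions:

```python
import numpy as np

def average_predictive_rank(topics, theta_hat, doc_word_ids):
    """Average predictive rank of the words observed in one held-out document.
    topics: K x V array with topics[z, w] = p(w | z); rows sum to 1.
    theta_hat: length-K point estimate of the document's topic proportions.
    doc_word_ids: vocabulary indices of the tokens in the held-out document."""
    p_w = theta_hat @ topics               # p(w | theta_hat) for every vocab word
    order = np.argsort(-p_w)               # rank 1 = most probable word
    ranks = np.empty(len(p_w), dtype=int)
    ranks[order] = np.arange(1, len(p_w) + 1)
    return ranks[doc_word_ids].mean()      # lower is better
```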
The average word likelihood and average rank across all documents in our test set are shown in
Table 1. These results are consistent with the values reported in the literature [4, 5]; in most cases
CTM performs best, followed by LDA.
4.3 Analyzing human evaluations
The tasks described in Section 3 were offered on Amazon Mechanical Turk (http://www.mturk.com),
which allows workers (our pool of prospective subjects) to perform small jobs for a fee through a
Web interface. No specialized training or knowledge is typically expected of the workers. Amazon
Mechanical Turk has been successfully used in the past to develop gold-standard data for natural
language processing [22] and to label images [23]. For both the word intrusion and topic intrusion
Table 1: Two predictive metrics: predictive log likelihood / predictive rank. Consistent with values
reported in the literature, CTM generally performs the best, followed by LDA, then pLSI. The bold
numbers indicate the best performance in each row.

Corpus           Topics   LDA                CTM                pLSI
New York Times   50       -7.3214 / 784.38   -7.3335 / 788.58   -7.3384 / 796.43
New York Times   100      -7.2761 / 778.24   -7.2647 / 762.16   -7.2834 / 785.05
New York Times   150      -7.2477 / 777.32   -7.2467 / 755.55   -7.2382 / 770.36
Wikipedia        50       -7.5257 / 961.86   -7.5332 / 936.58   -7.5378 / 975.88
Wikipedia        100      -7.4629 / 935.53   -7.4385 / 880.30   -7.4748 / 951.78
Wikipedia        150      -7.4266 / 929.76   -7.3872 / 852.46   -7.4355 / 945.29

[Figure 3 appears here: boxplots of model precision for CTM, LDA, and pLSI, in panels for 50, 100,
and 150 topics, on the New York Times (top) and Wikipedia (bottom) corpora.]
Figure 3: The model precision (Equation 1) for the three models on two corpora. Higher is better. Surprisingly,
although CTM generally achieves a better predictive likelihood than the other models (Table 1), the topics it
infers fare worst when evaluated against human judgments.
tasks, we presented each worker with jobs containing ten of the tasks described in Section 3. Each
job was performed by 8 separate workers, and workers were paid between $0.07 and $0.15 per job.
Word intrusion As described in Section 3.1, the word intrusion task measures how well the inferred
topics match human concepts (using model precision, i.e., how well the intruders detected by the
subjects correspond to the words injected into the topics found by the topic model).
Let w_k^m be the index of the intruding word among the words generated from the k-th topic inferred by
model m. Further, let i_{k,s}^m be the intruder selected by subject s on the set of words generated from the
k-th topic inferred by model m, and let S denote the number of subjects. We define model precision
as the fraction of subjects agreeing with the model,

    MP_k^m = Σ_s 1(i_{k,s}^m = w_k^m) / S.    (1)
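For concreteness, a minimal sketch of this computation (our own illustration; the array layout and
names are ours):

```python
import numpy as np

def model_precision(choices, intruders):
    """Model precision of Equation 1 for every topic of one model.
    choices: S x K array; choices[s, k] is the word index subject s picked
    as the intruder for topic k. intruders: length-K array of the indices
    of the truly injected words. Returns MP_k for each topic."""
    choices = np.asarray(choices)
    intruders = np.asarray(intruders)
    return (choices == intruders[None, :]).mean(axis=0)
```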
Figure 3 shows boxplots of the precision for the three models on the two corpora. In most cases
LDA performs best. Although CTM gives better predictive results on held-out likelihood, it does not
perform as well on human evaluations. This may be because CTM finds correlations between topics,
and these correlations are confounding factors: the intruder for one topic might be selected
from another highly correlated topic. The performance of pLSI degrades with larger numbers of
topics, suggesting that overfitting [4] might affect interpretability as well as predictive power.
Figure 4 (left) shows examples of topics with high and low model precision from the NY Times
data fit with LDA using 50 topics. In the example with high precision, the topic words all coherently
express a painting theme. For the low precision example, "taxis" did not fit in with the other political
words in the topic, as 87.5% of subjects chose "taxis" as the intruder.
The relationship between model precision, MP_k^m, and the model's estimate of the likelihood of
the intruding word, shown in Figure 5 (top row), is surprising. The models with the highest likelihood
did not have the best interpretability; in fact, the trend was the opposite. This suggests that as topics
become more fine-grained in models with a larger number of topics, they are less useful for humans.
[Figure 4 appears here. Left: histogram of model precisions on the New York Times corpus, with
example topics shown for several bins, among them (committee, legislation, proposal, republican,
taxis), (fireplace, garage, house, kitchen, list), (artist, exhibition, gallery, museum, painting), and
(americans, japanese, jewish, states, terrorist). Right: histogram of topic log odds on the Wikipedia
corpus, with example document titles shown for several bins, among them "Book", "John Quincy
Adams", "Lindy Hop", and "Microsoft Word".]
Figure 4: A histogram of the model precisions on the New York Times corpus (left) and topic log odds on
the Wikipedia corpus (right) evaluated for the fifty topic LDA model. On the left, example topics are shown
for several bins; the topics in bins with higher model precision evince a more coherent theme. On the right,
example document titles are shown for several bins; documents with higher topic log odds can be more easily
decomposed as a mixture of topics.
[Figure 5 appears here: scatter plots of model precision (top row) and topic log odds (bottom row)
against predictive log likelihood for the New York Times (left) and Wikipedia (right) corpora; points
are colored by model (CTM, LDA, pLSI) and sized by the number of topics (50, 100, 150), with a
regression line per model.]
Figure 5: A scatter plot of model precision (top row) and topic log odds (bottom row) vs. predictive log
likelihood. Each point is colored by model and sized according to the number of topics used to fit the model.
Each model is accompanied by a regression line. Increasing likelihood does not increase the agreement between
human subjects and the model for either task (as shown by the downward-sloping regression lines).
The downward-sloping trend lines in Figure 5 imply that the models are often trading improved
likelihood for lower interpretability.
The model precision showed a negative correlation (Spearman's ρ = −0.235, averaged across all
models, corpora, and topics) with the number of WordNet senses of the words displayed to the
subjects [24], and a slight positive correlation (ρ = 0.109) with the average pairwise Jiang-Conrath
similarity of the words¹ [25].
Topic intrusion In Section 3.2, we introduced the topic intrusion task to measure how well a topic
model assigns topics to documents. We define the topic log odds as a quantitative measure of the
agreement between the model and human judgments on this task. Let θ̂_d^m denote model m's point
estimate of the topic proportions vector associated with document d (as described in Section 4.2).
Further, let j_{d,s}^m ∈ {1, . . . , K} be the intruding topic selected by subject s for document d on model
m, and let j_{d,*}^m denote the "true" intruder, i.e., the one generated by the model.

¹ Words without entries in WordNet were ignored; polysemy was handled by taking the maximum over all
senses of words. To handle words in the same synset (e.g. "fight" and "battle"), the similarity function was
capped at 10.0.
[Figure 6 appears here: boxplots of topic log odds for CTM, LDA, and pLSI, in panels for 50, 100,
and 150 topics, on the New York Times (top) and Wikipedia (bottom) corpora.]
Figure 6: The topic log odds (Equation 2) for the three models on two corpora. Higher is better. Although CTM
generally achieves a better predictive likelihood than the other models (Table 1), the topics it infers fare worst
when evaluated against human judgments.
We define the topic log odds as the log ratio of the probability mass assigned to the true intruder to
the probability mass assigned to the intruder selected by the subject,

    TLO_d^m = ( Σ_s log θ̂^m_{d, j_{d,*}^m} − log θ̂^m_{d, j_{d,s}^m} ) / S.    (2)

The higher the value of TLO_d^m, the greater the correspondence between the judgments of the model
and the subjects. The upper bound on TLO_d^m is 0. This is achieved when the subjects choose
intruders with a mixture proportion no higher than that of the true intruder.
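A small sketch of this score for a single document (our own illustration; names are ours). The value
is at most 0 because the true intruder carries the least probability mass among the displayed topics:

```python
import numpy as np

def topic_log_odds(theta_hat, subject_choices, true_intruder):
    """Topic log odds of Equation 2 for one document.
    theta_hat: length-K point estimate of the document's topic proportions.
    subject_choices: topic indices j_{d,s} picked by the S subjects.
    true_intruder: index j_{d,*} of the topic the model actually held out."""
    log_theta = np.log(np.asarray(theta_hat))
    picked = np.asarray(subject_choices)
    return float(np.mean(log_theta[true_intruder] - log_theta[picked]))
```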
Figure 6 shows boxplots of the topic log odds for the three models. As with model precision, LDA and
pLSI generally outperform CTM. Again, this trend runs counter to CTM's superior performance on
predictive likelihood. A histogram of the TLO of individual Wikipedia documents is given in Figure 4
(right) for the fifty-topic LDA model. Documents about very specific, unambiguous concepts, such as
"Lindy Hop," have high TLO because it is easy for both humans and the model to assign the document
to a particular topic. When documents express multiple disparate topics, human judgments diverge
from those of the model. At the low end of the scale is the article "Book," which touches on diverse
areas such as history, science, and commerce. It is difficult for LDA to pin down specific themes in
this article that match human perceptions.
Figure 5 (bottom row) shows that, as with model precision, increasing predictive likelihood does
not imply improved topic log odds scores. While the topic log odds are nearly constant across
all numbers of topics for LDA and pLSI, for CTM topic log odds and predictive likelihood are
negatively correlated, yielding the surprising conclusion that higher predictive likelihoods do not lead
to improved model interpretability.
5 Discussion
We presented the first validation of the assumed coherence and relevance of topic models using
human experiments. For three topic models, we demonstrated that traditional metrics do not capture
whether topics are coherent or not. Traditional metrics are, indeed, negatively correlated with the
measures of topic quality developed in this paper. Our measures enable new forms of model selection
and suggest that practitioners developing topic models should thus focus on evaluations that depend
on real-world task performance rather than optimizing likelihood-based measures.
In a more qualitative vein, this work validates the use of topics for corpus exploration and information
retrieval. Humans appreciate the semantic coherence of topics and can associate the same documents
with a topic that a topic model does. An intriguing possibility is the development of models that
explicitly seek to optimize the measures we develop here either by incorporating human judgments
into the model-learning framework or creating a computational proxy that simulates human judgments.
Acknowledgements
David M. Blei is supported by ONR 175-6343, NSF CAREER 0745520 and grants from Google and
Microsoft. We would also like to thank Dan Osherson for his helpful comments.
References
[1] Blei, D., J. Lafferty. Text Mining: Theory and Applications, chap. Topic Models. Taylor and Francis, 2009.
[2] Mimno, D., A. McCallum. Organizing the OCA: learning faceted subjects from a library of digital books. In JCDL, 2007.
[3] Hofmann, T. Probabilistic latent semantic analysis. In UAI, 1999.
[4] Blei, D., A. Ng, M. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[5] Blei, D. M., J. D. Lafferty. Correlated topic models. In NIPS, 2005.
[6] Landauer, T., S. Dumais. Solutions to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 2(104):211–240, 1997.
[7] Landauer, T. K. On the computational basis of learning and cognition: Arguments from LSA. The Psychology of Learning and Motivation, 41:43–84, 2002.
[8] Deerwester, S., S. Dumais, T. Landauer, et al. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391–407, 1990.
[9] Berry, M. W., S. T. Dumais, T. A. Letsche. Computational methods for intelligent information access. In Supercomputing, 1995.
[10] Titov, I., R. McDonald. A joint model of text and aspect ratings for sentiment summarization. In HLT, 2008.
[11] Wei, X., B. Croft. LDA-based document models for ad-hoc retrieval. In SIGIR, 2006.
[12] Boyd-Graber, J. L., D. M. Blei, X. Zhu. Probabilistic walks in semantic hierarchies as a topic model for WSD. In HLT, 2007.
[13] Wallach, H. M., I. Murray, R. Salakhutdinov, et al. Evaluation methods for topic models. In ICML, 2009.
[14] Griffiths, T., M. Steyvers. Probabilistic topic models. In T. Landauer, D. McNamara, S. Dennis, W. Kintsch, eds., Latent Semantic Analysis: A Road to Meaning. Laurence Erlbaum, 2006.
[15] Hall, D., D. Jurafsky, C. D. Manning. Studying the history of ideas using topic models. In EMNLP, 2008.
[16] Mei, Q., X. Shen, C. Zhai. Automatic labeling of multinomial topic models. In KDD, 2007.
[17] Erosheva, E., S. Fienberg, J. Lafferty. Mixed-membership models of scientific publications. PNAS, 101(Suppl 1):5220–5227, 2004.
[18] Dempster, A., N. Laird, D. Rubin, et al. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[19] Schmid, H. Probabilistic part-of-speech tagging using decision trees. In Proceedings of the International Conference on New Methods in Language Processing, 1994.
[20] Loper, E., S. Bird. NLTK: the natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics, 2002.
[21] Teh, Y. W., K. Kurihara, M. Welling. Collapsed variational inference for HDP. In NIPS, 2008.
[22] Snow, R., B. O'Connor, D. Jurafsky, et al. Cheap and fast, but is it good? Evaluating non-expert annotations for natural language tasks. In EMNLP, 2008.
[23] Deng, J., W. Dong, R. Socher, et al. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[24] Miller, G. A. Nouns in WordNet: A lexical inheritance system. International Journal of Lexicography, 3(4):245–264, 1990.
[25] Jiang, J. J., D. W. Conrath. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the International Conference on Research in Computational Linguistics, 1997.
?
J?org Lucke
Frankfurt Institute for Advanced Studies
Goethe-University Frankfurt, Germany
[email protected]
Richard Turner
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR, UK
[email protected]
Maneesh Sahani
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR, UK
[email protected]
Marc Henniges
Frankfurt Institute for Advanced Studies
Goethe-University Frankfurt, Germany
[email protected]
Abstract
We study unsupervised learning in a probabilistic generative model for occlusion.
The model uses two types of latent variables: one indicates which objects are
present in the image, and the other how they are ordered in depth. This depth
order then determines how the positions and appearances of the objects present,
specified in the model parameters, combine to form the image. We show that the
object parameters can be learnt from an unlabelled set of images in which objects
occlude one another. Exact maximum-likelihood learning is intractable. However,
we show that tractable approximations to Expectation Maximization (EM) can
be found if the training images each contain only a small number of objects on
average. In numerical experiments it is shown that these approximations recover
the correct set of object parameters. Experiments on a novel version of the bars
test using colored bars, and experiments on more realistic data, show that the
algorithm performs well in extracting the generating causes. Experiments based
on the standard bars benchmark test for object learning show that the algorithm
performs well in comparison to other recent component extraction approaches.
The model and the learning algorithm thus connect research on occlusion with the
research field of multiple-causes component extraction methods.
1
Introduction
A long-standing goal of unsupervised learning on images is to be able to learn the shape and form of
objects from unlabelled scenes. Individual images usually contain only a small subset of all possible
objects. This observation has motivated the construction of algorithms, such as sparse coding (SC;
[1]) or non-negative matrix factorization (NMF; [2]) and its sparse variants, based on learning in
latent-variable models, where each possible object, or part of an object, is associated with a variable
controlling its presence or absence in a given image. Any individual "hidden cause" is rarely active,
corresponding to the small number of objects present in any one image. Despite this plausible
motivation, these algorithms make severe approximations. Perhaps the most crucial is that in the
underlying latent variable models, objects or parts thereof, combine linearly to form the image. In
real images the combination of individual objects depends on their relative distance from the camera
or eye. If two objects occupy the same region in planar space, the nearer one occludes the other, i.e.,
the hidden causes non-linearly compete to determine the pixel values in the region of overlap.
In this paper we extend multiple-causes models such as SC or NMF to handle occlusion. The idea
of using many hidden ?cause? variables to control the presence or absence of objects is retained,
but these variables are augmented by another set of latent variables which determine the relative
1
depth of the objects, much as in the z-buffer employed by computer graphics. In turn, this enables
the simplistic linear combination rule to be replaced by one in which nearby objects occlude those
that are more distant. One of the consequences of moving to a richer, more complex model is that
inference and learning become correspondingly harder. One of the main contributions of this paper
is to show how to overcome these difficulties.
The problem of occlusion has been addressed in different contexts [3, 4, 5, 6]. Prominent probabilistic approaches [3, 4] assign pixels in multiple images taken from the same scene to a fixed number
of image layers. The approach is most frequently applied to automatically remove foreground and
background objects. Those models are in many aspects more general than the approach discussed
here. However, they model, in contrast to our approach, data in which objects maintain a fixed
position in depth relative to the other objects.
2
A Generative Model for Occlusion
The occlusion model contains three important elements. The first is a set of variables which controls
the presence or absence of objects in a particular image (this part will be analogous, e.g., to NMF).
The second is a variable which controls the relative depths of the objects that are present. The third
is the combination rule which describes how closer active objects occlude more distant ones.
To model the presence or absence of an object we use H binary hidden variables s_1, . . . , s_H. We
assume that the presence of one object is independent of the presence of the others and assume, for
simplicity, equal probabilities π for objects to be present:

    p(s | π) = Π_{h=1}^H Bernoulli(s_h; π) = Π_{h=1}^H π^{s_h} (1 − π)^{1−s_h}.    (1)
which of two overlapping objects occludes the other. The depth-ordering is captured in the model
by randomly and uniformly choosing a member ?
? of the
P set G(|~s|) which contains all permutation
functions ?
? : {1, . . . , |~s|} ? {1, . . . , |~s|}, with |~s| = h sh . More formally, the probability of ?
?
given ~s is defined by:
p(?
? | ~s) = |~s1|! with ?
? ? G(|~s|) .
(2)
Note that we could have defined the order in depth independently of s, by choosing from G(H) with
p(σ̂) = 1/H!. But then, because the depth of absent objects (s_h = 0) is irrelevant, no more than |s|!
distinct choices of σ̂ would have resulted in different images.
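As a sketch of this prior (our own illustration, assuming numpy; names are ours), sampling the
presence vector of Equation (1) and the depth permutation of Equation (2):

```python
import numpy as np

def sample_latents(H, pi, rng):
    """Draw s from Bernoulli(pi)^H and, for the |s| active causes,
    a uniformly random depth permutation sigma over {1, ..., |s|}."""
    s = (rng.random(H) < pi).astype(int)
    active = np.flatnonzero(s)                 # causes with s_h = 1
    sigma = rng.permutation(active.size) + 1   # sigma(h) for active h, in 1..|s|
    return s, active, sigma

rng = np.random.default_rng(0)
s, active, sigma = sample_latents(H=8, pi=2 / 8, rng=rng)
```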
Figure 1: A Illustration of how two object masks and features combine to generate an image
(generation without noise). B Graphical model of the generation process with hidden permutation
variable σ̂.
The final stage of the generative model describes how to produce the image given a selection of
active causes and an ordering in relative depth of these causes. One approach would be to choose the
closest object and to set the image equal to the feature vector associated with this object. However,
this would mean that every image generated from the model would comprise just one object: the
closest. What is missing from this description is a notion of the extent of an object and the fact
that it might only contribute to a local selection of pixels in an image. For this reason, our model
contains two sets of parameters. One set of parameters, W ∈ R^{H×D}, describes what contribution
an object makes to each pixel (D is the number of pixels). The vector (W_h1, . . . , W_hD) is therefore
described as the mask of object h. If an object is highly localized, this vector will contain many zero
elements. The other set of parameters, T ∈ R^{H×C}, represents the features of the objects. A feature
vector T_h ∈ R^C describing object h might, for instance, be the object's rgb-color (C = 3 in that
case). Fig. 1A illustrates the combination of masks and features, and Fig. 1B shows the graphical
model of the generation process.
Let us formalize how an image is generated given the parameters Θ = (W, T) and given the hidden
variables S = (s, σ̂). Before we consider observation noise, we define the generation of a noiseless
image T_d(S, Θ) to be given by:

    T_d(S, Θ) = W_{h_0 d} T_{h_0}   where h_0 = argmax_h { τ(S, h) W_hd },

    τ(S, h) = 0                              if s_h = 0
    τ(S, h) = 1                              if s_h = 1 and |s| = 1         (3)
    τ(S, h) = (σ̂(h) − 1)/(|s| − 1) + 1       otherwise
In (3) the order in depth is represented by the mapping τ, whose specific form will facilitate later
algebraic steps. To illustrate the combination rule (3) and the mapping τ, consider Fig. 1A and
Fig. 2. Let us assume that the mask values W_hd are zero or one (although we will later also allow
for continuous values). As depicted in Fig. 1A, an object h with s_h = 1 occupies all image pixels
with W_hd = 1 and does not occupy pixels with W_hd = 0. For all pixels with W_hd = 1, the vector
T_h sets the pixels' values to a specific feature, e.g., to a specific color. The function τ maps all
causes h with s_h = 0 to zero, while all other causes are mapped to values within the interval [1, 2]
(see Fig. 2). τ assigns a proximity value τ(S, h) > 0 to each present object.
Figure 2: Visualization of the mapping τ. A and B show the two possible mappings for two causes,
and C shows one possible mapping for four causes.
For a given pixel d, the combination rule (3) simply states that, of all objects with W_hd = 1, the
most proximal is used to set the pixel property. Given the latent variables and the noiseless image
T_d(S, Θ), we take the observed variables Y = (y_1, . . . , y_D) to be drawn independently from a
Gaussian distribution (which is the usual choice for component extraction systems):

    p(Y | S, Θ) = Π_{d=1}^D p(y_d | T_d(S, Θ)),    p(y | t) = N(y; t, σ² 1).    (4)
Equations (1) to (4) represent a generative model for occlusion.
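To make the full generative process concrete, here is a minimal sampler of Equations (1) to (4) (our
own sketch, assuming numpy; names are ours, and the renderer uses the exact argmax rule of
Equation (3) rather than the smooth approximation introduced below):

```python
import numpy as np

def generate_image(W, T, pi, sigma_noise, rng):
    """Sample one image Y from the occlusion model.
    W: H x D masks in [0, 1]; T: H x C features. Returns a D x C array."""
    H, D = W.shape
    img = np.zeros((D, T.shape[1]))            # pixels occupied by no cause stay 0
    s = rng.random(H) < pi                     # Eq. (1): which causes are present
    active = np.flatnonzero(s)
    if active.size > 0:
        ranks = rng.permutation(active.size)   # Eq. (2): sigma(h) - 1, uniform
        if active.size == 1:
            tau = np.ones(1)                   # tau = 1 for a single cause
        else:
            tau = ranks / (active.size - 1) + 1.0   # tau in [1, 2], Eq. (3)
        scores = tau[:, None] * W[active]      # tau(S, h) * W_hd
        h0 = active[np.argmax(scores, axis=0)] # most proximal cause per pixel
        img = W[h0, np.arange(D)][:, None] * T[h0]  # W_{h0 d} * T_{h0}
    return img + sigma_noise * rng.standard_normal(img.shape)  # Eq. (4)
```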
3 Maximum Likelihood
One approach to learning the parameters Θ = (W, T) of this model from data Y = {Y^(n)}_{n=1,...,N}
is to use Maximum Likelihood learning, that is,

    Θ* = argmax_Θ {L(Θ)}   with   L(Θ) = log p(Y^(1), . . . , Y^(N) | Θ).    (5)
However, as there is usually a large number of objects that can potentially be present in the training images, and as the likelihood involves summing over all combinations of objects and associated orderings, the computation of (5) is typically intractable. Moreover, even if it were tractably
computable, optimization of the likelihood is made problematic by an analytical intractability arising from the fact that the occlusion non-linearity is non-differentiable. The following section describes how to side-step the computational intractability within the standard Expectation Maximisation (EM) formalism for maximum likelihood learning, using a truncated expansion of sums for
the sufficient statistics. Furthermore, as the M-Step of EM requires gradients to be computed, the
section also describes how to side-step the analytical intractability by an approximate version of the
model?s non-linearity.
To find the parameters Θ* at least approximately, we use the variational EM formalism (e.g., [7]) and
introduce the free-energy function F(Θ, q), which is a function of Θ and an unknown distribution
q(S^(1), . . . , S^(N)) over the hidden variables. F(Θ, q) is a lower bound of the likelihood L(Θ).
Approximations introduced later on can be interpreted as choosing specific functions q, although
(for brevity) we will not make this relation explicit. In the model described above, in which each
image is drawn independently and identically, q(S^(1), . . . , S^(N)) = Π_n q_n(S^(n); Θ′), which is taken
to be parameterized by Θ′. The free-energy can thus be written as:

    F(Θ, q) = Σ_{n=1}^N Σ_S q_n(S; Θ′) [ log p(Y^(n) | S, Θ) + log p(S | Θ) ] + H(q),    (6)

where the function H(q) = −Σ_n Σ_S q_n(S; Θ′) log q_n(S; Θ′) (the Shannon entropy) is independent of Θ. Note that Σ_S in (6) sums over all possible states of S = (s, σ̂), i.e., over all binary
vectors and all associated permutations in depth. This is the source of the computational intractability. In the EM scheme, F(Θ, q) is maximized alternately with respect to the distribution q in the
E-step (while the parameters Θ are kept fixed) and with respect to the parameters Θ in the M-step
(while q is kept fixed). It can be shown that an EM iteration increases the likelihood or leaves it
unchanged. In practical applications, EM is found to increase the likelihood to likelihood maxima,
although these can be local.
M-Step. The M-Step of EM, in which the free-energy F is optimized with respect to the parameters, is canonically derived by taking derivatives of F with respect to the parameters. Unfortunately,
this standard procedure is not directly applicable because of the non-linear nature of occlusion as
reflected by the combination rule (3). However, it is possible to approximate the combination rule
by the differentiable function

    T^ρ_d(S, Θ) := [ Σ_{h=1}^H (τ(S, h) W_hd)^ρ W_hd T_h ] / [ Σ_{h=1}^H (τ(S, h) W_hd)^ρ ].    (7)
differentiable w.r.t. the parameters Whd and Thc (c ? {1, . . . , C}) and it applies for large ?:
? ~?
?Wid T d (S, ?)
? ~?
?Tic T d (S, ?)
? A?id (S, W ) T~i ,
? A?id (S, W ) Wid ~ec ,
with
A?id (S, W ) :=
(? (S,i) Wid )?
PH
?,
h=1 (? (S,h) Whd )
Aid (S, W ) := lim A?id (S, W ) ,
(8)
???
where ~ec is a unit vector in feature space with entry 1 at position c and zero elsewhere (the approximations on the left-hand-side above become equalities for ? ? ?). We can now compute
approximations to the derivatives of F(?, q). For large values of ? the following holds:
T
N X
X
?
? ~?
?
(n) ~ ?
~
F(?, q) ?
qn (S , ? )
T d (S, ?)
f ~y , T d (S, ?) ,
(9)
?Wid
?Wid
n=1
S
T
N X
D
X
X
?
? ~?
?
(n) ~ ?
~
(S,
F(?,
q)
?
q
(S
,
?
)
T
?)
f
~
y
,
T
(S,
?)
, (10)
d
n
d
?Tic
?Tic
n=1
S
d=1
?
where f~(~y (n) , ~t ) :=
log p(~y (n) | ~t ) = ?? ?2 (~y (n) ? ~t ).
?~t
Setting the derivatives (9) and (10) to zero and inserting equations (8) yields the following necessary
conditions for a maximum of the free energy, which hold in the limit ρ → ∞:

    W_id = [ Σ_n ⟨A_id(S, W)⟩_{q_n} T_i^T y^(n)_d ] / [ Σ_n ⟨A_id(S, W)⟩_{q_n} T_i^T T_i ],

    T_i = [ Σ_n Σ_d ⟨A_id(S, W)⟩_{q_n} W_id y^(n)_d ] / [ Σ_n Σ_d ⟨A_id(S, W)⟩_{q_n} (W_id)² ].    (11)
Note that equations (11) are not straightforward update rules. However, we can use them in the
fixed-point sense and approximate the parameters which appear on the right-hand side of the
equations using the values from the previous iteration.
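A sketch of one such fixed-point sweep (our own illustration; the epsilon guards and the array
layout are ours, and the sufficient statistics ⟨A_id⟩ are assumed to come from the E-step below):

```python
import numpy as np

def m_step(A_exp, Y, W, T):
    """Fixed-point updates of Equation 11.
    A_exp: N x H x D sufficient statistics <A_id(S, W)>_{q_n};
    Y: N x D x C images; W: H x D masks, T: H x C features (old values)."""
    eps = 1e-12                                      # numerical safeguard (ours)
    proj = np.einsum('hc,ndc->nhd', T, Y)            # T_i^T y_d^(n)
    W_new = (A_exp * proj).sum(0) / np.maximum(
        (A_exp * (T ** 2).sum(1)[None, :, None]).sum(0), eps)
    num = np.einsum('nhd,hd,ndc->hc', A_exp, W, Y)   # sum_{n,d} <A> W_id y_d^(n)
    den = np.maximum((A_exp * (W ** 2)[None]).sum(axis=(0, 2)), eps)
    return W_new, num / den[:, None]
```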
Equations (11), together with the exact posterior q_n(S; Θ′) = p(S | y^(n), Θ′), represent a maximum-likelihood based learning algorithm for the generative model (1) to (4). Note, however, that due to
the multiplication of the weights and the mask, W_hd T_h in (3), there is a degeneracy in the parameters:
given h, the combination T_d remains unchanged under the operation T_h → α T_h and W_hd → W_hd / α
with α ≠ 0. To remove the degeneracy we set after each iteration:

    W^new_hd = W_hd / W̄_h,   T^new_h = W̄_h T_h,   where W̄_h = (1/|I|) Σ_{d∈I} W_hd with I = { d | W_hd > 0.5 }.    (12)

For reasons that will briefly be discussed later, the use of W̄_h instead of, e.g., W^max_h = max_d {W_hd}
is advantageous for some data, although for many other types of data W^max_h works equally well.
E-Step. The crucial entities that have to be computed for the update equations (11) are the sufficient
statistics ⟨A_id(S, W)⟩_{q_n}, i.e., the expectation of the function A_id(S, W) in (8) over the distribution
of hidden states S. In order to derive a computationally tractable learning algorithm, the expectation
⟨A_id(S, W)⟩_{q_n} is re-written and approximated as follows:

    ⟨A_id(S, W)⟩_{q_n} = [ Σ_S p(S, Y^(n) | Θ′) A_id(S, W) ] / [ Σ_{S′} p(S′, Y^(n) | Θ′) ]
                       ≈ [ Σ_{S : |s| ≤ γ} p(S, Y^(n) | Θ′) A_id(S, W) ] / [ Σ_{S′ : |s′| ≤ γ} p(S′, Y^(n) | Θ′) ].    (13)
That is, in order to approximate (13), the problematic sums in the numerator and denominator have
been truncated: we only sum over states s with at most γ non-zero entries. Approximation (13)
replaces the intractable exact E-step by one whose computational cost scales only polynomially with
H (roughly cubically for γ = 3). As for other approximate EM approaches, there is no guarantee
that this approximation will always result in an increase of the data likelihood. For data points that
were generated by a small number of causes on average we can, however, expect the approximation
to match an exact E-step with increasing accuracy the closer we get to the optimum. For reasons
highlighted earlier, such data will be typical in image modelling. A truncation approach similar to
(13) has successfully been used in the context of the maximal causes generative model in [8]. Also
in the case of occlusion we will later see that, in numerical experiments using approximation (13),
the true generating causes are indeed recovered.
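The truncated E-step can be implemented by explicitly enumerating all states S = (s, σ̂) with at
most γ active causes. The following sketch (ours; it reuses the smooth rule (7), and all names are
our own) returns the normalized weights p(S, y | Θ′) over the enumerated states, from which
⟨A_id(S, W)⟩_{q_n} is obtained as a weighted average of A_id(S, W):

```python
import numpy as np
from math import lgamma, log
from itertools import combinations, permutations

def truncated_posterior(y, W, T, pi, sigma, gamma, rho):
    """Enumerate states with |s| <= gamma and weight them by p(S, y | Theta).
    y: D x C image; W: H x D masks; T: H x C features."""
    H, D = W.shape
    states, logw = [], []
    for k in range(gamma + 1):
        for subset in combinations(range(H), k):
            for order in permutations(range(k)):
                tau = np.zeros(H)
                if k == 1:
                    tau[list(subset)] = 1.0
                elif k > 1:
                    tau[list(subset)] = np.asarray(order, float) / (k - 1) + 1.0
                scores = (tau[:, None] * W) ** rho
                denom = np.maximum(scores.sum(axis=0), 1e-12)
                t = ((scores / denom) * W).T @ T          # Eq. (7), D x C
                log_lik = -np.sum((y - t) ** 2) / (2 * sigma ** 2)
                log_prior = k * log(pi) + (H - k) * log(1 - pi) - lgamma(k + 1)
                states.append((subset, order))
                logw.append(log_lik + log_prior)
    logw = np.asarray(logw)
    w = np.exp(logw - logw.max())                         # constants cancel
    return states, w / w.sum()
```

The Gaussian normalization constant of the likelihood is omitted since it cancels in the normalized
weights.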
4 Experiments
In order to evaluate the algorithm, it has been applied to artificial data, where its performance can
be compared to ground truth, and to more realistic visual data. In all the experiments we use image
pixels as input variables y_d. The entries of the observed variables y_d are set by the pixels' rgb-color
vector, y_d ∈ [0, 1]³. In all trials of all experiments, the initial values of the mask parameters W_hd and
the feature parameters T_hc were independently and uniformly drawn from the interval [0, 1].
Learning and annealing. The free-energy landscape traversed by EM algorithms is often multimodal, so EM algorithms can converge to local optima. However, this problem can be
alleviated using deterministic annealing as described in [9, 10]. For the model under consideration
here, annealing amounts to the substitutions π → π^β, (1 − π) → (1 − π)^β, and (1/σ²) → (β/σ²),
with β = 1/T̂, in the E-step equations. During learning, the "temperature" parameter T̂ is decreased
from an initial value T̂_init to 1. To update the parameters W and T we applied the M-step equations
(11). For the sufficient statistics ⟨A_id(S, W)⟩_{q_n} we used approximation (13) with A^ρ_id(S, W) in
(8) instead of A_id(S, W), and with γ = 3 if not stated otherwise. The parameter ρ was increased
during learning via ρ = 1/(1 − β) (with a maximum of ρ = 20 to avoid numerical instabilities). In all
experiments we used 100 EM iterations and decreased T̂ linearly, except for 10 initial iterations at
T̂ = T̂_init and 20 final iterations at T̂ = 1. In addition to annealing, a small amount of independent
and identically distributed Gaussian noise (standard deviation 0.01) was added to the masks and the
features, W_hd and T_hc, to help escape local optima. This parameter noise was linearly decreased to
zero during the last 20 iterations of each trial.
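The iteration counts and the ρ rule above fix most of the schedule; the following sketch (ours) makes
it explicit, with the exact discretization of the linear ramp being our own choice:

```python
import numpy as np

def annealing_schedule(n_iter=100, T_init=8.0, hold_start=10, hold_end=20):
    """Temperature, beta, and rho per EM iteration: hold T_init, decrease
    T linearly to 1, then hold T = 1; rho = min(1 / (1 - beta), 20)."""
    T = np.ones(n_iter)
    T[:hold_start] = T_init
    ramp = n_iter - hold_start - hold_end
    T[hold_start:hold_start + ramp] = np.linspace(T_init, 1.0, ramp)
    beta = 1.0 / T
    rho = np.minimum(1.0 / np.maximum(1.0 - beta, 1e-12), 20.0)  # capped at 20
    return T, beta, rho
```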
The colored bars test. The component extraction capabilities of the model were tested using the
colored bars test. This test is a generalization of the classical bars test [11], which has become a
popular benchmark task for non-linear component extraction. In the standard bars test with H = 8
bars, the input data are 16-dimensional vectors representing a 4 × 4 grid of pixels, i.e., D = 16.
The single bars appear at the 4 vertical and 4 horizontal positions. For the colored bars test, the bars
have colors T^gen_h which are independently and uniformly drawn from the rgb-color cube [0, 1]³.
Once chosen, they remain fixed for the generation of the data set. For each image, a bar appears
independently with a probability π = 2/8, which results in two bars per image on average (the standard
value in the literature). For the bars active in an image, a ranking in depth is randomly and uniformly
chosen from the permutation group. The color of each pixel is determined by the least distant bar,
and is black if the pixel is occupied by no bar. N = 500 images were generated for learning, and
Fig. 3A shows a random selection of 13 examples.
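A generator for this data set (a sketch under the settings just stated; painting far-to-near reproduces
the occlusion rule, and all names are ours):

```python
import numpy as np

def colored_bars(N=500, grid=4, pi=2 / 8, rng=None):
    """Noiseless colored bars data: H = 2*grid bars (horizontal + vertical)
    with fixed random rgb colors; unoccupied pixels are black."""
    rng = np.random.default_rng() if rng is None else rng
    H, D = 2 * grid, grid * grid
    W = np.zeros((H, grid, grid))
    for i in range(grid):
        W[i, i, :] = 1.0              # horizontal bars
        W[grid + i, :, i] = 1.0       # vertical bars
    W = W.reshape(H, D)
    colors = rng.random((H, 3))       # T_h^gen, uniform in [0, 1]^3
    Y = np.zeros((N, D, 3))
    for n in range(N):
        active = np.flatnonzero(rng.random(H) < pi)
        for h in active[rng.permutation(active.size)]:   # paint far to near
            Y[n, W[h] > 0] = colors[h]                   # nearer bars overwrite
    return Y, W, colors
```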
Figure 3: Application to the colored bars test. A Selection of 13 of the N = 500 data points used
for learning. B Changes of the parameters W and T for the algorithm with H = 8 hidden units;
each row shows W and T at EM iterations 1, 20, 40, and 100. C Feature vectors at the iterations in B
displayed as points in color space (for visualization we used the 2-D hue and saturation plane of the
HSV color space). Crosses are the real generating values, black circles the current model values T_h,
and grey circles those of the previous iterations.
The learning algorithms were applied to the colored bars test with H = 8 hidden units and D = 16
input units. The observation noise was set to σ = 0.05 and learning was initialized with T̂_init = D/2.
The inferred approximate maximum-likelihood parameters converged to values close to the generating parameters in 44 of 50 trials. In
6 trials the algorithm represented 7 of the 8 causes. Its success rate, or reliability, is thus 88%.
Fig. 3B shows the time-course of a typical trial during learning. As can be observed, the mask values
W and the feature values T converged to values close to the generating ones. For data with added
Gaussian pixel noise (σ^gen = σ = 0.05), the algorithm converged to values representing all causes in
48 of 50 trials (96% reliability). A higher average number of causes per input reduced reliability.
A maximum of three causes (on average) were used for the noiseless bars test; this is considered
a difficult task in the standard bars test. With otherwise the same parameters, our algorithm had a
reliability of 26% (50 trials) on this data. Performance seemed limited by the difficulty of the data
rather than by the limitations of the used approximation: we could not increase the reliability of the
algorithm when we increased the accuracy of (13) by setting γ = 4 (instead of γ = 3). Reliability
seemed much more affected by changes to the parameters for annealing and parameter noise, i.e., by
changes to those parameters that affect the additional mechanisms to avoid local optima.
The standard bars test. Instead of choosing the bar colors randomly as above, they can also be set
to specific values. In particular, if all bar colors are white, T = (1, 1, 1)^T, the classical version of the
bars test is recovered. Note that the learning algorithm can be applied to this standard form without
modification. When the generating parameters were as above (eight bars, probability 2/8 for a bar to be
present, N = 500), all bars were successfully extracted in 42 of 50 trials (84% reliability). For
a bars test with ten bars, D = 5 × 5, a probability of 2/10 for each bar to be present, and N = 500
data points, the algorithm with model parameters as above extracted all bars in 43 of 50 trials (86%
reliability; mean number of extracted bars 9.5). Reliability for this test increased when we increased
the number of training images: for N = 1000 instead of 500, reliability increased to 94% (50 trials;
mean number of extracted bars 9.9). The bars test with ten bars is probably the one most frequently
found in the literature. Linear and non-linear component extraction approaches are compared, e.g.,
in [12, 8], and usually achieve lower reliability values than the presented algorithm. Classical ICA
and PCA algorithms investigated in [13] never succeeded in extracting all bars. Relatively recent
approaches can achieve reliability values higher than 90%, but often only by introducing additional
constraints (compare R-MCA [8], or constrained forms of NMF [14]).
More realistic input. One possible criticism of the bars tests above is that the bars are relatively
simple objects. The purpose of this section is, therefore, to demonstrate the performance of the
algorithm when images contain more complicated objects. Six objects were taken from the COIL-100
dataset [15] with relatively uniform color distribution (objects 2, 4, 47, 78, 94, 97; all with zero-degree
rotation). The images were scaled down to 15 × 15 pixels and randomly placed on a black
background image of 25 × 25 pixels. Downscaling introduced blurred object edges, and to remove
this effect dark pixels were set to black.
Figure 4: Application to images of cluttered objects. A Selection of 14 of the N = 500 data points.
B Parameter changes during learning, displayed as in Fig. 3 (iterations 1, 10, 25, 50, 100). C Change
of the feature vectors, displayed as in Fig. 3.
The training images were generated with each object being present with probability 2/6 and at a
random depth. N = 500 such images were generated; example images¹ are given in Fig. 4A. We
applied the learning algorithm with H = 6, an initial temperature for annealing of T̂_init = D/4,
and parameters as above otherwise. Fig. 4B shows the development of the parameter values during
learning. As can be observed, the mask values converged to represent the different objects, and the
feature vectors converged to values representing the mean object color. Note that the model is not
matched to the dataset, as each object has a fixed distribution of color values, which is a poor match
to a Gaussian distribution with a constant color mean. The model reacted by assigning part of the
real color distribution to the mask values, which are responsible for the 3-dimensional appearance
of the masks (see Fig. 4B). Note that the normalization (12) was motivated by this observation,
because it can better tolerate high mask-value variances. We ran 50 trials using different sets of
N = 500 images generated as above. In 42 of the trials (84%) the algorithm converged to values
representing all six objects together with appropriate values for their mean colors. In seven trials
the algorithm converged to a local optimum (the average number of extracted objects was 5.8). In
50 trials with 8 objects (we added objects 36 and 77 of the COIL-100 database), an algorithm with
the same parameters but H = 8 extracted all objects in 40 of the trials (reliability 80%, average
number of extracted objects 7.7).
5 Discussion
We have studied learning in the generative model of occlusion (1) to (4). Parameters can be optimized given a collection of N images in which different sets of causes are present at different
positions in depth. As briefly discussed earlier, the problem of occlusion has been addressed by
other systems before. E.g., the approach in [3, 4] uses a fixed number of layers, so-called sprites, to
model an order in depth. The approach assigns, to each pixel, probabilities that it has been generated
by a specific sprite. Typically, the algorithms are applied to data which consist of images that have a
small number of foreground objects (usually one or two) on a static or slowly changing background.
Typical applications of the approach are figure-ground separation and the automatic removal of the
background or foreground objects. The approach using sprites is in many aspects more general than
the model presented in this paper. It includes, for instance, variable estimation for illumination and,
importantly, addresses the problem of invariance by modeling object transformations. Regarding the
modelling of object arrangements, our approach is, however, more general. The additional hidden
variable used for object arrangements allows our model to be applied to images of cluttered scenes.
The approach in [3, 4] assumes a fixed object arrangement, i.e., it assumes that each object has the
same depth position in all training images. Our approach therefore addresses an aspect of visual
data that is complementary to the aspects modeled in [3, 4]. Models that combine the advantages of
both approaches thus promise interesting advancements, e.g., towards systems that can learn from
video data in which objects change their positions in depth.

¹ Note that this appears much easier for a human observer because he/she can also make use of object
knowledge, e.g., of the gestalt law of proximity. The difficulty of the data would become obvious if all pixels in
each image of the data set were permuted by a fixed permutation map.
Another interesting aspect of the model presented in this work is its close connection to component
extraction methods. Algorithms such as SC, NMF or maximal causes analysis (MCA; [8]) use superpositions of elementary components to explain the data. ICA and SC have prominently been applied
to explain neural response properties, and NMF is a popular approach to learn components for visual
object recognition [e.g. 14, 16]. Our model follows these multiple-causes methods by assuming the
data to consist of independently generated components. It distinguishes itself, however, by the way
in which these components are assumed to combine. ICA, SC, NMF and many other models assume
linear superposition, MCA uses a max-function instead of the sum, and other systems use noisy-or
combinations. In the class of multiple-causes approaches our model is the first to generalize the
combination rule to one that models occlusion explicitly. This required an additional variable for
depth and the introduction of two sets of parameters: masks and features. Note that in the context of
multiple-causes models, masks have recently been introduced in conjunction with ICA [17] in order
to model local contrast correlation in image patches. For our model, the combination of masks and
vectorial feature parameters allow for applications to more general sets of data than those used for
classical component extraction. In numerical experiments we have used color images for instance.
However, we can apply our algorithm also to grey-level data such as used for other algorithms. This
allows for a direct quantitative comparison of the novel algorithm with state-of-the-art component
extraction approaches. The reported results for the standard bars test show the competitiveness of
our approach despite its larger set of parameters [compare, e.g., 12, 8]. A limitation of the training
method used is its assumption of relatively sparsely active hidden causes. This limitation is to some
extent shared, e.g., with SC or sparse versions of NMF. Experiments with higher γ values in (13)
indicate, however, that the performance of the algorithm is not so much limited by the accuracy of
the E-step, but rather by the more challenging likelihood landscape for less sparse data.
For applications to visual data, color is the most straightforward feature to model. Possible alternatives are, however, Gabor feature vectors, which model object textures (see, e.g., [18] and references
therein), SIFT features [19], or vectors using combinations of color and texture [e.g. 6]. Depending on the choice of feature vectors and the application domain, it might be necessary to generalize
the model. It is, for instance, straightforward to introduce more complex feature vectors. Although
one feature, e.g. one color, per cause can represent a suitable model for many applications, it can for
other applications also make sense to use multiple feature vectors per cause. In the extreme case as
many feature vectors as pixels could be used, i.e., T_h → T_hd. The derivation of update rules for such
features would proceed along the same lines as the derivations for single features T_h. Furthermore,
individual prior parameters for the frequency of object appearances could be introduced. Such parameters could be trained with an approach similar to the one in [8]. Additional parameters could
also be introduced to model different prior probabilities for different arrangements in depth. An easy
alteration would be, for instance, to always map one specific hidden unit to the most distant position
in depth in order to model a background. Finally, the most interesting, but also most challenging
generalization direction would be the inclusion of invariance principles. In its current form the
model has, in common with state-of-the-art component extraction algorithms, the assumption that
the component locations are fixed. Especially for images of objects, changes in planar component
positions have to be addressed in general. Possible approaches that have been used in the literature
can, for instance, be found in [3, 4] in the context of occlusion modeling, in [20] in the context of
NMF, and in [18] in the context of object recognition. Potential future application domains for our
approach would, however, also include data sets for which component positions are fixed. E.g., in
many benchmark databases for face recognition, faces are already in a normalized position. For
component extraction, faces can be regarded as combinations of a background face "occluded" by
mouth, nose, and eye textures, which can themselves be occluded by beards, sunglasses, or hats.
In summary, the studied occlusion model advances generative modeling approaches to visual data
by explicitly modeling object arrangements in depth. The approach complements established approaches of occlusion modeling in the literature by generalizing standard approaches to multiple-causes component extraction.
Acknowledgements. We gratefully acknowledge funding by the German Federal Ministry of Education and Research (BMBF) in the project 01GQ0840 (Bernstein Focus Neurotechnology Frankfurt), the Gatsby Charitable Foundation, and the Honda Research Institute Europe GmbH.
References
[1] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381:607–609, 1996.
[2] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401(6755):788–791, 1999.
[3] N. Jojic and B. Frey. Learning flexible sprites in video layers. Conf. on Computer Vision and Pattern Recognition, 1:199–206, 2001.
[4] C. K. I. Williams and M. K. Titsias. Greedy learning of multiple objects in images using robust statistics and factorial learning. Neural Computation, 16(5):1039–1062, 2004.
[5] K. Fukushima. Restoring partly occluded patterns: a neural network model. Neural Networks, 18(1):33–43, 2005.
[6] C. Eckes, J. Triesch, and C. von der Malsburg. Analysis of cluttered scenes using an elastic matching approach for stereo images. Neural Computation, 18(6):1441–1471, 2006.
[7] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer, 1998.
[8] J. Lücke and M. Sahani. Maximal causes for non-linear component extraction. Journal of Machine Learning Research, 9:1227–1267, 2008.
[9] N. Ueda and R. Nakano. Deterministic annealing EM algorithm. Neural Networks, 11(2):271–282, 1998.
[10] M. Sahani. Latent variable models for neural data analysis, 1999. PhD Thesis, Caltech.
[11] P. Földiák. Forming sparse representations by local anti-Hebbian learning. Biol Cybern, 64:165–170, 1990.
[12] M. W. Spratling. Learning image components for object recognition. Journal of Machine Learning Research, 7:793–815, 2006.
[13] S. Hochreiter and J. Schmidhuber. Feature extraction through LOCOCODE. Neural Computation, 11:679–714, 1999.
[14] P. O. Hoyer. Non-negative matrix factorization with sparseness constraints. Journal of Machine Learning Research, 5:1457–1469, 2004.
[15] S. A. Nene, S. K. Nayar, and H. Murase. Columbia object image library (COIL-100). Technical report, cucs-006-96, 1996.
[16] H. Wersing and E. Körner. Learning optimized features for hierarchical models of invariant object recognition. Neural Computation, 15(7):1559–1588, 2003.
[17] U. Köster, J. T. Lindgren, M. Gutmann, and A. Hyvärinen. Learning natural image structure with a horizontal product model. In Int. Conf. on Independent Component Analysis and Signal Separation (ICA), pages 507–514, 2009.
[18] P. Wolfrum, C. Wolff, J. Lücke, and C. von der Malsburg. A recurrent dynamic model for correspondence-based face recognition. Journal of Vision, 8(7):1–18, 2008.
[19] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91–110, 2004.
[20] J. Eggert, H. Wersing, and E. Körner. Transformation-invariant representation and NMF. In Int. J. Conf. on Neural Networks (IJCNN), pages 2535–2539, 2004.
2,982 | 3,702 | Periodic Step-Size Adaptation for
Single-Pass On-line Learning
Chun-Nan Hsu1,2,∗, Yu-Ming Chang1, Han-Shen Huang1 and Yuh-Jye Lee3
1 Institute of Information Science, Academia Sinica, Taipei 115, Taiwan
2 USC/Information Sciences Institute, Marina del Rey, CA 90292, USA
3 Department of Computer Science and Information Engineering,
National Taiwan University of Science and Technology, Taipei 106, Taiwan
∗ [email protected]
Abstract
It has been established that the second-order stochastic gradient descent (2SGD)
method can potentially achieve generalization performance as well as empirical
optimum in a single pass (i.e., epoch) through the training examples. However,
2SGD requires computing the inverse of the Hessian matrix of the loss function,
which is prohibitively expensive. This paper presents Periodic Step-size Adaptation (PSA), which approximates the Jacobian matrix of the mapping function and
explores a linear relation between the Jacobian and Hessian to approximate the
Hessian periodically and achieve near-optimal results in experiments on a wide
variety of models and tasks.
1
Introduction
On-line learning has been studied for decades. Early works concentrate on minimizing the required
number of model corrections made by the algorithm through a single pass of training examples.
More recently, on-line learning is considered as a solution of large scale learning mainly because
of its fast convergence property. New on-line learning algorithms for large scale learning, such as
SMD [1] and EG [2], are designed to learn incrementally to achieve fast convergence. They usually
still require several passes (or epochs) through the training examples to converge at a satisfying
model. However, the real bottleneck of large scale learning is I/O time. Reading a large data set
from disk to memory usually takes much longer than CPU time spent in learning. Therefore, the
study of on-line learning should focus more on single-pass performance. That is, after processing
all available training examples once, the learned model should generalize as well as possible so
that used training examples can really be removed from memory to minimize disk I/O time. In
natural learning, single-pass learning is also interesting because it allows for continual learning from
unlimited training examples under the constraint of limited storage, resembling a natural learner.
Previously, many authors, including [3] and [4], have established that given a sufficiently large set
of training examples, 2SGD can potentially achieve generalization performance as well as empirical
optimum in a single pass through the training examples. However, 2SGD requires computing the
inverse of the Hessian matrix of the loss function, which is prohibitively expensive. Many attempts
to approximate the Hessian have been made. For example, one may consider modifying L-BFGS [5]
for online settings. L-BFGS relies on line search. But in online settings, we only have the surface of
the loss function given one training example, as opposed to all in batch settings. The search direction
obtained by line search on such a surface rarely leads to empirical optimum. A review of similar
attempts can be found in Bottou's tutorial [6], where he suggested that none is actually sufficient to
achieve theoretical single-pass performance in practice. This paper presents a new 2SGD method,
called Periodic Step-size Adaptation (PSA). PSA approximates the Jacobian matrix of the mapping
function and explores a linear relation between the Jacobian and Hessian to approximate the Hessian
periodically. The per-iteration time-complexity of PSA is linear in the number of nonzero dimensions of the data. We analyze the accuracy of the approximation and derive the asymptotic rate of
convergence for PSA. Experimental results show that for a wide variety of models and tasks, PSA is
always very close to empirical optimum in a single-pass. Experimental results also show that PSA
can run much faster compared to state-of-the-art algorithms.
2 Aitken's Acceleration

Let w ∈ ℝ^d be a d-dimensional weight vector of a model. A machine learning problem can be formulated as a fixed-point iteration that solves the equation w = M(w), where M is a mapping M : ℝ^d → ℝ^d, until w* = M(w*). Assume that the mapping M is differentiable. Then we can apply Aitken's acceleration, which attempts to extrapolate to the local optimum in one step, to accelerate the convergence of the mapping:

    w* = w^(t) + (I − J)^{-1} (M(w^(t)) − w^(t)),    (1)

where J := M′(w*) is the Jacobian of the mapping M at w*. When every eigenvalue λ := eig(J) ∈ (−1, 1), the mapping M is guaranteed to converge. That is, when t → ∞, w^(t) → w*.

It is usually difficult to compute J for even a simple machine learning model. To alleviate this issue, we can approximate J with the estimates of its i-th eigenvalue λ_i by

    λ_i^(t) := (M(w^(t))_i − w_i^(t)) / (w_i^(t) − w_i^(t−1)),    ∀i,    (2)

and extrapolate at each dimension i by:

    w_i^(t+1) = w_i^(t) + (1 − λ_i^(t))^{-1} (M(w^(t))_i − w_i^(t)).    (3)

In practice, Aitken's acceleration alternates a step for preparing λ^(t) and a step for the extrapolation. That is, when t is an even number, M is used to obtain w^(t+1); otherwise, the extrapolation (3) is used. A benefit of the above approximation is that the cost for performing an extrapolation is O(d), linear in terms of the dimension.
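To make the componentwise scheme in (2) and (3) concrete, the following Python sketch applies one Aitken extrapolation step given a generic fixed-point mapping. It is an illustrative reading of the equations rather than code from the paper; the mapping name `phi` and the clipping of the eigenvalue estimates are our own assumptions, the latter added for numerical safety.

```python
import numpy as np

def aitken_step(phi, w_prev, w):
    """One componentwise Aitken extrapolation, following (2) and (3).

    phi    : the fixed-point mapping, phi(w) -> new weight vector
    w_prev : w^(t-1)
    w      : w^(t), assumed to equal phi(w_prev)
    """
    phi_w = phi(w)
    denom = w - w_prev
    # Componentwise eigenvalue estimates of the Jacobian, Equation (2),
    # guarded against tiny denominators (an assumption we add).
    lam = np.divide(phi_w - w, denom,
                    out=np.zeros_like(w), where=np.abs(denom) > 1e-12)
    # Keep the estimates inside (-1, 1); the paper assumes eig(J) in (-1, 1).
    lam = np.clip(lam, -0.99, 0.99)
    # Componentwise extrapolation, Equation (3).
    return w + (phi_w - w) / (1.0 - lam)
```

In practice one would alternate a plain mapping step, which refreshes the pair (w^(t−1), w^(t)), with an extrapolation step, as the text above describes.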
3 Periodic Step-Size Adaptation

When M is a gradient descent update rule, that is, M(w) ≡ w − η g(w; D), where η is a scalar step size, D is the entire set of training examples, and g(w; D) is the gradient of a loss function to be minimized, Aitken's acceleration is equivalent to Newton's method, because

    J = M′(w) = I − η H(w; D),    (4)

    (I − J)^{-1} = (1/η) H(w; D)^{-1},  and  M(w) − w = w − η g(w; D) − w = −η g(w; D),

where H(w; D) = g′(w; D), the Hessian matrix of the loss function, and the extrapolation given in (1) becomes

    w = w + (I − J)^{-1} (M(w) − w) = w − (1/η) H^{-1} η g = w − H^{-1} g.

In this case, Aitken's acceleration enjoys the same local quadratic convergence as Newton's method. This can also be extended to an SGD update rule: w^(t+1) ← w^(t) − η ⊙ g(w^(t); B^(t)), where the mini-batch B ⊂ D, |B| ≪ |D|, is a randomly selected small subset of D. A genuine on-line learner usually has |B| = 1. We consider a positive vector-valued step size η ∈ ℝ^d_+, and ⊙ denotes the component-wise (Hadamard) product of two vectors. Again, by exploiting (4), since

    eig(I − diag(η) H) = eig(M′) = eig(J) ∋ λ,

where λ is an estimated eigenvalue of J as given in (2), when H is a symmetric matrix, its eigenvalue is given by

    eig(J) = 1 − η_i eig(H)  ⟹  eig(H) = (1 − eig(J)) / η_i.
Therefore, we can update the step size component-wise by

    eig(H^{-1}) = η_i / (1 − eig(J))  ⟹  η_i^(t+1) ∝ η_i^(t) / (1 − λ_i^(t)).    (5)
Since the mapping M in SGD involves the gradient g(w^(t); B^(t)) of a randomly selected training example B^(t), λ is itself a random variable. It is unlikely that we can obtain a reliable eigenvalue estimate at each single iteration. To increase the stationarity of the mapping, we take advantage of the law of large numbers and aggregate m consecutive SGD mappings into a new mapping

    M^m = M(M(. . . M(w) . . .))    (m applications of M),

which reduces the variance of the gradient estimate by 1/m, compared to the plain SGD mapping M. The approximation is valid because w^(t+j), j = 0, . . . , m − 1 are approximately fixed when η is sufficiently small [7].

We can proceed to estimate the eigenvalues of M^m from w^(t), w^(t+m) and w^(t+2m) by applying (2) for each component i:

    λ̂_i^m = (w_i^(t+2m) − w_i^(t+m)) / (w_i^(t+m) − w_i^(t)).    (6)
We note that our aggregate mapping M^m is different from a mapping that takes m mini-batches as the input in a single iteration. Their difference is similar to that between batch and stochastic gradient descent. Aggregate mappings have m chances to adjust the search direction, while mappings that use m mini-batches together only have one.
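As an illustration of (6), the estimate only needs three snapshots of the weight vector taken m iterations apart. The sketch below is our own reading of the equation, with a small-denominator guard added as an assumption:

```python
import numpy as np

def estimate_eigenvalues(w_t, w_tm, w_t2m, eps=1e-12):
    """Componentwise eigenvalue estimates of the aggregate mapping M^m,
    Equation (6): lambda_i = (w_i^(t+2m) - w_i^(t+m)) / (w_i^(t+m) - w_i^(t))."""
    num = w_t2m - w_tm
    den = w_tm - w_t
    return np.divide(num, den, out=np.zeros_like(num), where=np.abs(den) > eps)
```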
With the estimated eigenvalues, we can present the complete update rule to adjust the step-size vector η. To ensure that the estimated values of eig(J) ∈ (−1, 1) and to ensure numerical stability, we introduce a positive constant κ < 1 as the upper bound of |λ̂_i^m|. Let u denote the constrained λ̂^m. Its components are given by

    u_i := sgn(λ̂_i^m) min(|λ̂_i^m|, κ),    ∀i.    (7)
Then we can update the step size every 2m iterations based on u by:

    η^(t+2m+1) = v ⊙ η^(t+2m),    (8)

where v is a discount factor with components defined by

    v_i := (a + u_i) / (a + b + κ),    ∀i.    (9)
The discount factor is derived from (5) and the fact that when κ < 1, 1/(1 − κ) > 1 + κ, to ensure numerical stability, with a and b controlling the range. Let α be the maximum value and β be the minimum value of v_i. We can obtain a and b by solving β ≤ v_i ≤ α for all i. Since −κ ≤ u_i ≤ κ, we have v_i = α when u_i = κ and v_i = β when u_i = −κ. Solving these equations yields:

    a = ((α + β)/(α − β)) κ  and  b = (2(1 − α)/(α − β)) κ.    (10)

For example, if we want to set α = 0.9999 and β = 0.99, then a and b will be 201κ and 0.0202κ, respectively. Setting 0 < β < α ≤ 1 ensures that the step size is decreasing and approaches zero so that SGD can be guaranteed to converge [7].
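The constants in (10) and the resulting discount factors from (9) are easy to check numerically. The short sketch below, our own verification code rather than material from the paper, reproduces the a = 201κ and b = 0.0202κ example:

```python
def discount_constants(alpha, beta, kappa):
    """Constants a and b from Equation (10)."""
    a = (alpha + beta) / (alpha - beta) * kappa
    b = 2.0 * (1.0 - alpha) / (alpha - beta) * kappa
    return a, b

def discount_factor(u_i, a, b, kappa):
    """Discount factor v_i from Equation (9); u_i lies in [-kappa, kappa]."""
    return (a + u_i) / (a + b + kappa)

a, b = discount_constants(0.9999, 0.99, kappa=1.0)
print(a, b)                              # ~201.0 and ~0.0202 (times kappa)
print(discount_factor(+1.0, a, b, 1.0))  # ~0.9999 = alpha
print(discount_factor(-1.0, a, b, 1.0))  # ~0.99   = beta
```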
Algorithm 1 shows the PSA algorithm. In a nutshell, PSA applies SGD with a fixed step size and periodically updates the step size by approximating the Jacobian of the aggregated mapping. The extra complexity per iteration is O(d/m), because the cost of the eigenvalue estimation given in (6) is 2d and it is required only once every 2m iterations. That is, PSA updates η after learning from 2m · |B| examples.
Algorithm 1 The PSA Algorithm
 1: Given: α, β, κ < 1 and m
 2: Initialize w^(0) and η^(0); t ← 0; a ← ((α + β)/(α − β))κ and b ← (2(1 − α)/(α − β))κ    ▷ Equation (10)
 3: repeat
 4:    Choose a small batch B^(t) uniformly at random from the set of training examples D
 5:    update w^(t+1) ← w^(t) − η^(t) ⊙ g(w^(t); B^(t))    ▷ SGD update
 6:    if (t + 1) mod 2m = 0 then    ▷ Update η
 7:       update λ̂_i^m ← (w_i^(t+2m) − w_i^(t+m)) / (w_i^(t+m) − w_i^(t)), ∀i    ▷ Equation (6)
 8:       For all i, update u_i ← sgn(λ̂_i^m) min(|λ̂_i^m|, κ)    ▷ Equation (7)
 9:       For all i, update v_i ← (a + u_i)/(a + b + κ)    ▷ Equation (9)
10:       update η^(t+1) ← v ⊙ η^(t)    ▷ Equation (8)
11:    else
12:       η^(t+1) ← η^(t)
13:    end if
14:    t ← t + 1
15: until Convergence
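The following Python sketch puts Algorithm 1 together for a generic stochastic gradient. It is a minimal illustrative implementation, not the authors' released code; the helper names `grad` and `sample_batch` and the explicit snapshot buffer are our own assumptions.

```python
import numpy as np

def psa_sgd(grad, sample_batch, w0, eta0, m=10, alpha=0.9999, beta=0.99,
            kappa=0.9, num_iters=10000):
    """Minimal sketch of PSA (Algorithm 1)."""
    a = (alpha + beta) / (alpha - beta) * kappa          # Equation (10)
    b = 2.0 * (1.0 - alpha) / (alpha - beta) * kappa
    w, eta = w0.copy(), np.full_like(w0, eta0)
    w_t = w.copy()          # snapshot w^(t), start of the current period
    w_tm = None             # snapshot w^(t+m)
    for t in range(num_iters):
        w -= eta * grad(w, sample_batch())               # SGD update
        if (t + 1) % m == 0 and w_tm is None:
            w_tm = w.copy()                              # half-period snapshot
        if (t + 1) % (2 * m) == 0:
            den = w_tm - w_t
            lam = np.divide(w - w_tm, den,               # Equation (6)
                            out=np.zeros_like(w), where=np.abs(den) > 1e-12)
            u = np.sign(lam) * np.minimum(np.abs(lam), kappa)   # Equation (7)
            v = (a + u) / (a + b + kappa)                       # Equation (9)
            eta *= v                                            # Equation (8)
            w_t, w_tm = w.copy(), None                   # start a new period
    return w
```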
4 Analysis of PSA
We analyze the accuracy of λ̂_i^(t) as an eigenvalue estimate as follows. Consider the eigendecomposition J = QΛQ^{-1}, and let u_k be the column vectors of Q and v̂_k the row vectors of Q^{-1}. Then we have

    J^t = Σ_{k=1}^{d} λ_k^t u_k v̂_k,

where λ_k is the k-th eigenvalue of J. By applying Taylor's expansion to M, we have

    w^(t) − w* ≈ J^t (w^(0) − w*),
    w^(t−1) − w* ≈ J^{t−1} (w^(0) − w*)
    ⟹ Δ^(t) = w^(t) − w^(t−1) ≈ J^t J^{-1} (J − I)(w^(0) − w*)
    ⟹ Δ^(t+1) = w^(t+1) − w^(t) ≈ Σ_{k=1}^{d} λ_k^{t+1} u_k v̂_k J^{-1} (J − I)(w^(0) − w*).

Now let

    π_{ik} := e_i^⊤ u_k v̂_k J^{-1} (J − I)(w^(0) − w*),

where e_i is the i-th column of I. Let Δ_i be the i-th element of Δ and λ_{k*} be the largest eigenvalue of J such that π_{ik*} ≠ 0. Then

    λ̂_i^(t+1) = Δ_i^(t+1) / Δ_i^(t)
               = (Σ_{k=1}^{d} λ_k^{t+1} π_{ik}) / (Σ_{k=1}^{d} λ_k^t π_{ik})
               = λ_{k*} · (1 + Σ_{k≠k*} (λ_k/λ_{k*})^{t+1} π_{ik}/π_{ik*}) / (1 + Σ_{k≠k*} (λ_k/λ_{k*})^t π_{ik}/π_{ik*}).

Therefore, we can conclude that

• λ̂_i → λ_{k*} as t → ∞, because for all k, if π_{ik} ≠ 0 then |λ_k/λ_{k*}| ≤ 1; λ_{k*} ≈ λ_i is the i-th componentwise rate of convergence (a small numerical check follows this list).
• λ̂_i = λ_i if J is a diagonal matrix. In this case, our approximation is exact. This happens when there are high percentages of missing data for a Bayesian network model trained by EM [8] and when features are uncorrelated for training a conditional random field model [9].
• λ̂_i is an average of the eigenvalues weighted by λ_k^t π_{ik}. Since π_{ik} is usually the largest when k = i, we have λ̂_i ≈ λ_i.
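The first bullet above is easy to see numerically. The following small check, our own illustration rather than code from the paper, builds a two-dimensional linear mapping with known eigenvalues and watches the ratio estimate of (2) approach the dominant one:

```python
import numpy as np

# A linear fixed-point mapping w -> J w + c with known eigenvalues 0.9 and 0.5.
J = np.array([[0.9, 0.0],
              [0.2, 0.5]])
c = np.array([1.0, 1.0])
w_prev = np.zeros(2)
w = J @ w_prev + c
for t in range(30):
    w_next = J @ w + c
    lam_hat = (w_next - w) / (w - w_prev)   # Equation (2), componentwise
    w_prev, w = w, w_next
print(lam_hat)   # both components approach the dominant eigenvalue 0.9
```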
When we have the least possible step size, η^(t+1) = β η^(t) for all t mod 2m = 0 in PSA, the expectation of w^(t) obtained by PSA can be shown to be:

    E(w^(t)) = w* + [ Π_{j=1}^{t} ( I − β^j diag(η^(0)) H(w*; D) ) ] (w^(0) − w*)
             = w* + S^(t) (w^(0) − w*).

The rate of convergence is governed by the largest eigenvalue of S^(t). We now derive a bound of this eigenvalue.
Theorem 1 Let λ* be the least eigenvalue of H(w*; D). The asymptotic rate of convergence of PSA is bounded by

    eig(S^(t)) ≤ exp( −η^(0) λ* β / (1 − β) ).

Proof We can show that

    eig(S^(t)) = Π_{j=1}^{t} (1 − β^j η^(0) λ*) ≤ Π_{j=1}^{t} exp(−β^j η^(0) λ*) = exp( −η^(0) λ* Σ_{j=1}^{t} β^j ),

because for any 0 ≤ x_j < 1, 1 − x_j ≤ e^{−x_j}, and hence

    0 ≤ Π_{j=1}^{t} (1 − x_j) ≤ Π_{j=1}^{t} e^{−x_j} = exp( −Σ_{j=1}^{t} x_j ).

Now, since

    Σ_{j=1}^{t} β^j = (β − β^{t+1}) / (1 − β) → β / (1 − β)  when t → ∞,

we have

    eig(S^(t)) ≤ exp( −η^(0) λ* Σ_{j=1}^{t} β^j ) → exp( −η^(0) λ* β / (1 − β) )  when t → ∞.  □
Though this analysis suggests that for rapid convergence to w* we should assign β ≈ 1 with a large κ and η^(0), it is based on a worst-case scenario and thus is insufficient as a practical guideline for parameter assignment. In practice, we fix (α, β, κ) = (0.9999, 0.99, 0.9) and tune m as follows. When the training set size |D| ≥ 2000, setting m in the order of 0.5|D|/1000 is usually sufficient. This setting implies that the step size will be adjusted per |D|/1000 examples. In fact, when m is in the same order, PSA performs similarly. Consider the following three settings: (m, α, β) = (10, 0.9999, 0.99), (100, 0.999, 0.9) or (1, 0.99999, 0.999). They all yield nearly identical single-pass F-scores for the BaseNP task (see Section 5). The first setting was used in this paper. To see why this is the case, consider the decreasing factor v_i (see (8) and (9)), which is confined within the interval (β, α). Assume that v_i is selected uniformly at random; then the mean of v_i is 0.995 when (α, β) = (0.9999, 0.99), and η_i will be decreased by a factor of 0.995 on average in each PSA update. When m = 10, PSA will update η_i once per 20 examples. After learning from 200 examples, PSA will have decreased η_i 10 times by a combined factor of 0.9511. Similarly, we can obtain that the factors for the other two settings are 0.95 and 0.9512, respectively, nearly identical.
5 Experiments
Table 1 shows the tasks chosen for our comparison. The tasks for CRF have been used in competitions and the performance was measured by F-score. The Weight column for CRF reported here is the number of features provided by CRF++. Target provides the empirically optimal performance achieved by batch learners. If PSA accurately approximates 2SGD, then its single-pass performance should be very close to Target. The target F-score for BioNLP/NLPBA is not >85% as reported in [1], because that result was due to a bug that included true labels as a feature1.
Table 1: Tasks for the experiments.

Task           | Model | Training | Test    | Tag/Class | Weight   | Target
Base NP        | CRF   | 8936     | 2012    | 3         | 1015662  | 94.0% [10]
Chunking       | CRF   | 8936     | 2012    | 23        | 7448606  | 93.6% [11]
BioNLP/NLPBA   | CRF   | 18546    | 3856    | 11        | 5977675  | 70.0% [12]
BioCreative 2  | CRF   | 15000    | 5000    | 3         | 10242972 | 86.5% [13]
LS FD          | LSVM  | 2734900  | 2734900 | 2         | 900      | 3.26%
LS OCR         | LSVM  | 1750000  | 1750000 | 2         | 1156     | 23.94%
MNIST [14]     | CNN   | 60000    | 10000   | 10        | 134066   | 0.99%

5.1 Conditional Random Field
We compared PSA with plain SGD and SMD [1] to evaluate PSA's performance for training conditional random fields (CRF). We implemented PSA by replacing the L-BFGS optimizer in CRF++ [11]. For SMD, we used the implementation available in the public domain2. Our SGD implementation for CRF is from Bottou3. All the above implementations are revisions of CRF++. Finally, we ran the original CRF++ with default settings to obtain the performance results of L-BFGS. We simply used the original parameter settings for SGD and SMD as given in the literature. For PSA, we used κ = 0.9, (α, β) = (0.9999, 0.99), m = 10, and η_i^(0) = 0.1, ∀i. The batch size is one for all tasks. These parameters were determined by using a small subset from CoNLL 2000
baseNP and we simply used them for all tasks. All of the experiments reported here for CRF were run on an Intel Q6600 Fedora 8 i686 PC with 4G RAM.
Table 2 compares SGD variants in terms of the execution time and F-scores achieved after processing
the training examples for a single pass. Since the loss function in CRF training is convex, the
convergence results of L-BFGS can be considered as the empirical minimum. The results show that
single-pass F-scores achieved by PSA are about as good as the empirical minima, suggesting that
PSA has effectively approximated the Hessian in CRF training.
Fig. 1 shows the learning curves in terms of the CPU time. Though as expected, plain SGD is the
fastest, it is remarkable that PSA is faster than SMD for all tasks. SMD is supposed to have an edge
here because the mini-batch size for SMD was set to 6 or 8, as specified in [1], while PSA used one
for all tasks. But PSA is still faster than SMD partly because PSA can take advantage of the sparsity
trick as plain SGD [15].
5.2 Linear SVM
We also evaluated PSA's single-pass performance for training linear SVM. It is straightforward to
apply PSA as a primal optimizer for linear SVM. We used two very large data sets: FD (face detection) and OCR (see Table 1), from the Pascal large-scale learning challenge in 2008 and compared
the performance of PSA with the state-of-the-art linear SVM solvers: Liblinear 1.33 [16], the winner
of the challenge, and SvmSgd, from Bottou's SGD web site. They have been shown to outperform
many well-known linear SVM solvers, such as SVM-perf [17] and Pegasos [15].
1 Thanks to Shing-Kit Chan of the Chinese University of Hong Kong for pointing that out.
2 Available under LGPL from the following URL: http://sml.nicta.com.au/code/crfsmd/.
3 http://leon.bottou.org/projects/sgd.
Table 2: CPU time in seconds and F-scores achieved after a single pass of CRF training.

               | Base NP          | Chunking          | BioNLP/NLPBA       | BioCreative 2
Method (pass)  | time     F-score | time      F-score | time       F-score | time      F-score
SGD (1)        | 1.15     92.42   | 13.04     92.26   | 12.23      66.37   | 3.18      34.33
SMD (1)        | 41.50    91.81   | 350.00    91.89   | 522.00     66.53   | 497.71    69.04
PSA (1)        | 16.30    93.31   | 160.00    93.16   | 206.00     69.41   | 191.61    80.79
L-BFGS (batch) | 221.17   93.91   | 8694.40   93.78   | 20130.00   70.30   | 1601.50   86.82
[Figure 1 shows learning curves of F-score versus CPU time (seconds) on the BaseNP, Chunking, NLPBA04, and BioCreative 2 GM tasks, comparing PSA, SMD, SGD, and L-BFGS.]
Figure 1: Comparison of CPU time; horizontal lines indicate target F-scores.
We selected L2-regularized logistic regression as the loss function for PSA and Liblinear because
it is twice differentiable. The weight C of the margin error term was set to one. We kept SvmSgd
intact. The experiment was run on an Open-SUSE Linux machine with Intel Xeon E7320 CPU
(2.13GHz) and 64GB RAM. Table 3 shows the results. Again, PSA achieves the best single-pass
accuracy for both tasks. Its test accuracies are very close to that of converged Liblinear. PSA takes
much less time than the other two solvers. PSA (1) is faster than SvmSgd (1) for SVM because
SvmSgd uses the sparsity trick [15], which speeds up training for sparse data, but otherwise may
slow down. Both data sets we used turn out to be dense, i.e., with no zero features. We implemented
PSA with the sparsity trick for CRF only but not for SVM and CNN.
Table 3: Test accuracy rates and elapsed CPU time in seconds by various linear SVM solvers.

                   | LS FD              | LS OCR
Method (pass)      | accuracy   time    | accuracy   time
Liblinear converge | 96.74      4648.49 | 76.06      4454.42
Liblinear (1)      | 91.43      290.58  | 74.33      398.00
SvmSgd (20)        | 93.78      1135.67 | –          –
SvmSgd (10)        | 93.77      567.68  | 73.71      473.35
SvmSgd (1)         | 93.60      56.78   | 73.76      46.96
PSA (1)            | 95.10      30.65   | 75.68      25.33
The parameter settings for PSA are basically the same as those for CRF, but with a large period m = 1250 for FD and 500 for OCR. For FD, the worst accuracy by PSA is 94.66% with m between 250 and 2000. For OCR, the worst is 75.20% with m between 100 and 1000, suggesting that PSA is not very sensitive to parameter settings.
5.3 Convolutional Neural Network
Approximating Hessian is particularly challenging when the loss function is non-convex. We tested
PSA in such a setting by applying PSA to train a large convolutional neural network for the original
10-class MNIST task (see Table 1). We tried to duplicate the implementation of LeNet described in
[18] in C++. Our implementation, referred to as 'LeNet-S', is a simplified variant of LeNet-5. The differences include that the sub-sampling layers in LeNet-S pick only the upper-left value from a 2 × 2 area and abandon the other three. LeNet-S used more maps (50 vs. 16) in the third layer and fewer nodes (120 vs. 100) in the fifth layer, due to the difference in the previous sub-sampling layer. Finally, we did not implement the Gaussian connections in the last layer. We trained LeNet-S by plain SGD and PSA. The initial η for SGD was 0.7 and decreased by 3 percent per pass. For PSA, we used κ = 0.9, (α, β) = (0.99999, 0.999), m = 10, η_i^(0) = 0.5, ∀i, and the mini-batch size is one for all tasks. We also adapted a trick given in [19] which advises that step sizes in the lower layers should be larger than in the higher layers. Following their trick, the initial step sizes for the first and the third layers were √5 and 2.5 times as large as those for the other layers, respectively.
The experiments were run on an Intel Q6600 Fedora 8 i686 PC with 4G RAM.
Table 4 shows the results. To obtain the empirical optimal error rate of our LeNet-S model, we ran
plain SGD with sufficient passes and obtained 0.99% error rate at convergence, slightly higher than
LeNet-5's 0.95% [18]. Single-pass performance of PSA with the layer trick is within one percentage point of the target. Starting from an initial weight closer to the optimum helped improve PSA's
performance further. We ran SGD 100 passes with randomly selected 10K training examples then
re-started training with PSA using the rest 50K training examples for a single pass. Though PSA did
achieve a better error rate, this is infeasible because it took 4492 seconds to run SGD 100 passes.
Finally, though not directly comparable, we also report the performance of TONGA given in [20] as
a reference. TONGA is a 2SGD method based on natural gradient.
Table 4: CPU time in seconds and percentage test error rates for various neural network trainers.

Method (pass) | time     | error     Method (pass)           | time   | error
SGD (1)       | 266.77   | 2.36      PSA w/o layer trick (1) | 311.95 | 2.31
SGD (140)     | 37336.20 | 0.99      PSA w/ layer trick (1)  | 311.00 | 1.97
TONGA (n/a)   | 500.00   | 2.00      PSA re-start (1)        | 253.72 | 1.90
6 Conclusions
It has been shown that given a sufficiently large training set, a single pass of 2SGD generalizes as
well as the empirical optimum. Our results show that PSA provides a practical solution to accomplish near optimal performance of 2SGD as predicted theoretically for a variety of large scale models
and tasks with a reasonably low cost per iteration compared to competing 2SGD methods. The benefit of 2SGD with PSA over plain SGD becomes clearer when the scale of the task is increasingly
large. For non-convex neural network tasks, since the curvature of the error surface is so complex,
it is still very challenging for an eigenvalue approximation method like PSA. A complete version of
this paper will appear as [21]. Source codes of PSA are available at http://aiia.iis.sinica.edu.tw.
References
[1] S.V.N. Vishwanathan, Nicol N. Schraudolph, Mark W. Schmidt, and Kevin P. Murphy. Accelerated training of conditional random fields with stochastic gradient methods. In Proceedings of the 23rd International Conference on Machine Learning (ICML'06), Pittsburgh, PA, USA, June 2006.
[2] Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter L. Bartlett. Exponentiated gradient algorithms for conditional random fields and max-margin markov networks. Journal of Machine Learning Research, 9:1775–1822, August 2008.
[3] Noboru Murata and Shun-Ichi Amari. Statistical analysis of learning dynamics. Signal Processing, 74(1):3–28, April 1999.
[4] Léon Bottou and Yann LeCun. On-line learning for very large data sets. Applied Stochastic Models in Business and Industry, 21(2):137–151, 2005.
[5] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, 1999.
[6] Léon Bottou. The tradeoffs of large-scale learning. Tutorial, the 21st Annual Conference on Neural Information Processing Systems (NIPS 2007), Vancouver, BC, Canada, December 2007. http://leon.bottou.org/talks/largescale.
[7] Albert Benveniste, Michel Metivier, and Pierre Priouret. Adaptive Algorithms and Stochastic Approximations. Springer-Verlag, 1990.
[8] Chun-Nan Hsu, Han-Shen Huang, and Bo-Hou Yang. Global and componentwise extrapolation for accelerating data mining from large incomplete data sets with the EM algorithm. In Proceedings of the Sixth IEEE International Conference on Data Mining (ICDM'06), pages 265–274, Hong Kong, China, December 2006.
[9] Han-Shen Huang, Bo-Hou Yang, Yu-Ming Chang, and Chun-Nan Hsu. Global and componentwise extrapolations for accelerating training of Bayesian networks and conditional random fields. Data Mining and Knowledge Discovery, 19(1):58–91, 2009.
[10] Fei Sha and Fernando Pereira. Shallow parsing with conditional random fields. In Proceedings of Human Language Technology, the North American Chapter of the Association for Computational Linguistics (NAACL'03), pages 213–220, 2003.
[11] Taku Kudo. CRF++: Yet another CRF toolkit, 2006. Available under LGPL from the following URL: http://crfpp.sourceforge.net/.
[12] Burr Settles. Biomedical named entity recognition using conditional random fields and novel feature sets. In Proceedings of the Joint Workshop on Natural Language Processing in Biomedicine and its Applications (JNLPBA-2004), pages 104–107, 2004.
[13] Cheng-Ju Kuo, Yu-Ming Chang, Han-Shen Huang, Kuan-Ting Lin, Bo-Hou Yang, Yu-Shi Lin, Chun-Nan Hsu, and I-Fang Chung. Rich feature set, unification of bidirectional parsing and dictionary filtering for high f-score gene mention tagging. In Proceedings of the Second BioCreative Challenge Evaluation Workshop, pages 105–107, 2007.
[14] Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits, 1998. http://yann.lecun.com/exdb/mnist/.
[15] Shai Shalev-Shwartz, Yoram Singer, and Nathan Srebro. Pegasos: Primal Estimated subGrAdient SOlver for SVM. In ICML'07: Proceedings of the 24th International Conference on Machine Learning, pages 807–814, New York, NY, USA, 2007. ACM Press.
[16] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[17] Thorsten Joachims. Training linear SVMs in linear time. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'06), pages 217–226, New York, NY, USA, 2006. ACM.
[18] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[19] Yann LeCun, Leon Bottou, Genevieve B. Orr, and Klaus-Robert Muller. Efficient backprop. In G. Orr and K. Muller, editors, Neural Networks: Tricks of the Trade. Springer, 1998.
[20] Nicolas LeRoux, Pierre-Antoine Manzagol, and Yoshua Bengio. Topmoumoute online natural gradient algorithm. In Advances in Neural Information Processing Systems, 20 (NIPS 2007), Cambridge, MA, USA, 2008. MIT Press.
[21] Chun-Nan Hsu, Yu-Ming Chang, Han-Shen Huang, and Yuh-Jye Lee. Periodic step-size adaptation in second-order gradient descent for single-pass on-line structured learning. To appear in Machine Learning, Special Issue on Structured Prediction. DOI: 10.1007/s10994-009-5142-6, 2009.
2,983 | 3,703 | Regularized Distance Metric Learning:
Theory and Algorithm
Rong Jin1
Shijun Wang2
Yang Zhou1
1 Dept. of Computer Science & Engineering, Michigan State University, East Lansing, MI 48824
2 Radiology and Imaging Sciences, National Institutes of Health, Bethesda, MD 20892
[email protected] [email protected] [email protected]
Abstract
In this paper, we examine the generalization error of regularized distance metric
learning. We show that with appropriate constraints, the generalization error of
regularized distance metric learning could be independent from the dimensionality, making it suitable for handling high dimensional data. In addition, we present
an efficient online learning algorithm for regularized distance metric learning. Our
empirical studies with data classification and face recognition show that the proposed algorithm is (i) effective for distance metric learning when compared to the
state-of-the-art methods, and (ii) efficient and robust for high dimensional data.
1 Introduction
Distance metric learning is a fundamental problem in machine learning and pattern recognition. It is
critical to many real-world applications, such as information retrieval, classification, and clustering
[6, 7]. Numerous algorithms have been proposed and examined for distance metric learning. They
are usually classified into two categories: unsupervised metric learning and supervised metric learning. Unsupervised distance metric learning, or sometimes referred to as manifold learning, aims to
learn an underlying low-dimensional manifold where the distances between most pairs of data points
are preserved. Example algorithms in this category include ISOMAP [13] and Local Linear Embedding (LLE) [8]. Supervised metric learning attempts to learn distance metrics from side information
such as labeled instances and pairwise constraints. It searches for the optimal distance metric that
(a) keeps data points of the same classes close, and (b) keeps data points from different classes far
apart. Example algorithms in this category include [17, 10, 15, 5, 14, 19, 4, 12, 16]. In this work,
we focus on supervised distance metric learning.
Although a large number of studies were devoted to supervised distance metric learning (see the survey in [18] and references therein), few studies address the generalization error of distance metric
learning. In this paper, we examine the generalization error for regularized distance metric learning.
Following the idea of stability analysis [1], we show that with appropriate constraints, the generalization error of regularized distance metric learning is independent from the dimensionality of data,
making it suitable for handling high dimensional data. In addition, we present an online learning
algorithm for regularized distance metric learning, and show its regret bound. Note that although
online metric learning was studied in [9], our approach is advantageous in that (a) it is computationally more efficient in handling the constraint of SDP cone, and (b) it has a proved regret bound while
[9] only shows a mistake bound for the datasets that can be separated by a Mahalanobis distance. To
verify the efficacy and efficiency of the proposed algorithm for regularized distance metric learning,
we conduct experiments with data classification and face recognition. Our empirical results show
that the proposed online algorithm is (1) effective for metric learning compared to the state-of-the-art
methods, and (2) robust and efficient for high dimensional data.
1
2 Regularized Distance Metric Learning
Let D = {z_i = (x_i, y_i), i = 1, . . . , n} denote the labeled examples, where x_k = (x_k^1, . . . , x_k^d) ∈ ℝ^d is a vector of d dimensions and y_i ∈ {1, 2, . . . , m} is a class label. In our study, we assume that the norm of any example is upper bounded by R, i.e., sup_x |x|_2 ≤ R. Let A ∈ S_+^{d×d} be the distance metric to be learned, where the distance between two data points x and x′ is calculated as |x − x′|_A^2 = (x − x′)^⊤ A (x − x′).

Following the idea of maximum margin classifiers, we have the following framework for regularized distance metric learning:

    min_A  (1/2) |A|_F^2 + (2C / (n(n−1))) Σ_{i<j} g( y_{i,j} (1 − |x_i − x_j|_A^2) )
    s.t.   A ⪰ 0,  tr(A) ≤ η(d),    (1)

where

• y_{i,j} is derived from class labels y_i and y_j, i.e., y_{i,j} = 1 if y_i = y_j and −1 otherwise.
• g(z) is the loss function. It outputs a small value when z is a large positive value, and a large value when z is large negative. We assume g(z) to be convex and Lipschitz continuous with Lipschitz constant L.
• |A|_F^2 is the regularizer that measures the complexity of the distance metric A.
• tr(A) ≤ η(d) is introduced to ensure a bounded domain for A. As will be revealed later, this constraint will become active only when the constraint constant η(d) is sublinear in d, i.e., η(d) ∈ O(d^p) with p < 1. We will also show how this constraint could affect the generalization error of distance metric learning. (A small illustrative sketch of this objective follows the list.)
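To fix ideas, a direct transcription of the distance |x − x′|_A^2 and of the empirical objective in (1) looks as follows. This is an illustrative sketch only: the hinge-type loss g(z) = max(0, b − z) with margin b, the variable names, and the omission of the constraints are our own assumptions.

```python
import numpy as np

def metric_dist2(A, x, xp):
    """Squared Mahalanobis-type distance |x - x'|_A^2 = (x - x')^T A (x - x')."""
    d = x - xp
    return d @ A @ d

def empirical_objective(A, X, y, C, b=1.0):
    """Objective of (1) without the constraints, using g(z) = max(0, b - z).

    X : (n, d) array of examples; y : (n,) integer class labels.
    """
    n = X.shape[0]
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            y_ij = 1.0 if y[i] == y[j] else -1.0
            z = y_ij * (1.0 - metric_dist2(A, X[i], X[j]))
            loss += max(0.0, b - z)
    return 0.5 * np.sum(A * A) + 2.0 * C / (n * (n - 1)) * loss
```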
3 Generalization Error
Let A_D be the distance metric learned by the algorithm in (1) from the training examples D. Let I_D(A) denote the empirical loss, i.e.,

    I_D(A) = (2 / (n(n−1))) Σ_{i<j} g( y_{i,j} (1 − |x_i − x_j|_A^2) ).    (2)

For the convenience of presentation, we also write g(y_{i,j}(1 − |x_i − x_j|_A^2)) = V(A, z_i, z_j) to highlight its dependence on A and the two examples z_i and z_j. We denote by I(A) the loss of A over the true distribution, i.e.,

    I(A) = E_{(z_i, z_j)} [ V(A, z_i, z_j) ].    (3)

Given the empirical loss I_D(A) and the loss over the true distribution I(A), we define the estimation error as

    D_D = I(A_D) − I_D(A_D).    (4)

In order to show the behavior of the estimation error, we follow the analysis based on the stability of the algorithm [1]. The uniform stability of an algorithm measures how much the output changes when one of the training examples is replaced with another. More specifically, an algorithm A has uniform stability β if

    ∀(D, z), ∀i,  sup_{u,v} | V(A_D, u, v) − V(A_{D^{z,i}}, u, v) | ≤ β,    (5)

where D^{z,i} stands for the new training set that is obtained by replacing z_i ∈ D with a new example z. We further define κ = βn, i.e., β = κ/n, as the uniform stability β behaves like O(1/n).

The advantage of using stability analysis is that it applies to the generalization error of regularized distance metric learning even though the example pairs (z_i, z_j) used for training distance metrics are not i.i.d. (although the z_i are), which makes it difficult to directly utilize the results from statistical learning theory.

In the analysis below, we first show how to derive the generalization error bound for regularized distance metric learning given the uniform stability β (or κ). We then derive the uniform stability constant for the regularized distance metric learning framework in (1).
3.1 Generalization Error Bound for Given Uniform Stability
The analysis in this section follows [1] closely, and we therefore omit the detailed proofs. Our analysis utilizes the McDiarmid inequality, which is stated as follows.

Theorem 1. (McDiarmid Inequality) Given random variables {v_i}_{i=1}^{l}, v_i′, and a function F : v^l → ℝ satisfying

    sup_{v_1,...,v_l,v_i′} | F(v_1, . . . , v_l) − F(v_1, . . . , v_{i−1}, v_i′, v_{i+1}, . . . , v_l) | ≤ c_i,

the following statement holds:

    Pr( |F(v_1, . . . , v_l) − E(F(v_1, . . . , v_l))| > ε ) ≤ 2 exp( −2ε^2 / Σ_{i=1}^{l} c_i^2 ).

To use the McDiarmid inequality, we first compute E(D_D).

Lemma 1. Given that a distance metric learning algorithm A has uniform stability κ/n, we have the following inequality for E(D_D):

    E(D_D) ≤ 2κ/n,    (6)

where n is the number of training examples in D.

The result in the following lemma shows that the condition in the McDiarmid inequality holds.

Lemma 2. Let D be a collection of n randomly selected training examples, and D^{i,z} be the collection of examples that replaces z_i in D with example z. We have |D_D − D_{D^{i,z}}| bounded as follows:

    |D_D − D_{D^{i,z}}| ≤ (2κ + 8Lη(d) + 2g_0) / n,    (7)

where g_0 = sup_{z,z′} |V(0, z, z′)| measures the largest loss when the distance metric A is 0.

Combining the results in Lemmas 1 and 2, we can now derive the bound for the generalization error by using the McDiarmid inequality.

Theorem 2. Let D denote a collection of n randomly selected training examples, and A_D be the distance metric learned by the algorithm in (1) whose uniform stability is κ/n. With probability 1 − δ, we have the following bound for I(A_D):

    I(A_D) − I_D(A_D) ≤ 2κ/n + (2κ + 4Lη(d) + 2g_0) √( ln(2/δ) / (2n) ).    (8)
3.2 Generalization Error for Regularized Distance Metric Learning
First, we show that the supremum of tr(A_D) is O(d^{1/2}), which verifies that η(d) should behave sublinearly in d. This is summarized by the following proposition.

Proposition 1. The trace constraint in (1) will be activated only when

    η(d) ≤ √(2dg_0 C),    (9)

where g_0 = sup_{z,z′} |V(0, z, z′)|.

Proof. It follows directly from [tr(A_D)]^2 / d ≤ |A_D|_F^2 ≤ 2C sup_{z,z′} |V(0, z, z′)| = 2Cg_0.

To bound the uniform stability, we need the following proposition.

Proposition 2. For any two distance metrics A and A′, we have the following inequality hold for any examples z_u and z_v:

    |V(A, z_u, z_v) − V(A′, z_u, z_v)| ≤ 4LR^2 |A − A′|_F.    (10)

The above proposition follows directly from the facts that (a) V(A, z, z′) is Lipschitz continuous and (b) |x|_2 ≤ R for any example x. The following lemma bounds |A_D − A_{D^{i,z}}|_F.

Lemma 3. Let D denote a collection of n randomly selected training examples, and let z = (x, y) be a randomly selected example. Let A_D be the distance metric learned by the algorithm in (1). We have

    |A_D − A_{D^{i,z}}|_F ≤ 8CLR^2 / n.    (11)

The proof of the above lemma can be found in Appendix A.

By putting together the results in Lemma 3 and Proposition 2, we have the following theorem for the stability of the Frobenius norm based regularizer.

Theorem 3. The uniform stability for the algorithm in (1) using the Frobenius norm regularizer, denoted by β, is bounded as follows:

    β = κ/n = 32CL^2R^4 / n,    (12)

where κ = 32CL^2R^4.

Combining Theorems 3 and 2, we have the following theorem for the generalization error of the distance metric learning algorithm in (1) using the Frobenius norm regularizer.

Theorem 4. Let D be a collection of n randomly selected examples, and A_D be the distance metric learned by the algorithm in (1) with h(A) = |A|_F^2. With probability 1 − δ, we have the following bound for the true loss I(A_D), where A_D is learned from (1) using the Frobenius norm regularizer:

    I(A_D) − I_D(A_D) ≤ 32CL^2R^4/n + (32CL^2R^4 + 4Ls(d) + 2g_0) √( ln(2/δ) / (2n) ),    (13)

where s(d) = min( √(2dg_0 C), η(d) ).

Remark. The most important feature of the estimation error is that it converges in the order of O(s(d)/√n). By choosing η(d) to have a low dependence on d (i.e., η(d) ∼ d^p with p ≪ 1), the proposed framework for regularized distance metric learning will be robust to high dimensional data. In the extreme case, by setting η(d) to be a constant, the estimation error will be independent from the dimensionality of data.
4 Algorithm
In this section, we discuss an efficient algorithm for solving (1). We assume a hinge loss for g(z), i.e., g(z) = max(0, b − z), where b is the classification margin. To design an online learning algorithm for regularized distance metric learning, we follow the theory of gradient-based online learning [2] by defining the potential function Φ(A) = |A|_F^2 / 2. Algorithm 1 shows the online learning algorithm.
The theorem below shows the regret bound for the online learning algorithm in Figure 1.

Theorem 5. Let the online learning algorithm 1 run with learning rate η > 0 on a sequence (x_t, x_t′), y_t, t = 1, . . . , n. Assume |x|_2 ≤ R for all the training examples. Then, for all distance metrics M ∈ S_+^{d×d}, we have

    L̂_n ≤ (1 / (1 − 8R^4 η / b)) ( L_n(M) + |M|_F^2 / (2η) ),

where

    L_n(M) = Σ_{t=1}^{n} max( 0, b − y_t (1 − |x_t − x_t′|_M^2) ),
    L̂_n = Σ_{t=1}^{n} max( 0, b − y_t (1 − |x_t − x_t′|_{A_{t−1}}^2) ).
Algorithm 1 Online Learning Algorithm for Regularized Distance Metric Learning
1: INPUT: predefined learning rate $\eta$
2: Initialize $A_0 = 0$
3: for $t = 1, \ldots, T$ do
4:   Receive a pair of training examples $\{(x_t^1, y_t^1), (x_t^2, y_t^2)\}$
5:   Compute the class label $y_t$: $y_t = +1$ if $y_t^1 = y_t^2$, and $y_t = -1$ otherwise.
6:   if the training pair $((x_t^1, x_t^2), y_t)$ is classified correctly, i.e., $y_t\bigl(1 - \|x_t^1 - x_t^2\|^2_{A_{t-1}}\bigr) > 0$, then
7:     $A_t = A_{t-1}$
8:   else
9:     $A_t = \pi_{S_+}\bigl(A_{t-1} - \eta y_t (x_t^1 - x_t^2)(x_t^1 - x_t^2)^\top\bigr)$, where $\pi_{S_+}(M)$ projects the matrix $M$ onto the SDP cone.
10:  end if
11: end for
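For concreteness, the following is a minimal Python/NumPy sketch of this loop (our own illustrative rendering, not the authors' Matlab code). The exact PSD projection is implemented by a full eigendecomposition, which is precisely the expensive step that the approximation discussed below avoids.

    import numpy as np

    def project_psd(M):
        # Frobenius-norm projection onto the SDP cone: eigendecompose the
        # symmetrized matrix and clip negative eigenvalues to zero.
        w, V = np.linalg.eigh((M + M.T) / 2.0)
        return (V * np.maximum(w, 0.0)) @ V.T

    def online_metric_learning(pairs, eta=0.1):
        # pairs: iterable of ((x1, y1), (x2, y2)) labeled training examples.
        A = None
        for (x1, y1), (x2, y2) in pairs:
            if A is None:
                A = np.zeros((len(x1), len(x1)))  # A_0 = 0
            y = 1.0 if y1 == y2 else -1.0
            d = np.asarray(x1) - np.asarray(x2)
            # Correctly classified when y_t (1 - |x1 - x2|^2_{A_{t-1}}) > 0.
            if y * (1.0 - d @ A @ d) > 0:
                continue  # A_t = A_{t-1}
            A = project_psd(A - eta * y * np.outer(d, d))
        return A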
The proof of this theorem can be found in Appendix B. Note that the above online learning algorithm requires computing $\pi_{S_+}(M)$, i.e., projecting the matrix $M$ onto the SDP cone, which is expensive for high dimensional data. To address this challenge, first notice that $M' = \pi_{S_+}(M)$ is equivalent to the optimization problem $M' = \arg\min_{M' \succeq 0} \|M' - M\|_F$. We thus approximate $A_t = \pi_{S_+}\bigl(A_{t-1} - \eta y_t (x_t - x'_t)(x_t - x'_t)^\top\bigr)$ with $A_t = A_{t-1} - \eta_t y_t (x_t - x'_t)(x_t - x'_t)^\top$, where $\eta_t$ is computed as follows:
$$\eta_t = \arg\min_{\hat\eta_t} |\hat\eta_t - \eta| \;\; \text{s.t.} \;\; \hat\eta_t \in [0, \eta], \;\; A_{t-1} - \hat\eta_t y_t (x_t - x'_t)(x_t - x'_t)^\top \succeq 0. \tag{14}$$
The following theorem gives the solution to the above optimization problem.

Theorem 6. The optimal solution $\eta_t$ to the problem in (14) is
$$\eta_t = \begin{cases} \eta, & y_t = -1, \\ \min\bigl(\eta,\, [(x_t - x'_t)^\top A_{t-1}^{-1} (x_t - x'_t)]^{-1}\bigr), & y_t = +1. \end{cases}$$
The proof of this theorem can be found in the supplementary materials. Finally, the quantity $(x_t - x'_t)^\top A_{t-1}^{-1} (x_t - x'_t)$ can be computed by solving the following optimization problem:
$$\max_u \;\; 2u^\top(x_t - x'_t) - u^\top A_{t-1} u,$$
whose optimal value can be computed efficiently using the conjugate gradient method [11].
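A sketch of this computation, under the assumption that $A_{t-1}$ is positive definite so the inverse exists (in practice $A_0$ can be initialized to a small multiple of the identity rather than 0); the function names are ours, not from the paper:

    import numpy as np
    from scipy.sparse.linalg import cg

    def quad_form_inv(A, v):
        # v^T A^{-1} v for positive definite A: the maximizer of
        # 2 u^T v - u^T A u is u* = A^{-1} v, with optimal value
        # v^T A^{-1} v, so a single conjugate-gradient solve suffices.
        u, info = cg(A, v)
        if info != 0:
            raise RuntimeError("conjugate gradient did not converge")
        return float(v @ u)

    def step_size(A_prev, x1, x2, y, eta):
        # Theorem 6: eta_t = eta when y = -1; for y = +1, clip eta to the
        # largest step keeping A_{t-1} - eta_t d d^T positive semidefinite.
        if y < 0:
            return eta
        d = x1 - x2
        return min(eta, 1.0 / quad_form_inv(A_prev, d))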
Note that, compared to the online metric learning algorithm in [9], the proposed online learning algorithm is advantageous in that (i) it is computationally more efficient, as it avoids projecting a matrix onto the SDP cone, and (ii) it has a provable regret bound, while [9] only presents a mistake bound for separable datasets.
5 Experiments

We conducted an extensive study to verify both the efficiency and the efficacy of the proposed algorithms for metric learning. For convenience, we refer to the proposed online distance metric learning algorithm as online-reg. To examine the efficacy of the learned distance metric, we employed the k-nearest-neighbor (k-NN) classifier. Our hypothesis is that the better the distance metric, the higher the classification accuracy of k-NN. Based on our experience, we set k = 3 for k-NN in all the experiments.
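As an aside, a learned metric $A$ plugs into a standard k-NN classifier as a Mahalanobis-type distance, since $d(x, x')^2 = (x - x')^\top A (x - x')$. A sketch with scikit-learn (not the toolchain used in the paper, whose experiments are in Matlab):

    from sklearn.neighbors import KNeighborsClassifier

    def knn_with_metric(A, X_train, y_train, X_test, k=3):
        # 'mahalanobis' with VI=A computes sqrt((x - x')^T A (x - x')),
        # which is exactly the learned distance; brute-force search is
        # required for user-supplied metric parameters.
        knn = KNeighborsClassifier(
            n_neighbors=k, algorithm="brute",
            metric="mahalanobis", metric_params={"VI": A},
        )
        knn.fit(X_train, y_train)
        return knn.predict(X_test)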
We compare our algorithm to the following six state-of-the-art distance metric learning algorithms as baselines: (1) the Euclidean distance metric; (2) the Mahalanobis distance metric, computed as the inverse of the covariance matrix of the training samples, i.e., $(\sum_{i=1}^n x_i x_i^\top)^{-1}$; (3) Xing's algorithm proposed in [17]; (4) LMNN, a distance metric learning algorithm based on the large margin nearest neighbor classifier [15]; (5) ITML, information-theoretic metric learning based on [4]; and (6) Relevance Component Analysis (RCA) [10]. We set the maximum number of iterations for Xing's method to 10,000. The number of target neighbors in LMNN and the parameter $\gamma$ in ITML were tuned by cross validation over the range from $10^{-4}$ to $10^4$.
Table 1: Classification error (%) of a k-NN (k = 3) classifier on the nine UCI data sets using seven different metrics (mean ± standard deviation).

Dataset | Euclidean   | Mahala      | Xing        | LMNN       | ITML       | RCA        | Online-reg
1       | 19.5 ± 2.2  | 18.8 ± 2.5  | 29.3 ± 17.2 | 13.8 ± 2.5 | 8.6 ± 1.7  | 17.4 ± 1.5 | 13.2 ± 2.2
2       | 39.9 ± 2.3  | 6.7 ± 0.6   | 40.1 ± 2.6  | 3.6 ± 1.1  | 40.0 ± 2.3 | 3.8 ± 0.4  | 3.7 ± 1.2
3       | 36.0 ± 2.0  | 42.1 ± 4.0  | 43.5 ± 12.5 | 33.1 ± 0.6 | 39.8 ± 3.3 | 41.6 ± 0.7 | 37.3 ± 4.1
4       | 4.0 ± 1.7   | 10.4 ± 2.7  | 3.1 ± 2.0   | 3.9 ± 1.6  | 3.2 ± 1.6  | 2.9 ± 1.5  | 3.2 ± 1.3
5       | 30.6 ± 1.9  | 29.1 ± 2.1  | 30.6 ± 1.9  | 29.6 ± 1.8 | 28.8 ± 2.1 | 28.6 ± 2.3 | 27.7 ± 1.3
6       | 25.4 ± 4.2  | 18.4 ± 3.4  | 23.3 ± 3.4  | 15.2 ± 3.1 | 17.1 ± 4.1 | 13.9 ± 2.2 | 12.9 ± 2.2
7       | 31.9 ± 2.8  | 10.0 ± 2.8  | 24.6 ± 7.5  | 4.5 ± 2.4  | 28.7 ± 3.7 | 1.8 ± 1.5  | 1.8 ± 1.1
8       | 18.9 ± 0.5  | 37.3 ± 0.5  | 16.1 ± 0.6  | 18.4 ± 0.4 | 23.3 ± 1.3 | 30.6 ± 0.7 | 19.8 ± 0.6
9       | 2.0 ± 0.4   | 6.1 ± 0.5   | 12.4 ± 0.8  | 1.6 ± 0.3  | 2.5 ± 0.4  | 2.8 ± 0.4  | 2.9 ± 0.4
Table 2: p-values of the Wilcoxon signed-rank test for the 7 methods on the 9 datasets.

Methods    | Euclidean | Mahala | Xing  | LMNN  | ITML  | RCA   | Online-reg
Euclidean  | 1.000     | 0.734  | 0.641 | 0.004 | 0.496 | 0.301 | 0.129
Mahala     | 0.734     | 1.000  | 0.301 | 0.008 | 0.570 | 0.004 | 0.004
Xing       | 0.641     | 0.301  | 1.000 | 0.027 | 0.359 | 0.074 | 0.027
LMNN       | 0.004     | 0.008  | 0.027 | 1.000 | 0.129 | 0.496 | 0.734
ITML       | 0.496     | 0.570  | 0.359 | 0.129 | 1.000 | 0.820 | 0.164
RCA        | 0.301     | 0.004  | 0.074 | 0.496 | 0.820 | 1.000 | 0.074
Online-reg | 0.129     | 0.004  | 0.027 | 0.734 | 0.164 | 0.074 | 1.000
All the algorithms are implemented and run in Matlab. All the experiments are run on a 2.8 GHz AMD machine with 8 GB of RAM, running Linux.
5.1 Experiment (I): Comparison to State-of-the-art Algorithms

We conducted classification experiments on the following nine datasets from the UCI repository: (1) balance-scale, with 3 classes, 4 features, and 625 instances; (2) breast-cancer, with 2 classes, 10 features, and 683 instances; (3) glass, with 6 classes, 9 features, and 214 instances; (4) iris, with 3 classes, 4 features, and 150 instances; (5) pima, with 2 classes, 8 features, and 768 instances; (6) segmentation, with 7 classes, 19 features, and 210 instances; (7) wine, with 3 classes, 13 features, and 178 instances; (8) waveform, with 3 classes, 21 features, and 5,000 instances; (9) optdigits, with 10 classes, 64 features, and 3,823 instances. For each dataset, we randomly select 50% of the samples for training and use the remaining samples for testing. Table 1 shows the classification errors of all the metric learning methods over the 9 datasets, averaged over 10 runs, together with the standard deviation. We observe that the proposed metric learning algorithm delivers performance comparable to the state-of-the-art methods. In particular, for almost all datasets, the classification accuracy of the proposed algorithm is close to that of LMNN, which yielded the best overall performance among the six baseline algorithms. This is consistent with the results of other studies, which show that LMNN is among the most effective algorithms for distance metric learning.
To further verify whether the proposed method performs statistically better than the baseline methods, we conduct a statistical test using the Wilcoxon signed-rank test [3]. The Wilcoxon signed-rank test is a non-parametric statistical hypothesis test for comparing two related samples. It is known to be safer than the Student's t-test because it does not assume normal distributions. From Table 2, we find that regularized distance metric learning improves the classification accuracy significantly compared to the Mahalanobis distance, Xing's method, and RCA at significance level 0.1. It performs slightly better than ITML and is comparable to LMNN.
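This paired test is available off the shelf. A minimal sketch with SciPy, using the Online-reg and Mahalanobis columns of Table 1; with all nine paired differences having the same sign, the two-sided p-value is $2/2^9 \approx 0.004$, matching the corresponding entry of Table 2:

    from scipy.stats import wilcoxon

    # Per-dataset error rates (%) of two methods on the same 9 datasets,
    # taken from Table 1.
    errs_online_reg = [13.2, 3.7, 37.3, 3.2, 27.7, 12.9, 1.8, 19.8, 2.9]
    errs_mahalanobis = [18.8, 6.7, 42.1, 10.4, 29.1, 18.4, 10.0, 37.3, 6.1]

    # Two-sided test of the null hypothesis that the paired differences
    # are symmetric about zero.
    stat, p_value = wilcoxon(errs_online_reg, errs_mahalanobis)
    print(f"p-value: {p_value:.3f}")  # ~0.004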
[Figure 1: (a) Face recognition accuracy of kNN and (b) running time of the LMNN, ITML, RCA, and online-reg algorithms on the "att-face" dataset with varying image sizes. Panel (a) plots classification accuracy against image resize ratio for the Euclidean, Mahalanobis, LMNN, ITML, RCA, and Online-reg metrics; panel (b) plots running time in seconds against image resize ratio for LMNN, ITML, RCA, and Online-reg.]
5.2 Experiment (II): Results for High Dimensional Data

To evaluate the dependence of the regularized metric learning algorithms on the data dimension, we tested them on the task of face recognition. The AT&T face database¹ is used in our study. It consists of grey-scale images of faces from 40 distinct subjects, with ten pictures for each subject. For every subject, the images were taken at different times, with varying lighting conditions and different facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). The original size of each image is 112 × 92 pixels, with 256 grey levels per pixel.

To examine the sensitivity to data dimensionality, we vary the data dimension (i.e., the size of the images) by compressing the original images to several different sizes with the image aspect ratio preserved. The image compression is achieved by bicubic interpolation (the output pixel value is a weighted average of the pixels in the nearest 4-by-4 neighborhood). For each subject, we randomly split its face images into a training set and a test set with ratio 4 : 6. A distance metric is learned from the collection of training face images and is used by the kNN classifier (k = 3) to predict the subject ID of the test images. We conduct each experiment 10 times and report the classification accuracy averaged over 40 subjects and 10 runs. Figure 1(a) shows the average classification accuracy of the kNN classifier using the different distance metric learning algorithms. The running times of the different metric learning algorithms on the same dataset are shown in Figure 1(b). Note that we exclude Xing's method from this comparison because of its extremely long running time. We observed that with increasing image size (dimension), the regularized distance metric learning algorithm yields stable performance, indicating that it is resilient to high dimensional data. In contrast, for almost all the baseline methods except ITML, performance varied significantly as the size of the input image changed. Although ITML yields stable performance with respect to different image sizes, its high computational cost (Figure 1(b)), arising from solving a Bregman optimization problem in each iteration, makes it unsuitable for high-dimensional data.
6 Conclusion

In this paper, we analyze the generalization error of regularized distance metric learning. We show that, with an appropriate constraint, regularized distance metric learning can be robust to high dimensional data. We also present efficient learning algorithms for solving the related optimization problems. Empirical studies with face recognition and data classification show that the proposed approach is (i) robust and efficient for high dimensional data, and (ii) comparable to the state-of-the-art approaches for distance metric learning. In the future, we plan to investigate different regularizers and their effects on distance metric learning.
¹ http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
ACKNOWLEDGEMENTS

The work was supported in part by the National Science Foundation (IIS-0643494) and the U.S. Army Research Laboratory and the U.S. Army Research Office (W911NF-09-1-0421). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF and ARO.
Appendix A: Proof of Lemma 3

Proof. We introduce the Bregman divergence for the proof of this lemma. Given a convex function of a matrix $\varphi(X)$, the Bregman divergence between two matrices $A$ and $B$ is computed as follows:
$$d_\varphi(A, B) = \varphi(B) - \varphi(A) - \mathrm{tr}\bigl(\nabla\varphi(A)^\top (B - A)\bigr).$$
We define the convex functions $N(X)$ and $V_{\mathcal D}(X)$ as follows:
$$N(X) = \|X\|_F^2, \qquad V_{\mathcal D}(X) = \frac{2}{n(n-1)} \sum_{i<j} V(X, z_i, z_j),$$
and furthermore the convex function $T_{\mathcal D}(X) = N(X) + C V_{\mathcal D}(X)$. We thus have
$$d_N(A_{\mathcal D}, A_{\mathcal D^{i,z}}) + d_N(A_{\mathcal D^{i,z}}, A_{\mathcal D}) \le d_{T_{\mathcal D}}(A_{\mathcal D}, A_{\mathcal D^{i,z}}) + d_{T_{\mathcal D^{i,z}}}(A_{\mathcal D^{i,z}}, A_{\mathcal D})$$
$$= \frac{C}{n(n-1)} \sum_{j \ne i} \bigl[V(A_{\mathcal D^{i,z}}, z_i, z_j) - V(A_{\mathcal D^{i,z}}, z, z_j) + V(A_{\mathcal D}, z, z_j) - V(A_{\mathcal D}, z_i, z_j)\bigr]$$
$$\le \frac{8CLR^2}{n} \|A_{\mathcal D} - A_{\mathcal D^{i,z}}\|_F.$$
The first inequality follows from the fact that both $N(X)$ and $V_{\mathcal D}(X)$ are convex in $X$. The second step holds because the matrices $A_{\mathcal D}$ and $A_{\mathcal D^{i,z}}$ minimize the objective functions $T_{\mathcal D}(X)$ and $T_{\mathcal D^{i,z}}(X)$, respectively, and therefore
$$\bigl\langle A_{\mathcal D^{i,z}} - A_{\mathcal D},\, \nabla T_{\mathcal D}(A_{\mathcal D})\bigr\rangle \ge 0, \qquad \bigl\langle A_{\mathcal D} - A_{\mathcal D^{i,z}},\, \nabla T_{\mathcal D^{i,z}}(A_{\mathcal D^{i,z}})\bigr\rangle \ge 0.$$
Since $d_N(A, B) = \|A - B\|_F^2$, we therefore have
$$\|A_{\mathcal D} - A_{\mathcal D^{i,z}}\|_F^2 \le \frac{8CLR^2}{n} \|A_{\mathcal D} - A_{\mathcal D^{i,z}}\|_F,$$
which leads to the result in the lemma.
Appendix B: Proof of Theorem 5

Proof. We denote $\hat A_t = A_{t-1} - \eta y_t (x_t - x'_t)(x_t - x'_t)^\top$ and $A_t = \pi_{S_+}(\hat A_t)$. Following Theorems 11.1 and 11.4 of [2], we have
$$\widehat{L}_n - L_n(M) \le \frac{1}{\eta} D_\Phi(M, A_0) + \frac{1}{\eta} \sum_{t=1}^n D_{\Phi^*}(A_{t-1}, \hat A_t),$$
where
$$D_{\Phi^*}(A, B) = \frac{1}{2}\|A - B\|_F^2, \qquad \Phi(A) = \Phi^*(A) = \frac{1}{2}\|A\|_F^2.$$
Using the relation $\hat A_t = A_{t-1} - \eta y_t (x_t - x'_t)(x_t - x'_t)^\top$ and $A_0 = 0$, we have
$$\widehat{L}_n - L_n(M) \le \frac{1}{2\eta}\|M\|_F^2 + \frac{\eta}{2} \sum_{t=1}^n I\bigl[y_t(1 - \|x_t - x'_t\|^2_{A_{t-1}}) < 0\bigr]\, \|x_t - x'_t\|_2^4.$$
By assuming $\|x\|_2 \le R$ for any training example, we have $\|x_t - x'_t\|_2^4 \le 16R^4$. Since
$$\sum_{t=1}^n I\bigl[y_t(1 - \|x_t - x'_t\|^2_{A_{t-1}}) < 0\bigr]\, \|x_t - x'_t\|^4 \le \frac{16R^4}{b} \sum_{t=1}^n \max\bigl(0,\, b - y_t(1 - \|x_t - x'_t\|^2_{A_{t-1}})\bigr) = \frac{16R^4}{b}\, \widehat{L}_n,$$
we thus have the result in the theorem.
References
[1] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2:499–526, March 2002.
[2] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[3] G. W. Corder and D. I. Foreman. Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach. Wiley, New Jersey, 2009.
[4] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In Proceedings of the 24th International Conference on Machine Learning, 2007.
[5] A. Globerson and S. Roweis. Metric learning by collapsing classes. In Advances in Neural Information Processing Systems, 2005.
[6] S. C. H. Hoi, W. Liu, and S.-F. Chang. Semi-supervised distance metric learning for collaborative image retrieval. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
[7] S. C. H. Hoi, W. Liu, M. R. Lyu, and W.-Y. Ma. Learning distance metrics with contextual constraints for image retrieval. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2006.
[8] L. K. Saul and S. T. Roweis. Think globally, fit locally: Unsupervised learning of low dimensional manifolds. Journal of Machine Learning Research, 4, 2003.
[9] S. Shalev-Shwartz, Y. Singer, and A. Y. Ng. Online and batch learning of pseudo-metrics. In Proceedings of the Twenty-First International Conference on Machine Learning, pages 94–101, 2004.
[10] N. Shental, T. Hertz, D. Weinshall, and M. Pavel. Adjustment learning and relevant component analysis. In Proceedings of the Seventh European Conference on Computer Vision, volume 4, pages 776–792, 2002.
[11] J. R. Shewchuk. An introduction to the conjugate gradient method without the agonizing pain. Technical report, Carnegie Mellon University, Pittsburgh, PA, USA, 1994.
[12] L. Si, R. Jin, S. C. H. Hoi, and M. R. Lyu. Collaborative image retrieval via regularized metric learning. ACM Multimedia Systems Journal (MMSJ), 2006.
[13] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290, 2000.
[14] I. W. Tsang, P. M. Cheung, and J. T. Kwok. Kernel relevance component analysis for distance metric learning. In IEEE International Joint Conference on Neural Networks (IJCNN), 2005.
[15] K. Weinberger, J. Blitzer, and L. Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems, 2005.
[16] L. Wu, S. C. H. Hoi, R. Jin, J. Zhu, and N. Yu. Distance metric learning from uncertain side information with application to automated photo tagging. In Proceedings of ACM International Conference on Multimedia (MM), 2009.
[17] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In Advances in Neural Information Processing Systems, 2002.
[18] L. Yang and R. Jin. Distance metric learning: A comprehensive survey. Technical report, Michigan State University, 2006.
[19] L. Yang, R. Jin, R. Sukthankar, and Y. Liu. An efficient algorithm for local distance metric learning. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI), 2006.
Robust Principal Component Analysis:
Exact Recovery of Corrupted Low-Rank Matrices by
Convex Optimization
John Wright*, Yigang Peng, Yi Ma
Visual Computing Group
Microsoft Research Asia
{jowrig,v-yipe,mayi}@microsoft.com

Arvind Ganesh, Shankar Rao
Coordinated Science Laboratory
University of Illinois at Urbana-Champaign
{abalasu2,srrao}@uiuc.edu
Abstract

Principal component analysis is a fundamental operation in computational data analysis, with myriad applications ranging from web search to bioinformatics to computer vision and image analysis. However, its performance and applicability in real scenarios are limited by a lack of robustness to outlying or corrupted observations. This paper considers the idealized "robust principal component analysis" problem of recovering a low rank matrix A from corrupted observations D = A + E. Here, the corrupted entries E are unknown and the errors can be arbitrarily large (modeling grossly corrupted observations common in visual and bioinformatic data), but are assumed to be sparse. We prove that most matrices A can be efficiently and exactly recovered from most error sign-and-support patterns by solving a simple convex program, for which we give a fast and provably convergent algorithm. Our result holds even when the rank of A grows nearly proportionally (up to a logarithmic factor) to the dimensionality of the observation space and the number of errors E grows in proportion to the total number of entries in the matrix. A by-product of our analysis is the first proportional growth results for the related problem of completing a low-rank matrix from a small fraction of its entries. Simulations and real-data examples corroborate the theoretical results, and suggest potential applications in computer vision.
1 Introduction

The problem of finding and exploiting low-dimensional structure in high-dimensional data is taking on increasing importance in image, audio and video processing, web search, and bioinformatics, where datasets now routinely lie in thousand- or even million-dimensional observation spaces. The curse of dimensionality is in full play here: meaningful inference with a limited number of observations requires some assumption that the data have low intrinsic complexity, e.g., that they are low-rank [1], sparse in some basis [2], or lie on some low-dimensional manifold [3, 4]. Perhaps the simplest useful assumption is that the observations all lie near some low-dimensional subspace. In other words, if we stack all the observations as column vectors of a matrix $M \in \mathbb{R}^{m \times n}$, the matrix should be (approximately) low rank. Principal component analysis (PCA) [1, 5] seeks the best (in an $\ell^2$-sense) such low-rank representation of the given data matrix. It enjoys a number of optimality properties when the data are only mildly corrupted by small noise, and can be stably and efficiently computed via the singular value decomposition.

* For more information, see http://perception.csl.illinois.edu/matrix-rank/home.html. This work was partially supported by NSF IIS 08-49292, NSF ECCS 07-01676, and ONR N00014-09-1-0230.
One major shortcoming of classical PCA is its brittleness with respect to grossly corrupted or outlying observations [5]. Gross errors are ubiquitous in modern applications in imaging and bioinformatics, where some measurements may be arbitrarily corrupted (e.g., due to occlusion or sensor failure) or simply irrelevant to the structure we are trying to identify. A number of natural approaches to robustifying PCA have been explored in the literature. These approaches include influence function techniques [6, 7], multivariate trimming [8], alternating minimization [9], and random sampling techniques [10]. Unfortunately, none of these existing approaches yields a polynomial-time algorithm with strong performance guarantees.¹

In this paper, we consider an idealization of the robust PCA problem, in which the goal is to recover a low-rank matrix A from highly corrupted measurements D = A + E. The errors E can be arbitrary in magnitude, but are assumed to be sparsely supported, affecting only a fraction of the entries of D. This should be contrasted with the classical setting in which the matrix A is perturbed by small (but densely supported) noise. In that setting, classical PCA, computed via the singular value decomposition, remains optimal if the noise is Gaussian. Here, on the other hand, even a small fraction of large errors can cause arbitrary corruption in PCA's estimate of the low rank structure, A.
Our approach to robust PCA is motivated by two recent, and tightly related, lines of research. The first set of results concerns the robust solution of over-determined linear systems of equations in the presence of arbitrary, but sparse, errors. These results imply that for generic systems of equations, it is possible to correct a constant fraction of arbitrary errors in polynomial time [11]. This is achieved by employing the $\ell^1$-norm as a convex surrogate for the highly non-convex $\ell^0$-norm. A parallel (and still emerging) line of work concerns the problem of computing low-rank matrix solutions to underdetermined linear equations [12, 13]. One of the most striking results concerns the exact completion of low-rank matrices from only a small fraction of their entries [13, 14, 15, 16].² There, a similar convex relaxation is employed, replacing the highly non-convex matrix rank with the nuclear norm (or sum of singular values).
The robust PCA problem outlined above combines aspects of both of these lines of work: we wish to recover a low-rank matrix from large but sparse errors. We will show that combining the solutions to the above problems (nuclear norm minimization for low-rank recovery and $\ell^1$-minimization for error correction) yields a polynomial-time algorithm for robust PCA that provably succeeds under broad conditions:

With high probability, solving a simple convex program perfectly recovers a generic matrix $A \in \mathbb{R}^{m \times m}$ of rank as large as $C\frac{m}{\log(m)}$, from errors affecting up to a constant fraction of the $m^2$ entries.

This conclusion holds with high probability as the dimensionality m increases, implying that in high-dimensional observation spaces, sparse and low-rank structures can be efficiently and exactly separated. This behavior is an example of the so-called blessing of dimensionality [17].
However, this result would remain a theoretical curiosity without scalable algorithms for solving the associated convex program. To this end, we discuss how a near-solution to this convex program can be obtained relatively efficiently via proximal gradient [18, 19] and iterative thresholding techniques, similar to those proposed for matrix completion in [20, 21]. For large matrices, these algorithms are significantly faster and more scalable than general-purpose convex program solvers.

Our analysis also implies an extension of existing results for the low-rank matrix completion problem, including the first results applicable to the proportional growth setting, where the rank of the matrix grows as a constant (non-vanishing) fraction of the dimensionality:

With overwhelming probability, solving a simple convex program perfectly recovers a generic matrix $A \in \mathbb{R}^{m \times m}$ of rank as large as $Cm$, from observations consisting of only a fraction $\rho m^2$ ($\rho < 1$) of its entries.

¹ Random sampling approaches guarantee near-optimal estimates, but have complexity exponential in the rank of the matrix $A_0$. Trimming algorithms have comparatively lower computational complexity, but guarantee only locally optimal solutions.
² A major difference between robust PCA and low-rank matrix completion is that here we do not know which entries are corrupted, whereas in matrix completion the support of the missing entries is given.
Organization of this paper. This paper is organized as follows. Section 2 formulates the robust
principal component analysis problem more precisely and states the main results of this paper, placing these results in the context of existing work. The proof (available in [22]) relies on standard ideas
from linear algebra and concentration of measure, but is beyond the scope of this paper. Section 3
extends existing proximal gradient techniques to give a simple, scalable algorithm for solving the
robust PCA problem. In Section 4, we perform simulations and experiments corroborating the theoretical results and suggesting their applicability to real-world problems in computer vision. Finally,
in Section 5, we outline several promising directions for future work.
2 Problem Setting and Main Results

We assume that the observed data matrix $D \in \mathbb{R}^{m \times n}$ was generated by corrupting some of the entries of a low-rank matrix $A \in \mathbb{R}^{m \times n}$. The corruption can be represented as an additive error $E \in \mathbb{R}^{m \times n}$, so that $D = A + E$. Because the error affects only a portion of the entries of D, E is a sparse matrix. The idealized (or noise-free) robust PCA problem can then be formulated as follows:

Problem 2.1 (Robust PCA). Given $D = A + E$, where A and E are unknown, but A is known to be low rank and E is known to be sparse, recover A.

This problem formulation immediately suggests a conceptual solution: seek the lowest-rank A that could have generated the data, subject to the constraint that the errors are sparse: $\|E\|_0 \le k$. The Lagrangian reformulation of this optimization problem is
$$\min_{A,E} \;\mathrm{rank}(A) + \gamma \|E\|_0 \quad \text{subj} \quad A + E = D. \tag{1}$$
If we could solve this problem for appropriate $\gamma$, we might hope to exactly recover the pair $(A_0, E_0)$ that generated the data D. Unfortunately, (1) is a highly non-convex optimization problem, and no efficient solution is known.³ We can obtain a tractable optimization problem by relaxing (1), replacing the $\ell^0$-norm with the $\ell^1$-norm, and the rank with the nuclear norm $\|A\|_* = \sum_i \sigma_i(A)$, yielding the following convex surrogate:
$$\min_{A,E} \;\|A\|_* + \lambda \|E\|_1 \quad \text{subj} \quad A + E = D. \tag{2}$$
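For small matrices, (2) can be handed directly to a generic convex solver. A minimal sketch with CVXPY (our own illustration, not the solver used in this paper, and impractical much beyond 70 × 70 matrices, as Section 3 notes):

    import cvxpy as cp
    import numpy as np

    def robust_pca_cvx(D, lam=None):
        m, n = D.shape
        if lam is None:
            # For square matrices this is the m^{-1/2} scaling used below.
            lam = 1.0 / np.sqrt(max(m, n))
        A = cp.Variable((m, n))
        E = cp.Variable((m, n))
        # Nuclear norm plus weighted entrywise l1 norm, subject to A + E = D.
        objective = cp.Minimize(cp.normNuc(A) + lam * cp.sum(cp.abs(E)))
        problem = cp.Problem(objective, [A + E == D])
        problem.solve()
        return A.value, E.value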
This relaxation can be motivated by observing that $\|A\|_* + \lambda\|E\|_1$ is the convex envelope of $\mathrm{rank}(A) + \lambda\|E\|_0$ over the set of $(A, E)$ such that $\max(\|A\|_{2,2}, \|E\|_{1,\infty}) \le 1$. Moreover, recent advances in our understanding of the nuclear norm heuristic for low-rank solutions to matrix equations [12, 13] and the $\ell^1$ heuristic for sparse solutions to underdetermined linear systems [11, 24] suggest that there might be circumstances under which solving the tractable problem (2) perfectly recovers the low-rank matrix $A_0$. The main result of this paper will be to show that this is indeed true under surprisingly broad conditions. A sketch of the result is as follows: for "almost all" pairs $(A_0, E_0)$ consisting of a low-rank matrix $A_0$ and a sparse matrix $E_0$,
$$(A_0, E_0) = \arg\min_{A,E} \|A\|_* + \lambda\|E\|_1 \quad \text{subj} \quad A + E = A_0 + E_0,$$
and the minimizer is uniquely defined. That is, under natural probabilistic models for low-rank and sparse matrices, almost all observations $D = A_0 + E_0$ generated as the sum of a low-rank matrix $A_0$ and a sparse matrix $E_0$ can be efficiently and exactly decomposed into their generating parts by solving a convex program.⁴
Of course, this is only possible with an appropriate choice of the regularizing parameter $\lambda > 0$. From the optimality conditions for the convex program (2), it is not difficult to show that for matrices $D \in \mathbb{R}^{m \times m}$, the correct scaling is $\lambda = O(m^{-1/2})$. Throughout this paper, unless otherwise stated, we will fix $\lambda = m^{-1/2}$. For simplicity, all of our results in this paper will be stated for square matrices $D \in \mathbb{R}^{m \times m}$, although there is little difficulty in extending them to non-square matrices.

³ In a sense, this problem subsumes both the low rank matrix completion problem and the $\ell^0$-minimization problem, both of which are NP-hard and hard to approximate [23].
⁴ Notice that this is not an "equivalence" result for (1) and (2): rather than asserting that the solutions of these two problems are equal with high probability, we directly prove that the convex program correctly decomposes $D = A_0 + E_0$ into $(A_0, E_0)$. A natural conjecture, however, is that under the conditions of our main result, $(A_0, E_0)$ is also the solution to (1) for some choice of $\gamma$.
It should be clear that not all matrices $A_0$ can be successfully recovered by solving the convex program (2). Consider, e.g., the rank-1 case where $U = [e_i]$ and $V = [e_j]$. Without additional prior knowledge, the low-rank matrix $A = USV^*$ cannot be recovered from even a single gross error. We therefore restrict our attention to matrices $A_0$ whose row and column spaces are not aligned with the standard basis. This can be done probabilistically, by asserting that the marginal distributions of U and V are uniform on the Stiefel manifold $W^m_r$:

Definition 2.2 (Random orthogonal model [13]). We consider a matrix $A_0$ to be distributed according to the random orthogonal model of rank r if its left and right singular vectors are independent, uniformly distributed $m \times r$ matrices with orthonormal columns.⁵ In this model, the nonzero singular values of $A_0$ can be arbitrary.

⁵ I.e., distributed according to the Haar measure on the Stiefel manifold $W^m_r$.
Our model for errors is similarly natural: each entry of the matrix is independently corrupted with some probability $\rho_s$, and the signs of the corruptions are independent Rademacher random variables.

Definition 2.3 (Bernoulli error signs and support). We consider an error matrix $E_0$ to be drawn from the Bernoulli sign and support model with parameter $\rho_s$ if the entries of $\mathrm{sign}(E_0)$ are independently distributed, each taking on value 0 with probability $1 - \rho_s$, and $\pm 1$ with probability $\rho_s/2$ each. In this model, the magnitudes of the nonzero entries in $E_0$ can be arbitrary.
Our main result is the following (see [22] for a proof):

Theorem 2.4 (Robust recovery from non-vanishing error fractions). For any $p > 0$, there exist constants ($C_0^* > 0$, $\rho_s^* > 0$, $m_0$) with the following property: if $m > m_0$, $(A_0, E_0) \in \mathbb{R}^{m \times m} \times \mathbb{R}^{m \times m}$ with the singular spaces of $A_0 \in \mathbb{R}^{m \times m}$ distributed according to the random orthogonal model of rank
$$r \le C_0^* \frac{m}{\log(m)} \tag{3}$$
and the signs and support of $E_0 \in \mathbb{R}^{m \times m}$ distributed according to the Bernoulli sign-and-support model with error probability $\le \rho_s^*$, then with probability at least $1 - Cm^{-p}$,
$$(A_0, E_0) = \arg\min_{A,E} \|A\|_* + \frac{1}{\sqrt{m}}\|E\|_1 \quad \text{subj} \quad A + E = A_0 + E_0, \tag{4}$$
and the minimizer is uniquely defined.
In other words, matrices $A_0$ whose singular spaces are distributed according to the random orthogonal model can, with probability approaching one, be efficiently recovered from almost all corruption sign and support patterns, without prior knowledge of the pattern of corruption.

Our line of analysis also implies strong results for the matrix completion problem studied in [13, 15, 14, 16]. We again refer the interested reader to [22] for a proof of the following result:

Theorem 2.5 (Matrix completion in proportional growth). There exist numerical constants $m_0$, $\rho_r^*$, $\rho_s^*$, $C$, all $> 0$, with the following property: if $m > m_0$ and $A_0 \in \mathbb{R}^{m \times m}$ is distributed according to the random orthogonal model of rank
$$r \le \rho_r^* m, \tag{5}$$
and $\Omega \subseteq [m] \times [m]$ is an independently chosen subset of $[m] \times [m]$ in which the inclusion of each pair $(i, j)$ is an independent Bernoulli($1 - \rho_s$) random variable with $\rho_s \le \rho_s^*$, then with probability at least $1 - \exp(-Cm)$,
$$A_0 = \arg\min_A \|A\|_* \quad \text{subj} \quad A(i, j) = A_0(i, j) \;\; \forall (i, j) \in \Omega, \tag{6}$$
and the minimizer is uniquely defined.
Relationship to existing work. Contemporaneous results due to [25] show that for $A_0$ distributed according to the random orthogonal model, and $E_0$ with Bernoulli support, correct recovery occurs with high probability provided
$$\|E_0\|_0 \le C\, m^{1.5} \log(m)^{-1} \max(r, \log m)^{-1/2}. \tag{7}$$
This is an interesting result, especially since it makes no assumption on the signs of the errors. However, even for constant rank r, it guarantees correction of only a vanishing fraction, $o(m^{1.5})$, of the $m^2$ errors. In contrast, our main result, Theorem 2.4, states that even if r grows proportionally
to $m/\log(m)$, non-vanishing fractions of errors are corrected with high probability. Both analyses start from the optimality condition for the convex program (2). The key technical component of this improved result is a probabilistic analysis of an iterative refinement technique for producing a dual vector that certifies optimality of the pair $(A_0, E_0)$. This approach extends techniques used in [11, 26], with additional care required to handle an operator norm constraint arising from the presence of the nuclear norm in (2). For further details we refer the interested reader to [22].

Finally, while Theorem 2.5 is not the main focus of this paper, it is interesting in light of results by [15]. That work proves that in the probabilistic model considered here, a generic $m \times m$ rank-r matrix can be efficiently and exactly completed from a subset of only
$$C\, m\, r \log^8(m) \tag{8}$$
entries. For $r > \frac{m}{\mathrm{polylog}(m)}$, this bound exceeds the number $m^2$ of possible observations. A similar result for spectral methods [14] gives exact completion from $O(m \log(m))$ measurements when $r = O(1)$. In contrast, our Theorem 2.5 implies that for certain scenarios with r as large as $\rho_r m$, the matrix can be completed from a subset of $(1 - \rho_s)m^2$ entries. For matrices of large rank, this is a significant extension of [15]. However, our result does not supersede (8) for smaller ranks.
3 Scalable Optimization for Robust PCA

There are a number of possible approaches to solving the robust PCA semidefinite program (2). For small problem sizes, interior point methods offer superior accuracy and convergence rates. However, off-the-shelf interior point solvers become impractical for data matrices larger than about 70 × 70, due to the $O(m^6)$ complexity of solving for the step direction. For the experiments in this paper we use an alternative first-order method based on the proximal gradient approach of [18],⁶ which we briefly introduce here. For further discussion of this approach, as well as alternatives based on duality, please see [27]. This algorithm solves a slightly relaxed version of (2), in which the equality constraint is replaced with a penalty term:
$$\min_{A,E} \;\mu\|A\|_* + \mu\lambda\|E\|_1 + \tfrac{1}{2}\|D - A - E\|_F^2. \tag{9}$$
Here, $\mu$ is a small constant; as $\mu \searrow 0$, the solutions to (9) approach the solution set of (2).
The approach of [18] minimizes functions of this type by forming separable quadratic approximations to the data fidelity term $\|D - A - E\|_F^2$ at a special set of points $(\tilde A_k, \tilde E_k)$ that are conspicuously chosen to obtain a convergence rate of $O(k^{-2})$. The solutions to these subproblems,
$$A_{k+1} = \arg\min_A \;\mu\|A\|_* + \Bigl\|A - \tilde A_k + \tfrac{1}{4}\nabla_A \|D - A - E\|_F^2 \Big|_{\tilde A_k, \tilde E_k}\Bigr\|_F^2, \tag{10}$$
$$E_{k+1} = \arg\min_E \;\mu\lambda\|E\|_1 + \Bigl\|E - \tilde E_k + \tfrac{1}{4}\nabla_E \|D - A - E\|_F^2 \Big|_{\tilde A_k, \tilde E_k}\Bigr\|_F^2, \tag{11}$$
can be efficiently computed via the soft thresholding operator (for E) and the singular value thresholding operator (for A, see [20]).
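In NumPy terms, these two proximal operators can be sketched as follows (an illustrative rendering of the standard operators, not the authors' code):

    import numpy as np

    def soft_threshold(X, tau):
        # Entrywise shrinkage: sign(X) * max(|X| - tau, 0).
        return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

    def svd_threshold(X, tau):
        # Singular value thresholding: shrink the singular values of X by tau.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt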
We terminate the iteration when the subgradient
$$\Bigl(\tilde A_k - A_{k+1} + (E_{k+1} - \tilde E_k),\;\; \tilde E_k - E_{k+1} + (A_{k+1} - \tilde A_k)\Bigr) \in \partial\Bigl(\mu\|A\|_* + \mu\lambda\|E\|_1 + \tfrac{1}{2}\|D - A - E\|_F^2\Bigr)\Big|_{A_{k+1}, E_{k+1}}$$
has sufficiently small Frobenius norm.⁷ In practice, convergence speed is dramatically improved by employing a continuation strategy in which $\mu$ starts relatively large and then decreases geometrically at each iteration until reaching a lower bound, $\bar\mu$ (as in [21]).

The entire procedure is summarized as Algorithm 1 below. We encourage the interested reader to consult [18] for a more detailed explanation of the choice of the proximal points $(\tilde A_k, \tilde E_k)$, as well as a convergence proof ([18], Theorem 4.1). As we will see in the next section, in practice the total number of iterations is often as small as 200. Since the dominant cost of each iteration is computing the singular value decomposition, this means that it is often possible to obtain a provably robust PCA with only a constant factor more computational resources than required for conventional PCA.

⁶ That work is similar in spirit to the work of [19], and has also been applied to matrix completion in [21].
⁷ More precisely, as suggested in [21], we terminate when the norm of this subgradient is less than $2\max(1, \|(A_{k+1}, E_{k+1})\|_F)\,\bar\epsilon$. In our experiments, we set $\bar\epsilon = 10^{-7}$.
Algorithm 1: Robust PCA via Proximal Gradient with Continuation
1: Input: Observation matrix $D \in \mathbb{R}^{m \times n}$, weight $\lambda$.
2: $A_0, A_{-1} \leftarrow 0$; $E_0, E_{-1} \leftarrow 0$; $t_0, t_{-1} \leftarrow 1$; $\mu_0 \leftarrow .99\|D\|_{2,2}$; $\bar\mu \leftarrow 10^{-5}\mu_0$.
3: while not converged do
4:   $\tilde A_k \leftarrow A_k + \frac{t_{k-1}-1}{t_k}(A_k - A_{k-1})$, $\tilde E_k \leftarrow E_k + \frac{t_{k-1}-1}{t_k}(E_k - E_{k-1})$.
5:   $Y_k^A \leftarrow \tilde A_k - \frac{1}{2}\bigl(\tilde A_k + \tilde E_k - D\bigr)$.
6:   $(U, S, V) \leftarrow \mathrm{svd}(Y_k^A)$, $A_{k+1} \leftarrow U\bigl[S - \frac{\mu}{2}I\bigr]_+ V^\top$.
7:   $Y_k^E \leftarrow \tilde E_k - \frac{1}{2}\bigl(\tilde A_k + \tilde E_k - D\bigr)$.
8:   $E_{k+1} \leftarrow \mathrm{sign}[Y_k^E] \circ \bigl[|Y_k^E| - \frac{\lambda\mu}{2}\mathbf{1}\mathbf{1}^\top\bigr]_+$.
9:   $t_{k+1} \leftarrow \frac{1 + \sqrt{1 + 4t_k^2}}{2}$, $\mu \leftarrow \max(.9\mu, \bar\mu)$.
10: end while
11: Output: A, E.
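For concreteness, a minimal NumPy rendering of Algorithm 1 follows, reusing the soft_threshold and svd_threshold sketches above. It is our own reading of the pseudocode, with a fixed iteration cap in place of the subgradient test, not the authors' released code:

    import numpy as np

    def robust_pca(D, lam=None, max_iter=500):
        m, n = D.shape
        if lam is None:
            lam = 1.0 / np.sqrt(m)  # lambda = m^{-1/2}, as in the paper
        A = A_prev = np.zeros((m, n))
        E = E_prev = np.zeros((m, n))
        t = t_prev = 1.0
        mu = 0.99 * np.linalg.norm(D, 2)  # spectral norm of D
        mu_bar = 1e-5 * mu
        for _ in range(max_iter):
            # Accelerated ("momentum") points.
            A_tilde = A + ((t_prev - 1.0) / t) * (A - A_prev)
            E_tilde = E + ((t_prev - 1.0) / t) * (E - E_prev)
            G = 0.5 * (A_tilde + E_tilde - D)  # shared gradient step
            A_prev, E_prev = A, E
            A = svd_threshold(A_tilde - G, mu / 2.0)
            E = soft_threshold(E_tilde - G, lam * mu / 2.0)
            t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            mu = max(0.9 * mu, mu_bar)
        return A, E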
4 Simulations and Experiments

In this section, we first perform simulations corroborating our theoretical results and clarifying their implications. We then sketch two computer vision applications involving the recovery of intrinsically low-dimensional data from gross corruption: background estimation from video and face subspace estimation under varying illumination.⁸

⁸ Here, we use these intuitive examples and data to illustrate how our algorithm can be used as a simple, general tool to effectively separate low-dimensional and sparse structures occurring in real visual data. Appropriately harnessing additional structure (e.g., the spatial coherence of the error [28]) may yield even more effective algorithms.

Simulation: proportional growth. We first demonstrate the exactness of the convex programming heuristic, as well as the efficacy of Algorithm 1, on random matrix examples of increasing dimension. We generate $A_0$ as a product of two independent $m \times r$ matrices whose elements are i.i.d. $\mathcal{N}(0, 1)$ random variables. We generate $E_0$ as a sparse matrix whose support is chosen uniformly at random, and whose non-zero entries are independent and uniformly distributed in the range $[-500, 500]$. We apply the proposed algorithm to the matrix $D = A_0 + E_0$ to recover $\hat A$ and $\hat E$. The results are presented in Table 1. For these experiments, we choose $\lambda = m^{-1/2}$. We observe that the proposed algorithm is successful in recovering $A_0$ even when 10% of its entries are corrupted.
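A sketch of this data-generation recipe (our own rendering of the description above; the function name is ours):

    import numpy as np

    def make_problem(m, r, k, seed=0):
        """Random rank-r A_0 plus k gross errors, as in the simulations."""
        rng = np.random.default_rng(seed)
        # A_0: product of two independent m x r i.i.d. N(0, 1) matrices.
        L = rng.standard_normal((m, r))
        R = rng.standard_normal((m, r))
        A0 = L @ R.T
        # E_0: support uniform at random, entries uniform in [-500, 500].
        E0 = np.zeros((m, m))
        idx = rng.choice(m * m, size=k, replace=False)
        E0.flat[idx] = rng.uniform(-500.0, 500.0, size=k)
        return A0, E0, A0 + E0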
Table 1: Proportional growth. Here the rank of the matrix grows in proportion (5%) to the dimensionality m, and the number of corrupted measurements grows in proportion to the number of entries $m^2$: 5% in the top half and 10% in the bottom half, respectively. The times reported are for a Matlab implementation run on a 2.8 GHz MacBook Pro.

m   | rank(A_0) | ||E_0||_0 | ||Â−A_0||_F/||A_0||_F | rank(Â) | ||Ê||_0 | #iterations | time (s)
100 | 5  | 500    | 3.0 × 10⁻⁴ | 5  | 506    | 104 | 1.6
200 | 10 | 2,000  | 2.1 × 10⁻⁴ | 10 | 2,012  | 104 | 7.9
400 | 20 | 8,000  | 1.4 × 10⁻⁴ | 20 | 8,030  | 104 | 64.8
800 | 40 | 32,000 | 9.9 × 10⁻⁵ | 40 | 32,062 | 104 | 531.6
100 | 5  | 1,000  | 3.1 × 10⁻⁴ | 5  | 1,033  | 108 | 1.6
200 | 10 | 4,000  | 2.3 × 10⁻⁴ | 10 | 4,042  | 107 | 8.0
400 | 20 | 16,000 | 1.6 × 10⁻⁴ | 20 | 16,110 | 107 | 66.7
800 | 40 | 64,000 | 1.2 × 10⁻⁴ | 40 | 64,241 | 106 | 542.8
Simulation: phase transition w.r.t. rank and error sparsity. We next examine how the rank of A and the proportion of errors in E affect the performance of our algorithm. We fix $m = 200$, and vary $\rho_r \doteq \frac{\mathrm{rank}(A_0)}{m}$ and the error probability $\rho_s$ between 0 and 1. For each $(\rho_r, \rho_s)$ pair, we generate 10 pairs $(A_0, E_0)$ as in the above experiment. We deem $(A_0, E_0)$ successfully recovered if the recovered $\hat A$ satisfies $\frac{\|\hat A - A_0\|_F}{\|A_0\|_F} < 0.01$. Figure 1 (left) plots the fraction of correct recoveries. White denotes perfect recovery in all experiments, and black denotes failure for all experiments. We observe that there is a relatively sharp phase transition between success and failure of the algorithm, roughly along the line $\rho_r + \rho_s = 0.35$. To verify this behavior, we repeat the experiment, but only vary $\rho_r$ and $\rho_s$ between 0 and 0.4 with finer steps. These results, seen in Figure 1 (right), show that the phase transition remains fairly sharp even at higher resolution.
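A sketch of the sweep behind this plot, reusing make_problem and robust_pca from the sketches above (shown for a single $(\rho_r, \rho_s)$ grid point):

    import numpy as np

    def recovery_rate(m=200, trials=10, rho_r=0.05, rho_s=0.05):
        """Fraction of trials with relative recovery error below 1%."""
        r = max(1, int(rho_r * m))
        k = int(rho_s * m * m)
        hits = 0
        for t in range(trials):
            A0, E0, D = make_problem(m, r, k, seed=t)
            A_hat, _ = robust_pca(D)
            rel = np.linalg.norm(A_hat - A0, "fro") / np.linalg.norm(A0, "fro")
            if rel < 0.01:
                hits += 1
        return hits / trials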
[Figure 1: Phase transition w.r.t. rank and error sparsity. Here, $\rho_r = \mathrm{rank}(A)/m$, $\rho_s = \|E\|_0/m^2$. Left: $(\rho_r, \rho_s) \in [0, 1]^2$. Right: $(\rho_r, \rho_s) \in [0, 0.4]^2$.]
Experiment: background modeling from video. Background modeling or subtraction from video sequences is a popular approach to detecting activity in the scene, and finds application in video surveillance from static cameras. Background estimation is complicated by the presence of foreground objects such as people, as well as variability in the background itself, for example due to varying illumination. In many cases, however, it is reasonable to assume that these background variations are low-rank, while the foreground activity is spatially localized, and therefore sparse. If the individual frames are stacked as columns of a matrix D, this matrix can be expressed as the sum of a low-rank background matrix and a sparse error matrix representing the activity in the scene. We illustrate this idea using two examples from [29] (see Figure 2). In Figure 2(a)-(c), the video sequence consists of 200 frames of a scene in an airport. There is no significant change in illumination in the video, but a lot of activity in the foreground. We observe that our algorithm is very effective in separating the background from the activity. In Figure 2(d)-(f), we have 550 frames from a scene in a lobby. There is little activity in the video, but the illumination changes drastically towards the end of the sequence. We see that our algorithm is once again able to recover the background, irrespective of the illumination change.
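In code, the reduction of this task to robust PCA is a thin wrapper around the robust_pca sketch above (the frame array and its shape here are hypothetical):

    import numpy as np

    def background_foreground(frames):
        """frames: array of shape (T, h, w) holding T grayscale frames."""
        T, h, w = frames.shape
        D = frames.reshape(T, h * w).T   # each column is one vectorized frame
        A, E = robust_pca(D)             # low-rank background + sparse activity
        background = A.T.reshape(T, h, w)
        activity = E.T.reshape(T, h, w)
        return background, activity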
Experiment: removing shadows and specularities from face images. Face recognition is another domain in computer vision where low-dimensional linear models have received a great deal of attention, mostly due to the work of [30]. The key observation is that, under certain idealized circumstances, images of the same face under varying illumination lie near an approximately nine-dimensional linear subspace known as the harmonic plane. However, since faces are neither perfectly convex nor Lambertian, face images taken under directional illumination often suffer from self-shadowing, specularities, or saturations in brightness.

Given a matrix D whose columns represent well-aligned training images of a person's face under various illumination conditions, our Robust PCA algorithm offers a principled way of removing such spatially localized artifacts. Figure 3 illustrates the results of our algorithm on images from subsets 1-3 of the Extended Yale B database [31]. The proposed algorithm removes the specularities in the eyes and the shadows around the nose region. This technique is potentially useful for pre-processing training images in face recognition systems to remove such deviations from the low-dimensional linear model.
[Figure 2: Background modeling. (a) Video sequence of a scene in an airport. The size of each frame is 72 × 88 pixels, and a total of 200 frames were used. (b) Static background recovered by our algorithm. (c) Sparse error recovered by our algorithm, representing activity in the frame. (d) Video sequence of a lobby scene with changing illumination. The size of each frame is 64 × 80 pixels, and a total of 550 frames were used. (e) Static background recovered by our algorithm. (f) Sparse error. The background is correctly recovered even when the illumination in the room changes drastically in the frame on the last row.]

[Figure 3: Removing shadows and specularities from face images. (a) Cropped and aligned images of a person's face under different illuminations from the Extended Yale B database. The size of each image is 96 × 84 pixels; a total of 31 different illuminations were used for each person. (b) Images recovered by our algorithm. (c) The sparse errors returned by our algorithm correspond to specularities in the eyes, shadows around the nose region, or brightness saturations on the face.]

5 Discussion and Future Work

Our results give strong theoretical and empirical evidence for the efficacy of using convex programming to recover low-rank matrices from corrupted observations. However, there remain many fascinating open questions in this area. From a mathematical perspective, it would be interesting to
know if it is possible to remove the logarithmic factor in our main result. The phase transition experiment in Section 4 suggests that convex programming actually succeeds even for $\mathrm{rank}(A_0) < \rho_r m$ and $\|E_0\|_0 < \rho_s m^2$, where $\rho_r$ and $\rho_s$ are sufficiently small positive constants. Another interesting and important question is whether the recovery is stable in the presence of small dense noise. That is, suppose we observe $D = A_0 + E_0 + Z$, where Z is a noise matrix of small $\ell^2$-norm (e.g., Gaussian noise). A natural approach is to now minimize $\|A\|_* + \lambda\|E\|_1$, subject to a relaxed constraint $\|D - A - E\|_F \le \varepsilon$. For matrix completion, [16] showed that a similar relaxation gives stable recovery: the error in the solution is proportional to the noise level. Finally, while this paper has sketched several examples on visual data, we believe that this powerful new tool pertains to a wide range of high-dimensional data, for example in bioinformatics and web search.
References
[1] C. Eckart and G. Young. The approximation of one matrix by another of lower rank. Psychometrika, 1(3):211–218, 1936.
[2] S. Chen, D. Donoho, and M. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129–159, 2001.
[3] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[4] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[5] I. Jolliffe. Principal Component Analysis. Springer-Verlag, New York, 1986.
[6] P. Huber. Robust Statistics. Wiley, New York, 1981.
[7] F. De La Torre and M. Black. A framework for robust subspace learning. IJCV, 54(1-3):117–142, 2003.
[8] R. Gnanadesikan and J. Kettenring. Robust estimates, residuals, and outlier detection with multiresponse data. Biometrics, 28(1):81–124, 1972.
[9] Q. Ke and T. Kanade. Robust L1 norm factorization in the presence of outliers and missing data by alternative convex programming. In CVPR, 2005.
[10] M. Fischler and R. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6):381–385, 1981.
[11] E. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Info. Theory, 51(12):4203–4215, 2005.
[12] B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. SIAM Review, submitted for publication.
[13] E. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, to appear.
[14] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. Preprint, 2009.
[15] E. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, submitted for publication.
[16] E. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, to appear.
[17] D. Donoho. High-dimensional data analysis: The curses and blessings of dimensionality. AMS Math Challenges Lecture, 2000.
[18] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, (1):183–202, 2009.
[19] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[20] J. Cai, E. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. Preprint, http://arxiv.org/abs/0810.3286, 2008.
[21] K.-C. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Preprint, http://math.nus.edu.sg/~matys/apg.pdf, 2009.
[22] J. Wright, A. Ganesh, S. Rao, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. Journal of the ACM, submitted for publication.
[23] E. Amaldi and V. Kann. On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems. Theoretical Computer Science, 209(2):237–260, 1998.
[24] D. Donoho. For most large underdetermined systems of linear equations the minimal l1-norm solution is also the sparsest solution. Communications on Pure and Applied Mathematics, 59(6):797–829, 2006.
[25] V. Chandrasekaran, S. Sanghavi, P. Parrilo, and A. Willsky. Sparse and low-rank matrix decompositions. In IFAC Symposium on System Identification, 2009.
[26] J. Wright and Y. Ma. Dense error correction via l1-minimization. IEEE Transactions on Information Theory, to appear.
[27] Z. Lin, A. Ganesh, J. Wright, M. Chen, L. Wu, and Y. Ma. Fast convex optimization algorithms for exact recovery of a corrupted low-rank matrix. SIAM Journal on Optimization, submitted for publication.
[28] V. Cevher, M. F. Duarte, C. Hegde, and R. G. Baraniuk. Sparse signal recovery using Markov random fields. In NIPS, 2008.
[29] L. Li, W. Huang, I. Gu, and Q. Tian. Statistical modeling of complex backgrounds for foreground object detection. IEEE Transactions on Image Processing, 13(11), 2004.
[30] R. Basri and D. Jacobs. Lambertian reflection and linear subspaces. IEEE Trans. PAMI, 25(3):218–233, 2003.
[31] A. Georghiades, P. Belhumeur, and D. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. PAMI, 23(6):643–660, 2001.
9
2,985 | 3,705 | An Online Algorithm for
Large Scale Image Similarity Learning
Gal Chechik
Google
Mountain View, CA
[email protected]
Varun Sharma
Google
Bengalooru, Karnataka, India
[email protected]
Uri Shalit
ICNC, The Hebrew University
Israel
[email protected]
Samy Bengio
Google
Mountain View, CA
[email protected]
Abstract
Learning a measure of similarity between pairs of objects is a fundamental problem in machine learning. It stands in the core of classification methods like kernel
machines, and is particularly useful for applications like searching for images
that are similar to a given image or finding videos that are relevant to a given
video. In these tasks, users look for objects that are not only visually similar but
also semantically related to a given object. Unfortunately, current approaches for
learning similarity do not scale to large datasets, especially when imposing metric
constraints on the learned similarity. We describe OASIS, a method for learning
pairwise similarity that is fast and scales linearly with the number of objects and
the number of non-zero features. Scalability is achieved through online learning
of a bilinear model over sparse representations using a large margin criterion and
an efficient hinge loss cost. OASIS is accurate at a wide range of scales: on a standard benchmark with thousands of images, it is more precise than state-of-the-art
methods, and faster by orders of magnitude. On 2.7 million images collected
from the web, OASIS can be trained within 3 days on a single CPU. The nonmetric similarities learned by OASIS can be transformed into metric similarities,
achieving higher precisions than similarities that are learned as metrics in the first
place. This suggests an approach for learning a metric from data that is larger by
orders of magnitude than was handled before.
1 Introduction
Learning a pairwise similarity measure from data is a fundamental task in machine learning. Pair
distances underlie classification methods like nearest neighbors and kernel machines, and similarity
learning has important applications for "query-by-example" in information retrieval. For instance,
a user may wish to find images that are similar to (but not identical copies of) an image she has;
a user watching an online video may wish to find additional videos about the same subject. In all
these cases, we are interested in finding a semantically-related sample, based on the visual content
of an image, in an enormous search space. Learning a relatedness function from examples could be
a useful tool for such tasks.
A large number of previous studies of learning similarities have focused on metric learning, like
in the case of a positive semidefinite matrix that defines a Mahalanobis distance [19]. However,
similarity learning algorithms are often evaluated in a context of ranking [16, 5]. When the amount
of training data available is very small, adding positivity constraints for enforcing metric properties
is useful for reducing overfitting and improving generalization. However, when sufficient data is
available, as in many modern applications, adding positive semidefiniteness constraints is very
costly, and its benefit in terms of generalization may be limited. With this view, we take here an
approach that avoids imposing positivity or symmetry constraints on the learned similarity measure.
Some similarity learning algorithms assume that the available training data contains real-valued pairwise similarities or distances. Here we focus on a weaker supervision signal: the relative similarity
of different pairs [4]. This signal is also easier to obtain; here we extract similarity information from
pairs of images that share a common label or are retrieved in response to a common text query in an
image search engine.
The current paper presents an approach for learning semantic similarity that scales up to two orders
of magnitude larger than current published approaches. Three components are combined to make
this approach fast and scalable: First, our approach uses an unconstrained bilinear similarity. Given
two images p1 and p2, we measure similarity through a bilinear form p1^T W p2, where the matrix
W is not required to be positive, or even symmetric. Second we use a sparse representation of
the images, which allows to compute similarities very fast. Finally, the training algorithm that
we developed, OASIS, Online Algorithm for Scalable Image Similarity learning, is an online dual
approach based on the passive-aggressive algorithm [2]. It minimizes a large margin target function
based on the hinge loss, and converges to high quality similarity measures after being presented with
a small fraction of the training pairs.
We find that OASIS is both fast and accurate at a wide range of scales: for a standard benchmark with
thousands of images, it achieves results better than or comparable to existing state-of-the-art methods,
with computation times that are shorter by an order of magnitude. For web-scale datasets, OASIS
can be trained on more than two million images within three days on a single CPU. On this large
scale dataset, human evaluations of OASIS learned similarity show that 35% of the ten nearest
neighbors of a given image are semantically relevant to that image.
2 Learning Relative Similarity
We consider the problem of learning a pairwise similarity function S, given supervision on the relative similarity between two pairs of images. The algorithm is designed to scale well with the number
of samples and the number of features, by using fast online updates and a sparse representation.
Formally, we are given a set of images P, where each image is represented as a vector p ∈ R^d. We assume that we have access to an oracle that, given a query image p_i ∈ P, can locate two other images, p_i^+ ∈ P and p_i^- ∈ P, such that p_i^+ is more relevant to p_i than p_i^- is. Formally, we could write that relevance(p_i, p_i^+) > relevance(p_i, p_i^-). However, unlike methods that assume that a numerical value of the similarity is available, relevance(p_i, p_j) ∈ R, we use this weaker form of supervision, and only assume that some pairs of images can be ranked by their relevance to a query image p_i. The relevance measure could reflect that the relevant image p_i^+ belongs to the same class of images as the query image, or reflect any other semantic property of the images.
Our goal is to learn a similarity function S_W(p_i, p_j), parameterized by W, that assigns higher similarity scores to the pairs of more relevant images (with a safety margin):

    S(p_i, p_i^+) > S(p_i, p_i^-) + 1 ,   for all p_i, p_i^+, p_i^- ∈ P .   (1)
In this paper, we consider a parametric similarity function that has a bilinear form,

    S_W(p_i, p_j) ≡ p_i^T W p_j ,   (2)

with W ∈ R^{d×d}. Importantly, if the image vectors p_i ∈ R^d are sparse, namely, the number of non-zero entries k_i ≡ ‖p_i‖_0 is small, k_i ≪ d, then the value of the score defined in Eq. (2) can be computed very efficiently even when d is large. Specifically, S_W can be computed with complexity of O(k_i k_j) regardless of the dimensionality d. To learn a scoring function that obeys the constraints in Eq. (1), we define a global loss L_W that accumulates hinge losses over all possible triplets in the training set, L_W ≡ Σ_{(p_i, p_i^+, p_i^-) ∈ P^3} l_W(p_i, p_i^+, p_i^-), with the loss for a single triplet being

    l_W(p_i, p_i^+, p_i^-) ≡ max( 0 , 1 − S_W(p_i, p_i^+) + S_W(p_i, p_i^-) ) .
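To make the definitions above concrete, here is a minimal NumPy sketch of the bilinear score of Eq. (2) and the single-triplet hinge loss (our own illustrative code, using dense vectors; it ignores the sparse-representation speedups discussed above):

    import numpy as np

    def score(W, p1, p2):
        # Bilinear similarity S_W(p1, p2) = p1^T W p2, Eq. (2).
        return p1 @ W @ p2

    def triplet_hinge_loss(W, p, p_pos, p_neg):
        # l_W(p, p+, p-) = max(0, 1 - S_W(p, p+) + S_W(p, p-)).
        return max(0.0, 1.0 - score(W, p, p_pos) + score(W, p, p_neg))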
To minimize the global loss L_W, we propose an algorithm that is based on the Passive-Aggressive family of algorithms [2]. First, W is initialized to the identity matrix, W^0 = I_{d×d}. Then, the algorithm iteratively draws a random triplet (p_i, p_i^+, p_i^-), and solves the following convex problem with a soft margin:

    W^i = argmin_W (1/2) ‖W − W^{i−1}‖²_Fro + Cξ   s.t.   l_W(p_i, p_i^+, p_i^-) ≤ ξ   and   ξ ≥ 0 ,   (3)

where ‖·‖_Fro is the Frobenius norm (point-wise L2 norm). At the ith iteration, W^i is updated to optimize a trade-off between staying close to the previous parameters W^{i−1} and minimizing the loss on the current triplet l_W(p_i, p_i^+, p_i^-). The aggressiveness parameter C controls this trade-off.
To solve the problem in Eq. (3) we follow the derivation in [2]. When l_W(p_i, p_i^+, p_i^-) = 0, it is clear that W^i = W^{i−1} satisfies Eq. (3) directly. Otherwise, we define the Lagrangian

    L(W, τ, ξ, λ) = (1/2) ‖W − W^{i−1}‖²_Fro + Cξ + τ (1 − ξ − p_i^T W (p_i^+ − p_i^-)) − λξ ,   (4)

where τ ≥ 0 and λ ≥ 0 are the Lagrange multipliers. The optimal solution is obtained when the gradient vanishes, ∂L(W, τ, ξ, λ)/∂W = W − W^{i−1} − τ V_i = 0, where V_i is the gradient matrix at the current step, V_i = ∂l_W/∂W = [p_i^1 (p_i^+ − p_i^-), . . . , p_i^d (p_i^+ − p_i^-)]^T. When image vectors are sparse, the gradient V_i is also sparse, hence the update step costs only O(‖p_i‖_0 · (‖p_i^+‖_0 + ‖p_i^-‖_0)), where the L0 norm ‖x‖_0 is the number of nonzero values in x. Differentiating the Lagrangian with respect to ξ we obtain ∂L(W, τ, ξ, λ)/∂ξ = C − τ − λ = 0 which, knowing that λ ≥ 0, means that τ ≤ C. Plugging back into the Lagrangian in Eq. (4), we obtain L(τ) = −(1/2) τ² ‖V_i‖² + τ (1 − p_i^T W^{i−1} (p_i^+ − p_i^-)). Finally, taking the derivative of this second Lagrangian with respect to τ and using τ ≤ C, we obtain

    W^i = W^{i−1} + τ V_i ,   where   τ = min( C , l_{W^{i−1}}(p_i, p_i^+, p_i^-) / ‖V_i‖² ) .   (5)

The optimal update for the new W therefore has the form of a gradient descent step with a step size τ that can be computed exactly. Applying this algorithm to classification tasks was shown to yield a small cumulative online loss, and selecting the best W^i during training using a hold-out validation set was shown to achieve good generalization [2].
It should be emphasized that OASIS is not guaranteed to learn a parameter matrix that is positive, or even symmetric. We study variants of OASIS that enforce symmetry or positivity in Sec. 4.3.
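The update of Eq. (5) is simple enough to state in a few lines of code. The following is a sketch of a single OASIS iteration (dense NumPy for clarity; the default value of C and the way triplets are supplied are assumptions of this example, and a practical implementation would exploit the sparsity of the gradient as described above):

    import numpy as np

    def oasis_update(W, p, p_pos, p_neg, C=0.1):
        # One passive-aggressive step on the triplet (p, p+, p-).
        loss = max(0.0, 1.0 - p @ W @ (p_pos - p_neg))
        if loss == 0.0:
            return W                        # constraint already satisfied
        V = np.outer(p, p_pos - p_neg)      # gradient matrix V_i
        tau = min(C, loss / (V * V).sum())  # step size, Eq. (5)
        return W + tau * V

    # Training starts from the identity, W = np.eye(d), and then repeatedly
    # draws a random triplet and calls oasis_update.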
3 Related Work
Learning similarity using relative relevance has been intensively studied, and a few recent approaches aim to address learning at large scale. For small-scale data, there are two main groups of
similarity learning approaches. The first approach, learning Mahalanobis distances, can be viewed
as learning a linear projection of the data into another space (often of lower dimensionality), where a
Euclidean distance is defined among pairs of objects. Such approaches include Fisher?s Linear Discriminant Analysis (LDA), relevant component analysis (RCA) [1], supervised global metric learning [18], large margin nearest neighbor (LMNN) [16], and metric learning by collapsing classes [5]
(MCML). Other constraints like sparseness are sometimes induced over the learned metric [14]. See
also a review in [19] for more details.
The second family of approaches, learning kernels, is used to improve performance of kernel based
classifiers. Learning a full kernel matrix in a non parametric way is prohibitive except for very
small data sets. As an alternative, several studies suggested learning a weighted sum of pre-defined
kernels [11] where the weights are learned from data. In some applications this was shown to be
inferior to uniform weighting of the kernels [12]. The work in [4] further learns a weighting over
local distance functions for every image in the training set. Non linear image similarity learning was
also studied in the context of dimensionality reduction, as in [8].
Finally, Jain et al [9] (based on Davis et al [3]) aim to learn metrics in an online setting. This work
is one of the closest work with respect to OASIS: it learns online a linear model of a [dis-]similarity
function between documents (images); the main difference is that Jain et al. [9] try to learn a true distance, imposing positive definiteness constraints, which makes the algorithm more complex and more constrained. We argue in this paper that in the large scale regime, imposing these constraints throughout could be detrimental.

[Table 1 appears here: OASIS successful cases from the web dataset — each row shows a query image and the top 5 relevant images retrieved by OASIS. The relevant text queries for each image are shown beneath the image (not used in training).]
Learning a semantic similarity function between images was also studied in [13]. There, semantic
similarity is learned by representing each image by the posterior probability distribution over a
predefined set of semantic tags, and then computing the distance between two images as the distance
between the two underlying posterior distributions. The representation size of each image therefore
grows with the number of semantic classes.
4 Experiments
We tested OASIS on two datasets spanning a wide regime of scales. First, we tested its scalability on
2.7 million images collected from the web. Then, to quantitatively compare the precision of OASIS
with other, small-scale metric-learning methods, we tested OASIS using Caltech-256, a standard
machine vision benchmark.
Image representation. We use a sparse representation based on bags of visual words [6]. These
features were systematically tested and found to outperform other features in related tasks, but the
details of the visual representation is outside the focus of this paper. Broadly speaking, features are
extracted by dividing each image into overlapping square blocks, representing each block by edge
and color histograms, and finding the nearest block in a predefined set (dictionary) of d = 10,000 vectors of such features. An image is thus represented as the number of times each dictionary visual word was present in it, yielding vectors in R^d with an average of 70 non-zero values.
Evaluation protocol. We evaluated the performance of all algorithms using precision-at-top-k, a
standard ranking precision measure based on nearest neighbors. For each query image in the test set,
all other test images were ranked according to their similarity to the query image, and the number of
same-class images among the top k images (the k nearest neighbors) is computed, and then averaged
across test images. We also calculated the mean average precision (mAP), a measure that is widely
used in the information retrieval community.
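For reference, a sketch of the precision-at-top-k computation as described above (variable names and the brute-force ranking are ours; the query itself is excluded from its own ranking):

    import numpy as np

    def precision_at_top_k(W, images, labels, k=10):
        # For each query image, rank all other test images by S_W and
        # report the fraction of same-class images among the top k,
        # averaged over queries.
        precisions = []
        for i, (q, y) in enumerate(zip(images, labels)):
            scores = images @ (W.T @ q)   # s_j = q^T W p_j for every image p_j
            scores[i] = -np.inf           # exclude the query itself
            top_k = np.argsort(-scores)[:k]
            precisions.append(np.mean(labels[top_k] == y))
        return float(np.mean(precisions))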
4.1 Web-Scale Experiment
We first tested OASIS on a set of 2.7 million images scraped from the Google image search engine.
We collected a set of ?150K anonymized text queries, and for each of these queries, we had access
to a set of relevant images. To compute an image-image relevance measure, we first obtained measures of relevance between images and text queries. This was achieved by collecting anonymized
clicks over images collected from the set of text queries. We used these query-image click counts C(query, image) to compute the (unnormalized) probability that two images are co-queried as Relevance(image, image) = C^T C. The relevance matrix was then thresholded to keep only the top 1 percent of values. We trained OASIS on a training set of 2.3 million images, and tested performance on
percent values. We trained OASIS on a training set of 2.3 million images, and tested performance on
0.4 million images. The number of training iterations (each corresponding to sampling one triplet)
was selected using a second validation set of around 20000 images, over which the performance
saturated after 160 million iterations. Overall, training took a total of ?4000 minutes on a single
CPU of a standard modern machine.
Table 1 shows the top five images as ranked by OASIS on two examples of query-images in the test
set. In these examples, OASIS captures similarity that goes beyond visual appearance: most top
ranked images are about the same concept as the query image, even though that concept was never
provided in a textual form, and is inferred in the viewers mind (?dog?, ?snow?). This shows that
learning similarity across co-queried images can indeed capture the semantics of queries even if the
queries are not explicitly used during training.
To obtain a quantitative evaluation of the ranking obtained by OASIS we created an evaluation
benchmark, by asking human evaluators to mark if a set of candidate images were semantically
relevant to a set of 25 popular image queries. For each query image, evaluators were presented with
the top-10 images ranked by OASIS, mixed with 10 random images. Given the relevance ranking
from 30 evaluators, we computed the precision of each OASIS rank as the fraction of people that
marked each image as relevant to the query image. On average across all queries and evaluators,
OASIS rankings yielded precision of ~40% at the top 10 ranked images.
As an estimate of an "upper bound" on the difficulty
obtained by human evaluators: For every evaluator, we used the rankings of all other evaluators
as ground truth, to compute his precision. As with the ranks of OASIS, we computed the fraction
of evaluators that marked an image as relevant, and repeated this separately for every query and
human evaluator, providing a measure of ?coherence? per query. Fig. 1(a) shows the mean precision
obtained by OASIS and human evaluators for every query in our data. For some queries OASIS
achieves precision that is very close to that of the mean human evaluator. In many cases OASIS
achieves precision that is as good or better than some evaluators.
[Figure 1 appears here: (a) per-query precision, comparing OASIS precision with human-evaluator precision; (b) runtime (min) vs. number of images (log scale), showing OASIS on web data from 9 sec at 600 images up to 2 days at 2.3M images, fast LMNN on MNIST (10 categories, ~190 days extrapolated), and a projected second-order polynomial extrapolation.]
Figure 1: (a) Precision of OASIS and human evaluators, per query, using rankings of all (remaining)
human evaluators as a ground truth. (b) Comparison of the runtime of OASIS and fast-LMNN[17],
over a wide range of scales. LMNN results (on MNIST data) are faster than OASIS results on
subsets of the web data. However LMNN scales quadratically with the number of samples, hence is
three times slower on 60K images, and may be infeasible for handling 2.3 million images.
We further studied how the runtime of OASIS scales with the size of the training set. Figure 1(b)
shows that the runtime of OASIS, as found by early stopping on a separate validation set, grows
linearly with the train set size. We compare this to the fastest result we found in the literature, based
on a fast implementation of LMNN [17]. The LMNN algorithm scales quadratically with the number
of objects, although their experiments with MNIST data show that the active set of constraints grows
linearly. This could be because MNIST has 10 classes only.
[Figure 2 appears here: three panels — (a) 10 classes, (b) 20 classes, (c) 50 classes — each plotting precision vs. number of neighbors for OASIS, MCML, LEGO, LMNN, the Euclidean metric, and random ranking.]
Figure 2: Comparison of the performance of OASIS, LMNN, MCML, LEGO and the Euclidean
metric in feature space. Each curve shows the precision at top k as a function of k neighbors. The
results are averaged across 5 train/test partitions (40 training images, 25 test images per class), error
bars are standard error of the means (s.e.m.), black dashed line denotes chance performance.
4.2 Caltech256 Dataset
To compare OASIS with small-scale methods we used the Caltech256 dataset [7], containing images collected from Google image search and from PicSearch.com. Images were assigned to 257
categories and evaluated by humans in order to ensure image quality and relevance. After we have
pre-processed the images, and filtered images that were too small, we were left with 29461 images
in 256 categories. To allow comparisons with methods that were not optimized for sparse representation, we also reduced the block vocabulary size d from 10000 to 1000.
We compared OASIS with the following metric learning methods.
(1) Euclidean - The standard Euclidean distance in feature space (equivalent to using the identity
matrix W = I_{d×d}). (2) MCML [5] - Learning a Mahalanobis distance such that same-class samples are mapped to the same point, formulated as a convex problem. (3) LMNN [16] - Learning a Mahalanobis distance aiming to have the k-nearest neighbors of a given sample belong to the
same class while separating different-class samples by a large margin. As a preprocessing phase,
images were projected to a basis of the principal components (PCA) of the data, with no dimensionality reduction. (4) LEGO [9] - Online learning of a Mahalanobis distance using a Log-Det
regularization per instance loss, that is guaranteed to yield a positive semidefinite matrix. We used a
variant of LEGO that, like OASIS, learns from relative distances.1
We tested all methods on subsets of classes taken from the Caltech256 repository. For OASIS,
images from the same class were treated as similar. Each subset was built such that it included
semantically diverse categories, controlled for classification difficulty. We tested sets containing 10,
20 and 50 classes, each spanning the range of difficulties.
We used two levels of 5-fold cross validation, one to train the model, and a second to select
hyperparameters of each method (early stopping time for OASIS; the μ parameter for LMNN (μ ∈ {0.125, 0.25, 0.5}); and the regularization parameter η for LEGO (η ∈ {0.02, 0.08, 0.32})).
Results reported below were obtained by selecting the best value of the hyper parameter and then
training again on the full training set (40 images per class).
Figure 2 compares the precision obtained with OASIS, with the four competing approaches. OASIS
achieved consistently superior results throughout the full range of k (number of neighbors) tested,
and on all four sets studied. LMNN performance on the training set was often high, suggesting that
it overfits the training set, as was also observed sometimes by [16].
Table 2 shows the total CPU time in minutes for training all algorithms compared, and for four
subsets of classes at sizes 10, 20, 50 and 249. Data is not given when runtime was longer than 5
days or performance was worse than the Euclidean baseline. For the purpose of a fair comparison,
we tested two implementations of OASIS: the first was fully implemented in Matlab; the second had
the core loop of the algorithm implemented in C and called from Matlab. All other methods used
1. We have also experimented with the methods of [18], which we found to be too slow, and with RCA [1],
whose precision was lower than other methods. These results are not included in the evaluations below.
Table 2: Runtime (minutes) on a standard CPU of all compared methods

    num      OASIS       OASIS        MCML         LEGO        LMNN        fastLMNN
    classes  (Matlab)    (Matlab+C)   (Matlab+C)   (Matlab)    (Matlab+C)  (Matlab+C)
    10       42 ± 15     0.12 ± .03   1835 ± 210   143 ± 44    337 ± 169   247 ± 209
    20       45 ± 8      0.15 ± .02   7425 ± 106   533 ± 49    631 ± 40    365 ± 62
    50       25 ± 2      1.60 ± .04   -            711 ± 28    960 ± 80    2109 ± 67
    249      485 ± 113   1.13 ± .15   -            -           -           -
code supplied by the authors implemented in Matlab, with core parts implemented in C. Due to
compatibility issues, fast-LMNN was run on a different machine, and the given times are rescaled to
the same time scale as all other algorithms. LEGO is fully implemented in Matlab. All other code
was compiled (mex) to C. The C implementation of OASIS is significantly faster, since Matlab does
not use the potential speedup gained by sparse images.
OASIS is significantly faster, with a runtime that is shorter by orders of magnitude than MCML
even on small sets, and about one order of magnitude faster than LMNN. The run time of OASIS
and LEGO was measured until the point of early stopping. OASIS memory requirements grow
quadratically with the size of the dictionary. For a large dictionary of 10K words, the parameter matrix takes 100M floats (10,000² entries), or 0.4 gigabytes of memory at 4 bytes per float.
[Figure 3 appears here: (a) precision vs. number of neighbors for OASIS, Proj-Oasis, Online-Proj-Oasis, Dissim-Oasis, the Euclidean metric, and random ranking; (b) mean average precision vs. learning steps (50K-250K) for PSD projection every 5,000 steps, every 50,000 steps, and after training completes.]
Figure 3: (a) Comparing symmetric variants of OASIS on the 20-class subset, similar results obtained with other sets. (b) mAP along training for three PSD projection schemes.
4.3 Symmetry and positivity
The similarity matrix W learned by OASIS is not guaranteed to be positive or even symmetric.
Some applications, like ranking images by semantic relevance to a given image query, are known to be non-symmetric when based on human judgement [15]. However, in some applications symmetry or positivity constraints reflect prior knowledge that may help in avoiding overfitting. We now discuss variants of OASIS that learn symmetric or positive matrices.
4.3.1 Symmetric similarities
A simple approach to enforce symmetry is to project the OASIS model W onto the set of symmetric matrices, W' = sym(W) = (1/2)(W^T + W). Projection can be done after each update (denoted Online-Proj-Oasis) or after learning is completed (Proj-Oasis). Alternatively, the asymmetric score function S_W(p_i, p_j) in l_W can be replaced with a symmetric score

    S'_W(p_i, p_j) ≡ −(p_i − p_j)^T W (p_i − p_j) ,   (6)
and used to derive an OASIS-like algorithm (which we name Dissim-Oasis). The optimal update for this loss has a symmetric gradient V'_i = (p_i − p_i^+)(p_i − p_i^+)^T − (p_i − p_i^-)(p_i − p_i^-)^T. Therefore, if W^0 is initialized with a symmetric matrix (e.g., the identity) all W^i are guaranteed to remain symmetric. Dissim-Oasis is closely related to LMNN [16]. This can be seen by casting the batch objective of LMNN into an online setup, which has the form err(W) = −μ · S'_W(p_i, p_i^+) + (1 − μ) · l_W(p_i, p_i^+, p_i^-). This online version of LMNN becomes equivalent to Dissim-Oasis for μ = 0.
Figure 3(a) compares the precision of the different symmetric variants with the original OASIS. All symmetric variants performed slightly worse than, or equal to, the original asymmetric OASIS. The precision of Proj-Oasis was equivalent to that of OASIS, most likely because asymmetric OASIS actually converged to an almost-symmetric model (as measured by a symmetry index σ(W) = ‖sym(W)‖² / ‖W‖² = 0.94).
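A sketch of the two symmetric variants discussed in this subsection (our own illustrative code): the projection sym(W) used by the Proj variants, and the symmetric triplet gradient used by Dissim-Oasis.

    import numpy as np

    def sym(W):
        # Projection onto the symmetric matrices: sym(W) = (W^T + W) / 2.
        return 0.5 * (W.T + W)

    def dissim_gradient(p, p_pos, p_neg):
        # Symmetric gradient V'_i of the Dissim-Oasis triplet loss:
        # (p - p+)(p - p+)^T - (p - p-)(p - p-)^T.
        d_pos = p - p_pos
        d_neg = p - p_neg
        return np.outer(d_pos, d_pos) - np.outer(d_neg, d_neg)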
4.3.2 Positive similarity
Most similarity learning approaches focus on learning metrics. In the context of OASIS, when W is positive semidefinite (PSD), it defines a Mahalanobis distance over the images. The matrix square root of W, A^T A = W, can then be used to project the data into a new space in which the Euclidean distance is equivalent to the W-distance in the original space.
We experimented with positive variants of OASIS, where we repeatedly projected the learned model onto the set of PSD matrices, once every t iterations. Projection is done by taking the eigendecomposition W = V · D · V^T, where V is the eigenvector matrix and D is the diagonal matrix of eigenvalues, limited to positive eigenvalues. Figure 3(b) traces precision on the test set throughout learning for various values of t.
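A sketch of this projection step (our own illustrative code; we symmetrize first, since the learned W need not be symmetric, and then clip negative eigenvalues):

    import numpy as np

    def project_psd(W):
        # Eigendecomposition S = V D V^T, keeping only non-negative eigenvalues.
        S = 0.5 * (W + W.T)                 # symmetrize before eigh
        eigvals, V = np.linalg.eigh(S)
        eigvals = np.clip(eigvals, 0.0, None)
        return (V * eigvals) @ V.T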
The effect of positive projections is complex. First, continuously projecting at every step helps
to reduce overfitting, as can be observed by the slower decline of the blue curve (upper smooth
curve) compared to the orange curve (lowest curve). However, when projection is performed after
many steps (instead of continuously), the performance of the projected model actually outperforms
the continuous-projection model (upper jittery curve). The reason for this effect is likely to be
that estimating the positive sub-space is very noisy when only based on a few samples. Indeed,
accurate estimation of the negative subspace is known to be a hard problem, in that the estimation error for eigenvectors whose eigenvalues are near zero is relatively large. We found that this effect was so strong,
that the optimal projection strategy is to avoid projection throughout learning completely. Instead,
projecting into PSD after learning (namely, after a model was chosen using early stopping) provided
the best performance in our experiments.
An interesting alternative for obtaining a PSD matrix was explored by [10, 9]. Using a LogDet divergence between two matrices, D_ld(X, Y) = tr(XY^{-1}) − log(det(XY^{-1})), ensures that, given an
initial PSD matrix, all subsequent matrices will be PSD as well. It will be interesting to test the
effect of using LogDet regularization in the OASIS setup.
5 Discussion
We have presented OASIS, a scalable algorithm for learning image similarity that captures both
semantic and visual aspects of image similarity. Three key factors contribute to the scalability of
OASIS. First, using a large margin online approach allows training to converge even after seeing
a small fraction of potential pairs. Second, the objective function of OASIS does not require the
similarity measure to be a metric during training, although it appears to converge to a near-symmetric
solution, whose positive projection is a good metric. Finally, we use a sparse representation of low
level features which allows to compute scores very efficiently.
OASIS learns a class-independent model: it is not aware of which queries or categories were shared
by two similar images. As such, it is more limited in its descriptive power and it is likely that class-dependent similarity models could improve precision. On the other hand, class-independent models
could generalize to handle classes that were not observed during training, as in transfer learning.
Large scale similarity learning, applied to images from a large variety of classes, could therefore be
a useful tool to address real-world problems with a large number of classes.
This paper focused on the training part of metric learning. To use the learned metric for ranking, an
efficient procedure for scoring a large set of images is needed. Techniques based on locality-sensitive
hashing could be used to speed up evaluation, but this is outside the scope of this paper.
References
[1] A. Bar-Hillel, T. Hertz, N. Shental, and D. Weinshall. Learning distance functions using equivalence relations. In Proc. of the 20th International Conference on Machine Learning (ICML), pages 11–18, 2003.
[2] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. JMLR, 7:551–585, 2006.
[3] J. V. Davis, B. Kulis, P. Jain, S. Sra, and I. S. Dhillon. Information-theoretic metric learning. In ICML 24, pages 209–216, 2007.
[4] A. Frome, Y. Singer, F. Sha, and J. Malik. Learning globally-consistent local distance functions for shape-based image retrieval and classification. In International Conference on Computer Vision, pages 1–8, 2007.
[5] A. Globerson and S. Roweis. Metric learning by collapsing classes. NIPS, 18:451, 2006.
[6] D. Grangier and S. Bengio. A discriminative kernel-based model to rank images from text queries. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 30(8):1371–1384, 2008.
[7] G. Griffin, A. Holub, and P. Perona. Caltech-256 object category dataset. Technical Report 7694, CalTech, 2007.
[8] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, 2006.
[9] P. Jain, B. Kulis, I. Dhillon, and K. Grauman. Online metric learning and fast similarity search. In NIPS, volume 22, 2008.
[10] B. Kulis, M. A. Sustik, and I. S. Dhillon. Low-rank kernel learning with Bregman matrix divergences. Journal of Machine Learning Research, 10:341–376, 2009.
[11] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. JMLR, 5:27–72, 2004.
[12] W. S. Noble. Multi-kernel learning for biology. In NIPS Workshop on Kernel Learning, 2008.
[13] N. Rasiwasia and N. Vasconcelos. A study of query by semantic example. In 3rd International Workshop on Semantic Learning and Applications in Multimedia, 2008.
[14] R. Rosales and G. Fung. Learning sparse metrics via linear programming. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 367–373. ACM, New York, NY, USA, 2006.
[15] A. Tversky. Features of similarity. Psychological Review, 84(4):327–352, 1977.
[16] K. Weinberger, J. Blitzer, and L. Saul. Distance metric learning for large margin nearest neighbor classification. NIPS, 18:1473, 2006.
[17] K. Q. Weinberger and L. K. Saul. Fast solvers and efficient implementations for distance metric learning. In ICML 25, pages 1160–1167, 2008.
[18] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In S. Becker, S. Thrun, and K. Obermayer, editors, NIPS 15, pages 521–528, Cambridge, MA, 2003. MIT Press.
[19] L. Yang. Distance metric learning: A comprehensive survey. Technical report, Michigan State Univ., 2006.
2,986 | 3,706 | Heterogeneous Multitask Learning with Joint
Sparsity Constraints
Xiaolin Yang
Department of Statistics
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Seyoung Kim
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Eric P. Xing
Machine Learning Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
Multitask learning addresses the problem of learning related tasks that presumably share some commonalities on their input-output mapping functions. Previous approaches to multitask learning usually deal with homogeneous tasks, such
as purely regression tasks, or entirely classification tasks. In this paper, we consider the problem of learning multiple related tasks of predicting both continuous and discrete outputs from a common set of input variables that lie in a high-dimensional feature space. All of the tasks are related in the sense that they share
the same set of relevant input variables, but the amount of influence of each input
on different outputs may vary. We formulate this problem as a combination of linear regressions and logistic regressions, and model the joint sparsity as the L1/L∞ or L1/L2 norm of the model parameters. Among several possible applications, our
approach addresses an important open problem in genetic association mapping,
where the goal is to discover genetic markers that influence multiple correlated
traits jointly. In our experiments, we demonstrate our method in this setting, using
simulated and clinical asthma datasets, and we show that our method can effectively recover the relevant inputs with respect to all of the tasks.
1 Introduction
In multitask learning, one is interested in learning a set of related models for predicting multiple
(possibly) related outputs (i.e., tasks) given a set of input variables [4]. In many applications, the
multiple tasks share a common input space, but have different functional mappings to different
output variables corresponding to different tasks. When the tasks and their corresponding models
are believed to be related, it is desirable to learn all of the models jointly rather than treating each
task as independent of each other and fitting each model separately. Such a learning strategy that
allows us to borrow information across tasks can potentially increase the predictive power of the
learned models.
Depending on the type of information shared among the tasks, a number of different algorithms have
been proposed. For example, hierarchical Bayesian models have been applied when the parameter
values themselves are thought to be similar across tasks [2, 14]. A probabilistic method for modeling
the latent structure shared across multiple tasks has been proposed [16]. For problems of which the
input lies in a high-dimensional space and the goal is to recover the shared sparsity structure across
tasks, a regularized regression method has been proposed [10].
In this paper, we consider an interesting and not uncommon scenario of multitask learning, where
the tasks are heterogeneous and bear a union support. That is, each task can be either a regression
or classification problem, with the inputs lying in a very high-dimensional feature space, but only a
small number of the input variables (i.e., predictors) are relevant to each of the output variables (i.e.,
responses). Furthermore, we assume that all of the related tasks possibly share common relevant
predictors, but with varying amount of influence on each task.
Previous approaches for multitask learning usually consider a set of homogeneous tasks, such as regressions only, or classifications only. When each of these discrete or continuous prediction tasks is
treated separately, given a high-dimensional design, the lasso method that penalizes the loss function
with an L1 norm of the parameters has been a popular approach for variable selection [13, 11], since
the L1 regularization has the property of shrinking parameters corresponding to irrelevant predictors
exactly to zero. One of the successful extensions of the standard lasso is the group lasso that uses an
L1 /L2 penalty defined over predictor groups [15], instead of just the L1 penalty ubiquitously over
all predictors. Recently, a more general L1 /Lq -regularized regression scheme with q > 0 has been
thoroughly investigated [17]. When the L1 /Lq penalty is used in estimating the regression function
for a single predictive task, it makes use of information about the grouping of input variables, and
applies the L1 penalty over the Lq norm of the regression coefficients for each group of inputs. As
a result, variable selection can be effectively achieved on each group rather than on each individual
input variable. This type of regularization scheme can be also used against the output variables in
a single classification task with multi-way (rather than binary) prediction, where the output is expanded from univariate to multivariate with dummy variables for each prediction category. In this
situation the group lasso can promote selecting the same set of relevant predictors across all of the
dummy variables (which is desirable since these dummy variables indeed correspond to only a single multi-way output). In our multitask learning problem, when the L1 /L2 penalty of group lasso is
used for multitask regression [9, 10, 1], the L2 norm is applied to the regression coefficients for each
input across all tasks, and the L1 norm is applied to these L2 norms, playing the role of selecting
common input variables relevant to one or more tasks via a sparse union support recovery. Since the
parameter estimation problem formulated with such penalty terms has a convex objective function,
many of the algorithms developed for a general convex optimization problem can be used for solving
the learning problem. For example, an interior point method and a preconditioned conjugate gradient algorithm have been used to solve a large-scale L1 -regularized linear regression and logistic
regression [8]. In [6, 13], a coordinate-descent method was used in solving an L1 -regularized linear
regression and generalized linear models, where the soft thresholding operator gives a closed-form
solution for each coordinate in each iteration.
In this paper, we consider the more challenging, but realistic, scenario of having heterogeneous outputs, i.e., both continuous and discrete responses, in multitask learning. This means that the tasks
in question consist of both regression and classification problems. Assuming a linear regression for
continuous-valued output and a logistic regression for discrete-valued output with dummy variables
for multiple categories, an L1 /Lq penalty can be used to learn both types of tasks jointly for a sparse
union support recovery. Since the L1 /Lq penalty selects the same relevant inputs for all dummy outputs for each classification task, the desired consistency in chosen relevant inputs across the dummy
variables corresponding to the same multi-way response is automatically maintained. We consider
particular cases of L1/Lq regularizations with q = 2 and q = ∞.
Our work is primarily motivated by the problem of genetic association mapping based on genome-wide genotype data of single nucleotide polymorphisms (SNPs), and phenotype data such as disease
status, clinical traits, and microarray data collected over a large number of individuals. The goal in
this study is to identify the SNPs (or inputs) that explain the variation in the phenotypes (or outputs),
while reducing false positives in the presence of a large number of irrelevant SNPs from the genomescale data. Since many clinical traits for a given disease are highly correlated, it is greatly beneficial
to combine information across multiple such related phenotypes because the inputs often involve
millions of SNPs and the association signals of causal (or relevant) SNPs tend to be very weak
when computed individually. However, statistically significant patterns can emerge when the joint
associations to multiple related traits are estimated properly. Over the recent years, researchers
started recognizing the importance of the joint analysis of multiple correlated phenotypes [5, 18],
but there has been a lack of statistical tools to systematically perform such analysis. In our previous
work [7], we developed a regularized regression method, called a graph-guided fused lasso, for
the multitask regression problem that takes advantage of the graph structure over tasks to encourage a
selection of common inputs across highly correlated traits in the graph. However, this method can
only be applied to the restricted case of correlated continuous-valued outputs. In reality, the set of
clinical traits related to a disease often contains both continuous- and discrete-valued traits. As we
demonstrate in our experiments, the L1 /Lq regularization for the joint regression and classification
can successfully handle this situation.
The paper is organized as follows. In Section 2, we introduce the notation and the basic formulation
for the joint regression-classification problem, and describe the L1/L∞ and L1/L2 regularized regressions for heterogeneous multitask learning in this setting. In Section 3, we formulate the parameter
estimation as a convex optimization problem, and present an interior-point method for solving it.
Section 4 presents experimental results on simulated and asthma datasets. In Section 5, we conclude
with a brief discussion of future work.
2 Joint Multitask Learning of Linear Regressions and Multinomial Logistic Regressions
Suppose that we have K tasks of learning a predictive model for the output variable, given a common
set of P input variables. In our joint regression-classification setting, we assume that the K tasks
consist of Kr tasks with continuous-valued outputs and Kc tasks with discrete-valued outputs of an
arbitrary number of categories.
For each of the K_r regression problems, we assume a linear relationship between the input vector X of size P and the kth output Y_k as follows:

    Y_k = β_k0^(r) + X β_k^(r) + ε ,   k = 1, . . . , K_r ,

where β_k^(r) = (β_k1^(r), . . . , β_kP^(r))' represents a vector of P regression coefficients for the kth regression task, with the superscript (r) indicating that this is a parameter for regression; β_k0^(r) represents the intercept; and ε denotes the residual.
Let y_k = (y_k1, . . . , y_kN)' represent the vector of observations for the kth output over N samples, and let X represent an N × P matrix X = (x_1, . . . , x_N)' of the input shared across all of the K tasks, where x_i = (x_i1, . . . , x_iP)' denotes the ith sample. Given these data, we can estimate the β_k^(r)'s by minimizing the sum of squared errors:

    L_r = Σ_{k=1}^{K_r} (y_k − 1·β_k0^(r) − X β_k^(r))' (y_k − 1·β_k0^(r) − X β_k^(r)) ,   (1)

where 1 is an N-vector of 1's.
For the tasks with discrete-valued output, we set up a multinomial (i.e., softmax) logistic regression for each of the K_c tasks, assuming that the kth task has M_k categories:

    P(Y_k = m | X = x) = exp(β_k0^(c) + x β_km^(c)) / ( 1 + Σ_{l=1}^{M_k−1} exp(β_k0^(c) + x β_kl^(c)) ) ,   for m = 1, . . . , M_k − 1,

    P(Y_k = M_k | X = x) = 1 / ( 1 + Σ_{l=1}^{M_k−1} exp(β_k0^(c) + x β_kl^(c)) ) ,   (2)

where β_km^(c) = (β_km1^(c), . . . , β_kmP^(c))', m = 1, . . . , (M_k − 1), is the parameter vector for the mth category of the kth classification task, and β_k0^(c) is the intercept.
Assuming that the measurements for the K_c output variables are collected for the same set of N samples as in the regression tasks, we expand each output observation y_{ki} for the k-th task and the i-th sample into a set of M_k binary variables y'_{ki} = (y_{k1i}, ..., y_{kM_k i}), where each y_{kmi}, m = 1, ..., M_k, takes value 1 if the i-th sample for the k-th classification task belongs to the m-th category and value 0 otherwise; thus Σ_m y_{kmi} = 1. Using the observations for the output variables in this representation and the shared input data X, one can estimate the parameters β_{km}^{(c)}'s by minimizing the negative log-likelihood given below:

    L_c = − Σ_{i=1}^N Σ_{k=1}^{K_c} [ Σ_{m=1}^{M_k} y_{kmi} (β_{k0}^{(c)} + Σ_{j=1}^P x_{ij} β_{kmj}^{(c)}) − log( 1 + Σ_{m=1}^{M_k−1} exp(β_{k0}^{(c)} + Σ_{j=1}^P x_{ij} β_{kmj}^{(c)}) ) ],    (3)

with the convention that β_{kM_k}^{(c)} = 0, the M_k-th category acting as the reference class.
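The following Python sketch evaluates the negative log-likelihood of Eqn. (3) for a single classification task in a numerically stable way, treating the M_k-th category as the zero-score reference class; the shapes and names are illustrative assumptions.

```python
import numpy as np

def multinomial_nll(Yk, X, Bk, b0k):
    """Negative log-likelihood of Eqn. (3) for one classification task.

    Yk  : (N, Mk)    one-hot outputs y_kmi (last category included)
    X   : (N, P)     shared inputs
    Bk  : (P, Mk-1)  parameters beta_km^(c) for the first Mk-1 categories
    b0k : (Mk-1,)    intercepts; the Mk-th category is the reference class
    """
    A = X @ Bk + b0k                                  # (N, Mk-1) linear scores
    # log(1 + sum_m exp(a_m)) computed stably, row by row
    lse = np.logaddexp(0.0, np.logaddexp.reduce(A, axis=1))
    # one-hot times score; the reference class contributes a zero score
    fit = np.sum(Yk[:, :-1] * A)
    return -(fit - np.sum(lse))
```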
In this joint regression-classification problem, we form a global objective function by combining the two empirical loss functions in Equations (1) and (3):

    L = L_r + L_c.    (4)

This is equivalent to estimating the β_k^{(r)}'s and β_{km}^{(c)}'s independently for each of the K tasks, assuming that there are no shared patterns in the way that each of the K output variables depends on the input variables. Our goal is to improve variable selection and prediction power by allowing the sharing of information among the heterogeneous tasks.
3 Heterogeneous Multitask Learning with Joint Sparse Feature Selection

In real-world applications, the covariates often lie in a very high-dimensional space with only a small fraction of them involved in determining the output, and the goal is to recover the sparse structure of the predictive model by selecting the truly relevant covariates. For example, in genetic association mapping, often millions of genetic markers over a population of individuals are examined to find associations with a given phenotype such as clinical traits, disease status, or molecular phenotypes. The challenge in this type of study is to locate the true causal SNPs that influence the phenotype. We consider the case where the related tasks share the same sparsity pattern, so that they have a common set of relevant input variables for both the regression and classification tasks, while the amount of influence of the relevant input variables on the output may vary across the tasks. We introduce an L1/Lq regularization to the heterogeneous multitask learning problem in Equation (4) as below:

    L = L_r + L_c + λP_q,    (5)

where P_q is the group penalty added to the sum of the linear-regression and logistic losses, and λ is a regularization parameter that determines the sparsity level and can be chosen by cross-validation.
We consider two extreme cases of the L1/Lq penalty for group variable selection in our problem, namely the L∞ norm and the L2 norm across different tasks in one dimension:

    P_∞ = Σ_{j=1}^P max_{k,m} { |β_{kj}^{(r)}|, |β_{kmj}^{(c)}| }    or    P_2 = Σ_{j=1}^P ‖(β_j^{(r)}, β_j^{(c)})‖_{L2},    (6)

where β_j^{(r)}, β_j^{(c)} are the vectors of parameters over all regression and classification tasks, respectively, for the j-th dimension. Here, the L∞ and L2 norms over the parameters across different tasks regulate the joint sparsity among tasks. The L1/L∞ and L1/L2 norms encourage group sparsity in a similar way, in that the β_{kj}^{(r)}'s and β_{kmj}^{(c)}'s are set to 0 simultaneously for all of the tasks for dimension j if the L∞ or L2 norm for that dimension is set to 0. Similarly, if the L1 operator selects a non-zero value for the L∞ or L2 norm of the β_{kj}^{(r)}'s and β_{kmj}^{(c)}'s for the j-th input, the same input is considered as possibly relevant to all of the tasks, and the β_{kj}^{(r)}'s and β_{kmj}^{(c)}'s can take any non-zero values smaller than the maximum or satisfying the L2-norm constraint. The L1/L∞ penalty tends to encourage the parameter values to be the same across all tasks for a given input [17], whereas under the L1/L2 penalty the values of the parameters across tasks tend to differ more for a given input than under the L1/L∞ penalty.
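The two penalties of Eqn. (6) are simple to compute once all task parameters for each input dimension are stacked into a single matrix; the stacking convention below is an assumption made for this NumPy sketch.

```python
import numpy as np

def group_penalties(W):
    """L1/Linf and L1/L2 penalties of Eqn. (6).

    W : (T, P) matrix stacking, row-wise, all regression coefficients
        beta_kj^(r) and classification coefficients beta_kmj^(c);
        column j collects every task's weight on input j.
    """
    p_inf = np.sum(np.max(np.abs(W), axis=0))      # sum_j max_{k,m} |beta|
    p_2 = np.sum(np.sqrt(np.sum(W ** 2, axis=0)))  # sum_j ||beta_j||_2
    return p_inf, p_2
```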
4 Optimization Method

Different methods such as gradient descent, steepest descent, Newton's method and quasi-Newton methods can be used to solve the problem in Equation (5). Although second-order methods converge quickly near the global minimum of a convex objective, they involve computing a Hessian matrix and inverting it, which can be infeasible in a high-dimensional setting. The coordinate-descent method iteratively updates each element of the parameter vector one at a time, using a closed-form update equation given all of the other elements; however, since it is a first-order method, its convergence slows as the number of tasks and the dimension increase. In [8], a truncated Newton method that uses a preconditioner and solves a linear system instead of inverting the Hessian matrix was proposed as a fast optimization method for very large-scale problems. The linear-regression loss and the logistic-regression loss have different forms; the interior-point method optimizes the original loss functions without any transformation, so it is more intuitive to see how the two heterogeneous task types affect each other.
In this section, we discuss the case of the L1/L∞ penalty, since the same optimization method extends easily to the L1/L2 penalty. First, we re-write the problem of minimizing Equation (5) with the nondifferentiable L1/L∞ penalty as

    minimize  L_r + L_c + λ Σ_{j=1}^P u_j
    subject to  max_{k,m} { |β_{kj}^{(r)}|, |β_{kmj}^{(c)}| } < u_j,    for j = 1, ..., P, k = 1, ..., K_r + K_c.

Further re-writing the constraints in the above problem, we obtain 2 · P · (K_r + Σ_{k=1}^{K_c} (M_k − 1)) inequality constraints as follows:

    −u_j < β_{kj}^{(r)} < u_j,    for k = 1, ..., K_r, j = 1, ..., P,
    −u_j < β_{kmj}^{(c)} < u_j,    for k = 1, ..., K_c, j = 1, ..., P, m = 1, ..., M_k − 1.    (7)
Using the barrier method [3], we re-formulate the objective function in Equation (7) as the unconstrained problem

    L_Barrier = L_r + L_c + λ Σ_{j=1}^P u_j
                + Σ_{k=1}^{K_r} Σ_{j=1}^P ( I_−(−β_{kj}^{(r)} − u_j) + I_−(β_{kj}^{(r)} − u_j) )
                + Σ_{k=1}^{K_c} Σ_{m=1}^{M_k−1} Σ_{j=1}^P ( I_−(−β_{kmj}^{(c)} − u_j) + I_−(β_{kmj}^{(c)} − u_j) ),

where

    I_−(x) = 0 if x ≤ 0,  ∞ if x > 0.

Then, we apply the log-barrier function Î_−(f(x)) = −(1/t) log(−f(x)), where t is an additional parameter that determines the accuracy of the approximation.
Let θ denote the set of parameters β_k^{(r)} and β_{km}^{(c)}. Given a strictly feasible θ, t = t^{(0)} > 0, μ > 1, and tolerance ε > 0, we iterate the following steps until convergence.

Step 1: Compute θ*(t) by minimizing L_Barrier, starting at θ.
Step 2: Update: θ := θ*(t).
Step 3: Stopping criterion: quit if m/t < ε, where m is the number of constraint functions.
Step 4: Increase t: t := tμ.

In Step 1, we use Newton's method to minimize L_Barrier at the current t. In each iteration, we increase t in Step 4, so that we obtain a more accurate approximation of I_− through Î_−(f(x)) = −(1/t) log(−f(x)).
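A minimal Python sketch of this outer loop follows; the helper `minimize_lbarrier`, assumed here to perform the Newton centering of Step 1 and return the minimizer, is hypothetical.

```python
def barrier_method(theta, minimize_lbarrier, n_constraints,
                   t0=1.0, mu=10.0, eps=1e-6):
    """Outer loop of the log-barrier method (Steps 1-4 above).

    minimize_lbarrier(theta, t) is assumed to run Newton's method on
    L_Barrier at the current t, warm-started at theta, and return
    theta*(t). All names and defaults here are illustrative.
    """
    t = t0
    while n_constraints / t >= eps:           # Step 3: stopping rule
        theta = minimize_lbarrier(theta, t)   # Steps 1-2: center, update
        t *= mu                               # Step 4: tighten the barrier
    return theta
```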
In Step 1, we find the direction towards the optimal solution using Newton's method:

    H [Δβ; Δu] = −g,

where Δβ and Δu are the search directions for the model parameters and the bounding parameters. Here g is the gradient vector, given as g = [g^{(r)}, g^{(c)}, g^{(u)}]^T, where g^{(r)} has K_r components for the regression tasks and g^{(c)} has K_c × (M_k − 1) components for the classification tasks, and H is the Hessian matrix given as:

    H = [ R        0        D^{(r)} ;
          0        L        D^{(c)} ;
          D^{(r)}  D^{(c)}  F       ],
Figure 1: The regularization path for L1/L∞-regularized methods. (a) Regression parameters estimated from the heterogeneous task learning method, (b) regression parameters estimated from regression tasks only, (c) logistic-regression parameters estimated from the heterogeneous task learning method, and (d) logistic-regression parameters estimated from classification tasks only. Blue curves: irrelevant inputs; red curves: relevant inputs.
where R and L are the second derivatives with respect to the parameters β for the regression and classification tasks, in the form R = ∇²L_r + ∇²P_g|_{∂β^{(r)} ∂β^{(r)}}, L = ∇²L_c + ∇²P_g|_{∂β^{(c)} ∂β^{(c)}}, D = ∇²P_g|_{∂β ∂u}, and F = D^{(r)} + D^{(c)}. In the overall interior-point method, constructing and inverting the Hessian matrix is the most time-consuming part. In order to make the algorithm scale to large problems, we use the preconditioner diag(H) of the Hessian matrix H, and apply the preconditioned conjugate-gradient algorithm to compute the search direction.
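The linear solve inside each Newton step can be sketched as below, using SciPy's conjugate-gradient routine with the diag(H) preconditioner; this is an illustration of the idea, not the authors' implementation, and the dense Hessian is an assumption made for brevity (in practice only Hessian-vector products are needed).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def pcg_newton_direction(H, g):
    """Solve H d = -g with diagonally preconditioned conjugate gradients.

    H : (n, n) Hessian (dense here for simplicity), g : (n,) gradient.
    Returns the Newton search direction d = [delta_beta; delta_u].
    """
    d_inv = 1.0 / np.diag(H)                 # diag(H) preconditioner
    M = LinearOperator(H.shape, matvec=lambda v: d_inv * v)
    d, info = cg(H, -g, M=M)                 # info == 0 on convergence
    return d
```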
5 Experiments

We demonstrate our methods for heterogeneous multitask learning with L1/L∞ and L1/L2 regularizations on simulated and asthma datasets, and compare their performance with that obtained from solving the two multitask-learning problems for regressions and classifications separately.

5.1 Simulation Study

In the context of genetic association analysis, we simulate the input and output data with known model parameters as follows. We start from the 120 haplotypes of chromosome 7 from the population of European ancestry in the HapMap data [12], and randomly mate the haplotypes to generate genotype data for 500 individuals. We randomly select 50 SNPs across the chromosome as inputs. In order to simulate the parameters β_k^{(r)}'s and β_{km}^{(c)}'s, we assume six regression tasks and a single classification task with five categories, and choose five common SNPs from the total of 50 SNPs as relevant covariates across all of the tasks. We fill the non-zero entries in the regression coefficients β_k^{(r)}'s with values uniformly distributed in the interval [a, b] with 5 ≤ a, b ≤ 10, and set the non-zero entries in the logistic-regression parameters β_{km}^{(c)}'s such that the five categories are separated in the output space. Given these inputs and the model parameters, we generate the output values, using noise for the regression tasks distributed as N(0, σ²_sim). In the classification task, we expand the single output into five dummy variables representing the different categories, taking values of 0 or 1 depending on which category each sample belongs to. We repeat this whole process of simulating inputs and outputs to obtain 50 datasets, and report results averaged over these datasets.
The regularization paths of the different multitask-learning methods with an L1/L∞ regularization obtained from a single simulated dataset are shown in Figure 1. The results from learning all of the tasks jointly are shown in Figures 1(a) and 1(c) for regression and classification tasks, respectively, whereas the results from learning the two sets of regression and classification tasks separately are shown in Figures 1(b) and 1(d). The red curves indicate the parameters for true relevant inputs, and the blue curves those for true irrelevant inputs. We find that when learning both types of tasks jointly, the parameters of the irrelevant inputs are more reliably set to zero along the regularization path than when learning the two types of tasks separately.

In order to evaluate the performance of the methods, we use two criteria: sensitivity/specificity plotted as receiver operating characteristic (ROC) curves, and prediction errors on test data. To obtain ROC curves, we estimate the parameters, sort the input-output pairs according to the magnitude of the estimated β_{kj}^{(r)}'s and β_{kmj}^{(c)}'s, and compare the sorted list with the list of input-output pairs with true non-zero β_{kj}^{(r)}'s and β_{kmj}^{(c)}'s.
Figure 2: ROC curves for detecting true relevant input variables when the sample size N varies. (a) Regression tasks with N = 100, (b) classification tasks with N = 100, (c) regression tasks with N = 200, and (d) classification tasks with N = 200. Noise level N(0, 1) was used. The joint regression-classification methods achieve nearly perfect accuracy, and their ROC curves are completely aligned with the axes. 'M' indicates homogeneous multitask learning and 'HM' heterogeneous multitask learning (this notation is the same in the following figures).
Figure 3: Prediction errors when the sample size N varies. (a) Regression tasks with N = 100, (b) classification tasks with N = 100, (c) regression tasks with N = 200, and (d) classification tasks with N = 200. Noise level N(0, 1) was used.
We vary the sample size to N = 100 and 200, and show the ROC curves for detecting true relevant inputs using the different methods in Figure 2. We use σ_sim = 1 to generate noise in the regression tasks. Results for the regression and classification tasks with N = 100 are shown in Figures 2(a) and (b) respectively, and, similarly, the results with N = 200 in Figures 2(c) and (d). The results with the L1/L∞ penalty are shown in blue and green, comparing the homogeneous and heterogeneous methods; red and yellow show the results with the L1/L2 penalty. Although the performance of learning the two types of tasks separately improves with a larger sample size, the joint estimation performs significantly better for both sample sizes. A similar trend can be seen in the prediction errors for the same simulated datasets in Figure 3.
In order to see how different signal-to-noise ratios affect the performance, we vary the noise level to σ²_sim = 5 and σ²_sim = 8, and plot the ROC curves averaged over 50 datasets with a sample size N = 300 in Figure 4. Our results show that for both signal-to-noise ratios, learning regression and classification tasks jointly improves the performance significantly. The same observation can be made from the prediction errors in Figure 5. We can see that the L1/L2 method tends to improve the variable selection, but the tradeoff is that the prediction error is high when the noise level is low. While L1/L∞ strikes a good balance between variable-selection accuracy and prediction error at a lower noise level, as the noise increases, L1/L2 outperforms L1/L∞ in both variable selection and prediction accuracy.

Figure 4: ROC curves for detecting true relevant input variables when the noise level varies. (a) Regression tasks with noise level N(0, 5), (b) classification tasks with noise level N(0, 5), (c) regression tasks with noise level N(0, 8), and (d) classification tasks with noise level N(0, 8). Sample size N = 300 was used.
Figure 5: Prediction errors when the noise level varies. (a) Regression tasks with noise level N(0, 5²), (b) classification tasks with noise level N(0, 5²), (c) regression tasks with noise level N(0, 8²), and (d) classification tasks with noise level N(0, 8²). Sample size N = 300 was used.
Figure 6: Parameters estimated from the asthma dataset for discovery of causal SNPs for the correlated phenotypes. (a) Heterogeneous task learning method, and (b) separate analysis of multitask regressions and multitask classifications. The rows represent tasks, and the columns represent SNPs.
5.2 Analysis of Asthma Dataset

We apply our method to an asthma dataset with 34 SNPs in the IL4R gene of chromosome 11 and five asthma-related clinical traits collected over 613 patients. The set of traits includes four continuous-valued traits related to lung physiology, namely baseline pre-drug FEV1, maximum FEV1, baseline pre-drug FVC, and maximum FVC, as well as a single discrete-valued trait with five categories. The goal of this analysis is to discover whether any of the SNPs (inputs) influence each of the asthma-related traits (outputs). We fit the joint regression-classification method with L1/L∞ and L1/L2 regularizations, and compare the results with those from fitting L1/L∞- and L1/L2-regularized methods only on the regression tasks or only on the classification task. We show the estimated parameters for joint learning with the L1/L∞ penalty in Figure 6(a) and for separate learning with the L1/L∞ penalty in Figure 6(b), where the first four rows correspond to the four regression tasks, the next four rows are the parameters for the four dummy variables of the classification task, and the columns represent SNPs. We can see that the heterogeneous multitask-learning method encourages finding common causal SNPs for the multiclass classification task and the regression tasks.
6 Conclusions

In this paper, we proposed a method for the recovery of union support in heterogeneous multitask learning, where the set of tasks consists of both regressions and classifications. In our experiments with simulated and asthma datasets, we demonstrated that using L1/L2 or L1/L∞ regularizations in the joint regression-classification problem improves the performance in identifying the input variables that are commonly relevant to multiple tasks.

The sparse union support recovery presented in this paper is concerned with finding inputs that influence at least one task. In the real-world problem of association mapping, there is a clustering structure such as co-regulated genes, and it would be interesting to discover SNPs that are causal to at least one of the outputs within a subgroup rather than to all of the outputs. In addition, SNPs in a region of a chromosome are often correlated with each other because of the non-random recombination process during inheritance; this correlation structure, called linkage disequilibrium, has been actively investigated. A promising future direction would be to model this complex correlation pattern in both the input and output spaces within our framework.

Acknowledgments EPX is supported by grant NSF DBI-0640543, NSF DBI-0546594, NSF IIS-0713379, NIH grant 1R01GM087694, and an Alfred P. Sloan Research Fellowship.
References
[1] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243-272, 2008.
[2] B. Bakker and T. Heskes. Task clustering and gating for Bayesian multitask learning. Journal of Machine Learning Research, 4:83-99, 2003.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] R. Caruana. Multitask learning. Machine Learning, 28:41-75, 1997.
[5] V. Emilsson, G. Thorleifsson, B. Zhang, A.S. Leonardson, F. Zink, J. Zhu, S. Carlson, A. Helgason, G.B. Walters, S. Gunnarsdottir, et al. Variations in DNA elucidate molecular networks that cause disease. Nature, 452(27):423-428, 2008.
[6] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Technical Report 703, Department of Statistics, Stanford University, 2009.
[7] S. Kim and E. P. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genetics, 5(8):e1000587, 2009.
[8] K. Koh, S. Kim, and S. Boyd. An interior-point method for large-scale l1-regularized logistic regression. Journal of Machine Learning Research, 8(8):1519-1555, 2007.
[9] G. Obozinski, B. Taskar, and M. Jordan. Joint covariate selection for grouped classification. Technical Report 743, Department of Statistics, University of California, Berkeley, 2007.
[10] G. Obozinski, M.J. Wainwright, and M.I. Jordan. High-dimensional union support recovery in multivariate regression. In Advances in Neural Information Processing Systems 21, 2008.
[11] M. Schmidt, G. Fung, and R. Rosales. Fast optimization methods for L1 regularization: a comparative study and two new approaches. In Proceedings of the European Conference on Machine Learning, 2007.
[12] The International HapMap Consortium. A haplotype map of the human genome. Nature, 437:1299-1320, 2005.
[13] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
[14] K. Yu, V. Tresp, and A. Schwaighofer. Learning Gaussian processes from multiple tasks. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[15] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B, 68(1):49-67, 2006.
[16] J. Zhang, Z. Ghahramani, and Y. Yang. Flexible latent variable models for multi-task learning. Machine Learning, 73(3):221-242, 2008.
[17] P. Zhao, G. Rocha, and B. Yu. Grouped and hierarchical model selection through composite absolute penalties. Technical Report 703, Department of Statistics, University of California, Berkeley, 2008.
[18] J. Zhu, B. Zhang, E.N. Smith, B. Drees, R.B. Brem, L. Kruglyak, R.E. Bumgarner, and E.E. Schadt. Integrating large-scale functional genomic data to dissect the complexity of yeast regulatory networks. Nature Genetics, 40:854-861, 2008.
Fast Image Deconvolution
using Hyper-Laplacian Priors
Dilip Krishnan,
Dept. of Computer Science,
Courant Institute,
New York University
[email protected]
Rob Fergus,
Dept. of Computer Science,
Courant Institute,
New York University
[email protected]
Abstract

The heavy-tailed distributions of gradients in natural scenes have proven effective priors for a range of problems such as denoising, deblurring and super-resolution. These distributions are well modeled by a hyper-Laplacian p(x) ∝ e^{−k|x|^α}, typically with 0.5 ≤ α ≤ 0.8. However, the use of sparse distributions makes the problem non-convex and impractically slow to solve for multi-megapixel images. In this paper we describe a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors. We adopt an alternating minimization scheme where one of the two phases is a non-convex problem that is separable over pixels. This per-pixel sub-problem may be solved with a lookup table (LUT). Alternatively, for two specific values of α, 1/2 and 2/3, an analytic solution can be found by finding the roots of a cubic and quartic polynomial, respectively. Our approach (using either LUTs or analytic formulae) is able to deconvolve a 1 megapixel image in less than ~3 seconds, achieving comparable quality to existing methods such as iteratively reweighted least squares (IRLS) that take ~20 minutes. Furthermore, our method is quite general and can easily be extended to related image processing problems, beyond the deconvolution application demonstrated.
easily be extended to related image processing problems, beyond the deconvolution application demonstrated.
1
Introduction
Natural image statistics are a powerful tool in image processing, computer vision and computational
photography. Denoising [14], deblurring [3], transparency separation [11] and super-resolution [20],
are all tasks that are inherently ill-posed. Priors based on natural image statistics can regularize these
problems to yield high-quality results. However, digital cameras now have sensors that record images with tens of megapixels (MP), e.g. the latest Canon DSLRs have over 20MP. Solving the above
tasks for such images in a reasonable time frame (i.e. a few minutes or less), poses a severe challenge
to existing algorithms. In this paper we focus on one particular problem: non-blind deconvolution,
and propose an algorithm that is practical for very large images while still yielding high quality
results.
Numerous deconvolution approaches exist, varying greatly in their speed and sophistication. Simple
filtering operations are very fast but typically yield poor results. Most of the best-performing approaches solve globally for the corrected image, encouraging the marginal statistics of a set of filter
outputs to match those of uncorrupted images, which act as a prior to regularize the problem. For
these methods, a trade-off exists between accurately modeling the image statistics and being able to
solve the ensuing optimization problem efficiently. If the marginal distributions are assumed to be
Gaussian, a closed-form solution exists in the frequency domain and FFTs can be used to recover the
image very quickly. However, real-world images typically have marginals that are non-Gaussian, as
shown in Fig. 1, and thus the output is often of mediocre quality. A common approach is to assume
the marginals have a Laplacian distribution. This allows a number of fast ?1 and related TV-norm
methods [17, 22] to be deployed, which give good results in a reasonable time. However, studies
1
Figure 1: A hyper-Laplacian with exponent α = 2/3 is a better model of image gradients than a Laplacian or a Gaussian. Left: A typical real-world scene. Right: The empirical distribution of gradients in the scene (blue), along with a Gaussian fit (cyan), a Laplacian fit (red) and a hyper-Laplacian with α = 2/3 (green). Note that the hyper-Laplacian fits the empirical distribution closely, particularly in the tails.
However, studies of real-world images have shown that the marginal distributions have significantly heavier tails than a Laplacian, being well modeled by a hyper-Laplacian [4, 10, 18]. Although such priors give the best quality results, they are typically far slower than methods that use either Gaussian or Laplacian priors. This is a direct consequence of the problem becoming non-convex for hyper-Laplacians with α < 1, meaning that many of the fast ℓ1 or ℓ2 tricks are no longer applicable. Instead, standard optimization methods such as conjugate gradient (CG) must be used. One variant that works well in practice is iteratively reweighted least squares (IRLS) [19], which solves a series of weighted least-squares problems with CG, each one an ℓ2 approximation to the non-convex problem at the current point. In both cases, typically hundreds of CG iterations are needed, each involving an expensive convolution of the blur kernel with the current image estimate.

In this paper we introduce an efficient scheme for non-blind deconvolution of images using a hyper-Laplacian image prior for 0 < α ≤ 1. Our algorithm uses an alternating minimization scheme where the non-convex part of the problem is solved in one phase, followed by a quadratic phase which can be efficiently solved in the frequency domain using FFTs. We focus on the first phase, where at each pixel we are required to solve a non-convex separable minimization. We present two approaches to solving this sub-problem. The first uses a lookup table (LUT); the second is an analytic approach specific to two values of α. For α = 1/2 the global minima can be determined by finding the roots of a cubic polynomial analytically. In the α = 2/3 case, the polynomial is a quartic whose roots can also be found efficiently in closed-form. Both IRLS and our approach solve a series of approximations to the original problem. However, in our method each approximation is solved by alternating between the two phases above a few times, thus avoiding the expensive CG descent used by IRLS. This allows our scheme to operate several orders of magnitude faster. Although we focus on the problem of non-blind deconvolution, it would be straightforward to adapt our algorithm to other related problems, such as denoising or super-resolution.
1.1 Related Work

Hyper-Laplacian image priors have been used in a range of settings: super-resolution [20], transparency separation [11] and motion deblurring [9]. In work directly relevant to ours, Levin et al. [10] and Joshi et al. [7] have applied them to non-blind deconvolution problems using IRLS to solve for the deblurred image. Other types of sparse image prior include: Gaussian Scale Mixtures (GSM) [21], which have been used for image deblurring [3] and denoising [14], and Student-t distributions for denoising [25, 16]. With the exception of [14], these methods use CG and thus are slow.

The alternating minimization that we adopt is a common technique, known as half-quadratic splitting, originally proposed by Geman and colleagues [5, 6]. Recently, Wang et al. [22] showed how it could be used with a total-variation (TV) norm to deconvolve images. Our approach is closely related to this work: we also use a half-quadratic minimization, but the per-pixel sub-problem is quite different. With the TV norm it can be solved with a straightforward shrinkage operation. In our work, as a consequence of using a sparse prior, the problem is non-convex, and solving it efficiently is one of the main contributions of this paper.

Chartrand [1, 2] has introduced non-convex compressive sensing, where the usual ℓ1 norm on the signal to be recovered is replaced with an ℓp quasi-norm, where p < 1. Similar to our approach, a splitting scheme is used, resulting in a non-convex per-pixel sub-problem. To solve this, a Huber approximation (see [1]) to the quasi-norm is used, allowing the derivation of a generalized shrinkage operator to solve the sub-problem efficiently. However, this approximates the original sub-problem, unlike our approach.
2 Algorithm

We now introduce the non-blind deconvolution problem. x is the original uncorrupted linear grayscale image of N pixels; y is an image degraded by blur and/or noise, which we assume to be produced by convolving x with a blur kernel k and adding zero-mean Gaussian noise. We assume that y and k are given and seek to reconstruct x. Given the ill-posed nature of the task, we regularize using a penalty function |·|^α that acts on the output of a set of filters f_1, ..., f_J applied to x. A weighting term λ controls the strength of the regularization. From a probabilistic perspective, we seek the MAP estimate of x: p(x|y, k) ∝ p(y|x, k)p(x), the first term being a Gaussian likelihood and the second being the hyper-Laplacian image prior. Maximizing p(x|y, k) is equivalent to minimizing the cost −log p(x|y, k):

    min_x Σ_{i=1}^N ( (λ/2) (x ⊕ k − y)_i² + Σ_{j=1}^J |(x ⊕ f_j)_i|^α ),    (1)

where i is the pixel index and ⊕ is the 2-dimensional convolution operator. For simplicity, we use two first-order derivative filters f_1 = [1 −1] and f_2 = [1 −1]^T, although additional ones can easily be added (e.g. learned filters [13, 16], or higher-order derivatives). For brevity, we denote F_i^j x ≡ (x ⊕ f_j)_i for j = 1, ..., J.
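For reference, a small Python sketch that evaluates the cost of Eqn. 1 follows; `fftconvolve` with mode='same' stands in for the 2-D convolution ⊕, and the boundary handling is simplified in this sketch.

```python
import numpy as np
from scipy.signal import fftconvolve

def deconv_cost(x, y, k, lam, alpha):
    """Cost of Eqn. (1). x, y: 2-D arrays (estimate, blurred observation);
    k: blur kernel; lam: weight lambda; alpha: exponent."""
    data = 0.5 * lam * np.sum((fftconvolve(x, k, mode='same') - y) ** 2)
    fx = np.diff(x, axis=1)    # output of f1 = [1 -1] (sign is irrelevant
    fy = np.diff(x, axis=0)    # under |.|^alpha); f2 = [1 -1]^T likewise
    prior = np.sum(np.abs(fx) ** alpha) + np.sum(np.abs(fy) ** alpha)
    return data + prior
```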
Using the half-quadratic penalty method [5, 6, 22], we now introduce auxiliary variables w_i¹ and w_i² (together denoted as w) at each pixel, which allow us to move the F_i^j x terms outside the |·|^α expression, giving a new cost function:

    min_{x,w} Σ_i ( (λ/2) (x ⊕ k − y)_i² + (β/2) ‖F_i¹x − w_i¹‖₂² + (β/2) ‖F_i²x − w_i²‖₂² + |w_i¹|^α + |w_i²|^α ),    (2)

where β is a weight that we will vary during the optimization, as described in Section 2.3. As β → ∞, the solution of Eqn. 2 converges to that of Eqn. 1. Minimizing Eqn. 2 for a fixed β can be performed by alternating between two steps, one where we solve for x given values of w, and vice-versa. The novel part of our algorithm lies in the w sub-problem, but first we briefly describe the x sub-problem and its straightforward solution.
2.1 x sub-problem

Given a fixed value of w from the previous iteration, Eqn. 2 is quadratic in x. The optimal x is thus:

    ( F¹ᵀF¹ + F²ᵀF² + (λ/β) KᵀK ) x = F¹ᵀw¹ + F²ᵀw² + (λ/β) Kᵀy,    (3)

where Kx ≡ x ⊕ k. Assuming circular boundary conditions, we can apply 2-D FFTs, which diagonalize the convolution matrices F¹, F², K, enabling us to find the optimal x directly:

    x = F⁻¹( [ F(F¹)* ∘ F(w¹) + F(F²)* ∘ F(w²) + (λ/β) F(K)* ∘ F(y) ] / [ F(F¹)* ∘ F(F¹) + F(F²)* ∘ F(F²) + (λ/β) F(K)* ∘ F(K) ] ),    (4)

where * is the complex conjugate and ∘ denotes component-wise multiplication; the division is also performed component-wise. Solving Eqn. 4 requires only 3 FFTs at each iteration, since many of the terms can be precomputed. The form of this sub-problem is identical to that of [22].
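A minimal NumPy transcription of the x update of Eqn. 4 is given below, assuming circular boundaries and precomputed filter/kernel FFTs; the names are illustrative.

```python
import numpy as np

def solve_x(w1, w2, Fy, Fk, Ff1, Ff2, lam, beta):
    """x sub-problem of Eqn. (4) via 2-D FFTs.

    Fy, Fk, Ff1, Ff2 are precomputed FFTs of y, the kernel k, and the
    derivative filters f1, f2, all zero-padded to the image size.
    Only the FFTs of w1 and w2 change between iterations.
    """
    num = (np.conj(Ff1) * np.fft.fft2(w1)
           + np.conj(Ff2) * np.fft.fft2(w2)
           + (lam / beta) * np.conj(Fk) * Fy)
    den = (np.conj(Ff1) * Ff1 + np.conj(Ff2) * Ff2
           + (lam / beta) * np.conj(Fk) * Fk)
    # denominator is real and positive; np.real strips round-off residue
    return np.real(np.fft.ifft2(num / den))
```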
2.2 w sub-problem

Given a fixed x, finding the optimal w consists of solving 2N independent 1-D problems of the form:

    w* = argmin_w |w|^α + (β/2) (w − v)²,    (5)

where v ≡ F_i^j x. We now describe two approaches to finding w*.

2.2.1 Lookup table

For a fixed value of α, w* in Eqn. 5 depends only on two variables, β and v, and hence can easily be tabulated off-line to form a lookup table. We numerically solve Eqn. 5 for 10,000 different values of v over the range encountered in our problem (−0.6 ≤ v ≤ 0.6). This is repeated for different β values, namely integer powers of √2 between 1 and 256. Although the LUT gives an approximate solution, it allows the w sub-problem to be solved very quickly for any α > 0.
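A sketch of the off-line LUT construction follows, using a bounded 1-D numerical minimizer and an explicit comparison against w = 0; the grid sizes follow the text, while the choice of solver is an assumption of this sketch.

```python
import numpy as np
from scipy.optimize import fminbound

def build_lut(alpha, betas, v_grid=np.linspace(-0.6, 0.6, 10000)):
    """Tabulate w*(v, beta) of Eqn. (5) off-line.

    For each (beta, v) pair the 1-D objective is minimized numerically;
    w* always lies between 0 and v, so the search can be bounded there.
    """
    cost = lambda w, v, b: np.abs(w) ** alpha + 0.5 * b * (w - v) ** 2
    lut = np.empty((len(betas), len(v_grid)))
    for i, b in enumerate(betas):
        for j, v in enumerate(v_grid):
            lo, hi = min(0.0, v), max(0.0, v)
            w = fminbound(cost, lo, hi, args=(v, b))
            # the interior minimizer must still beat w = 0
            lut[i, j] = w if cost(w, v, b) < cost(0.0, v, b) else 0.0
    return lut
```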
2.2.2 Analytic solution

For some specific values of α, it is possible to derive exact analytical solutions to the w sub-problem. For α = 2, the sub-problem is quadratic and thus easily solved. If α = 1, Eqn. 5 reduces to a 1-D shrinkage operation [22]. For some special cases of 1 < α < 2, there exist analytic solutions [26]. Here, we address the more challenging case of α < 1, and we now describe a way to solve Eqn. 5 for the two special cases α = 1/2 and α = 2/3. For non-zero w, setting the derivative of Eqn. 5 w.r.t. w to zero gives:

    α|w|^{α−1} sign(w) + β(w − v) = 0.    (6)

For α = 1/2, this becomes, with successive simplification:

    |w|^{−1/2} sign(w) + 2β(w − v) = 0,    (7)
    |w|^{−1} = 4β²(v − w)²,    (8)
    w³ − 2vw² + v²w − sign(w)/(4β²) = 0.    (9)

At first sight Eqn. 9 appears to be two different cubic equations due to the ±1/(4β²) term; however, we need only consider one of these, as v is fixed and w* must lie between 0 and v. Hence we can replace sign(w) with sign(v) in Eqn. 9:

    w³ − 2vw² + v²w − sign(v)/(4β²) = 0.    (10)

For the case α = 2/3, a similar derivation yields:

    w⁴ − 3vw³ + 3v²w² − v³w + 8/(27β³) = 0,    (11)

there being no sign(w) term, as it conveniently cancels in this case. Hence w*, the solution of Eqn. 5, is either 0 or a root of the cubic polynomial in Eqn. 10 for α = 1/2, or equivalently a root of the quartic polynomial in Eqn. 11 for α = 2/3. Although it is tempting to try the same manipulation for α = 3/4, this results in a 5th-order polynomial, which can only be solved numerically.

Finding the roots of the cubic and quartic polynomials: Analytic formulae exist for the roots of cubic and quartic polynomials [23, 24], and they form the basis of our approach, as detailed in Algorithms 2 and 3. In both the cubic and quartic cases, the computational bottleneck is the cube root operation. An alternative way of finding the roots of the polynomials in Eqn. 10 and Eqn. 11 is to use a numerical root-finder such as Newton-Raphson. In our experiments, we found Newton-Raphson to be slower and less accurate than either the analytic method or the LUT approach (see [8] for further details).
Selecting the correct roots: Given the roots of the polynomial, we need to determine which one corresponds to the global minimum of Eqn. 5. When α = 1/2, the resulting cubic equation can have: (a) 3 imaginary roots; (b) 2 imaginary roots and 1 real root; or (c) 3 real roots. In case (a), the |w|^α term means Eqn. 5 has positive derivatives around 0, and the lack of real roots implies the derivative never becomes negative, thus w* = 0. For (b), we need to compare the costs of the single real root and w = 0, an operation that can be performed efficiently using Eqn. 13 below. In (c) we have 3 real roots. Examining Eqn. 7 and Eqn. 8, we see that the squaring operation introduces a spurious root above v when v > 0, and below v when v < 0. This root can be ignored, since w* must lie between 0 and v. The cost function in Eqn. 5 has a local maximum near 0 and a local minimum between this local maximum and v. Hence, of the 2 remaining roots, the one further from 0 has the lower cost. Finally, we need to compare the cost of this root with that of w = 0 using Eqn. 13.

We can use similar arguments for the α = 2/3 case. Here we can potentially have: (a) 4 imaginary roots; (b) 2 imaginary and 2 real roots; or (c) 4 real roots. In (a), w* = 0 is the only solution. For (b), we pick the larger of the 2 real roots and compare the costs with w = 0 using Eqn. 13, similar to the case of 3 real roots for the cubic. Case (c) never occurs: the final quartic polynomial in Eqn. 11 was derived with a cubing operation from the analytic derivative. This introduces 2 spurious roots into the final solution, both of which are imaginary, thus only cases (a) and (b) are possible.

In both the cubic and quartic cases, we need an efficient way to pick between w = 0 and a real root that lies between 0 and v. We now describe a direct mechanism for doing this that does not involve the expensive computation of the cost function in Eqn. 5.¹

¹This would require the calculation of a fractional power, which is slow, particularly if α = 2/3.
Let r be the non-zero real root; 0 must be chosen if it has lower cost in Eqn. 5. This implies:

    |r|^α + (β/2)(r − v)² > (β/2)v²    (12)
    ⇒ sign(r)|r|^{α−1} + (β/2)(r − 2v) ≥ 0,    r ≠ 0.

Since we are only considering roots of the polynomial, we can use Eqn. 6 to eliminate sign(r)|r|^{α−1} from Eqn. 6 and Eqn. 12, yielding the condition

    r ≥ ((α − 1)/(α − 2)) · 2v,    v ≥ 0,    (13)

since sign(r) = sign(v). So w* = r if r is between 2v/3 and v in the α = 1/2 case, or between v/2 and v in the α = 2/3 case; otherwise w* = 0. Using this result, picking w* can be coded efficiently, e.g. lines 12-16 of Algorithm 2. Overall, the analytic approach is slower than the LUT, but it gives an exact solution to the w sub-problem.
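For concreteness, here is a scalar Python reference implementation of the α = 1/2 case via the cubic of Eqn. 10 and the selection rule of Eqn. 13; numpy.roots is used for clarity, while the closed-form root formulae appear in Algorithm 2.

```python
import numpy as np

def solve_w_half(v, beta):
    """w sub-problem of Eqn. (5) for alpha = 1/2, via the cubic Eqn. (10)."""
    if v == 0.0:
        return 0.0
    # w^3 - 2 v w^2 + v^2 w - sign(v) / (4 beta^2) = 0
    coeffs = [1.0, -2.0 * v, v ** 2, -np.sign(v) / (4.0 * beta ** 2)]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    s, a = np.sign(v), abs(v)
    # Eqn. (13): keep roots between 2v/3 and v (in the direction of v)
    ok = real[(real * s >= 2.0 * a / 3.0) & (real * s <= a)]
    # of the surviving roots, the one furthest from 0 has the lowest cost
    return 0.0 if ok.size == 0 else s * np.max(ok * s)
```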
2.3 Summary of algorithm

We now give the overall algorithm using a LUT for the w sub-problem. As outlined in Algorithm 1 below, we minimize Eqn. 2 by alternating the x and w sub-problems T times, before increasing the value of β and repeating. Starting with some small value β₀, we scale it by a factor β_Inc until it exceeds some fixed value β_Max. In practice, we find that a single inner iteration suffices (T = 1), although more can sometimes be needed when β is small.

Algorithm 1 Fast image deconvolution using hyper-Laplacian priors
Require: Blurred image y, kernel k, regularization weight λ, exponent α
Require: β regime parameters: β₀, β_Inc, β_Max
Require: Number of inner iterations T
1: β = β₀, x = y
2: Precompute constant terms in Eqn. 4.
3: while β < β_Max do
4:   for i = 1 to T do
5:     Given x, solve Eqn. 5 for all pixels using a LUT to give w
6:     Given w, solve Eqn. 4 to give x
7:   end for
8:   β = β_Inc · β
9: end while
10: return Deconvolved image x

As with any non-convex optimization problem, it is difficult to derive guarantees regarding the convergence of Algorithm 1. However, we can be sure that the global optimum of each sub-problem will be found, given the fixed x and w from the previous iteration. Like other methods that use this form of alternating minimization [5, 6, 22], there is little theoretical guidance for setting the β schedule. We find that the simple scheme shown in Algorithm 1 works well to minimize Eqn. 2 and its proxy Eqn. 1. The experiments in Section 3 show our scheme achieves very similar SNR levels to IRLS, but at a greatly lower computational cost.
3 Experiments

We evaluate the deconvolution performance of our algorithm on images, comparing it to numerous other methods: (i) an ℓ2 (Gaussian) prior on image gradients; (ii) Lucy-Richardson [15]; (iii) the algorithm of Wang et al. [22] using a total-variation (TV) norm prior, and (iv) a variant of [22] using an ℓ1 (Laplacian) prior; and (v) the IRLS approach of Levin et al. [10] using a hyper-Laplacian prior with α = 1/2, 2/3, 4/5. Note that only IRLS and our method use a prior with α < 1. For the IRLS scheme, we used the implementation of [10] with default parameters, the only change being the removal of higher-order derivative filters to enable a direct comparison with other approaches. Note that IRLS and ℓ2 directly minimize Eqn. 1, while our method and the TV and ℓ1 approaches of [22] minimize the cost in Eqn. 2, using T = 1, β₀ = 1, β_Inc = 2√2, β_Max = 256. In our approach, we use α = 1/2 and α = 2/3, and compare the performance of the LUT and analytic methods as well. All runs were performed with multithreading enabled (over 4 CPU cores).

We evaluate the algorithms using a set of blurry images, created in the following way. 7 in-focus grayscale real-world images were downloaded from the web. They were then blurred by real-world camera shake kernels from [12]. 1% Gaussian noise was added, followed by quantization to 255 discrete values. In any practical deconvolution setting the blur kernel is never perfectly known; therefore, the kernel passed to the algorithms was a minor perturbation of the true kernel, to mimic kernel estimation errors. In experiments with non-perturbed kernels (not shown), the results are similar to those in Tables 3 and 1, but with slightly higher SNR levels. See Fig. 2 for an example of a kernel from [12] and its perturbed version. Our evaluation metric was the SNR between the original image x̂ and the deconvolved output x, defined as 10 log₁₀ ( ‖x̂ − μ(x̂)‖² / ‖x̂ − x‖² ), with μ(x̂) being the mean of x̂.
x?xk2 , ?(?
In Table 1 we compare the algorithms on 7 different images, all blurred with the same 19?19 kernel.
For each algorithm we exhaustively searched over different regularization weights ? to find the value
that gave the best SNR performance, as reported in the table. In Table 3 we evaluate the algorithms
with the same 512?512 image blurred by 8 different kernels (from [12]) of varying size. Again,
the optimal value of ? for each kernel/algorithm combination was chosen from a range of values
based on SNR performance. Table 2 shows the running time of several algorithms on images up
to 3072?3072 pixels. Figure 2 shows a larger 27?27 blur being deconvolved from two example
images, comparing the output of different methods.
The tables and figures show our method with ? = 2/3 and IRLS with ? = 4/5 yielding higher
quality results than other methods. However, our algorithm is around 70 to 350 times faster than
IRLS depending on whether the analytic or LUT method is used. This speedup factor is independent
of image size, as shown by Table 2. The ?1 method of [22] is the best of the other methods, being
of comparable speed to ours but achieving lower SNR scores. The SNR results for our method are
almost the same whether we use LUTs or analytic approach. Hence, in practice, the LUT method is
preferred, since it is approximately 5 times faster than the analytic method and can be used for any
value of ?.
Image
#
1
2
3
4
5
6
7
Blurry
6.42
10.73
12.45
8.51
12.74
10.85
11.76
Av. SNR gain
Av. Time
(secs)
?2
14.13
17.56
19.30
16.02
16.59
15.46
17.40
6.14
79.85
Lucy
12.54
15.15
16.68
14.27
13.28
12.00
15.22
3.67
1.55
TV
15.87
19.37
21.83
17.66
19.34
17.13
18.58
8.05
0.66
?1
16.18
19.86
22.77
18.02
20.25
17.59
18.85
8.58
0.75
IRLS
?=1/2
14.61
18.43
21.53
16.34
19.12
15.59
17.08
7.03
354
IRLS
?=2/3
15.45
19.37
22.62
17.31
19.99
16.58
17.99
7.98
354
IRLS
?=4/5
16.04
20.00
22.95
17.98
20.20
17.04
18.61
8.48
354
Ours
?=1/2
16.05
19.78
23.26
17.70
21.28
17.79
18.58
8.71
L:1.01
A:5.27
Ours
?=2/3
16.44
20.26
23.27
18.17
21.00
17.89
18.96
8.93
L:1.00
A:4.08
Table 1: Comparison of SNRs and running time of 9 different methods for the deconvolution of
7 576?864 images, blurred with the same 19?19 kernel. L=Lookup table, A=Analytic. The best
performing algorithm for each kernel is shown in bold. Our algorithm with ? = 2/3 beats IRLS
with ? = 4/5, as well as being much faster. On average, both these methods outperform ?1 , demonstrating the benefits of a sparse prior.
Image size    ℓ1      IRLS α=4/5   Ours (LUT) α=2/3   Ours (Analytic) α=2/3
256×256       0.24    78.14        0.42               0.7
512×512       0.47    256.87       0.55               2.28
1024×1024     2.34    1281.3       2.78               10.87
2048×2048     9.34    4935         10.72              44.64
3072×3072     22.40   -            24.07              100.42

Table 2: Run-times (seconds) of different methods for a range of image sizes, using a 13×13 kernel. Our LUT algorithm is more than 100 times faster than the IRLS method of [10].
Figure 2: Crops from two images (#1 and #5) being deconvolved by 4 different algorithms, including ours, using a 27×27 kernel (#7). For image #1 (blurred SNR = 7.31): ℓ2 gives SNR = 14.89 (t = 0.1s), ℓ1 gives 18.10 (t = 0.8s), ours with α = 2/3 gives 18.96 (t = 1.2s), and IRLS with α = 4/5 gives 19.05 (t = 483.9s). For image #5 (blurred SNR = 2.64): ℓ2 gives 11.58 (t = 0.1s), ℓ1 gives 13.64 (t = 0.8s), ours gives 14.15 (t = 1.2s), and IRLS gives 14.28 (t = 482.1s). In the bottom left inset, we show the original kernel from [12] (lower) and the perturbed version provided to the algorithms (upper), to make the problem more realistic. This figure is best viewed on screen, rather than in print.
Kernel #/size    Blurry   ℓ2      Lucy    TV      ℓ1      IRLS α=1/2   IRLS α=2/3   IRLS α=4/5   Ours α=1/2     Ours α=2/3
#1: 13×13        10.69    17.22   14.49   19.21   19.41   17.20        18.22        18.87        19.36          19.66
#2: 15×15        11.28    16.14   13.81   17.94   18.29   16.17        17.26        18.02        18.14          18.64
#3: 17×17        8.93     14.94   12.16   16.50   16.86   15.34        16.36        16.99        16.73          17.25
#4: 19×19        10.13    15.27   12.38   16.83   17.25   15.97        16.98        17.57        17.29          17.67
#5: 21×21        9.26     16.55   13.60   18.72   18.83   17.23        18.36        18.88        19.11          19.34
#6: 23×23        7.87     15.40   13.32   17.01   17.42   15.66        16.73        17.40        17.26          17.77
#7: 27×27        6.76     13.81   11.55   15.42   15.69   14.59        15.68        16.38        15.92          16.29
#8: 41×41        6.00     12.80   11.19   13.53   13.62   12.68        13.60        14.25        13.73          13.68
Av. SNR gain              6.40    3.95    8.03    8.31    6.74         7.78         8.43         8.33           8.67
Av. Time (sec)            57.44   1.22    0.50    0.55    271          271          271          L:0.81/A:2.15  L:0.78/A:2.23

Table 3: Comparison of SNRs and running time of 9 different methods for the deconvolution of a 512×512 image blurred by 8 different kernels. L = Lookup table, A = Analytic. Our algorithm beats all other methods in terms of quality, with the exception of IRLS on the largest kernel size. However, our algorithm is far faster than IRLS, being comparable in speed to the ℓ1 approach.
4 Discussion

We have described an image deconvolution scheme that is fast, conceptually simple and yields high quality results. Our algorithm takes a novel approach to the non-convex optimization problem arising from the use of a hyper-Laplacian prior: a splitting approach that allows the non-convexity to become separable over pixels. Using a LUT to solve this sub-problem allows for orders of magnitude speedup in the solution over existing methods. Our Matlab implementation is available online at http://cs.nyu.edu/~dilip/wordpress/?page_id=122.

A potential drawback of our method, common to the TV and ℓ1 approaches of [22], is its use of frequency-domain operations, which assume circular boundary conditions, something not present in real images. These give rise to boundary artifacts, which can be overcome to some extent with edge tapering operations. However, our algorithm is suitable for very large images, where the boundaries are a small fraction of the overall image.

Although we focus on deconvolution, our scheme can be adapted to a range of other problems which rely on natural image statistics. For example, by setting k = 1 the algorithm can be used to denoise, or if k is a defocus kernel it can be used for super-resolution. The speed offered by our algorithm makes it practical to perform these operations on the multi-megapixel images from modern cameras.
Algorithm 2: Solve Eqn. 5 for α = 1/2
Require: Target value v, weight β
1: ε = 10⁻⁶
2: {Compute intermediary terms m, t1, t2, t3:}
3: m = −sign(v)/(4β²)
4: t1 = 2v/3
5: t2 = ( −27m − 2v³ + 3√3 · √(27m² + 4mv³) )^{1/3}
6: t3 = v²/t2
7: {Compute 3 roots, r1, r2, r3:}
8: r1 = t1 + t2/(3 · 2^{1/3}) + (2^{1/3}/3) · t3
9: r2 = t1 − ((1 − √3 i)/(6 · 2^{1/3})) · t2 − ((1 + √3 i)/(3 · 2^{2/3})) · t3
10: r3 = t1 − ((1 + √3 i)/(6 · 2^{1/3})) · t2 − ((1 − √3 i)/(3 · 2^{2/3})) · t3
11: {Pick the global minimum from (0, r1, r2, r3):}
12: r = [r1, r2, r3]
13: c1 = (abs(imag(r)) < ε)  {root must be real}
14: c2 = real(r) · sign(v) > (2/3) · abs(v)  {root must obey the bound of Eqn. 13}
15: c3 = real(r) · sign(v) < abs(v)  {root < v}
16: w* = max((c1 & c2 & c3) · real(r) · sign(v)) · sign(v)
return w*

Algorithm 3: Solve Eqn. 5 for α = 2/3
Require: Target value v, weight β
1: ε = 10⁻⁶
2: {Compute intermediary terms m, t1, ..., t7:}
3: m = 8/(27β³)
4: t1 = −(9/8) · v²
5: t2 = v³/4
6: t3 = −(1/8) · m v²
7: t4 = −t3/2 + √(−m³/27 + m²v⁴/256)
8: t5 = t4^{1/3}
9: t6 = 2(−(5/18) · t1 + t5 + m/(3 · t5))
10: t7 = √(t1/3 + t6)
11: {Compute 4 roots, r1, r2, r3, r4:}
12: r1 = 3v/4 + ( t7 + √(−(t1 + t6 + t2/t7)) )/2
13: r2 = 3v/4 + ( t7 − √(−(t1 + t6 + t2/t7)) )/2
14: r3 = 3v/4 + ( −t7 + √(−(t1 + t6 − t2/t7)) )/2
15: r4 = 3v/4 + ( −t7 − √(−(t1 + t6 − t2/t7)) )/2
16: {Pick the global minimum from (0, r1, r2, r3, r4):}
17: r = [r1, r2, r3, r4]
18: c1 = (abs(imag(r)) < ε)  {root must be real}
19: c2 = real(r) · sign(v) > (1/2) · abs(v)  {root must obey the bound in Eqn. 13}
20: c3 = real(r) · sign(v) < abs(v)  {root < v}
21: w* = max((c1 & c2 & c3) · real(r) · sign(v)) · sign(v)
return w*
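A vectorized NumPy transcription of Algorithm 2 is sketched below, operating on an array of v values at once. It mirrors the pseudocode above line by line; the use of principal complex cube roots (as in a Matlab implementation) is an assumption of this sketch, and it is not the authors' released code.

```python
import numpy as np

def solve_w_half_vec(v, beta, eps=1e-6):
    """Vectorized Algorithm 2 (alpha = 1/2) for the w sub-problem, Eqn. (5).

    v : array of target values F_i^j x;  beta : current half-quadratic weight.
    """
    v = np.asarray(v, dtype=float)
    m = -np.sign(v) / (4.0 * beta ** 2)                       # line 3
    t1 = 2.0 * v / 3.0                                        # line 4
    disc = (27.0 * m ** 2 + 4.0 * m * v ** 3).astype(complex)
    t2 = (-27.0 * m - 2.0 * v ** 3
          + 3.0 * np.sqrt(3.0) * np.sqrt(disc)) ** (1.0 / 3.0)  # line 5
    t2 = np.where(t2 == 0, 1e-30, t2)     # guard the v = 0 entries
    t3 = v ** 2 / t2                                          # line 6
    c = 2.0 ** (1.0 / 3.0)
    s3 = 1j * np.sqrt(3.0)
    r1 = t1 + t2 / (3.0 * c) + (c / 3.0) * t3                 # lines 8-10
    r2 = t1 - (1.0 - s3) / (6.0 * c) * t2 - (1.0 + s3) / (3.0 * c ** 2) * t3
    r3 = t1 - (1.0 + s3) / (6.0 * c) * t2 - (1.0 - s3) / (3.0 * c ** 2) * t3
    r = np.stack([r1, r2, r3])            # (3, ...) candidate roots
    sv, av = np.sign(v), np.abs(v)
    ok = ((np.abs(r.imag) < eps)                  # line 13: real root
          & (r.real * sv > (2.0 / 3.0) * av)      # line 14: Eqn. 13 bound
          & (r.real * sv < av))                   # line 15: root < v
    w = np.max(np.where(ok, r.real * sv, 0.0), axis=0) * sv   # line 16
    return np.where(v == 0.0, 0.0, w)
```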
8
References
[1] R. Chartrand. Fast algorithms for nonconvex compressive sensing: MRI reconstruction from
very few data. In IEEE International Symposium on Biomedical Imaging (ISBI), 2009.
[2] R. Chartrand and V. Staneva. Restricted isometry properties and nonconvex compressive sensing. Inverse Problems, 24:1–14, 2008.
[3] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. Freeman. Removing camera shake
from a single photograph. ACM TOG (Proc. SIGGRAPH), 25:787–794, 2006.
[4] D. Field. What is the goal of sensory coding? Neural Computation, 6:559–601, 1994.
[5] D. Geman and G. Reynolds. Constrained restoration and recovery of discontinuities. PAMI,
14(3):367–383, 1992.
[6] D. Geman and C. Yang. Nonlinear image recovery with half-quadratic regularization. PAMI,
4:932–946, 1995.
[7] N. Joshi, L. Zitnick, R. Szeliski, and D. Kriegman. Image deblurring and denoising using color
priors. In CVPR, 2009.
[8] D. Krishnan and R. Fergus. Fast image deconvolution using hyper-Laplacian priors, supplementary material. NYU Tech. Rep. 2009, 2009.
[9] A. Levin. Blind motion deblurring using image statistics. In NIPS, 2006.
[10] A. Levin, R. Fergus, F. Durand, and W. Freeman. Image and depth from a conventional camera
with a coded aperture. ACM TOG (Proc. SIGGRAPH), 26(3):70, 2007.
[11] A. Levin and Y. Weiss. User assisted separation of reflections from a single image using a
sparsity prior. PAMI, 29(9):1647–1654, Sept 2007.
[12] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman. Understanding and evaluating blind deconvolution algorithms. In CVPR, 2009.
[13] S. Osindero, M. Welling, and G. Hinton. Topographic product models applied to natural scene
statistics. Neural Computation, 2006.
[14] J. Portilla, V. Strela, M. J. Wainwright, and E. P. Simoncelli. Image denoising using a scale
mixture of Gaussians in the wavelet domain. IEEE TIP, 12(11):1338–1351, November 2003.
[15] W. Richardson. Bayesian-based iterative method of image restoration. JOSA, 62:55–59, 1972.
[16] S. Roth and M. J. Black. Fields of Experts: A framework for learning image priors. In CVPR,
volume 2, pages 860–867, 2005.
[17] L. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms.
Physica D, 60:259–268, 1992.
[18] E. Simoncelli and E. H. Adelson. Noise removal via Bayesian wavelet coring. In ICIP, pages
379–382, 1996.
[19] C. V. Stewart. Robust parameter estimation in computer vision. SIAM Reviews, 41(3):513–537,
Sept. 1999.
[20] M. F. Tappen, B. C. Russell, and W. T. Freeman. Exploiting the sparse derivative prior for
super-resolution and image demosaicing. In SCTV, 2003.
[21] M. Wainwright and S. Simoncelli. Scale mixtures of Gaussians and the statistics of natural
images. In NIPS, pages 855–861, 1999.
[22] Y. Wang, J. Yang, W. Yin, and Y. Zhang. A new alternating minimization algorithm for total
variation image reconstruction. SIAM J. Imaging Sciences, 1(3):248–272, 2008.
[23] E. W. Weisstein. Cubic formula. http://mathworld.wolfram.com/CubicFormula.html.
[24] E. W. Weisstein. Quartic equation. http://mathworld.wolfram.com/QuarticEquation.html.
[25] M. Welling, G. Hinton, and S. Osindero. Learning sparse topographic representations with
products of Student-t distributions. In NIPS, 2002.
[26] S. Wright, R. Nowak, and M. Figueiredo. Sparse reconstruction by separable approximation.
IEEE Trans. Signal Processing, page to appear, 2009.
Ranking Measures and Loss Functions
in Learning to Rank
Wei Chen*
Chinese Academy of Sciences
[email protected]
Tie-Yan Liu
Microsoft Research Asia
[email protected]
Zhiming Ma
Chinese Academy of Sciences
[email protected]
Yanyan Lan
Chinese Academy of Sciences
[email protected]
Hang Li
Microsoft Research Asia
[email protected]
Abstract
Learning to rank has become an important research topic in machine learning.
While most learning-to-rank methods learn the ranking functions by minimizing
loss functions, it is the ranking measures (such as NDCG and MAP) that are used
to evaluate the performance of the learned ranking functions. In this work, we
reveal the relationship between ranking measures and loss functions in learning-to-rank methods, such as Ranking SVM, RankBoost, RankNet, and ListMLE. We
show that the loss functions of these methods are upper bounds of the measure-based ranking errors. As a result, the minimization of these loss functions will lead
to the maximization of the ranking measures. The key to obtaining this result is to
model ranking as a sequence of classification tasks, and define a so-called essential loss for ranking as the weighted sum of the classification errors of individual
tasks in the sequence. We have proved that the essential loss is both an upper
bound of the measure-based ranking errors, and a lower bound of the loss functions in the aforementioned methods. Our proof technique also suggests a way to
modify existing loss functions to make them tighter bounds of the measure-based
ranking errors. Experimental results on benchmark datasets show that the modifications can lead to better ranking performances, demonstrating the correctness of
our theoretical analysis.
1 Introduction
Learning to rank has become an important research topic in many fields, such as machine learning
and information retrieval. The process of learning to rank is as follows. In training, a number of
sets are given, each set consisting of objects and labels representing their rankings (e.g., in terms of
multi-level ratings1 ). Then a ranking function is constructed by minimizing a certain loss function
on the training data. In testing, given a new set of objects, the ranking function is applied to produce
a ranked list of the objects.
Many learning-to-rank methods have been proposed in the literature, with different motivations and
formulations. In general, these methods can be divided into three categories [3]. The pointwise
approach, such as subset regression [5] and McRank [10], views each single object as the learning instance. The pairwise approach, such as Ranking SVM [7], RankBoost [6], and RankNet [2],
regards a pair of objects as the learning instance. The listwise approach, such as ListNet [3] and
* The work was performed when the first and the third authors were interns at Microsoft Research Asia.
¹ In information retrieval, such a label represents the relevance of a document to the given query.
ListMLE [16], takes the entire ranked list of objects as the learning instance. Almost all these
methods learn their ranking functions by minimizing certain loss functions, namely the pointwise,
pairwise, and listwise losses. On the other hand, however, it is the ranking measures that are used
to evaluate the performance of the learned ranking functions. Taking information retrieval as an example, measures such as Normalized Discounted Cumulative Gain (NDCG) [8] and Mean Average
Precision (MAP) [1] are widely used, which obviously differ from the loss functions used in the
aforementioned methods. In such a situation, a natural question to ask is whether the minimization
of the loss functions can really lead to the optimization of the ranking measures.2
Actually people have tried to answer this question. It has been proved in [5] and [10] that the regression and classification based losses used in the pointwise approach are upper bounds of (1−NDCG).
However, for the pairwise and listwise approaches, which are regarded as the state-of-the-art of
learning to rank [3, 11], limited results have been obtained. The motivation of this work is to reveal
the relationship between ranking measures and the pairwise/listwise losses.
The problem is non-trivial to solve, however. Note that ranking measures like NDCG and MAP
are defined with the labels of objects (i.e., in terms of multi-level ratings). Therefore it is relatively
easy to establish the connection between the pointwise losses and the ranking measures, since the
pointwise losses are also defined with the labels of objects. In contrast, the pairwise and listwise
losses are defined with the partial or total order relations among objects, rather than their individual
labels. As a result, it is much more difficult to bridge the gap between the pairwise/listwise losses
and the ranking measures.
To tackle the challenge, we propose making a transformation of the labels on objects to a permutation
set. All the permutations in the set are consistent with the labels, in the sense that an object with a
higher rating is ranked before another object with a lower rating in the permutation. We then define
an essential loss for ranking on the permutation set as follows. First, for each permutation, we
construct a sequence of classification tasks, with the goal of each task being to distinguish an object
from the objects ranked below it in the permutation. Second, the weighted sum of the classification
errors of individual tasks in the sequence is computed. Third, the essential loss is defined as the
minimum value of the weighted sum over all the permutations in the set.
Our study shows that the essential loss has several nice properties, which help us reveal the relationship between ranking measures and the pairwise/listwise losses. First, it can be proved that the
essential loss is an upper bound of measure-based ranking errors such as (1−NDCG) and (1−MAP).
Furthermore, the zero value of the essential loss is a sufficient and necessary condition for the zero
values of (1−NDCG) and (1−MAP). Second, it can be proved that the pairwise losses in Ranking
SVM, RankBoost, and RankNet, and the listwise loss in ListMLE are all upper bounds of the essential loss. As a consequence, we come to the conclusion that the loss functions used in these methods
can bound (1−NDCG) and (1−MAP) from above. In other words, the minimization of these loss
functions can effectively maximize NDCG and MAP.
The proofs of the above results suggest a way to modify existing pairwise/listwise losses so as
to make them tighter bounds of (1−NDCG). We hypothesize that tighter bounds will lead to better
ranking performances; we tested this hypothesis using benchmark datasets. The experimental results
show that the methods minimizing the modified losses can outperform the original methods, as well
as many other baseline methods. This validates the correctness of our theoretical analysis.
2 Related work
In this section, we review the widely-used loss functions in learning to rank, ranking measures in
information retrieval, and previous work on the relationship between loss functions and ranking
measures.
² Note that recently people try to directly optimize ranking measures [17, 12, 14, 18]. The relationship between ranking measures and the loss functions in such work is explicitly known. However, for other methods, the relationship is unclear.
2.1 Loss functions in learning to rank
Let x = {x_1, ..., x_n} be the objects to be ranked.³ Suppose the labels of the objects are given
as multi-level ratings L = {l(1), ..., l(n)}, where l(i) ∈ {r_1, ..., r_K} denotes the label of x_i [11].
Without loss of generality, we assume l(i) ∈ {0, 1, ..., K−1} and name the corresponding labels
as K-level ratings. If l(i) > l(j), then x_i should be ranked before x_j. Let F be the function class
and f ∈ F be a ranking function. The optimal ranking function is learned from the training data
by minimizing a certain loss function defined on the objects, their labels, and the ranking function.
Several approaches have been proposed to learn the optimal ranking function.
In the pointwise approach, the loss function is defined on the basis of single objects. For example,
in subset regression [5], the loss function is as follows,
L_r(f; x, L) = \sum_{i=1}^{n} \big(f(x_i) - l(i)\big)^2.   (1)
In the pairwise approach, the loss function is defined on the basis of pairs of objects whose labels
are different. For example, the loss functions of Ranking SVM [7], RankBoost [6], and RankNet [2]
all have the following form,
L_p(f; x, L) = \sum_{s=1}^{n-1} \sum_{i=1,\, l(i) < l(s)}^{n} \phi\big(f(x_s) - f(x_i)\big),   (2)
where the φ functions are the hinge function (φ(z) = (1 − z)_+), the exponential function (φ(z) = e^{−z}), and the logistic function (φ(z) = log(1 + e^{−z})), respectively, for the three algorithms.
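To make the role of φ concrete, the following is a small Python sketch of Eqn. (2) with the three choices of φ; the naming is ours, and the O(n²) double loop is purely illustrative.

```python
import numpy as np

def pairwise_loss(scores, labels, phi):
    """L_p of Eqn. (2): sum of phi(f(x_s) - f(x_i)) over pairs with l(s) > l(i)."""
    total = 0.0
    n = len(scores)
    for s in range(n):
        for i in range(n):
            if labels[s] > labels[i]:   # only pairs whose first label is larger
                total += phi(scores[s] - scores[i])
    return total

hinge = lambda z: max(1.0 - z, 0.0)            # Ranking SVM
exponential = lambda z: np.exp(-z)             # RankBoost
logistic = lambda z: np.log(1.0 + np.exp(-z))  # RankNet
```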
In the listwise approach, the loss function is defined on the basis of all the n objects. For example,
in ListMLE [16], the following loss function is used,
L_l(f; x, y) = \sum_{s=1}^{n-1} \Big( -f(x_{y(s)}) + \ln \sum_{i=s}^{n} \exp\big(f(x_{y(i)})\big) \Big),   (3)
where y is a randomly selected permutation (i.e., ranked list) that satisfies the following condition:
for any two objects xi and xj , if l(i) > l(j), then xi is ranked before xj in y. Notation y(i)
represents the index of the object ranked at the i-th position in y.
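A direct transcription of Eqn. (3) into Python (our own sketch; y is given as a list of object indices, best first, using 0-based positions):

```python
import numpy as np

def listmle_loss(scores, y):
    """L_l of Eqn. (3) for a permutation y consistent with the labels."""
    loss = 0.0
    for s in range(len(y) - 1):
        tail = np.array([scores[j] for j in y[s:]])    # scores of y(s), ..., y(n)
        loss += -scores[y[s]] + np.log(np.sum(np.exp(tail)))
    return loss
```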
2.2 Ranking measures
Several ranking measures have been proposed in the literature to evaluate the performance of a
ranking function. Here we introduce two of them, NDCG [8] and MAP [1], which are popularly
used in information retrieval.
NDCG is defined with respect to K-level ratings L as
NDCG(f; x, L) = \frac{1}{N_n} \sum_{r=1}^{n} G\big(l(\pi_f(r))\big) D(r),
where π_f is the ranked list produced by ranking function f, G is an increasing function (named the gain function), D is a decreasing function (named the position discount function), and N_n = \max_\pi \sum_{r=1}^{n} G\big(l(\pi(r))\big) D(r). In practice, one usually sets G(z) = 2^z − 1; D(z) = 1/\log_2(1+z) if z ≤ C, and D(z) = 0 if z > C (C is a fixed integer).
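As an illustration, NDCG with these usual choices of G and D can be computed as follows (a sketch of ours; ties and the degenerate all-zero-label case are ignored):

```python
import numpy as np

def ndcg(scores, labels, C=10):
    """NDCG with G(z) = 2^z - 1 and D(z) = 1/log2(1 + z), truncated at C."""
    def dcg(order):
        return sum((2.0**labels[i] - 1.0) / np.log2(1.0 + r)
                   for r, i in enumerate(order[:C], start=1))
    pred = np.argsort(-np.asarray(scores, dtype=float))   # ranked list pi_f
    ideal = np.argsort(-np.asarray(labels, dtype=float))  # a maximizing permutation
    return dcg(pred) / dcg(ideal)                         # dcg(ideal) = N_n
```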
MAP is defined with respect to 2-level ratings as follows,
MAP(f; x, L) = \frac{1}{n_1} \sum_{s:\, l(\pi_f(s))=1} \frac{\sum_{i \le s} I_{\{l(\pi_f(i))=1\}}}{s},   (4)
where I_{\{\cdot\}} is the indicator function, and n_1 is the number of objects with label 1. When the labels are given in terms of K-level ratings (K > 2), a common practice of using MAP is to fix a level k*, and regard all the objects whose levels are lower than k* as having label 0, and regard the other objects as having label 1 [11].
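Similarly, Eqn. (4) for a single query with binary labels (a sketch of ours; it assumes at least one relevant object):

```python
import numpy as np

def average_precision(scores, labels):
    """MAP of Eqn. (4) for one query; labels are 0/1 relevance judgments."""
    order = np.argsort(-np.asarray(scores, dtype=float))  # ranked list pi_f
    rel = np.asarray(labels)[order]                       # relevance down the ranking
    hits = np.cumsum(rel)                                 # relevant objects among top s
    pos = np.flatnonzero(rel == 1) + 1.0                  # 1-based relevant positions s
    return float(np.mean(hits[rel == 1] / pos))
```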
From the definitions of NDCG and MAP, we can see that their maximum values are both one.
Therefore, we can consider (1−NDCG) and (1−MAP) as ranking errors. For ease of reference, we
call them measure-based ranking errors.
³ For example, for information retrieval, x represents the documents associated with a query.
2.3 Previous bounds
For the pointwise approach, the following results have been obtained in [5] and [10].4
The regression based pointwise loss is an upper bound of (1−NDCG):
1 − NDCG(f; x, L) ≤ \frac{\sqrt{2}}{N_n} \Big( \sum_{i=1}^{n} D(i)^2 \Big)^{1/2} L_r(f; x, L)^{1/2}.
The classification based pointwise loss is also an upper bound of (1−NDCG):
1 − NDCG(f; x, L) ≤ \frac{15\sqrt{2}}{N_n} \Big( \sum_{i=1}^{n} D(i)^2 − n \prod_{i=1}^{n} D(i)^{2/n} \Big)^{1/2} \Big( \sum_{i=1}^{n} I_{\{\hat{l}(i) \ne l(i)\}} \Big)^{1/2},
where \hat{l}(i) is the label of object x_i predicted by the classifier, in the setting of 5-level ratings.
For the pairwise approach, the following result has been obtained [9],
1 − MAP(f; x, L) ≤ 1 − \frac{1}{n_1} \Big( L_p(f; x, L) + C_{n_1+1}^{2} \Big)^{-1} \Big( \sum_{i=1}^{n_1} \sqrt{i} \Big)^{2}.
According to the above results, minimizing the regression and classification based pointwise losses
will minimize (1−NDCG). Note that the zero values of these two losses are sufficient but not necessary conditions for the zero value of (1−NDCG). That is, when (1−NDCG) is zero, the loss
functions may still be very large [10]. For the pairwise losses, the result is even weaker: their zero
values are not even sufficient for the zero value of (1−MAP).
To the best of our knowledge, there was no other theoretical result for the pairwise/listwise losses.
Given that the pairwise and listwise approaches are regarded as the state-of-the-art in learning to
rank [3, 11], it is very meaningful and important to perform more comprehensive analysis on these
two approaches.
3 Main results
In this section, we present our main results on the relationship between ranking measures and the
pairwise/listwise losses. The basic conclusion is that many pairwise and listwise losses are upper
bounds of a quantity which we call the essential loss, and the essential loss is an upper bound of
both (1−NDCG) and (1−MAP). Furthermore, the zero value of the essential loss is a sufficient and
necessary condition for the zero values of (1−NDCG) and (1−MAP).
3.1 Essential loss: ranking as a sequence of classifications
In this subsection, we describe the essential loss for ranking.
First, we propose an alternative representation of the labels of objects (i.e., multi-level ratings). The
basic idea is to construct a permutation set, with all the permutations in the set being consistent with
the labels. The definition that a permutation is consistent with multi-level ratings is given as below.
Definition 1. Given multi-level ratings L and a permutation y, we say y is consistent with L if, ∀i, s ∈ {1, ..., n} satisfying i < s, we always have l(y(i)) ≥ l(y(s)), where y(i) represents the index of the object that is ranked at the i-th position in y. We denote Y_L = {y | y is consistent with L}.
According to the definition, it is clear that the NDCG and MAP of a ranking function equal one, if
and only if the ranked list (permutation) given by the ranking function is consistent with the labels.
Second, given each permutation y ∈ Y_L, we decompose the ranking of objects x into several sequential steps. For each step s, we distinguish x_{y(s)}, the object ranked at the s-th position in y, from all the other objects ranked below the s-th position in y, using ranking function f.⁵ Specifically, we denote x^{(s)} = {x_{y(s)}, ..., x_{y(n)}} and define a classifier based on f, whose target output is y(s),
T_f(x^{(s)}) = \arg\max_{j \in \{y(s), ..., y(n)\}} f(x_j).   (5)
⁴ Note that the bounds given in the original papers of [5] and [10] are with respect to DCG. Here we give their equivalent forms in terms of NDCG, and set P(·|x_i, S) = δ_{l(i)}(·) in the bound of [5], for ease of comparison.
⁵ For simplicity and clarity, we assume f(x_i) ≠ f(x_j) ∀i ≠ j, such that the classifier will have a unique output. It can be proved (see [4]) that the main results in this paper still hold without this assumption.
It is clear that there are n − s possible outputs of this classifier, i.e., {y(s), ..., y(n)}. The 0–1 loss for this classification task can be written as follows, where the second equality is based on the definition of T_f:
l_s(f; x^{(s)}, y(s)) = I_{\{T_f(x^{(s)}) \ne y(s)\}} = 1 − \prod_{i=s+1}^{n} I_{\{f(x_{y(s)}) > f(x_{y(i)})\}}.
We give a simple example in Figure 1 to illustrate the aforementioned process of decomposition.
[Figure 1: Modeling ranking as a sequence of classifications. The figure steps through y = (A, B, C) and π = (B, A, C): at each step the object at the top of y is classified against the objects below it and then removed.]
Suppose there are three objects, A, B, and C, and a permutation y = (A, B, C). Suppose the output
of the ranking function for these objects is (2, 3, 1), and accordingly the predicted ranked list is
π = (B, A, C). At step one of the decomposition, the ranking function predicts object B to be on
the top of the list. However, A should be on the top according to y. Therefore, a prediction error
occurs. For step two, we remove A from both y and π. Then the ranking function predicts object B
to be on the top of the remaining list. This is in accordance with y and there is no prediction error.
After that, we further remove object B, and it is easy to verify there is no prediction error in step
three either. Overall, the ranking function makes one error in this sequence of classification tasks.
Third, we assign a non-negative weight β(s) (s = 1, ..., n−1) to the classification task at the s-th step, representing its importance to the entire sequence. We compute the weighted sum of the classification errors of all individual tasks,
L_β(f; x, y) \triangleq \sum_{s=1}^{n-1} \beta(s) \Big( 1 − \prod_{i=s+1}^{n} I_{\{f(x_{y(s)}) > f(x_{y(i)})\}} \Big),   (6)
and then define the minimum value of the weighted sum over all the permutations in Y_L as the essential loss for ranking:
L_β(f; x, L) = \min_{y \in Y_L} L_β(f; x, y).   (7)
According to the above definition of the essential loss, we can obtain the following nice property. Denote the ranked list produced by f as π_f. Then it is easy to verify that
L_β(f; x, L) = 0 ⟺ ∃y ∈ Y_L satisfying L_β(f; x, y) = 0 ⟺ π_f = y ∈ Y_L.
In other words, the essential loss is zero if and only if the permutation given by the ranking function is consistent with the labels. Further considering the discussion on consistent permutations at the beginning of this subsection, we come to the conclusion that the zero value of the essential loss is a sufficient and necessary condition for the zero values of (1−NDCG) and (1−MAP).
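The definition in Eqns. (6)–(7) can be checked directly on toy data by brute force over the consistent permutations; the sketch below (ours; exponential in n and for illustration only) uses 0-based positions, so beta takes s in {0, ..., n−2}.

```python
import itertools

def essential_loss(scores, labels, beta):
    """Brute-force essential loss of Eqn. (7)."""
    n = len(scores)
    best = float("inf")
    for y in itertools.permutations(range(n)):
        # y in Y_L: labels must be non-increasing along y (Definition 1)
        if any(labels[y[i]] < labels[y[i + 1]] for i in range(n - 1)):
            continue
        loss = 0.0
        for s in range(n - 1):
            # step s is correct iff y(s) outscores every object ranked below it
            correct = all(scores[y[s]] > scores[y[i]] for i in range(s + 1, n))
            loss += beta(s) * (0.0 if correct else 1.0)
        best = min(best, loss)
    return best
```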
3.2 Essential loss: upper bound of measure-based ranking errors
In this subsection, we show that the essential loss is an upper bound of (1−NDCG) and (1−MAP) when specific weights β(s) are used.
Theorem 1. Given K-level rating data (x, L) with n_k objects having label k and \sum_{i=k^*}^{K-1} n_i > 0, then ∀f the following inequalities hold:
(1) 1 − NDCG(f; x, L) ≤ \frac{1}{N_n} L_{β_1}(f; x, L), where β_1(s) = G\big(l(y(s))\big) D(s), ∀y ∈ Y_L;
(2) 1 − MAP(f; x, L) ≤ \frac{1}{\sum_{i=k^*}^{K-1} n_i} L_{β_2}(f; x, L), where β_2(s) ≡ 1.
Proof. (1) We first prove the inequality for (1−NDCG). To begin, we reformulate NDCG using the permutation set Y_L. This can be done by changing the index of the sum in NDCG from the rank position r in π_f to the rank position s in y ∈ Y_L. Considering that s = y^{-1}(π_f(r)) and r = π_f^{-1}(y(s)), it is easy to verify that
NDCG(f; x, L) = \frac{1}{N_n} \sum_{s=1}^{n} G\big(l(\pi_f(\pi_f^{-1}(y(s))))\big) D\big(\pi_f^{-1}(y(s))\big) = \frac{1}{N_n} \sum_{s=1}^{n} G\big(l(y(s))\big) D\big(\pi_f^{-1}(y(s))\big).
Second, we consider the essential loss case by case. Note that
L_{β_1}(f; x, L) = \min_{y \in Y_L} \sum_{s=1}^{n-1} G\big(l(y(s))\big) D(s) \Big( 1 − \prod_{i=s+1}^{n} I_{\{\pi_f^{-1}(y(s)) < \pi_f^{-1}(y(i))\}} \Big).
Then, ∀y ∈ Y_L, if position s satisfies \prod_{i=s+1}^{n} I_{\{\pi_f^{-1}(y(s)) < \pi_f^{-1}(y(i))\}} = 1 (i.e., ∀i > s, π_f^{-1}(y(s)) < π_f^{-1}(y(i))), we have π_f^{-1}(y(s)) ≤ s. As a consequence, D(s) \prod_{i=s+1}^{n} I_{\{\cdot\}} = D(s) ≤ D\big(\pi_f^{-1}(y(s))\big). Otherwise, if \prod_{i=s+1}^{n} I_{\{\cdot\}} = 0, it is easy to see that D(s) \prod_{i=s+1}^{n} I_{\{\cdot\}} = 0 ≤ D\big(\pi_f^{-1}(y(s))\big). To sum up, ∀s ∈ {1, 2, ..., n−1}, D(s) \prod_{i=s+1}^{n} I_{\{\pi_f^{-1}(y(s)) < \pi_f^{-1}(y(i))\}} ≤ D\big(\pi_f^{-1}(y(s))\big). Further considering that π_f^{-1}(y(n)) ≤ n and D(·) is a decreasing function, we have D(n) ≤ D\big(\pi_f^{-1}(y(n))\big). As a result, we obtain
1 − NDCG(f; x, L) = \frac{1}{N_n} \sum_{s=1}^{n} G\big(l(y(s))\big) \Big( D(s) − D\big(\pi_f^{-1}(y(s))\big) \Big) ≤ \frac{1}{N_n} L_{β_1}(f; x, L).
(2) We then prove the inequality for (1−MAP). First, we prove the result for 2-level ratings. Given 2-level rating data (x, L), it can be proved (see Lemma 1 in [4]) that L_{β_2}(f; x, L) = n_1 − i_0 + 1, where i_0 denotes the position of the first object with label 0 in π_f, and i_0 ≤ n_1 + 1. We then consider
n_1 \big(1 − MAP(f; x, L)\big) = n_1 − \sum_{s:\, l(\pi_f(s))=1} \frac{\sum_{i \le s} I_{\{l(\pi_f(i))=1\}}}{s}
case by case. If i_0 > n_1 (i.e., the first object with label 0 is ranked after position n_1 in π_f), then all the objects with label 1 are ranked before the objects with label 0. Thus n_1(1 − MAP(f; x, L)) = n_1 − n_1 = 0 = L_{β_2}(f; x, L). If i_0 ≤ n_1, there are i_0 − 1 objects with label 1 ranked before all the objects with label 0. Thus n_1(1 − MAP(f; x, L)) ≤ n_1 − i_0 + 1 = L_{β_2}(f; x, L). This proves the theorem for 2-level ratings.
Second, given K-level rating data (x, L), we denote the 2-level ratings induced by L as L′. Then it is easy to verify Y_L ⊆ Y_{L′}. As a result, we have
L_{β_2}(f; x, L′) = \min_{y \in Y_{L′}} L_{β_2}(f; x, y) ≤ \min_{y \in Y_L} L_{β_2}(f; x, y) = L_{β_2}(f; x, L).
Using the result for 2-level ratings, we obtain
1 − MAP(f; x, L) = 1 − MAP(f; x, L′) ≤ \frac{1}{\sum_{i=k^*}^{K-1} n_i} L_{β_2}(f; x, L′) ≤ \frac{1}{\sum_{i=k^*}^{K-1} n_i} L_{β_2}(f; x, L).
3.3 Essential loss: lower bound of loss functions
In this section, we show that many pairwise/listwise losses are upper bounds of the essential loss.
Theorem 2. The pairwise losses in Ranking SVM, RankBoost, and RankNet, and the listwise loss in ListMLE are all upper bounds of the essential loss, i.e.,
(1) L_β(f; x, L) ≤ \max_{1 \le s \le n-1} β(s) \, L_p(f; x, L);
(2) L_β(f; x, L) ≤ \frac{1}{\ln 2} \max_{1 \le s \le n-1} β(s) \, L_l(f; x, y), ∀y ∈ Y_L.
Proof. (1) We first prove the inequality for the pairwise losses. To begin, we reformulate the pairwise losses using the permutation set Y_L:
L_p(f; x, L) = \sum_{s=1}^{n-1} \sum_{i=s+1,\, l(y(s)) \ne l(y(i))}^{n} \phi\big(f(x_{y(s)}) − f(x_{y(i)})\big) = \sum_{s=1}^{n-1} \sum_{i=s+1}^{n} a\big(y(i), y(s)\big) \, \phi\big(f(x_{y(s)}) − f(x_{y(i)})\big),
where y is an arbitrary permutation in Y_L, a(i, j) = 1 if l(i) ≠ l(j), and a(i, j) = 0 otherwise. Note that only those pairs whose first object has a larger label than the second one are counted in the pairwise loss. Thus, the value of the pairwise loss is equal ∀y ∈ Y_L.
Second, we consider the value of a(T_f(x^{(s)}), y(s)) case by case. ∀y and ∀s ∈ {1, 2, ..., n−1}, if a(T_f(x^{(s)}), y(s)) = 1 (i.e., ∃i_0 > s satisfying l(y(i_0)) ≠ l(y(s)) and f(x_{y(i_0)}) > f(x_{y(s)})), then, considering that the function φ in Ranking SVM, RankBoost, and RankNet is non-negative, non-increasing, and satisfies φ(0) = 1, we have
\sum_{i=s+1}^{n} a\big(y(i), y(s)\big) \, \phi\big(f(x_{y(s)}) − f(x_{y(i)})\big) ≥ a\big(y(i_0), y(s)\big) \, \phi\big(f(x_{y(s)}) − f(x_{y(i_0)})\big) = \phi\big(f(x_{y(s)}) − f(x_{y(i_0)})\big) ≥ 1 = a\big(T_f(x^{(s)}), y(s)\big).
If a(T_f(x^{(s)}), y(s)) = 0, it is clear that \sum_{i=s+1}^{n} a\big(y(i), y(s)\big) \, \phi\big(f(x_{y(s)}) − f(x_{y(i)})\big) ≥ 0 = a\big(T_f(x^{(s)}), y(s)\big). Therefore,
\sum_{s=1}^{n-1} β(s) \sum_{i=s+1}^{n} a\big(y(i), y(s)\big) \, \phi\big(f(x_{y(s)}) − f(x_{y(i)})\big) ≥ \sum_{s=1}^{n-1} β(s) \, a\big(T_f(x^{(s)}), y(s)\big).   (8)
Third, it can be proved (see Lemma 2 in [4]) that the following inequality holds:
L_β(f; x, L) ≤ \max_{y \in Y_L} \sum_{s=1}^{n-1} β(s) \, a\big(T_f(x^{(s)}), y(s)\big).
Considering inequality (8) and noticing that the pairwise losses are equal ∀y ∈ Y_L, we have
L_β(f; x, L) ≤ \max_{y \in Y_L} \sum_{s=1}^{n-1} β(s) \sum_{i=s+1}^{n} a\big(y(i), y(s)\big) \, \phi\big(f(x_{y(s)}) − f(x_{y(i)})\big) ≤ \max_{1 \le s \le n-1} β(s) \, L_p(f; x, L).
(2) We then prove the inequality for the loss function of ListMLE. Again, we prove the result case by case. Consider the loss of ListMLE in Eq. (3). ∀y and ∀s ∈ {1, 2, ..., n−1}, if I_{\{T_f(x^{(s)}) \ne y(s)\}} = 1 (i.e., ∃i_0 > s satisfying f(x_{y(i_0)}) > f(x_{y(s)})), then e^{f(x_{y(s)})} < \frac{1}{2} \sum_{i=s}^{n} e^{f(x_{y(i)})}. Therefore, we have −\ln \frac{e^{f(x_{y(s)})}}{\sum_{i=s}^{n} e^{f(x_{y(i)})}} > \ln 2 = \ln 2 \, I_{\{T_f(x^{(s)}) \ne y(s)\}}. If I_{\{T_f(x^{(s)}) \ne y(s)\}} = 0, then it is clear that −\ln \frac{e^{f(x_{y(s)})}}{\sum_{i=s}^{n} e^{f(x_{y(i)})}} > 0 = \ln 2 \, I_{\{T_f(x^{(s)}) \ne y(s)\}}. To sum up, we have
\sum_{s=1}^{n-1} β(s) \Big( −\ln \frac{e^{f(x_{y(s)})}}{\sum_{i=s}^{n} e^{f(x_{y(i)})}} \Big) > \sum_{s=1}^{n-1} β(s) \ln 2 \, I_{\{T_f(x^{(s)}) \ne y(s)\}} ≥ \ln 2 \min_{y \in Y_L} L_β(f; x, y) = \ln 2 \, L_β(f; x, L).
By further relaxing the inequality, we obtain
L_β(f; x, L) ≤ \frac{1}{\ln 2} \max_{1 \le s \le n-1} β(s) \, L_l(f; x, y), ∀y ∈ Y_L.
3.4 Summary
We have the following inequalities by combining the results obtained in the previous subsections.
(1) The pairwise losses in Ranking SVM, RankBoost, and RankNet are upper bounds of (1−NDCG) and (1−MAP):
1 − NDCG(f; x, L) ≤ \frac{G(K−1) D(1)}{N_n} L_p(f; x, L);
1 − MAP(f; x, L) ≤ \frac{1}{\sum_{i=k^*}^{K-1} n_i} L_p(f; x, L).
(2) The listwise loss in ListMLE is an upper bound of (1−NDCG) and (1−MAP):
1 − NDCG(f; x, L) ≤ \frac{G(K−1) D(1)}{N_n \ln 2} L_l(f; x, y), ∀y ∈ Y_L;
1 − MAP(f; x, L) ≤ \frac{1}{\ln 2 \sum_{i=k^*}^{K-1} n_i} L_l(f; x, y), ∀y ∈ Y_L.
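Using the helpers sketched earlier (pairwise_loss, hinge, ndcg), the first of these inequalities can be sanity-checked numerically; the data below are arbitrary and the check is our own, not an experiment from the paper.

```python
import numpy as np

labels = np.array([2, 1, 0, 1, 0, 2])                 # K = 3 level ratings
scores = np.array([0.3, 1.2, -0.5, 0.0, 0.7, -1.0])   # some ranking function f
K = 3
G = lambda z: 2.0**z - 1.0
D = lambda z: 1.0 / np.log2(1.0 + z)
gains = sorted((G(l) for l in labels), reverse=True)
Nn = sum(g * D(r + 1) for r, g in enumerate(gains))   # N_n = max_pi of sum G * D
lhs = 1.0 - ndcg(scores, labels)
rhs = G(K - 1) * D(1) / Nn * pairwise_loss(scores, labels, hinge)
assert lhs <= rhs + 1e-9                              # the bound holds for any f
```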
Table 1: Ranking accuracy on OHSUMED

Methods        NDCG@5   NDCG@10
Regression     0.4278   0.4110
RankNet        0.4568   0.4414
W-RankNet      0.4868   0.4604
Ranking SVM    0.4164   0.414
ListMLE        0.4471   0.4347
RankBoost      0.4494   0.4302
W-ListMLE      0.4588   0.4453
FRank          0.4588   0.4433
ListNet        0.4432   0.441
SVMMAP         0.4516   0.4319

4 Discussion
The proofs of Theorems 1 and 2 actually suggest a way to improve existing loss functions. The key
idea is to introduce weights related to β_1(s) to the loss functions so as to make them tighter bounds
of (1−NDCG).
Specifically, we introduce weights to the pairwise and listwise losses in the following way:
\tilde{L}_p(f; x, L) = \sum_{s=1}^{n-1} G\big(l(y(s))\big) \, D\Big( 1 + \sum_{k=l(y(s))+1}^{K-1} n_k \Big) \sum_{i=s+1}^{n} a\big(y(i), y(s)\big) \, \phi\big(f(x_{y(s)}) − f(x_{y(i)})\big), ∀y ∈ Y_L;
\tilde{L}_l(f; x, y) = \sum_{s=1}^{n-1} G\big(l(y(s))\big) \, D(s) \Big( −f(x_{y(s)}) + \ln \sum_{i=s}^{n} \exp\big(f(x_{y(i)})\big) \Big).
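For instance, the re-weighted pairwise loss can be sketched as follows (our own code; labels are assumed to be integers in {0, ..., K−1} and positions are 0-based):

```python
import numpy as np

def weighted_pairwise_loss(scores, labels, phi, K):
    """The re-weighted pairwise loss above, with G(z) = 2^z - 1, D(z) = 1/log2(1+z)."""
    n = len(scores)
    y = sorted(range(n), key=lambda i: -labels[i])  # a permutation consistent with L
    counts = np.bincount(labels, minlength=K)       # n_k for k = 0, ..., K-1
    G = lambda z: 2.0**z - 1.0
    D = lambda z: 1.0 / np.log2(1.0 + z)
    total = 0.0
    for s in range(n - 1):
        ls = labels[y[s]]
        pos = 1 + counts[ls + 1:].sum()             # 1 + sum of n_k over k > l(y(s))
        w = G(ls) * D(pos)
        for i in range(s + 1, n):
            if labels[y[i]] != ls:                  # a(y(i), y(s)) = 1
                total += w * phi(scores[y[s]] - scores[y[i]])
    return total
```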
It can be proved (see Proposition 1 in [4]) that the above weighted losses are still upper bounds of
(1−NDCG) and they are lower bounds of the original pairwise and listwise losses. In other words,
the above weighted loss functions are tighter bounds of (1−NDCG) than existing loss functions.
We tested the effectiveness of the weighted loss functions on the OHSUMED dataset in LETOR 3.0.6
We took RankNet and ListMLE as example algorithms. The methods that minimize the weighted
loss functions are referred to as W-RankNet and W-ListMLE. From Table 1, we can see that (1)
W-RankNet and W-ListMLE significantly outperform RankNet and ListMLE. (2) W-RankNet and
W-ListMLE also outperform other baselines on LETOR such as Regression, Ranking SVM, RankBoost, FRank [15], ListNet and SVMMAP [18]. These experimental results seem to indicate that
optimizing tighter bounds of the ranking measures can lead to better ranking performances.
5 Conclusion and future work
In this work, we have proved that many pairwise/listwise losses in learning to rank are actually upper
bounds of measure-based ranking errors. We have also shown a way to improve existing methods
by introducing appropriate weights to their loss functions. Experimental results have validated our
theoretical analysis. As future work, we plan to investigate the following issues.
(1) We have modeled ranking as a sequence of classifications, when defining the essential loss. We
believe this modeling has its general implication for ranking, and will explore its other usages.
(2) We have taken NDCG and MAP as two examples in this work. We will study whether the
essential loss is an upper bound of other measure-based ranking errors.
(3) We have taken the loss functions in Ranking SVM, RankBoost, RankNet and ListMLE as examples in this study. We plan to investigate the loss functions in other pairwise and listwise ranking
methods, such as RankCosine [13], ListNet [3], FRank [15] and QBRank [19].
(4) While we have mainly discussed the upper-bound relationship in this work, we will study
whether loss functions in existing learning-to-rank methods are statistically consistent with the essential loss and the measure-based ranking errors.
⁶ http://research.microsoft.com/~letor
References
[1] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison Wesley, May 1999.
[2] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In ICML '05: Proceedings of the 22nd International Conference on Machine Learning, pages 89–96, New York, NY, USA, 2005. ACM.
[3] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li. Learning to rank: from pairwise approach to listwise approach. In ICML '07: Proceedings of the 24th International Conference on Machine Learning, pages 129–136, New York, NY, USA, 2007. ACM.
[4] W. Chen, T.-Y. Liu, Y. Lan, Z. Ma, and H. Li. Essential loss: Bridge the gap between ranking measures and loss functions in learning to rank. Technical report, Microsoft Research, MSR-TR-2009-141, 2009.
[5] D. Cossock and T. Zhang. Statistical analysis of Bayes optimal subset ranking. Information Theory, 54:5140–5154, 2008.
[6] Y. Freund, R. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933–969, 2003.
[7] R. Herbrich, K. Obermayer, and T. Graepel. Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers, pages 115–132, Cambridge, MA, 1999. MIT.
[8] K. Järvelin and J. Kekäläinen. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4):422–446, 2002.
[9] T. Joachims. Optimizing search engines using clickthrough data. In KDD '02: Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 133–142, New York, NY, USA, 2002. ACM.
[10] P. Li, C. Burges, and Q. Wu. McRank: Learning to rank using multiple classification and gradient boosting. In NIPS '07: Advances in Neural Information Processing Systems 20, pages 897–904, Cambridge, MA, 2008. MIT.
[11] T.-Y. Liu, J. Xu, T. Qin, W.-Y. Xiong, and H. Li. LETOR: Benchmark dataset for research on learning to rank for information retrieval. In SIGIR '07 Workshop, San Francisco, 2007. Morgan Kaufmann.
[12] O. Chapelle, Q. Le, and A. Smola. Large margin optimization of ranking measures. In NIPS Workshop on Machine Learning for Web Search, 2007.
[13] T. Qin, X.-D. Zhang, M.-F. Tsai, D.-S. Wang, T.-Y. Liu, and H. Li. Query-level loss functions for information retrieval. Information Processing and Management, 44(2):838–855, 2008.
[14] M. Taylor, J. Guiver, S. Robertson, and T. Minka. SoftRank: optimizing non-smooth rank metrics. In Proceedings of the International Conference on Web Search and Web Data Mining, pages 77–86, Palo Alto, California, USA, 2008. ACM.
[15] M.-F. Tsai, T.-Y. Liu, T. Qin, H.-H. Chen, and W.-Y. Ma. FRank: a ranking method with fidelity loss. In SIGIR '07: Proceedings of the 30th Annual ACM SIGIR Conference, pages 383–390, Amsterdam, The Netherlands, 2007. ACM.
[16] F. Xia, T.-Y. Liu, J. Wang, W. Zhang, and H. Li. Listwise approach to learning to rank - theory and algorithm. In ICML '08: Proceedings of the 25th International Conference on Machine Learning, pages 1192–1199. Omnipress, 2008.
[17] J. Xu and H. Li. AdaRank: a boosting algorithm for information retrieval. In SIGIR '07: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 391–398, 2007.
[18] Y. Yue, T. Finley, F. Radlinski, and T. Joachims. A support vector method for optimizing average precision. In SIGIR '07: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 271–278, New York, NY, USA, 2007. ACM.
[19] Z. Zheng, H. Zha, T. Zhang, O. Chapelle, K. Chen, and G. Sun. A general boosting method and its application to learning ranking functions for web search. In NIPS '07: Advances in Neural Information Processing Systems 20, pages 1697–1704. MIT, Cambridge, MA, 2008.
Fast Graph Laplacian Regularized Kernel Learning
via Semidefinite–Quadratic–Linear Programming
Xiao-Ming Wu
Dept. of IE
The Chinese University of Hong Kong
[email protected]
Anthony Man-Cho So
Dept. of SE&EM
The Chinese University of Hong Kong
[email protected]
Zhenguo Li
Dept. of IE
The Chinese University of Hong Kong
[email protected]
Shuo-Yen Robert Li
Dept. of IE
The Chinese University of Hong Kong
[email protected]
Abstract
Kernel learning is a powerful framework for nonlinear data modeling. Using the
kernel trick, a number of problems have been formulated as semidefinite programs
(SDPs). These include Maximum Variance Unfolding (MVU) (Weinberger et al.,
2004) in nonlinear dimensionality reduction, and Pairwise Constraint Propagation
(PCP) (Li et al., 2008) in constrained clustering. Although in theory SDPs can
be efficiently solved, the high computational complexity incurred in numerically
processing the huge linear matrix inequality constraints has rendered the SDP
approach unscalable. In this paper, we show that a large class of kernel learning
problems can be reformulated as semidefinite-quadratic-linear programs (SQLPs),
which only contain a simple positive semidefinite constraint, a second-order cone
constraint and a number of linear constraints. These constraints are much easier to
process numerically, and the gain in speedup over previous approaches is at least
of the order m^2.5, where m is the matrix dimension. Experimental results are also
presented to show the superb computational efficiency of our approach.
1 Introduction
Kernel methods provide a principled framework for nonlinear data modeling, where the inference
in the input space can be transferred intact to any feature space by simply treating the associated kernel as inner products, or more generally, as nonlinear mappings on the data (Schölkopf &
Smola, 2002). Some well-known kernel methods include support vector machines (SVMs) (Vapnik, 2000), kernel principal component analysis (kernel PCA) (Schölkopf et al., 1998), and kernel
k-means (Shawe-Taylor & Cristianini, 2004). Naturally, an important issue in kernel methods is
kernel design. Indeed, the performance of a kernel method depends crucially on the kernel used,
where different choices of kernels often lead to quite different results. Therefore, substantial efforts
have been made to design appropriate kernels for the problems at hand. For instance, in (Chapelle
& Vapnik, 2000), parametric kernel functions are proposed, where the focus is on model selection
(Chapelle & Vapnik, 2000). The modeling capability of parametric kernels, however, is limited. A
more natural idea is to learn specialized nonparametric kernels for specific problems. For instance,
in cases where only inner products of the input data are involved, kernel learning is equivalent to the
learning of a kernel matrix. This is the main focus of recent kernel methods.
Currently, many different kernel learning frameworks have been proposed. These include spectral
kernel learning (Li & Liu, 2009), multiple kernel learning (Lanckriet et al., 2004), and the Breg1
man divergence-based kernel learning (Kulis et al., 2009). Typically, a kernel learning framework
consists of two main components: the problem formulation in terms of the kernel matrix, and an
optimization procedure for finding the kernel matrix that has certain desirable properties. As seen
in, e.g., the Maximum Variance Unfolding (MVU) method (Weinberger et al., 2004) for nonlinear
dimensionality reduction (see (So, 2007) for related discussion) and Pairwise Constraint Propagation (PCP) (Li et al., 2008) for constrained clustering, a nice feature of such a framework is that
the problem formulation often becomes straightforward. Thus, the major challenge in optimizationbased kernel learning lies in the second component, where the key is to find an efficient procedure
to obtain a positive semidefinite kernel matrix that satisfies certain properties.
Using the kernel trick, most kernel learning problems (Graepel, 2002; Weinberger et al., 2004;
Globerson & Roweis, 2007; Song et al., 2008; Li et al., 2008) can naturally be formulated as
semidefinite programs (SDPs). Although in theory SDPs can be efficiently solved, the high computational complexity has rendered the SDP approach unscalable. An effective and widely used heuristic
for speedup is to perform low-rank kernel approximation and matrix factorization (Weinberger et al.,
2005; Weinberger et al., 2007; Li et al., 2009). In this paper, we investigate the possibility of further
speedup by studying a class of convex quadratic semidefinite programs (QSDPs). These QSDPs
arise in many contexts, such as graph Laplacian regularized low-rank kernel learning in nonlinear
dimensionality reduction (Sha & Saul, 2005; Weinberger et al., 2007; Globerson & Roweis, 2007;
Song et al., 2008; Singer, 2008) and constrained clustering (Li et al., 2009). In the aforementioned
papers, a QSDP is reformulated as an SDP with O(m2 ) variables and a linear matrix inequality of
size O(m2 ) ? O(m2 ). Such a reformulation is highly inefficient and unscalable, as it has an order
of m9 time complexity (Ben-Tal & Nemirovski, 2001, Lecture 6). In this paper, we propose a novel
reformulation that exploits the structure of the QSDP and leads to a semidefinite-quadratic-linear
program (SQLP) that can be solved by the standard software SDPT3 (T?ut?unc?u et al., 2003). Such a
reformulation has the advantage that it only has one positive semidefinite constraint on a matrix of
size m ? m, one second-order cone constraint of size O(m2 ) and a number of linear constraints on
O(m2 ) variables. As a result, our reformulation is much easier to process numerically. Moreover,
a simple complexity analysis shows that the gain in speedup over previous approaches is at least
of the order m2.5 . Experimental results show that our formulation is indeed far more efficient than
previous ones.
The rest of the paper is organized as follows. We review related kernel learning problems in Section
2 and present our formulation in Section 3. Experiment results are reported in Section 4. Section 5
concludes the paper.
2 The Problems
In this section, we briefly review some kernel learning problems that arise in dimensionality reduction and constrained clustering. They include MVU (Weinberger et al., 2004), Colored MVU
(Song et al., 2008), (Singer, 2008), Pairwise Semidefinite Embedding (PSDE) (Globerson & Roweis,
2007), and PCP (Li et al., 2008). MVU maximizes the variance of the embedding while preserving
local distances of the input data. Colored MVU generalizes MVU with side information such as
class labels. PSDE derives an embedding that strictly respects known similarities, in the sense that
objects known to be similar are always closer in the embedding than those known to be dissimilar.
PCP is designed for constrained clustering, which embeds the data on the unit hypersphere such that
two objects that are known to be from the same cluster are embedded to the same point, while two
objects that are known to be from different clusters are embedded orthogonally. In particular, PCP
seeks the smoothest mapping for such an embedding, thereby propagating pairwise constraints.
Initially, each of the above problems is formulated as an SDP, whose kernel matrix K is of size n × n,
where n denotes the number of objects. Since such an SDP is computationally expensive, one can
try to reduce the problem size by using graph Laplacian regularization. In other words, one takes
K = QY Q^T, where Q ∈ R^{n×m} consists of the smoothest m eigenvectors of the graph Laplacian
(m ≪ n), and Y is of size m × m (Sha & Saul, 2005; Weinberger et al., 2007; Song et al., 2008;
Globerson & Roweis, 2007; Singer, 2008; Li et al., 2009). The learning of K is then reduced to
the learning of Y , leading to a quadratic semidefinite program (QSDP) that is similar to a standard
quadratic program (QP), except that the feasible set of a QSDP resides in the positive semidefinite
cone as well. The intuition behind this low-rank kernel approximation is that a kernel matrix of the
form K = QY QT actually, to some degree, preserves the proximity of objects in the feature space.
Detailed justification can be found in the related work mentioned above.
Next, we use MVU and PCP as representatives to demonstrate how the SDP formulations emerge
from nonlinear dimensionality reduction and constrained clustering.
2.1 MVU
The SDP of MVU (Weinberger et al., 2004) is as follows:
\max_K \; \mathrm{tr}(K) = I \bullet K   (1)
s.t. \; \sum_{i,j=1}^{n} k_{ij} = 0,   (2)
k_{ii} + k_{jj} − 2k_{ij} = d_{ij}^2, ∀(i, j) ∈ N,   (3)
K ⪰ 0,   (4)
where K = (k_{ij}) denotes the kernel matrix to be learned, I denotes the identity matrix, tr(·) denotes
the trace of a square matrix, • denotes the element-wise dot product between matrices, d_{ij} denotes
the Euclidean distance between the i-th and j-th objects, and N denotes the set of neighbor pairs,
whose distances are to be preserved in the embedding.
The constraint in (2) centers the embedding at the origin, thus removing the translation freedom.
The constraints in (3) preserve local distances. The constraint K ⪰ 0 in (4) specifies that K must
be symmetric and positive semidefinite, which is necessary since K is taken as the inner product
matrix of the embedding. Note that given the constraint in (2), the variance of the embedding
is characterized by V(K) = \frac{1}{2n} \sum_{i,j} (k_{ii} + k_{jj} − 2k_{ij}) = \mathrm{tr}(K) (Weinberger et al., 2004) (see
related discussion in (So, 2007), Chapter 4). Thus, the SDP in (1–4) maximizes the variance of the
embedding while keeping local distances unchanged. After K is obtained, kernel PCA is applied to
K to compute the low-dimensional embedding.
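For illustration, the SDP (1)–(4) can be written almost verbatim in CVXPY; this is our own sketch (the paper does not use CVXPY), and it makes plain the O(n²)-variable formulation that becomes expensive as n grows.

```python
import cvxpy as cp

def mvu_kernel(D2, neighbors):
    """Sketch of the MVU SDP (1)-(4); D2[i, j] = d_ij^2, neighbors = list of (i, j)."""
    n = D2.shape[0]
    K = cp.Variable((n, n), PSD=True)                     # constraint (4)
    cons = [cp.sum(K) == 0]                               # constraint (2)
    cons += [K[i, i] + K[j, j] - 2 * K[i, j] == D2[i, j]  # constraints (3)
             for (i, j) in neighbors]
    cp.Problem(cp.Maximize(cp.trace(K)), cons).solve()    # objective (1)
    return K.value
```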
2.2 PCP
The SDP of PCP (Li et al., 2008) is:
\min_K \; \bar{L} \bullet K   (5)
s.t. \; k_{ii} = 1, \; i = 1, 2, ..., n,   (6)
k_{ij} = 1, ∀(i, j) ∈ M,   (7)
k_{ij} = 0, ∀(i, j) ∈ C,   (8)
K ⪰ 0,   (9)
where \bar{L} denotes the normalized graph Laplacian, M denotes the set of object pairs that are known
to be from the same cluster, and C denotes those that are known to be from different clusters.
The constraints in (6) map the objects to the unit hypersphere. The constraints in (7) map two objects
that are known to be from the same cluster to the same vector. The constraints in (8) map two objects
that are known to be from different clusters to vectors that are orthogonal. Let X = \{x_i\}_{i=1}^{n} be the
data set, F be the feature space, and φ: X → F be the associated feature map of K. Then, the
degree of smoothness of φ on the data graph can be captured by (Zhou et al., 2004):
S(φ) = \frac{1}{2} \sum_{i,j=1}^{n} w_{ij} \left\| \frac{φ(x_i)}{\sqrt{d_{ii}}} − \frac{φ(x_j)}{\sqrt{d_{jj}}} \right\|_F^2 = \bar{L} \bullet K,   (10)
where w_{ij} is the similarity of x_i and x_j, d_{ii} = \sum_{j=1}^{n} w_{ij}, and ‖·‖_F is the distance metric in F.
The smaller the value S(φ), the smoother is the feature map φ. Thus, the SDP in (5–9) seeks the
smoothest feature map that embeds the data on the unit hypersphere and at the same time respects the
pairwise constraints. After K is solved, kernel k-means is then applied to K to obtain the clusters.
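The PCP problem (5)–(9) admits an equally direct sketch (again our own illustration, with hypothetical argument names):

```python
import cvxpy as cp

def pcp_kernel(L_bar, must_link, cannot_link):
    """Sketch of the PCP SDP (5)-(9); L_bar is the normalized graph Laplacian."""
    n = L_bar.shape[0]
    K = cp.Variable((n, n), PSD=True)                           # constraint (9)
    cons = [cp.diag(K) == 1]                                    # constraints (6)
    cons += [K[i, j] == 1 for (i, j) in must_link]              # constraints (7)
    cons += [K[i, j] == 0 for (i, j) in cannot_link]            # constraints (8)
    cp.Problem(cp.Minimize(cp.trace(L_bar @ K)), cons).solve()  # objective (5)
    return K.value
```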
2.3 Low-Rank Approximation: from SDP to QSDP
The SDPs in MVU and PCP are difficult to solve efficiently because their computational complexity
scales at least cubically in the size of the matrix variable and the number of constraints (Borchers,
1999). A useful heuristic is to use low-rank kernel approximation, which is motivated by the observation that the degree of freedom in the data is often much smaller than the number of parameters
in a fully nonparametric kernel matrix K. For instance, it may be equal to or slightly larger than
the intrinsic dimension of the data manifold (for dimensionality reduction) or the number of clusters
(for clustering). Another more specific observation is that it is often desirable to have nearby objects
mapped to nearby points, as is done in MVU or PCP.
Based on these observations, instead of learning a fully nonparametric K, one learns a K of the
form K = QY Q^T, where Q and Y are of sizes n × m and m × m, respectively, where m ≪ n. The
matrix Q should be smooth in the sense that nearby objects in the input space are mapped to nearby
points (the i-th row of Q is taken as a new representation of xi ). Q is computed prior to the learning
of K. In this way, the learning of a kernel matrix K is reduced to the learning of a much smaller
Y , subject to the constraint that Y ? 0. This idea is used in (Weinberger et al., 2007) and (Li et al.,
2009) to speed up MVU and PCP, respectively, and is also adopted in Colored MVU (Song et al.,
2008) and PSDE (Globerson & Roweis, 2007) for efficient computation.
The choice of Q can be different for MVU and PCP. In (Weinberger et al., 2007), Q =
(v_2, ..., v_{m+1}), where {v_i} are the eigenvectors of the graph Laplacian. In (Li et al., 2009),
Q = (u_1, ..., u_m), where {u_i} are the eigenvectors of the normalized graph Laplacian. Since
such Q's are obtained from graph Laplacians, we call the learning of K of the form K = QYQᵀ
the Graph Laplacian Regularized Kernel Learning problem, and denote the methods in (Weinberger
et al., 2007) and (Li et al., 2009) by RegMVU and RegPCP, respectively.
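A sketch of how such a Q might be assembled from Laplacian eigenvectors, following the two recipes above; the dense Laplacian input is an illustrative assumption (sparse eigensolvers would be used at scale).

    import numpy as np

    def smooth_basis(laplacian, m, skip_constant=True):
        # eigenvectors sorted by ascending eigenvalue; the bottom ones are
        # the smoothest functions on the graph
        _, vecs = np.linalg.eigh(laplacian)
        # RegMVU uses (v_2, ..., v_{m+1}); RegPCP uses (u_1, ..., u_m)
        start = 1 if skip_constant else 0
        return vecs[:, start:start + m]   # the n x m matrix Q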
With K = QYQᵀ, RegMVU and RegPCP become:

    RegMVU:  max_{Y⪰0}  tr(Y) − ν Σ_{(i,j)∈N} ((QYQᵀ)_ii − 2(QYQᵀ)_ij + (QYQᵀ)_jj − d_ij²)²,    (11)

    RegPCP:  min_{Y⪰0}  Σ_{(i,j,t_ij)∈S} ((QYQᵀ)_ij − t_ij)²,    (12)

where ν > 0 is a regularization parameter and S = {(i, j, t_ij) | (i, j) ∈ M ∪ C or i = j; t_ij =
1 if (i, j) ∈ M or i = j; t_ij = 0 if (i, j) ∈ C}. Both RegMVU and RegPCP can be succinctly
rewritten in the unified form:
    min_y  yᵀAy + bᵀy    (13)
    s.t.   Y ⪰ 0,    (14)

where y = vec(Y) ∈ R^{m²} denotes the vector obtained by concatenating all the columns of Y, and
A ⪰ 0 (Weinberger et al., 2007; Li et al., 2009). Note that this problem is convex since both the
objective function and the feasible set are convex.
Problem (13-14) is an instance of the so-called convex quadratic semidefinite program (QSDP),
where the objective is a quadratic function in the matrix variable Y . Note that similar QSDPs arise
in Colored MVU, PSDE, Conformal Eigenmaps (Sha & Saul, 2005), Locally Rigid Embedding
(Singer, 2008), and Kernel Matrix Completion (Graepel, 2002). Before we present our approach for
tackling the QSDP (13-14), let us briefly review existing approaches in the literature.
2.4 Previous Approach: from QSDP to SDP
Currently, a typical approach for tackling a QSDP is to use the Schur complement (Boyd & Vandenberghe, 2004) to rewrite it as an SDP (Sha & Saul, 2005; Weinberger et al., 2007; Li et al., 2009;
Song et al., 2008; Globerson & Roweis, 2007; Singer, 2008; Graepel, 2002), and then solve it using
an SDP solver such as CSDP (Borchers, 1999; https://projects.coin-or.org/Csdp/) or SDPT3 (Toh
et al., 2006; http://www.math.nus.edu.sg/~mattohkc/sdpt3.html). In this paper, we call this approach
the Schur Complement Based SDP (SCSDP) formulation. For the QSDP in (13-14), the equivalent
SDP takes the form:
    min_{y,ν}  ν + bᵀy    (15)

    s.t.  Y ⪰ 0  and  [ I_{m²}         A^{1/2} y
                        (A^{1/2} y)ᵀ   ν         ] ⪰ 0,    (16)

where A^{1/2} is the matrix square root of A, I_{m²} is the identity matrix of size m² × m², and ν is a slack
variable serving as an upper bound of yᵀAy. The second semidefinite cone constraint is equivalent
to (A^{1/2} y)ᵀ(A^{1/2} y) ≤ ν by the Schur complement.
Although the SDP in (15-16) has only m(m + 1)/2 + 1 variables, it has two semidefinite cone
constraints, of sizes m × m and (m² + 1) × (m² + 1), respectively. Such an SDP not only scales poorly,
but is also difficult to process numerically. Indeed, by considering Problem (15-16) as an SDP in
the standard dual form, the number of iterations required by standard interior-point algorithms is of
the order m, and the total number of arithmetic operations required is of the order m⁹ (Ben-Tal &
Nemirovski, 2001, Lecture 6). In practice, it takes only a few seconds to solve the aforementioned
SDP when m = 10, but can take more than 1 day when m = 40 (see Section 4 for details). Thus,
it is not surprising that m is set to a very small value in the related methods: for example, m = 10
in (Weinberger et al., 2007) and m = 15 in (Li et al., 2009). However, as noted by the authors in
(Weinberger et al., 2007), a larger m does lead to better performance. In (Li et al., 2009), the authors
suggest that m should be larger than the number of clusters.
Is this formulation from QSDP to SDP the best we can have? The answer is no. In the next section,
we present a novel formulation that leads to a semidefinite-quadratic-linear program (SQLP), which
is much more efficient and scalable than the one above. For instance, it takes about 15 seconds when
m = 30, 2 minutes when m = 40, and 1 hour when m = 100, as reported in Section 4.
3 Our Formulation: from QSDP to SQLP
In this section, we formulate the QSDP in (13-14) as an SQLP. Though our focus here is on the
QSDP in (13-14), we should point out that our method applies to any convex QSDP.
Recall that the size of A is m² × m². Let r be the rank of A. With Cholesky factorization, we can
obtain an r × m² matrix B such that A = BᵀB, as A is symmetric positive semidefinite and of rank
r (Golub & Loan, 1996). Now, let z = By. Then, the QSDP in (13-14) is equivalent to:

    min_{y,z,ν}  ν + bᵀy    (17)
    s.t.  z = By,    (18)
          zᵀz ≤ ν,    (19)
          Y ⪰ 0.    (20)
Next, we show that the constraint in (19) is equivalent to a second-order cone constraint. Let K^n be
the second-order cone of dimension n, i.e.,

    K^n = {(x₀; x) ∈ Rⁿ : x₀ ≥ ‖x‖},

where ‖·‖ denotes the standard Euclidean norm. Let u = ((1+ν)/2, (1−ν)/2, zᵀ)ᵀ. Then, the following
holds.

Theorem 3.1. zᵀz ≤ ν if and only if u ∈ K^{r+2}.
Proof. Note that u ∈ R^{r+2}, since z ∈ R^r. Also, note that ν = ((1+ν)/2)² − ((1−ν)/2)². If zᵀz ≤ ν,
then ((1+ν)/2)² − ((1−ν)/2)² = ν ≥ zᵀz, which means that (1+ν)/2 ≥ ‖((1−ν)/2, zᵀ)ᵀ‖. In particular,
we have u ∈ K^{r+2}. Conversely, if u ∈ K^{r+2}, then ((1+ν)/2)² ≥ ((1−ν)/2)² + zᵀz, thus implying
zᵀz ≤ ν.
Let e_i (where i = 1, 2, ..., r + 2) be the i-th basis vector, and let C = (0_{r×2}, I_{r×r}). Then, we have
(e₁ − e₂)ᵀu = ν, (e₁ + e₂)ᵀu = 1, and z = Cu. Hence, by Theorem 3.1, the problem in (17-20)
Figure 1: Swiss Roll. (a) The true manifold. (b) A set of 2000 points sampled from the manifold.
is equivalent to:

    min_{y,u}  (e₁ − e₂)ᵀu + bᵀy    (21)
    s.t.  (e₁ + e₂)ᵀu = 1,    (22)
          By − Cu = 0,    (23)
          u ∈ K^{r+2},    (24)
          Y ⪰ 0,    (25)

which is an instance of the SQLP problem (Tütüncü et al., 2003). Note that in this formulation,
we have traded the semidefinite cone constraint of size (m² + 1) × (m² + 1) in (16) with one
second-order cone constraint of size r + 2 and r + 1 linear constraints. As it turns out, such a
formulation is much easier to process numerically and can be solved much more efficiently.
Indeed, using standard interior-point algorithms, the number of iterations required is of the order
√m (Ben-Tal & Nemirovski, 2001, Lecture 6), and the total number of arithmetic operations required is of the
order m^{6.5} (Tütüncü et al., 2003). This compares very favorably with the m⁹ arithmetic complexity
of the SCSDP approach, and our experimental results indicate that the speedup in computation is
quite substantial. Moreover, in contrast with the SCSDP formulation, which does not take advantage
of the low rank structure of A, our formulation does take advantage of such a structure.
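A sketch of the data preparation behind (17)-(25): factor A = BᵀB with B of size r × m² and assemble C, e₁ and e₂. Since A is positive semidefinite but possibly rank-deficient, this sketch uses an eigendecomposition in place of a strict Cholesky factorization, and the rank tolerance is an assumption.

    import numpy as np

    def sqlp_data(A, tol=1e-10):
        vals, vecs = np.linalg.eigh(A)          # A is symmetric PSD, of size m^2 x m^2
        keep = vals > tol                       # numerical rank r
        B = np.sqrt(vals[keep])[:, None] * vecs[:, keep].T   # A = B^T B
        r = B.shape[0]
        C = np.hstack([np.zeros((r, 2)), np.eye(r)])         # C = (0_{r x 2}, I_{r x r})
        e1, e2 = np.eye(r + 2)[:, 0], np.eye(r + 2)[:, 1]    # basis vectors of R^{r+2}
        return B, C, e1, e2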
4 Experimental Results
In this section, we perform several experiments to demonstrate the viability of our SQLP formulation
and its superior computational performance. Since both the SQLP formulation and the previous
SCSDP formulation can be solved by standard software to a satisfying gap tolerance, the focus in
this comparison is not on the accuracy aspect but on the computational efficiency aspect.
We set the relative gap tolerance for both algorithms to be 1e-08. We use SDPT3 (Toh et al., 2006;
Tütüncü et al., 2003) to solve the SQLP. We follow (Weinberger et al., 2007; Li et al., 2009) and
use CSDP 6.0.1 (Borchers, 1999) to solve the SCSDP. All experiments are conducted in Matlab
7.6.0(R2008a) on a PC with 2.5GHz CPU and 4GB RAM.
Two benchmark databases, Swiss Roll³ and USPS⁴, are used in our experiments. Swiss Roll (Figure 1(a)) is a standard manifold model used for manifold learning and nonlinear dimensionality
reduction. In the experiments, we use the data set shown in Figure 1(b), which consists of 2000
points sampled from the Swiss Roll manifold. USPS is a handwritten digits database widely used
for clustering and classification. It contains images of handwritten digits from 0 to 9 of size 16 × 16,
and has 7291 training examples and 2007 test examples. In the experiments, we use a subset of
USPS with 2000 images, containing the first 200 examples of each digit from 0-9 in the training
data. The feature to represent each image is a vector formed by concatenating all the columns of the
image intensities. In the sequel, we shall refer to the two subsets used in the experiments simply as
Swiss Roll and USPS.
³ http://www.cs.toronto.edu/~roweis/lle/code.html
⁴ http://www-stat.stanford.edu/~tibs/ElemStatLearn/
Table 1: Computational Results on Swiss Roll (Time: second; # Iter: number of iterations)

      m  |  SCSDP: Time   # Iter   Time/Iter  |  SQLP: Time   # Iter   Time/Iter  |  rank(A)
     10  |       3.84       29        0.13    |      0.96       32        0.03    |     64
     15  |      60.36       30        2.01    |      1.75       31        0.06    |    153
     20  |     557.79       32       17.43    |      4.48       35        0.13    |    264
     25  |    2821.76       34       82.99    |      7.84       37        0.21    |    403
     30  |   13039.30       37      352.41    |     13.39       35        0.38    |    537
     35  |   38559.50       33     1168.50    |     29.74       35        0.85    |    670
     40  |    > 1 day        -           -    |     74.01       35        2.12    |    852
     50  |          -        -           -    |    213.26       35        6.09    |   1152
     60  |          -        -           -    |    467.90       35       13.37    |   1451
     80  |          -        -           -    |   1729.65       39       44.35    |   2062
    100  |          -        -           -    |   3988.31       36      110.79    |   2623
Table 2: Computational Results on USPS (Time: second; # Iter: number of iterations)

      m  |  SCSDP: Time   # Iter   Time/Iter  |  SQLP: Time   # Iter   Time/Iter  |  rank(A)
     10  |       2.84       21        0.14    |      0.47       16        0.03    |    100
     15  |      42.96       22        1.95    |      1.26       17        0.07    |    225
     20  |     461.38       27       17.09    |      3.35       17        0.20    |    400
     25  |    2572.72       31       82.99    |      5.97       14        0.43    |    625
     30  |   10576.01       30      352.53    |     15.72       19        0.83    |    900
     35  |   35173.60       30     1172.50    |     44.53       17        2.62    |   1225
     40  |    > 1 day        -           -    |    133.58       20        6.68    |   1600
     50  |          -        -           -    |    362.24       16       22.64    |   2379
     60  |          -        -           -    |    936.58       19       49.29    |   2938
     80  |          -        -           -    |   1784.12       17      104.95    |   3112
    100  |          -        -           -    |   2900.44       17      170.61    |   3111
The Swiss Roll (resp. USPS) is used to derive the QSDP in RegMVU (resp. RegPCP). For RegMVU,
the 4NN graph is used, i.e., wij = 1 if xi is within the 4NN of xj or vice versa, and wij = 0
otherwise. We verified that the 4NN graph derived from our Swiss Roll data is connected. For
RegPCP, we construct the graph following the approach suggested in (Li et al., 2009). Specifically,
we have w_ij = exp(−d_ij²/(2σ²)) if x_i is within the 20NN of x_j or vice versa, and w_ij = 0 otherwise.
Here, σ is the averaged distance from each object to its 20-th nearest neighbor. For the pairwise
constraints used in RegPCP, we randomly generate 20 similarity constraints for each class, and 20
dissimilarity constraints for every two classes, yielding a total of 1100 constraints. For each data set,
m ranges over {10, 15, 20, 25, 30, 35, 40, 50, 60, 80, 100}. In summary, for each data set, 11 QSDPs
are formed. Each QSDP gives rise to one SQLP and one SCSDP. Therefore, for each data set, 11
SQLPs and 11 SCSDPs are derived.
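A minimal sketch of the 20-NN Gaussian graph construction just described; the brute-force distance computation is an illustrative assumption, and any nearest-neighbor routine would do.

    import numpy as np

    def knn_gaussian_graph(X, k=20):
        n = len(X)
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        sigma = np.sort(D, axis=1)[:, k].mean()   # average distance to the k-th neighbor
        W = np.zeros((n, n))
        for i in range(n):
            for j in np.argsort(D[i])[1:k + 1]:   # skip self at rank 0
                w = np.exp(-D[i, j] ** 2 / (2.0 * sigma ** 2))
                W[i, j] = W[j, i] = w             # symmetric: within kNN of each other, or vice versa
        return W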
4.1 The Results
The computational results of the programs are shown in Tables 1 and 2. For each program, we
report the total computation time, the number of iterations needed to achieve the required tolerance,
and the average time per iteration in solving the program. A dash (-) in the box indicates that the
corresponding program takes too much time to solve. We choose to stop the program if it fails to
converge within 1 day. This happens for the SCSDP with m = 40 on both data sets.
From the tables, we see that solving an SQLP is consistently much faster than solving an
SCSDP. To see the scalability, we plot the solution time (Time) against the problem size (m) in
Figure 2. It can be seen that the solution time of the SCSDP grows much faster than that of the
SQLP. This demonstrates the superiority of our proposed approach.
Figure 2: Curves on computational cost: Solution Time (seconds) vs. Problem Scale (m), for SCSDP and SQLP. (a) Swiss Roll. (b) USPS.
We also note that the per-iteration computational costs of SCSDP and SQLP are drastically different.
Indeed, for the same problem size m, it takes much less time per iteration for the SQLP than that for
the SCSDP. This is not very surprising, as the SQLP formulation takes advantage of the low rank
structure of the data matrix A.
5 Conclusions
We have studied a class of convex optimization programs called convex Quadratic Semidefinite
Programs (QSDPs), which arise naturally from graph Laplacian regularized kernel learning (Sha &
Saul, 2005; Weinberger et al., 2007; Li et al., 2009; Song et al., 2008; Globerson & Roweis, 2007;
Singer, 2008). A QSDP is similar to a QP, except that it is subject to a semidefinite cone constraint
as well. To tackle the QSDP, one typically uses the Schur complement to rewrite it as an SDP
(SCSDP), thus resulting in a large linear matrix inequality constraint. In this paper, we argue that
this formulation is not computationally optimal and have proposed a novel formulation that leads to
a semidefinite-quadratic-linear program (SQLP). Our formulation introduces one positive semidefinite constraint, one second-order cone constraint and a set of linear constraints. This should be
contrasted with the two large semidefinite cone constraints in the SCSDP. Our complexity analysis
and experimental results have shown that the proposed SQLP formulation scales far better than the
SCSDP formulation.
Acknowledgements
The authors would like to thank Professor Kim-Chuan Toh for his valuable comments. This research work was supported in part by GRF grants CUHK 2150603, CUHK 414307 and CRF grant
CUHK2/06C from the Research Grants Council of the Hong Kong SAR, China, as well as the
NSFC-RGC joint research grant N CUHK411/07.
References

Ben-Tal, A., & Nemirovski, A. (2001). Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MPS-SIAM Series on Optimization. Philadelphia, Pennsylvania: Society for Industrial and Applied Mathematics.

Borchers, B. (1999). CSDP, a C Library for Semidefinite Programming. Optimization Methods and Software, 11/12, 613-623.

Boyd, S., & Vandenberghe, L. (2004). Convex Optimization. Cambridge: Cambridge University Press. Available online at http://www.stanford.edu/~boyd/cvxbook/.

Chapelle, O., & Vapnik, V. (2000). Model Selection for Support Vector Machines. In S. A. Solla, T. K. Leen and K.-R. Müller (Eds.), Advances in Neural Information Processing Systems 12: Proceedings of the 1999 Conference, 230-236. Cambridge, Massachusetts: The MIT Press.

Globerson, A., & Roweis, S. (2007). Visualizing Pairwise Similarity via Semidefinite Programming. Proceedings of the 11th International Conference on Artificial Intelligence and Statistics (pp. 139-146).

Golub, G. H., & Loan, C. F. V. (1996). Matrix Computations. Baltimore, Maryland: The Johns Hopkins University Press. Third edition.

Graepel, T. (2002). Kernel Matrix Completion by Semidefinite Programming. Proceedings of the 12th International Conference on Artificial Neural Networks (pp. 694-699). Springer-Verlag.

Kulis, B., Sustik, M. A., & Dhillon, I. S. (2009). Low-Rank Kernel Learning with Bregman Matrix Divergences. The Journal of Machine Learning Research, 10, 341-376.

Lanckriet, G. R. G., Cristianini, N., Bartlett, P., El Ghaoui, L., & Jordan, M. I. (2004). Learning the Kernel Matrix with Semidefinite Programming. The Journal of Machine Learning Research, 5, 27-72.

Li, Z., & Liu, J. (2009). Constrained Clustering by Spectral Kernel Learning. To appear in the Proceedings of the 12th IEEE International Conference on Computer Vision.

Li, Z., Liu, J., & Tang, X. (2008). Pairwise Constraint Propagation by Semidefinite Programming for Semi-Supervised Classification. Proceedings of the 25th International Conference on Machine Learning (pp. 576-583).

Li, Z., Liu, J., & Tang, X. (2009). Constrained Clustering via Spectral Regularization. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2009 (pp. 421-428).

Schölkopf, B., & Smola, A. J. (2002). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, Massachusetts: The MIT Press.

Schölkopf, B., Smola, A. J., & Müller, K.-R. (1998). Nonlinear Component Analysis as a Kernel Eigenvalue Problem. Neural Computation, 10, 1299-1319.

Sha, F., & Saul, L. K. (2005). Analysis and Extension of Spectral Methods for Nonlinear Dimensionality Reduction. Proceedings of the 22nd International Conference on Machine Learning (pp. 784-791).

Shawe-Taylor, J., & Cristianini, N. (2004). Kernel Methods for Pattern Analysis. Cambridge: Cambridge University Press.

Singer, A. (2008). A Remark on Global Positioning from Local Distances. Proceedings of the National Academy of Sciences, 105, 9507-9511.

So, A. M.-C. (2007). A Semidefinite Programming Approach to the Graph Realization Problem: Theory, Applications and Extensions. Doctoral dissertation, Stanford University.

Song, L., Smola, A., Borgwardt, K., & Gretton, A. (2008). Colored Maximum Variance Unfolding. In J. C. Platt, D. Koller, Y. Singer and S. Roweis (Eds.), Advances in Neural Information Processing Systems 20: Proceedings of the 2007 Conference, 1385-1392. Cambridge, Massachusetts: The MIT Press.

Toh, K. C., Tütüncü, R. H., & Todd, M. J. (2006). On the Implementation and Usage of SDPT3 - A MATLAB Software Package for Semidefinite-Quadratic-Linear Programming, Version 4.0. User's Guide.

Tütüncü, R. H., Toh, K. C., & Todd, M. J. (2003). Solving Semidefinite-Quadratic-Linear Programs using SDPT3. Mathematical Programming, 95, 189-217.

Vapnik, V. N. (2000). The Nature of Statistical Learning Theory. Statistics for Engineering and Information Science. New York: Springer-Verlag. Second edition.

Weinberger, K. Q., Packer, B. D., & Saul, L. K. (2005). Nonlinear Dimensionality Reduction by Semidefinite Programming and Kernel Matrix Factorization. Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (pp. 381-388).

Weinberger, K. Q., Sha, F., & Saul, L. K. (2004). Learning a Kernel Matrix for Nonlinear Dimensionality Reduction. Proceedings of the 21st International Conference on Machine Learning (pp. 85-92).

Weinberger, K. Q., Sha, F., Zhu, Q., & Saul, L. K. (2007). Graph Laplacian Regularization for Large-Scale Semidefinite Programming. Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference (pp. 1489-1496). Cambridge, Massachusetts: The MIT Press.

Zhou, D., Bousquet, O., Lal, T. N., Weston, J., & Schölkopf, B. (2004). Learning with Local and Global Consistency. Advances in Neural Information Processing Systems 16: Proceedings of the 2003 Conference (pp. 595-602). Cambridge, Massachusetts: The MIT Press.
Neurons
Wulfram Gerstner*
Department of Physics
University of California
Berkeley, CA 94720
Abstract
The Hopfield network (Hopfield, 1982,1984) provides a simple model of an
associative memory in a neuronal structure. This model, however, is based
on highly artificial assumptions, especially the use of formal two-state neurons (Hopfield, 1982) or graded-response neurons (Hopfield, 1984). What
happens if we replace the formal neurons by 'real' biological neurons? We
address this question in two steps. First, we show that a simple model of
a neuron can capture all relevant features of neuron spiking, i. e., a wide
range of spiking frequencies and a realistic distribution of interspike intervals. Second, we construct an associative memory by linking these neurons
together. The analytical solution for a large and fully connected network
shows that the Hopfield solution is valid only for neurons with a short refractory period. If the refractory period is longer than a crit.ical duration
ie, the solutions are qualitatively different. The associative character of
the solutions, however, is preserved.
1 INTRODUCTION
Information received at the sensory level is encoded in spike trains which are then
transmitted to different parts of the brain where the main processing steps occur.
Since all the spikes of any particular neuron look alike, the information of the spike
train is obviously not contained in the exact shape of the spikes, but rather in
their arrival times and in the correlations between the spikes. A model neuron
which tries to keep track of the voltage trace even during the spiking, like the
*Present address: Physik-Department der TU München, Institut für Theoretische
Physik, D-8046 Garching bei München
Hodgkin-Huxley equations (Hodgkin, 1952) and similar models, carries therefore
non-essential details, if we are only interested in the information of the spike train.
On the other hand, a simple two-state neuron or threshold model is too simplistic
since it cannot reproduce the variety of spiking behaviour found in real neurons. The
same is true for continuous or analog model neurons which disregard the stochastic
nature of neuron firing completely. In this work we construct a model of the neuron
which is intermediate between these extremes. We are not concerned with the shape
of the spikes and detailed voltage traces, but we want realistic interval distributions
and rate functions. Finally, we link these neurons together to capture collective
effects and we construct a network that can function as an associative memory.
2 THE MODEL NEURON
From a neural-network point of view it is often convenient to consider a neuron
as a simple computational unit with no internal parameters. In this case, the
neuron is described either as a 'digital' threshold unit or as a nonlinear 'analog'
element with a sigmoid input-output relation. While such a simple model might be
useful for formal considerations in abstract networks, it is hard to see how it could
be modified to include realistic features of neurons: How can we account for the
statistical properties of the spike train beyond the mean firing frequencies? What
about bursting or oscillating neurons? - to mention but a few of the problems with
real neurons.
We would like to use a model neuron which is closer to biology in the sense that it
produces spike trains comparable to those in real neurons. Our description of the
spiking dynamics therefore emphasizes three basic notions of neurobiology: threshold, refractory period, and noise. In particular we describe the internal state of the
neuron by the membrane voltage h which depends on the synaptic contributions
from other neurons as well as on the spiking history of the neuron itself. In a simple
threshold crossing process, a spike would be initiated as soon as the voltage h(t)
crosses the threshold θ. Due to the statistical fluctuations of the momentary voltage
around h(t), however, the spiking will be a statistical event, the spikes coming a
bit too early or a bit too late compared to the formal threshold crossing time, depending on the direction of the fluctuations. This fact will be taken into account by
introducing a probabilistic spiking rate r, which depends on the difference between
the membrane voltage h and the threshold θ in an exponential fashion:

    r = τ₀⁻¹ exp[β(h − θ)],    (1)

where the formal temperature β⁻¹ is a measure for the noise and τ₀ is an internal
time constant of the neuron. If h changes only slowly during a conveniently chosen time τ₁, we can integrate over τ₁, which yields the probability P_F(h) of firing
during a time step of length τ₁. This gives us an analytic procedure to switch from
continuous time to the discrete time step representation used later on.
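A minimal simulation sketch of this spiking rule: integrating the rate (1) over a step of length τ₁ gives the per-step firing probability P_F = 1 − exp(−r τ₁). The parameter values below are illustrative assumptions, and the refractory contribution to h is handled separately via equations (2)-(3).

    import numpy as np

    def spike_train(h, theta=0.0, beta=2.0, tau0=1.0, tau1=0.1, seed=0):
        # rate r = tau0^{-1} exp[beta (h - theta)], as in equation (1)
        rate = np.exp(beta * (np.asarray(h) - theta)) / tau0
        p_fire = 1.0 - np.exp(-rate * tau1)   # firing probability per time step
        rng = np.random.default_rng(seed)
        return rng.random(len(h)) < p_fire    # boolean spike train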
If a spike is initiated in a real neuron, the neuron goes through a cycle of ion influx
and efflux which changes the potential on a fast time scale and prevents immediate
firing of another spike. To model this we reset the potential after each spike by
adding a negative refractory field h_r(t) to the potential:

    h(t) = h′(t) + h_r(t),    (2)

with

    h_r(t) = Σ_i ε_r(t − t_i),    (3)

where t_i is the time of the i-th spike and h′(t) is the postsynaptic potential due
to incoming spikes from other neurons. The form of the refractory function ε_r(τ)
together with the noise level β determine the firing characteristics of the neuron.
With fairly simple refractory fields we can achieve a sigmoid dependence of the
firing frequency upon the input current (figure 1) and realistic spiking statistics
(figure 3).
Figure 1: f-I plot (firing frequency versus input current) for a standard neuron with
absolute and relative refractory period. The absolute refractory period lasts 5 ms,
followed by an exponentially decaying relative refractory function (time constant
2 ms). The refractory function is shown in Figure 2.
Figure 2: Refractory function of the model used in Figure 1.
Indeed, the interval distribution changes from an approximate Poisson distribution
for driving currents below threshold to an approximate Gaussian distribution above
threshold. Different forms of the refractory function can lead to bursting behavior
or to model neurons with adaptive behavior.
In figure 4 we show a bursting neuron defined by a long-tailed refractory function
with a slight overshooting at intermediate time delays. At low input level, the bursts
are noise induced and appear in irregular intervals. For larger driving currents the
spiking changes to regular bursting. Even a model with a simple absolute refractory
period (4) has many interesting features. The explicit solution for a network of these neurons
is given in the following sections.
Figure 3: Spike trains and interval distributions for the model of Figure 1 at two
different input levels.
3 THE NETWORK
So far we have only described the dynamics which initiates the spikes in the neurons.
Now we have to describe the spikes themselves and their synaptic transmission to
other neurons. To keep track of the spikes we assign to each neuron a two state
variable Sj which usually rests at -1 and flips to +1 only when a spike is initiated.
In the discrete time step representation that we assume in the following the output
of each neuron is then described by a sequence of Ising spins S_j(t_n).
Figure 4: Spike trains for a bursting neuron. At low input level the bursts are noise
induced and appear in irregular intervals; at high input level the bursting is regular.
In a network of neurons, neuron i may receive a spike from neuron j via the synaptic
connection, and the spike will evoke a postsynaptic potential at i. The strength of
this response will depend on the synaptic efficacy J_ij. The time course of this
response, however, can be taken to have a generic form independent of the strength
of the synapse. We formalize these ideas assuming linearity and write

    h_i(t_n) = Σ_j J_ij Σ_{τ_m} ε(τ_m) S̄_j(t_n − τ_m),    (5)

where ε(τ) might be an experimental response function and S̄_j is a conveniently
normalized variable proportional to S_j.
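In discrete time, (5) amounts to convolving each normalized spike train with the response kernel and mixing the result through the synaptic matrix; a small numpy sketch, with the kernel samples eps an assumed input:

    import numpy as np

    def postsynaptic_potential(J, spikes, eps):
        # spikes: (N, T) array of normalized spike variables; eps: kernel at tau_1..tau_L
        N, T = spikes.shape
        h = np.zeros((N, T))
        for m, e in enumerate(eps, start=1):   # sum over past time steps tau_m
            h[:, m:] += e * (J @ spikes[:, :T - m])
        return h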
For the synaptic efficacies we assume the Hebbian matrix also taken by Hopfield,

    J_ij = (1/N) Σ_{µ=1}^p ξ_i^µ ξ_j^µ,    (6)

where the variables ξ_i^µ = ±1 (1 ≤ i ≤ N, 1 ≤ µ ≤ p) describe the p random
patterns to be stored. We can obtain these synaptic weights by a Hebbian learning
procedure.
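A one-line numpy sketch of the Hebbian construction (6), for patterns stored as rows of a (p, N) array of ±1 entries; note that the diagonal is later overridden by the refractory function (7).

    import numpy as np

    def hebbian_weights(patterns):
        p, N = patterns.shape
        # J_ij = (1/N) sum_mu xi_i^mu xi_j^mu
        return patterns.T @ patterns / N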
It is now straightforward to incorporate the internal dynamics of the neurons, which
we described in the preceding section. The refractory field can be introduced as the
diagonal elements of the synaptic connection matrix (7). If all the neurons are
equivalent, the diagonal elements must be independent of i, and J_ii(τ) = ε_r(τ)
describes the generic voltage response of our model neuron after firing of a spike.
4 RESULTS
We can solve this model analytically in the limit of a large and fully connected
network. The solution depends on an additional parameter p which characterizes
the maximum spiking frequency of the neurons. To compare our results with the
Hopfield model, we replace P_F(h), calculated from (1), by the generic form
(1/2)(1 + tanh(βh)), and we take the case of the simple refractory field (4). In this
case the parameter p is related to the absolute refractory period. For a large
maximum spiking frequency, or γ → 0, we recover the Hopfield solutions. For γ
larger than a critical value, the solutions are qualitatively different: there is a
regime of inverse temperatures in which both the retrieval solution and the trivial
solution are stable. This allows the network to remain undecided if the initial
overlap with one of the patterns is not large enough. This is in contrast to the
Hopfield model (Hopfield 1982, 1984), where the network is always forced into one of
the retrieval states. We compared our analytic solutions with computer simulations,
which verified that the calculated stationary solutions are indeed stable states of
the network with a wide basin of attraction. Thus the basic associative memory
characteristics of the standard Hopfield model are robust under the replacement of
the two-state neurons by more biological neurons.
5 CONCLUSIONS
We constructed a network of neurons with intrinsic spiking behaviour and realistic
postsynaptic response. In addition to the standard solutions we have undecided
network states which might have a biological significance in the process of decision
making. There remain of course a number of unbiological features in the network,
e.g. the assumption of full connectivity, the symmetry of the connections and the
linearity of the learning rule. But most of these assumptions can be overcome at
least in principle (see e.g. Amit 1989 for references). Our results confirm the general
robustness of attractor neural networks to biological modifications, but they suggest
that including more biological details also adds interesting features to the variety
of states available to the network.
Figure 5: Stationary states of the network. Depending on the length of the refractory period, the retrieval behavior varies. Figures (a) and (b) show the overlap with
one of the learned patterns for different noise levels T = 1/β. For a neuron with a
short refractory period (figure a) the overlap curve is similar to those of the Hopfield
model. For longer refractory periods (figure b) the curve is qualitatively different,
showing a regime of bistability at intermediate noise levels. If the network is working at these noise levels, it depends on the initial overlap with the learned pattern
whether the network will go to the trivial state with overlap 0 or to the retrieval
state with large overlap (overlap m = 1 corresponds to perfect retrieval).
Acknowledgements
I would like to thank William Bialek and his students at Berkeley for their generous
hospitality and numerous stimulating discussions. Thanks also to J.L. van Hemmen
and to Andreas Herz for many helpful comments and advice. I acknowledge the
financial support of the German Academic Exchange Service (DAAD) which made
my stay at Berkeley possible.
References
Hopfield, J.J. (1982), Neural Networks and Physical Systems with Emergent Collective Computational Abilities, Proc. Natl. Acad. Sci. USA 79, 2554-2558.

Hopfield, J.J. (1984), Neurons with Graded Response have Collective Computational Properties like those of Two-State Neurons, Proc. Natl. Acad. Sci. USA 81, 3088-3092.

Hodgkin, A.L. and Huxley, A.F. (1952), A Quantitative Description of Membrane Current and its Application to Conduction and Excitation in Nerve, J. Physiology 117, 500-544.

Amit, D.J. (1989), Modeling Brain Function: The World of Attractor Neural Networks, Ch. 7. Cambridge University Press.
2,991 | 3,710 | Randomized Pruning: Efficiently Calculating
Expectations in Large Dynamic Programs
Alexandre Bouchard-Côté¹ ([email protected])
Slav Petrov²,* ([email protected])
Dan Klein¹ ([email protected])

¹ Computer Science Division, University of California at Berkeley, Berkeley, CA 94720
² Google Research, 76 Ninth Ave, New York, NY 10011
Abstract
Pruning can massively accelerate the computation of feature expectations in large models.
However, any single pruning mask will introduce bias. We present a novel approach which
employs a randomized sequence of pruning masks. Formally, we apply auxiliary variable
MCMC sampling to generate this sequence of masks, thereby gaining theoretical guarantees about convergence. Because each mask is generally able to skip large portions of an
underlying dynamic program, our approach is particularly compelling for high-degree algorithms. Empirically, we demonstrate our method on bilingual parsing, showing decreasing
bias as more masks are incorporated, and outperforming fixed tic-tac-toe pruning.
1 Introduction
Many natural language processing applications, from discriminative training [18, 9] to minimum-risk
spaces. Problem scale comes from a combination of large constant factors (such as the massive
grammar sizes in monolingual parsing) or high-degree algorithms (such as the many dimensions of
bitext parsing). In both cases, the primary mechanism for efficiency has been pruning, wherein large
regions of the search space are skipped on the basis of some computation mask. For example, in
monolingual parsing, entire labeled spans may be skipped on the basis of posterior probabilities in
a coarse grammar [17, 7]. Conditioning on these masks, the underlying dynamic program can be
made to run arbitrarily quickly.
Unfortunately, aggressive pruning introduces biases in the resulting expectations. As an extreme
example, features with low expectation may be pruned down to zero if their supporting structures are completely skipped. One option is to simply prune less aggressively and spend more
time on a single, more exhaustive expectation computation, perhaps by carefully tuning various
thresholds [26, 12] and using parallel computing [9, 38]. However, we present a novel alternative:
randomized pruning. In randomized pruning, multiple pruning masks are used in sequence. The resulting sequence of expectation computations are averaged, and errors average out over the multiple
computations. As a result, time can be directly traded against approximation quality, and errors of
any single mask can be overcome.
Our approach is based on the idea of auxiliary variable sampling [31], where a set of auxiliary
variables formalizes the idea of a pruning mask. Resampling the auxiliary variables changes the
mask at each iteration, so that the portion of the chart that is unconstrained at a given iteration can
improve the mask for subsequent iterations. In other words, pruning decisions are continuously
revisited and revised. Since our approach is formally grounded in the framework of block Gibbs
sampling [33], it inherits desirable guarantees as a consequence. If one needs successively better
* Work done while at the University of California at Berkeley.
approximations, more iterations can be performed, with a guarantee of convergence to the true
expectations.

In practice, of course, we are only interested in the behavior after a finite number of iterations: the
method would be useless if it did not outperform previous heuristics in the time range bounded by
the exact computation time. Here, we investigate empirical performance on English-Chinese bitext
parsing, showing that bias decreases over time. Moreover, we show that our randomized pruning
outperforms standard single-mask tic-tac-toe pruning [40], achieving lower bias over a range of total
computation times. Our technique is orthogonal to approaches that use parallel computation [9, 38],
and can be additionally parallelized at the sentence level.

In what follows, we explain the method in the context of parsing because it makes the exposition
more concrete, and because our experiments are on similar combinatorial objects (bitext derivations).
Note, however, that the applicability of this approach is in no way limited to parsing. The settings
in which randomized pruning will be most advantageous will be those in which high-order dynamic
programs can be vastly sped up by masking, yet no single aggressive mask is likely to be adequate.

Figure 1: A parse tree (a) and the corresponding chart cells (b), from which the assignment vector
(c) is extracted. Not shown are the labels of the dynamic programming chart cells.

2 Randomized pruning

2.1 The need for expectations

Algorithms for discriminative training, consensus decoding, and unsupervised learning typically
involve repetitively computing a large number of expectations. In discriminative training of probabilistic parsers, for example [18, 32], one needs to repeatedly parse the entire training set in order to
compute the necessary expected feature counts. In this setup (Figure 1), the conditional distribution
of a tree-valued random variable T given a yield y(T) = w is modeled using a log-linear model:
P_θ(T = t | y(T) = w) = exp{⟨θ, f(t, w)⟩ − log Z(θ, w)}, in which θ ∈ R^K is a parameter vector
and f(t, w) ∈ R^K is a feature function. Training such a model involves the computation of the
following gradient in between each update of θ (skipping an easy to compute regularization term):

    ∇_θ Σ_{i∈I} log P_θ(T = t_i | y(T) = w_i) = Σ_{i∈I} ( f(t_i, w_i) − E_θ[f(T, w_i) | y(T) = w_i] ),

where {w_i : i ∈ I} are the training sentences with corresponding gold trees {t_i}.

The first term in the above equation can be computed in linear time, while the second requires a
cubic-time dynamic program (the inside-outside algorithm), which computes constituent posteriors
for all possible spans of words (the chart cells in Figure 1). Hence, computing expectations is
indeed the bottleneck here. While it is not impossible to calculate these expectations exactly, this
is computationally very expensive, limiting previous work to toy setups with 15 word sentences
[18, 32, 35], or necessitating aggressive pruning [26, 12] that is not well understood.

Figure 2: An example of how a selection vector s and an assignment vector a are turned into a
pruning mask m.

2.2 Approximate expectations with a single pruning mask

In the case of monolingual parsing, the computation of feature count expectations is usually approximated with a pruning mask, which allows the omission of low probability constituents. Formally,
a pruning mask is a map from the set M of all possible spans to the set {prune, keep}, indicating
whether a given span is to be ignored. It is easy to incorporate such a pruning mask into existing
dynamic programming algorithms for computing expectations: Whenever a dynamic programming
state is considered, we first consult the mask and skip over the pruned states, greatly accelerating
the computation (see Algorithm 3 for a schematic description of the pruned inside pass). However,
the expected feature counts E_m[f] computed by pruned inside-outside with a single mask m are not
exact, introducing a systematic error and biasing the model in undesirable ways.
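To make the mask lookup concrete, here is a Python sketch of a pruned CKY-style inside pass; the grammar interface (lexical and combine) is a hypothetical placeholder, and Algorithm 3 below gives the schematic version.

    def pruned_inside(words, grammar, mask):
        n = len(words)
        inside = {}                                # (j, k, symbol) -> score
        for j, w in enumerate(words):              # width-1 spans
            for sym, score in grammar.lexical(w):
                inside[(j, j + 1, sym)] = score
        for width in range(2, n + 1):              # bottom-up over span widths
            for j in range(n - width + 1):
                k = j + width
                if mask[(j, k)] == 'prune':
                    continue                       # consult the mask, skip pruned spans
                for l in range(j + 1, k):
                    if mask[(j, l)] == 'prune' or mask[(l, k)] == 'prune':
                        continue
                    for sym, score in grammar.combine(inside, j, l, k):
                        inside[(j, k, sym)] = inside.get((j, k, sym), 0.0) + score
        return inside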
2.3 Approximate expectations with a sequence of masks
To reduce the bias resulting from the use of a single pruning mask, we propose a novel algorithm that
can combine several masks. Given a sequence of masks, m^(1), m^(2), ..., m^(N), we will average the
expectations under each of them: (1/N) Σ_{i=1}^N E_{m^(i)}[f]. Our contribution is to show a principled way of
computing a sequence of masks such that this average not only has theoretical guarantees, but also
has good finite-sample performance. The key is to define a set of auxiliary variables, and we present
this construction in more detail in the following sections. In this section, we present the algorithm
operationally.
The masks are defined via two vector-valued Markov chains: a selection chain with current value denoted by s, and an assignment chain with current value a. Both s and a are vectors with coordinates
indexed by spans over the current sentence: σ ∈ M = {⟨j, k⟩ : 0 ≤ j < k ≤ n = |w|}. Elements
s_σ specify whether a span σ will be selected (s_σ = 1) or excluded (0) in the current iteration (i). The
assignment vector a then determines, for each span, whether it would be forbidden if selected (or
negative, a_σ = −) or required (positive, +) to be a constituent.
Our masks m = m(s, a) are generated deterministically from the selection and assignment vectors. The deterministic procedure uses s to pick a few spans and values to fix from a, forming a mask m. Note that a single span $\sigma$ that is both positive and selected implies that all of the spans $\tau$ crossing $\sigma$ should be pruned (i.e., all of the spans that overlap $\sigma$ such that neither $\sigma \subseteq \tau$ nor $\tau \subseteq \sigma$ holds). This compilation of the pruning constraints is described in Algorithm 2. The return value m of this function is also a vector with coordinates corresponding to spans: $m_\sigma \in \{\text{prune}, \text{keep}\}$. Computation of this mask is illustrated on a concrete example in Figure 2.¹
We can now summarize how randomized pruning works (see Algorithm 1 for pseudocode). At the beginning of every iteration $(i)$, the first step is to sample new values of the selection vector, conditioning on the current selection vector. We will refer to the transition probability of this Markov chain on selection vectors as $k^\circ$. Once a new mask m has been precomputed from the current selection vector and assignments, pruned inside-outside scores are computed using this mask.

¹It may seem that Algorithm 2 is also slow, introducing a new bottleneck. However, |s| is small in practice, and the constant is much smaller since it does not depend on the grammar, making this algorithm fast in practice.
The pruned inside-outside scores are then used in two ways: first, to calculate the expected feature counts under the pruned model, $E_m[f]$, which are added to a running average; second, to resample new values for the assignment vector.²

Let us describe in more detail how a new assignment vector a is updated given the previous assignments: $k_s(a, a')$. This is a two-step update process. First, a tree t is sampled from the chart computed by PrunedInside(w, m) (Figure 1, left). This can be done in quadratic time using a standard algorithm [19, 13]. Next, the assignments are set to a new value deterministically as follows: for each span $\sigma$, $a_\sigma = +$ if $\sigma$ is a constituent in t, and $a_\sigma = -$ otherwise (Figure 1, right). We will denote this property by $[\sigma \in t]$.

We defer to Section 3.2 for the description of the selection vector updates; the form of these updates will be easier to motivate after the analysis of the algorithm.

Algorithm 1 : AuxVar(w, f)
  a, s ← random initialization
  E ← 0
  for i ∈ 1, 2, . . . , N do
    s ← k°(s, ·)
    m ← CreateMask(s, a)
    Compute PrunedInside(w, m)
    Compute PrunedOutside(w, m)
    E ← E + E_m f
    a ← k_s(a, ·)
  return E / N

Algorithm 2 : CreateMask(s, a)
  for σ ∈ M do
    for τ ∈ s do
      if a_τ = − and σ = τ then
        m_σ ← prune; continue outer loop
      if a_τ = + and σ crosses τ then
        m_σ ← prune; continue outer loop
    m_σ ← keep
  return m

Algorithm 3 : PrunedInside(w, m)
  {Initialize the chart in the standard way}
  for σ = ⟨j, k⟩ ∈ M, bottom-up do
    if m_σ = keep then
      for l : j < l < k do
        if m_⟨j,l⟩ = m_⟨l,k⟩ = keep then
          {Loop over grammar symbols, update inside scores in the standard way}
  return chart

Figure 3: Pseudo-code for randomized pruning in the case of monolingual parsing (assuming a grammar with no unaries except at pre-terminal positions). We have omitted PrunedOutside because of limited space, but its structure is very similar to PrunedInside.
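As a concrete rendering of Algorithms 1 and 2 above (a sketch under our own representation of spans as (j, k) pairs, not the authors' code), the mask compilation and the outer sampling loop might look as follows in Python; `selection_kernel`, `assignment_kernel`, and `pruned_expectation` are placeholders for the components described in the text.

    import random

    def crosses(s1, s2):
        """True if spans s1 = (j, k) and s2 = (l, m) overlap without nesting."""
        (j, k), (l, m) = s1, s2
        nested = (j <= l and m <= k) or (l <= j and k <= m)
        disjoint = k <= l or m <= j
        return not (nested or disjoint)

    def create_mask(selected, assignment, all_spans):
        """Algorithm 2: compile selected +/- constraints into a keep/prune mask."""
        mask = {}
        for sigma in all_spans:
            keep = True
            for tau in selected:
                if assignment[tau] == '-' and sigma == tau:
                    keep = False      # a selected negative span is itself pruned
                    break
                if assignment[tau] == '+' and crosses(sigma, tau):
                    keep = False      # spans crossing a positive span are pruned
                    break
            mask[sigma] = keep
        return mask

    def aux_var(words, all_spans, n_iters, selection_kernel, assignment_kernel,
                pruned_expectation):
        """Algorithm 1: average feature expectations over randomly pruned charts."""
        spans = sorted(all_spans)
        assignment = {s: random.choice('+-') for s in spans}
        selected = set(random.sample(spans, max(1, 2 * len(spans) // 3)))
        total = 0.0
        for _ in range(n_iters):
            selected = selection_kernel(selected)            # s ~ k°(s, ·)
            mask = create_mask(selected, assignment, spans)
            total += pruned_expectation(words, mask)         # E_m[f] via inside-outside
            assignment = assignment_kernel(assignment, selected, mask)  # a ~ k_s(a, ·)
        return total / n_iters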
3 Analysis

In this section we show that the procedure described above can be viewed as running an MCMC algorithm. This implies that the guarantees associated with this class of algorithms extend to our procedure. In particular, consistency holds: $\frac{1}{N} \sum_{i=1}^{N} E_{m^{(i)}} f \xrightarrow{\,a.s.\,} \mathbb{E} f$.
3.1 Auxiliary variables and the assignment Markov chain
We start by formally describing the Markov chain over assignments. This is done by defining a collection of Gibbs operators $k_s(\cdot, \cdot)$ indexed by a selection vector s.
The original state space (the space of trees) does not easily decompose into a graphical model where
textbook Gibbs sampling could be applied, so we first augment the state space with auxiliary variables. Broadly speaking, an auxiliary variable is a state augmentation such that the target distribution
is a marginal of the expanded distribution. It is called auxiliary because the parts of the samples corresponding to the augmentation are discarded at the end of the computation. At an intermediate
stage, however, the state augmentation helps explore the space efficiently.
This technique is best explained with a concrete example in our parsing setup. In this case, the
augmentation is a collection of |M | binary-valued random variables, each corresponding to a span
of the current sentence w. The auxiliary variable corresponding to span $\sigma \in M$ will be denoted by $A_\sigma$. We define the auxiliary variables by specifying their conditional distribution $A_\sigma \mid (T = t)$. This conditional is a deterministic function: $P(A_\sigma \mid T = t) = [\sigma \in t]$.
With this augmentation, we can now describe the sampler. It is a block Gibbs sampler, meaning that
it resamples a subset of the random variables, conditioning on the other ones. Even when the subsets
selected across iterations overlap, acceptance probabilities are still guaranteed to be one [33].
²The second operation only needs the inside scores.
The blocks of resampled variables will always contain T as well as a subset of the excluded auxiliary
variables. Note that when conditioning on all of the auxiliary variables, the posterior distribution on
T is deterministic. We therefore require that P(|s| < |M | i.o.) = 1 to maintain irreducibility.
We now describe in more detail the effect that each setting of a, s has on the posterior distribution on T. We start by developing the form of the posterior distribution over trees when there is a single selected auxiliary variable, i.e., $T \mid (A_\sigma = a)$. If $a = -$, sampling from $T \mid A_\sigma = -$ requires the same dynamic program as for exact sampling, except that a single cell in the chart is pruned (the cell $\sigma$). The setting where $a = +$ is more interesting: in this case significantly more cells can be pruned. Indeed, all constituents overlapping with $\sigma$ are pruned. This can lead to a speed-up of up to a multiplicative constant of $8 = 2^3$, when the span $\sigma$ has length $|\sigma| = |w|/2$. More constraints are maintained during resampling steps in practice (i.e., $|s| > 1$), leading to a large empirical speedup.
Consider now the problem of jointly resampling the block containing T and a collection of excluded
auxiliary variables {A? : ? ?
/ s} given a collection of selected ones. We can write the decomposition:
$$P(T = t, S \mid C) = P(T = t \mid C) \prod_{\sigma \notin s} P(A_\sigma = a_\sigma \mid T = t) = P(T = t \mid C) \prod_{\sigma \notin s} \mathbf{1}\{a_\sigma = [\sigma \in t]\},$$
where $S = \{A_\sigma = a_\sigma : \sigma \notin s\}$ is a configuration of the excluded auxiliary variables and $C = \{A_\sigma = a_\sigma : \sigma \in s\}$ is a configuration of the selected ones. The first factor in the second line is again a pruned
dynamic program (described in Algorithm 3). The product of indicator functions shows that once a
tree has been picked, the excluded auxiliary variables can be set to new values deterministically by
reading from the sampled tree t whether $\sigma$ is a constituent, for each $\sigma \notin s$.
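In code, this deterministic read-off is a one-liner per span; the tree is represented here, by assumption, as the set of its constituent spans.

    def update_excluded(assignment, selected, tree_spans):
        """Set each excluded auxiliary variable to [sigma in t] after a tree
        has been sampled; `tree_spans` is the tree as a set of (j, k) spans."""
        new_assignment = dict(assignment)
        for sigma in new_assignment:
            if sigma not in selected:           # excluded variables only
                new_assignment[sigma] = '+' if sigma in tree_spans else '-'
        return new_assignment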
Given a selection vector s, we denote the induced block Gibbs kernel described above by $k_s(\cdot, \cdot)$. Since this kernel depends on the previous state only through the assignments of the auxiliary variables, we can also write it as a transition kernel on the space $\{+, -\}^{|M|}$ of auxiliary variable assignments: $k_s(a, a')$.
3.2 The selection chain
There is a separate mechanism, $k^\circ$, that updates at each iteration the selection s of the auxiliary variables. This mechanism corresponds to picking which Gibbs operator $k_s$ will be used to transition in the Markov chain on assignments described above. We will denote the random variable corresponding to the selection vector s at state $(i)$ by $S^{(i)}$.
In standard treatments of MCMC algorithms [33, 22], the variables $S^{(i)}$ are restricted to be either independent (a mixture of kernels), or deterministic enumerations (an alternation of kernels). However, this restriction can be relaxed to having $S^{(i)}$ be itself a Markov chain with kernel $k^\circ : \{0,1\}^{|M|} \times \{0,1\}^{|M|} \to [0, 1]$. This relaxation can be thought of as allowing stochastic policies for kernel selection.³
The choice of $k^\circ$ is important. To understand why, recall that in the situation where $A_\sigma = -$, a single cell in the chart is pruned, whereas in the case where $A_\sigma = +$, a large fraction of the chart can be ignored. The construction of $k^\circ$ is therefore where having a simpler model or heuristic at hand can play a role: as a way to favor the selection of constituents that are likely to be positive, so that better speedup can be achieved. Note that the algorithm can recover from mistakes in the simpler model, since the assignments of the auxiliary variables are also resampled.
Another issue that should be considered when designing $k^\circ$ is that it should avoid self-transitions (repeating the same set of selections). To see why, note that if $(s, a) = (s', a')$, then $m = m(s, a) = m(s', a') = m'$ and hence $\frac{E_m f + E_{m'} f}{2} = E_m f$: the estimator is unchanged in this case, even after paying the computational cost of a second iteration.

³There is a short and intuitive argument to justify this relaxation. Let $x^\star$ be a state from $k^\circ$, and consider the set of paths P starting at $x^\star$ and extended until they first return to $x^\star$. Many of these paths have infinite length; however, if $k^\circ$ is positive recurrent, $k^\circ(\cdot, \cdot)$ will assign probability zero to these paths. We then use the following reduction: when the chain is at $x^\star$, first pick a path from P under the distribution induced by $k^\circ$ (this is a mixture of kernels). Once a path is selected, deterministically follow the edges in the path until coming back to $x^\star$ (alternation of kernels). Since mixtures and alternations of $\pi$-invariant kernels preserve $\pi$-invariance, we are done.
The mechanism we used takes both of these issues into consideration. First, it uses a simpler model (for instance a grammar with fewer non-terminal symbols) to pick a subset $M' \subseteq M$ of the spans that have high posterior probability. Our kernel $k^\circ$ is restricted to selection vectors s such that $s \subseteq M'$. Next, in order to avoid repetition, our kernel transitions from a previous selection s to the next one, $s'$, as follows: after picking a random subset $R \subseteq s$ of size $|s|/2$, define $s' = (M' \setminus s) \cup R$. Provided that the chain is initialized with $|s| = 2|M'|/3$, this scheme has the property that it changes a large portion of the state at every iteration (more precisely, $|s \cap s'| = |M'|/3$), and moreover all subsets of $M'$ of size $2|M'|/3$ are eventually resampled with probability one. Note that this update depends on the previous selection vector, but not on the assignment vector.
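This transition is easy to state in code; a small sketch follows, with spans assumed to be sortable tuples.

    import random

    def selection_transition(selected, m_prime, rng=random):
        """One step of the selection chain: s' = (M' \\ s) ∪ R with R ⊆ s and
        |R| = |s| / 2. If |s| = 2|M'| / 3, then |s'| = |s|, as described above."""
        r = set(rng.sample(sorted(selected), len(selected) // 2))
        return (set(m_prime) - set(selected)) | r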
Given the asymmetric effect between conditioning on positive versus negative auxiliary variables, it is tempting to let $k^\circ$ depend on the current assignment of the auxiliary variables. Unfortunately, such schemes will not converge to the correct distribution in general. Counterexamples are given in the adaptive MCMC literature [2].
3.3 Accelerated averaging
In this section, we justify the way expected sufficient statistics are estimated from the collection of
samples; in other words, how the variable E is updated in Algorithm 1.
In a generic MCMC situation, once samples $X^{(1)}, X^{(2)}, \ldots$ are collected, the traditional way of estimating expected sufficient statistics f is to average "hard counts," i.e., to use the estimator $S_N = \frac{1}{N} \sum_{i=1}^{N} f(X^{(i)})$. In our case $X^{(i)}$ contains the current tree and assignments, $(T^{(i)}, A^{(i)})$.
For general Metropolis-Hastings chains, this is often the only method available. On the other hand, in our parsing setup (and more generally, with any Gibbs sampler) it turns out that there is a more efficient way of combining the samples [23]. The idea behind this alternative is to take "soft counts." This is what we do when we add $E_m f$ to the running average in Algorithm 1.
Suppose we have extracted samples $X^{(1)}, X^{(2)}, \ldots, X^{(i)}$, with corresponding selection vectors $S^{(1)}, S^{(2)}, \ldots, S^{(i)}$. In order to transition to the next step, we will have to sample from the probability distribution denoted by $k_{S^{(i)}}(X^{(i)}, \cdot)$. In the standard setting, we would extract a single sample $X^{(i+1)}$ and add $f(X^{(i+1)})$ to a running average.
More formally, the accelerated averaging method consists of adding the following soft count instead: $\int f(x)\, k_{S^{(i)}}(X^{(i)}, dx)$, which can be computed with one extra pruned outside computation in our parsing setup. This quantity was denoted $E_m f$ in the previous section. The final estimator then has the form: $S'_N = \frac{1}{N-1} \sum_{i=1}^{N-1} \int f(x)\, k_{S^{(i)}}(X^{(i)}, dx)$.⁴
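The difference between hard and soft counts is easiest to see on a toy finite-state chain; this sketch (states, kernel, and feature are all invented for illustration) computes both estimators side by side.

    import random

    def hard_and_soft_estimates(kernel, f, x0, n_steps, seed=0):
        """Compare the hard-count estimator (1/N) sum f(X_i) with the soft
        (Rao-Blackwellized) estimator that averages sum_y k(X_i, y) f(y).
        `kernel(x)` returns a dict {next_state: transition probability}.
        """
        rng = random.Random(seed)
        x, hard, soft = x0, 0.0, 0.0
        for _ in range(n_steps):
            probs = kernel(x)
            soft += sum(p * f(y) for y, p in probs.items())  # exact one-step average
            states, weights = zip(*probs.items())
            x = rng.choices(states, weights=weights)[0]      # actual transition
            hard += f(x)                                     # hard count
        return hard / n_steps, soft / n_steps

Both estimators converge to the same expectation; the soft-count version typically has lower variance, which is the point of adding $E_m f$ rather than $f(X^{(i+1)})$ in Algorithm 1.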
4 Experiments
While we used the task of monolingual parsing to illustrate our randomized pruning procedure, the
technique is most powerful when the dynamic program is a higher-order polynomial. We therefore
demonstrate the utility of randomized pruning on a bitext parsing task. In bitext parsing, we have
sentence-aligned corpora from two languages, and are computing expectations over aligned parse
trees [6, 28]. The model we use is most similar to [3], but we extend this model and allow rules to
mix terminals and non-terminals, as is often done in the context of machine translation [8]. These
rules were excluded in [3] for tractability reasons, but our sampler allows efficient sampling in this
more challenging setup.
In the terminology of adaptor grammars [19], our sampling step involves resampling an adapted
derivation given a base measure derivation for each sentence. Concretely, the problem is to sample
from a class of isotonic bipartite graphs over the nodes of two trees. By isotonic we mean that the
⁴As a side note, we make the observation that this estimator is reminiscent of a structured mean field update. It is different though, since it is still an asymptotically unbiased estimator, while mean field approximations converge in finite time to a biased estimate.
Figure 4: Because each sampling step is three orders of magnitude faster than the exact computation
(a,b), we can afford to average over multiple samples and thereby reduce the L2 bias compared to
a fixed pruning scheme (c). Our auxiliary variable sampling scheme also substantially outperforms
the tic-tac-toe pruning heuristic (d).
edges E of this bipartite graph should have the property that if two non-terminal pairs $(\alpha, \alpha')$ and $(\beta, \beta')$ are aligned in the sampled bipartite graph, i.e., $(\alpha, \alpha') \in E$ and $(\beta, \beta') \in E$, then $\alpha \preceq \beta \iff \alpha' \preceq \beta'$, where $\alpha \preceq \beta$ denotes that $\alpha$ is an ancestor of $\beta$. The weight (up to a proportionality constant) of each of these alignments is obtained as follows: first, consider each aligned point as the left-hand side of a rule. Next, multiply the scores of these rules. If we let p, q be the lengths of the two sentences, one can check that this yields a dynamic program of complexity $O(p^{b+1} q^{b+1})$, where b is the branching factor (we follow [3] and use b = 3).
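The isotonic constraint itself is easy to check directly on a candidate edge set; here is a small sketch, with ancestry supplied as predicate functions (an assumed interface, not the authors' data structures).

    def is_isotonic(edges, is_anc_src, is_anc_tgt):
        """Check the isotonic property of a candidate alignment: for aligned
        pairs (a, a1) and (b, b1), a is an ancestor of b in the source tree
        iff a1 is an ancestor of b1 in the target tree."""
        edge_list = list(edges)
        for a, a1 in edge_list:
            for b, b1 in edge_list:
                if is_anc_src(a, b) != is_anc_tgt(a1, b1):
                    return False
        return True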
We picked this particular bilingual bitext parsing formalism for two reasons. First, it is relevant to
machine translation research. Several researchers have found that state-of-the-art performance can
be attained using grammars that mix terminals and non-terminals in their rules [8, 14]. Second, the
randomized pruning method is most competitive in cases where the dynamic program has a sufficiently high degree. We did experiments on monolingual parsing which showed that the improvements were not significant for most sentence lengths, and were inferior to the coarse-to-fine method of [25].
The bitext parsing version of the randomized pruning algorithm is very similar to the monolingual
case. Rather than being over constituent spans, our auxiliary variables in the bitext case are over
induced alignments of synchronous derivations. A pair of words is aligned if it is emitted by the
same synchronous rule. Note that this includes many-to-many and null alignments since several or
zero lexical elements can be emitted by a single rule. Given two aligned sentences, the auxiliary
variables Ai,j are the pq binary random variables indicating whether word i is aligned with word j.
To compare our approximate inference procedure to exact inference, we follow previous work [15,
29] and measure the L2 distance between the pruned expectations and the exact expectations.5
4.1 Results
We ran our experiments on the Chinese Treebank (and its English translation) [39], limiting the
product of the sentence lengths of the two sentences to p ? q ? 130. This was necessary because computing exact expectations (as needed for comparing to our baseline) quickly becomes
prohibitive. Note that our pruning method, in contrast, can handle much longer sentences without problem: one pass through all 1493 sentences with a product length of less than 1000 took 28 minutes on one 2.66GHz Xeon CPU.
We used the BerkeleyAligner [21] to obtain high-precision, intersected alignments to construct the high-confidence set $M'$ of auxiliary variables needed for $k^\circ$ (Section 3.2); in other words, to construct the support of the selection chain $S^{(i)}$.
For randomized pruning to be efficient, we need to be able to extract a large number of samples
within the time required for computing the exact expectations. Figure 4(a) shows the average time
required to compute the full dynamic program and the dynamic program required to extract a single sample for varying sentence product lengths. The ratio between the two (explicitly shown in
⁵More precisely, we averaged this bias across the sentence-pairs: $\mathrm{bias} = \frac{1}{|I|} \sum_{i \in I} \sum_{k=1}^{K} \big( E_i[f_k] - \hat{E}_i[f_k] \big)^2$, where $E_i[f]$ and $\hat{E}_i[f]$ are shorthand notations for exact and approximate expectations.
Figure 4(b)) increases with the sentence lengths, and reaches three orders of magnitude, making it
possible to average over a large number of samples, while still greatly reducing computation time.
We can compute expectations for many samples very efficiently, but how accurate are the approximated expectations? Figure 4(c) shows that averaging over several masks reduces bias significantly.
In particular, the bias increases considerably for longer sentences when only a single sample is used,
but remains roughly constant when we average multiple samples. To determine the number of samples in this experiment, we measured the time required for exact inference, and ran the auxiliary
variable sampler for half of that time. The main point of Figure 4(c) is to show that under realistic running time conditions, the bias of the auxiliary variable sampler stays roughly constant as a
function of sentence length.
Finally, we compared the auxiliary variable algorithm to tic-tac-toe pruning, a heuristic proposed in
[40] and improved in [41]. Tic-tac-toe is an algorithm that efficiently precomputes a figure of merit
for each bispan. This figure of merit incorporates an inside score and an outside score. To compute
this score, we used a product of the two IBM model 1 scores (one for each directionality). When a
bispan figure of merit falls under a threshold, it is pruned away.
In Figure 4(d), each curve corresponds to a family of heuristics with varying aggressiveness. With
tic-tac-toe, aggressiveness is increased via the cut-off threshold, while with the auxiliary variable
sampler, it is controlled by letting the sampler run for more iterations. For each algorithm, its
coordinates correspond to the mean L2 bias and mean time in milliseconds per sentence. The plot
shows that there is a large regime where the auxiliary variable algorithm dominates tic-tac-toe for
this task. Our method is competitive up to a mean running time of about 15 sec/sentence, which is
well above the typical running time one needs for realistic, large scale training.
5 Related work
There is a large body of related work on approximate inference techniques. When the goal is to
maximize an objective function, simple beam pruning [10] can be sufficient. However, as argued in
[4], beam pruning is not appropriate for computing expectations because the resulting approximation
is too concentrated around the mode. To overcome this problem, [5] suggest adding a collection of
samples to a beam of k-best estimates. Their approach is quite different from ours, as no auxiliary variables are used.
Auxiliary variables are quite versatile and have been used to create MCMC algorithms that can
exploit gradient information [11], efficient samplers for regression [1], for unsupervised Bayesian
inference [31], automatic sampling of generic distributions [24], and non-parametric Bayesian statistics [37, 20, 36]. In computer vision, in particular, an auxiliary variable sampler developed by [30]
is widely used for image segmentation [27].
6 Conclusion
Mask-based pruning is an effective way to speed up large dynamic programs for calculating feature
expectations. Aggressive masks introduce heavy bias, while conservative ones offer only limited
speed-ups. Our results show that, at least for bitext parsing, using many randomized aggressive
masks generated with an auxiliary variable sampler is superior in time and bias to using a single,
more conservative one. The applicability of this approach is in no way limited to the cases considered here. Randomized pruning will be most advantageous when high-order dynamic programs can
be vastly sped up by masking, yet no single aggressive mask is likely to be adequate.
References
[1] J. Albert and S. Chib. Bayesian analysis of binary and polychotomous response data. JASA, 1993.
[2] C. Andrieu and E. Moulines. On the ergodicity properties of some adaptive MCMC algorithms. Ann.
Appl. Probab., 2006.
[3] P. Blunsom, T. Cohn, C. Dyer, and M. Osborne. A Gibbs sampler for phrasal synchronous grammar
induction. In EMNLP, 2009.
8
[4] P. Blunsom, T. Cohn, and M. Osborne. A discriminative latent variable model for statistical machine
translation. In ACL-HLT, 2008.
[5] P. Blunsom and M. Osborne. Probabilistic inference for machine translation. In EMNLP, 2008.
[6] D. Burkett and D. Klein. Two languages are better than one (for syntactic parsing). In EMNLP '08, 2008.
[7] E. Charniak and M. Johnson. Coarse-to-fine n-best parsing and maxent discriminative reranking. In ACL,
2005.
[8] D. Chiang. A hierarchical phrase-based model for statistical machine translation. In ACL, 2005.
[9] S. Clark and J. R. Curran. Parsing the WSJ using CCG and log-linear models. In ACL, 2004.
[10] M. Collins. Head-Driven Statistical Models for Natural Language Parsing. PhD thesis, UPenn, 1999.
[11] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 1987.
[12] J. Finkel, A. Kleeman, and C. Manning. Efficient, feature-based, conditional random field parsing. In
ACL, 2008.
[13] J. R. Finkel, C. D. Manning, and A. Y. Ng. Solving the problem of cascading errors: Approximate
Bayesian inference for linguistic annotation pipelines. In EMNLP, 2006.
[14] M. Galley, M. Hopkins, K. Knight, and D. Marcu. What?s in a translation rule? In HLT-NAACL, 2004.
[15] A. Globerson and T. Jaakkola. Approximate inference using planar graph decomposition. In NIPS, 2006.
[16] J. Goodman. Parsing algorithms and metrics. In ACL, 1996.
[17] J. Goodman. Global thresholding and multiple-pass parsing. In EMNLP, 1997.
[18] M. Johnson. Joint and conditional estimation of tagging and parsing models. In ACL, 2001.
[19] M. Johnson, T. L. Griffiths, and S. Goldwater. Bayesian inference for PCFGs via Markov Chain Monte
Carlo. In ACL, 2007.
[20] P. Liang, M. I. Jordan, and B. Taskar. A permutation-augmented sampler for Dirichlet process mixture
models. In ICML, 2007.
[21] P. Liang, B. Taskar, and D. Klein. Alignment by agreement. In NAACL, 2006.
[22] D. J. C. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge U. Press, 2003.
[23] I. W. McKeague and W. Wefelmeyer. Markov chain Monte Carlo and Rao-Blackwellization. Statistical
Planning and Inference, 2000.
[24] R. Neal. Slice sampling. Annals of Statistics, 2000.
[25] S. Petrov and D. Klein. Improved inference for unlexicalized parsing. In HLT-NAACL '07, 2007.
[26] S. Petrov and D. Klein. Discriminative log-linear grammars with latent variables. In NIPS, 2008.
[27] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 2000.
[28] D. Smith and N. Smith. Bilingual parsing with factored estimation: Using English to parse Korean. In EMNLP '04, 2004.
[29] D. A. Smith and J. Eisner. Dependency parsing by belief propagation. In EMNLP, 2008.
[30] R. H. Swendsen and J. S. Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Physical Review Letters, 1987.
[31] M. A. Tanner and W. H. Wong. The calculation of posterior distributions by data augmentation. JASA,
1987.
[32] B. Taskar, D. Klein, M. Collins, D. Koller, and C. Manning. Max-margin parsing. In EMNLP, 2004.
[33] L. Tierney. Markov chains for exploring posterior distributions. The Annals of Statistics, 1994.
[34] I. Titov and J. Henderson. Loss minimization in parse reranking. In EMNLP, 2006.
[35] J. Turian, B. Wellington, and I. D. Melamed. Scalable discriminative learning for natural language parsing
and translation. In NIPS, 2006.
[36] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov
model. In ICML, 2008.
[37] S. G. Walker. Sampling the Dirichlet mixture model with slices. Communications in Statistics - Simulation
and Computation, 2007.
[38] J. Wolfe, A. Haghighi, and D. Klein. Fully distributed EM for very large datasets. In ICML '08, 2008.
[39] N. Xue, F-D Chiou, and M. Palmer. Building a large-scale annotated Chinese corpus. In COLING, 2002.
[40] H. Zhang and D. Gildea. Stochastic lexicalized inversion transduction grammar for alignment. In ACL,
2005.
[41] H. Zhang, C. Quirk, R. C. Moore, and D. Gildea. Bayesian learning of non-compositional phrases with
synchronous parsing. In ACL, 2008.
2,992 | 3,711 | Perceptual Multistability as Markov Chain Monte
Carlo Inference
Samuel J. Gershman
Department of Psychology and Neuroscience Institute
Princeton University
Princeton, NJ 08540
[email protected]
Edward Vul & Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139
{evul,jbt}@mit.edu
Abstract
While many perceptual and cognitive phenomena are well described in terms of
Bayesian inference, the necessary computations are intractable at the scale of realworld tasks, and it remains unclear how the human mind approximates Bayesian
computations algorithmically. We explore the proposal that for some tasks, humans use a form of Markov Chain Monte Carlo to approximate the posterior distribution over hidden variables. As a case study, we show how several phenomena
of perceptual multistability can be explained as MCMC inference in simple graphical models for low-level vision.
1 Introduction
People appear to make rational statistical inferences from noisy, uncertain input in a wide variety
of perceptual and cognitive domains [1, 9]. However, the computations for such inference, even for
relatively small problems, are often intractable. For larger problems like those people face in the
real world, the space of hypotheses that must be entertained is infinite. So how can people achieve
solutions that seem close to the Bayesian ideal? Recent work has suggested that people may use
approximate inference algorithms similar to those used for solving large-scale problems in Bayesian
AI and machine learning [23, 4, 14]. "Rational models" of human cognition at the level of computational theories are often inspired by models for analogous inferences in machine learning. In the same spirit of reverse engineering cognition, we can also look to the general-purpose approximation methods used in these engineering fields as the inspiration for "rational process models": principled algorithmic models for how Bayesian computations are implemented approximately in the human
mind.
Several authors have recently proposed that humans approximate complex probabilistic inferences
by sampling [19, 14, 21, 6, 4, 24, 23], constructing Monte Carlo estimates similar to those used in
Bayesian statistics and AI [16]. A variety of psychological phenomena have natural interpretations
in terms of Monte Carlo methods, such as resource limitations [4], stochastic responding [6, 23] and
order effects [21, 14]. The Monte Carlo methods that have received most attention to date as rational
process models are importance sampling and particle filtering, which are traditionally seen as best
suited to certain classes of inference problems: static low dimensional models and models with
explicit sequential structure, respectively. Many problems in perception and cognition, however,
require inference in high dimensional models with sparse and noisy observations, where the correct
global interpretation can only be achieved by propagating constraints from the ambiguous local
information across the model. For these problems, Markov Chain Monte Carlo (MCMC) methods
are often the method of choice in AI and machine vision [16]. Our goal in this paper is to explore
the prospects for rational process models of perceptual inference based on MCMC.
MCMC refers to a family of algorithms that sample from the joint posterior distribution in a high-dimensional model by gradually drifting through the hypothesis space of complete interpretations,
following a Markov chain that asymptotically spends time at each point in the hypothesis space
proportional to its posterior probability. MCMC algorithms are quite flexible, suitable for a wide
range of approximate inference problems that arise in cognition, but with a particularly long history
of application in visual inference problems ([8] and many subsequent papers).
The chains of hypotheses generated by MCMC show characteristic dynamics distinct from other sampling algorithms: the hypotheses will be temporally correlated and, as the chain drifts through hypothesis space, it will tend to move from regions of low posterior probability to regions of high probability; hence hypotheses will tend to cluster around the modes. Here we show that the characteristic
dynamics of MCMC inference in high-dimensional, sparsely coupled spatial models correspond to
several well-known phenomena in visual perception, specifically the dynamics of multistable percepts.
Perceptual multistability [13] has long been of interest both phenomenologically and theoretically
for models of perception as Bayesian inference [7, 20, 22, 10]. The classic example of perceptual
multistability is the Necker cube, a 2D line drawing of a cube perceived to alternate between two
different depth configurations (Figure 1A). Another classic phenomenon, extensively studied in psychophysics but less well known outside the field, is binocular rivalry [2]: when incompatible images
are presented to the two eyes, subjects report a percept that alternates between the images presented
to the left eye and that presented to the right (e.g., Figure 1B).
Bayesian modelers [7, 20, 22, 10] have interpreted these multistability phenomena as reflections
of the shape of the posterior distribution arising from ambiguous observations, images that could
have plausibly been generated by two or more distinct scenes. For the Necker cube, two plausible
depth configurations have indistinguishable 2D projections; with binocular rivalry, two mutually
exclusive visual inputs have equal perceptual fidelity. Under these conditions, the posterior over
scene interpretations is bimodal, and rivalry is thought to reflect periodic switching between the
modes. Exactly how this "mode-switching" relates to the mechanisms by which the brain implements Bayesian perceptual inference is less clear, however. Here we explore the hypothesis that
the dynamics of multistability can be understood in terms of the output of an MCMC algorithm,
drawing posterior samples in spatially structured probabilistic models for image interpretation.
Traditionally, bistability has been explained in non-rational mechanistic terms, for example, in terms
of physiological mechanisms for adaptation or reciprocal inhibition between populations of neurons.
Dayan [7] studied network models for Bayesian perceptual inference that estimate the maximum a
posteriori scene interpretation, and proposed that multistability might occur in the presence of a
multimodal posterior due to an additional neural oscillatory process whose function is specifically
to induce mode-switching. He speculated that this mechanism might implement a form of MCMC
inference but he did not pursue the connection formally. Our proposal is most closely related to the
work of Sundareswara and Schrater [20, 22], who suggested that mode-switching in Necker cube-type images reflects a rational sampling-based algorithm for approximate Bayesian inference and
decision making. They presented an elegant sampling scheme that could account for Necker cube
bistability, with several key assumptions: (1) that the visual system draws a sequence of samples
from the posterior over scene interpretations; (2) that the posterior probability of each sample is
known; (3) that samples are weighted based on the product of their posterior probabilities and a
memory decay process favoring more recently drawn samples; and (4) that perceptual decisions are
made deterministically based on the sample with highest weight.
Our goal here is a simpler analysis that comes closer to the standard MCMC approaches used for
approximate inference in Bayesian AI and machine vision, and establishing a clearer link between
the mechanisms of perception in the brain and rational approximate inference algorithms on the
engineering side. As in most applications of Bayesian inference in machine vision [8, 16], we do not
assume that the visual system has access to the full posterior distribution over scene interpretations,
Figure 1: (A) Necker cube. (B) Binocular rivalry stimuli. (C) Markov random field image model with lattice
and ring (D) topologies. Shaded nodes correspond to observed variables; unshaded nodes correspond to hidden
variables.
which is expected to be extremely high-dimensional and complex. The visual system might be
able to evaluate only relative probabilities of two similar hypotheses (as in Metropolis-Hastings),
or to compute local conditional posteriors of one scene variable conditioned on its neighbors (as
in Gibbs sampling). We also do not make extra assumptions about weighting samples based on
memory decay, or require that conscious perceptual decisions be based on a memory for samples;
consciousness has access to only the current state of the Markov chain, reflecting the observer's
current brain state.
Here we show that several characteristic phenomena of multistability derive naturally from applying
standard MCMC inference to Markov random fields (MRFs): high-dimensional, loosely coupled
graphical models with spatial structure characteristic of many low-level and mid-level vision problems. Specifically, we capture the classic findings of Gamma-distributed mode-switching times in
bistable perception; the biasing effects of contextual stimuli; the situations in which fused (rather
than bistable) percepts occur, and the propagation of perceptual switches in traveling waves across
the visual field. Although it is unlikely that this MCMC scheme corresponds exactly to any process
in the visual system, and it is almost surely too simplified or limited as a general account of perceptual multistability, our results suggest that MCMC could provide a promising foundation on which
to build rational process-level accounts of human perception and perhaps cognition more generally.
2 Markov random field image model
Our starting point is a simple and schematic model of vision problems embodying the idea that
images are generated by a set of hidden variables with local dependencies. Specifically, we assume
that each observed image element xi is connected to a hidden variable zi by a directed edge, and each
hidden variable is connected to its neighbors (in set ci ) by an undirected edge (thus implying that
each hidden variable is conditionally independent of all others given its neighbors). This Markov
property is often exploited in computer vision [8] because elements of an image tend to depend on
their adjacent neighbors, but are less influenced by more distant elements. Formally, this assumption
corresponds to a Markov random field (MRF). Different topologies of the MRF (e.g., lattice or ring)
can be used to capture the structure of different visual objects (Figure 1C,D). The joint distribution
over configurations of hidden and observed variables is given by:
$$P(\mathbf{z}, \mathbf{x}) = Z^{-1} \exp\left[ -\sum_i R(x_i \mid z_i) - V(z_i \mid z_{c_i}) \right], \qquad (1)$$
where Z is a normalizing constant, and R and V are potential functions. In a Gaussian MRF, the conditional potential function over hidden node i is given by

$$V(z_i \mid z_{c_i}) = \nu_i - \lambda \sum_{j \in c_i} (z_i - z_j)^2, \qquad (2)$$

where $\lambda$ is a precision (inverse variance) parameter specifying the coupling between neighboring hidden nodes; when $\lambda$ is large, a node will be strongly influenced by its neighbors. The $\nu_i$ term represents the prior mean of $z_i$, which can be used to encode contextual biases, as we discuss below.
We construct the likelihood potential $R(x_i \mid z_i)$ to express the ambiguity of the image by making it multimodal: several different hidden causes are equally likely to have generated the image. Since for our purposes only the likelihood of $x_i$ matters, we can arbitrarily set $x_i = 0$ and formalize the multimodal likelihood as a mixture of Gaussians evaluated at points a and b:

$$R(x_i \mid z_i) = \mathcal{N}(z_i; a, \sigma^2) + \mathcal{N}(z_i; b, \sigma^2). \qquad (3)$$
The computational problem for vision (as we are framing it) is to infer the hidden causes of an observed image. Given an observed image $\mathbf{x}$, the posterior distribution over hidden causes $\mathbf{z}$ is

$$P(\mathbf{z} \mid \mathbf{x}) = \frac{P(\mathbf{x} \mid \mathbf{z}) P(\mathbf{z})}{\int_{\mathbf{z}} P(\mathbf{x} \mid \mathbf{z}) P(\mathbf{z})\, d\mathbf{z}}. \qquad (4)$$
There are a number of reasons why Equation 4 may be computationally intractable. One is that the
integration in the denominator may be high dimensional and lacking an analytical solution. Another
is that there may not exist a simple functional form for the posterior. Assuming it is intractable to
perform exact inference, we now turn to approximate solutions based on sampling.
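Equations 1-3 translate directly into an unnormalized log-posterior that a sampler can evaluate. The sketch below is ours, and because the extracted equations leave the sign conventions ambiguous, it commits to the natural reading: the bimodal likelihood is rewarded, neighbor disagreement is penalized with coupling $\lambda$, and the bias $\nu$ enters as a per-node linear term (one way, among several, to encode a prior mean).

    import math

    def log_unnormalized_posterior(z, neighbors, lam=0.25, sigma=0.1,
                                   a=1.0, b=-1.0, nu=0.0):
        """Unnormalized log P(z | x) for the MRF of Equations 1-3, under our
        reading of the sign conventions."""
        def normal_pdf(x, mu):
            return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

        logp = 0.0
        for i, zi in enumerate(z):
            logp += math.log(normal_pdf(zi, a) + normal_pdf(zi, b))  # R(x_i | z_i)
            logp += nu * zi                                          # contextual bias
            for j in neighbors[i]:
                if j > i:                           # count each undirected edge once
                    logp -= lam * (zi - z[j]) ** 2  # smoothness coupling
        return logp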
3 Markov chain Monte Carlo
The basic idea behind Monte Carlo methods is to approximate a distribution with a set of samples
drawn from that distribution. In order to use Monte Carlo approximations, one must be able to draw
samples from the posterior, but it is often impossible to do so directly. MCMC methods address
this problem by drawing samples from a Markov chain that converges to the posterior distribution
[16]. There are many variations of MCMC methods but here we will focus on the simplest: the
Metropolis algorithm [18]. Each step of the algorithm consists of two stages: a proposal stage and
an acceptance stage. An accepted proposal is a sample from a Markov chain that provably converges
to the posterior. We will refer to $\mathbf{z}^{(l)}$ as the "state" at step l. In the proposal stage, a new state $\mathbf{z}'$ is proposed by generating a random sample from a proposal density $Q(\mathbf{z}'; \mathbf{z}^{(l)})$ that depends on the current state. In the acceptance stage, this proposal is accepted with probability

$$P\big(\mathbf{z}^{(l+1)} = \mathbf{z}' \mid \mathbf{z}^{(l)}\big) = \min\left[ 1, \frac{P(\mathbf{z}' \mid \mathbf{x})}{P(\mathbf{z}^{(l)} \mid \mathbf{x})} \right], \qquad (5)$$

where we have assumed for simplicity that the proposal is symmetric: $Q(\mathbf{z}'; \mathbf{z}) = Q(\mathbf{z}; \mathbf{z}')$. If the
proposal is rejected, the current state is repeated in the chain.
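Putting the pieces together, a single-site Metropolis sampler for this model is only a few lines. This sketch reuses `log_unnormalized_posterior` from the previous sketch and builds the 4-connected lattice used in the simulations; it recomputes the full log-posterior at every step for clarity, although only local terms change.

    import math
    import random

    def lattice_neighbors(rows, cols):
        """Neighbor lists for a 4-connected lattice (e.g. a 4 x 4 grid)."""
        nbrs = [[] for _ in range(rows * cols)]
        for r in range(rows):
            for c in range(cols):
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        nbrs[r * cols + c].append(rr * cols + cc)
        return nbrs

    def metropolis(z0, neighbors, n_iters, proposal_sd=1.5, seed=0):
        """Single-site Metropolis sampling of the MRF posterior; returns the trace."""
        rng = random.Random(seed)
        z = list(z0)
        logp = log_unnormalized_posterior(z, neighbors)
        trace = []
        for _ in range(n_iters):
            i = rng.randrange(len(z))                  # pick one hidden node
            old = z[i]
            z[i] = old + rng.gauss(0.0, proposal_sd)   # symmetric Gaussian proposal
            new_logp = log_unnormalized_posterior(z, neighbors)
            delta = new_logp - logp
            if delta >= 0 or rng.random() < math.exp(delta):
                logp = new_logp                        # accept
            else:
                z[i] = old                             # reject: repeat current state
            trace.append(list(z))
        return trace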
4 Results
We now show how the Metropolis algorithm applied to the MRF image model gives rise to a number of phenomena in binocular rivalry experiments. Unless mentioned otherwise, we use the following parameters in our simulations: $\nu = 0$, $\lambda = 0.25$, $\sigma = 0.1$, $a = 1$, $b = -1$. For the ring topology, we used $\lambda = 0.2$ to compensate for the fewer neighbors around each node as compared to the lattice topology. The sampler was run for 200,000 iterations. For some simulations, we systematically manipulated certain parameters to demonstrate their role in the model. We have found that the precise values of these parameters have relatively little effect on the model's behavior. For all simulations we used a Gaussian proposal (with standard deviation 1.5) that alters the state of one hidden node (selected at random) on each iteration.
4.1 Distribution of dominance durations
One of the most robust findings in the literature on perceptual multistability is that switching times
in binocular rivalry between different stable percepts tend to follow a Gamma-like distribution. In
other words, the "dominance" durations of stability in one mode tend to be neither overwhelmingly
short nor long. This effect is so characteristic of binocular rivalry that there have been countless
psychophysical experiments measuring the differences in Gamma switching time parameters across
manipulations, and testing whether Gamma, or log-normal distributions are best [2]. To account for
this characteristic behavior, many papers have described neural circuits that could produce switching
oscillations with the right stochastic dynamics (e.g., [25]). Existing rational process models of
multistability [7, 20, 22] likewise appeal to specific implementational-level constraints to produce
4
Figure 2: (A) Simulated timecourse of bistability in the lattice MRF. Plotted on the y-axis is the number of nodes
with value greater than 0. The horizontal lines show the thresholds for a perceptual switch. (B) Distribution
of simulated dominance durations (mean-normalized) for MRF with lattice topology. Curves show gamma
distributions fitted to simulated (with parameter values shown on the right) and empirical data, replotted from
[17]
this effect. In contrast, here we show how Gamma-distributed dominance durations fall naturally
out of MCMC operating on an MRF.
We constructed a 4 × 4 grid to model a typical binocular rivalry grating. In the typical experiment
reporting a Gamma distribution of dominance durations, subjects are asked to say which of two
images corresponds to their "global" percept. To make the same query of the current state of our
simulated MCMC chain, we defined a perceptual switch to occur when at least 2/3 of the hidden
nodes turn positive or negative. Figure 2A shows a sample of the timecourse1 and the distribution of
dominance durations and maximum-likelihood estimates for the Gamma parameters ? (shape) and
? (scale), demonstrating that the durations produced by MCMC are well-described by a Gamma
distribution (Figure 2B).
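To make the switch criterion and the Gamma fit concrete, the following sketch extracts dominance durations from the positive-node trace `samples` produced by the sampler sketch above and fits the Gamma parameters by maximum likelihood; the 2/3 threshold follows the definition in the text, while the scipy fitting call (with the location fixed at 0) is an assumption of this sketch.

```python
import numpy as np
from scipy import stats

def dominance_durations(n_pos, n_nodes=16, frac=2/3):
    """Durations between perceptual switches: percept A (B) counts as
    dominant while at least `frac` of the nodes are positive (negative)."""
    durations, percept, start = [], None, 0
    for t, c in enumerate(n_pos):
        new = 'A' if c >= frac * n_nodes else ('B' if c <= (1 - frac) * n_nodes else None)
        if new is not None and new != percept:
            if percept is not None:
                durations.append(t - start)
            percept, start = new, t
    return np.array(durations)

durs = dominance_durations(samples)
shape, loc, scale = stats.gamma.fit(durs, floc=0)   # ML estimates (cf. Fig. 2B)
```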
It is interesting to note that the MRF structure of the problem (representing the multivariate structure
of low-level vision) is an important pre-condition to obtaining a Gamma-like distribution of dominance durations: When considering MCMC on only a single node, the measured dominance durations tend to be exponentially-distributed. The Gamma distribution may arise in MCMC on an MRF
because each hidden node takes an exponentially-distributed amount of time to switch (and these
switches follow roughly one after another). In these settings, the total amount of time until enough
nodes switch to one mode will be Gamma-distributed (i.e., the sum of exponentially-distributed random variables is Gamma-distributed). [20, 22] also used this idea to explain mode-switching. In
their model, each sample is paired with a weight initialized to the sample's posterior probability, and
the sample with the largest weight designated as the dominant percept. Since multiple samples may
correspond to the same percept, a particular percept will lose dominance only when the weights on
all such samples decrease below the weights on samples of the non-dominant percept. By assuming
an exponential decay on the weights, the time it takes for a single sample to lose dominance will
be approximately exponentially distributed, leading to a Gamma distribution on the time it takes for
multiple samples of the same percept to lose dominance. Here we have attempted to capture this
effect within a rational inference procedure by attributing the exponential dynamics to the operation of MCMC on individual nodes in the MRF, rather than a memory decay process on individual
samples.
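The core of this argument, that a Gamma emerges as a sum of per-node exponential switch times, can be checked directly: the sum of k i.i.d. exponential variables is Gamma-distributed with shape k, as the small numerical demo below confirms (the value k = 11 is an arbitrary illustrative choice).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k = 11      # e.g., number of node switches needed to trigger a percept switch
waits = rng.exponential(scale=1.0, size=(100_000, k)).sum(axis=1)

# Agreement with the Gamma(shape=k, scale=1) distribution:
print(stats.kstest(waits, 'gamma', args=(k, 0, 1.0)))
```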
4.2 Contextual biases
Much discussion in research on multistability revolves around the extent to which it is influenced by
top-down processes like prior knowledge and attention [2]. In support of the existence of top-down
1 It may seem surprising that the model spends relatively little time near the extremes, and that switches are
fairly gradual. This is not the phenomenology of bistability in a Necker cube, but it is the phenomenology
of binocular rivalry with grating-like stimuli, where experiments have shown that substantial time is spent in
transition periods [3]. It seems that this is the case in scenarios where a simple planar MRF with nearest
neighbor smoothness like the one we're considering is a good model. To capture the perception of depth in
the Necker cube, or rivalry with more complex higher-level stimuli (like natural scenes), a more complex and
densely interconnected graphical model would be required; in such cases the perceptual switching dynamics
will be different.
Figure 3: (A) Stimuli used by [5] in their experiment. On the top are the standard tilted grating patches presented
dichoptically. On the bottom are the tilted grating patches superimposed on a background of rightward-tilting
gratings, a contextual cue that biases dominance towards the rightward-tilting grating patch. (B) Simulated
timecourse of transient preference for a lattice-topology MRF with and without a contextual cue (averaged
over 100 runs of the sampler). (C) Empirical timecourse of transient preference fitted with a scaled cumulative
Gaussian function, reprinted with permission from [17].
influences, several studies have shown that contextual cues can bias the relative dominance of rival
stimuli. For example, [5] superimposed rivalrous tilted grating patches on a background of either
rightward or leftward tilting gratings (Figure 3A) and showed that the direction of background tilt
shifted dominance towards the monocular stimulus with context-compatible tilt. Following [20, 22],
we modeled this result by assuming that the effect of context is to shift the prior mean towards the
contextually-biased interpretation. We simulated this contextual bias by setting the prior mean µ =
1. Figure 3B shows the timecourse of transient preference (probability of a particular interpretation
at each timepoint) for the "context" and "no-context" simulations, illustrating this persistent bias.
Another property of this timeseries is the initial bias exhibited by both the context and no-context
conditions, a phenomenon observed experimentally [17, 22] (Figure 3C). In fact, this is a distinctive
property of Markov chains (as pointed out by [22]): MCMC algorithms generally take multiple
iterations before they converge to the stationary distribution [16]. This initial period is known as the
"burn-in." Thus, human perceptual inference may similarly require an initial burn-in period to reach
the stationary distribution.
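A sketch of how the transient-preference curves of Figure 3B can be computed; `run_chain(mu)` is a hypothetical helper returning one positive-node timecourse for an MRF whose prior mean is mu (mu = 1 modelling the contextual cue, mu = 0 the no-context condition).

```python
import numpy as np

def transient_preference(run_chain, mu, n_runs=100, n_steps=5_000, n_nodes=16):
    """P(contextually-biased percept) at each timepoint, averaged over
    independent sampler runs (cf. Figure 3B)."""
    pref = np.zeros(n_steps)
    for _ in range(n_runs):
        counts = np.asarray(run_chain(mu))[:n_steps]
        pref += counts >= (2 / 3) * n_nodes     # biased percept dominant?
    return pref / n_runs
```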
4.3 Deviations from stable rivalry: fusion
Most models have focused on the "stable" portions of the bistable dynamics of rivalry; however, in
addition to the mode-hopping behavior that characterizes this phenomenon, bistable percepts often
produce other states. In some conditions the two percepts are known to fuse, rather than rival: the
percept then becomes a composite or superposition of the two stimuli (and hence no alternation is
perceived). This fused perceptual state can be induced most reliably by decreasing the distance in
feature space between the two stimuli [11] (Figure 4B) or decreasing the contrast of both stimuli
[15]. These relations are shown schematically in Figure 4A. Neither neural, nor algorithmic, nor
computational models of rivalry have thus far attempted to explain these findings.
In experiments on "fusion", subjects are given three options to report their percept: one of two global
percepts or something in between. We define such a fused percept as a perceptual state lying between
the two "bistable" modes, that is, an interpretation between the two rivalrous, high-probability
interpretations. We can interpret manipulation of feature space distance in terms of the distance
between the modes, and reductions of contrast as increases in the variance around the modes. When
such manipulations are introduced to the MRF model, the posterior distribution changes as in Figure
4A (inset). By making the modes closer together or increasing the variance around the modes,
greater probability mass is assigned to an intermediate zone between the modes: a fused percept.
Thus, manipulating stimulus separation (feature distance) or stimulus fidelity (contrast) changes
the parameterizations of the likelihood function, and these manipulations produce systematically
increasing odds of fused percepts, matching the phenomenology of these stimuli (Figure 4B).
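The mapping from these stimulus manipulations to fused-percept probability can be illustrated with a toy one-dimensional posterior: samples falling in a fixed intermediate zone between the two modes are reported as fused. The particular zone width and parameter values below are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)

def fused_fraction(mode_sep, noise_sd, n=50_000, fused_zone=0.25):
    """Fraction of posterior samples lying in a fixed zone between the
    two modes at +/- mode_sep/2; these count as 'fused' reports."""
    modes = rng.choice([-mode_sep / 2, mode_sep / 2], size=n)
    x = modes + rng.normal(scale=noise_sd, size=n)
    return np.mean(np.abs(x) < fused_zone)

print(fused_fraction(2.0, 0.3), fused_fraction(1.0, 0.3))  # closer modes -> more fusion
print(fused_fraction(2.0, 0.3), fused_fraction(2.0, 0.8))  # more variance -> more fusion
```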
Figure 4: (A) Schematic illustration of manipulating orientation (feature space distance) and contrast in binocular rivalry stimuli. The inset shows effects of different likelihood parameterizations on the posterior distribution,
designed to mimic these experimental manipulations. (B) Experimental effects of increasing feature space distance (depth and color difference) between rivalrous gratings on exclusivity of monocular percepts, reprinted
with permission from [11]. Increasing the distance in feature space between rivalrous stimuli (C) or the contrast of both stimuli (D), modeled as increasing the variance around the modes, increases the probability of
observing an exclusive percept in simulations.
4.4 Traveling waves
Fused percepts are not the only deviations from bistability. In other circumstances, particularly
in binocular rivalry, stability is often incomplete across the visual field, producing "piecemeal"
rivalry, in which one portion of the visual field looks like the image in one eye, while another
portion looks like the image in the other eye. One tantalizing feature of these piecemeal percepts
is the phenomenon known as traveling waves: subjects tend to perceive a perceptual switch as a
"wave" propagating over the visual field [26, 12]: the suppressed stimulus becomes dominant in
an isolated location of the visual field and then gradually spreads. These traveling waves reveal an
interesting local dynamics during an individual switch itself, rather than just the Gamma-distributed
dynamics of the time between complete switches of dominance. Like fused percepts, these intra-switch dynamics have been generally ignored by models of multistability.
Demonstrating the dynamics of traveling waves within patches of the percept requires a different
method of probing perception. Wilson et al. [26] used annular stimuli (Figure 5A), and probed
a particular patch along the annulus; they showed that the time at which the suppressed stimulus
in the test patch becomes dominant is a function of the distance (around the circumference of the
annulus) between the test patch and the patch where a dominance switch was induced by transiently
increasing the contrast of the suppressed stimulus. This dependence of switch-time on distance
(Figure 5B) suggested to Wilson et al. that stimulus dominance was propagating around the annulus.
Using fMRI, Lee et al. [12] showed that the propagation of this "traveling wave" can be measured
in primary visual cortex (V1; Figure 5): they used the retinotopic structure of V1 to identify brain
regions corresponding to different portions of the visual field, then measured the timing of the
response in these regions to the induced dominance switch as a function of the cortical distance from
the location of the initial switch. They found that the temporal delay in the response increased as a
function of cortical distance from the V1 representation of the top of the annulus (Figure 5C).
To simulate such traveling waves within the percept of a stimulus, we constructed an MRF with ring
topology and measured the propagation time (the time at which a mode-switch occurs) at different
hidden nodes along the ring. To simulate the transient increase in contrast at one location to induce
a switch, we initialized one node's state to be +1 and the rest to be −1. Consistent with the idea
of wave propagation, Figure 5D shows the average time for a simulated node to switch modes as
a function of distance around the ring. Intuitively, nodes will tend to switch in a kind of "domino
effect" around the ring; the local dependencies in the MRF ensure that nodes will be more likely
to switch modes once their neighbors have switched. Thus, once a switch at one node has been
accepted by the Metropolis algorithm, a switch at its neighbor is likely to follow.
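A sketch of this traveling-wave simulation: a ring MRF is initialized with one node switched, and each node's first positive crossing is recorded. The potentials reuse the stand-in form from the lattice sketch above, with the coupling strength of 0.2 mentioned in the text for the ring; the ring size is an arbitrary choice here.

```python
import numpy as np

rng = np.random.default_rng(3)

def ring_switch_times(n_nodes=24, coupling=0.2, max_iter=200_000):
    """Initialize node 0 to +1 and the rest to -1, run single-site Metropolis
    on a ring MRF, and record each node's first iteration with positive state;
    plotted against ring distance this traces the traveling wave (Fig. 5D)."""
    z = -np.ones(n_nodes)
    z[0] = 1.0
    def log_p(z):
        smooth = -coupling * ((z - np.roll(z, 1)) ** 2).sum()
        data = np.logaddexp(-(z - 1) ** 2 / 2, -(z + 1) ** 2 / 2).sum()
        return smooth + data
    t_pos = np.full(n_nodes, np.nan)
    t_pos[0] = 0
    for it in range(1, max_iter):
        i = rng.integers(n_nodes)
        z_new = z.copy()
        z_new[i] += rng.normal(scale=1.5)
        if np.log(rng.uniform()) < log_p(z_new) - log_p(z):
            z = z_new
        t_pos[np.isnan(t_pos) & (z > 0)] = it   # first positive crossing
        if not np.isnan(t_pos).any():
            break
    dist = np.minimum(np.arange(n_nodes), n_nodes - np.arange(n_nodes))
    return dist, t_pos
```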
5 Conclusion
We have proposed a "rational process" model of perceptual multistability based on the idea that
humans approximate the posterior distribution over the hidden causes of their visual input with a
set of samples. In particular, the dynamics of the sample-generating process gives rise to much of
Figure 5: Traveling waves in binocular rivalry. (A) Annular stimuli used by Lee et al. (left and center panels) and
the subject percept reported by observers (right panel), in which the low contrast stimulus was seen to spread
around the annulus, starting at the top. Figure reprinted with permission from [12]. (B) Propagation time as
a function of distance around the annulus, replotted from [26]. Filled circles represent radial gratings, open
circles represent concentric gratings. (C) Anatomical image (left panel) showing the retinotopically-mapped
coordinates of the initial and probe locations in V1. Right panel shows the measured fMRI responses for the
two outlined subregions. (D) A transient increase in contrast of the suppressed stimulus induces a perceptual
switch at the location of contrast change. The propagation time for a switch at a probe location increases with
distance (around the annulus) from the switch origin.
the rich dynamics in multistable perception observed experimentally. These dynamics may be an
approximation to the MCMC algorithms standardly used to solve difficult inference problems in
machine learning and statistics [16].
The idea that perceptual multistability can be construed in terms of sampling in a Bayesian model
was first proposed by [20, 22], and our work follows theirs closely in several respects. However, we
depart from that work in the theoretical underpinnings of our model: It is not transparent how well
the sampling scheme in [22, 24] approximates Bayesian inference, or how it corresponds to standard algorithms where the full posterior is not assumed to be available when drawing samples. Our
goal here is to show how some of the basic phenomena of multistable perception can be understood
straightforwardly as the output of familiar, simple and effective methods for approximate inference
in Bayesian machine vision.
A related point of divergence between our model and that of [20, 22], as well as other Bayesian models of multistable perception [7, 10], is that we are able to explain multistable perception in terms of
a well-defined inference procedure that doesn't require ad-hoc appeals to neurophysiological processes like noise, adaptation, inhibition, etc. Thus, our contribution is to show how an inference
algorithm widely used in statistics and computer science can give rise naturally to perceptual multistability phenomena. Of course, we do not wish to argue that neurophysiological processes are
irrelevant. Our goal here was to abstract away from implementational details and make claims about
the algorithmic level. Clearly an important avenue for future work is relating algorithms like MCMC
to neural processes (indeed this connection was suggested previously by [7]).
Another important direction in which to extend this work is from rivalry with low-level stimuli to
more complex vision problems that involve global coherence over the image (such as in natural
scenes). Although similar perceptual dynamics have been observed with a wide range of ambiguous
stimuli, the absence of obvious transition periods with the Necker cube suggests that these dynamics
may differ in important ways from perception of rivalry stimuli.
Acknowledgments: This work was supported by ONR MURI: Complex Learning and Skill Transfer with Video Games N00014-07-1-0937 (PI: Daphne Bavelier); NDSEG fellowship to EV and
NSF DRMS Dissertation grant to EV.
References
[1] J.R. Anderson. The adaptive character of thought. Lawrence Erlbaum Associates, 1990.
[2] R. Blake. A primer on binocular rivalry, including current controversies. Brain and Mind, 2(1):5–38, 2001.
[3] J.W. Brascamp, R. van Ee, A.J. Noest, R.H. Jacobs, and A.V. van den Berg. The time course of binocular rivalry reveals a fundamental role of noise. Journal of Vision, 6(11):8, 2006.
[4] S.D. Brown and M. Steyvers. Detecting and predicting changes. Cognitive Psychology, 58(1):49–67, 2009.
[5] O.L. Carter, T.G. Campbell, G.B. Liu, and G. Wallis. Contradictory influence of context on predominance during binocular rivalry. Clinical and Experimental Optometry, 87:153–162, 2004.
[6] N.D. Daw and A.C. Courville. The pigeon as particle filter. Advances in Neural Information Processing Systems, 20, 2007.
[7] P. Dayan. A hierarchical model of binocular rivalry. Neural Computation, 10(5):1119–1135, 1998.
[8] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, 1984.
[9] T.L. Griffiths and J.B. Tenenbaum. Optimal predictions in everyday cognition. Psychological Science, 17(9):767–773, 2006.
[10] J. Hohwy, A. Roepstorff, and K. Friston. Predictive coding explains binocular rivalry: An epistemological review. Cognition, 108(3):687–701, 2008.
[11] T. Knapen, R. Kanai, J. Brascamp, J. van Boxtel, and R. van Ee. Distance in feature space determines exclusivity in visual rivalry. Vision Research, 47(26):3269–3275, 2007.
[12] S.H. Lee, R. Blake, and D.J. Heeger. Traveling waves of activity in primary visual cortex during binocular rivalry. Nature Neuroscience, 8(1):22–23, 2005.
[13] D.A. Leopold and N.K. Logothetis. Multistable phenomena: changing views in perception. Trends in Cognitive Sciences, 3(7):254–264, 1999.
[14] R.P. Levy, F. Reali, and T.L. Griffiths. Modeling the effects of memory on human online sentence processing with particle filters. Advances in Neural Information Processing Systems, 21:937, 2009.
[15] L. Liu, C.W. Tyler, and C.M. Schor. Failure of rivalry at low contrast: evidence of a suprathreshold binocular summation process. Vision Research, 32(8):1471–1479, 1992.
[16] D.J.C. MacKay. Information theory, inference and learning algorithms. Cambridge University Press, 2003.
[17] P. Mamassian and R. Goutcher. Temporal dynamics in bistable perception. Journal of Vision, 5(4):7, 2005.
[18] N. Metropolis and S. Ulam. The Monte Carlo method. Journal of the American Statistical Association, pages 335–341, 1949.
[19] A.N. Sanborn, T.L. Griffiths, and D.J. Navarro. A more rational model of categorization. In Proceedings of the 28th Annual Conference of the Cognitive Science Society, pages 726–731, 2006.
[20] P.R. Schrater and R. Sundareswara. Theory and dynamics of perceptual bistability. Advances in Neural Information Processing Systems, 19:1217, 2007.
[21] L. Shi, N.H. Feldman, and T.L. Griffiths. Performing Bayesian inference with exemplar models. In Proceedings of the 30th Annual Conference of the Cognitive Science Society, pages 745–750, 2008.
[22] R. Sundareswara and P.R. Schrater. Perceptual multistability predicted by search model for Bayesian decisions. Journal of Vision, 8(5):12, 2008.
[23] E. Vul, N.D. Goodman, T.L. Griffiths, and J.B. Tenenbaum. One and done? Optimal decisions from very few samples. In Proceedings of the 31st Annual Meeting of the Cognitive Science Society, 2009.
[24] E. Vul and H. Pashler. Measuring the crowd within: Probabilistic representations within individuals. Psychological Science, 19(7):645–647, 2008.
[25] H.R. Wilson. Minimal physiological conditions for binocular rivalry and rivalry memory. Vision Research, 47(21):2741–2750, 2007.
[26] H.R. Wilson, R. Blake, and S.H. Lee. Dynamics of travelling waves in visual perception. Nature, 412(6850):907–910, 2001.
2,993 | 3,712 | Speeding up Magnetic Resonance Image Acquisition by Bayesian Multi-Slice Adaptive Compressed Sensing
Matthias W. Seeger
Saarland University and Max Planck Institute for Informatics
Campus E1.4, 66123 Saarbrücken, Germany
[email protected]
Abstract
We show how to sequentially optimize magnetic resonance imaging measurement
designs over stacks of neighbouring image slices, by performing convex variational inference on a large scale non-Gaussian linear dynamical system, tracking
dominating directions of posterior covariance without imposing any factorization
constraints. Our approach can be scaled up to high-resolution images by reductions to numerical mathematics primitives and parallelization on several levels. In
a first study, designs are found that improve significantly on others chosen independently for each slice or drawn at random.
1 Introduction
Magnetic resonance imaging (MRI) [10, 6] is a very flexible imaging modality. Inflicting no harm
on patients, it is used for an ever-growing number of diagnoses in health-care. Its most serious
limitation is acquisition speed, being based on a serial idea (gradient encoding) with limited scope
for parallelization. Fourier (aka. k-space) coefficients are sampled along smooth trajectories (phase
encodes), many of which are needed for reconstructions of sufficient quality [17, 1]. Long scan
times lead to patient annoyance, grave errors due to movement, and high running costs. The Nyquist
sampling theorem [2] fundamentally limits traditional linear image reconstruction, but with modern
3D MRI scenarios, dense sampling is not practical anymore. Acquisition is accelerated to some
extent in parallel MRI1 , by using receive coil arrays [19, 9]: the sensitivity profiles of different
coils provide part of the localization normally done by more phase steps. A different idea is to use
(nonlinear) sparse image reconstruction, with which the Nyquist limit can be undercut robustly for
images, emphasized recently as compressed sensing [5, 3]. While sparse reconstruction has been
used for MRI [28, 12], we address the more fundamental question of how to optimize the sampling
design for sparse reconstruction over a specific real-world signal class (MR images) in an adaptive
manner, avoiding strong assumptions such as exact, randomly distributed sparsity that do not hold
for real images [23]. Our approach is in line with recent endeavours to extend MRI capabilities
and reduce its cost, by complementing expensive, serial hardware with easily parallelizable digital
computations.
We extend the framework of [24], the first approximate Bayesian method for MRI sampling optimization applicable at resolutions of clinical interest. Their approach falls short of real MRI practice
on a number of points. They considered single image slices only, while stacks2 of neighbouring
1 While parallel MRI is becoming the standard, its use is not straightforward. The sensitivity maps are
unknown up front, depend partly on what is scanned, and their reliable estimation can be difficult.
2 "Stack-of-slices" acquisition along the z axis works by transmitting a narrow-band excitation pulse while
applying a magnetic field gradient linear in z. If the echo time (between excitation and readout) is shorter than
slices are typically acquired. Reconstruction can be improved significantly by taking the strong
statistical dependence between pixels of nearby slices into account [14, 26, 18]. Design optimization is a joint problem as well: using the same acquisition pattern for neighbouring slices is clearly
redundant. Second, the latent image was modelled as real-valued in [24], while in reality it is a
complex-valued signal. To our knowledge, the few directly comparable approaches rely on "trial-and-error" exploration [12, 16, 27], requiring substantially more human expert interventions and real
MRI measurements, whose high costs our goal-directed method aims to minimize.
Our extension to stacks of slices requires new technology. Global Gaussian covariances have to
be approximated, a straightforward extension of which to many slices is out of the question. We
show how to use approximate Kalman smoothing, implementing message passing by the Lanczos
algorithm, which has not been done in machine learning before (see [20, 25] for similar proposals
for oceanography problems). Our technique is complementary to mean field variational inference
approximations ("variational Bayes"), where most correlations are ruled out a priori. We track
the dominating posterior covariance directions inside our method, allowing them to change during
optimization. While our double loop approach may be technically more demanding to implement,
relaxation as well as algorithm are characterized much better (convex problem; algorithm reducing to
standard computational primitives), running orders of magnitude faster. Beyond MRI, applications
could be to Bayesian inference over video streams, or to computational photography [11]. Our
approach is parallelizable on several levels. This property is essential to even start projecting such
applications: on the scale demanded by modern MRI applications, with practitioners being used to
view images directly after acquisition, little else but highly parallelizable approaches are viable.
Large scale variational inference is reviewed and extended to complex-valued data in Section 2,
lifted to non-Gaussian linear dynamical systems in Section 3, and the experimental design extension
is given in Section 4. Results of a preliminary study on data from a Siemens 3T scanner are provided
in Section 5, using a serial implementation.
2 Large Scale Sparse Inference
Our motivation is to improve MR image reconstruction, not by finding a better estimation technique,
but by sampling data more economically. A latent MR image slice u ∈ C^n (n pixels) is measured
by a design matrix X ∈ C^{m×n}: y = Xu + ε (ε ∼ N(0, σ²I) models noise). For Cartesian
MRI, X = I_{S,·} F_n, with F_n the 2D fast Fourier transform and S ⊂ {1, . . . , n} the sampling pattern (which
partitions into complete columns or rows: phase encodes, the atomic units of the design). Sparse
reconstruction works by encoding super-Gaussian image statistics in a non-Gaussian prior, then
finding the posterior mode (MAP estimation): a convex quadratic program for the model employed
here. To improve the measurement design X itself, posterior information beyond (and independent
of) its mode is required, chiefly posterior covariances.
We briefly review [24], extending it to complex-valued u. The super-Gaussian image prior P(u) is
adapted by placing potentials on absolute values |s_j|, so that the posterior has the form
\[
P(u \mid y) \propto N(y \mid Xu, \sigma^2 I)\, \prod_{j=1}^{q} e^{-\tau_j |s_j/\sigma|}, \qquad s = Bu \in \mathbb{C}^q.
\]
Here, B is a sparsity transform [24]. We use the C → R² embedding, s = (s_j), s_j ∈ R²,
and norm potentials e^{−τ_j ‖s_j/σ‖}. Two main ideas lead to [24]. First, inference is relaxed to an
optimization problem by lower-bounding the log partition function [7] (intuitively, each Laplace
potential e^{−τ_j ‖s_j/σ‖} is lower-bounded by a Gaussian-form potential of variance γ_j > 0), leading to
\[
\phi(\gamma) = \log|A| + h(\gamma) + \min_u R(u, \gamma), \quad R := \sigma^{-2}\|y - Xu\|^2 + s^T \Gamma^{-1} s, \quad \gamma = (\gamma_j), \tag{1}
\]
with h(γ) = (τ²)^T γ. This procedure implies a Gaussian approximation Q(u|y) = N(u|u⋆, σ² A^{−1})
to P(u|y), with A = X^H X + B^T Γ^{−1} B and u⋆ = u⋆(γ). The complex extension is formally
similar to [24] (the π there is γ^{−1} here): Γ := (diag γ) ⊗ I₂ = diag((γ₁, γ₁, γ₂, γ₂, . . .)^T), B := B_orig ⊗ I₂,
with B_orig the real-valued sparsity transform. Q(u|y) is fitted to P(u|y) by min_{γ≻0} φ: a convex problem
[24]. Used within an automatic decision architecture, convexity and robustness of inference become
assets that are more important than a smaller bias obtained after a lot of human expert attention.
the repeat time (between phase encodes), several slices are acquired in an interleaved fashion, separated by
slice gaps to avoid crosstalk [17].
Second, φ(γ) can be minimized very efficiently by a double loop algorithm [24]. The computationally intensive log|A| term is concave in γ^{−1}. Upper-bounding it tangentially by the affine
z^T(γ^{−1}) − g*(z) at outer loop (OL) update points, the resulting φ_z ≥ φ decouples and is minimized much more efficiently in inner loops (ILs). min_{γ≻0} φ_z leaves us with
\[
\min_u \phi_z(u) = \sigma^{-2}\|y - Xu\|^2 + 2\sum_j h^*_j(|s_j|), \quad h^*_j(|s_j|) := \tau_j\left(z_j + (|s_j|/\sigma)^2\right)^{1/2}, \tag{2}
\]
a penalized least squares problem. At convergence, u⋆ = E_Q[u|y] and γ_j ← (z_j + |s⋆,j/σ|²)^{1/2}/τ_j.
We can use iteratively reweighted least squares (IRLS), each step of which needs a linear system of the structure of A to be solved. Refitting z (OL updates) is much harder: z ←
(I ⊗ 1^T) diag^{−1}(B A^{−1} B^T) = (I ⊗ 1^T)(σ^{−2} Var_Q[s_j|y]). In terms of Gaussian (Markov) random
fields, the inner optimization needs posterior mean computations only, while OL updates require
bulk Gaussian variances [21, 15]. The reason why the double loop algorithm is much faster than
previous approaches is that only few variance computations are required. The extension to complex-valued u is non-trivial only when it comes to IRLS search direction computations (see Appendix).
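For concreteness, here is a dense, real-valued sketch of one way to run the inner loop of (2) by IRLS: each sweep majorizes the penalty by a quadratic at the current s and solves the resulting least squares system. In the MRI setting X and B are fast operators (FFT, wavelets) and the solve would use conjugate gradients; the dense solve below is only for clarity.

```python
import numpy as np

def inner_loop_irls(X, B, y, tau, z, sigma, n_sweeps=30):
    """Sketch of min_u sigma^-2 ||y - Xu||^2 + 2 sum_j tau_j sqrt(z_j + (s_j/sigma)^2),
    s = Bu (Eq. 2), via iteratively reweighted least squares."""
    u = np.zeros(X.shape[1])
    for _ in range(n_sweeps):
        s = B @ u
        # weights of the quadratic majorizer of the penalty at the current s
        w = tau / (sigma**2 * np.sqrt(z + (s / sigma) ** 2))
        A = X.T @ X / sigma**2 + B.T @ (w[:, None] * B)
        u = np.linalg.solve(A, X.T @ y / sigma**2)
    return u
```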
Given multi-slice data (X_t, y_t), t = 1, . . . , T, we can use an undirected hidden Markov model
over image slices u = (u_t) ∈ C^{nT}. By the stack-of-slices methodology, the likelihood potentials P(y_t|u_t) are independent, and P(u_t) from above serves as single-node potential, based on
s_t = B u_t. If s_{t∆} := u_t − u_{t+1}, the dependence between neighbouring slices is captured by
additional Laplace coupling potentials ∏_{i=1}^n e^{−τ_{c,i}|(s_{t∆})_i/σ|}. The variational parameters γ_t at each
node are complemented by coupling parameters γ_{t∆} ∈ R^n_+. The Gaussian Q(u|y), y = (y_t),
has the same form as above with a huge A ∈ C^{nT×nT}. Inheriting the Markov structure, it is a
Gaussian linear dynamical system (LDS) with very high-dimensional states. What will an efficient
extension of the double loop algorithm look like? The IL criterion φ_z should be coupled between
neighbouring slices, by way of potentials on s_{t∆}. OL updates are more difficult to lift: we have to
approximate marginal variances in a Gaussian LDS. We will do this by Kalman smoothing, approximating inversion in message computations (conversion from natural to moment parameters) by the
Lanczos algorithm.
The central role of Gaussian covariance for approximating non-Gaussian posteriors has not been
emphasized much in machine learning, where if Bayesian computations are intractable, simpler
?variational Bayesian? concepts are routinely used, imposing factorization constraints on the posterior up front. While such constraints can be adjusted in light of the data, this is difficult and typically
not done. Factorization assumptions are a double-edged sword: they radically simplify implementations, but result in non-convex algorithms, and half of the problem is left undone. Our approach
offers an alternative: by using Lanczos on Q(u|y), we retain precisely the maximum-covariance
directions of intermediate fits to the posterior, without running into combinatorial or non-convex
problems. Finally, we place more varied sparsity penalties on the in-plane dimensions [24] than on
the third one. This is justified by voxels typically being larger and spaced with a gap in the third
dimension, with partial volume effects reducing sparsity. Moreover, a non-local sparsity transform
along the third dimension would destroy the Markovian structure essential for efficient computation.
3 Approximate Inference over Multiple Slices
We aim to extend the single slice method of [24] to the hidden Markov extension, thereby reusing
code whenever possible. The variational criterion is (1) with
\[
h(\gamma) = \sum_t h_t(\gamma_t) + I_{\{t<T\}} h_{t\Delta}(\gamma_{t\Delta}), \quad R = \sum_t R_t + I_{\{t<T\}} R_{t\Delta}, \quad \Gamma_{t\Delta} := (\operatorname{diag} \gamma_{t\Delta}) \otimes I_2,
\]
\[
R_t = \sigma^{-2}\|y_t - X_t u_t\|^2 + s_t^T \Gamma_t^{-1} s_t, \qquad R_{t\Delta} = \sigma^{-2} s_{t\Delta}^T \Gamma_{t\Delta}^{-1} s_{t\Delta}.
\]
The coupling term log|A| is upper-bounded (φ → φ_z), so that the IL criterion φ_z(u) is the sum
of terms φ_{t,z_t}(u_t), φ_{t∆,z_{t∆}}(s_{t∆}). Problems of the form min_u φ_z, jointly convex with couplings
between neighbours, are routinely addressed in parallel convex optimization. In order to update u_t,
we consider its neighbours u_{t−1}, u_{t+1} fixed, massaging φ_{t,z_t}(u_t) + φ_{(t−1)∆,z_{(t−1)∆}}(s_{(t−1)∆}) +
φ_{t∆,z_{t∆}}(s_{t∆}) into the form of [24]: B̃ = (B^T, I, I)^T, s̃ = (s_t^T, (u_t − u_{t−1})^T, (u_t − u_{t+1})^T)^T,
ũ = u_t. These updates can be run asynchronously in parallel, sending u_t to neighbours after every
few IRLS steps.
For OL updates, we have to compute z_t = σ^{−2}(I ⊗ 1^T) Var_Q[s_t|y] and z_{t∆} = σ^{−2}(I ⊗
1^T) Var_Q[s_{t∆}|y], where Q(u|y) is a Gaussian LDS (fixed γ). To output a global criterion
value, an estimate of log|A| is required as well. We use the two-filter Kalman information
smoother, which entails passing Gaussian-form messages along the chain in both directions. Once
all messages are available, marginal (co)variances are computed at each node in parallel. Shift
Q(u|y) to zero mean (E_Q[u|y] = u⋆ is found in the IL). Denoting N^U(A) = N^U(u|A) :=
e^{−(1/2)σ^{−2} u^T A u}, Q(u|y) consists of single node potentials Φ_t(u_t) = N^U(A_t) and pair potentials Φ_{t∆}(s_{t∆}) = N^U(Γ_{t∆}^{−1}), where A_t := X_t^H X_t + B^T Γ_t^{−1} B. Defining messages
M_{t→}(u_t) = N^U(Ã_{t→}), M_{←t}(u_t) = N^U(Ã_{←t}), the usual message propagation equation is
M_{t→}(u_t) ∝ ∫ M_{(t−1)→}(u_{t−1}) Φ_{(t−1)∆}(s_{(t−1)∆}) du_{t−1} · Φ_t(u_t), so that
\[
\tilde{A}_{t\to} = A_t + M(\tilde{A}_{(t-1)\to}, \Gamma_{(t-1)\Delta}), \qquad M(\tilde{A}, \Gamma) := \Gamma^{-1} - \Gamma^{-1}(\tilde{A} + \Gamma^{-1})^{-1}\Gamma^{-1}. \tag{3}
\]
In the same way, Ã_{←t} = A_t + M(Ã_{←(t+1)}, Γ_{t∆}). Denote M_{t→} := M(Ã_{t→}, Γ_{t∆}), M_{←t} :=
M(Ã_{←t}, Γ_{(t−1)∆}). Once all messages have been computed, the node marginal Q(u_t|y) has
precision matrix Â_t := A_t + M_{(t−1)→} + M_{←(t+1)}. If Ψ := (δ₁ − δ₂) ⊗ I, the precision
matrix of Q(u_t, u_{t+1}|y) is diag(Ã_{t→}, Ã_{←(t+1)}) + Ψ Γ_{t∆}^{−1} Ψ^T, and s_{t∆} = Ψ^T (u_t^T, u_{t+1}^T)^T.
Cov_Q[s_{t∆}|y] can be written in terms of Â_{t+1}^{−1} and M_{t→}. Finally, by tracking normalization constants: log|A| = Σ_{t<t̃} log|Ã_{t→} + Γ_{t∆}^{−1}| + Σ_{t>t̃} log|Ã_{←t} + Γ_{(t−1)∆}^{−1}| + log|Â_{t̃}| for any t̃. In
practice, we average over t̃.
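A dense sketch of the message update of Eq. (3) and of one forward filter pass; this is exact but forms n × n matrices, whereas the low-rank Lanczos representation of the Appendix avoids ever doing so.

```python
import numpy as np

def message_update(A_in, gamma):
    """M(A~, Gamma) = Gamma^-1 - Gamma^-1 (A~ + Gamma^-1)^-1 Gamma^-1,
    with Gamma = diag(gamma) (Eq. 3)."""
    Gi = np.diag(1.0 / gamma)
    return Gi - Gi @ np.linalg.solve(A_in + Gi, Gi)

def forward_pass(A_nodes, gamma_pairs):
    """A~_{t->} = A_t + M(A~_{(t-1)->}, Gamma_{(t-1)Delta}); the backward
    pass is symmetric and can run in parallel with this one."""
    msgs = [A_nodes[0]]
    for A_t, g in zip(A_nodes[1:], gamma_pairs):
        msgs.append(A_t + message_update(msgs[-1], g))
    return msgs
```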
Algorithm 1 Double loop variational inference algorithm
repeat
  if first iteration then
    Default-initialize z ← 1, u = 0.
  else
    Run Kalman smoothing to determine M_{t→}, and (in parallel) M_{←t}.
    Determine node variances z_t, pair variances z_{t∆}, and log|A| from messages. Refit upper
    bound φ_z to φ (tangent at γ). Initialize u = u⋆ (previous solution).
  end if
  repeat
    Distributed IRLS to minimize φ_z w.r.t. u.
    Each local update of u_t entails solving a linear system (conjugate gradients).
  until u⋆ = argmin_u φ_z converged
  Update γ_j = (z_j + |s⋆,j/σ|²)^{1/2}/τ_j.
until outer loop converged
For reconstruction, we run parallel MAP estimation. Following [12], we smooth out the nondifferentiable l1 penalty by |s_j/σ| → (ε + |s_j/σ|²)^{1/2} for very small ε > 0, then use nonlinear conjugate
gradients with Armijo line search. Nodes return with ∇_{u_t} φ_z at the line minimum u_t, and the next search
direction is centrally determined and distributed (only a scalar has to be transferred). This is not the
same as centralized CG: line searches are distributed and not done on the global criterion.
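A sketch of the smoothed MAP criterion and its gradient, suitable for plugging into any nonlinear CG routine (e.g. scipy.optimize.minimize with method='CG' and jac=True); real-valued and dense for brevity.

```python
import numpy as np

def map_objective(u, X, B, y, tau, sigma, eps=1e-6):
    """sigma^-2 ||y - Xu||^2 + 2 sum_j tau_j sqrt(eps + (s_j/sigma)^2),
    s = Bu, with the l1 kink smoothed by eps; returns (value, gradient)."""
    r = X @ u - y
    s = B @ u
    root = np.sqrt(eps + (s / sigma) ** 2)
    f = (r @ r) / sigma**2 + 2 * np.sum(tau * root)
    g = 2 * (X.T @ r) / sigma**2 + B.T @ (2 * tau * s / (sigma**2 * root))
    return f, g
```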
We briefly comment on how to approximate Kalman message passing by way of the Lanczos algorithm [8]; full details are given in [22]. Gaussian (Markov) random field practitioners will appreciate the difficulties: there is no locally connected MRF structure, and the Q(u|y) are highly nonstationary, being fitted to a posterior with non-Gaussian statistics (edges in the image, etc.). Message
passing requires the inversion of a precision matrix A. The idea behind Lanczos approximations is
PCA: if A ≈ U Λ U^T, Λ the l ≪ n smallest eigenvalues, then U Λ^{−1} U^T is the PCA approximation of
A^{−1}. With matrices A of certain spectral decay, this representation can be approximated by Lanczos (see [24, 22] for details). For a low rank PCA approximation of Ã_{t→}, M_{t→} has the same rank
(see Appendix), which allows us to run Gaussian message passing tractably. In a parallel implementation, the forward and backward filter passes run in parallel, passing low rank messages (the rank k_m
of these should be smaller than the rank k_c for subsequent marginal covariance computations). On
a lower level, both matrix-vector multiplications with X_t (FFT) and the reorthogonalizations required
during the Lanczos algorithm can easily be parallelized on commodity graphics hardware.
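A sketch of the Lanczos variance approximation: compute the k smallest eigenpairs of the precision matrix (the largest of A^{-1}) and assemble the PCA estimate of diag(A^{-1}). The eigsh call below is a stand-in for the plain Lanczos recursion actually used; as analyzed in Section 5.1, such estimates systematically underestimate the true variances.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

def lanczos_variances(A_matvec, n, k):
    """Approximate marginal variances diag(A^-1) from the k smallest
    eigenpairs of A: diag(A^-1) ~= sum_i u_i^2 / lambda_i."""
    A = LinearOperator((n, n), matvec=A_matvec)
    lam, U = eigsh(A, k=k, which='SA')   # Lanczos-based, smallest algebraic
    return (U ** 2 / lam).sum(axis=1)    # underestimates the true variances
```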
4 Sampling Optimization by Bayesian Experimental Design
With our multi-slice variational inference algorithm in place, we address sampling optimization
by Bayesian sequential experimental design, following [24]. At slice t, the information gain score
∆(X∗) := log|I + X∗ Cov_Q[u_t|y] X∗^T| is computed for a fixed number of phase encode candidates
X∗ ∈ C^{d×n} not yet in X_t, the score maximizer is appended, and a novel measurement is acquired
(for the maximizer only). ∆(X∗) depends primarily on the marginal posterior covariance matrix
Cov_Q[u_t|y], computed by Gaussian message passing just as the variances in OL updates above (while a
single value ∆(X∗) can be estimated more efficiently, the dominating eigendirections of the global
covariance matrix seem necessary to approximate many score values for different candidates X∗).
Once messages have been passed, scores can be computed in parallel at different nodes. A purely
sequential approach, extending one design X_t by one encode in each round, is not tractable. In
practice, we extend several node designs X_t in each round (a fixed subset C_it ⊂ {1, . . . , T}; "it" the
round number). Typically, C_it repeats cyclically. This is approximate, since candidates are scored
independently at each node. Certainly, C_it should not contain neighbouring nodes. In the interleaved
stack-of-slices methodology, scan time is determined by the largest factor X_t (number of rows), so
we strive for balanced designs here.
To sum up, our adaptive design optimization algorithm starts with an initial variational inference
phase for a start-up design (low frequencies only), then runs through a fixed number of design
rounds. Each round starts with Gaussian message passing, based on which scores are computed at
nodes t ∈ C_it, new measurements are acquired, and designs X_t are extended. Finally, variational
inference is run for the extended model, using a small number of OL iterations (only one in our
experiments). Time can be saved by basing the first OL update on the same messages and node
marginal covariances as the design score computations (neglecting their change through new phase
encodes).
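With the dominating covariance directions from Lanczos collected as Cov_Q[u_t|y] ≈ V V^T, the information gain score of a candidate encode can be evaluated cheaply; a sketch:

```python
import numpy as np

def information_gain(X_cand, V):
    """Delta(X*) = log|I + X* Cov X*^T| with the marginal covariance
    represented by its dominating directions, Cov ~= V V^T (V: n x k).
    Candidates can be scored independently, hence in parallel."""
    M = X_cand @ V                 # d x k
    d = X_cand.shape[0]
    sign, logdet = np.linalg.slogdet(np.eye(d) + M @ M.conj().T)
    return logdet

# one design round: append the maximizer among the remaining candidates
# best = max(candidates, key=lambda Xc: information_gain(Xc, V))
```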
5 Experiments
We present experimental results, comparing designs found by our Bayesian joint design optimization
method against alternative choices on real MRI data. We use the model of Section 2, with the
prior previously used in [24] (potentials of strength τ_a on wavelet coefficients, of strength τ_r on
Cartesian finite differences). While the MRI signal u is complex-valued, phase contributions are
mostly erroneous, and reconstruction as well as design optimization are improved by multiplying
a further term ∏_i e^{−(τ_i/σ)|ℑ(u_i)|} into each single node prior potential, easily incorporated into the
generic setup by appending I ⊗ δ₂^T to B. We focus on Cartesian MRI (phase encodes are complete
columns in k-space; see footnote 3): a more clinically relevant setting than the spiral sampling treated in [24].
We use data of resolution 64 × 64 (in-plane) to test our approach with a serial implementation. While
this is not a resolution of clinical relevance, a truly parallel implementation is required in order to
run our method at resolutions 256 × 256 or beyond: an important point for future work.
5.1 Quality of Lanczos Variance Approximations
We begin with experiments to analyze the errors in Lanczos variance approximations. Recall from
[24] that variances are underestimated. We work with a single slice of resolution 64 × 64, using
a design X of 30 phase encodes, running a single common OL iteration (default-initialized z),
comparing different ways of continuing from there: exact z computations (Cholesky decomposition
of A) versus Lanczos approximations with different numbers of steps k. Results are in Figure 1.
While the relative approximation errors are uniformly rather large, there is a clear structure to them:
the largest (and also the very smallest) true values zj are approximated significantly more accurately
than smaller true values. This structure can be used to motivate why, in the presence of large errors
over all coefficients, our inference still works well for sparse linear models, indeed in some cases better than if exact computations are used (Figure 1, upper right). The spectrum of A shows a roughly
linear decay, so that the largest and smallest eigenvalues (and eigenvectors) are well-approximated
3 Our data are sagittal head scans, where the frequency encode direction (along which oversampling is
possible at no extra cost) is typically chosen vertically (the longer anatomic axis).
[Figure 1 plots: the spectrum of A; the l2 reconstruction error over outer loop iterations 1–7 for exact variance computation and Lanczos with k = 100, 500, 750, 1500 steps.]
Figure 1: Lanczos approximations of Gaussian variances, at beginning of second OL iteration, 64 × 64
data (upper left). Spectral decay of inverse covariance matrix A roughly linear (upper middle). l2 reconstruction error of posterior mean estimate after subsequent OL iterations, for exact variance computation vs.
k = 250, 500, 750, 1500 Lanczos steps (upper right). Lower panel: relative accuracy z_j ↦ z_{k,j}/z_j at beginning of second OL iteration, separately for "a" sites (on wavelet coefficients; red), "r" sites (on derivatives;
blue), and "i" sites (on ℑ(u); green).
by Lanczos, while the middle part of the spectrum is not penetrated. Contributions to the largest
values z_j come dominantly from small eigenvalues (large eigenvalues of A^{−1}), explaining their
smaller relative error. On the other hand, smaller values z_j are strongly underestimated (z_{k,j} ≪ z_j),
which means that the selective shrinkage effect underlying sparse linear models (shrink most coefficients strongly, but some not at all) is strengthened by these systematic errors. Finally, the IL
penalties are τ_j(z_j + |s_j/σ|²)^{1/2}, enforcing sparsity more strongly for smaller z_j. Therefore, Lanczos approximation errors lead to strengthened sparsity in subsequent ILs, but least so for sites with
the largest true z_j. As an educated guess, this effect might even compensate for the fact that Laplace
potentials may not be sparse enough for natural images.
5.2 Joint Design Optimization
We use sagittal head scan data of resolution 64 × 64 in-plane, 32 slices, acquired on a Siemens
3T scanner (phase direction anterior–posterior); see [22] for further details. We consider joint and
independent MAP reconstruction (for the latter, we run nonlinear CG separately for each slice), for
a number of different design choices: {X_t} optimized jointly by our method here [op-jt]; each
X_t optimized separately, by running the complex variant of [24] on slice u_t [op-sp]; X_t = X for
all t, with X optimized on the most detailed slice (number 16, Figure 2, row 2 middle) [op-eq];
and encodes of each X_t drawn at random, from the density proposed in [12] [rd], respecting the
typical spectral decay of images [4] (all designs contain the 8 lowest-frequency encodes). Results
for rd are averaged over ten repetitions. For all setups but op-eq, the X_t are different across t.
Hyperparameters are adjusted based on MAP reconstruction results for a fixed design picked ad hoc
(τ_a = τ_r = 0.01, τ_i = 0.1 in-plane; τ_c = 0.08 between slices), then used for all design optimization
and MAP reconstruction runs. We run the op-jt optimization with an odd-even schedule {C_it} (all
odd (even) t ∈ {0, . . . , T − 1} for odd (even) "it"); results for two other schedules of period four come
out very similar, but require more running time. For variational inference, we run 6 OL iterations
in the initial phase and 1 OL iteration in each design round, with up to 30 IL steps (ILs in design rounds
typically converged in 2–3 steps). The rank parameters (number of Lanczos steps; see footnote 4) were k_m = 100,
k_c = 250 (here, u_t has ñ = 8192 real coefficients). Results are given in Figure 2.
First, across all designs, joint MAP reconstruction improves significantly upon independent MAP
reconstruction. This improvement is strongest by far for op-jt (see Figure 2, rows 3–4), which
for joint reconstruction improves on all other variants significantly, especially with 16–30 phase
4 We repeated op-jt partly with k_m = 250, with very similar MAP reconstruction errors for the final
designs, but significantly longer run time.
[Figure 2 top-row plots: l2 reconstruction error vs. number of phase encodes (10–45), for op-jt, op-sp, op-eq, and rd(avg); left panel joint, right panel independent MAP reconstruction.]
Figure 2: Top row: l2 reconstruction errors ‖ |û_MAP| − |u_true| ‖ of MAP reconstruction for different measurement designs. Left: joint MAP reconstruction; right: independent MAP reconstruction of each slice. op-jt:
{X_t} optimized jointly; op-sp: X_t optimized separately for each slice; op-eq: X_t = X, optimized on
slice 16; rd: X_t variable density drawn at random (averaged over 10 repetitions).
Rows 2–4: Images for op-jt (25 encodes), slices 15–17. Row 2: true images (range 0–0.35). Row 3: errors
joint MAP. Row 4: errors indep. MAP (range 0–0.08).
encodes, where scan time is reduced by a factor 2–4 (Nyquist sampling requires 64 phase encodes).
op-eq does worst in this domain: with a model of dependencies between slices in place, it pays
off to choose a different X_t for each slice. rd does best from about 35 phase encodes on. While
this suboptimal behaviour of our optimization will be analyzed more closely in future work, it is our
experience so far that the gain in using greedy sequential Bayesian design optimization over simpler
choices is largest below 1/2 Nyquist.
6 Conclusions
We showed how to implement MRI sampling optimization by Bayesian sequential experimental
design, jointly over a stack of neighbouring slices, extending the single slice technique of [24].
Restricting ourselves to undersampling of Cartesian encodes, our method can be applied in practice whenever dense Cartesian sampling is well under control (sequence modification is limited to
skipping encodes). We exploit the hidden Markov structure of the model by way of a Lanczos
approximation of Kalman smoothing. While the latter has been proposed for spatial statistics applications [20, 25], it has not been used for non-Gaussian approximate inference before, nor in the
context of sparsity-favouring image models or non-linear experimental design. Our method is a general alternative to structured variational mean field approximations typically used for non-Gaussian
dynamical systems, in that dominating covariances are tracked a posteriori, rather than eliminating
most of them a priori through factorization assumptions. In a first study, we obtain encouraging
results in the range below 1/2 Nyquist. In future work, we will develop a truly parallel implementation, with which higher resolutions can be processed. We are considering extensions of our design
optimization technology to 3D MRI5 and to parallel MRI with receiver coil arrays [19, 9], whose
combination with k-space undersampling can be substantially more powerful than each acceleration
technique on its own [13].
Appendix
For norm potentials, h*_j(s_j) = h*_j(‖s_j‖), and the Hessians to be solved for IRLS Newton directions
no longer have the form of A. In order to understand this, note that we do not use complex
calculus here: s ↦ |s| is not complex differentiable at any s ∈ C. Rather, we use the C → R²
embedding, then standard real-valued optimization for variables of twice the size. If β_j :=
(h*_j)′ and θ_j := (h*_j)″ at ‖s_j‖ ≠ 0, then using ∇_{s_j}‖s_j‖ = s_j/‖s_j‖, we have ∇∇_{s_j} h*_j = θ_j I₂ + ρ_j²(‖s_j‖² I₂ −
s_j s_j^T), ρ_j := (β_j/‖s_j‖ − θ_j)^{1/2}/‖s_j‖. Since ‖s_j‖² I₂ − s_j s_j^T = (Ψ s_j)(Ψ s_j)^T, Ψ := δ₂δ₁^T − δ₁δ₂^T,
the Hessian is X^H X + B^T H^{(s)} B. If s̃ := ((diag ρ) ⊗ Ψ)s, then for any v ∈ R^{2q}: H^{(s)} v =
((diag θ) ⊗ I₂)v + ((diag w) ⊗ I₂)s̃, where w_j := v_j^T s̃_j, j = 1, . . . , q, which shows how to compute
Hessian matrix-vector multiplications, and thus how to implement IRLS steps in the complex-valued case.
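The Hessian MVM just derived is easy to vectorize; a sketch with s and v stored as (q, 2) arrays (theta and rho being the per-site derivative quantities defined above):

```python
import numpy as np

def hessian_mvm(v, s, theta, rho):
    """H^(s) v for norm potentials: per 2-vector block j,
    H_j = theta_j I_2 + (rho_j Psi s_j)(rho_j Psi s_j)^T,
    with Psi the 90-degree rotation [[0, -1], [1, 0]]."""
    s_tilde = rho[:, None] * np.stack([-s[:, 1], s[:, 0]], axis=1)  # ((diag rho) (x) Psi) s
    w = np.sum(v * s_tilde, axis=1)                                 # w_j = v_j^T s~_j
    return theta[:, None] * v + w[:, None] * s_tilde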
Recall that messages are passed, alternating between Ã_{t→} and M_{t→} matrices. For a PCA approximation Ã_{t→} ≈ Q_{t→} T_{t→} Q_{t→}^T, Q_{t→} ∈ R^{ñ×k_m} orthonormal, T_{t→} tridiagonal (obtained by running
k_m Lanczos steps for Ã_{t→}), low rank algebra gives
\[
M_{t\to} = M(\tilde{A}_{t\to}, \Gamma_{t\Delta}) = Q_{t\to}\left(T_{t\to}^{-1} + Q_{t\to}^T \Gamma_{t\Delta} Q_{t\to}\right)^{-1} Q_{t\to}^T = V_{t\to} V_{t\to}^T, \quad V_{t\to} \in \mathbb{R}^{\tilde{n}\times k_m},
\]
computed in O(ñ k_m²) by way of a Cholesky decomposition. Now, Ã_{(t+1)→} = A_{t+1} + V_{t→} V_{t→}^T
becomes the precision matrix for the next Lanczos run: MVMs have an additional complexity of
O(ñ k_m). Given all messages, node covariances are PCA-approximated by running Lanczos on
A_t + V_{(t−1)→} V_{(t−1)→}^T + V_{←(t+1)} V_{←(t+1)}^T for k_c iterations. Pair variances Var_Q[s_{t∆}|y] are estimated by running Lanczos on vectors of size 2ñ (say for k_c/2 iterations; the precision matrix is
given in Section 3). More details are given in [22].
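The low-rank message algebra can be sketched directly from the formula above: with Ã ≈ Q T Q^T from Lanczos, M(Ã, Γ) collapses to V V^T by the Woodbury identity, and V is obtained from a small k × k Cholesky factorization.

```python
import numpy as np

def low_rank_message(Q, T, gamma):
    """Given A~ ~= Q T Q^T (Q: n x k orthonormal, T tridiagonal), return V
    with M(A~, Gamma) = Q (T^-1 + Q^T Gamma Q)^-1 Q^T = V V^T."""
    inner = np.linalg.inv(T) + Q.T @ (gamma[:, None] * Q)   # k x k
    L = np.linalg.cholesky(inner)
    return np.linalg.solve(L, Q.T).T                        # V = Q L^-T
```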
Acknowledgments
This work is partly funded by the Excellence Initiative of the German research foundation (DFG). It is part of
an ongoing collaboration with Rolf Pohmann, Hannes Nickisch and Bernhard Schölkopf, MPI for Biological
Cybernetics, Tübingen, where data for this study has been acquired.
⁵ In 3D MRI, image volumes are acquired without slice selection, using phase encoding along two dimensions. There are no unmeasured slice gaps and voxels are isotropic, but scan time is much longer.
References
[1] M. A. Bernstein, K. F. King, and X. J. Zhou. Handbook of MRI Pulse Sequences. Elsevier Academic Press, 1st edition, 2004.
[2] R. Bracewell. The Fourier Transform and Its Applications. McGraw-Hill, 3rd edition, 1999.
[3] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theo., 52(2):489-509, 2006.
[4] H. Chang, Y. Weiss, and W. Freeman. Informative sensing. Technical Report 0901.4275v1 [cs.IT], ArXiv, 2009.
[5] D. Donoho. Compressed sensing. IEEE Trans. Inf. Theo., 52(4):1289-1306, 2006.
[6] A. Garroway, P. Grannell, and P. Mansfield. Image formation in NMR by a selective irradiative pulse. J. Phys. C: Solid State Phys., 7:L457-L462, 1974.
[7] M. Girolami. A variational method for learning sparse and overcomplete representations. N. Comp., 13:2517-2532, 2001.
[8] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University Press, 3rd edition, 1996.
[9] M. A. Griswold, P. M. Jakob, R. M. Heidemann, M. Nittka, V. Jellus, J. Wang, B. Kiefer, and A. Haase. Generalized autocalibrating partially parallel acquisitions (GRAPPA). Magn. Reson. Med., 47(6):1202-10, 2002.
[10] P. Lauterbur. Image formation by induced local interactions: Examples employing nuclear magnetic resonance. Nature, 242:190-191, 1973.
[11] A. Levin, W. Freeman, and F. Durand. Understanding camera trade-offs through a Bayesian analysis of light field projections. In European Conference on Computer Vision, LNCS 5305, pages 88-101. Springer, 2008.
[12] M. Lustig, D. Donoho, and J. Pauly. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med., 58(6):1182-1195, 2007.
[13] M. Lustig and J. Pauly. SPIR-iT: Iterative self consistent parallel imaging reconstruction from arbitrary k-space. Magn. Reson. Med., 2009. In print.
[14] B. Madore, G. Glover, and N. Pelc. Unaliasing by Fourier-encoding the overlaps using the temporal dimension (UNFOLD), applied to cardiac imaging and fMRI. Magn. Reson. Med., 42:813-828, 1999.
[15] D. Malioutov, J. Johnson, and A. Willsky. Low-rank variance estimation in large-scale GMRF models. In ICASSP, 2006.
[16] G. Marseille, R. de Beer, M. Fuderer, A. Mehlkopf, and D. van Ormondt. Nonuniform phase-encode distributions for MRI scan time reduction. J. Magn. Reson. B, 111(1):70-75, 1996.
[17] D. McRobbie, E. Moore, M. Graves, and M. Prince. MRI: From Picture to Proton. Cambridge University Press, 2nd edition, 2007.
[18] C. Mistretta, O. Wieben, J. Velikina, W. Block, J. Perry, Y. Wu, K. Johnson, and Y. Wu. Highly constrained backprojection for time-resolved MRI. Magn. Reson. Med., 55:30-40, 2006.
[19] K. Pruessmann, M. Weiger, M. Scheidegger, and P. Boesiger. SENSE: Sensitivity encoding for fast MRI. Magn. Reson. Med., 42:952-962, 1999.
[20] M. Schneider and A. Willsky. Krylov subspace algorithms for space-time oceanography data assimilation. In IEEE International Geoscience and Remote Sensing Symposium, 2000.
[21] M. Schneider and A. Willsky. Krylov subspace estimation. SIAM J. Comp., 22(5):1840-1864, 2001.
[22] M. Seeger. Speeding up magnetic resonance image acquisition by Bayesian multi-slice adaptive compressed sensing. Supplemental Appendix, 2010.
[23] M. Seeger and H. Nickisch. Compressed sensing and Bayesian experimental design. In ICML 25, 2008.
[24] M. Seeger, H. Nickisch, R. Pohmann, and B. Schölkopf. Bayesian experimental design of magnetic resonance imaging sequences. In NIPS 21, pages 1441-1448, 2009.
[25] D. Treebushny and H. Madsen. On the construction of a reduced rank square-root Kalman filter for efficient uncertainty propagation. Future Gener. Comput. Syst., 21(7):1047-1055, 2005.
[26] J. Tsao, P. Boesiger, and K. Pruessmann. k-t BLAST and k-t SENSE: Dynamic MRI with high frame rate exploiting spatiotemporal correlations. Magn. Reson. Med., 50:1031-1042, 2003.
[27] F. Wajer. Non-Cartesian MRI Scan Time Reduction through Sparse Sampling. PhD thesis, Delft University of Technology, 2001.
[28] J. Weaver, Y. Xu, D. Healy, and L. Cromwell. Filtering noise from images with wavelet transforms. Magn. Reson. Med., 21(2):288-295, 1991.
Monte Carlo Sampling for Regret Minimization in
Extensive Games
Kevin Waugh
School of Computer Science
Carnegie Mellon University
Pittsburgh PA 15213-3891
[email protected]
Marc Lanctot
Department of Computing Science
University of Alberta
Edmonton, Alberta, Canada T6G 2E8
[email protected]
Martin Zinkevich
Yahoo! Research
Santa Clara, CA, USA 95054
[email protected]
Michael Bowling
Department of Computing Science
University of Alberta
Edmonton, Alberta, Canada T6G 2E8
[email protected]
Abstract
Sequential decision-making with multiple agents and imperfect information is
commonly modeled as an extensive game. One efficient method for computing
Nash equilibria in large, zero-sum, imperfect information games is counterfactual
regret minimization (CFR). In the domain of poker, CFR has proven effective, particularly when using a domain-specific augmentation involving chance outcome
sampling. In this paper, we describe a general family of domain-independent CFR
sample-based algorithms called Monte Carlo counterfactual regret minimization
(MCCFR) of which the original and poker-specific versions are special cases. We
start by showing that MCCFR performs the same regret updates as CFR on expectation. Then, we introduce two sampling schemes: outcome sampling and external
sampling, showing that both have bounded overall regret with high probability.
Thus, they can compute an approximate equilibrium using self-play. Finally, we
prove a new tighter bound on the regret for the original CFR algorithm and relate this new bound to MCCFR?s bounds. We show empirically that, although the
sample-based algorithms require more iterations, their lower cost per iteration can
lead to dramatically faster convergence in various games.
1 Introduction
Extensive games are a powerful model of sequential decision-making with imperfect information,
subsuming finite-horizon MDPs, finite-horizon POMDPs, and perfect information games. The past
few years have seen dramatic algorithmic improvements in solving, i.e., finding an approximate
Nash equilibrium, in two-player, zero-sum extensive games. Multiple techniques [1, 2] now exist
for solving games with up to $10^{12}$ game states, which is about four orders of magnitude larger than
the previous state-of-the-art of using sequence-form linear programs [3].
Counterfactual regret minimization (CFR) [1] is one such recent technique that exploits the fact that
the time-averaged strategy profile of regret minimizing algorithms converges to a Nash equilibrium.
The key insight is the fact that minimizing per-information set counterfactual regret results in minimizing overall regret. However, the vanilla form presented by Zinkevich and colleagues requires
the entire game tree to be traversed on each iteration. It is possible to avoid a full game-tree traversal. In their accompanying technical report, Zinkevich and colleagues discuss a poker-specific CFR
variant that samples chance outcomes on each iteration [4]. They claim that the per-iteration cost
reduction far exceeds the additional number of iterations required, and all of their empirical studies
focus on this variant. The sampling variant and its derived bound are limited to poker-like games
where chance plays a prominent role in the size of the games. This limits the practicality of CFR
minimization outside of its initial application of poker or moderately sized games. An additional
disadvantage of CFR is that it requires the opponent's policy to be known, which makes it unsuitable for online regret minimization in an extensive game. Online regret minimization in extensive
games is possible using online convex programming techniques, such as Lagrangian Hedging [5],
but these techniques can require costly optimization routines at every time step.
In this paper, we present a general framework for sampling in counterfactual regret minimization.
We define a family of Monte Carlo CFR minimizing algorithms (MCCFR), that differ in how they
sample the game tree on each iteration. Zinkevich's vanilla CFR and a generalization of their chance-sampled CFR are both members of this family. We then introduce two additional members of this family: outcome-sampling, where only a single playing of the game is sampled on each iteration; and external-sampling, which samples chance nodes and the opponent's actions. We show that under a reasonable sampling strategy, any member of this family minimizes overall regret, and so can be used for equilibrium computation. Additionally, external-sampling is proven to require only a constant-factor increase in iterations yet achieves an order reduction in the cost per iteration, thus resulting in an asymptotic improvement in equilibrium computation time. Furthermore, since outcome-sampling does not need knowledge of the opponent's strategy beyond samples of play from the strategy, we
describe how it can be used for online regret minimization. We then evaluate these algorithms
empirically by using them to compute approximate equilibria in a variety of games.
2 Background
An extensive game is a general model of sequential decision-making with imperfect information. As
with perfect information games (such as Chess or Checkers), extensive games consist primarily of a
game tree: each non-terminal node has an associated player (possibly chance) that makes the decision at that node, and each terminal node has associated utilities for the players. Additionally, game
states are partitioned into information sets where a player cannot distinguish between two states in
the same information set. The players, therefore, must choose actions with the same distribution at
each state in the same information set. We now define an extensive game formally, introducing the
notation we use throughout the paper.
Definition 1 [6, p. 200] A finite extensive game with imperfect information has the following components:

• A finite set N of players. A finite set H of sequences, the possible histories of actions, such that the empty sequence is in H and every prefix of a sequence in H is also in H. Define $h \sqsubseteq h'$ to mean h is a prefix of h'. $Z \subseteq H$ are the terminal histories (those which are not a prefix of any other sequences). $A(h) = \{a : ha \in H\}$ are the actions available after a non-terminal history, $h \in H \setminus Z$.

• A function P that assigns to each non-terminal history a member of $N \cup \{c\}$. P is the player function. P(h) is the player who takes an action after the history h. If P(h) = c then chance determines the action taken after history h.

• For each player $i \in N \cup \{c\}$, a partition $\mathcal{I}_i$ of $\{h \in H : P(h) = i\}$ with the property that $A(h) = A(h')$ whenever h and h' are in the same member of the partition. For $I_i \in \mathcal{I}_i$ we denote by $A(I_i)$ the set A(h) and by $P(I_i)$ the player P(h) for any $h \in I_i$. $\mathcal{I}_i$ is the information partition of player i; a set $I_i \in \mathcal{I}_i$ is an information set of player i.

• A function $f_c$ that associates with every information set I where P(I) = c a probability measure $f_c(\cdot|I)$ on A(h) ($f_c(a|I)$ is the probability that a occurs given some $h \in I$), where each such probability measure is independent of every other such measure.¹
¹ Traditionally, an information partition is not specified for chance. In fact, as long as the same chance information set cannot be revisited, it has no strategic effect on the game itself. However, this extension allows us to consider using the same sampled chance outcome for an entire set of histories, which is an important part of Zinkevich and colleagues' chance-sampling CFR variant.
• For each player $i \in N$, a utility function $u_i$ from the terminal states Z to the reals $\mathbb{R}$. If $N = \{1, 2\}$ and $u_1 = -u_2$, it is a zero-sum extensive game. Define $\Delta_{u,i} = \max_z u_i(z) - \min_z u_i(z)$ to be the range of utilities to player i.

In this paper, we will only concern ourselves with two-player, zero-sum extensive games. Furthermore, we will assume perfect recall, a restriction on the information partitions such that a player can always distinguish between game states where they previously took a different action or were previously in a different information set.
2.1 Strategies and Equilibria
A strategy of player i, $\sigma_i$, in an extensive game is a function that assigns a distribution over $A(I_i)$ to each $I_i \in \mathcal{I}_i$. We denote by $\Sigma_i$ the set of all strategies for player i. A strategy profile, $\sigma$, consists of a strategy for each player, $\sigma_1, \ldots, \sigma_n$. We let $\sigma_{-i}$ refer to the strategies in $\sigma$ excluding $\sigma_i$.

Let $\pi^\sigma(h)$ be the probability of history h occurring if all players choose actions according to $\sigma$. We can decompose $\pi^\sigma(h) = \prod_{i \in N \cup \{c\}} \pi_i^\sigma(h)$ into each player's contribution to this probability. Here, $\pi_i^\sigma(h)$ is the contribution to this probability from player i when playing according to $\sigma$. Let $\pi_{-i}^\sigma(h)$ be the product of all players' contributions (including chance) except that of player i. For $I \subseteq H$, define $\pi^\sigma(I) = \sum_{h \in I} \pi^\sigma(h)$ as the probability of reaching a particular information set given that all players play according to $\sigma$, with $\pi_i^\sigma(I)$ and $\pi_{-i}^\sigma(I)$ defined similarly. Finally, let $\pi^\sigma(h, z) = \pi^\sigma(z)/\pi^\sigma(h)$ if $h \sqsubseteq z$, and zero otherwise. Let $\pi_i^\sigma(h, z)$ and $\pi_{-i}^\sigma(h, z)$ be defined similarly. Using this notation, we can define the expected payoff for player i as $u_i(\sigma) = \sum_{h \in Z} u_i(h)\,\pi^\sigma(h)$.
Given a strategy profile, $\sigma$, we define a player's best response as a strategy that maximizes their expected payoff assuming all other players play according to $\sigma$. The best-response value for player i is the value of that strategy, $b_i(\sigma_{-i}) = \max_{\sigma_i' \in \Sigma_i} u_i(\sigma_i', \sigma_{-i})$. An $\epsilon$-Nash equilibrium is an approximation of a Nash equilibrium; it is a strategy profile $\sigma$ that satisfies

$$\forall i \in N \qquad u_i(\sigma) + \epsilon \geq \max_{\sigma_i' \in \Sigma_i} u_i(\sigma_i', \sigma_{-i}) \qquad (1)$$

If $\epsilon = 0$ then $\sigma$ is a Nash equilibrium: no player has any incentive to deviate as they are all playing best responses. If a game is two-player and zero-sum, we can use exploitability as a metric for determining how close $\sigma$ is to an equilibrium, $\epsilon_\sigma = b_1(\sigma_2) + b_2(\sigma_1)$.
2.2 Counterfactual Regret Minimization
Regret is an online learning concept that has triggered a family of powerful learning algorithms. To define this concept, first consider repeatedly playing an extensive game. Let $\sigma_i^t$ be the strategy used by player i on round t. The average overall regret of player i at time T is:

$$R_i^T = \frac{1}{T} \max_{\sigma_i^* \in \Sigma_i} \sum_{t=1}^{T} \left( u_i(\sigma_i^*, \sigma_{-i}^t) - u_i(\sigma^t) \right) \qquad (2)$$
Moreover, define $\bar\sigma_i^t$ to be the average strategy for player i from time 1 to T. In particular, for each information set $I \in \mathcal{I}_i$, for each $a \in A(I)$, define:

$$\bar\sigma_i^t(a|I) = \frac{\sum_{t=1}^{T} \pi_i^{\sigma^t}(I)\,\sigma^t(a|I)}{\sum_{t=1}^{T} \pi_i^{\sigma^t}(I)}. \qquad (3)$$
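As a bookkeeping sketch of Equation 3, each information set only needs a running weighted sum of its strategies; the denominator is just the normalization constant. The dictionary interface below is illustrative, not from the paper.

def update_average_strategy(cum_strategy, sigma_t_I, reach_i):
    # Accumulate the Equation (3) numerator at one information set I.
    # cum_strategy: dict action -> running sum of pi_i^{sigma^t}(I) * sigma^t(a|I)
    # sigma_t_I:    dict action -> sigma^t(a|I), the current strategy at I
    # reach_i:      player i's probability of reaching I under sigma^t
    for a, p in sigma_t_I.items():
        cum_strategy[a] = cum_strategy.get(a, 0.0) + reach_i * p

def average_strategy(cum_strategy):
    # Normalize: the Equation (3) denominator is just the total weight.
    total = sum(cum_strategy.values())
    n = len(cum_strategy)
    return {a: (w / total if total > 0 else 1.0 / n)
            for a, w in cum_strategy.items()}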
There is a well-known connection between regret, average strategies, and Nash equilibria.

Theorem 1 In a zero-sum game, if $R_i^T \leq \epsilon$ for $i \in \{1, 2\}$, then $\bar\sigma^T$ is a $2\epsilon$ equilibrium.
An algorithm for selecting $\sigma_i^t$ for player i is regret minimizing if player i's average overall regret (regardless of the sequence $\sigma_{-i}^t$) goes to zero as t goes to infinity. Regret minimizing algorithms in self-play can be used as a technique for computing an approximate Nash equilibrium. Moreover, an algorithm's bounds on the average overall regret bound the convergence rate of the approximation.

Zinkevich and colleagues [1] used the above approach in their counterfactual regret algorithm (CFR). The basic idea of CFR is that overall regret can be bounded by the sum of positive per-information-set immediate counterfactual regrets. Let I be an information set of player i. Define $\sigma_{(I\to a)}$ to be a strategy profile identical to $\sigma$ except that player i always chooses action a from information set I. Let $Z_I$ be the subset of all terminal histories where a prefix of the history is in the set I; for $z \in Z_I$ let z[I] be that prefix. Since we are restricting ourselves to perfect recall games, z[I] is unique. Define the counterfactual value $v_i(\sigma, I)$ as,

$$v_i(\sigma, I) = \sum_{z \in Z_I} \pi_{-i}^{\sigma}(z[I])\, \pi^{\sigma}(z[I], z)\, u_i(z). \qquad (4)$$
The immediate counterfactual regret is then $R_{i,\mathrm{imm}}^T(I) = \max_{a \in A(I)} R_{i,\mathrm{imm}}^T(I, a)$, where

$$R_{i,\mathrm{imm}}^T(I, a) = \frac{1}{T} \sum_{t=1}^{T} \left( v_i(\sigma^t_{(I\to a)}, I) - v_i(\sigma^t, I) \right) \qquad (5)$$
Let $x^+ = \max(x, 0)$. The key insight of CFR is the following result.

Theorem 2 [1, Theorem 3] $R_i^T \leq \sum_{I \in \mathcal{I}_i} R_{i,\mathrm{imm}}^{T,+}(I)$

Using regret-matching² the positive per-information-set immediate counterfactual regrets can be driven to zero, thus driving average overall regret to zero. This results in an average overall regret bound [1, Theorem 4]: $R_i^T \leq \Delta_{u,i}\, |\mathcal{I}_i| \sqrt{|A_i|} / \sqrt{T}$, where $|A_i| = \max_{h : P(h) = i} |A(h)|$. We return to this bound, tightening it further, in Section 4.
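For reference, the regret-matching rule of footnote 2 in code; a minimal sketch with an illustrative dictionary interface.

def regret_matching(cum_regret):
    # Strategy from accumulated counterfactual regrets at one information
    # set: probabilities proportional to positive regret; uniform when no
    # action has positive regret (the standard convention).
    pos = {a: max(r, 0.0) for a, r in cum_regret.items()}
    total = sum(pos.values())
    if total <= 0.0:
        n = len(cum_regret)
        return {a: 1.0 / n for a in cum_regret}
    return {a: r / total for a, r in pos.items()}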
This result suggests an algorithm for computing equilibria via self-play, which we will refer to as vanilla CFR. The idea is to traverse the game tree computing counterfactual values using Equation 4. Given a strategy, these values define regret terms for each player for each of their information sets using Equation 5. These regret values accumulate and determine the strategies at the next iteration using the regret-matching formula. Since both players are regret minimizing, Theorem 1 applies, and so computing the strategy profile $\bar\sigma^t$ gives us an approximate Nash equilibrium. Since CFR only needs to store values at each information set, its space requirement is $O(|\mathcal{I}|)$. However, as previously mentioned, vanilla CFR requires a complete traversal of the game tree on each iteration, which prohibits its use in many large games. Zinkevich and colleagues [4] made steps to alleviate this concern with a chance-sampled variant of CFR for poker-like games.
3 Monte Carlo CFR
The key to our approach is to avoid traversing the entire game tree on each iteration while still having the immediate counterfactual regrets be unchanged in expectation. In general, we want to restrict the terminal histories we consider on each iteration. Let $\mathcal{Q} = \{Q_1, \ldots, Q_r\}$ be a set of subsets of Z, such that their union spans the set Z. We will call one of these subsets a block. On each iteration we will sample one of these blocks and only consider the terminal histories in that block. Let $q_j > 0$ be the probability of considering block $Q_j$ for the current iteration (where $\sum_{j=1}^r q_j = 1$).
Let $q(z) = \sum_{j : z \in Q_j} q_j$, i.e., q(z) is the probability of considering terminal history z on the current iteration. The sampled counterfactual value when updating block j is:

$$\tilde v_i(\sigma, I \,|\, j) = \sum_{z \in Q_j \cap Z_I} \frac{1}{q(z)}\, u_i(z)\, \pi_{-i}^\sigma(z[I])\, \pi^\sigma(z[I], z) \qquad (6)$$

Selecting a set $\mathcal{Q}$ along with the sampling probabilities defines a complete sample-based CFR algorithm. Rather than doing full game-tree traversals, the algorithm samples one of these blocks, and then examines only the terminal histories in that block.
Suppose we choose $\mathcal{Q} = \{Z\}$, i.e., one block containing all terminal histories, and $q_1 = 1$. In this case, sampled counterfactual value is equal to counterfactual value, and we have vanilla CFR. Suppose instead we choose each block to include all terminal histories with the same sequence of chance outcomes (where the probability of a chance outcome is independent of players' actions, as in poker-like games). Hence $q_j$ is the product of the probabilities in the sampled sequence of chance outcomes (which cancels with these same probabilities in the definition of counterfactual value) and we have Zinkevich and colleagues' chance-sampled CFR.

² Regret-matching selects actions with probability proportional to their positive regret, i.e., $\sigma_i^t(a|I) = R_{i,\mathrm{imm}}^{T,+}(I, a) \big/ \sum_{a' \in A(I)} R_{i,\mathrm{imm}}^{T,+}(I, a')$. Regret-matching satisfies Blackwell's approachability criteria [7, 8].
Sampled counterfactual value was designed to match counterfactual value in expectation. We show this here, and then use this fact to prove a probabilistic bound on the algorithm's average overall regret in the next section.
Lemma 1 $E_{j \sim q_j}[\tilde v_i(\sigma, I|j)] = v_i(\sigma, I)$

Proof:
$$E_{j \sim q_j}[\tilde v_i(\sigma, I|j)] = \sum_j q_j\, \tilde v_i(\sigma, I|j) = \sum_j \sum_{z \in Q_j \cap Z_I} \frac{q_j}{q(z)}\, \pi_{-i}^\sigma(z[I])\, \pi^\sigma(z[I], z)\, u_i(z) \qquad (7)$$
$$= \sum_{z \in Z_I} \frac{\sum_{j : z \in Q_j} q_j}{q(z)}\, \pi_{-i}^\sigma(z[I])\, \pi^\sigma(z[I], z)\, u_i(z) \qquad (8)$$
$$= \sum_{z \in Z_I} \pi_{-i}^\sigma(z[I])\, \pi^\sigma(z[I], z)\, u_i(z) = v_i(\sigma, I) \qquad (9)$$

Equation 8 follows from the fact that $\mathcal{Q}$ spans Z. Equation 9 follows from the definition of q(z).
This results in the following MCCFR algorithm. We sample a block and, for each information set that contains a prefix of a terminal history in the block, we compute the sampled immediate counterfactual regrets of each action, $\tilde r(I, a) = \tilde v_i(\sigma_{(I\to a)}^t, I) - \tilde v_i(\sigma^t, I)$. We accumulate these regrets, and the player's strategy on the next iteration applies the regret-matching algorithm to the accumulated regrets. We now present two specific members of this family, giving details on how the regrets can be updated efficiently.
Outcome-Sampling MCCFR. In outcome-sampling MCCFR we choose $\mathcal{Q}$ so that each block contains a single terminal history, i.e., $\forall Q \in \mathcal{Q}, |Q| = 1$. On each iteration we sample one terminal history and only update each information set along that history. The sampling probabilities, $q_j$, must specify a distribution over terminal histories. We will specify this distribution using a sampling profile, $\sigma'$, so that $q(z) = \pi^{\sigma'}(z)$. Note that any choice of sampling policy will induce a particular distribution over the block probabilities q(z). As long as $\sigma_i'(a|I) > \epsilon$, there exists a $\delta > 0$ such that $q(z) > \delta$, thus ensuring Equation 6 is well-defined.

The algorithm works by sampling z using policy $\sigma'$, storing $\pi^{\sigma'}(z)$. The single history is then traversed forward (to compute each player's probability of playing to reach each prefix of the history, $\pi_i^\sigma(h)$) and backward (to compute each player's probability of playing the remaining actions of the history, $\pi_i^\sigma(h, z)$). During the backward traversal, the sampled counterfactual regrets at each visited information set are computed (and added to the total regret).
$$\tilde r(I, a) = \begin{cases} w_I \cdot (1 - \sigma(a|z[I])) & \text{if } (z[I]a) \sqsubseteq z \\ -w_I \cdot \sigma(a|z[I]) & \text{otherwise} \end{cases}, \quad \text{where } w_I = \frac{u_i(z)\, \pi_{-i}^\sigma(z)\, \pi_i^\sigma(z[I]a, z)}{\pi^{\sigma'}(z)} \qquad (10)$$
One advantage of outcome-sampling MCCFR is that if our terminal history is sampled according to the opponent's policy, so $\sigma'_{-i} = \sigma_{-i}$, then the update no longer requires explicit knowledge of $\sigma_{-i}$, as it cancels with the $\pi_{-i}^{\sigma'}$. So, $w_I$ becomes $u_i(z)\,\pi_i^\sigma(z[I], z)/\pi_i^{\sigma'}(z)$. Therefore, we can use outcome-sampling MCCFR for online regret minimization. We would have to choose our own actions so that $\sigma'_i \approx \sigma_i^t$, but with some exploration to guarantee $q_j \geq \delta > 0$. By balancing the regret caused by exploration with the regret caused by a small $\delta$ (see Section 4 for how MCCFR's bound depends upon $\delta$), we can bound the average overall regret as long as the number of playings T is known in advance. This effectively mimics the approach taken by Exp3 for regret minimization in normal-form games [9]. An alternative form for Equation 10 is recommended for implementation. This and other implementation details can be found in the paper's supplemental material or the appendix of the associated technical report [10].
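A minimal sketch of this backward pass, implementing Equation 10 for one sampled terminal history. The trajectory and regret interfaces are hypothetical, and the general form of $w_I$ is used (in the on-policy case above, the opponent terms cancel).

def outcome_sampling_update(traj_i, u_z, pi_minus_i_z, pi_prime_z, regrets):
    # One outcome-sampling MCCFR update for player i along a sampled
    # terminal history z, following Equation (10).
    # traj_i: player i's decision points along z, in order, as tuples
    #         (I, sigma_I, a_taken) with sigma_I = {a: sigma^t(a|I)};
    # u_z:    u_i(z);  pi_minus_i_z: pi^sigma_{-i}(z);
    # pi_prime_z: pi^{sigma'}(z), the probability z was sampled;
    # regrets: dict I -> {a: cumulative sampled regret}.
    suffix = 1.0  # pi_i^sigma(z[I]a, z): i's action probabilities after I
    for I, sigma_I, a_taken in reversed(traj_i):
        w = u_z * pi_minus_i_z * suffix / pi_prime_z  # w_I of Eq. (10)
        r = regrets.setdefault(I, {a: 0.0 for a in sigma_I})
        for a in sigma_I:
            if a == a_taken:              # (z[I]a) is a prefix of z
                r[a] += w * (1.0 - sigma_I[a])
            else:
                r[a] += -w * sigma_I[a]
        suffix *= sigma_I[a_taken]        # extend suffix past this infoset
    return regrets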
External-Sampling MCCFR. In external-sampling MCCFR we sample only the actions of the opponent and chance (those choices external to the player). We have a block $Q_\tau \in \mathcal{Q}$ for each pure strategy of the opponent and chance, i.e., for each deterministic mapping $\tau$ from $I \in \mathcal{I}_c \cup \mathcal{I}_{N \setminus \{i\}}$ to A(I). The block probabilities are assigned based on the distributions $f_c$ and $\sigma_{-i}$, so $q_\tau = \prod_{I \in \mathcal{I}_c} f_c(\tau(I)|I) \prod_{I \in \mathcal{I}_{N\setminus\{i\}}} \sigma_{-i}(\tau(I)|I)$. The block $Q_\tau$ then contains all terminal histories z consistent with $\tau$; that is, if ha is a prefix of z with $h \in I$ for some $I \in \mathcal{I}_{-i}$, then $\tau(I) = a$. In practice, we will not actually sample $\tau$ but rather sample the individual actions that make up $\tau$ only as needed. The key insight is that these block probabilities result in $q(z) = \pi_{-i}^\sigma(z)$. The algorithm iterates over $i \in N$ and for each does a post-order depth-first traversal of the game tree, sampling actions at each history h where $P(h) \neq i$ (storing these choices so the same actions are sampled at all h in the same information set). Due to perfect recall, it can never visit more than one history from the same information set during this traversal. For each such visited information set the sampled counterfactual regrets are computed (and added to the total regrets).
$$\tilde r(I, a) = (1 - \sigma(a|I)) \sum_{z \in Q \cap Z_I} u_i(z)\, \pi_i^\sigma(z[I]a, z) \qquad (11)$$
Note that the summation can be easily computed during the traversal by always maintaining a
weighted sum of the utilities of all terminal histories rooted at the current history.
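A recursive sketch of one external-sampling traversal, reusing the regret_matching sketch given earlier; the game interface is hypothetical. The per-action update v(a) minus the strategy-weighted value at I is the standard way the sampled counterfactual regrets above are accumulated in practice.

import numpy as np

def external_sampling(h, i, game, regrets, rng):
    # One external-sampling MCCFR traversal updating player i's regrets.
    # The game interface (is_terminal, utility, player, infoset, actions,
    # chance_probs, next) is assumed, not taken from the paper.
    # Returns player i's sampled counterfactual value at h.
    if game.is_terminal(h):
        return game.utility(h, i)
    p = game.player(h)
    actions = game.actions(h)
    if p == 'chance':
        a = actions[rng.choice(len(actions), p=game.chance_probs(h))]
        return external_sampling(game.next(h, a), i, game, regrets, rng)
    I = game.infoset(h, p)
    r = regrets.setdefault(I, {a: 0.0 for a in actions})
    sigma = regret_matching(r)
    if p != i:  # sample a single action for the opponent
        a = actions[rng.choice(len(actions), p=[sigma[a] for a in actions])]
        return external_sampling(game.next(h, a), i, game, regrets, rng)
    # player i: recurse on every action, then update regrets at I
    v = {a: external_sampling(game.next(h, a), i, game, regrets, rng)
         for a in actions}
    v_I = sum(sigma[a] * v[a] for a in actions)
    for a in actions:
        r[a] += v[a] - v_I
    return v_I

# usage sketch: regrets = {}; rng = np.random.default_rng(0)
# for t in range(T):
#     for i in players: external_sampling(root, i, game, regrets, rng)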
4 Theoretical Analysis
We now present regret bounds for members of the MCCFR family, starting with an improved bound for vanilla CFR that depends more explicitly on the exact structure of the extensive game. Let $\vec a_i$ be a subsequence of a history such that it contains only player i's actions in that history, and let $\vec A_i$ be the set of all such player i action subsequences. Let $\mathcal{I}_i(\vec a_i)$ be the set of all information sets where player i's action sequence up to that information set is $\vec a_i$. Define the M-value for player i of the game to be $M_i = \sum_{\vec a_i \in \vec A_i} \sqrt{|\mathcal{I}_i(\vec a_i)|}$. Note that $\sqrt{|\mathcal{I}_i|} \leq M_i \leq |\mathcal{I}_i|$, with both sides of this bound being realized by some game. We can strengthen vanilla CFR's regret bound using this constant, which also appears in the bounds for the MCCFR variants.

Theorem 3 When using vanilla CFR for player i, $R_i^T \leq \Delta_{u,i}\, M_i \sqrt{|A_i|} / \sqrt{T}$.
We now turn our attention to the MCCFR family of algorithms, for which we can provide probabilistic regret bounds. We begin with the most exciting result: showing that external sampling requires only a constant factor more iterations than vanilla CFR (where the constant depends on the desired confidence in the bound).

Theorem 4 For any $p \in (0, 1]$, when using external-sampling MCCFR, with probability at least $1 - p$, average overall regret is bounded by $R_i^T \leq \left(1 + \frac{\sqrt{2}}{\sqrt{p}}\right) \Delta_{u,i}\, M_i \sqrt{|A_i|} / \sqrt{T}$.
Although requiring the same order of iterations, note that external sampling need only traverse a fraction of the tree on each iteration. For balanced games where players make roughly equal numbers of decisions, the iteration cost of external sampling is $O(\sqrt{|H|})$, while vanilla CFR's is $O(|H|)$, meaning external-sampling MCCFR requires asymptotically less time to compute an approximate equilibrium than vanilla CFR (and consequently chance-sampling CFR, which is identical to vanilla CFR in the absence of chance nodes).
Theorem 5 For any $p \in (0, 1]$, when using outcome-sampling MCCFR where $\forall z \in Z$ either $\pi_{-i}^\sigma(z) = 0$ or $q(z) \geq \delta > 0$ at every timestep, with probability $1 - p$, average overall regret is bounded by $R_i^T \leq \left(1 + \frac{\sqrt{2}}{\sqrt{p}}\right) \frac{1}{\delta}\, \Delta_{u,i}\, M_i \sqrt{|A_i|} / \sqrt{T}$.
The proofs for the theorems in this section can be found in the paper's supplemental material and as an appendix of the associated technical report [10]. The supplemental material also presents a slightly more complicated, but general, result for any member of the MCCFR family, from which the two theorems presented above are derived.
Game   |H| (10^6)   |I| (10^3)   l    M_1        M_2        t_vc   t_os    t_es
OCP    22.4         2            5    45         32         28s    46µs    99µs
Goof   98.3         3294         14   89884      89884      110s   150µs   150ms
LTTT   70.4         16039        18   1333630    1236660    38s    62µs    70ms
PAM    91.8         20           13   9541       2930       120s   85µs    28ms

Table 1: Game properties. The value of |H| is in millions and |I| in thousands, and $l = \max_{h \in H} |h|$. $t_{vc}$, $t_{os}$, and $t_{es}$ are the average wall-clock time per iteration⁴ for vanilla CFR, outcome-sampling MCCFR, and external-sampling MCCFR.
5 Experimental Results
We evaluate the performance of MCCFR compared to vanilla CFR on four different games. Goofspiel [11] is a bidding card game where players have a hand of cards numbered 1 to N , and take
turns secretly bidding on the top point-valued card in a point card stack using cards in their hands.
Our version is less informational: players only find out the result of each bid and not which cards
were used to bid, and the player with the highest total points wins. We use N = 7 in our experiments. One-Card Poker [12] is a generalization of Kuhn Poker [13], we use a deck of size 500.
Princess and Monster [14, Research Problem 12.4.1] is a pursuit-evasion game on a graph, neither
player ever knowing the location of the other. In our experiments we use random starting positions,
a 4-connected 3 by 3 grid graph, and a horizon of 13 steps. The payoff to the evader is the number of
steps uncaptured. Latent Tic-Tac-Toe is a twist on the classic game where moves are not disclosed
until after the opponent's next move, and lost if invalid at the time they are revealed. While all of
these games have imperfect information and roughly of similar size, they are a diverse set of games,
varying both in the degree (the ratio of the number of information sets to the number of histories)
and nature (whether due to chance or opponent actions) of imperfect information. The left columns
of Table 1 show various constants, including the number of histories, information sets, game length,
and M-values, for each of these domains.
We used outcome-sampling MCCFR, external-sampling MCCFR, and vanilla CFR to compute an approximate equilibrium in each of the four games. For outcome-sampling MCCFR we used an epsilon-greedy sampling profile $\sigma'$: at each information set, we sample an action uniformly at random with probability $\epsilon$, and according to the player's current strategy $\sigma^t$ otherwise. Through experimentation we found that $\epsilon = 0.6$ worked well across all games; this is interesting because the regret bound suggests $\epsilon$ should be as large as possible. This implies that putting some bias on the most likely outcome to occur is helpful. With vanilla CFR we used an implementational trick called pruning to dramatically reduce the work done per iteration. When updating one player's regrets, if the other player has no probability of reaching the current history, the entire subtree at that history can be pruned for the current iteration, with no effect on the resulting computation. We also used vanilla CFR without pruning to see the effects of pruning in our domains.
Figure 1 shows the results of all four algorithms on all four domains, plotting approximation quality
as a function of the number of nodes of the game tree the algorithm touched while computing.
Nodes touched is an implementation-independent measure of computation; however, the results are
nearly identical if total wall-clock time is used instead. Since the algorithms take radically different
amounts of time per iteration, this comparison directly answers whether the sampling variants' lower cost per iteration outweighs the required increase in the number of iterations. Furthermore, for any fixed game (and degree of confidence p that the bound holds), the algorithms' average overall regret is falling at the same rate, $O(1/\sqrt{T})$, meaning that only their short-term rather than asymptotic
performance will differ.
The graphs show that the MCCFR variants often dramatically outperform vanilla CFR. For example,
in Goofspiel, both MCCFR variants require only a few million nodes to reach $\epsilon_\sigma < 0.5$, where CFR
takes 2.5 billion nodes, three orders of magnitude more. In fact, external-sampling, which has
the tightest theoretical computation-time bound, outperformed CFR and by considerable margins
(excepting LTTT) in all of the games. Note that pruning is key to vanilla CFR being at all practical
in these games. For example, in Latent Tic-Tac-Toe the first iteration of CFR touches 142 million
nodes, but later iterations touch as few as 5 million nodes. This is because pruning is not possible
⁴ As measured on an 8-core Intel Xeon 2.5 GHz machine running Linux x86_64 kernel 2.6.27.
[Figure 1: four convergence plots, one per game (Goofspiel, Latent Tic-Tac-Toe, One-Card Poker, Princess and Monster), each comparing CFR, CFR with pruning, MCCFR-outcome, and MCCFR-external; the x axis is nodes touched, the y axis exploitability.]

Figure 1: Convergence rates of vanilla CFR, outcome-sampled MCCFR, and external-sampled MCCFR for various games. The y axis in each graph represents the exploitability $\epsilon_\sigma$ of the strategies for the two players (see Section 2.1).
in the first iteration. We believe this is due to dominated actions in the game. After one or two
traversals, the players identify and eliminate dominated actions from their policies, allowing these
subtrees to be pruned. Finally, it is interesting to note that external-sampling was not uniformly the best
choice, with outcome-sampling performing better in Goofspiel. With outcome-sampling performing
worse than vanilla CFR in LTTT, this raises the question of what specific game properties might
favor one algorithm over another and whether it might be possible to incorporate additional game
specific constants into the bounds.
6 Conclusion
In this paper we defined a family of sample-based CFR algorithms for computing approximate equilibria in extensive games, subsuming all previous CFR variants. We also introduced two sampling
schemes: outcome-sampling, which samples only a single history for each iteration, and external-sampling, which samples a deterministic strategy for the opponent and chance. In addition to presenting a tighter bound for vanilla CFR, we presented regret bounds for both sampling variants,
which showed that external sampling with high probability gives an asymptotic computational time
improvement over vanilla CFR. We then showed empirically in very different domains that the reduction in iteration time outweighs the increase in required iterations leading to faster convergence.
There are three interesting directions for future work. First, we would like to examine how the
properties of the game affect the algorithms' convergence. Such an analysis could offer further
algorithmic or theoretical improvements, as well as practical suggestions, such as how to choose
a sampling policy in outcome-sampled MCCFR. Second, using outcome-sampled MCCFR as a
general online regret-minimizing technique in extensive games (when the opponents' strategy is not known or controlled) appears promising. It would be interesting to compare the approach, in terms of bounds, computation, and practical convergence, to Gordon's Lagrangian hedging [5]. Lastly, it seems this work could be naturally extended to cases where we don't assume perfect recall.
Imperfect recall could be used as a mechanism for abstraction over actions, where information sets
are grouped by important partial sequences rather than their full sequences.
References
[1] Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret minimization in games with incomplete information. In Advances in Neural Information Processing Systems 20 (NIPS), 2008.
[2] Andrew Gilpin, Samid Hoda, Javier Peña, and Tuomas Sandholm. Gradient-based algorithms for finding Nash equilibria in extensive form games. In 3rd International Workshop on Internet and Network Economics (WINE'07), 2007.
[3] D. Koller, N. Megiddo, and B. von Stengel. Fast algorithms for finding randomized strategies in game trees. In Proceedings of the 26th ACM Symposium on Theory of Computing (STOC '94), pages 750-759, 1994.
[4] Martin Zinkevich, Michael Johanson, Michael Bowling, and Carmelo Piccione. Regret minimization in games with incomplete information. Technical Report TR07-14, University of Alberta, 2007. http://www.cs.ualberta.ca/research/techreports/2007/TR07-14.php.
[5] Geoffrey J. Gordon. No-regret algorithms for online convex programs. In Neural Information Processing Systems 19, 2007.
[6] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press, 1994.
[7] Sergiu Hart and Andreu Mas-Colell. A simple adaptive procedure leading to correlated equilibrium. Econometrica, 68(5):1127-1150, September 2000.
[8] D. Blackwell. An analog of the minimax theorem for vector payoffs. Pacific Journal of Mathematics, 6:1-8, 1956.
[9] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. Gambling in a rigged casino: The adversarial multi-arm bandit problem. In 36th Annual Symposium on Foundations of Computer Science, pages 322-331, 1995.
[10] Marc Lanctot, Kevin Waugh, Martin Zinkevich, and Michael Bowling. Monte Carlo sampling for regret minimization in extensive games. Technical Report TR09-15, University of Alberta, 2009. http://www.cs.ualberta.ca/research/techreports/2009/TR09-15.php.
[11] S. M. Ross. Goofspiel: the game of pure strategy. Journal of Applied Probability, 8(3):621-625, 1971.
[12] Geoffrey J. Gordon. No-regret algorithms for structured prediction problems. Technical Report CMU-CALD-05-112, Carnegie Mellon University, 2005.
[13] H. W. Kuhn. Simplified two-person poker. Contributions to the Theory of Games, 1:97-103, 1950.
[14] Rufus Isaacs. Differential Games: A Mathematical Theory with Applications to Warfare and Pursuit, Control and Optimization. John Wiley & Sons, 1965.
A Game-Theoretic Approach to
Hypergraph Clustering
Samuel Rota Bulò
Marcello Pelillo
University of Venice, Italy
{srotabul,pelillo}@dsi.unive.it
Abstract
Hypergraph clustering refers to the process of extracting maximally coherent
groups from a set of objects using high-order (rather than pairwise) similarities.
Traditional approaches to this problem are based on the idea of partitioning the
input data into a user-defined number of classes, thereby obtaining the clusters as
a by-product of the partitioning process. In this paper, we provide a radically different perspective to the problem. In contrast to the classical approach, we attempt
to provide a meaningful formalization of the very notion of a cluster and we show
that game theory offers an attractive and unexplored perspective that serves well
our purpose. Specifically, we show that the hypergraph clustering problem can
be naturally cast into a non-cooperative multi-player "clustering game", whereby
the notion of a cluster is equivalent to a classical game-theoretic equilibrium concept. From the computational viewpoint, we show that the problem of finding the
equilibria of our clustering game is equivalent to locally optimizing a polynomial
function over the standard simplex, and we provide a discrete-time dynamics to
perform this optimization. Experiments are presented which show the superiority
of our approach over state-of-the-art hypergraph clustering techniques.
1 Introduction
Clustering is the problem of organizing a set of objects into groups, or clusters, in a way as to have
similar objects grouped together and dissimilar ones assigned to different groups, according to some
similarity measure. Unfortunately, there is no universally accepted formal definition of the notion
of a cluster, but it is generally agreed that, informally, a cluster should correspond to a set of objects
satisfying two conditions: an internal coherency condition, which asks that the objects belonging to
the cluster have high mutual similarities, and an external incoherency condition, which states that
the overall cluster internal coherency decreases by adding to it any external object.
Objects similarities are typically expressed as pairwise relations, but in some applications higherorder relations are more appropriate, and approximating them in terms of pairwise interactions can
lead to substantial loss of information. Consider for instance the problem of clustering a given set of
d-dimensional Euclidean points into lines. As every pair of data points trivially defines a line, there
does not exist a meaningful pairwise measure of similarity for this problem. However, it makes
perfect sense to define similarity measures over triplets of points that indicate how close they are
to being collinear. Clearly, this example can be generalized to any problem of model-based point
pattern clustering, where the deviation of a set of points from the model provides a measure of their
dissimilarity. The problem of clustering objects using high-order similarities is usually referred to
as the hypergraph clustering problem.
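To make the line-clustering example concrete, here is one illustrative triplet similarity (the paper does not fix a formula): the residual of the total-least-squares line through three points, mapped through a Gaussian kernel so that collinear triplets score close to one. The function name and scale parameter are ours.

import numpy as np

def collinearity_similarity(p, q, r, scale=1.0):
    # Illustrative triplet similarity for clustering 2-D points into
    # lines: center the three points, take the smallest singular value
    # of the coordinate matrix (the total-least-squares line residual),
    # and map it through a Gaussian kernel.
    P = np.stack([p, q, r]).astype(float)
    P -= P.mean(axis=0)
    residual = np.linalg.svd(P, compute_uv=False)[-1]
    return float(np.exp(-(residual / scale) ** 2))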
In the machine learning community, there has been increasing interest around this problem. Zien
and co-authors [24] propose two approaches called "clique expansion" and "star expansion", respectively. Both approaches transform the similarity hypergraph into an edge-weighted graph, whose edge-weights are a function of the hypergraph's original weights. This way they are able to tackle
the problem with standard pairwise clustering algorithms. Bolla [6] defines a Laplacian matrix for
an unweighted hypergraph and establishes a link between the spectral properties of this matrix and
the hypergraph's minimum cut. Rodríguez [16] achieves similar results by transforming the hypergraph into a graph according to "clique expansion" and shows a relationship between the spectral
properties of a Laplacian of the resulting matrix and the cost of minimum partitions of the hypergraph. Zhou and co-authors [23] generalize their earlier work on regularization on graphs and
define a hypergraph normalized cut criterion for a k-partition of the vertices, which can be achieved
by finding the second smallest eigenvector of a normalized Laplacian. This approach generalizes
the well-known "Normalized cut" pairwise clustering algorithm [19]. Finally, in [2] we find another
work based on the idea of applying a spectral graph partitioning algorithm on an edge-weighted
graph, which approximates the original (edge-weighted) hypergraph. It is worth noting that the approaches mentioned above are devised for dealing with higher-order relations, but can all be reduced
to standard pairwise clustering approaches [1]. A different formulation is introduced in [18], where
the clustering problem with higher-order (super-symmetric) similarities is cast into a nonnegative
factorization of the closest hyper-stochastic version of the input affinity tensor.
All the afore-mentioned approaches to hypergraph clustering are partition-based. Indeed, clusters
are not modeled and sought directly, but they are obtained as a by-product of the partition of the input
data into a fixed number of classes. This renders these approaches vulnerable to applications where
the number of classes is not known in advance, or where data is affected by clutter elements which
do not belong to any cluster (as in figure/ground separation problems). Additionally, by partitioning,
clusters are necessarily disjoint sets, although it is in many cases natural to have overlapping clusters,
e.g., two intersecting lines have the point in the intersection belonging to both lines.
In this paper, following [14, 20] we offer a radically different perspective to the hypergraph clustering problem. Instead of insisting on the idea of determining a partition of the input data, and hence
obtaining the clusters as a by-product of the partitioning process, we reverse the terms of the problem and attempt instead to derive a rigorous formulation of the very notion of a cluster. This allows
one, in principle, to deal with more general problems where clusters may overlap and/or outliers
may get unassigned. We found that game theory offers a very elegant and general mathematical
framework that serves well our purposes. The basic idea behind our approach is that the hypergraph
clustering problem can be considered as a multi-player non-cooperative "clustering game". Within
this context, the notion of a cluster turns out to be equivalent to a classical equilibrium concept from
(evolutionary) game theory, as the latter reflects both the internal and external cluster conditions
alluded to before. We also show that there exists a correspondence between these equilibria and
the local solutions of a polynomial, linearly-constrained, optimization problem, and provide an algorithm for finding them. Experiments on two standard hypergraph clustering problems show the
superiority of the proposed approach over state-of-the-art hypergraph clustering techniques.
2 Basic notions from evolutionary game theory
Evolutionary game theory studies models of strategic interactions (called games) among large
numbers of anonymous agents. A game can be formalized as a triplet ? = (P, S, ?), where
P = {1, . . . , k} is the set of players involved in the game, S = {1, . . . , n} is the set of pure
strategies (in the terminology of game-theory) available to each player and ? : S k ? R is the payoff
function, which assigns a payoff to each strategy profile, i.e., the (ordered) set of pure strategies
played by the individuals. The payoff function ? is assumed to be invariant to permutations of the
strategy profile. It is worth noting that in general games, each player may have its own set of strategies and own payoff function. For a comprehensive introduction to evolutionary game theory we
refer to [22].
In an evolutionary setting, we assume to have a large population of non-rational agents, which are randomly matched to play a game Γ = (P, S, π). Agents are considered non-rational because each of them initially chooses a strategy from S, which will always be played when selected for the game. An agent who selected strategy i ∈ S is called an i-strategist. Evolution in the population takes place because we assume that there exists a selection mechanism which, by analogy with a Darwinian process, spreads the fittest strategies in the population to the detriment of the weakest ones, which will in turn be driven to extinction. We will see later in this work a formalization of such
a selection mechanism.
The state of the population at a given time t can be represented as an n-dimensional vector x(t), where x_i(t) represents the fraction of i-strategists in the population at time t. The set of all possible states describing a population is given by

∆ = { x ∈ ℝⁿ : Σ_{i∈S} x_i = 1 and x_i ≥ 0 for all i ∈ S },

which is called the standard simplex. In the sequel we will drop the time reference from the population state where not necessary. Moreover, we denote with σ(x) the support of x ∈ ∆, i.e., the set of strategies still alive in population x ∈ ∆: σ(x) = {i ∈ S : x_i > 0}.
If y^(i) ∈ ∆ is the probability distribution identifying which strategy the ith player will adopt if drawn to play the game Γ, then the average payoff obtained by the agents can be computed as

u(y^(1), . . . , y^(k)) = Σ_{(s_1,...,s_k)∈S^k} π(s_1, . . . , s_k) Π_{j=1}^{k} y^(j)_{s_j}.   (1)
Note that (1) is invariant to any permutation of the input probability vectors.
Assuming that the agents are randomly and independently drawn from a population x ∈ ∆ to play the game Γ, the population average payoff is given by u(x^k), where x^k is a shortcut for x, . . . , x repeated k times. Furthermore, the average payoff that an i-strategist obtains in a population x ∈ ∆ is given by u(e_i, x^{k-1}), where e_i ∈ ∆ is the vector with i-th component equal to 1 and zero elsewhere.
An important notion in game theory is that of equilibrium [22]. A population x ∈ ∆ is in equilibrium when the distribution of strategies will not change anymore, which intuitively happens when every individual in the population obtains the same average payoff and no strategy can thus prevail on the other ones. Formally, x ∈ ∆ is a Nash equilibrium if

u(e_i, x^{k-1}) ≤ u(x^k),   for all i ∈ S.   (2)

In other words, every agent in the population performs at most as well as the population average payoff. Due to the multi-linearity of u, a consequence of (2) is that

u(e_i, x^{k-1}) = u(x^k),   for all i ∈ σ(x),   (3)
i.e., all the agents that survived the evolution obtain the same average payoff, which coincides with
the population average payoff.
A key concept pertaining to evolutionary game theory is that of an evolutionary stable strategy
[7, 22]. Such a strategy is robust to evolutionary pressure in an exact sense. Assume that in a
population x ∈ ∆, a small share ε of mutant agents appears, whose distribution of strategies is y ∈ ∆. The resulting post-entry population is given by w_ε = (1 − ε)x + εy. Biological intuition suggests that evolutionary forces select against mutant individuals if and only if the average payoff of a mutant agent in the post-entry population is lower than that of an individual from the original population, i.e.,

u(y, w_ε^{k-1}) < u(x, w_ε^{k-1}).   (4)

A population x ∈ ∆ is evolutionary stable (or an ESS) if inequality (4) holds for any distribution of mutant agents y ∈ ∆ \ {x}, granted the population share of mutants ε is sufficiently small (see [22]
for pairwise contests and [7] for n-wise contests).
An alternative, but equivalent, characterization of ESSs involves a leveled notion of evolutionary
stable strategies [7]. We say that x ∈ ∆ is an ESS of level j against y ∈ ∆ if there exists j ∈ {0, . . . , k − 1} such that both conditions

u(y^{j+1}, x^{k-j-1}) < u(y^j, x^{k-j}),   (5)
u(y^{i+1}, x^{k-i-1}) = u(y^i, x^{k-i}),   for all 0 ≤ i < j,   (6)

are satisfied. Clearly, x ∈ ∆ is an ESS if it satisfies a condition of this form for every y ∈ ∆ \ {x}.
It is straightforward to see that any ESS is a Nash equilibrium [22, 7]. An ESS, which satisfies
conditions (6) with j never more than J, will be called an ESS of level J. Note that for the generic
case most of the preceding conditions will be superfluous, i.e., only ESSs of level 0 or 1 are required
[7]. Hence, in the sequel, we will consider only ESSs of level 1. It is not difficult to verify that any
ESS (of level 1) x ∈ ∆ satisfies

u(w_ε^k) < u(x^k),   (7)

for all y ∈ ∆ \ {x} and small enough values of ε.
3 The hypergraph clustering game
The hypergraph clustering problem can be described by an edge-weighted hypergraph. Formally,
an edge-weighted hypergraph is a triplet H = (V, E, s), where V = {1, . . . , n} is a finite set
of vertices, E ⊆ P(V) \ {∅} is the set of (hyper-)edges (here, P(V) is the power set of V), and s : E → ℝ is a weight function which associates a real value with each edge. Note that negative weights are allowed too. Although hypergraphs may have edges of varying cardinality, we will focus on a particular class of hypergraphs, called k-graphs, whose edges all have fixed cardinality k ≥ 2.
In this paper, we cast the hypergraph clustering problem into a game, called (hypergraph) clustering
game, which will be played in an evolutionary setting. Clusters are then derived from the analysis of the ESSs of the clustering game. Specifically, given a k-graph H = (V, E, s) modeling a
hypergraph clustering problem, where V = {1, . . . , n} is the set of objects to cluster and s is the
similarity function over the set of objects in E, we can build a game involving k players, each of
them having the same set of (pure) strategies, namely the set of objects to cluster V . Under this
setting, a population x ∈ ∆ of agents playing a clustering game represents in fact a cluster, where x_i is the probability for object i to be part of it. Indeed, any cluster can be modeled as a probability
distribution over the set of objects to cluster. The payoff function of the clustering game is defined
in a way as to favour the evolution of agents supporting highly coherent objects. Intuitively, this
is accomplished by rewarding the k players in proportion to the similarity that the k played objects
have. Hence, assuming (v1 , . . . , vk ) ? V k to be the tuple of objects selected by k players, the payoff
function can be simply defined as

π(v_1, . . . , v_k) = (1/k!) s({v_1, . . . , v_k}) if {v_1, . . . , v_k} ∈ E, and 0 otherwise,   (8)
where the term 1/k! has been introduced for technical reasons.
Given a population x ∈ ∆ playing the clustering game, we have that the average population payoff u(x^k) measures the cluster's internal coherency as the average similarity of the objects forming the cluster, whereas the average payoff u(e_i, x^{k-1}) of an agent supporting object i ∈ V in population x measures the average similarity of object i with respect to the cluster.
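To make these quantities concrete, the following minimal Python sketch (our illustration, not the authors' code; all names are ours) computes u(x^k) and u(e_i, x^{k-1}) directly from the payoff (8); the 1/k factor below follows from the 1/k! normalization once the (k−1)! orderings of each edge are summed.

```python
import numpy as np

def average_payoffs(edges, weights, x):
    """Population payoff u(x^k) and per-object payoffs u(e_i, x^{k-1})
    for the clustering game of a k-graph with payoff function (8)."""
    n, k = len(x), len(edges[0])
    u_pop = 0.0
    u_obj = np.zeros(n)
    for e, s in zip(edges, weights):
        xe = x[list(e)]
        u_pop += s * np.prod(xe)  # u(x^k) = f(x) = sum_e s(e) prod_{i in e} x_i
        for j, i in enumerate(e):
            # u(e_i, x^{k-1}) = (1/k) sum_{e containing i} s(e) prod_{l in e, l != i} x_l
            u_obj[i] += (s / k) * np.prod(np.delete(xe, j))
    return u_pop, u_obj
```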
An ESS of a clustering game incorporates the properties of internal coherency and external incoherency of a cluster:
internal coherency: since ESSs are Nash equilibria, from (3), it follows that every object contributing to the cluster, i.e., every object in σ(x), has the same average similarity with respect to
the cluster, which in turn corresponds to the cluster?s overall average similarity. Hence, the
cluster is internally coherent;
external incoherency: from (2), every object external to the cluster, i.e., every object in V \ σ(x), has an average similarity which does not exceed the cluster's overall average similarity. There may still be cases where the average similarity of an external object is the same as that of an internal object, undermining the cluster's external incoherency. However, since x is
an ESS, from (7) we see that whenever we try to extend a cluster with small shares of
external objects, the cluster?s overall average similarity drops. This guarantees the external
incoherency property of a cluster to be also satisfied.
Finally, it is worth noting that this theory generalizes the dominant-sets clustering framework which
has recently been introduced in [14]. Indeed, ESSs of pairwise clustering games, i.e. clustering
games defined on graphs, correspond to the dominant-set clusters [20, 17]. This is an additional
evidence that ESSs are meaningful notions of cluster.
4 Evolution towards a cluster
In this section we will show that the ESSs of a clustering game are in one-to-one correspondence
with (strict) local solutions of a non-linear optimization program. In order to find ESSs, we will also
provide a dynamics due to Baum and Eagon, which generalizes the replicator dynamics [22].
Let H = (V, E, s) be a hypergraph clustering problem and Γ = (P, V, π) be the corresponding clustering game. Consider the following non-linear optimization problem:

maximize f(x) = Σ_{e∈E} s(e) Π_{i∈e} x_i,   subject to x ∈ ∆.   (9)
It is simple to see that any first-order Karush-Kuhn-Tucker (KKT) point x ∈ ∆ of program (9) [13] is a Nash equilibrium of Γ. Indeed, by the KKT conditions there exist λ_i ≥ 0, i ∈ S, and μ ∈ ℝ such that for all i ∈ S,

∇f(x)_i + λ_i − μ = 0   and   λ_i x_i = 0,

where ∇ is the gradient operator; since ∇f(x)_i = k · u(e_i, x^{k-1}), it follows straightforwardly that u(e_i, x^{k-1}) ≤ u(x^k) for all i ∈ S. Moreover, it turns out that any strict local maximizer x ∈ ∆ of (9) is an ESS of Γ. Indeed, by definition, a strict local maximizer of this program satisfies u(z^k) = f(z) < f(x) = u(x^k) for any z ∈ ∆ \ {x} close enough to x, which is in turn equivalent to (7) for sufficiently small values of ε.
The problem of extracting ESSs of our hypergraph clustering game can thus be cast into the problem
of finding strict local solutions of (9). We will address this optimization task using a result due to
Baum and Eagon [3], who introduced a class of nonlinear transformations in the probability domain.
Theorem 1 (Baum-Eagon). Let P(x) be a homogeneous polynomial in the variables x_i with nonnegative coefficients, and let x ∈ ∆. Define the mapping z = M(x) as follows:

z_i = x_i ∂_i P(x) / Σ_{j=1}^{n} x_j ∂_j P(x),   i = 1, . . . , n.   (10)
Then P (M(x)) > P (x), unless M(x) = x. In other words M is a growth transformation for the
polynomial P .
The Baum-Eagon inequality provides an effective iterative means for maximizing polynomial functions in probability domains, and in fact it has served as the basis for various statistical estimation
techniques developed within the theory of probabilistic functions of Markov chains [4]. It was also
employed for the solution of relaxation labelling processes [15].
Since f(x) is a homogeneous polynomial in the variables x_i, we can use the transformation of Theorem 1 in order to find a local solution x ∈ ∆ of (9), which in turn provides us with an ESS of the hypergraph clustering game. By taking the support of x, we have a cluster under our framework. The complexity of finding a cluster is thus O(ρ|E|), where |E| is the number of edges of the hypergraph describing the clustering problem and ρ is the average number of iterations needed to converge. Note that ρ never exceeded 100 in our experiments.
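A minimal sketch of this procedure, assuming nonnegative edge weights as Theorem 1 requires (our own illustration, not the authors' implementation):

```python
import numpy as np

def baum_eagon_cluster(edges, weights, n, iters=100, tol=1e-9):
    """Iterate the Baum-Eagon map (10) on f(x) = sum_e s(e) prod_{i in e} x_i.

    edges: list of k-tuples of vertex indices; weights: nonnegative s(e).
    Returns a local maximizer of f over the standard simplex; its support
    sigma(x) is taken as a cluster.
    """
    x = np.full(n, 1.0 / n)  # start at the barycenter of the simplex
    for _ in range(iters):
        grad = np.zeros(n)
        for e, s in zip(edges, weights):
            xe = x[list(e)]
            for j, i in enumerate(e):
                grad[i] += s * np.prod(np.delete(xe, j))  # df/dx_i
        denom = np.dot(x, grad)
        if denom <= 0:  # f is locally flat or zero; nothing to normalize
            break
        z = x * grad / denom  # Eq. (10)
        if np.abs(z - x).max() < tol:
            return z
        x = z
    return x
```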
In order to obtain the clustering, in principle, we have to find the ESSs of the clustering game.
This is a non-trivial, although still feasible, task [21], which we leave as a future extension of this
work. For now, we adopt a naive peeling-off strategy for our cluster extraction procedure. Namely,
we iteratively find a cluster and remove it from the set of objects, and we repeat this process on
the remaining objects until a desired number of clusters have been extracted. The set of extracted
ESSs with this procedure does not technically correspond to the ESSs of the original game, but to
ESSs of sub-games of it. The cost of this approximation is that we unfortunately lose (for now) the possibility of having overlapping clusters.
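The peeling-off procedure can then be sketched as follows (again our own illustration, reusing the baum_eagon_cluster sketch above; the support threshold is a hypothetical implementation detail):

```python
def peel_off_clusters(edges, weights, n, n_clusters, thresh=1e-6):
    """Find an ESS, remove its support from the object set, and repeat."""
    remaining = set(range(n))
    clusters = []
    for _ in range(n_clusters):
        alive = sorted(remaining)
        # keep only hyperedges entirely contained in the remaining objects
        relabel = {v: i for i, v in enumerate(alive)}
        sub = [(tuple(relabel[v] for v in e), s)
               for e, s in zip(edges, weights) if set(e) <= remaining]
        if not sub:
            break
        x = baum_eagon_cluster([e for e, _ in sub], [s for _, s in sub], len(alive))
        support = {alive[i] for i, xi in enumerate(x) if xi > thresh}
        if not support:
            break
        clusters.append(sorted(support))
        remaining -= support
    return clusters
```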
5 Experiments
In this section we present two types of experiments. The first one addresses the problem of line
clustering, while the second one addresses the problem of illuminant-invariant face clustering. We
tested our approach against the Clique Averaging algorithm (CAVERAGE), since it was the best performing approach in [2] on the same type of experiments. Specifically, CAVERAGE outperformed Clique Expansion [10] combined with Normalized cuts, Gibson's Algorithm under sum and product model [9], kHMeTiS [11] and Cascading RANSAC [2]. We also compare against Super-symmetric Non-negative Tensor Factorization (SNTF) [18], because it is the only approach, other than ours, which does not approximate the hypergraph to a graph.
Since both CAVERAGE and SNTF, as opposed to our method, require the number of classes K to be specified, we run them with values of K ∈ {K* − 1, K*, K* + 1} among which the optimal one (K*) is present. This allows us to verify the robustness of the approaches under wrong values of K, which may occur in general as the optimal number of clusters is not known in advance.
Figure 1: Results on clustering 3, 4 and 5 lines perturbed with increasing levels of Gaussian noise (σ = 0, 0.01, 0.02, 0.04, 0.08). Panels: (a) example of three lines with σ = 0.04; (b) three lines; (c) four lines; (d) five lines; the curves report the F-measure of HoCluGame, CAVERAGE and SNTF for the different choices of K.
We executed the experiments on an AMD Sempron 3GHz computer with 1GB RAM. Moreover, we
evaluated the quality of a clustering by computing the average F-measure of each cluster in the
ground-truth with the most compatible one in the obtained solution (according to a one-to-one correspondence).
5.1 Line clustering
We consider the problem of clustering lines in spaces of dimension greater than two, i.e., given a
set of points in Rd , the task is to find sets of collinear points. Pairwise measures of similarity are
useless and at least three points are needed. The dissimilarity measure on triplets of points is given
by their mean distance to the best fitting line. If d(i, j, k) is the dissimilarity of points {i, j, k}, the similarity function is given by s({i, j, k}) = exp(−d(i, j, k)²/σ²), where σ is a scaling parameter, which has been optimally selected for all the approaches according to a small test set.
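For instance, the triplet weights can be computed as follows (a sketch under our own naming; note the O(n³) enumeration of triplets):

```python
import numpy as np
from itertools import combinations

def line_dissimilarity(p1, p2, p3):
    """Mean distance of three d-dimensional points to their best-fitting line."""
    pts = np.stack([p1, p2, p3]).astype(float)
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)  # principal direction of the centered points
    u = vt[0]
    resid = (pts - c) - np.outer((pts - c) @ u, u)  # components orthogonal to the line
    return np.linalg.norm(resid, axis=1).mean()

def triplet_weights(points, sigma):
    """Hyperedges and weights s({i,j,k}) = exp(-d(i,j,k)^2 / sigma^2)."""
    edges, weights = [], []
    for e in combinations(range(len(points)), 3):
        d = line_dissimilarity(*(points[i] for i in e))
        edges.append(e)
        weights.append(np.exp(-d**2 / sigma**2))
    return edges, weights
```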
We conducted two experiments, in order to assess the robustness of the approaches to both local
and global noise. Local noise refers to a Gaussian perturbation applied to the points of a line, while
global noise consists of random outlier points.
A first experiment consists in clustering 3, 4 and 5 lines generated in the 5-dimensional space [−2, 2]^5. Each line consists of 20 points, which have been perturbed according to 5 increasing levels of Gaussian noise, namely σ = 0, 0.01, 0.02, 0.04, 0.08. With this setting there are no outliers
and every point should be assigned to a line (e.g., see Figure 1(a)). Figure 1(b) shows the results
obtained with three lines. We reported, for each noise level, the mean and the standard deviation
of the average F-measures obtained by the algorithms on 30 randomly generated instances. Note
that, if the optimal K is used, CAVERAGE and SNTF perform well and the influence of local noise
is minimal. This behavior intuitively makes sense under moderate perturbations, because if the approaches correctly partitioned the data without noise, it is unlikely that the result will change by
slightly perturbing them. Our approach however achieves good performances as well, although we
can notice that with the highest noise level, the performance slightly drops. This is due to the fact
that points deviating too much from the overall cluster average collinearity will be excluded as they
undermine the cluster's internal coherency. Hence, some perturbed points will be considered outliers. Nevertheless, it is worth noting that by underestimating the optimal number of classes both CAVERAGE and SNTF exhibit a drastic performance drop, whereas the influence of overestimations
Figure 2: Results on clustering 2, 3 and 4 lines with an increasing number of outliers (0, 10, 20, 40). Panels: (a) example of two lines with 40 outliers; (b) two lines; (c) three lines; (d) four lines; the curves report the F-measure of HoCluGame, CAVERAGE and SNTF for the different choices of K.
has a lower impact on the two partition-based algorithms. By increasing the number of lines involved
in the experiment from three to four (Figure 1(c)) and to five (Figure 1(d)) the scenario remains almost the same for our approach and SNTF, while we can notice a slight decrease of CAVERAGE's
performance.
The second experiment consists in clustering 2, 3 and 4 slightly perturbed lines (with fixed local
noise σ = 0.01) generated in the 5-dimensional space [−2, 2]^5. Again, each line consists of 20
points. This time however we added also global noise, i.e., 0, 10, 20 and 40 random points as outliers
(e.g., see Figure 2(a)). Figure 2(b) shows the results obtained with two lines. Here, the supremacy
of our approach over partition-based ones is clear. Indeed, our method is not influenced by outliers
and therefore it performs almost perfectly, whereas C AVERAGE and S NTF perform well only without
outliers and with the optimal K. It is interesting to notice that, as outliers are introduced, C AVERAGE
and S NTF perform better with K > 2. Indeed, the only way to get rid of outliers is to group them in
additional clusters. However, since outliers are not mutually similar and intuitively they do not form
a cluster, we have that the performance of C AVERAGE and S NTF drop drastically as the number of
outliers increases. Finally, by increasing the number of lines from two to three (Figure 2(c)) and
to four (Figure 2(d)), the performance of C AVERAGE and S NTF get worse, while our approach still
achieves good results.
5.2 Illuminant-invariant face clustering
In [5] it has been shown that images of a Lambertian object illuminated by a point light source lie in
a three-dimensional subspace. According to this result, if we assume that four images of a face form the columns of a matrix, then d = s₄²/(s₁² + · · · + s₄²) provides us with a measure of dissimilarity, where s_i is the ith singular value of this matrix [2]. We use this dissimilarity measure for the face
clustering problem and we consider as dataset the Yale Face Database B and its extended version
[8, 12]. In total we have faces of 38 individuals, each under 64 different illumination conditions. We
compared our approach against CAVERAGE and SNTF on subsets of this face dataset. Specifically, we considered cases where we have faces from 4 and 5 random individuals (10 faces per individual), and with and without outliers. The case with outliers consists in 10 additional faces each from a different individual. For each of those combinations, we created 10 random subsets. Similarly to the case of line clustering, we run CAVERAGE and SNTF with values of K ∈ {K* − 1, K*, K* + 1}, where K* is the optimal one.
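The dissimilarity d defined above reduces to a few lines of code; a sketch (ours, with illustrative names):

```python
import numpy as np

def face_quadruple_dissimilarity(images):
    """d = s4^2 / (s1^2 + ... + s4^2) for four images of a candidate face.

    images: four grayscale images; d is small when they span a roughly
    three-dimensional subspace, as for a Lambertian object under point light.
    """
    M = np.stack([im.ravel().astype(float) for im in images], axis=1)  # pixels x 4
    s = np.linalg.svd(M, compute_uv=False)  # singular values, s[0] >= ... >= s[3]
    return s[3] ** 2 / np.sum(s ** 2)
```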
                 4 classes                 5 classes
              0 outliers  10 outliers   0 outliers  10 outliers
CAVERAGE K=3  0.63±0.11   0.59±0.07        -           -
CAVERAGE K=4  0.96±0.06   0.84±0.07    0.56±0.14   0.58±0.07
CAVERAGE K=5  0.91±0.06   0.79±0.05    0.85±0.12   0.83±0.06
CAVERAGE K=6      -           -        0.84±0.09   0.82±0.06
SNTF K=3      0.62±0.12   0.58±0.10        -           -
SNTF K=4      0.87±0.07   0.81±0.08    0.61±0.13   0.59±0.09
SNTF K=5      0.82±0.09   0.76±0.09    0.86±0.12   0.80±0.07
SNTF K=6          -           -        0.85±0.08   0.79±0.11
HoCluGame     0.95±0.03   0.94±0.02    0.95±0.05   0.94±0.02

Table 1: Experiments on illuminant-invariant face clustering (average F-measure, mean ± standard deviation).
In Table 1 we report the average F-measures (mean and standard deviation) obtained by the three
approaches. The results are consistent with those obtained in the case of line clustering with the
exception of SNTF, which performs worse than the other approaches on this real-world application. CAVERAGE and our algorithm perform comparably well when clustering 4 individuals without outliers. However, our approach turns out to be more robust in every other tested case, i.e., when the number of classes increases and when outliers are introduced. Indeed, CAVERAGE's performance
decreases, while our approach yields the same good results.
In both the experiments of line and face clustering the execution times of our approach were higher
than those of CAVERAGE, but considerably lower than those of SNTF. The main reason why CAVERAGE runs faster is that our approach and SNTF work directly on the hypergraph without resorting to pairwise relations, which is indeed what CAVERAGE does. Further, we mention that our code was not
optimized to improve speed and all the approaches were run without any sampling policy.
6 Discussion
In this paper, we offered a game-theoretic perspective to the hypergraph clustering problem. Within
our framework the clustering problem is viewed as a multi-player non-cooperative game, and classical equilibrium notions from evolutionary game theory turn out to provide a natural formalization
of the notion of a cluster. We showed that the problem of finding these equilibria (clusters) is equivalent to solving a polynomial optimization problem with linear constraints, which we solve using an
algorithm based on the Baum-Eagon inequality. An advantage of our approach over traditional techniques is the independence from the number of clusters, which is indeed an intrinsic characteristic
of the input data, and the robustness against outliers, which is especially useful when solving figure/ground-like grouping problems. We also mention, as a potential positive feature of the proposed
approach, the possibility of finding overlapping clusters (e.g., along the lines presented in [21]), although in this paper we have not explicitly dealt with this problem. The experimental results show
the superiority of our approach with respect to the state of the art in terms of quality of solution. We
are currently studying alternatives to the plain Baum-Eagon dynamics in order to improve efficiency.
Acknowledgments. We acknowledge financial support from the FET programme within EU FP7,
under the SIMBAD project (contract 213250). We also thank Sameer Agarwal and Ron Zass for
providing us with the code of their algorithms.
References
[1] S. Agarwal, K. Branson, and S. Belongie. Higher order learning with graphs. In Int. Conf. on Mach. Learning, volume 148, pages 17-24, 2006.
[2] S. Agarwal, J. Lim, L. Zelnik-Manor, P. Perona, D. Kriegman, and S. Belongie. Beyond pairwise clustering. In IEEE Conf. Computer Vision and Patt. Recogn., volume 2, pages 838-845, 2005.
[3] L. E. Baum and J. A. Eagon. An inequality with applications to statistical estimation for probabilistic functions of Markov processes and to a model for ecology. Bull. Amer. Math. Soc., 73:360-363, 1967.
[4] L. E. Baum, T. Petrie, G. Soules, and N. Weiss. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann. Math. Statistics, 41:164-171, 1970.
[5] P. Belhumeur and D. Kriegman. What is the set of images of an object under all possible lighting conditions. Int. J. Comput. Vision, 28(3):245-260, 1998.
[6] M. Bolla. Spectral, euclidean representations and clusterings of hypergraphs. Discr. Math., 117:19-39, 1993.
[7] M. Broom, C. Cannings, and G. T. Vickers. Multi-player matrix games. Bull. Math. Biology, 59(5):931-952, 1997.
[8] A. S. Georghiades, P. N. Belhumeur, and D. J. Kriegman. From few to many: illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Machine Intell., 23(6):643-660, 2001.
[9] D. Gibson, J. M. Kleinberg, and P. Raghavan. VLDB, chapter Clustering categorical data: An approach based on dynamical systems, pages 311-322. Morgan Kaufmann Publishers Inc., 1998.
[10] T. Hu and K. Moerder. Multiterminal flows in hypergraphs. In T. Hu and E. S. Kuh, editors, VLSI circuit layout: theory and design, pages 87-93. 1985.
[11] G. Karypis and V. Kumar. Multilevel k-way hypergraph partitioning. VLSI Design, 11(3):285-300, 2000.
[12] K. C. Lee, J. Ho, and D. Kriegman. Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans. Pattern Anal. Machine Intell., 27(5):684-698, 2005.
[13] D. G. Luenberger. Linear and nonlinear programming. Addison Wesley, 1984.
[14] M. Pavan and M. Pelillo. Dominant sets and pairwise clustering. IEEE Trans. Pattern Anal. Machine Intell., 29(1):167-172, 2007.
[15] M. Pelillo. The dynamics of nonlinear relaxation labeling processes. J. Math. Imag. and Vision, 7(4):309-323, 1997.
[16] J. Rodríguez. On the Laplacian spectrum and walk-regular hypergraphs. Linear and Multilinear Algebra, 51:285-297, 2003.
[17] S. Rota Bulò. A game-theoretic framework for similarity-based data clustering. PhD thesis, University of Venice, 2009.
[18] A. Shashua, R. Zass, and T. Hazan. Multi-way clustering using super-symmetric non-negative tensor factorization. In Europ. Conf. on Comp. Vision, volume 3954, pages 595-608, 2006.
[19] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Machine Intell., 22:888-905, 2000.
[20] A. Torsello, S. Rota Bulò, and M. Pelillo. Grouping with asymmetric affinities: a game-theoretic perspective. In IEEE Conf. Computer Vision and Patt. Recogn., pages 292-299, 2006.
[21] A. Torsello, S. Rota Bulò, and M. Pelillo. Beyond partitions: allowing overlapping groups in pairwise clustering. In Int. Conf. Patt. Recogn., 2008.
[22] J. W. Weibull. Evolutionary game theory. Cambridge University Press, 1995.
[23] D. Zhou, J. Huang, and B. Schölkopf. Learning with hypergraphs: clustering, classification, embedding. In Adv. in Neur. Inf. Processing Systems, volume 19, pages 1601-1608, 2006.
[24] J. Y. Zien, M. D. F. Schlag, and P. K. Chan. Multilevel spectral hypergraph partitioning with arbitrary vertex sizes. IEEE Trans. on Comp.-Aided Design of Integr. Circ. and Systems, 18:1389-1399, 1999.
Structured output regression for detection with
partial truncation
Andrea Vedaldi
Andrew Zisserman
Department of Engineering
University of Oxford
Oxford, UK
{vedaldi,az}@robots.ox.ac.uk
Abstract
We develop a structured output model for object category detection that explicitly
accounts for alignment, multiple aspects and partial truncation in both training and
inference. The model is formulated as large margin learning with latent variables
and slack rescaling, and both training and inference are computationally efficient.
We make the following contributions: (i) we note that extending the Structured
Output Regression formulation of Blaschko and Lampert [1] to include a bias term
significantly improves performance; (ii) that alignment (to account for small rotations and anisotropic scalings) can be included as a latent variable and efficiently
determined and implemented; (iii) that the latent variable extends to multiple aspects (e.g. left facing, right facing, front) with the same formulation; and (iv),
most significantly for performance, that truncated and non-truncated instances can be included in both training and inference with an explicit truncation mask.
We demonstrate the method by training and testing on the PASCAL VOC 2007 data set – training includes the truncated examples, and in testing object instances
are detected at multiple scales, alignments, and with significant truncations.
1
Introduction
There has been a steady increase in the performance of object category detection as measured by the
annual PASCAL VOC challenges [3]. The training data provided for these challenges specifies if an
object is truncated – when the provided axis-aligned bounding box does not cover the full extent of the object. The principal cause of truncation is that the object partially lies outside the image area. Most participants simply disregard the truncated training instances and learn from the non-truncated
ones. This is a waste of training material, but more seriously many truncated instances are missed
in testing, significantly reducing the recall and hence decreasing overall recognition performance.
In this paper we develop a model (Fig. 1) which explicitly accounts for truncation in both training and testing, and demonstrate that this leads to a significant performance boost. The model is
specified as a joint kernel and learnt using an extension of the structural SVM with latent variables
framework of [13]. We use this approach as it allows us to address a second deficiency of the provided supervision – that the annotation is limited to axis-aligned bounding boxes, even though the objects may be in-plane rotated so that the box is a loose bound. The latent variables allow us to specify a pose transformation for each instance so that we achieve a spatial correspondence between all instances with the same aspect. We show the benefits of this for both the learnt model and
testing performance.
Our model is complementary to that of Felzenszwalb et al. [4] who propose a latent SVM framework, where the latent variables specify sub-part locations. The parts give their model some tolerance to in-plane rotation and foreshortening (though an axis-aligned rectangle is still used for a first
Figure 1: Model overview. Detection examples on the VOC images for the bicycle class demonstrate that the model can handle severe truncations (a-b), multiple objects (c), multiple aspects (d), and pose variations (small in-plane rotations) (e). Truncations caused by the image boundary, (a) & (b), are a significant problem for template-based detectors, since the template can then only partially align with the image. Small in-plane rotations and anisotropic rescalings of the template are approximated efficiently by rearranging sub-blocks of the HOG template (white boxes in (e)).
stage as a "root filter") but they do not address the problem of truncation. Like them we base our
implementation on the efficient and successful HOG descriptor of Dalal and Triggs [2].
Previous authors have also considered occlusion (of which truncation is a special case). Williams et
al. [11] used pixel-wise binary latent variables to specify the occlusion and an Ising prior for spatial
coherence. Inference involved marginalizing out the latent variables using a mean field approximation. There was no learning of the model from occluded data. For faces with partial occlusion, the
resulting classifier showed an improvement over a classifier which did not model occlusion. Others
have explicitly included occlusion at the model learning stage, such as the Constellation model of
Fergus et al. [5] and the Layout Consistent Random Field model of Winn et al. [12]. There are numerous papers on detecting faces with various degrees of partial occlusion from glasses, or synthetic
truncations [6, 7].
Our contribution is to define an appropriate joint kernel and loss function to be used in the context
of structured output prediction. We then learn a structured regressor, mapping an image to a list
of objects with their pose (or bounding box), while at the same time handling explicitly truncation
and multiple aspects. Our choice of kernel is inspired by the restriction kernel of [1]; however, our
kernel performs both restriction and alignment to a template, supports multiple templates to handle
different aspects and truncations, and adds a bias term which significantly improves performance.
We refine pose beyond translation and scaling with an additional transformation selected from a
finite set of possible perturbations covering aspect ratio change and small in-plane rotations. Instead
of explicitly transforming the image with each element of this set (which would be prohibitively expensive) we introduce a novel approximation based on decomposing the HOG descriptor into small
blocks and quickly rearranging those. To handle occlusions we selectively switch between an object
descriptor and an occlusion descriptor. To identify which portions of the template are occluded we
use a field of binary variables. These could be treated as latent variables; however, since we consider
here only occlusions caused by the image boundaries, we can infer them deterministically from the
position of the object relative to the image boundaries. Fig. 1 illustrates various detection examples
including truncation, multiple instances and aspects, and in-plane rotation.
In training we improve the ground-truth pose parameters, since the bounding boxes and aspect associations provided in PASCAL VOC are quite coarse indicators of the object pose. For each instance
we add a latent variable which encodes a pose adjustment. Such variables are then treated as part of
learning using the technique presented in [13]. However, while there the authors use the CCCP algorithm to treat the case of margin rescaling, here we show that a similar algorithm applies to the case
of slack rescaling. The resulting optimization alternates between optimizing the model parameters
given the latent variables (a convex problem solved by a cutting plane algorithm) and optimizing the
latent variable given the model (akin to inference).
The overall method is computationally efficient both in training and testing, achieves very good
performances, and it is able to learn and recognise truncated objects.
2 Model
Our purpose is to learn a function y = f(x), x ∈ X, y ∈ Y which, given an image x, returns the poses y of the objects portrayed in the image. We use the structured prediction learning framework of [9, 13], which considers, along with the input and output variables x and y, an auxiliary latent variable h ∈ H as well (we use h to specify a refinement to the ground-truth pose parameters). The function f is then defined as f(x; w) = ŷ_x(w), where

(ŷ_x(w), ĥ_x(w)) = argmax_{(y,h)∈Y×H} F(x, y, h; w),   F(x, y, h; w) = ⟨w, Ψ(x, y, h)⟩,   (1)

and Ψ(x, y, h) ∈ ℝ^d is a joint feature map. This maximization estimates both y and h from the data x and corresponds to performing inference. Given training data (x_1, y_1), . . . , (x_N, y_N), the parameters w are learned by minimizing the regularized empirical risk

R(w) = (1/2)‖w‖² + (C/N) Σ_{i=1}^{N} ∆(y_i, ŷ_i(w), ĥ_i(w)),   where ŷ_i(w) = ŷ_{x_i}(w), ĥ_i(w) = ĥ_{x_i}(w).   (2)

Here ∆(y_i, y, h) ≥ 0 is an appropriate loss function that encodes the cost of an incorrect prediction.
In this section we develop the model Ψ(x, y, h), or equivalently the joint kernel function K(x, y, h, x′, y′, h′) = ⟨Ψ(x, y, h), Ψ(x′, y′, h′)⟩, in a number of stages. We first define the kernel for the case of a single unoccluded instance in an image including scale and perturbing transformations, then generalise this to include truncations and aspects; and finally to multiple instances. An appropriate loss function ∆(y_i, y, h) is subsequently defined which takes into account the pose of the object specified by the latent variables.
2.1 Restriction and alignment kernel
Denote by R a rectangular region of the image x, and by x|_R the image cropped to that rectangle. A restriction kernel [1] is the kernel K((x, R), (x′, R′)) = K_image(x|_R, x′|_{R′}) where K_image is an appropriate kernel between images. The goal is that the joint kernel should be large when the two regions have similar appearance.
Our kernel is similar, but captures both the idea of restriction and alignment. Let R_0 be a reference rectangle and denote by R(p) = g_p R_0 the rectangle obtained from R_0 by a geometric transformation g_p : ℝ² → ℝ². We call p the pose of the rectangle R(p). Let x̄ be an image defined on the reference rectangle R_0 and let H(x̄) ∈ ℝ^d be a descriptor (e.g. SIFT, HOG, GIST [2]) computed from the image appearance. Then a natural definition of the restriction and alignment kernel is

K((x, p), (x′, p′)) = K_descr(H(x; p), H(x′; p′))   (3)

where K_descr is an appropriate kernel for descriptors, and H(x; p) is the descriptor computed on the transformed patch x as H(x; p) = H(g_p^{-1} x).
Implementation with HOG descriptors. Our choice of the HOG descriptor puts some limits on
the space of poses p that can be efficiently explored. To see this, it is necessary to describe how
HOG descriptors are computed.
The computation starts by decomposing the image x into cells of d × d pixels (here d = 8). The descriptor of a cell is the nine-dimensional histogram of the orientation of the image gradient inside the cell (appropriately weighed and normalized as in [2]). We obtain the HOG descriptor of a rectangle of w × h cells by stacking the enclosed cell descriptors (this is a 9 · w · h vector). Thus, given the cell histograms, we can immediately obtain the HOG descriptors H(x, y) for all the cell-aligned translations (x, y) of the dw × dh pixels rectangle. To span rectangles of different scales this construction is simply repeated on the rescaled image g_s^{-1}x, where g_s(z) = λ^s z is a rescaling, λ > 0, and s is a discrete scale parameter.
To further refine pose beyond scale and translation, here we consider an additional perturbation g_t, indexed by a pose parameter t, selected in a set of transformations g_1, . . . , g_T (in the experiments we use 16 transformations, obtained from all combinations of rotations of ±5 and ±10 degrees and scaling along x of 95%, 90%, 80% and 70%). Those could be achieved in the same manner as scaling by transforming the image g_t^{-1}x for each one, but this would be very expensive (we would
need to recompute the cell descriptors every time). Instead, we approximate such transformations
by rearranging the cells of the template (Fig. 1 and 2). First, we partition the w × h cells of the template into blocks of m × m cells (e.g. m = 4). Then we transform the center of each block according to g_t and we translate the block to the new center (approximated to units of cells). Notice
that we could pick m = 1 (i.e. move each cell of the template independently), but we prefer to use
blocks as this accelerates inference (see Sect. 4).
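A sketch of this block rearrangement (our own illustration, assuming the template dimensions are multiples of m; overlapping destinations simply overwrite, which is part of the approximation):

```python
import numpy as np

def rearrange_blocks(template, g, m=4):
    """Approximate a small perturbation g of a HOG template by moving
    m x m blocks of cells to the transformed block centers.

    template: (h, w, 9) array of HOG cell histograms
    g:        maps a (row, col) block center, in cell units, to a new center
    """
    h, w, _ = template.shape
    out = np.zeros_like(template)
    for by in range(0, h, m):
        for bx in range(0, w, m):
            ny, nx = g(by + m / 2.0, bx + m / 2.0)   # transformed block center
            ty, tx = int(round(ny - m / 2.0)), int(round(nx - m / 2.0))
            sy, sx = max(ty, 0), max(tx, 0)          # clip to the template
            ey, ex = min(ty + m, h), min(tx + m, w)
            if sy < ey and sx < ex:
                out[sy:ey, sx:ex] = template[by + sy - ty:by + ey - ty,
                                             bx + sx - tx:bx + ex - tx]
    return out
```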
Hence, pose is for us a tuple (x, y, s, t) representing translation, scale, and additional perturbation.
Since HOG descriptors are designed to be compared with a linear kernel, we define

K((x, p), (x′, p′)) = ⟨Ψ(x, p), Ψ(x′, p′)⟩,   Ψ(x, p) = H(x; p).   (4)

2.2 Modeling truncations
If part of the object is occluded (either by clutter or by the image boundaries), some of the descriptor
cells will be either unpredictable or undefined. We explicitly deal with occlusion at the granularity
of the HOG cells by adding a field of w × h binary indicator variables v ∈ {0, 1}^{wh}. Here v_j = 1 means that the j-th cell of the HOG descriptor H(x, p) should be considered to be visible, and v_j = 0 that it is occluded. We thus define a variant of (4) by considering the feature map

Ψ(x, p, v) = [ (v ⊗ 1_9) ⊙ H(x, p) ; ((1_{wh} − v) ⊗ 1_9) ⊙ H(x, p) ]   (5)

where 1_d is a d-dimensional vector of all ones, ⊗ denotes the Kronecker product, and ⊙ the Hadamard (component-wise) product. To understand this expression, recall that H is the stacking of w × h 9-dimensional histograms, so for instance (v ⊗ 1_9) ⊙ H(x, p) preserves the visible cells and nulls the
others. Eq. (5) is effectively defining a template for the object and one for the occlusions.
Notice that v are in general latent variables and should be estimated as such. However here we
note that the most severe and frequent occlusions are caused by the image boundaries (finite field of
view). In this case, which we explore in the experiments, we can write v = v(p) as a function of
the pose p, and remove the explicit dependence on v in Ψ. Moreover the truncated HOG cells are
undefined and can be assigned a nominal common value. So here we work with a simplified kernel,
in which occlusions are represented by a scalar proportional to the number of truncated cells:
Ψ(x, p) = [ (v(p) ⊗ 1_9) ⊙ H(x, p) ; wh − |v(p)| ]   (6)
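In code, the simplified feature map (6) amounts to masking the truncated cells and appending the truncation count (a sketch with our own names):

```python
import numpy as np

def truncated_feature(hog, visible):
    """Feature map (6): visible HOG entries plus a truncation scalar.

    hog:     (h, w, 9) HOG descriptor aligned to pose p
    visible: (h, w) boolean mask v(p), False for cells outside the image
    """
    masked = np.where(visible[..., None], hog, 0.0).ravel()  # (v ⊗ 1_9) ⊙ H(x, p)
    n_truncated = visible.size - int(visible.sum())          # wh − |v(p)|
    return np.concatenate([masked, [float(n_truncated)]])
```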
2.3 Modeling aspects
A template model works well as long as pose captures accurately enough the transformations resulting from changes in the viewing conditions. In our model, the pose p, combined with the robustness
of the HOG descriptor, can absorb a fair amount of viewpoint induced deformation. It cannot, however, capture the 3D structure of a physical object. Therefore, extreme changes of viewpoint require
switching between different templates. To this end, we augment pose with an aspect indicator a (so
that pose is the tuple p = (x, y, s, t, a)), which indicates which template to use.
Note that now the concept of pose has been generalized to include a parameter, a, which, differently
from the others, does not specify a geometric transformation. Nevertheless, pose specifies how the
model should be aligned to the image, i.e. by (i) choosing the template that corresponds to the
aspect a, (ii) translating and scaling such template according to (x, y, s), and (iii) applying to it
the additional perturbation gt . In testing, all such parameters are estimated as part of inference.
In training, they are initialized from the ground truth data annotations (bounding boxes and aspect
labels), and are then refined by estimating the latent variables (Sect. 2.4).
4
We assign each aspect to a different "slot" of the feature vector Ψ(x, p). Then we null all but one of the slots, as indicated by a:

Ψ(x, p) = [ δ_{a=1} Ψ_1(x; p) ; . . . ; δ_{a=A} Ψ_A(x; p) ]   (7)

where Ψ_a(x; p) is a feature vector in the form of (6). In this way, we compare different templates
for different aspects, as indicated by a.
The model can be extended to capture symmetries of the aspects (resulting from symmetries of the
objects). For instance, a left view of a bicycle can be obtained by mirroring a right view, so that the
same template can be used for both aspects by defining
Ψ(x; p) = δ_{a=left} Ψ_left(x; p) + δ_{a=right} flip Ψ_right(x; p),   (8)

where flip is the operator that "flips" the descriptor (this can be defined in general by computing the
descriptor of the mirrored image, but for HOG it reduces to rearranging the descriptor components).
The problem remains of assigning aspects to the training data. In the Pascal VOC data, objects are
labeled with one of five aspects: front, left, right, back, undefined. However, such assignments may
not be optimal for use in a particular algorithm. Fortunately, our method is able to automatically
reassign aspects as part of the estimation of the hidden variables (Sect. 2.4 and Fig. 2).
2.4 Latent variables
The PASCAL VOC bounding boxes yield only a coarse estimate of the ground truth pose parameters
(e.g. they do not contain any information on the object rotation) and the aspect assignments may
also be suboptimal (see previous section). Therefore, we introduce latent variables h = (∆p) that encode an adjustment to the ground-truth pose parameters y = (p). In practice, the adjustment ∆p
is a small variation of translation x, y, scale s, and perturbation t, and can switch the aspect a altogether.
We modify the feature maps to account for the adjustment in the obvious way. For instance (6)
becomes
Ψ(x, p, ∆p) = [ (v(p + ∆p) ⊗ 1_9) ⊙ H(x, p + ∆p) ; wh − |v(p + ∆p)| ]   (9)
2.5 Variable number of objects: loss function, bias, training
So far, we have defined the feature map Ψ(x, y) = Ψ(x; p) for the case in which the label y = (p) contains exactly one object, but an image may contain no or multiple object instances (denoted respectively y = ∅ and y = (p_1, . . . , p_n)). We define the loss function between a ground truth label y_i and the estimated output y as

∆(y_i, y) = 0 if y_i = y = ∅;  1 − overl(B(p), B(p′)) if y_i = (p) and y = (p′);  1 if y_i ≠ ∅ and y = ∅, or y_i = ∅ and y ≠ ∅,   (10)

where B is the ground truth bounding box, and B′ is the prediction (the smallest axis-aligned bounding box that contains the warped template g_p R_0). The overlap score between B and B′ is given by overl(B, B′) = |B ∩ B′|/|B ∪ B′|. Note that the ground truth poses are defined so that B(p_l) matches the PASCAL provided bounding boxes [1] (or the manually extended ones for the truncated ones).
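The overlap score and the loss (10) are straightforward to implement; a sketch (ours), where a missing detection or ground truth is encoded as None:

```python
def overlap(b1, b2):
    """PASCAL overlap |B1 ∩ B2| / |B1 ∪ B2| for boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(b1[2], b2[2]) - max(b1[0], b2[0]))
    ih = max(0.0, min(b1[3], b2[3]) - max(b1[1], b2[1]))
    inter = iw * ih
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    return inter / (a1 + a2 - inter)

def loss(y_true, y_pred):
    """Loss (10); None stands for the empty label (no object)."""
    if y_true is None and y_pred is None:
        return 0.0
    if y_true is None or y_pred is None:
        return 1.0
    return 1.0 - overlap(y_true, y_pred)
```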
The hypothesis y = \varepsilon (no object) receives score F(x, \varepsilon; w) = 0 by defining \Psi(x, \varepsilon) = 0 as in [1]. In this way, the hypothesis y = (p) is preferred to y = \varepsilon only if F(x, p; w) > F(x, \varepsilon; w) = 0, which implicitly sets the detection threshold to zero. However, there is no reason to assume that this threshold should be appropriate (in Fig. 2 we show that it is not). To learn an arbitrary threshold, it suffices to append to the feature vector \Psi(x, p) a large constant \Psi_{\mathrm{bias}}, so that the score of the hypothesis y = (p) becomes F(x, (p); w) = \langle w, \Psi(x, p) \rangle + \Psi_{\mathrm{bias}} w_{\mathrm{bias}}. Note that, since the constant is large, the weight w_{\mathrm{bias}} remains small and has a negligible effect on the SVM regularization term.
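In code, the bias trick amounts to a one-line feature augmentation; the constant 100.0 below is an illustrative value, not one taken from the paper.

```python
import numpy as np

def with_bias(psi, psi_bias=100.0):
    """Append a large constant feature so the score becomes
    <w, psi> + psi_bias * w_bias, i.e. a learnable detection threshold."""
    return np.concatenate([psi, [psi_bias]])
```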
Finally, an image may contain more than one instance of the object. The model can be extended to this case by setting F(x, y; w) = \sum_{l=1}^{L} F(x, p_l; w) + R(y), where R(y) encodes a "repulsive" force that prevents multiple overlapping detections of the same object. Performing inference with such a model becomes however combinatorial and in general very difficult. Fortunately, in training the problem can be avoided entirely. If an image contains multiple instances,
the image is added to the training set multiple times, each time "activating" one of the instances and "deactivating" the others. Here "deactivating" an instance simply means removing it from the detector search space. Formally, let p_0 be the pose of the active instance and p_1, \ldots, p_N the poses of the inactive ones. A pose p is removed from the search space if, and only if, \max_i \mathrm{overl}(B(p), B(p_i)) \geq \max\{\mathrm{overl}(B(p), B(p_0)), 0.2\}.
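The deactivation rule can be sketched as a predicate over poses; B and overlap are assumed to be callables mapping a pose to its box and computing IoU, respectively.

```python
def is_deactivated(p, p_active, p_inactive, B, overlap):
    """True if pose p must be removed from the search space because it
    overlaps an inactive instance more than the active one (and > 0.2)."""
    o_active = overlap(B(p), B(p_active))
    o_inactive = max((overlap(B(p), B(q)) for q in p_inactive), default=0.0)
    return o_inactive >= max(o_active, 0.2)
```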
3 Optimisation
Minimising the regularised risk R(w) as defined by Eq. (2) is difficult as the loss depends on w through \hat{y}_i(w) and \hat{h}_i(w) (see Eq. (1)). It is however possible to optimise an upper bound (derived below) given by

\frac{1}{2}\|w\|^2 + \frac{C}{N}\sum_{i=1}^{N} \max_{(y,h)\in\mathcal{Y}\times\mathcal{H}} \Delta(y_i, y, h)\left[1 + \langle w, \Psi(x_i, y, h)\rangle - \langle w, \Psi(x_i, y_i, h_i^*(w))\rangle\right].    (11)

Here h_i^*(w) = \operatorname*{argmax}_{h\in\mathcal{H}} \langle w, \Psi(x_i, y_i, h)\rangle completes the label (y_i, h_i^*(w)) of the sample x_i (of which only the observed part y_i is known from the ground truth).
Alternation optimization. Eq. (11) is not a convex energy function due to the dependency of h_i^*(w) on w. Similarly to [13], however, it is possible to find a local minimum by alternating between optimizing w and estimating h_i^*. To do this, the CCCP algorithm proposed in [13] for the case of margin rescaling must be extended to the slack rescaling formulation used here.
Starting from an estimate w_t of the solution, define h_{it}^* = h_i^*(w_t), so that, for any w,

\langle w, \Psi(x_i, y_i, h_i^*(w))\rangle = \max_{h'} \langle w, \Psi(x_i, y_i, h')\rangle \geq \langle w, \Psi(x_i, y_i, h_{it}^*)\rangle,

and the equality holds for w = w_t. Hence the energy (11) is bounded by

\frac{1}{2}\|w\|^2 + \frac{C}{N}\sum_{i=1}^{N} \max_{(y,h)\in\mathcal{Y}\times\mathcal{H}} \Delta(y_i, y, h)\left[1 + \langle w, \Psi(x_i, y, h)\rangle - \langle w, \Psi(x_i, y_i, h_{it}^*)\rangle\right]    (12)

and the bound is tight for w = w_t. Optimising (12) will therefore result in an improvement of the energy (11) as well. The latter can be carried out with the cutting-plane technique of [9].
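The alternation can be summarized by the following Python skeleton; complete_h and solve_convex stand in for the latent-variable completion and the cutting-plane solver of [9], which are assumed rather than implemented here.

```python
def alternation_optimization(xs, ys, w0, complete_h, solve_convex, n_iters=5):
    """Alternate (i) fixing h*_it = h*_i(w_t) at the current solution and
    (ii) minimizing the convex upper bound (12) with those h fixed."""
    w = w0
    for _ in range(n_iters):
        h_star = [complete_h(w, x, y) for x, y in zip(xs, ys)]  # step (i)
        w = solve_convex(xs, ys, h_star)                        # step (ii)
    return w
```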
Derivation of the bound (11). The derivation involves a sequence of bounds, starting from

\Delta(y_i, \hat{y}_i(w), \hat{h}_i(w)) \leq \Delta(y_i, \hat{y}_i(w), \hat{h}_i(w))\left[1 + \langle w, \Psi(x_i, \hat{y}_i(w), \hat{h}_i(w))\rangle - \langle w, \Psi(x_i, y_i, h_i^*(w))\rangle\right].    (13)

This bound holds because, by construction, the quantity in the square brackets is not smaller than one, as can be verified by substituting the definitions of \hat{y}_i(w), \hat{h}_i(w) and h_i^*(w). We further upper bound the right-hand side, which equals the loss-augmented score evaluated at y = \hat{y}_i(w), h = \hat{h}_i(w), by the maximum over all labels:

\Delta(y_i, y, h)\left[1 + \langle w, \Psi(x_i, y, h)\rangle - \langle w, \Psi(x_i, y_i, h_i^*(w))\rangle\right]\Big|_{y=\hat{y}_i(w),\, h=\hat{h}_i(w)} \leq \max_{(y,h)\in\mathcal{Y}\times\mathcal{H}} \Delta(y_i, y, h)\left[1 + \langle w, \Psi(x_i, y, h)\rangle - \langle w, \Psi(x_i, y_i, h_i^*(w))\rangle\right].    (14)

Substituting this bound into (2) yields (11). Note that \hat{y}_i(w) and \hat{h}_i(w) are defined as the maximiser of \langle w, \Psi(x_i, y, h)\rangle alone (see Eq. 1), while the energy maximised in (14) depends on the loss \Delta(y_i, y, h) as well.
[Figure 2, left panel: precision-recall curves on VOC 2007 left-right bicycles, with curves labeled MISC and LEFT; legend (AP): baseline 22.9, + bias 33.7, + test w/ trunc. 55.7, + train w/ trunc. 58.6, + empty cells count 60.0, + transformations 63.0. Right panel: sub-figures (a) and (b).]
Figure 2: Effect of different model components. The left panel evaluates the effect of different components of the model on the task of learning a detector for the left-right facing PASCAL VOC 2007 bicycles. In order of increasing AP (see legend): baseline model (see text); bias term (Sect. 2.5); detecting truncated instances, training on truncated instances, and counting the truncated cells as a feature (Sect. 2.2); searching over small translation, scaling, rotation, skew (Sect. 2.1). Right panel: (a) original VOC specified bounding box and aspect; (b) alignment and aspect after pose inference: in addition to translation and scale, our templates are searched over a set of small perturbations. This is implemented efficiently by breaking the template into blocks (dashed boxes) and rearranging those. Note that blocks can partially overlap to capture foreshortening. The ground truth pose parameters are approximate because they are obtained from bounding boxes (a). The algorithm improves their estimate as part of inference of the latent variables h. Notice that not only translation, scale, and small jitters are re-estimated, but also the aspect subclass can be updated. In the example, an instance originally labeled as misc (a) is reassigned to the left aspect (b).
4 Experiments
Data. As training data we use the PASCAL VOC annotations. Each object instance is labeled
with a bounding box and a categorical aspect variable (left, right, front, back, undefined). From
the bounding box we estimate translation and scale of the object, and we use aspect to select one
of multiple HOG templates. Symmetric aspects (e.g. left and right) are mapped to the same HOG
template as suggested in Sect. 2.3.
While our model is capable of handling truncations correctly, truncated bounding boxes provide a poor estimate of the pose of the object, which prevents using such objects for training. While we could simply avoid training with truncated boxes (or generate artificially truncated examples whose pose would be known), we prefer to exploit all the available training data. To do this, we manually augment all truncated PASCAL VOC annotations with an additional "physical" bounding box. The purpose is to provide a better initial guess for the object pose, which is then refined by optimizing over the latent variables.
Training and testing speed. Performing inference with the model requires evaluating \langle w, \Psi(x, p) \rangle for all possible poses p. This means matching a HOG template O(WHTA) times, where W \times H is the dimension of the image in cells, T the number of perturbations (Sect. 2.1), and A the number of aspects (Sect. 2.3).¹ For a given scale and aspect, matching the template for all locations
reduces to convolution. Moreover, by breaking the template into blocks (Fig. 2) and pre-computing
the convolution with each of those, we can quickly compute perturbations of the template. All in
all, detection requires roughly 30 seconds per image with the full model and four aspects. The
cutting plane algorithm used to minimize (12) requires at each iteration solving problems similar
to inference. This can be easily parallelized, greatly improving training speed. To detect additional
objects at test time we run inference multiple times, but excluding all detections that overlap by
more than 20% with any previously detected object.
¹ Note that we do not multiply by the number S of scales as at each successive scale W and H are reduced geometrically.
Figure 3: Top row. Examples of detected bicycles. The dashed boxes are bicycles that were detected
with or without truncation support, while the solid ones were detectable only when truncations were
considered explicitly. Bottom row. Some cases of correct detections despite extreme truncation for
the horse class.
Benefit of various model components. Fig. 2 shows how the model improves with the successive introduction of the various features of the model. The example is carried out on the VOC left-right facing bicycle, but similar effects were observed for other categories. The baseline model uses only the HOG template without bias, truncations, nor pose refinement (Sect. 2.1). The two most significant improvements are (a) the ability of detecting truncated instances (+22% AP, Fig. 3) and (b) the addition of the bias (+11% AP). Training with the truncated instances, adding the number of occluded HOG cells as a feature component, and adding jitters beyond translation and scaling all yield an improvement of about +2-3% AP.
Full model. The model was trained to detect the class bicycle in the PASCAL VOC 2007 data, using five templates, initialized from the PASCAL labeling left, right, front/rear, other. Initially, the pose refinement h is null and the alternation optimization algorithm is iterated five times to estimate the model w and refinement h. The detector is then tested on all the test data, enabling multiple detections per image, and computing average precision as specified by [3]. The AP score was 47%.
By comparison, the state of the art for this category [8] achieves 56%. The experiment was repeated
for the class horse, obtaining a score of 40%. By comparison, the state of the art on this category,
our MKL sliding window classifier [10], achieves 51%. Note that the proposed method uses only
HOG, while the others use a combination of at least two features. However [4], using only HOG but
a flexible part model, also achieves superior results. Further experiments are needed to evaluate the
combined benefits of truncation/occlusion handling (proposed here), with multiple features [10] and
flexible parts [4].
Conclusions
We have shown how structured output regression with latent variables provides an integrated and effective solution to many problems in object detection: truncations, pose variability, multiple objects, and multiple aspects can all be dealt with in a consistent framework.
While we have shown that truncated examples can be used for training, we had to manually extend the PASCAL VOC annotations for these cases to include rough "physical" bounding boxes (as a hint for the initial pose parameters). We plan to further extend the approach to infer the pose of truncated examples in a fully automatic fashion (weak supervision).
Acknowledgments. We are grateful for discussions with Matthew Blaschko. Funding was provided by the EU under ERC grant VisRec no. 228180; the RAEng, Microsoft, and ONR MURI N00014-07-1-0182.
References
[1] M. B. Blaschko and C. H. Lampert. Learning to localize objects with structured output regression. In Proc. ECCV, 2008.
[2] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Proc. CVPR,
2005.
[3] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2008 (VOC2008) Results. http://www.pascal-network.org/challenges/VOC/voc2008/workshop/index.html, 2008.
[4] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. PAMI, 2009.
[5] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 264-271, June 2003.
[6] K. Hotta. Robust face detection under partial occlusion. In Proceedings of the IEEE International Conference on Image Processing, 2004.
[7] Y. Y. Lin, T. L. Liu, and C. S. Fuh. Fast object detection with occlusions. In Proceedings of the European Conference on Computer Vision, pages 402-413. Springer-Verlag, May 2004.
[8] P. Schnitzspan, M. Fritz, S. Roth, and B. Schiele. Discriminative structure learning of hierarchical representations for object detection. In Proc. CVPR, 2009.
[9] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for
interdependent and structured output spaces. In Proc. ICML, 2004.
[10] A. Vedaldi, V. Gulshan, M. Varma, and A. Zisserman. Multiple kernels for object detection.
In Proc. ICCV, 2009.
[11] O. Williams, A. Blake, and R. Cipolla. The variational Ising classifier (VIC) algorithm for coherently contaminated data. In Proc. NIPS, 2005.
[12] J. Winn and J. Shotton. The Layout Consistent Random Field for Recognizing and Segmenting
Partially Occluded Objects. In Proc. CVPR, 2006.
[13] C.-N. J. Yu and T. Joachims. Learning structural SVMs with latent variables. In Proc. ICML,
2009.
2,997 | 3,716 | Time-Varying Dynamic Bayesian Networks
Le Song, Mladen Kolar and Eric P. Xing
School of Computer Science, Carnegie Mellon University
{lesong, mkolar, epxing}@cs.cmu.edu
Abstract
Directed graphical models such as Bayesian networks are a favored formalism
for modeling the dependency structures in complex multivariate systems such as
those encountered in biology and neural science. When a system is undergoing dynamic transformation, temporally rewiring networks are needed for capturing the dynamic causal influences between covariates. In this paper, we propose time-varying dynamic Bayesian networks (TV-DBN) for modeling the structurally varying directed dependency structures underlying non-stationary biological/neural time series. This is a challenging problem due to the non-stationarity and sample scarcity of time series data. We present a kernel reweighted ℓ1-regularized
auto-regressive procedure for this problem which enjoys nice properties such as
computational efficiency and provable asymptotic consistency. To our knowledge,
this is the first practical and statistically sound method for structure learning of TVDBNs. We applied TV-DBNs to time series measurements during yeast cell cycle
and brain response to visual stimuli. In both cases, TV-DBNs reveal interesting
dynamics underlying the respective biological systems.
1 Introduction
Analysis of biological networks has led to numerous advances in understanding the organizational
principles and functional properties of various biological systems, such as gene regulatory systems [1] and central nervous systems [2]. However, most such results are based on static networks,
that is, networks with invariant topology over a given set of biological entities. A major challenge in
systems biology is to understand and model, quantitatively, the dynamic topological and functional
properties of biological networks. We refer to these time or condition specific biological circuitries
as time-varying networks or structural non-stationary networks, which are ubiquitous in biological
systems. For example, (i) over the course of a cell cycle, there may exist multiple biological "themes" that determine the functions of each gene and their regulatory relations, and these "themes" are dynamic
and stochastic. As a result, the molecular networks at each time point are context-dependent and
can undergo systematic rewiring rather than being invariant over time [3]. (ii) The emergence of
a unified cognitive moment relies on the coordination of scattered mosaics of functionally specialized brain regions. Neural assemblies, distributed local networks of neurons transiently linked by
dynamic connections, enable the emergence of coherent behaviour and cognition [4].
A key technical hurdle preventing us from an in-depth investigation of the mechanisms that drive
temporal biological processes is the unavailability of serial snapshots of time-varying networks underlying biological processes. Current technology does not allow for experimentally determining a
series of time specific networks for a realistic dynamic biological system. Usually, only time series
measurements of the activities of the nodes can be made, such as microarray, EEG or fMRI. Our
goal is to recover the latent time-varying networks underlying biological processes, with temporal
resolution up to every single time point based on time series measurements of the nodal states. Recently, there has been a surge of interests along this direction [5, 6, 7, 8, 9, 10]. However, most
existing approaches are computationally expensive, making large scale genome-wide reverse engineering nearly infeasible. Furthermore, these methods also lack formal statistical characterization of
1
the estimation procedure. For instance, non-stationary dynamic Bayesian networks are introduced
in [9], where the structures are learned via MCMC sampling; such an approach is not likely to scale
up to more than 1000 nodes and without a regularization term it is also prone to overfitting when
the dimension of the data is high but the number of observations is small. More recent efforts have
focused on efficient kernel reweighted or total-variation penalized sparse structure recovery methods for undirected time-varying networks [10, 11, 12], which possess both attractive computational
schemes and rigorous statistical consistency results. However, what has not been addressed so far is
how to recover directed time-varying networks. Our current paper advances in this direction.
More specifically, we propose time-varying dynamic Bayesian networks (TV-DBN) for modeling
the directed time-evolving network structures underlying non-stationary biological time series. To
make this problem statistically tractable, we rely on the assumption that the underlying network
structures are sparse and vary smoothly across time. We propose a kernel reweighted ℓ1-regularized
auto-regressive approach for learning this sequence of networks. Our approach has the following attractive properties: (i) The aggregation of observations from adjacent time points by kernel reweighting greatly alleviates the statistical problem of sample scarcity when the networks can change at
each time point whereas only one or a few time series replicates are available. (ii) The problem
of structural estimation for a TV-DBN decomposes into a collection of simpler and atomic structural learning problems. We can choose from a battery of highly scalable ℓ1-regularized least-squares
solvers for learning each structure. (iii) We can formally characterize the conditions under which our
estimation procedure is structurally consistent: as time series are sampled in increasing resolution,
our algorithm can recover the true structure of the underlying TV-DBN with high probability.
It is worth emphasizing that our approach is very different from earlier approaches, such as the
structure learning algorithms for dynamic Bayesian networks [13], which learn time-homogeneous
dynamic systems with fixed node dependencies, or approaches which start from an a priori static
network and then trace time-dependent activities [3]. The Achilles? heel of this latter approach
is that edges that are transient over a short period of time may be missed by the summary static
network in the first place. Furthermore, our approach is also different from change point based algorithms [14, 8] which first segment time series and then fit an invariant structure to each segment.
These approaches can only recover piece-wise stationary models rather than constantly varying networks. In our experiments, we demonstrate the advantange of TV-DBNs using synthetic networks.
We also apply TV-DBNs to real world datasets: a gene expression dataset measured during yeast
cell cycle; and an EEG dataset recorded during a motor imagination task. In both cases, TV-DBNs
reveal interesting time-varying causal structures of the underlying biological systems.
2 Preliminary
We concern ourselves with stochastic processes in time or space domains, such as the dynamic control of gene expression during cell cycle, or the sequential activation of brain areas during cognitive
decision making, of which the state of a variable at one time point is determined by the states of a
set of variables at previous time points. Models describing stochastic temporal processes can be naturally represented as dynamic Bayesian networks (DBN) [15]. Taking the transcriptional regulation of gene expression as an example, let X^t := (X_1^t, \ldots, X_p^t)^\top \in \mathbb{R}^p be a vector representing the expression levels of p genes at time t; a stochastic dynamic process can be modeled by a "first-order Markovian transition model" p(X^t | X^{t-1}), which defines the probabilistic distribution of gene expressions at time t given those at time t - 1. Under this assumption, the likelihood of the observed expression levels of these genes over a time series of T steps can be expressed as:

p(X^1, \ldots, X^T) = p(X^1) \prod_{t=2}^{T} p(X^t | X^{t-1}) = p(X^1) \prod_{t=2}^{T} \prod_{i=1}^{p} p(X_i^t | X_{\pi_i}^{t-1}),    (1)

where we assume that the topology of the networks is specified by a set of regulatory relations X_{\pi_i}^{t-1} := \{X_j^{t-1} : X_j^{t-1} \text{ regulates } X_i^t\}, and hence the transition model p(X^t | X^{t-1}) factors over individual genes, i.e., \prod_i p(X_i^t | X_{\pi_i}^{t-1}). Each p(X_i^t | X_{\pi_i}^{t-1}) can be viewed as a regulatory gate function that takes multiple covariates (regulators) and produces a single response.
A simple form of the transition model p(X^t | X^{t-1}) in a DBN is a linear dynamics model:

X^t = A \cdot X^{t-1} + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2 I),    (2)

where A \in \mathbb{R}^{p \times p} is a matrix of coefficients relating the expressions at time t - 1 to those of the next time point, and \epsilon is a vector of isotropic zero-mean Gaussian noise with variance \sigma^2. In this case, the gate function that defines the conditional distribution p(X_i^t | X_{\pi_i}^{t-1}) can be expressed as a univariate Gaussian, i.e., p(X_i^t | X_{\pi_i}^{t-1}) = \mathcal{N}(X_i^t; A_{i\cdot} X^{t-1}, \sigma^2), where A_{i\cdot} denotes the i-th row of the matrix A. This model is also known as an auto-regressive model.
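For intuition, a linear DBN as in (2) can be simulated directly; this NumPy sketch is ours and only mirrors the generative assumption.

```python
import numpy as np

def simulate_linear_dbn(A, T, sigma=1.0, seed=0):
    """Draw X^1, ..., X^T from X^t = A X^{t-1} + eps, eps ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    p = A.shape[0]
    X = np.zeros((T, p))
    X[0] = rng.normal(size=p)
    for t in range(1, T):
        X[t] = A @ X[t - 1] + sigma * rng.normal(size=p)
    return X
```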
The major reason for favoring DBNs over standard Bayesian networks (BN) or undirected graphical models is their enhanced semantic interpretability. An edge in a BN does not necessarily imply causality due to the Markov equivalence of different edge configurations in the network [16]. In DBNs (of the type defined above), all directed edges only point from time t - 1 to t, which bears a natural causal implication and is more likely to suggest regulatory relations. The auto-regressive model in (2) also offers an elegant formal framework for consistent estimation of the structures of DBNs; we can read off the edges between variables in X^{t-1} and X^t by simply identifying the non-zero entries in the transition matrix A. For example, the non-zero entries of A_{i\cdot} represent the set of regulators X_{\pi_i} that directly lead to a response on X_i.
Contrary to what the name dynamic Bayesian networks may suggest, DBNs are time-invariant models and the underlying network structures do not change over time. That is, the dependencies between variables in X^{t-1} and X^t are fixed, and both p(X^t | X^{t-1}) and A are invariant over time. The term "dynamic" only means that the DBN can model dynamical systems. In the sequel, we will present a new formalism where the structures of DBNs are time-varying rather than invariant.
3 A New Formalism: Time-Varying Dynamic Bayesian Networks
We will focus on recovering the directed time-varying network structure (or the locations of non-zero entries in A) rather than the exact edge values. This is related to the structure estimation problems studied in [11, 12], but in our case for auto-regressive models (and hence directed networks). Structure estimation results in sparse models for easy interpretation, but it is statistically more challenging than the value estimation problem. This is also different from estimating a non-stationary model in the conventional sense, where one is interested in recovering the exact values of the varying coefficients [17, 18]. To make this distinction clear, we use the following 3 examples:

B_1 = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}, \quad B_2 = \begin{pmatrix} 0 & 0.1 & 0 \\ 0 & 0 & 3 \\ 0 & 0 & 0 \end{pmatrix}, \quad B_3 = \begin{pmatrix} 0 & 1 & 0.1 \\ 0 & 0 & 1.1 \\ 0 & 0.1 & 0 \end{pmatrix}.    (3)

Matrices B_1 and B_2 encode the same graph structure, since the locations of their non-zero entries are exactly the same. Although B_1 is closer to B_3 than to B_2 in terms of matrix values (e.g. measured in Frobenius norm), they encode very different graph structures.
Formally, let the graph G^t = (V, E^t) represent the conditional independence relations between the components of random vectors X^{t-1} and X^t. The vertex set V is a common set of variables underlying X^{1:T}, i.e., each node in V corresponds to a sequence of variables X_i^{1:T}. The edge set E^t \subseteq V \times V contains directed edges from components of X^{t-1} to those of X^t; an edge (i, j) \notin E^t if and only if X_i^t is conditionally independent of X_j^{t-1} given the rest of the variables in the model. Due to the time-varying nature of the networks, the transition model p^t(X^t | X^{t-1}) in (1) becomes time dependent. In the case of the auto-regressive DBN in (2), its time-varying extension becomes:

X^t = A^t \cdot X^{t-1} + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2 I),    (4)

and our goal is to estimate the non-zero entries in the sequence of time-dependent transition matrices \{A^t\} (t = 1 \ldots T). The directed edges E^t := E^t(A^t) in the network G^t associated with each A^t can be recovered via E^t = \{(i, j) \in V \times V \mid i \neq j, A_{ij}^t \neq 0\}.
4 Estimating Time-Varying DBN
Note that if we follow the naive assumption that each temporal snapshot is a completely different
network, the task of jointly estimating {At } by maximizing the log-likelihood would be statistically
impossible because the estimator would suffer from extremely high variance due to sample scarcity.
Therefore, we make a statistically tractable yet realistic assumption that the underlying network
structures are sparse and vary smoothly across time; and hence temporally adjacent networks are
likely to share common edges than temporally distal networks.
3
Overall, we have designed a procedure that decomposes the problem of estimating the time-varying
networks along two orthogonal axes. The first axis is along the time, where we estimate the network
for each time point separately by reweighting the observations accordingly; and the second axis is
along the set of genes, where we estimate the neighborhood for each gene separately and then join
these neighborhoods to form the overall network. One benefit of such decomposition is that the
estimation problem is reduced to a set of atomic optimizations, one for each node i (i = 1 \ldots |V|) at each time point t^* (t^* = 1 \ldots T):

\hat{A}_{i\cdot}^{t^*} = \operatorname*{argmin}_{A_{i\cdot}^{t^*} \in \mathbb{R}^{1 \times p}} \; \frac{1}{T}\sum_{t=1}^{T} w^{t^*}(t)\,(x_i^t - A_{i\cdot}^{t^*} x^{t-1})^2 + \lambda \left\| A_{i\cdot}^{t^*} \right\|_1,    (5)

where \lambda is a parameter for the ℓ1-regularization term, which controls the number of non-zero entries in the estimated \hat{A}_{i\cdot}^{t^*}, and hence the sparsity of the networks; w^{t^*}(t) is the weighting of an observation from time t when we estimate the network at time t^*. More specifically, it is defined as w^{t^*}(t) = K_h(t - t^*) / \sum_{t=1}^{T} K_h(t - t^*), where K_h(\cdot) = K(\cdot/h) is a symmetric nonnegative kernel function and h is the kernel bandwidth. We use a Gaussian RBF kernel, K_h(t) = \exp(-t^2/h), in our later experiments. Note that multiple measurements at the same time point are considered as i.i.d. observations and can be trivially handled by assigning them the same weights.
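The reweighting itself is one line of NumPy; this sketch follows the definition above with the Gaussian RBF kernel.

```python
import numpy as np

def kernel_weights(t_star, T, h):
    """Normalized weights w^{t*}(t) = K_h(t - t*) / sum_t K_h(t - t*)
    for t = 1..T, with Gaussian kernel K_h(t) = exp(-t^2 / h)."""
    t = np.arange(1, T + 1)
    k = np.exp(-((t - t_star) ** 2) / h)
    return k / k.sum()
```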
The objective defined in (5) is essentially a weighted regression problem. The square loss function
is due to the fact that we are fitting a linear model with uncorrelated Gaussian noise. Two other
key components of the objective are: (i) a kernel reweighting scheme for aggregating observations
across time; and (ii) an `1 -regularization for sparse structure estimation. The first component originates from our assumption that the structural changes of the network vary smoothly across time. This
assumption allows us to borrow information across time by reweighting the observations from different time points and then treating them as if they were i.i.d. observations. Intuitively, the weighting
should place more emphasis on observations at or near time point t? with weights becoming smaller
as observations move further away from time point t? . The second component is to promote sparse
structure and avoid model overfitting. This is also consistent with the biological observation that networks underlying biological processes are parsimonious in structure. For example, a transcription
factor only controls a small fraction of target genes at a particular time point or under a specific condition [19]. It is well known that ℓ1-regularized least-squares linear regression has a parsimonious property and exhibits model-selection consistency (i.e., recovers the set of true non-zero regression coefficients asymptotically) in noisy settings even when p \gg T [20].
Note that our procedure can also be easily extended to learn the structure of auto-regressive models of higher order D: X^t = \sum_{d=1}^{D} A^t(d) \cdot X^{t-d} + \epsilon, \ \epsilon \sim \mathcal{N}(0, \sigma^2 I). The change we need to make is to incorporate the higher order auto-regressive coefficients in the square loss function, i.e., (x_i^t - \sum_{d=1}^{D} A_{i\cdot}^{t^*}(d)\, x^{t-d})^2, and penalize the ℓ1-norms of these A_{i\cdot}^{t^*}(d) correspondingly.
5 Optimization
Estimating time-varying networks using the decomposition scheme above requires solving a collection of optimization problems in (5). In a genome-wide reverse engineering task, there can be tens
of thousands of genes and hundreds of time points, so one can easily have a million optimization
problems. Therefore, it is essential to use an efficient algorithm for solving the atomic optimization
problem in (5), which can be trivially parallelized for each gene at each time point.
Instead of solving the form of the optimization problem in (5), we will push the weighting w^{t^*}(t) into the square loss function by scaling the covariates and response variables by \sqrt{w^{t^*}(t)}, i.e. \tilde{x}_i^t \leftarrow \sqrt{w^{t^*}(t)}\, x_i^t and \tilde{x}^{t-1} \leftarrow \sqrt{w^{t^*}(t)}\, x^{t-1}. After this transformation, the optimization problem becomes a standard ℓ1-regularized least-squares problem which can be solved via a battery of highly scalable and specialized solvers, such as the shooting algorithm [21]. The shooting algorithm is a simple, straightforward and fast algorithm that iteratively solves a system of nonlinear equations related to the optimality condition of problem (5): \frac{2}{T}\sum_{t=1}^{T} (A_{i\cdot}^{t^*} \tilde{x}^{t-1} - \tilde{x}_i^t)\, \tilde{x}_j^{t-1} = -\lambda\, \operatorname{sign}(A_{ij}^{t^*}) \ (\forall j = 1 \ldots p). At each iteration of the shooting algorithm, one entry of A_{i\cdot}^{t^*} is updated by holding all other entries fixed. Overall, our procedure for estimating time-varying networks is summarized in Algorithm 1, which uses the shooting algorithm as its key building block (steps 7-10).
Algorithm 1: Procedure for Estimating Time-Varying DBN
Input: Time series {x^1, \ldots, x^T}, regularization parameter \lambda and kernel parameter h.
Output: Time-varying networks {A^1, \ldots, A^T}.
1  begin
2      Introduce variable A^0 and randomly initialize it
3      for i = 1 \ldots p do
4          for t^* = 1 \ldots T do
5              Initialize: A_{i\cdot}^{t^*} \leftarrow A_{i\cdot}^{t^*-1}
6              Scale time series: \tilde{x}_i^t \leftarrow \sqrt{w^{t^*}(t)}\, x_i^t, \ \tilde{x}^{t-1} \leftarrow \sqrt{w^{t^*}(t)}\, x^{t-1} \ (t = 1 \ldots T)
7              while A_{i\cdot}^{t^*} has not converged do
8                  for j = 1 \ldots p do
9                      Compute: S_j \leftarrow \frac{2}{T}\sum_{t=1}^{T} (\sum_{k \neq j} A_{ik}^{t^*} \tilde{x}_k^{t-1} - \tilde{x}_i^t)\, \tilde{x}_j^{t-1}, \quad b_j \leftarrow \frac{2}{T}\sum_{t=1}^{T} \tilde{x}_j^{t-1} \tilde{x}_j^{t-1}
10                     Update: A_{ij}^{t^*} \leftarrow (\operatorname{sign}(S_j - \lambda)\lambda - S_j)/b_j \ \text{if} \ |S_j| > \lambda, \ \text{otherwise} \ 0
11 end
In step 5, we use a warm start for each atomic optimization problem: since the networks vary smoothly across time, we can use \hat{A}_{i\cdot}^{t^*-1} as a good initialization for A_{i\cdot}^{t^*} for further speedup.
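As a concrete (unoptimized) illustration, the inner loop of Algorithm 1 (steps 6-10) can be written as follows; variable names and the convergence tolerance are our choices.

```python
import numpy as np

def shooting_row(X_prev, x_i, w, lam, a_init, n_sweeps=100, tol=1e-6):
    """Solve the atomic problem (5) for one node i at one time point t*
    by coordinate descent (the shooting algorithm).

    X_prev: (T, p) rows are x^{t-1};  x_i: (T,) responses x_i^t;
    w:      (T,) kernel weights w^{t*}(t);  a_init: (p,) warm start.
    """
    sw = np.sqrt(w)
    Xs = X_prev * sw[:, None]      # scaled covariates  ~x^{t-1}
    xs = x_i * sw                  # scaled responses   ~x_i^t
    T, p = Xs.shape
    a = a_init.astype(float).copy()
    b = (2.0 / T) * np.sum(Xs * Xs, axis=0)
    for _ in range(n_sweeps):
        a_old = a.copy()
        for j in range(p):
            r = Xs @ a - a[j] * Xs[:, j] - xs   # residual without coordinate j
            S = (2.0 / T) * (r @ Xs[:, j])
            a[j] = (np.sign(S) * lam - S) / b[j] if abs(S) > lam else 0.0
        if np.max(np.abs(a - a_old)) < tol:
            break
    return a
```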
6 Statistical Properties
In this section, we study the statistical consistency of the estimation procedure in Section 4. Our analysis is different from the consistency results presented by [11] on recovering time-varying undirected graphical models. Their analysis deals with Frobenius norm consistency, which is a weaker result than the structural consistency we pursue here. Our structural consistency result for the TV-DBN estimation procedure follows the proof strategy of [20]; however, the analysis is complicated by two major factors. First, time series observations are very often non-i.i.d.: current observations may depend on past history. Second, we are modeling non-stationary processes, where we need to deal with an additional bias term that arises due to the locally stationary approximation to non-stationarity. In the following, we state our assumptions and theorem, but leave the detailed proof of this theorem for a full version of the paper (a sketch of the proof can be found in the appendix).
Theorem 1 Assume that the conditions below hold:
1. Elements of A^t are smooth functions with bounded second derivatives, i.e. there exists a constant L > 0 s.t. |\frac{\partial}{\partial t} A_{ij}^t| < L and |\frac{\partial^2}{\partial t^2} A_{ij}^t| < L.
2. The minimum absolute value of non-zero elements of A^t is bounded away from zero at observation points, and this bound tends to zero as we observe more and more samples, i.e., a_{\min} := \min_{t \in \{1/T, 2/T, \ldots, 1\}} \min_{i \in [p],\, j \in S_i^t} |A_{ij}^t| > 0.
3. Let \Sigma_t = \mathbb{E}[X^t (X^t)^\top] = [\sigma_{ij}(t)]_{i,j=1}^{p} and let S_i^t denote the set of non-zero elements of the i-th row of the matrix A^t, i.e. S_i^t = \{j \in [p] : A_{ij}^t \neq 0\}. Assume that there exists a constant d \in (0, 1] s.t. \max_{j \in S_i^t,\, k \neq j} |\sigma_{jk}(t)| \leq \frac{d}{s}, \forall i \in [p], t \in [0, 1], where s is an upper bound on the number of non-zero elements, i.e. s = \max_{t \in [0,1]} \max_{i \in [p]} |S_i^t|.
4. The kernel K(\cdot) : \mathbb{R} \mapsto \mathbb{R} is a symmetric function and has bounded support on [0, 1]. There exists a constant M_K s.t. \max_{x \in \mathbb{R}} |K(x)| \leq M_K and \max_{x \in \mathbb{R}} K(x)^2 \leq M_K.

Let the regularization parameter scale as \lambda = O(\sqrt{(\log p)/(Th)}), and let the minimum absolute non-zero entry a_{\min} of A^t be sufficiently large (a_{\min} \geq 2\lambda). If h = O(T^{-1/3}) and s\sqrt{(\log p)/(Th)} = o(1), then

P[\operatorname{supp}(\hat{A}^{t^*}) = \operatorname{supp}(A^{t^*})] \to 1, \quad T \to \infty, \quad \forall t^* \in [0, 1].    (6)
7 Experiments
To the best of our knowledge, this is the first practical method for structure learning of non-stationary
DBNs. Thus we mainly compare with static DBN structure learning methods. The goal is to demonstrate the advantage of TV-DBNs for modeling time-varying structures of non-stationary processes
which are ignored by traditional approaches. We conducted 3 experiments using synthetic data,
gene expression data and EEG signals. In these experiments, TV-DBNs either better recover the
underlying networks, or provide better explanatory power for the underlying biological processes.
Synthetic Data. In this experiment, we generate synthetic time series using first order auto-regressive models with smoothly varying model structures. More specifically, we first generate 8 different anchor transition matrices A^{t_1} \ldots A^{t_8}, each of which corresponds to an Erdős–Rényi random graph of node size p = 50 and average indegree of 2 (we have also experimented with p = 75 and 100, which provide similar results). We then evenly space these 8 anchor matrices and interpolate a suitable number of intermediate matrices to match the number of observations T. Due to the interpolation, the average indegree of each node is around 4. With the sequence of \{A^t\} (t = 1 \ldots T), we simulate the time series according to equation (4) with noise variance \sigma^2 = 1. We then study the behavior of TV-DBNs and static DBNs [22] in recovering the underlying varying networks as we increase the number of observations T. We also compare with a piecewise constant DBN that estimates a static network for each segment obtained from change point detection [14].
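A sketch of this generator in NumPy; the edge-weight scale (0.3) is an illustrative choice not specified in the text.

```python
import numpy as np

def interpolated_networks(p=50, n_anchors=8, T=700, avg_indegree=2, seed=0):
    """Sample sparse Erdos-Renyi anchor matrices, then linearly
    interpolate between consecutive anchors to obtain {A^t}, t = 1..T."""
    rng = np.random.default_rng(seed)
    anchors = [(rng.random((p, p)) < avg_indegree / p)
               * rng.normal(scale=0.3, size=(p, p)) for _ in range(n_anchors)]
    pos = np.linspace(0.0, n_anchors - 1.0, T)
    As = []
    for x in pos:
        k = min(int(x), n_anchors - 2)  # index of the left anchor
        lam = x - k                     # interpolation weight in [0, 1]
        As.append((1.0 - lam) * anchors[k] + lam * anchors[k + 1])
    return As
```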
For the TV-DBN, we choose the bandwidth parameter h of the Gaussian kernel according to the spacing between two adjacent anchor matrices (T/7) such that \exp(-\frac{T^2}{49h}) = \exp(-1). For all methods, we choose the regularization parameter such that the resulting networks have an average indegree of 4. We evaluate the performance using an F1 score, which is the harmonic mean of precision and recall scores in retrieving the true time-varying network edges.
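The edge-recovery F1 used here can be computed as follows (our sketch).

```python
import numpy as np

def edge_f1(A_true, A_hat):
    """F1 of recovering the supports supp(A^t) across all time points."""
    tp = fp = fn = 0
    for At, Ah in zip(A_true, A_hat):
        te, pe = (At != 0), (Ah != 0)
        tp += np.sum(te & pe)
        fp += np.sum(~te & pe)
        fn += np.sum(te & ~pe)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0
```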
We can see that estimating a static DBN or a piecewise constant DBN does not provide a good estimate of the network structures (Figure 1). In contrast, the TV-DBN leads to a significantly higher F1 score, and its performance also benefits quickly from increasing the number of observations. Note that these results are not surprising since time-varying networks simply fit better with the data generating process. As time-varying networks occur often in biological systems, we expect TV-DBNs will be useful for studying biological systems.

Figure 1: F1 score of estimating time-varying networks for different methods.
Yeast Gene Regulatory Networks. In this experiment, we will reverse engineer the time varying
gene regulatory networks from time series of gene expression measured across two yeast cell cycles.
A yeast cell cycle is divided into four stages: S phase for DNA synthesis, M phase for mitosis,
and G1 and G2 phase separating S and M phases. We use two time series (alpha30 and alpha38)
from [23] which are technical replicates of each other with a sampling interval of 5 minutes and
a total of 25 time points across two yeast cell cycles. We consider a set of 3626 genes which
are common to both arrays. We choose the bandwidth parameter h such that the weighting decay
to exp(?1) for half of a cell cycle, i.e. exp(?62 /h) = exp(?1). We choose the regularization
parameter such that the sparsity of the networks are around 0.01.
During the cell cycle of yeast, there exist multiple underlying "themes" that determine the functionalities of each gene and their relationships to each other, and such themes are dynamic and
stochastic. As a result, the gene regulatory networks at each time point are context-dependent and
can undergo systematic rewiring, rather than being invariant over time. A summary of the estimated
time-varying networks are visualized in Figure 2. We group genes according to 50 ontology groups.
We can see that the most active groups of genes are related to background processes such as cytoskeleton organization, enzyme regulator activity, ribosome activity. We can also spot transient
interactions, for instance, between genes related to site of polarized growth and nucleolus (time
point 18), and between genes related to ribosome and cellular homeostasis (time point 24). Note
that, although gene expressions are measured across two cell cycles, the values do not necessarily
exhibit periodic behavior. In fact, only a small fraction of yeast genes (less than 20%) has been
reported to exhibit cycling behavior [23].
[Figure 2 panels (a)-(m): estimated networks at time points t1, t2, t4, t6, t8, t10, t12, t14, t16, t18, t20, t22, t24.]
Figure 2: Interactions between gene ontological groups. The weight of an edge between two ontological
groups is the total number of connection between genes in the two groups. We thresholded the edge weight
such that only the dominant interactions are displayed.
Table 1: The number of enriched unique gene sets discovered by the static and time-varying networks respectively. Here we are interested in the recall score: the time-varying networks better model the biological system.

              DBN    TV-DBN
  TF            7        23
  Knockout      7        26
  Ontology     13        77

Next we study gene sets that are related to specific stages of the cell cycle where we expect to see periodic behavior. In particular, we obtain gene sets known to be related to the G1, S and S/G2 stages respectively.¹ We use interactivity, which is the total number of edges a group of genes is connected to, to describe the activity of each group of genes. Since the regulatory networks are directed, we can examine both indegree and outdegree separately for each gene set. In Figure 3(a)(b)(c), the interactivities of these genes indeed exhibit periodic behavior which corresponds well with their supposed functions in cell cycles.
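Given an estimated A^t, interactivity reduces to row/column support counts (a sketch; the edge convention follows Section 3, where a non-zero A^t_{ij} means gene j regulates gene i).

```python
import numpy as np

def degrees(A):
    """Indegree (regulators per target) and outdegree (targets per
    regulator) from the support of a transition matrix A^t."""
    support = (A != 0)
    indeg = support.sum(axis=1)   # non-zeros in row i: regulators of gene i
    outdeg = support.sum(axis=0)  # non-zeros in column j: targets of gene j
    return indeg, outdeg
```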
We also plot the histogram of indegree and outdegree (averaged across time) for the time-varying networks in Figure 3(d). We find that the outdegrees approximately follow a scale-free distribution with the largest outdegree reaching 90. This corresponds well with the biological observation that there are a few genes (regulators) that regulate a lot of other genes. The indegree distribution is very different from that of the outdegree, and it exhibits a clear peak between 5 and 6. This also corresponds well with biological observations that most genes are controlled by only a few regulators.
To further assess the modeling power of the time-varying networks and its advantage over static
network, we perform gene set enrichment studies. More specifically, we use three types of information to define the gene sets: transcription factor binding targets (TF), gene knockout signatures
(Knockout), and gene ontology (Ontology) groups [24]. We partition the genes in the time varying
networks at each time point into 50 groups using spectral clustering, and then test whether these
groups are enriched with genes from any predefined gene sets. We use a max-statistic and a 99%
confidence level for the test [25]. Table 1 indicates that time-varying networks are able to discover
more functional groups as defined by the genes sets than static networks as commonly used in biological literature. In the appendix, we also visualize the time spans of these active functional groups.
It can be seen that many of them are dynamic and transient, and not captured by a static network.
Brain Response to Visual Stimuli. In this experiment, we will explore the interactions between
brain regions in response to visual stimuli using TV-DBNs. We use the EEG dataset from [26]
where five healthy subjects (labeled "aa", "al", "av", "aw" and "ay" respectively) were required to
imagine body part movement based on visual cues in order to generate EEG changes. We focus our
¹ We obtain gene sets from http://genome-www.stanford.edu/cellcycle/data/rawdata/KnowGenes.doc.
Figure 3: (a) Genes specific to G1 phase are being regulated periodically; we can see that the average indegree of these genes increases during G1 stage and starts to decline right after the G1 phase. (b) S phase
specific genes periodically regulate other genes; we can see that the average outdegree of these genes peaks at
the end of S phase and starts to decline right after S phase. (c) The interactivity of S/G2 specific genes also shows a nice correspondence with their functional roles; we can see that the average outdegree increases till G2
phase and then starts to decline. (d) Indegree and outdegree distribution averaged over 24 time points.
[Figure 4 panels: rows for subjects "al" and "av"; columns at t = 1.0s, 1.5s, 2.0s, 2.5s.]
Figure 4: Temporal progression of brain interactions for subject "al" and BCI "illiterate" "av". The plots for the other 3 subjects can be found in the appendix. The dots correspond to EEG electrode positions in the 10-5 system.
analysis on trials related to right hand imagination, and signals in the window [1.0, 2.5] seconds after the visual cue is presented. We bandpass filter the data at 8-12 Hz to obtain EEG alpha activity. We further normalize each EEG channel to zero mean and unit variance, and estimate the time-varying networks for all 5 subjects using exactly the same regularization parameter and kernel bandwidth (h s.t. \exp(-(0.5)^2/h) = \exp(-1)). We tried a range of different regularization parameters, but obtained qualitatively similar results to Figure 4.
What is particularly interesting in this dataset is that subject "av" is BCI "illiterate"; he/she is unable to generate clear EEG changes during motor imagination. The estimated time-varying networks reveal that the brain interactions of subject "av" are particularly weak and the brain connectivity actually decreases as the experiment proceeds. In contrast, all other four subjects show an increased brain interaction as they engage in active imagination. In particular, these increased interactions occur between the visual and motor cortex. This dynamic coherence between visual and motor cortex corresponds nicely to the fact that subjects are consciously transforming visual stimuli into motor imaginations, which involves the motor cortex. It seems that subject "av" fails to perform such integration due to the disruption of brain interactions.
8 Conclusion
In this paper, we propose time-varying dynamic Bayesian networks (TV-DBN) for modeling the
varying network structures underlying non-stationary biological time series. We have designed a
simple and scalable kernel reweighted structural learning algorithm to make the learning possible.
Given the rapid advances in data collection technologies for biological systems, we expect that complex, high-dimensional, and feature rich data from complex dynamic biological processes, such as
cancer progression, immune responses, and developmental processes, will continue to grow. Thus,
we believe our new method is a timely contribution that can narrow the gap between imminent
methodological needs and the available data and offer deeper understanding of the mechanisms and
processes underlying biological networks.
Acknowledgments LS is supported by a Ray and Stephenie Lane Research Fellowship. EPX is supported
by grant ONR N000140910758, NSF DBI-0640543, NSF DBI-0546594, NSF IIS-0713379 and an Alfred P.
Sloan Research Fellowship. We also thank Grace Tzu-Wei Huang for helpful discussions.
References
[1] A. L. Barabasi and Z. N. Oltvai. Network biology: Understanding the cell's functional organization. Nature Reviews Genetics, 5(2):101-113, 2004.
[2] Francisco Varela, Jean-Philippe Lachaux, Eugenio Rodriguez, and Jacques Martinerie. The brainweb: Phase synchronization and large-scale integration. Nature Reviews Neuroscience, 2:229-239, 2001.
[3] N. Luscombe, M. Babu, H. Yu, M. Snyder, S. Teichmann, and M. Gerstein. Genomic analysis of regulatory network dynamics reveals large topological changes. Nature, 431:308-312, 2004.
[4] Eugenio Rodriguez, Nathalie George, Jean-Philippe Lachaux, Jacques Martinerie, Bernard Renault, and Francisco J. Varela. Perception's shadow: long-distance synchronization of human brain activity. Nature, 397(6718):430-433, 1999.
[5] M. Talih and N. Hengartner. Structural learning with time-varying components: Tracking the cross-section of financial time series. J. Royal Stat. Soc. B, 67(3):321-341, 2005.
[6] S. Hanneke and E. P. Xing. Discrete temporal models of social networks. In Workshop on Statistical
Network Analysis, ICML06, 2006.
[7] F. Guo, S. Hanneke, W. Fu, and E. P. Xing. Recovering temporally rewiring networks: A model-based
approach. In International Conference in Machine Learning, 2007.
[8] X. Xuan and K. Murphy. Modeling changing dependency structure in multivariate time series. In International Conference in Machine Learning, 2007.
[9] J. Robinson and A. Hartemink. Non-stationary dynamic bayesian networks. In Neural Information Processing Systems, 2008.
[10] Amr Ahmed and Eric P. Xing. Tesla: Recovering time-varying networks of dependencies in social and
biological studies. Proceeding of the National Academy of Sciences, in press, 2009.
[11] S. Zhou, J. Lafferty, and L. Wasserman. Time varying undirected graphs. In Computational Learning
Theory, 2008.
[12] L. Song, M. Kolar, and E. Xing. Keller: Estimating time-evolving interactions between genes. In Bioinformatics (ISMB), 2009.
[13] N. Friedman, M. Linial, I. Nachman, and D. Pe'er. Using Bayesian networks to analyze expression data. Journal of Computational Biology, 7:601-620, 2000.
[14] N. Dobigeon, J. Tourneret, and M. Davy. Joint segmentation of piecewise constant autoregressive processes by using a hierarchical model and a Bayesian sampling approach. IEEE Transactions on Signal Processing, 55(4):1251-1263, 2007.
[15] K. Kanazawa, D. Koller, and S. Russell. Stochastic simulation algorithms for dynamic probabilistic
networks. Uncertainty in AI, 1995.
[16] L. Getoor, N. Friedman, D. Koller, and B. Taskar. Learning probabilistic models with link uncertainty.
Journal of Machine Learning Research, 2002.
[17] R. Dahlhaus. Fitting time series models to nonstationary processes. Ann. Statist., 25:1-37, 1997.
[18] C. Andrieu, M. Davy, and A. Doucet. Efficient particle filtering for jump Markov systems: Application to time-varying autoregressions. IEEE Transactions on Signal Processing, 51(7):1762-1770, 2003.
[19] E. H. Davidson. Genomic Regulatory Systems. Academic Press, 2001.
[20] Florentina Bunea. Honest variable selection in linear and logistic regression models via `1 and `1 + `2
penalization. Electronic Journal of Statistics, 2:1153, 2008.
[21] W. Fu. Penalized regressions: the bridge versus the lasso. Journal of Computational and Graphical Statistics, 7(3):397-416, 1998.
[22] M. Schmidt, A. Niculescu-Mizil, and K Murphy. Learning graphical model structure using l1regularization paths. In AAAI, 2007.
[23] Tata Pramila, Wei Wu, Shawna Miles, William Noble, and Linda Breeden. The forkhead transcription factor Hcm1 regulates chromosome segregation genes and fills the S-phase gap in the transcriptional circuitry of the cell cycle. Genes and Development, 20:2266-2278, 2006.
[24] Jun Zhu, Bin Zhang, Erin Smith, Becky Drees, Rachel Brem, Leonid Kruglyak, Roger Bumgarner, and Eric E. Schadt. Integrating large-scale functional genomic data to dissect the complexity of yeast regulatory networks. Nature Genetics, 40:854-861, 2008.
[25] T. Nichols and A. Holmes. Nonparametric permutation tests for functional neuroimaging: a primer with examples. Human Brain Mapping, 15:1-25, 2001.
[26] G. Dornhege, B. Blankertz, G. Curio, and K. R. Müller. Boosting bit rates in non-invasive EEG single-trial classifications by feature combination and multi-class paradigms. IEEE Trans. Biomed. Eng., 51:993-1002, 2004.
Learning in Markov Random Fields using
Tempered Transitions
Ruslan Salakhutdinov
Brain and Cognitive Sciences and CSAIL
Massachusetts Institute of Technology
[email protected]
Abstract
Markov random fields (MRF's), or undirected graphical models, provide a powerful framework for modeling complex dependencies among random variables. Maximum likelihood learning in MRF's is hard due to the presence of the global normalizing constant. In this paper we consider a class of stochastic approximation algorithms of the Robbins-Monro type that use Markov chain Monte Carlo to do approximate maximum likelihood learning. We show that using MCMC operators based on tempered transitions enables the stochastic approximation algorithm to better explore highly multimodal distributions, which considerably improves parameter estimates in large, densely-connected MRF's. Our results on MNIST and NORB datasets demonstrate that we can successfully learn good generative models of high-dimensional, richly structured data that perform well on digit and object recognition tasks.
1 Introduction
Markov random fields (MRF's) provide a powerful tool for representing dependency structure between random variables. They have been successfully used in various application domains, including machine learning, computer vision, and statistical physics. The major limitation of MRF's is the need to compute the partition function, whose role is to normalize the joint distribution over the set of random variables. Maximum likelihood learning in MRF's is often very difficult because of
the hard inference problem induced by the partition function. When modeling high-dimensional,
richly structured data, the inference problem becomes much more difficult because the distribution
we need to infer is likely to be highly multimodal [17]. Multimodality is common in real-world
distributions, such as the distribution of natural images, in which an exponentially large number
of possible image configurations have extremely low probability, but there are many very different
images that occur with similar probabilities.
To date, there has been very little work addressing the problem of efficient learning in large, densely-connected MRF's that contain millions of parameters. While there exists a substantial literature on developing approximate learning algorithms for arbitrary MRF's, many of these algorithms are unlikely to work well when dealing with high-dimensional inputs. Methods that are based on replacing the likelihood term with some tractable approximations, such as pseudo-likelihood [1] or mixtures of random spanning trees [11], perform very poorly for densely-connected MRF's with strong dependency structures [3]. When using variational methods, such as loopy BP [18] and TRBP [16],
learning often gets trapped in poor local optima [5, 13]. MCMC-based algorithms, including MCMC
maximum likelihood estimators [3, 20] and Contrastive Divergence [4], typically suffer from high
variance (or strong bias) in their estimates, and can sometimes be painfully slow. The main problem
here is the inability of Markov chains to efficiently explore distributions with many isolated modes.
In this paper we concentrate on the class of stochastic approximation algorithms of the Robbins-Monro type that use MCMC to estimate the model's expected sufficient statistics, needed for maximum likelihood learning. We first show that using this class of algorithms allows us to make very
rapid progress towards finding a fairly good set of parameters, even for models containing millions
of parameters. Second, we show that using MCMC operators based on tempered transitions [9] enables the stochastic algorithm to better explore highly multimodal distributions, which considerably
improves parameter estimates, particularly in large, densely-connected MRF's. Our results on the
MNIST and NORB datasets demonstrate that the stochastic approximation algorithm together with
tempered transitions can be successfully used to model high-dimensional real-world distributions.
2 Maximum Likelihood Learning in MRF's
Let x ∈ X^K be a random vector on K variables, where each x_i takes on values in some discrete alphabet. Let φ(x) denote a D-dimensional vector of sufficient statistics, and let θ ∈ R^D be a vector of canonical parameters. The exponential family associated with sufficient statistics φ consists of
the following parameterized set of probability distributions:

$$ p(x; \theta) = \frac{p^*(x)}{Z(\theta)} = \frac{1}{Z(\theta)} \exp\!\big(\theta^\top \phi(x)\big), \qquad Z(\theta) = \sum_x \exp\!\big(\theta^\top \phi(x)\big), \tag{1} $$

where p*(·) denotes the unnormalized probability distribution and Z(θ) is the partition function.
For example, consider the following binary pairwise MRF. Given a graph G = (V, E) with vertices V and edges E, the probability distribution over a binary random vector x ∈ {0, 1}^K is given by:

$$ p(x; \theta) = \frac{1}{Z(\theta)} \exp\!\big(\theta^\top \phi(x)\big) = \frac{1}{Z(\theta)} \exp\!\Big( \sum_{(i,j) \in E} \theta_{ij} x_i x_j + \sum_{i \in V} \theta_i x_i \Big). \tag{2} $$
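To make the notation concrete, here is a minimal illustrative sketch (our own, not code from the paper; the graph, parameter values, and all names are made up) that evaluates Eqs. 1 and 2 by brute-force enumeration for a toy binary pairwise MRF:

```python
# Brute-force evaluation of Eqs. (1)-(2) for a toy binary pairwise MRF.
# Enumerating all 2^K states is exponential in K, which is exactly why
# exact maximum likelihood learning is intractable for large models.
import itertools
import numpy as np

K = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]            # graph G = (V, E)
rng = np.random.default_rng(0)
theta_pair = {e: rng.normal() for e in edges}       # theta_ij
theta_unary = rng.normal(size=K)                    # theta_i

def log_p_star(x):
    """Unnormalized log-probability theta^T phi(x), as in Eq. (2)."""
    x = np.asarray(x)
    pair = sum(theta_pair[e] * x[e[0]] * x[e[1]] for e in edges)
    return pair + theta_unary @ x

states = [np.array(s) for s in itertools.product([0, 1], repeat=K)]
log_Z = np.logaddexp.reduce([log_p_star(x) for x in states])  # log Z(theta)

def prob(x):
    return np.exp(log_p_star(x) - log_Z)            # Eq. (1)

assert np.isclose(sum(prob(x) for x in states), 1.0)
```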
The derivative of the log-likelihood for an observation x^0 with respect to the parameter vector θ can be obtained from Eq. 1:

$$ \frac{\partial \log p(x^0; \theta)}{\partial \theta} = \phi(x^0) - \mathbb{E}_{p(x;\theta)}[\phi(x)], \tag{3} $$

where E_P[·] denotes an expectation with respect to distribution P. Except for simple models, such as tree-structured graphs, exact maximum likelihood learning is intractable, because exact computation of the expectation E_{p(x;θ)}[·] takes time that is exponential in the treewidth of the graph¹.
One approach is to learn model parameters by maximizing the pseudo-likelihood (PL) [1], which replaces the likelihood with a tractable product of conditional probabilities:

$$ P_{\mathrm{PL}}(x^0; \theta) = \prod_{k=1}^{K} p(x_k^0 \mid x_{-k}^0; \theta), \tag{4} $$

where x^0_{-k} denotes the observation vector x^0 with x_k omitted. Pseudo-likelihood provides good estimates for weak dependence, when p(x_k | x_{-k}) ≈ p(x_k), or when it well approximates the true likelihood function. For MRF's with strong dependence structure, it is unlikely to work well.
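A sketch of the pseudo-likelihood objective of Eq. 4 (ours, continuing the toy MRF above) shows why it is tractable: each conditional only compares the two settings of a single variable, so Z(θ) cancels:

```python
# Pseudo-likelihood (Eq. 4) for the toy binary MRF defined above.
# Each conditional p(x_k | x_{-k}) compares the unnormalized
# probabilities of x_k = 1 and x_k = 0, so Z(theta) cancels out.
def log_pseudo_likelihood(x0):
    x0 = np.asarray(x0)
    total = 0.0
    for k in range(K):
        x_on, x_off = x0.copy(), x0.copy()
        x_on[k], x_off[k] = 1, 0
        total += log_p_star(x0) - np.logaddexp(log_p_star(x_on),
                                               log_p_star(x_off))
    return total

print(log_pseudo_likelihood([1, 0, 1, 0]))
```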
Another approach, called the MCMC maximum likelihood estimator (MCMC-MLE) [3], has been
shown to sometimes provide considerably better results than PL [3, 20]. The key idea is to use
importance sampling to approximate the model's partition function. Consider running a Markov chain to obtain samples x^{(1)}, x^{(2)}, ..., x^{(n)} from some fixed proposal distribution p(x; θ′)². These samples can be used to approximate the log-likelihood ratio for an observation x^0:
$$ L(\theta) = \log \frac{p(x^0; \theta)}{p(x^0; \theta')} = (\theta - \theta')^\top \phi(x^0) - \log \frac{Z(\theta)}{Z(\theta')} \tag{5} $$
$$ \approx (\theta - \theta')^\top \phi(x^0) - \log \frac{1}{n} \sum_{i=1}^{n} e^{(\theta - \theta')^\top \phi(x^{(i)})} = L_n(\theta), \tag{6} $$
¹For many interesting models considered in this paper exact computation of E_{p(x;θ)}[·] takes time that is exponential in the dimensionality of x.
²We will also assume that p(x; θ′) ≠ 0 whenever p(x; θ) ≠ 0, for all θ.
where we used the approximation Z(θ)/Z(θ′) = Σ_x e^{(θ−θ′)^⊤ φ(x)} p(x; θ′) ≈ (1/n) Σ_{i=1}^{n} e^{(θ−θ′)^⊤ φ(x^{(i)})}. Provided our Markov chain is ergodic, it can be shown that L_n(θ) → L(θ) for all θ. It can further be shown that, under the "usual" regularity conditions, if θ̂_n maximizes L_n(θ) and θ̂ maximizes L(θ), then θ̂_n → θ̂ almost surely. This implies that as the number of samples n, drawn from our proposal distribution, goes to infinity, MCMC-MLE will converge to the true maximum likelihood estimator. While this estimator provides nice asymptotic convergence guarantees, it performs very poorly in practice, particularly when the parameter vector θ is high-dimensional. In high-dimensional spaces, the variance of the estimator L_n(θ) will be very large, or possibly infinite, unless the proposal distribution p(x; θ′) is a near-perfect approximation to p(x; θ). While there have been some attempts to improve MCMC-MLE by considering a mixture of proposal distributions [20], they do not fix the problem when learning MRF's with millions of parameters.
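To make Eqs. 5 and 6 concrete, the following sketch (ours, continuing the toy MRF above; θ′ denotes the fixed proposal parameters) computes the MCMC-MLE objective L_n(θ). For the toy model we can afford exact samples from the proposal; in practice a Markov chain would be used:

```python
# MCMC-MLE objective L_n(theta) of Eq. (6), continuing the toy MRF above.
# phi(x) stacks the pairwise and unary sufficient statistics of Eq. (2).
def phi(x):
    x = np.asarray(x)
    return np.array([x[i] * x[j] for (i, j) in edges] + list(x), dtype=float)

theta = np.concatenate([[theta_pair[e] for e in edges], theta_unary])
theta_prop = theta + 0.1 * rng.normal(size=theta.size)    # proposal theta'

# Exact samples x^(1..n) from p(.; theta') -- a luxury of the toy model;
# in practice these would come from a Markov chain.
p_prop = np.array([np.exp(theta_prop @ phi(x)) for x in states])
p_prop /= p_prop.sum()
sample_idx = rng.choice(len(states), size=1000, p=p_prop)

def L_n(th, x0):
    log_w = np.array([(th - theta_prop) @ phi(states[i]) for i in sample_idx])
    log_mean = np.logaddexp.reduce(log_w) - np.log(len(log_w))
    return (th - theta_prop) @ phi(x0) - log_mean

print(L_n(theta, [1, 0, 1, 0]))
```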
Algorithm 1 Stochastic Approximation Procedure.
1: Given an observation x^0. Randomly initialize θ^1 and M sample particles {x^{1,1}, ..., x^{1,M}}.
2: for t = 1 : T (number of iterations) do
3:    for m = 1 : M (number of parallel Markov chains) do
4:       Sample x^{t+1,m} given x^{t,m} using transition operator T_{θ^t}(x^{t+1,m} ← x^{t,m}).
5:    end for
6:    Update: θ^{t+1} = θ^t + α_t [φ(x^0) − (1/M) Σ_{m=1}^{M} φ(x^{t+1,m})].
7:    Decrease α_t.
8: end for
3 Stochastic Approximation Procedure (SAP)
We now consider a stochastic approximation procedure that uses MCMC to estimate the model's
expected sufficient statistics. SAP belongs to the general class of well-studied stochastic approximation algorithms of the Robbins-Monro type [19, 12]. The algorithm itself dates back to 1988 [19],
but only recently it has been shown to work surprisingly well when training large MRF?s, including
restricted Boltzmann machines [15] and deep Boltzmann machines [14, 13].
The idea behind learning a parameter vector θ using SAP is straightforward. Let x^0 be our observation. Then the state and the parameters are updated sequentially:

$$ \theta^{t+1} = \theta^t + \alpha_t \big( \phi(x^0) - \phi(x^{t+1}) \big), \quad \text{where } x^{t+1} \sim T_{\theta^t}(x^{t+1} \leftarrow x^t). \tag{7} $$

Given x^t, we sample a new state x^{t+1} using the transition operator T_{θ^t}(x^{t+1} ← x^t) that leaves p(·; θ^t) invariant. A new parameter θ^{t+1} is then obtained by replacing the intractable expectation E_{p(x;θ^t)}[φ(x)] with φ(x^{t+1}). In practice, we typically maintain a set of M sample points X^t = {x^{t,1}, ..., x^{t,M}}, which we will often refer to as sample particles. In this case, the intractable model's expectation is replaced by the sample average (1/M) Σ_{m=1}^{M} φ(x^{t+1,m}). The procedure is summarized in Algorithm 1.
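A direct transcription of Algorithm 1 into code might look as follows (our sketch, continuing the toy example; the full Gibbs sweep used as the transition operator and the step-size schedule are illustrative choices):

```python
# Sketch of Algorithm 1 (SAP) for the toy MRF above. A full Gibbs sweep
# plays the role of the transition operator T_theta that leaves
# p(.; theta) invariant.
def gibbs_sweep(x, th):
    for k in range(K):
        x_on, x_off = x.copy(), x.copy()
        x_on[k], x_off[k] = 1, 0
        p_on = 1.0 / (1.0 + np.exp(-th @ (phi(x_on) - phi(x_off))))
        x[k] = int(rng.random() < p_on)
    return x

x0 = np.array([1, 0, 1, 0])                     # the observation; in practice
                                                # phi(x0) is averaged over a
                                                # minibatch of observations
M, T = 100, 500                                 # particles, iterations
theta_hat = np.zeros(theta.size)                # theta^1
particles = [rng.integers(0, 2, size=K) for _ in range(M)]

for t in range(T):
    particles = [gibbs_sweep(x, theta_hat) for x in particles]
    model_stats = np.mean([phi(x) for x in particles], axis=0)
    alpha_t = 10.0 / (1000.0 + t)               # decreasing learning rate
    theta_hat += alpha_t * (phi(x0) - model_stats)
```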
One important property of this algorithm is that, just like MCMC-MLE, it can be shown to asymptotically converge to the maximum likelihood estimator θ̂.³ In particular, for fully visible discrete MRF's, if one uses a Gibbs transition operator and the learning rate is set to α_t = 1/((t+1)U), where U is a positive constant such that U > 2KC₀C₁, then θ^t → θ̂ almost surely (see Theorem 4.1 of [19]). Here K is the dimensionality of x, C₀ = max{‖φ(x^0) − φ(x)‖; x ∈ X^K} is the largest magnitude of the gradient, and C₁ is the maximum variation of φ when one changes the value of a single component only: C₁ = max{‖φ(x) − φ(y)‖; x, y ∈ X^K, k ∈ {1, ..., K}, y_{-k} = x_{-k}}.
The proof of convergence relies on the following simple decomposition. First, let S(θ) denote the true gradient of the log-likelihood function: S(θ) = ∂ log p(x^0; θ)/∂θ = φ(x^0) − E_{p(x;θ)}[φ(x)]. The parameter update rule then takes the following form:

$$ \theta^{t+1} = \theta^t + \alpha_t \big( \phi(x^0) - \phi(x^{t+1}) \big) = \theta^t + \alpha_t S(\theta^t) + \alpha_t \big( \mathbb{E}_{p(x;\theta)}[\phi(x)] - \phi(x^{t+1}) \big) = \theta^t + \alpha_t S(\theta^t) + \alpha_t \epsilon^t. \tag{8} $$

³One necessary condition for almost sure convergence requires the learning rate to decrease with time, so that Σ_{t=0}^∞ α_t = ∞ and Σ_{t=0}^∞ α_t² < ∞.
Algorithm 2 Tempered Transitions Run.
1: Initialize β_0 < β_1 < ... < β_S = 1. Given a current state x_S.
2: for s = S−1 : 0 (Forward pass) do
3:    Sample x_s given x_{s+1} using T_s(x_s ← x_{s+1}).
4: end for
5: Set x̃_0 = x_0.
6: for s = 0 : S−1 (Backward pass) do
7:    Sample x̃_{s+1} given x̃_s using T̃_s(x̃_{s+1} ← x̃_s).
8: end for
9: Accept the new state x̃_S with probability: min[1, ∏_{s=1}^{S} p*(x_s)^{β_{s−1}−β_s} p*(x̃_s)^{β_s−β_{s−1}}].
The first term (rhs of Eq. 8) is the discretization of the ordinary differential equation θ̇ = S(θ). The algorithm is therefore a perturbation of this discretization with the noise term ε^t. The proof proceeds
by showing that the noise term is not too large. Intuitively, as the learning rate becomes sufficiently
small compared to the mixing rate of the Markov chain, the chain will stay close to the stationary
distribution, even if it is only run for a few MCMC steps per parameter update. This, in turn, will
ensure that the noise term ε^t goes to zero.
When looking at the behavior of this algorithm in practice, we find that initially it makes very rapid
progress towards finding a sensible region in the parameter space. However, as the algorithm begins to capture the multimodality of the data distribution, the Markov chain tends to mix poorly,
producing highly correlated samples for successive parameter updates. This often leads to poor parameter estimates, especially when modeling complex, high-dimensional distributions. The main
problem here is the inability of the Markov chain to efficiently explore a distribution with many
isolated modes. However, the transition operators T?t (xt+1 ? xt ) used in the stochastic approximation algorithm do not necessarily need to be simple Gibbs or Metropolis-Hastings updates to
guarantee almost sure convergence. Instead, we propose to use MCMC operators based on tempered transitions [9] that can more efficiently explore highly multimodal distributions. In addition,
implementing tempered transitions requires very little extra work beyond the implementation of the
Gibbs sampler.
3.1 Tempered Transitions
Suppose that our goal is to sample from p(x; θ). We first define a sequence of intermediate probability distributions: p_0, ..., p_S, with p_S = p(x; θ) and p_0 being more spread out and easier to sample from than p_S. Constructing a suitable sequence of intermediate probability distributions will in general depend on the problem. One general way to define this sequence is:

$$ p_s(x) \propto p^*(x; \theta)^{\beta_s}, \tag{9} $$

with "inverse temperatures" β_0 < β_1 < ... < β_S = 1 chosen by the user. For each s = 1, ..., S−1 we define a transition operator T_s(x′ ← x) that leaves p_s invariant. In our implementation T_s(x′ ← x) is the Gibbs sampling operator. We also need to define a reverse transition operator T̃_s(x ← x′) that satisfies the following reversibility condition for all x and x′:

$$ p_s(x)\, T_s(x' \leftarrow x) = \tilde{T}_s(x \leftarrow x')\, p_s(x'). \tag{10} $$
If T_s is reversible, then T̃_s is the same as T_s. Many commonly used transition operators, such as Metropolis-Hastings, are reversible. Non-reversible operators are usually composed of several reversible sub-transitions applied in sequence, T_s = Q_1...Q_K, such as the single-component updates in a Gibbs sampler. The reverse operator can be simply constructed from the same sub-transitions, but applied in the reverse order, T̃_s = Q_K...Q_1.
Given the current state x of the Markov chain, tempered transitions apply a sequence of transition operators T_{S−1} ... T_0 T̃_0 ... T̃_{S−1} that systematically "move" the sample particle x from the original complex distribution to the easily sampled distribution, and then back to the original distribution. A new candidate state x̃ is accepted or rejected based on ratios of probabilities of intermediate states. Since p_0 is less concentrated than p_S, the sample particle will have a chance to move around the state space more easily, and we may hope that the probability distribution of the resulting candidate state will be much broader than the mode in which the current start state resides. The procedure is shown in Algorithm 2. Note that there is no need to compute the normalizing constants of any intermediate distributions.
[Figure 1 appears here: plots of training log-probability vs. number of Gibbs updates for the toy RBM (exact maximum likelihood, stochastic approximation, tempered transitions), and of MNIST classification accuracy vs. number of Gibbs updates for the semi-restricted Boltzmann machine.]
Figure 1: Experimental results on the MNIST dataset. Top: Toy RBM with 10 hidden units. The x-axis shows the number of Gibbs updates and the y-axis displays the training log-probability in nats. Bottom: Classification performance of the semi-restricted Boltzmann machine with 500 hidden units on the full MNIST dataset.
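In code, a tempered transitions operator for the toy MRF might be sketched as follows (ours; by Eq. 9 the Gibbs conditionals at level s are simply scaled by β_s, and the backward sweep reverses the scan order so that T̃_s is the reversal of T_s):

```python
# Sketch of Algorithm 2 for the toy MRF above. By Eq. (9) the
# intermediate distribution p_s has unnormalized log-probability
# beta_s * log p*(x), so the Gibbs conditionals just scale by beta.
def gibbs_sweep_beta(x, beta, reverse=False):
    order = range(K - 1, -1, -1) if reverse else range(K)
    for k in order:
        x_on, x_off = x.copy(), x.copy()
        x_on[k], x_off[k] = 1, 0
        p_on = 1.0 / (1.0 + np.exp(-beta * (log_p_star(x_on) - log_p_star(x_off))))
        x[k] = int(rng.random() < p_on)
    return x

def tempered_transition(x, betas):
    """betas = [beta_0 < ... < beta_S = 1]; x is the current state x_S."""
    S = len(betas) - 1
    log_r = 0.0
    for s in range(S - 1, -1, -1):              # forward pass: T_{S-1} ... T_0
        log_r += (betas[s] - betas[s + 1]) * log_p_star(x)
        x = gibbs_sweep_beta(x.copy(), betas[s])
    for s in range(S):                          # backward pass: T~_0 ... T~_{S-1}
        x = gibbs_sweep_beta(x.copy(), betas[s], reverse=True)
        log_r += (betas[s + 1] - betas[s]) * log_p_star(x)
    return x, log_r                             # accept w.p. min(1, exp(log_r))

x = np.array([1, 0, 1, 0])
x_hat, log_r = tempered_transition(x, list(np.linspace(0.9, 1.0, 50)))
if np.log(rng.random()) < log_r:                # acceptance step of Algorithm 2
    x = x_hat
```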
Tempered transitions can make major changes to the current state, which allows the Markov chain
to produce less correlated samples between successive parameter updates. This can greatly improve
the accuracy of the estimator, but is also more computationally expensive. We therefore propose to
alternate between applying a more expensive tempered transitions operator and the standard Gibbs
updates. We call this algorithm Trans-SAP.
4 Experimental Results
In our experiments we used the MNIST and NORB datasets. To speed up learning, we subdivided the datasets into minibatches, each containing 100 training cases, and updated the parameters after each minibatch. The number of sample particles used for estimating the model's expected sufficient statistics was also set to 100. For the stochastic approximation algorithm, we always apply a single Gibbs
update to the sample particles. In all experiments, the learning rates were set by quickly running a
few preliminary experiments and picking the learning rates that worked best on the validation set.
We also use natural logarithms, providing values in nats.
4.1 MNIST
The MNIST digit dataset contains 60,000 training and 10,000 test images of ten handwritten digits
(0 to 9), with 28×28 pixels. The dataset was binarized: each pixel value was stochastically set
to 1 with probability proportional to its pixel intensity. From the training data, a random sample of
10,000 images was set aside for validation.
In our first experiment we trained a small restricted Boltzmann machine (RBM). An RBM is a particular type of Markov random field that has a two-layer architecture, in which the visible binary
stochastic units x are connected to hidden binary stochastic units h, as shown in Fig. 1. The probability that the model assigns to a visible vector x is:

$$ P(x; \theta) = \frac{1}{Z(\theta)} \sum_{h} \exp\!\Big( \sum_{i,j} \theta_{ij} x_i h_j + \sum_i \theta_i x_i + \sum_j \theta_j h_j \Big). \tag{11} $$
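For a model this small, Z(θ) in Eq. 11 can be computed exactly; a standalone sketch (ours, with illustrative sizes and random weights) sums out the visible units analytically for each of the 2^10 hidden configurations:

```python
# Sketch: exact log-partition function of a small RBM (Eq. 11). For each
# of the 2^H hidden configurations, the visible units are summed out
# analytically, since they are conditionally independent given h.
import itertools
import numpy as np

V, H = 784, 10
rng = np.random.default_rng(0)
W = 0.01 * rng.normal(size=(V, H))   # theta_ij
b_v = np.zeros(V)                    # theta_i
b_h = np.zeros(H)                    # theta_j

def exact_log_Z(W, b_v, b_h):
    log_terms = []
    for h in itertools.product([0, 1], repeat=H):
        h = np.array(h, dtype=float)
        # sum_x exp(x^T (W h + b_v)) = prod_i (1 + exp((W h + b_v)_i))
        log_terms.append(b_h @ h + np.sum(np.logaddexp(0.0, W @ h + b_v)))
    return np.logaddexp.reduce(log_terms)

print(exact_log_Z(W, b_v, b_h))
```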
[Figure 2 appears here; panels: samples before tempered transitions, samples after tempered transitions, model samples.]
Figure 2: Left: Sample particles produced by the stochastic approximation algorithm after 100,000 parameter
updates. Middle: Sample particles after applying a tempered transitions run. Right: Samples generated from
the current model by randomly initializing all binary states and running the Gibbs sampler for 500,000 steps.
After applying tempered transitions, sample particles look more like the samples generated from the current
model. The images shown are the probabilities of the visible units given the binary states of the hidden units.
The model had 10 hidden units. This allowed us to calculate the exact value of the partition function
simply by summing out the 784 visible units for each configuration of the hiddens. For the stochastic
approximation procedure, the total number of parameter updates was 100,000, so the learning took
about 25.6 minutes on a Pentium 4 3.00GHz machine. The learning rate was kept fixed at 0.01 for
the first 10,000 parameter updates, and was then annealed as 10/(1000+t). For comparison, we also
trained the same model using exact maximum likelihood with exactly the same learning schedule.
Perhaps surprisingly, SAP makes very rapid progress towards the maximum likelihood solution,
even though the model contains 8634 free parameters. The top panel of Fig. 1 further shows that
combining regular Gibbs updates with tempered transitions provides a more accurate estimator. We
applied tempered transitions only during the last 50,000 Gibbs steps, alternating between 200 Gibbs
updates and a single tempered transitions run that used 50 β's spaced uniformly from 1 to 0.9.
The acceptance rate for the tempered transitions was about 0.8. To be fair, we compared different
algorithms based on the total number of Gibbs steps. For SAP, parameters were updated after each
Gibbs step (see Algorithm 1), whereas for Trans-SAP, parameters were updated after each Gibbs
update but not during the tempered transitions run⁴. Hence Trans-SAP took slightly less computer
time compared to the plain SAP. Pseudo-likelihood and MCMC maximum likelihood estimators
perform quite poorly, even for this small toy problem.
In our second experiment, we trained a larger semi-restricted Boltzmann machine that contained
705,622 parameters. In contrast to RBM's, the visible units in this model form a fully connected
pairwise binary MRF (see Fig. 1, bottom left panel). The model had 500 hidden units and was
trained to model the joint probability distribution over the digit images and labels. The total number
of Gibbs updates was set to 200,000, so the learning took about 19.5 hours. The learning rate was
kept fixed at 0.05 for the first 50,000 parameter updates, and was then decreased as 100/(2000 + t).
The bottom panel of Fig. 1 shows classification performance on the full MNIST test set. As expected, SAP makes very rapid progress towards finding a good setting of the parameter values.
Using tempered transitions further improves classification performance. As in our previous experiment, tempered transitions were only applied during the last 100,000 Gibbs updates, alternating
between 1000 Gibbs updates and a single tempered transitions run that used 500 β's spaced uniformly from 1 to 0.9. The acceptance rate was about 0.7. After learning was complete, in addition
to classification performance, we also estimated the log-probability that both models assigned to
the test data. To estimate the models' partition functions, we used Annealed Importance Sampling [10, 13], a technique that is very similar to tempered transitions. The plain stochastic approximation algorithm achieved an average test log-probability of -87.12 per image, whereas Trans-SAP
achieved a considerably better average test log-probability of -85.91.
⁴This reduced the total number of parameter updates from 100,000 to 50,000 + 50,000 × 2/3 = 83,333.
[Figure 3 appears here; panels: training samples, model trained with tempered transitions, model trained without tempered transitions.]
Figure 3: Results on the NORB dataset. Left: Random samples from the training set. Samples generated from
the two RBM models, trained using SAP with (Middle) and without (Right) tempered transitions. Samples
were generated by running the Gibbs sampler for 100,000 steps.
To get an intuitive picture of how tempered transitions operate, we looked at the sample particles
before and after applying a tempered transitions run. Figure 2 shows sample particles after 100,000
parameter updates. Observe that the particles look like the real handwritten digits. However, a run of
tempered transitions reveals that the current model is very unbalanced, with more probability mass
placed on images of four. To further test whether the "refreshed" particles were representative of
the current model, we generated samples from the current model by randomly initializing binary
states of the visible and hidden units, and running the Gibbs sampler for 500,000 steps. Clearly,
the refreshed particles look more like the samples generated from the true model. This in turn allows Trans-SAP to better estimate the model's expected sufficient statistics, which greatly facilitates
learning a better generative model.
4.2 NORB
Results on MNIST show that the stochastic approximation algorithm works well on the relatively
simple task of handwritten digit recognition. In this section we present results on a considerably
more difficult dataset. NORB [6] contains images of 50 different 3D toy objects with 10 objects in
each of five generic classes: planes, cars, trucks, animals, and humans. The training set contains
24,300 stereo image pairs of 25 objects, whereas the test set contains 24,300 stereo pairs of the
remaining, different 25 objects. The goal is to classify each object into its generic class. From the
training data, 4,300 cases were set aside for validation.
Each image has 96×96 pixels with integer greyscale values in the range [0, 255]. We further reduced
the dimensionality of each image from 9216 down to 4488 by using larger pixels around the edges of
the image⁵. We also augmented the training data with additional unlabeled data by applying simple
pixel translations, creating a total of 1,166,400 training instances. To deal with raw pixel data, we
followed the approach of [8] by first learning a Gaussian-binary RBM with 4000 hidden units, and
then treating the the activities of its hidden layer as ?preprocessed? data. The model was trained
using contrastive divergence learning for 500 epochs. The learned low-level RBM effectively acts
as a preprocessor that transforms greyscale images into 4000-dimensional binary vectors, which we
use as the input for training our models.
We proceeded to train an RBM with 4000 hidden units using the binary representations learned by the preprocessor module⁶. The RBM, containing over 16 million parameters, was trained in a
completely unsupervised way. The total number of Gibbs updates was set to 400,000. The learning rate was kept fixed at 0.01 for the first 100,000 parameter updates, and was then annealed as
100/(1000 + t). Similar to the previous experiments, tempered transitions were applied during the
last 200,000 Gibbs updates, alternating between 1000 Gibbs updates and a single tempered transitions run that used 1000 β's spaced uniformly from 1 to 0.9.
⁵The dimensionality of each training vector, representing a stereo pair, was 2×4488 = 8976.
⁶The resulting model is effectively a Deep Belief Network with two hidden layers.
Figure 3 shows samples generated from two models, trained using stochastic approximation with
and without tempered transitions. Both models were able to learn a lot of regularities in this high-dimensional, highly-structured data, including various object classes, different viewpoints and lighting conditions. The plain stochastic approximation algorithm produced a very unbalanced model with a large fraction of the model's probability mass placed on images of humans. Using tempered
transitions allowed us to learn a better and more balanced generative model, including the lighting effects. Indeed, the plain SAP achieved a test log-probability of -611.08 per image, whereas
Trans-SAP achieved a test log-probability of -598.58.
We also tested the classification performance of both models simply by fitting a logistic regression
model to the labeled data (using only the 24,300 labeled training examples without any translations)
using the top-level hidden activities as inputs. The model trained by SAP achieved an error rate
of 8.7%, whereas the model trained using Trans-SAP reduced the error rate down to 8.4%. This is
compared to 11.6% achieved by SVMs, 22.5% achieved by logistic regression applied directly in
the pixel space, and 18.4% achieved by K-nearest neighbors [6].
5 Conclusions
We have presented a class of stochastic approximation algorithms of the Robbins-Monro type that
can be used to efficiently learn parameters in large densely-connected MRF's. Using MCMC operators based on tempered transitions allows the stochastic approximation algorithm to better explore
highly multimodal distributions, which in turn allows us to learn good generative models of handwritten digits and 3D objects in a reasonable amount of computer time.
In this paper we have concentrated only on using tempered transition operators. There exist a variety
of other methods for sampling from distributions with many isolated modes, including simulated
tempering [7] and parallel tempering [3], all of which can be incorporated into SAP. In particular,
the concurrent work of [2] employs parallel tempering techniques to improve mixing in RBM's.
There are, however, several advantages of using tempered transitions over other existing methods.
First, tempered transitions do not require specifying any extra variables, such as the approximate
values of normalizing constants of intermediate distributions, which are needed for the simulated
tempering method. Second, tempered transitions have modest memory requirements, unlike, for
example, parallel tempering, since the acceptance rule can be computed on the fly as the intermediate
states are generated. Finally, the implementation of tempered transitions requires almost no extra
work beyond implementing the Gibbs sampler, and can be easily integrated into existing code.
Acknowledgments
We thank Vinod Nair for sharing his code for blurring and translating NORB images. This research
was supported by NSERC.
References
[1] J. Besag. Efficiency of pseudolikelihood estimation for simple Gaussian fields. Biometrika, 64:616–618, 1977.
[2] G. Desjardins, A. Courville, Y. Bengio, P. Vincent, and O. Delalleau. Tempered Markov chain
Monte Carlo for training of restricted Boltzmann machines. Technical Report 1345, University
of Montreal, 2009.
[3] C. Geyer. Markov chain Monte Carlo maximum likelihood. In Computing Science and Statistics, pages 156–163, 1991.
[4] G. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
[5] A. Kulesza and F. Pereira. Structured learning with approximate inference. In NIPS, 2007.
[6] Y. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition with
invariance to pose and lighting. In CVPR (2), pages 97–104, 2004.
[7] E. Marinari and G. Parisi. Simulated tempering: A new Monte Carlo scheme. Europhysics Letters, 19:451–458, 1992.
[8] V. Nair and G. Hinton. Implicit mixtures of restricted Boltzmann machines. In Advances in
Neural Information Processing Systems, volume 21, 2009.
[9] R. Neal. Sampling from multimodal distributions using tempered transitions. Statistics and Computing, 6:353–366, 1996.
[10] R. Neal. Annealed importance sampling. Statistics and Computing, 11:125–139, 2001.
[11] P. Pletscher, C. Ong, and J. Buhmann. Spanning tree approximations for conditional random
fields. In Proceedings of the International Conference on Artificial Intelligence and Statistics,
volume 5, 2009.
[12] H. Robbins and S. Monro. A stochastic approximation method. Ann. Math. Stat., 22:400–407,
1951.
[13] R. Salakhutdinov. Learning and evaluating Boltzmann machines. Technical Report UTML TR
2008-002, Department of Computer Science, University of Toronto, 2008.
[14] R. Salakhutdinov and G. Hinton. Deep Boltzmann machines. In Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 5, pages 448–455, 2009.
[15] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood
gradient. In Machine Learning, Proceedings of the Twenty-first International Conference
(ICML 2008). ACM, 2008.
[16] M. Wainwright, T. Jaakkola, and A. Willsky. Tree-reweighted belief propagation algorithms
and approximate ML estimation by pseudo-moment matching. In AI and Statistics, volume 9,
2003.
[17] M. Welling and C. Sutton. Learning in Markov random fields with Contrastive Free Energies. In Proceedings of the International Conference on Artificial Intelligence and Statistics,
volume 10, 2005.
[18] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free-energy approximations
and generalized belief propagation algorithms. IEEE Transactions on Information Theory,
51(7):2282–2312, 2005.
[19] L. Younes. Estimation and annealing for Gibbsian fields. Ann. Inst. Henri Poincaré (B), 24(2):269–294, 1988.
[20] S. Zhu and X. Liu. Learning in Gibbsian fields: How accurate and how fast can it be? In
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'00), pages 2–9. IEEE, 2000.
Segmenting Scenes by Matching Image Composites
Bryan C. Russell¹   Alexei A. Efros²,¹   Josef Sivic¹   William T. Freeman³   Andrew Zisserman⁴,¹
¹INRIA*   ²Carnegie Mellon University   ³CSAIL MIT   ⁴University of Oxford
Abstract
In this paper, we investigate how, given an image, similar images sharing the same
global description can help with unsupervised scene segmentation. In contrast
to recent work in semantic alignment of scenes, we allow an input image to be
explained by partial matches of similar scenes. This allows for a better explanation
of the input scenes. We perform MRF-based segmentation that optimizes over
matches, while respecting boundary information. The recovered segments are then
used to re-query a large database of images to retrieve better matches for the target
regions. We show improved performance in detecting the principal occluding and
contact boundaries for the scene over previous methods on data gathered from the
LabelMe database.
1 Introduction
Segmenting semantic objects, and more broadly image parsing, is a fundamentally challenging problem. The task is painfully under-constrained: given a single image, it is extremely difficult to partition it into semantically meaningful elements, not just blobs of similar color or texture. For example,
how would the algorithm figure out that doors and windows on a building, which look quite different, belong to the same segment? Or that the grey pavement and a grey house next to it are different
segments? Clearly, information beyond the image itself is required to solve this problem.
In this paper, we argue that some of this extra information can be extracted by also considering
images that are visually similar to the given one. With the increasing availability of Internet-scale image collections (in the millions of images!), this idea of data-driven scene matching has
recently shown much promise for a variety of tasks. Simply by finding matching images using a
low-dimensional descriptor and transferring any associated labels onto the input image, impressive results have been demonstrated for object and scene recognition [22], object detection [18, 11], image
geo-location [7], and particular object and event annotation [15], among others. Even if the image
collection does not contain any labels, it has been shown to help tasks such as image completion and
exploration [6, 21], image colorization [22], and 3D surface layout estimation [5].
However, as noted by several authors and illustrated in Figure 1, the major stumbling block of all
the scene-matching approaches is that, despite the large quantities of data, for many types of images the quality of the matches is still not very good. Part of the reason is that the low-level image
descriptors used for matching are just not powerful enough to capture some of the more semantic
similarity. Several approaches have been proposed to address this shortcoming, including synthetically increasing the dataset with transformed copies of images [22], cleaning matching results using
clustering [18, 7, 5], automatically prefiltering the dataset [21], or simply picking good matches by
hand [6]. All these approaches improve performance somewhat but don't alleviate this issue entirely.
We believe that there is a more fundamental problem: the variability of the visual world is just so vast, with an exponential number of different object combinations within each scene, that it might be
*WILLOW project-team, Laboratoire d'Informatique de l'École Normale Supérieure, ENS/INRIA/CNRS UMR 8548
Figure 1: Illustration of the scene matching problem. Left: Input image (along with the output
segmentation given by our system overlaid) to be matched to a dataset of 100k street images. Notice
that the output segment boundaries align well with the depicted objects in the scene. Top right:
top three retrieved images, based on matching the gist descriptor [14] over the entire image. The
matches are not good. Bottom right: Searching for matches within each estimated segment (using
the same gist representation within the segment) and compositing the results yields much better
matches to the input image.
futile to expect to always find a single overall good match at all! Instead, we argue that an input
image should be explained by a spatial composite of different regions taken from different database
images. The aim is to break-up the image into chunks that are small enough to have good matches
within the database, but still large enough that the matches retain their informative power.
1.1 Overview
In this work, we propose to apply scene matching to the problem of segmenting out semantically
meaningful objects (i.e. we seek to segment objects enclosed by the principal occlusion and contact
boundaries and not objects that are part-of or attached to other objects). The idea is to turn to
our advantage the fact that scene matches are never perfect. What typically happens during scene
matching is that some part of the image is matched quite well, while other parts are matched only
approximately, at a very coarse level. For example, for a street scene, one matching image could
have a building match very well, but getting the shape of the road wrong, while another matching
image could get the road exactly right, but have a tree instead of a building. These differences in
matching provide a powerful signal to identify objects and segmentation boundaries. By computing
a matching image composite, we should be able to better explain the input image (i.e. match each
region in the input image to semantically similar regions in other images) than if we used a single
best match.
The starting point of our algorithm is an input image and an "image stack": a set of coarsely
matching images (5000 in our case) retrieved from a large dataset using a standard image matching
technique (gist [14] in our case). In essence, the image stack is itself a dataset, but tailor-made to
match the overall scene structure for the particular input image. Intuitively, our goal is to use the
image stack to segment (and "explain") the input image in a semantically meaningful way. The idea
is that, since the stack is already more-or-less aligned, the regions corresponding to the semantic objects that are present in many images will consistently appear in the same spatial location. The input
image can then be explained as a patch-work of these consistent regions, simultaneously producing
a segmentation, as well as composite matches, that are better than any of the individual matches
within the stack.
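Concretely, the stack construction can be as simple as a nearest-neighbor query on global descriptors (a sketch under our own assumptions; gist_db stands for any precomputed matrix of gist-style descriptors, one row per database image):

```python
# Sketch: build the "image stack" by retrieving the k nearest neighbors
# of the query under a global gist-style descriptor. gist_db is assumed
# to be a precomputed (N, D) array, one row per database image.
import numpy as np

def build_stack(gist_query, gist_db, k=5000):
    dists = np.linalg.norm(gist_db - gist_query[None, :], axis=1)
    return np.argsort(dists)[:k]          # indices of the stack images
```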
There has been prior work on producing a resulting image using a stack of aligned images depicting
the same scene, in particular the PhotoMontage work [1], which optimally selects regions from the
globally aligned images based on a quality score to composite a visually pleasing output image.
Recently, there has been work based on the PhotoMontage framework that tries to automatically
align images depicting the same scene or objects to perform segmentation [16], region-filling [23],
and outlier detection [10]. In contrast, in this work, we are attempting to work on a stack of visually
similar, but physically different, scenes. This is in the same spirit as the contemporary work of [11],
except they work on supervised data, whereas we are completely unsupervised. Also related is the
contemporary work of [9].
Our approach combines boundary-based and region-based segmentation processes together within
a single MRF framework. The boundary process (Section 2) uses the stack to determine the likely
semantic boundaries between objects. The region process (Section 3) aims to group pixels belonging
to the same object across the stack. These cues are combined together within an MRF framework
which is solved using GraphCut optimization (Section 4). We present results in Section 5.
2 Boundary process: data driven boundary detection
Information from only a single image is in many cases not sufficient for recovering boundaries between objects. Strong image edges could correspond to internal object structures, such as a window
or a wheel of a car. Additionally, boundaries between objects often produce weak image evidence,
as for example the boundary between a building and road of similar color partially occluding each
other.
Here, we propose to analyze the statistics of a large number of related images (the stack) to help
recover boundaries between objects. We will exploit the fact that objects tend not to rest at exactly
the same location relative to each other in a scene. For example, in a street scene, a car may be
adjacent to regions belonging to a number of objects, such as building, person, road, etc. On the
other hand, relative positions of internal object structures will be consistent across many images. For
example, wheels and windows on a car will appear consistently at roughly similar positions across
many images.
To recover object boundaries, we will measure the ability to consistently match locally to the same
set of images in the stack. Intuitively, regions inside an object will tend to match to the same set of
images, each having similar appearance, while regions on opposite sides of a boundary will match to
different sets of images. More formally, given an oriented line passing through an image point p at
orientation θ, we wish to analyze the statistics of two sets of images with similar appearance on each
side of the line. For each side of the oriented line, we independently query the stack of images by
forming a local image descriptor modulated by a weighted mask. We use a half-Gaussian weighting
mask oriented along the line and centered at image point p. This local mask modulates the Gabor
filter responses (8 orientations over 4 scales) and the RGB color channels, with a descriptor formed
by averaging the Gabor energy and color over 32×32 pixel spatial bins. The Gaussian-modulated descriptor g(p, θ) captures the appearance information on one side of the boundary at point p and orientation θ. Appearance descriptors extracted in the same manner across the image stack are
compared with the query image descriptor using the L1 distance. Images in the stack are assumed
to be coarsely aligned, and hence matches are considered only at the particular query location p
and orientation θ across the stack, i.e. matching is not translation invariant. We believe this type of
spatially dependent matching is suitable for scene images with consistent spatial layout considered
in this work. The quality of the matches can be further improved by fine aligning the stack images
with the query [12].
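A schematic version of this one-sided matching is sketched below (ours; the Gabor filter bank is abstracted into precomputed per-pixel feature maps, the 32×32 spatial binning is collapsed into a single pooled vector, and all names are illustrative):

```python
# Sketch: rank the image stack by appearance on one side of an oriented
# line (Section 2). feats[i] is assumed to be a precomputed (H, W, C)
# per-pixel feature map (e.g. Gabor energies + RGB) for stack image i,
# coarsely aligned with the query image.
import numpy as np

def half_gaussian_mask(shape, p, theta, sigma=20.0, side=+1):
    """Gaussian window at p, zeroed out on one side of the line at angle theta."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dy, dx = ys - p[0], xs - p[1]
    g = np.exp(-(dx ** 2 + dy ** 2) / (2.0 * sigma ** 2))
    signed = -np.sin(theta) * dx + np.cos(theta) * dy   # distance along normal
    return g * (side * signed > 0)

def side_descriptor(feat, p, theta, side):
    w = half_gaussian_mask(feat.shape[:2], p, theta, side=side)
    return (feat * w[..., None]).sum(axis=(0, 1)) / (w.sum() + 1e-8)

def ranked_matches(query_feat, feats, p, theta, side):
    q = side_descriptor(query_feat, p, theta, side)
    dists = [np.abs(q - side_descriptor(f, p, theta, side)).sum()  # L1
             for f in feats]
    return np.argsort(dists)              # ranked list S_l or S_r
```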
For each image point p and orientation θ, the output of the local matching on the two sides of the oriented line is a pair of ranked lists of image stack indices, S_r and S_l, where the ordering of each list is given by the L1 distance between the local descriptors g(p, θ) of the query image and each image in the stack. We compute Spearman's rank correlation coefficient between the two rank-ordered lists:

$$ \rho(p, \theta) = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)}, \tag{1} $$

where n is the number of images in the stack and d_i is the difference between the ranks of stack image i in the two ranked lists, S_r and S_l. A high rank correlation should indicate that point p lies inside an object's extent, whereas a low correlation should indicate that point p is at an object boundary with orientation θ. We note, however, that low rank correlations could also be caused by poor quality of local matches. Figure 2 illustrates the boundary detection process.
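Given the two ranked lists, Eq. 1 reduces to a few lines (our sketch, continuing the code above):

```python
# Spearman's rank correlation of Eq. (1) between the two ranked lists
# of stack indices returned by ranked_matches for the two sides.
def spearman_rho(S_r, S_l):
    n = len(S_r)
    rank_r = np.empty(n, dtype=float); rank_r[S_r] = np.arange(n)
    rank_l = np.empty(n, dtype=float); rank_l[S_l] = np.arange(n)
    d = rank_r - rank_l
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1.0))
```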
For efficiency reasons, we only compute the rank correlation score along points and orientations marked as boundaries by the probability-of-boundary edge detector (PB) [13], with boundary orientations θ ∈ [0, π) quantized in steps of π/8.
Figure 2: Data driven boundary detection. Left: Input image with query edges shown. Right: The
top 9 matches in a large collection of images for each side of the query edges. Rank correlation for
occlusion boundary (A): -0.0998; rank correlation within the road region (B): 0.6067. Notice that
for point B lying inside an object (the road), the ranked sets of retrieved images for the two sides
of the oriented line are similar, resulting in a high rank correlation score. At point A lying at an
occlusion boundary between the building and the sky, the sets of retrieved images are very different,
resulting in a low rank correlation score.
The final boundary score P_DB of the proposed data-driven boundary detector is a gating of the
maximum PB response over all orientations and the rank correlation coefficient ρ:

\[ P_{DB}(p, \theta) = P_B(p, \theta)\,\frac{1 - \rho(p, \theta)}{2}\,\mathbb{1}\!\left[P_B(p, \theta) = \max_{\theta'} P_B(p, \theta')\right]. \qquad (2) \]
Note that this type of data-driven boundary detection is very different from image-based edge
detection [4, 13] as (i) strong image edges can receive a low score, provided the matched image
structures on each side of the boundary co-occur in many places in the image collection, and
(ii) weak image edges can receive a high score, provided the neighboring image structures on each
side of the weak image boundary do not co-occur often in the database. In contrast to the PB
detector, which is trained from manually labelled object boundaries, data-driven boundary scores
are determined based on co-occurrence statistics of similar scenes and require no additional manual
supervision. Figure 3 shows examples of data-driven boundary detection results. Quantitative
evaluation is given in section 5.
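A minimal sketch of the gating in Equation (2), applied to dense maps of PB responses and rank correlations over the quantized orientations; the (n_theta, h, w) array layout is our assumption:

import numpy as np

def data_driven_boundary(pb, rho):
    """Gate PB by rank correlation (Equation 2).
    pb, rho: (n_theta, h, w) maps of PB responses and Spearman's rho at
    each quantized orientation. Returns P_DB of the same shape."""
    # Indicator that theta is the argmax orientation of PB at each pixel
    # (ties, if any, keep all maximizing orientations).
    is_max = (pb == pb.max(axis=0, keepdims=True)).astype(pb.dtype)
    return pb * 0.5 * (1.0 - rho) * is_max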
3 Region process: data-driven image grouping
The goal is to group pixels in a query image that are likely to belong to the same object or a major
scene element (such as a building, a tree, or a road). Instead of relying on local appearance similarity,
such as color or texture, we again turn to the dataset of scenes in the image stack to suggest the
groupings.
Our hypothesis is that regions corresponding to semantically meaningful objects would be coherent
across a large part of the stack. Therefore, our goal is to find clusters within the stack that are both
(i) self-consistent, and (ii) explain the query image well. Note that, for now, we do not want to make
any hard decisions; therefore, we allow multiple clusters to explain overlapping
parts of the query image. For example, a tree cluster and a building cluster (drawn from different
parts of the stack) might be able to explain the same patch of the image, and both hypotheses should
be retained. This way, the final segmentation step in the next section will be free to choose the best
set of clusters based on all the information available within a global framework.
Therefore our approach is to find clusters of image patches that match the same images within the
stack. In other words, two patches in the query image will belong to the same group if the sets
of their best matching images from the database are similar. As in the boundary process described
in section 2, the query image is compared with each database image only at the particular query
patch location, i.e. the matching is not translation invariant. Note that patches with very different
appearance can be grouped together as long as they match the same database images.
Figure 3: Data-driven boundary detection. (a) Input image. (b) Ground truth boundaries. (c) PB [13].
(d) Proposed data-driven boundary detection. Notice enhanced object boundaries and suppressed
false positive boundaries inside objects.
For example, a door and a window of a building can be grouped together despite their different
shape and appearance as long as they co-occur together (and get matched) in other images. This
type of matching is different from self-similarity matching [20], where image patches within the
same image are grouped together if they look similar.
Formally, given a database of N scene images, each rectangular patch in the query image is described
by an N-dimensional binary vector, y, where the i-th element y[i] is set to 1 if the i-th image in the
database is among the m = 1000 nearest neighbors of the patch. Other elements of y are set to 0.
The nearest neighbors for each patch are obtained by matching the local gist and color descriptors at
the particular image location as described in section 2, but here center-weighted by a full Gaussian
mask with σ = 24 pixels.
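The indicator-vector construction can be sketched as follows; the patch enumeration and descriptor layout are our own assumptions, not the exact implementation:

import numpy as np

def patch_indicator_vectors(query_desc, stack_desc, m=1000):
    """For each query patch, build a binary vector over the N stack
    images, with 1s at the m nearest neighbors (L1 distance on local
    descriptors extracted at the same image location).
    query_desc: (P, d) descriptors for P patches of the query image.
    stack_desc: (P, N, d) descriptors at the matching locations in each
    of the N stack images. Returns a (P, N) binary matrix Y."""
    P, N, _ = stack_desc.shape
    Y = np.zeros((P, N))
    for p in range(P):
        dists = np.abs(stack_desc[p] - query_desc[p]).sum(axis=1)
        Y[p, np.argsort(dists)[:m]] = 1.0
    return Y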
We now wish to find cluster centers c_k for k ∈ {1, . . . , K}. Many methods exist for finding clusters
in such a space. For example, one can think of the desired object clusters as "topics of an image stack"
and apply one of the standard topic discovery methods like probabilistic latent semantic analysis
(pLSA) [8] or Latent Dirichlet Allocation (LDA) [2]. However, we found that a simple K-means
algorithm applied to the indicator vectors produced good results. Clearly, the number of clusters,
K, is an important parameter. Because we are not trying to discover all the semantic objects within
a stack, but only those that explain the query image well, we found that a relatively small number
of clusters (e.g. 5) is sufficient. Figure 4 shows heat maps of the similarity (measured as c_k^T y) of
each binary vector to the recovered cluster centers. Notice that regions belonging to the major scene
components are highlighted. Although hard K-means clustering is applied to cluster patches at this
stage, a soft similarity score for each patch under each cluster is used in a segmentation cost function
incorporating both region and boundary cues, described next.
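A minimal sketch of the clustering and of the soft similarity scores c_k^T y, using scikit-learn's K-means as one possible implementation (the solver choice and random seed are our assumptions):

import numpy as np
from sklearn.cluster import KMeans

def stack_groups(Y, K=5, seed=0):
    """Cluster patch indicator vectors (rows of Y, shape (P, N)) into K
    image stack groups and return the soft similarity of every patch to
    every cluster center, sims[p, k] = c_k^T y_p (used for the heat maps
    of Figure 4 and in the unary term of the segmentation MRF)."""
    km = KMeans(n_clusters=K, n_init=10, random_state=seed).fit(Y)
    centers = km.cluster_centers_          # (K, N)
    sims = Y @ centers.T                   # (P, K) soft scores
    return centers, sims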
4 Image segmentation combining boundary and region cues
In the preceding two sections we have developed models for estimating data-driven scene boundaries and coherent regions from the image stack. Note that while both the boundary and the region
processes use the same data, they are in fact producing very different, and complementary, types of
information. The region process aims to find large groups of coherent pixels that co-occur together
often, but is not too concerned about precise localization. The boundary process, on the other hand,
focuses rather myopically on the local image behavior around boundaries but has excellent localization.
Figure 4: Data-driven image grouping. Left: input image. Right: heat maps indicating groupings
of pixels belonging to the same scene component, which are found by clustering image patches that
match the same set of images in the stack (warmer colors correspond to higher similarity to a cluster
center). Notice that regions belonging to the major scene components are highlighted. Also, local
regions with different appearances (e.g. doors and windows in the interior of the building) can map
to the same cluster since they only need to match to the same set of images. Finally, the highlighted
regions tend to overlap, thereby providing multiple hypotheses for a local region.
Both pieces of information are needed for a successful scene segmentation and explanation.
In this section, we propose to use a single MRF-based optimization framework for this task, which
will negotiate between the more global region process and the well-localized boundary process. We
set up a multi-state MRF on pixels for segmentation, where the states correspond to the K different
image stack groups from section 3. The MRF is formulated as follows:

\[ \min_{x} \; \sum_{i} \phi_i(x_i, y_i) + \sum_{(i,j)} \psi_{i,j}(x_i, x_j) \qquad (3) \]
where x_i ∈ {0, 1, . . . , K} is the state at pixel i corresponding to one of the K image stack
groups (section 3), φ_i are unary costs defined by the similarity of the patch at pixel i, described
by an indicator vector y_i (section 3), to each of the K image stack groups, and ψ_{i,j} are binary
costs for a boundary-dependent Potts model (section 2). We also allow an additional outlier state
x_i = 0 for regions that do not match any of the clusters well. For the pairwise term, we assume a
4-neighbourhood structure, i.e. the extent is over adjacent horizontal and vertical neighbors. The
unary term in Equation 3 encourages pixels explained well by the same group of images from the
stack to receive the same label. The binary term encourages neighboring pixels to have the same
label, except in the case of strong boundary evidence.
In more detail, the unary term is given by

\[ \phi_i(x_i = k, y_i) = \begin{cases} -s(c_k, y_i) & k \in \{1, \dots, K\} \\ \alpha & k = 0 \end{cases} \qquad (4) \]

where α is a scalar parameter, and s(c_k, y_i) = c_k^T y_i is the similarity between the indicator
vector y_i describing the local image appearance at pixel i (section 3) and the k-th cluster center c_k.
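A sketch of the unary term of Equation (4); the default value of α is taken from the tuned values reported at the end of this section:

import numpy as np

def unary_costs(sims, alpha=-0.1):
    """Unary term of Equation (4). sims: (P, K) similarities s(c_k, y_i)
    for each pixel/patch; returns (P, K+1) costs with column 0 the
    outlier state."""
    P, K = sims.shape
    phi = np.empty((P, K + 1))
    phi[:, 0] = alpha        # outlier state x_i = 0
    phi[:, 1:] = -sims       # -s(c_k, y_i) for k = 1..K
    return phi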
The pairwise term is defined as

\[ \psi_{i,j}(x_i, x_j) = \big(\beta + \gamma f(i,j)\big)\, \mathbb{1}[x_i \neq x_j] \qquad (5) \]

where f(i, j) is a function of the output of the data-driven boundary detector P_DB (Equation 2),
and β and γ are scalar parameters. Since P_DB is a line process, with output strength and orientation
defined at pixels rather than between pixels as in the standard contrast-dependent pairwise term [3],
we must take care to place the pairwise costs consistently along one side of each continuous
boundary. For this, let P_i = max_θ P_DB(i, θ) and θ_i = argmax_θ P_DB(i, θ). If i and j are
vertical neighbors, with i on top, then f(i, j) = max{0, P_j − P_i}. If i and j are horizontal
neighbors, with i on the left, then f(i, j) = max{0, (P_j − P_i) 𝟙[θ_j < π/2], (P_i − P_j) 𝟙[θ_i ≥ π/2]}.
Notice that since P_DB is non-negative everywhere, we only incorporate a cost into the model when
the difference between adjacent P_DB elements is positive.
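A sketch of the orientation-dependent f(i, j) and the resulting Potts weights of Equation (5); the (h, w) grid layout, with i above (or to the left of) j, is our assumption:

import numpy as np

def pairwise_weights(P_db, theta, beta=0.25, gamma=-0.25):
    """Potts weights of Equation (5) on a 4-connected grid.
    P_db, theta: (h, w) maps of max_theta P_DB and its argmax orientation.
    Returns (w_vert, w_horz): weights for edges (i above j) and
    (i left of j); the cost (beta + gamma * f) is paid iff labels differ."""
    # Vertical neighbors, i on top: f = max(0, P_j - P_i)
    f_v = np.maximum(0.0, P_db[1:, :] - P_db[:-1, :])
    # Horizontal neighbors, i on the left:
    # f = max(0, (P_j - P_i)*1[theta_j < pi/2], (P_i - P_j)*1[theta_i >= pi/2])
    dh = P_db[:, 1:] - P_db[:, :-1]
    f_h = np.maximum.reduce([
        np.zeros_like(dh),
        dh * (theta[:, 1:] < np.pi / 2),
        -dh * (theta[:, :-1] >= np.pi / 2),
    ])
    return beta + gamma * f_v, beta + gamma * f_h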
We minimize Equation (3) using graph cuts with alpha-beta swaps [3]. We optimized the parameters
on a validation set by manual tuning on the boundary detection task (section 5). We set α = −0.1,
β = 0.25, and γ = −0.25. Note that the number of recovered segments is not necessarily equal to
the number of image stack groups K.
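As noted above, the energy is minimized with alpha-beta swap graph cuts; as a self-contained stand-in, the sketch below runs iterated conditional modes (ICM) on the same energy. ICM is a much weaker optimizer than graph cuts and is shown only to make the energy of Equation (3) concrete:

import numpy as np

def icm_segment(phi, w_vert, w_horz, n_iter=10):
    """Greedy ICM minimization of Equation (3) on an (h, w) grid.
    phi: (h, w, K+1) unary costs; w_vert: (h-1, w), w_horz: (h, w-1)
    Potts weights. NOTE: a stand-in for the alpha-beta swap graph cuts
    actually used in the paper."""
    h, w, L = phi.shape
    x = phi.argmin(axis=2)                     # unary-only initialization
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                cost = phi[i, j].copy()
                # Add the Potts penalty against each neighbor's label.
                for (ni, nj, wt) in [(i-1, j, w_vert[i-1, j] if i > 0 else 0),
                                     (i+1, j, w_vert[i, j] if i < h-1 else 0),
                                     (i, j-1, w_horz[i, j-1] if j > 0 else 0),
                                     (i, j+1, w_horz[i, j] if j < w-1 else 0)]:
                    if 0 <= ni < h and 0 <= nj < w:
                        cost += wt * (np.arange(L) != x[ni, nj])
                x[i, j] = cost.argmin()
    return x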
Figure 5: Evaluation of the boundary detection task on the principal occlusion and contact
boundaries extracted from the LabelMe database [17]. We show precision-recall curves for PB [13]
(blue triangle line) and our data-driven boundary detector (red circle line). Notice that we achieve
improved performance across all recalls. We also show the precision and recall of the output
segmentations (green star), which achieve 0.55 precision at 0.09 recall. At the same recall level, PB
and the data-driven boundary detector achieve 0.45 and 0.50 precision, respectively.
5 Experimental evaluation
In this section, we evaluate the data-driven boundary detector and the proposed image segmentation
model on a challenging dataset of complex street scenes from the LabelMe database [19]. For the
unlabelled scene database, we use a dataset of 100k street scene images gathered from Flickr [21].
Boundary detection and image grouping are then applied only within this candidate set of images.
Figure 6 shows several final segmentations. Notice that the recovered segments correspond to the
large objects depicted in the images, with the segment boundaries aligning along the objects'
boundaries. For each segment, we re-query the image stack, using the segment as a weighted mask
to retrieve images that match the appearance within the segment. The top matches for each segment
are stitched together to form a composite, shown in Figure 6. As a comparison, we show
the top matches using the global descriptor. Notice that the composites better align with the contents
depicted in the input image.
We quantitatively evaluate our system by measuring how well we can detect ground truth object
boundaries provided by human labelers. To evaluate object boundary detection, we use 100 images
depicting street scenes from the benchmark set of the LabelMe database [19]. The benchmark set
consists of fully labeled images taken from around the world. A number of different types of edges
are implicitly labeled in the LabelMe database, such as those arising through occlusion, attachment,
and contact with the ground. For this work, we filter out attached objects (e.g. a window is attached
to a building and hence does not generate any object boundaries) using the techniques outlined
in [17]. Note that this benchmark is more appropriate for our task than the BSDS [13] since the
dataset explicitly contains occlusion boundaries and not interior contours.
To measure performance, we used the evaluation procedure outlined in [13], which aligns output
boundaries for a given threshold to the ground truth boundaries to compute precision and recall.
A curve is generated by evaluating at all thresholds. For a boundary to be considered correct, we
assume that it must lie within 6 pixels of the ground truth boundary.
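A simplified sketch of this evaluation; the benchmark of [13] additionally enforces one-to-one matching between predicted and ground truth boundary pixels, which we approximate here with a distance-transform tolerance:

import numpy as np
from scipy.ndimage import distance_transform_edt

def boundary_pr(pred_soft, gt_mask, thresholds, tol=6.0):
    """Approximate boundary precision/recall. pred_soft: (h, w) soft
    boundary scores; gt_mask: boolean (h, w) ground truth boundaries.
    A predicted boundary pixel counts as correct if it lies within tol
    pixels of a ground truth pixel (and vice versa for recall)."""
    d_gt = distance_transform_edt(~gt_mask)   # distance to nearest GT pixel
    curve = []
    for t in thresholds:
        pred = pred_soft >= t
        d_pred = distance_transform_edt(~pred)
        tp_p = np.sum(pred & (d_gt <= tol))       # matched predictions
        tp_r = np.sum(gt_mask & (d_pred <= tol))  # matched GT pixels
        prec = tp_p / max(pred.sum(), 1)
        rec = tp_r / max(gt_mask.sum(), 1)
        curve.append((t, prec, rec))
    return curve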
Figure 5 shows a precision-recall curve for the data-driven boundary detector. We compare against
PB using color [13]. Notice that we achieve higher precision at all recall levels. We also plot the
precision and recall of the output segmentation produced by our system. Notice that the segmentation produced the highest precision (0.55) at 0.09 recall. The improvement in performance at low
recall is largely due to the ability to suppress interior contours arising from attached objects (cf. Figure 3).
However, we tend to miss small, moveable objects, which accounts for the lower performance at
high recall.
6 Conclusion
We have shown that unsupervised analysis of a large image collection can help segment complex
scenes into semantically coherent parts. We exploit object variations over related images using
MRF-based segmentation that optimizes over matches while preserving scene boundaries obtained
by a data-driven boundary detection process. We have demonstrated improved performance in
detecting the principal occlusion and contact boundaries over previous methods on a challenging
dataset of complex street scenes from LabelMe.
Figure 6: Left: Output segmentation produced by our system. Notice that the segment boundaries
align well with the depicted objects in the scene. Top right: Top matches for each recovered segment,
which are stitched together to form a composite. Bottom right: Top whole-image matches using the
gist descriptor. By recovering the segmentation, we are able to recover improved semantic matches.
Our work also suggests that other applications of scene matching, such as object recognition or
computer graphics, might benefit from segment-based explanations of the query scene.
Acknowledgments: This work was partially supported by ONR MURI N00014-06-1-0734, ONR
MURI N00014-07-1-0182, NGA NEGI-1582-04-0004, NSF grant IIS-0546547, gifts from Microsoft Research and Google, and Guggenheim and Sloan fellowships.
References
[1] A. Agarwala, M. Dontcheva, M. Agrawala, S. Drucker, A. Colburn, B. Curless, D. Salesin, and M. Cohen.
Interactive digital photomontage. In SIGGRAPH, 2004.
[2] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research,
3:993–1022, 2003.
[3] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Trans.
on Pattern Analysis and Machine Intelligence, 23(11), 2001.
[4] J. F. Canny. A computational approach to edge detection. IEEE Trans. on Pattern Analysis and Machine
Intelligence, 8(6):679–698, 1986.
[5] S. K. Divvala, A. A. Efros, and M. Hebert. Can similar scenes help surface layout estimation? In IEEE
Workshop on Internet Vision, associated with CVPR, 2008.
[6] J. Hays and A. Efros. Scene completion using millions of photographs. In SIGGRAPH, 2007.
[7] J. Hays and A. A. Efros. IM2GPS: estimating geographic information from a single image. In CVPR,
2008.
[8] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning,
43:177–196, 2001.
[9] M. K. Johnson, K. Dale, S. Avidan, H. Pfister, W. T. Freeman, and W. Matusik. CG2Real: Improving
the realism of computer-generated images using a large collection of photographs. Technical Report
2009-034, MIT CSAIL, 2009.
[10] H. Kang, A. A. Efros, M. Hebert, and T. Kanade. Image composition for object pop-out. In IEEE
Workshop on 3D Representation for Recognition (3dRR-09), in assoc. with CVPR, 2009.
[11] C. Liu, J. Yuen, and A. Torralba. Nonparametric scene parsing: label transfer via dense scene alignment.
In CVPR, 2009.
[12] C. Liu, J. Yuen, A. Torralba, J. Sivic, and W. T. Freeman. SIFT flow: dense correspondence across
different scenes. In ECCV, 2008.
[13] D. Martin, C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness,
color, and texture cues. IEEE Trans. on Pattern Analysis and Machine Intelligence, 26(5):530–549, 2004.
[14] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial
envelope. IJCV, 42(3):145–175, 2001.
[15] T. Quack, B. Leibe, and L. V. Gool. World-scale mining of objects and events from community photo
collections. In CIVR, 2008.
[16] C. Rother, V. Kolmogorov, T. Minka, and A. Blake. Cosegmentation of image pairs by histogram matching
- incorporating a global constraint into MRFs. In CVPR, 2006.
[17] B. C. Russell and A. Torralba. Building a database of 3D scenes from user annotations. In CVPR, 2009.
[18] B. C. Russell, A. Torralba, C. Liu, R. Fergus, and W. T. Freeman. Object recognition by scene alignment.
In Advances in Neural Info. Proc. Systems, 2007.
[19] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: a database and web-based tool
for image annotation. IJCV, 77(1–3):157–173, 2008.
[20] E. Shechtman and M. Irani. Matching local self-similarities across images and videos. In CVPR, 2007.
[21] J. Sivic, B. Kaneva, A. Torralba, S. Avidan, and W. T. Freeman. Creating and exploring a large photorealistic virtual space. In First IEEE Workshop on Internet Vision, associated with CVPR, 2008.
[22] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: a large dataset for non-parametric
object and scene recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence,
30(11):1958–1970, 2008.
[23] O. Whyte, J. Sivic, and A. Zisserman. Get out of my picture! Internet-based inpainting. In British
Machine Vision Conference, 2009.